Agents are autonomous AI units that have their own LLM provider, can execute tools, and are themselves callable as tools. They enable sophisticated multi-agent architectures where specialized agents can be composed and orchestrated.
Agents build on the MCP Tools specification—each agent is automatically exposed as an invoke_<agent-id> tool that any MCP client can call.
Nx users: Scaffold with nx g @frontmcp/nx:agent my-agent --project my-app. See Agent Generator.
Why Agents?
In the Model Context Protocol, agents serve a distinct purpose from tools, resources, and prompts:
| Aspect | Agent | Tool | Resource | Prompt |
|---|---|---|---|---|
| Purpose | Autonomous AI execution | Execute actions | Provide data | Provide templated instructions |
| Has LLM | Yes (own LLM provider) | No | No | No |
| Direction | Model or user triggers execution | Model triggers execution | Model pulls data | Model uses messages |
| Side effects | Yes (LLM calls, tool execution) | Yes (mutations, API calls) | No (read-only) | No (message generation) |
| Use case | Complex reasoning, multi-step tasks | Actions, integrations | Context loading | Conversation templates |
Agents are ideal for:
- Complex reasoning — tasks requiring multiple LLM calls and tool use
- Specialized expertise — domain-specific agents (research, writing, coding)
- Orchestration — coordinating multiple sub-agents for complex workflows
- Isolation — agents with their own tools, resources, and providers
Creating Agents
Class Style (Default Behavior)
The simplest agent requires no execute() method. The default behavior automatically:
- Runs the execution loop with the LLM
- Connects tools and executes them as needed
- Sends notifications on tool calls and output
import { Agent, AgentContext } from '@frontmcp/sdk';
import { z } from '@frontmcp/sdk';

@Agent({
  name: 'research-agent',
  description: 'Researches topics and compiles summaries',
  systemInstructions: 'You are a research assistant. Search for information and provide concise summaries.',
  inputSchema: {
    topic: z.string().describe('Topic to research'),
  },
  llm: {
    provider: 'openai',
    model: 'gpt-4o',
    apiKey: { env: 'OPENAI_API_KEY' },
  },
  tools: [WebSearchTool, SummarizeTool],
})
class ResearchAgent extends AgentContext {} // No execute() needed!
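Conceptually, the default behavior is an iterate-until-done loop: call the LLM, run any tools it requests, feed the results back, and stop when the model finishes. The sketch below is illustrative only, with made-up local types rather than the SDK's internals:

```typescript
// Illustrative sketch of the default agent loop (not the SDK's actual
// implementation). The LLM is called repeatedly; whenever it requests
// tool calls, the tools run and their output is fed back into history.
type ToolCall = { name: string; args: Record<string, unknown> };
type Completion = {
  content: string;
  finishReason: 'stop' | 'tool_calls';
  toolCalls?: ToolCall[];
};

async function runLoop(
  complete: (history: string[]) => Promise<Completion>,
  runTool: (call: ToolCall) => Promise<string>,
  maxIterations = 10,
): Promise<string> {
  const history: string[] = [];
  for (let i = 0; i < maxIterations; i++) {
    const result = await complete(history);
    if (result.finishReason === 'stop') return result.content;
    for (const call of result.toolCalls ?? []) {
      history.push(await runTool(call)); // tool output goes back to the LLM
    }
  }
  throw new Error('maxIterations exceeded');
}
```

The `maxIterations` cap here corresponds to the `execution.maxIterations` option described later in this page.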
Class Style (Custom Behavior)
Override execute() only when you need custom pre/post processing:
@Agent({
  name: 'custom-agent',
  description: 'Agent with custom logic',
  systemInstructions: 'You are a helpful assistant.',
  inputSchema: {
    query: z.string(),
  },
  llm: {
    provider: 'openai',
    model: 'gpt-4o',
    apiKey: { env: 'OPENAI_API_KEY' },
  },
})
class CustomAgent extends AgentContext {
  async execute(input: { query: string }) {
    // Custom pre-processing
    this.notify('Starting custom processing...', 'info');

    // Call the default agent loop
    const result = await super.execute(input);

    // Custom post-processing
    return { ...result, customField: 'added' };
  }
}
Function Style
For simpler agents, use the functional builder:
import { agent } from '@frontmcp/sdk';
import { z } from '@frontmcp/sdk';

const EchoAgent = agent({
  name: 'echo-agent',
  description: 'Echoes back the input message',
  inputSchema: {
    message: z.string(),
  },
  llm: {
    provider: 'openai',
    model: 'gpt-4o-mini',
    apiKey: { env: 'OPENAI_API_KEY' },
  },
})(({ message }) => ({ echoed: `Echo: ${message}` }));
Registering Agents
Add agents to your app via the agents array:
import { App } from '@frontmcp/sdk';

@App({
  id: 'my-app',
  name: 'My Application',
  agents: [ResearchAgent, CalculatorAgent, WriterAgent],
})
class MyApp {}
Each agent is automatically exposed as a tool:
invoke_research-agent
invoke_calculator-agent
invoke_writer-agent
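The mapping from agent name to tool name is a simple prefix convention. The helper below is purely illustrative (the SDK derives these names automatically at registration; `agentToolName` is not an SDK API):

```typescript
// Hypothetical helper showing the naming convention only.
// The SDK applies this mapping itself when the app starts.
function agentToolName(agentName: string): string {
  return `invoke_${agentName}`;
}

agentToolName('research-agent'); // "invoke_research-agent"
```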
Loading from npm or Remote Servers
Mix local agents with those loaded from npm or proxied from remote servers:
import { App, Agent } from '@frontmcp/sdk';

@App({
  id: 'my-app',
  name: 'My Application',
  agents: [
    ResearchAgent, // Local class
    Agent.esm('@acme/agents@^1.0.0', 'writer'), // Single agent from npm
    Agent.remote('https://api.example.com/mcp', 'assistant'), // Single agent from remote
  ],
})
class MyApp {}
Agent.esm() and Agent.remote() load individual agents. For loading entire apps, use App.esm() or App.remote().
Environment Availability
Restrict when an agent is discoverable and invocable using availableWhen:
@Agent({
  name: 'local-build-agent',
  llm: { provider: 'openai', model: 'gpt-4-turbo', apiKey: { env: 'OPENAI_API_KEY' } },
  availableWhen: { runtime: ['node'], deployment: ['standalone'] },
})
class LocalBuildAgent extends AgentContext { ... }
See Environment Awareness for the full reference.
LLM Configuration
Agents require an LLM configuration. FrontMCP provides native adapters for OpenAI and Anthropic SDKs with built-in retry logic, streaming support, and token tracking.
Built-in Providers (Shorthand)
The simplest way to configure an LLM — the SDK auto-creates the appropriate adapter:
# For OpenAI
npm install openai
# For Anthropic
npm install @anthropic-ai/sdk
@Agent({
  name: 'research-agent',
  llm: {
    provider: 'openai',
    model: 'gpt-4o',
    apiKey: { env: 'OPENAI_API_KEY' },
  },
  // ...
})
class ResearchAgent extends AgentContext {}

@Agent({
  name: 'claude-agent',
  llm: {
    provider: 'anthropic',
    model: 'claude-sonnet-4-20250514',
    apiKey: { env: 'ANTHROPIC_API_KEY' },
  },
  // ...
})
class ClaudeAgent extends AgentContext {}
OpenAI Adapter (Direct)
For full control, use OpenAIAdapter directly. Supports the Chat Completions API (default) and the Responses API:
import { OpenAIAdapter } from '@frontmcp/sdk';

// Chat Completions API (default)
const chatAdapter = new OpenAIAdapter({
  model: 'gpt-4o',
  apiKey: process.env.OPENAI_API_KEY,
});

// Responses API
const responsesAdapter = new OpenAIAdapter({
  model: 'gpt-4o',
  apiKey: process.env.OPENAI_API_KEY,
  api: 'responses',
});

// Pre-configured client
import OpenAI from 'openai';

const clientAdapter = new OpenAIAdapter({
  model: 'gpt-4o',
  client: new OpenAI({ apiKey: process.env.OPENAI_API_KEY }),
});

@Agent({
  name: 'research-agent',
  llm: { adapter: chatAdapter },
  // ...
})
class ResearchAgent extends AgentContext {}
OpenAI-Compatible Providers
Use baseUrl to connect to any OpenAI-compatible API (OpenRouter, Azure, Groq, Mistral, etc.):
import { OpenAIAdapter } from '@frontmcp/sdk';

// OpenRouter — access 100+ models
const openRouterAdapter = new OpenAIAdapter({
  model: 'anthropic/claude-3-opus',
  apiKey: process.env.OPENROUTER_API_KEY,
  baseUrl: 'https://openrouter.ai/api/v1',
});

// Groq
const groqAdapter = new OpenAIAdapter({
  model: 'llama-3.1-70b-versatile',
  apiKey: process.env.GROQ_API_KEY,
  baseUrl: 'https://api.groq.com/openai/v1',
});

// Azure OpenAI
const azureAdapter = new OpenAIAdapter({
  model: 'gpt-4o',
  apiKey: process.env.AZURE_OPENAI_API_KEY,
  baseUrl: 'https://my-resource.openai.azure.com/openai/deployments/gpt-4o',
});
Anthropic Adapter (Direct)
import { AnthropicAdapter } from '@frontmcp/sdk';

const adapter = new AnthropicAdapter({
  model: 'claude-sonnet-4-20250514',
  apiKey: process.env.ANTHROPIC_API_KEY,
});

// Or with a pre-configured client
import Anthropic from '@anthropic-ai/sdk';

const clientAdapter = new AnthropicAdapter({
  model: 'claude-sonnet-4-20250514',
  client: new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY }),
});

@Agent({
  name: 'claude-agent',
  llm: { adapter },
  // ...
})
class ClaudeAgent extends AgentContext {}
Custom Adapter
Implement AgentLlmAdapter for any other provider:
import { AgentLlmAdapter, AgentCompletion, AgentPrompt, AgentToolDefinition } from '@frontmcp/sdk';

const myAdapter: AgentLlmAdapter = {
  async completion(prompt: AgentPrompt, tools?: AgentToolDefinition[]): Promise<AgentCompletion> {
    const response = await myLlmClient.chat({
      messages: prompt.messages,
      system: prompt.system,
      tools: tools?.map(t => ({ name: t.name, description: t.description })),
    });
    return {
      content: response.text,
      finishReason: response.hasToolCalls ? 'tool_calls' : 'stop',
      toolCalls: response.toolCalls,
    };
  },
};

@Agent({
  name: 'custom-agent',
  llm: { adapter: myAdapter },
})
class CustomAgent extends AgentContext {}
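Because an adapter is a plain object with a `completion` method, it can be exercised in isolation before wiring it into an agent. The stub below uses local stand-ins for the SDK types so it is self-contained; a real adapter would call a provider where the echo happens:

```typescript
// Local stand-ins for the SDK's prompt/completion types, for illustration only.
type AgentPrompt = { system?: string; messages: { role: string; content: string }[] };
type AgentCompletion = { content: string; finishReason: 'stop' | 'tool_calls' };

const stubAdapter = {
  async completion(prompt: AgentPrompt): Promise<AgentCompletion> {
    // A real adapter would call its provider here; this stub just echoes.
    return {
      content: `echo: ${prompt.messages[0]?.content ?? ''}`,
      finishReason: 'stop',
    };
  },
};
```

This style of stub is also handy in tests, where you want deterministic completions without network calls.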
Agent-Scoped Components
Agents can have their own isolated tools, resources, prompts, and providers:
@Agent({
  name: 'isolated-agent',
  description: 'Agent with its own scope',
  // Agent-scoped tools (only this agent can use them)
  tools: [PrivateTool],
  // Agent-scoped resources
  resources: [AgentConfig],
  // Agent-scoped prompts
  prompts: [AgentInstructions],
  // Agent-scoped providers
  providers: [DatabaseService],
  llm: { ... },
})
class IsolatedAgent extends AgentContext { ... }
Swarm Configuration
Control agent visibility for multi-agent coordination:
@Agent({
  name: 'orchestrator-agent',
  description: 'Coordinates other agents',
  swarm: {
    canSeeOtherAgents: true, // Can this agent invoke others?
    visibleAgents: ['worker-a', 'worker-b'], // Which agents can it see?
    isVisible: true, // Can other agents see this one?
    maxCallDepth: 3, // Max nested agent invocations
  },
  llm: { ... },
})
class OrchestratorAgent extends AgentContext {
  async execute({ task }: { task: string }) {
    // Can invoke visible agents
    const result = await this.invokeAgent('worker-a', { data: task });
    return { orchestrated: result };
  }
}
Visibility Patterns
Orchestrator Pattern — A central agent coordinates specialized workers:
// Workers (can't see each other)
@Agent({ name: 'research-worker', swarm: { isVisible: true, canSeeOtherAgents: false }, ... })
class ResearchWorker extends AgentContext { ... }

@Agent({ name: 'writer-worker', swarm: { isVisible: true, canSeeOtherAgents: false }, ... })
class WriterWorker extends AgentContext { ... }

// Orchestrator (can see and invoke workers)
@Agent({
  name: 'orchestrator',
  swarm: { canSeeOtherAgents: true, visibleAgents: ['research-worker', 'writer-worker'] },
  ...
})
class Orchestrator extends AgentContext { ... }
Execution Configuration
Control agent execution behavior:
@Agent({
  name: 'long-running-agent',
  execution: {
    maxIterations: 10, // Max tool call rounds (default: 10)
    timeout: 120000, // Timeout in ms (default: 120000)
    enableStreaming: false, // Stream responses
    enableNotifications: true, // Send progress notifications
    enableAutoProgress: false, // Auto progress during LLM calls (default: false)
    inheritParentTools: true, // Include parent scope tools (default: true)
    useToolFlow: true, // Execute tools through call-tool flow (default: true)
  },
  llm: { ... },
})
class LongRunningAgent extends AgentContext { ... }
By default, agents execute tools through the full call-tool flow, which includes:
- Plugin hooks (caching, rate limiting, audit logging)
- Authorization checks
- Tool middleware and transformations
For performance-critical scenarios, you can disable flow execution:
@Agent({
  name: 'fast-agent',
  execution: {
    useToolFlow: false, // Direct execution, bypasses plugins
  },
  llm: { ... },
})
class FastAgent extends AgentContext { ... }
Setting useToolFlow: false bypasses all plugin hooks and middleware. Only use this when you need maximum performance and don’t require plugin features.
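The difference between the two modes can be pictured with a small sketch. The hook mechanics below are illustrative (the SDK's real pipeline includes plugins, authorization, and middleware); the point is that `useToolFlow: true` wraps the tool in a chain of hooks, while `useToolFlow: false` is equivalent to calling the tool function directly:

```typescript
// Illustrative only: each hook wraps the next, ending at the tool itself.
type Hook = (name: string, next: () => Promise<unknown>) => Promise<unknown>;

async function callThroughFlow(
  name: string,
  exec: () => Promise<unknown>,
  hooks: Hook[],
): Promise<unknown> {
  // Build the chain from the inside out, so hooks run in declared order.
  let next = exec;
  for (const hook of [...hooks].reverse()) {
    const inner = next;
    next = () => hook(name, inner);
  }
  return next(); // useToolFlow: false would just call exec() here instead
}
```

Each hook can short-circuit (e.g. return a cached result), reject (rate limiting, authorization), or simply observe (audit logging) before delegating to `next()`.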
Overriding Behavior
Customize agent behavior by overriding methods in AgentContext:
@Agent({ name: 'custom-agent', llm: { ... } })
class CustomAgent extends AgentContext {
  // Override LLM completion for custom logic
  protected override async completion(prompt, tools, options) {
    this.logger.info('Making LLM request...');
    const result = await super.completion(prompt, tools, options);
    this.logger.info(`LLM returned: ${result.finishReason}`);
    return result;
  }

  // Override tool execution for logging/caching
  protected override async executeTool(name: string, args: Record<string, unknown>) {
    this.logger.info(`Executing tool: ${name}`);
    return super.executeTool(name, args);
  }

  async execute(input: { query: string }) {
    return { result: '...' };
  }
}
Progress Notifications
Keep users informed during long operations using manual or automatic notifications.
Manual Notifications
Use this.notify() to send custom messages at specific points:
@Agent({ name: 'notifying-agent', llm: { ... } })
class NotifyingAgent extends AgentContext {
  async execute({ task }: { task: string }) {
    await this.notify('Starting task...', 'info');

    // Step 1
    await this.notify('Gathering data...', 'info');
    const data = await this.gatherData();

    // Step 2
    await this.notify('Processing...', 'info');
    const result = await this.process(data);

    await this.notify('Complete!', 'info');
    return { result };
  }
}
Use this.progress() for progress bars when the client provides a progressToken:
@Agent({ name: 'progress-agent', llm: { ... } })
class ProgressAgent extends AgentContext {
  async execute({ files }: { files: string[] }) {
    for (let i = 0; i < files.length; i++) {
      await this.progress(i + 1, files.length, `Processing ${files[i]}`);
      await this.processFile(files[i]);
    }
    return { processed: files.length };
  }
}
Automatic Progress (Opt-in)
Enable enableAutoProgress to automatically send progress notifications during the agent execution loop:
@Agent({
  name: 'auto-progress-agent',
  execution: {
    enableAutoProgress: true, // Opt-in to automatic progress
  },
  llm: { ... },
})
class AutoProgressAgent extends AgentContext {}
When enabled, the agent automatically sends progress updates at these lifecycle points:
| Event | Progress % | Message Example |
|---|---|---|
| LLM call start | 0-80% | "Starting LLM call (iteration 1/10)" |
| LLM response received | varies | "LLM response received (500P + 200C tokens)" |
| Tools identified | - | "Identified 2 tool call(s): search, calculate" |
| Tool execution | varies | "Executing tool 1/2: search" |
| Completion | 100% | "Agent completed" |
Auto progress requires both enableAutoProgress: true and enableNotifications: true (the default).
Progress notifications are only sent if the client includes a progressToken in the request’s _meta field.
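For reference, the wire shapes involved look roughly like this. Field names follow the MCP progress specification; the token value, tool name, and arguments are arbitrary examples:

```typescript
// A tools/call request that opts in to progress: the client supplies a
// progressToken (any string or number) in the request's _meta field.
const request = {
  method: 'tools/call',
  params: {
    name: 'invoke_research-agent',
    arguments: { topic: 'quantum computing' },
    _meta: { progressToken: 'job-42' },
  },
};

// Each progress update (manual or automatic) then surfaces to the client
// as a notifications/progress message carrying the same token.
const notification = {
  method: 'notifications/progress',
  params: { progressToken: 'job-42', progress: 3, total: 10, message: 'Processing file-3' },
};
```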
Error Handling
Handle errors gracefully in agents:
import { AgentExecutionError, AgentTimeoutError, ValidationError } from '@frontmcp/sdk';

@Agent({ name: 'safe-agent', llm: { ... } })
class SafeAgent extends AgentContext {
  async execute(input: { data: string }) {
    try {
      return await this.riskyOperation(input.data);
    } catch (error) {
      if (error instanceof ValidationError) {
        // Return an error response with fail()
        return this.fail(`Invalid input: ${error.message}`);
      }
      throw error;
    }
  }
}
Nested Agents
Agents can contain other agents, creating hierarchical structures:
@Agent({
  name: 'parent-agent',
  description: 'Parent with nested child agents',
  // Nested agents — registered as tools within this agent
  agents: [ChildAgentA, ChildAgentB],
  // Parent's own tools
  tools: [PlanningTool],
  llm: { ... },
})
class ParentAgent extends AgentContext {
  async execute({ task }: { task: string }) {
    // LLM can call: invoke_child-agent-a, invoke_child-agent-b, planning-tool
    return { result: '...' };
  }
}