> **Documentation Index:** Fetch the complete documentation index at https://docs.agentfront.dev/llms.txt and use it to discover all available pages before exploring further.
## Basic Usage

```typescript
import { Agent, AgentContext, Tool, ToolContext } from '@frontmcp/sdk';
import { z } from '@frontmcp/sdk';

@Tool({
  name: 'search_web',
  inputSchema: { query: z.string() },
})
class SearchWebTool extends ToolContext {
  async execute(input: { query: string }) {
    return { results: ['Result 1', 'Result 2'] };
  }
}

@Agent({
  name: 'research-agent',
  description: 'Researches topics using web search',
  llm: {
    provider: 'openai',
    model: 'gpt-4o',
    apiKey: { env: 'OPENAI_API_KEY' },
  },
  tools: [SearchWebTool],
})
export default class ResearchAgent extends AgentContext {
  // Default behavior: runs the agent loop automatically
}
```
## Signature

```typescript
function Agent<I extends ZodRawShape, O extends OutputSchema>(
  opts: AgentMetadataOptions<I, O>
): TypedClassDecorator;
```
## Type Safety

The `@Agent` decorator validates at compile time that:

- The decorated class extends `AgentContext`
- When `inputSchema` is provided, the `execute()` parameter matches the schema type
- When `outputSchema` is provided, the `execute()` return type is compatible
- Invalid metadata options (e.g., typos in `concurrency`) produce specific compile-time errors

Agents can use the default `execute()` from `AgentContext` (which runs the LLM agent loop) without overriding it; the type checker allows this pattern.
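The schema-to-parameter inference behind these checks can be illustrated with a toy mapped type. This is a sketch only; `Schema` and `Infer` below are illustrative stand-ins, not the SDK's actual type machinery:

```typescript
// Toy version of schema-driven typing: a schema object determines the
// parameter type that execute() must accept. Illustrative only.
type Schema = { [key: string]: 'string' | 'number' };

type Infer<S extends Schema> = {
  [K in keyof S]: S[K] extends 'string' ? string : number;
};

const shape = { query: 'string', limit: 'number' } as const;

// execute() must accept the type inferred from the schema;
// passing { query: 123 } here would be a compile-time error.
function execute(input: Infer<typeof shape>): string {
  return input.query.toUpperCase() + input.limit;
}
```

The real decorator applies the same idea to Zod shapes, which is why a mismatch between `inputSchema` and the `execute()` signature surfaces as a compile-time error rather than a runtime one.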
## Configuration Options

### Required Properties

| Property | Type | Description |
| --- | --- | --- |
| `name` | `string` | Unique agent identifier |
| `llm` | `AgentLlmBuiltinConfig \| AgentLlmAdapterConfig` | LLM provider configuration |

### Optional Properties

| Property | Type | Description |
| --- | --- | --- |
| `description` | `string` | Agent description |
| `inputSchema` | `ZodShape` | Input validation schema |
| `outputSchema` | `ZodType` | Output validation schema |
| `tools` | `ToolType[]` | Tools available to the agent |
| `systemInstructions` | `string` | System prompt for the agent |
| `id` | `string` | Stable identifier |
| `tags` | `string[]` | Categorization tags |
## LLM Configuration

The `llm` property accepts one of two configuration shapes.

Built-in provider shorthand (`AgentLlmBuiltinConfig`):

```typescript
interface AgentLlmBuiltinConfig {
  provider: 'openai' | 'anthropic';
  model: string;
  apiKey: string | { env: string } | WithConfig<string>;
  baseUrl?: string; // For OpenAI-compatible APIs
  temperature?: number;
  maxTokens?: number;
}
```

Direct adapter instance (`AgentLlmAdapterConfig`):

```typescript
interface AgentLlmAdapterConfig {
  adapter: AgentLlmAdapter | OpenAIAdapter | AnthropicAdapter;
}
```
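At runtime, the two shapes can be told apart by which key they carry. A minimal sketch, with the types re-declared locally for illustration (the SDK's own guards may differ):

```typescript
// Locally re-declared shapes, mirroring the interfaces above.
interface AgentLlmBuiltinConfig {
  provider: 'openai' | 'anthropic';
  model: string;
  apiKey: string | { env: string };
}

interface AgentLlmAdapterConfig {
  adapter: unknown;
}

type AgentLlmConfig = AgentLlmBuiltinConfig | AgentLlmAdapterConfig;

// Built-in configs carry `provider`; adapter configs carry `adapter` instead.
function isBuiltin(cfg: AgentLlmConfig): cfg is AgentLlmBuiltinConfig {
  return 'provider' in cfg;
}
```

The union is effectively discriminated by key presence, so a simple `in` check narrows it safely.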
## LLM Providers

### OpenAI

```typescript
@Agent({
  name: 'assistant',
  llm: {
    provider: 'openai',
    model: 'gpt-4o',
    apiKey: { env: 'OPENAI_API_KEY' },
    temperature: 0.7,
  },
})
```

### Anthropic (Claude)

```typescript
@Agent({
  name: 'assistant',
  llm: {
    provider: 'anthropic',
    model: 'claude-sonnet-4-6',
    apiKey: { env: 'ANTHROPIC_API_KEY' },
  },
})
```
The `process.env` examples above are Node.js-specific. Decorator and config arguments are evaluated synchronously, so any asynchronous token retrieval must happen before agent creation. In browser environments, use the function-based `agent()` API:

```typescript
import { agent } from '@frontmcp/sdk';

// Fetch a short-lived token from your backend, then create the agent
const token = await fetch('/api/llm-token').then((r) => r.text());

const researcher = agent({
  name: 'research',
  llm: { provider: 'openai', model: 'gpt-4o', apiKey: token },
})((input, ctx) => {
  return { findings: '...' };
});

// Local development only (never ship in production bundles):
// apiKey: import.meta.env.VITE_OPENAI_API_KEY,
```

**Warning:** Never embed long-lived API keys in client-side bundles shipped to end users. When using browser LLM adapters (`OpenAIAdapter` / `AnthropicAdapter`), prefer a backend proxy or short-lived tokens to avoid exposing persistent credentials.
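One way to follow that advice is to cache a short-lived token on the client and refresh it shortly before expiry. A sketch, where `fetchToken` stands in for a call to a hypothetical `/api/llm-token` backend endpoint:

```typescript
type Token = { value: string; expiresAt: number };

// Stand-in for a real backend call such as fetch('/api/llm-token');
// here it just mints a fake token valid for 60 seconds.
async function fetchToken(): Promise<Token> {
  return {
    value: 'tok_' + Math.random().toString(36).slice(2),
    expiresAt: Date.now() + 60_000,
  };
}

let cached: Token | undefined;

// Reuse the cached token until it is within 5 seconds of expiring.
async function getApiKey(): Promise<string> {
  if (!cached || cached.expiresAt - Date.now() < 5_000) {
    cached = await fetchToken();
  }
  return cached.value;
}
```

The returned value can then be passed as `apiKey` when creating an agent with the function-based `agent()` API.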
### OpenAI Responses API

```typescript
import { OpenAIAdapter } from '@frontmcp/sdk';

@Agent({
  name: 'assistant',
  llm: {
    adapter: new OpenAIAdapter({
      model: 'gpt-4o',
      apiKey: process.env.OPENAI_API_KEY,
      api: 'responses',
    }),
  },
})
```
### Direct Adapter Instance

```typescript
import { OpenAIAdapter } from '@frontmcp/sdk';

@Agent({
  name: 'assistant',
  llm: {
    adapter: new OpenAIAdapter({
      model: 'gpt-4o',
      apiKey: process.env.OPENAI_API_KEY,
      baseUrl: 'https://api.groq.com/openai/v1', // OpenAI-compatible APIs
    }),
  },
})
```
## Agent Loop

By default, agents run an automatic loop:

1. Send the input to the LLM along with the available tools
2. If the LLM requests a tool call, execute the tool and return the result
3. Repeat until the LLM returns a final response
4. Parse and return the output
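The steps above can be sketched as a plain loop. Everything here (`fakeLlm`, the `tools` map, the message shapes) is an illustrative stand-in for the SDK's internals, not its actual API:

```typescript
type LlmReply =
  | { type: 'tool_call'; name: string; args: Record<string, unknown> }
  | { type: 'final'; content: string };

// Illustrative tool registry.
const tools: Record<string, (args: Record<string, unknown>) => unknown> = {
  search_web: (args) => ({ results: [`hit for ${args.query}`] }),
};

// Stubbed LLM: requests one tool call, then produces a final answer.
function fakeLlm(history: unknown[]): LlmReply {
  return history.length === 1
    ? { type: 'tool_call', name: 'search_web', args: { query: 'agents' } }
    : { type: 'final', content: 'done' };
}

function runLoop(input: string): string {
  const history: unknown[] = [input];
  for (;;) {
    const reply = fakeLlm(history); // 1. send conversation + tools to the LLM
    if (reply.type === 'tool_call') {
      const result = tools[reply.name](reply.args); // 2. execute the requested tool
      history.push(result); // 3. feed the result back and repeat
      continue;
    }
    return reply.content; // 4. return the final output
  }
}
```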
```typescript
@Agent({
  name: 'task-agent',
  llm: { provider: 'openai', model: 'gpt-4o', apiKey: { env: 'OPENAI_API_KEY' } },
  tools: [Tool1, Tool2],
  systemInstructions: 'You are a helpful assistant.',
})
class TaskAgent extends AgentContext {
  // No execute() override needed - uses the default loop
}
```
## Custom Execution

Override `execute()` for custom behavior:

```typescript
@Agent({
  name: 'custom-agent',
  inputSchema: { task: z.string() },
  outputSchema: z.object({ result: z.string() }),
  llm: { provider: 'openai', model: 'gpt-4o', apiKey: { env: 'OPENAI_API_KEY' } },
})
class CustomAgent extends AgentContext {
  async execute(input: { task: string }) {
    // Pre-processing
    await this.notify('Starting task...', 'info');

    // Custom validation
    if (input.task.length < 10) {
      return { result: 'Task too short' };
    }

    // Run the default agent loop
    const result = await super.execute(input);

    // Post-processing
    return {
      result: `Completed: ${result}`,
    };
  }
}
```
## Function-Based Alternative

```typescript
import { agent } from '@frontmcp/sdk';
import { z } from '@frontmcp/sdk';

const researchAgent = agent({
  name: 'research',
  inputSchema: { topic: z.string() },
  llm: { provider: 'openai', model: 'gpt-4o', apiKey: { env: 'OPENAI_API_KEY' } },
})((input, ctx) => {
  // Custom execution logic
  return { findings: '...' };
});
```
## Context Methods

### LLM Completion

```typescript
protected async completion(
  prompt: AgentPrompt,
  tools?: AgentToolDefinition[],
  options?: AgentCompletionOptions
): Promise<AgentCompletion>;

protected async *streamCompletion(
  prompt: AgentPrompt,
  tools?: AgentToolDefinition[]
): AsyncGenerator<AgentCompletionChunk>;

protected async executeTool(
  name: string,
  args: Record<string, unknown>
): Promise<unknown>;

protected async invokeAgent(
  agentId: string,
  input: unknown
): Promise<unknown>;
```

### Notifications

```typescript
protected async notify(
  message: string | Record<string, unknown>,
  level?: 'debug' | 'info' | 'warning' | 'error'
): Promise<boolean>;

protected async progress(
  progress: number,
  total?: number,
  message?: string
): Promise<boolean>;
```

### Elicitation

```typescript
protected async elicit<S extends ZodType>(
  message: string,
  requestedSchema: S,
  options?: ElicitOptions
): Promise<ElicitResult>;
```
## Agent Visibility

Agents can invoke other agents:

```typescript
@Agent({
  name: 'orchestrator',
  llm: { provider: 'openai', model: 'gpt-4o', apiKey: { env: 'OPENAI_API_KEY' } },
  tools: [ResearchAgent, WriterAgent], // Agents as tools
})
class OrchestratorAgent extends AgentContext {
  async execute(input: { task: string }) {
    // Can invoke sub-agents
    const research = await this.invokeAgent('research-agent', { topic: input.task });
    return research;
  }
}
```
## Full Example

```typescript
import { Agent, AgentContext, Tool, ToolContext, App, FrontMcp } from '@frontmcp/sdk';
import { z } from '@frontmcp/sdk';

// Tools for the agent
@Tool({
  name: 'search_database',
  inputSchema: { query: z.string(), table: z.string() },
})
class SearchDatabaseTool extends ToolContext {
  async execute(input) {
    const db = this.get(DatabaseToken);
    return db.search(input.table, input.query);
  }
}

@Tool({
  name: 'create_report',
  inputSchema: { title: z.string(), data: z.unknown() },
})
class CreateReportTool extends ToolContext {
  async execute(input) {
    return { reportId: 'rpt_123', title: input.title };
  }
}

// Agent definition
@Agent({
  name: 'data-analyst',
  description: 'Analyzes data and creates reports',
  systemInstructions: `You are a data analyst. Use the available tools to:
1. Search the database for relevant data
2. Analyze the results
3. Create a comprehensive report
Always explain your reasoning before taking actions.`,
  inputSchema: {
    request: z.string().describe('Analysis request'),
    tables: z.array(z.string()).describe('Tables to analyze'),
  },
  outputSchema: z.object({
    reportId: z.string(),
    summary: z.string(),
  }),
  llm: {
    provider: 'openai',
    model: 'gpt-4o',
    apiKey: { env: 'OPENAI_API_KEY' },
    temperature: 0.3,
  },
  tools: [SearchDatabaseTool, CreateReportTool],
  tags: ['analytics', 'reports'],
})
class DataAnalystAgent extends AgentContext {
  // Override for custom pre/post processing
  async execute(input) {
    await this.notify(`Starting analysis: ${input.request}`, 'info');
    await this.progress(0, 100, 'Initializing...');

    // Run the default agent loop
    const result = await super.execute(input);

    await this.progress(100, 100, 'Complete');
    return result;
  }

  // Override tool execution for logging
  protected async executeTool(name: string, args: Record<string, unknown>) {
    this.logger.info(`Executing tool: ${name}`, { args });
    return super.executeTool(name, args);
  }
}

@App({
  name: 'analytics',
  agents: [DataAnalystAgent],
  tools: [SearchDatabaseTool, CreateReportTool],
})
class AnalyticsApp {}

@FrontMcp({
  info: { name: 'Analytics Platform', version: '1.0.0' },
  apps: [AnalyticsApp],
})
export default class AnalyticsPlatform {}
```
## See Also

- AgentContext: Context class details
- AgentRegistry: Agent registry API
- Agent Errors: Agent-related errors