The Cache Plugin provides transparent response caching for tools, dramatically improving performance by avoiding redundant computations and API calls. This guide shows you how to add caching to your FrontMCP tools.
What You’ll Learn
By the end of this guide, you’ll know how to:
✅ Enable caching for specific tools
✅ Configure TTL (time-to-live) per tool
✅ Use sliding windows to keep hot data cached
✅ Switch between memory and Redis storage
✅ Handle cache misses and invalidation
Caching is perfect for tools that make expensive computations, database queries, or third-party API calls with deterministic outputs.
Prerequisites
A FrontMCP project with at least one app and tool
Understanding of tool execution flow
(Optional) Redis server for production caching
Step 1: Install the Cache Plugin
```shell
npm install @frontmcp/plugin-cache
```
Step 2: Add Plugin to Your App
The plugin can be registered with defaults (in-memory), with a custom TTL (in-memory), or backed by Redis for production. The simplest form uses the in-memory store:

```typescript
import { App } from '@frontmcp/sdk';
import { CachePlugin } from '@frontmcp/plugin-cache';

@App({
  id: 'my-app',
  name: 'My App',
  plugins: [CachePlugin], // Default: memory store, 1-day TTL
  tools: [
    /* your tools */
  ],
})
export default class MyApp {}
```
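For the custom-TTL and Redis variants, register the plugin with `CachePlugin.init`. This sketch mirrors the Redis configuration used in the Complete Example at the end of this guide; treat the exact option names as version-dependent:

```typescript
import { App } from '@frontmcp/sdk';
import { CachePlugin } from '@frontmcp/plugin-cache';

@App({
  id: 'my-app',
  name: 'My App',
  plugins: [
    CachePlugin.init({
      defaultTTL: 600, // seconds; applies when a tool sets cache: true
      // For production, switch the store to Redis:
      // type: 'redis',
      // config: { host: 'localhost', port: 6379 },
    }),
  ],
  tools: [
    /* your tools */
  ],
})
export default class MyApp {}
```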
Step 3: Enable Caching on Your Tools
Caching is opt-in per tool. Add the cache field to your tool metadata.
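As a minimal sketch (the `get-user-profile` name matches the tool used in Step 4; `this.database` is a hypothetical service for illustration):

```typescript
import { Tool, ToolContext, z } from '@frontmcp/sdk';

@Tool({
  name: 'get-user-profile',
  inputSchema: { userId: z.string() },
  cache: true, // opt in with the plugin's default TTL
})
class GetUserProfileTool extends ToolContext {
  async execute(input: { userId: string }) {
    // this.database is illustrative, not part of the SDK
    return await this.database.getUser(input.userId);
  }
}
```

Use `cache: { ttl, slideWindow }` instead of `true` when you need per-tool control, as shown in the patterns later in this guide.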
Step 4: Test Cache Behavior
Call the tool twice

Use the MCP Inspector or a client to call your cached tool twice with the same input:

```json
{ "userId": "user-123" }
```

The first call executes the tool normally (cache miss).
Observe the cache hit

The second call returns instantly from the cache. Check your logs for:

```
[DEBUG] Cache hit for get-user-profile
```
Test cache expiration
Wait for the TTL to expire, then call again. The cache will miss and the tool will execute.
How Caching Works
Cache Key Generation
When a tool is called, the plugin creates a deterministic hash from:
Tool name (e.g., get-user-profile)
Validated input (e.g., { userId: "user-123" })
Same input = Same cache key
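The idea can be sketched in plain TypeScript. This is an illustration, not the plugin's actual algorithm: hash the tool name together with the validated input serialized with a stable key order (the replacer-array trick below only handles flat inputs):

```typescript
import { createHash } from 'node:crypto';

// Hypothetical sketch of deterministic cache-key generation.
function cacheKey(toolName: string, input: Record<string, unknown>): string {
  // Sorted replacer array gives a stable serialization for flat objects
  const stable = JSON.stringify(input, Object.keys(input).sort());
  return createHash('sha256').update(`${toolName}:${stable}`).digest('hex');
}

const a = cacheKey('get-user-profile', { userId: 'user-123' });
const b = cacheKey('get-user-profile', { userId: 'user-123' });
const c = cacheKey('get-user-profile', { userId: 'user-456' });
console.log(a === b); // true: same input, same key
console.log(a === c); // false: different input, different key
```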
Before Execution (Will Hook)
The plugin checks the cache store for the key:
- Cache Hit: Return the cached result immediately, skipping execution
- Cache Miss: Allow the tool to execute normally
After Execution (Did Hook)
If the tool executed, the plugin stores the result in the cache with the configured TTL.
Sliding Window (Optional)
If slideWindow: true is set, each cache read refreshes the TTL, keeping popular data cached longer.
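To make the sliding-window behavior concrete, here is a toy in-memory TTL store (an illustration only; the plugin's real store differs). Each read within the TTL pushes the expiry forward:

```typescript
// Toy TTL cache with an optional sliding window. Time is passed in
// explicitly (milliseconds) so the behavior is easy to follow.
type Entry = { value: unknown; expiresAt: number };

class TtlCache {
  private entries = new Map<string, Entry>();
  constructor(private ttlMs: number, private slideWindow = false) {}

  set(key: string, value: unknown, now: number): void {
    this.entries.set(key, { value, expiresAt: now + this.ttlMs });
  }

  get(key: string, now: number): unknown {
    const entry = this.entries.get(key);
    if (!entry || entry.expiresAt <= now) return undefined; // miss or expired
    if (this.slideWindow) entry.expiresAt = now + this.ttlMs; // refresh TTL on read
    return entry.value;
  }
}

// With a 5s TTL and slideWindow, a read at t=4s keeps the entry alive
// until t=9s, a read at t=8s until t=13s, and so on.
const cache = new TtlCache(5000, true);
cache.set('dashboard:user-123', { widgets: 3 }, 0);
cache.get('dashboard:user-123', 4000); // hit: expiry pushed to 9000
console.log(cache.get('dashboard:user-123', 8000) !== undefined); // true
```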
The cache operates at the hook level , so it works transparently without modifying your tool code.
Configuration Options

cache
Enables caching for a tool:
- true - Use the plugin's default TTL
- { ttl, slideWindow } - Custom configuration

ttl
Time-to-live in seconds. Overrides the plugin's defaultTTL. Examples:
- 60 - 1 minute
- 300 - 5 minutes
- 3600 - 1 hour
- 86400 - 1 day

slideWindow
When true, reading from the cache refreshes the TTL. Use cases:
- Trending/popular data
- Frequently accessed reports
- User dashboards
Common Patterns
Pattern 1: Fast-Changing Data (Short TTL)
For data that changes frequently:

```typescript
@Tool({
  name: 'get-stock-price',
  inputSchema: { symbol: z.string() },
  cache: {
    ttl: 5, // Only 5 seconds
  },
})
class GetStockPriceTool extends ToolContext {
  async execute(input: { symbol: string }) {
    return await this.marketData.getPrice(input.symbol);
  }
}
```
Pattern 2: Expensive Reports (Long TTL)
For computationally expensive operations:

```typescript
@Tool({
  name: 'generate-annual-report',
  inputSchema: {
    year: z.number(),
    department: z.string(),
  },
  cache: {
    ttl: 86400, // 24 hours
  },
})
class GenerateAnnualReportTool extends ToolContext {
  async execute(input) {
    // Very expensive computation
    return await this.reports.generateAnnual(input.year, input.department);
  }
}
```
Pattern 3: Hot Data with Sliding Window
For frequently accessed data:

```typescript
@Tool({
  name: 'get-user-dashboard',
  inputSchema: { userId: z.string() },
  cache: {
    ttl: 300, // 5 minutes
    slideWindow: true, // Keep hot dashboards cached
  },
})
class GetUserDashboardTool extends ToolContext {
  async execute(input: { userId: string }) {
    return await this.dashboard.generate(input.userId);
  }
}
```
Pattern 4: Multi-Tenant Isolation
Include the tenant ID in the input for automatic isolation:

```typescript
@Tool({
  name: 'get-tenant-data',
  inputSchema: {
    tenantId: z.string(), // Automatically part of cache key
    dataType: z.string(),
  },
  cache: { ttl: 600 },
})
class GetTenantDataTool extends ToolContext {
  async execute(input) {
    return await this.tenantService.getData(
      input.tenantId,
      input.dataType
    );
  }
}
```
Each tenant’s data is cached separately!
Memory vs Redis
When to Use Memory Cache
- Development: Perfect for local development and testing
- Single Instance: When running one server instance
- Non-Critical Data: Data loss on restart is acceptable
- Simple Setup: No external dependencies needed

The memory cache resets when the server restarts and is not shared across multiple instances.
When to Use Redis
- Production: Recommended for production deployments
- Multi-Instance: Cache shared across multiple server instances
- Persistence: Cache survives server restarts
- Better Eviction: Redis handles memory limits gracefully
Redis provides persistence, sharing, and better memory management for production use.
Troubleshooting

Cache not working

Checklist:
- Tool has cache: true or cache: { ... } in its metadata
- Plugin is registered in the app's plugins array
- Redis is running (if using the Redis backend)
- No errors in the server logs

To debug, enable hit/miss logging:

```typescript
logging: {
  level: LogLevel.DEBUG, // See cache hit/miss logs
},
```
Stale data being returned

Problem: The cache TTL is too long for your data freshness requirements.

Solution: Reduce the TTL:

```typescript
cache: {
  ttl: 60, // Shorter TTL = fresher data
},
```
Cache not shared across instances

Problem: Using the memory cache with multiple server instances.

Solution: Switch to Redis:

```typescript
CachePlugin.init({
  type: 'redis',
  config: { host: 'localhost', port: 6379 },
})
```
Non-deterministic tools being cached

Problem: A tool returns different results for the same input (randomness, timestamps, external state), so cached responses are stale or wrong.

Solution: Remove the cache field from that tool's metadata; only deterministic tools should be cached.
Best Practices
1. Only Cache Deterministic Tools

The cache key is derived solely from the tool name and validated input, so a cached tool must return the same output for the same input. Avoid caching tools that depend on randomness, the current time, or mutable external state.
2. Choose Appropriate TTLs
Match the TTL to the data's change frequency:

| Data Type | Suggested TTL |
| --- | --- |
| Real-time prices | 5-10 seconds |
| User profiles | 5-15 minutes |
| Reports | 30 minutes - 1 hour |
| Static content | Hours to days |
3. Include Scoping Fields
Always include tenant/user IDs in inputs:

```typescript
// Good: Automatic tenant isolation
inputSchema: {
  tenantId: z.string(),
  userId: z.string(),
  reportId: z.string(),
}

// Bad: Shared across tenants
inputSchema: {
  reportId: z.string(),
}
```
4. Use Redis for Production
Redis provides:
Persistence across restarts
Sharing across instances
Better memory management
Monitoring and debugging tools
```typescript
// Production config
CachePlugin.init({
  type: 'redis',
  defaultTTL: 600,
  config: {
    host: process.env.REDIS_HOST,
    port: parseInt(process.env.REDIS_PORT || '6379'),
    password: process.env.REDIS_PASSWORD,
  },
})
```
5. Monitor Cache Performance

Enable DEBUG logging to observe hit/miss behavior, and keep an eye on your hit rate: a low hit rate may mean TTLs are too short or inputs vary too much to benefit from caching.
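The plugin itself only surfaces hits and misses through DEBUG logs, so any metrics layer is your own addition. A minimal hit-rate counter you could feed from those events might look like:

```typescript
// Simple hit/miss tally for estimating cache effectiveness.
class CacheStats {
  private hits = 0;
  private misses = 0;

  record(hit: boolean): void {
    if (hit) {
      this.hits++;
    } else {
      this.misses++;
    }
  }

  hitRate(): number {
    const total = this.hits + this.misses;
    return total === 0 ? 0 : this.hits / total;
  }
}

const stats = new CacheStats();
// Three hits and one miss, e.g. parsed from cache hit/miss log lines
[true, true, true, false].forEach((hit) => stats.record(hit));
console.log(stats.hitRate()); // 0.75
```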
Complete Example
Here’s a full example with multiple tools using different caching strategies:
```typescript
import { FrontMcp, App, Tool, ToolContext, z } from '@frontmcp/sdk';
import { CachePlugin } from '@frontmcp/plugin-cache';

// Real-time data: short TTL
@Tool({
  name: 'get-stock-price',
  inputSchema: { symbol: z.string() },
  cache: { ttl: 10 }, // 10 seconds
})
class GetStockPriceTool extends ToolContext {
  async execute(input: { symbol: string }) {
    return await this.marketData.getPrice(input.symbol);
  }
}

// User data: medium TTL
@Tool({
  name: 'get-user',
  inputSchema: {
    tenantId: z.string(),
    userId: z.string(),
  },
  cache: { ttl: 300 }, // 5 minutes
})
class GetUserTool extends ToolContext {
  async execute(input) {
    return await this.database.getUser(input.tenantId, input.userId);
  }
}

// Popular content: sliding window
@Tool({
  name: 'get-trending',
  inputSchema: { category: z.string() },
  cache: {
    ttl: 120, // 2 minutes
    slideWindow: true, // Keep hot data cached
  },
})
class GetTrendingTool extends ToolContext {
  async execute(input: { category: string }) {
    return await this.analytics.getTrending(input.category);
  }
}

// Expensive reports: long TTL
@Tool({
  name: 'generate-report',
  inputSchema: {
    tenantId: z.string(),
    month: z.string(),
  },
  cache: { ttl: 3600 }, // 1 hour
})
class GenerateReportTool extends ToolContext {
  async execute(input) {
    // Very expensive operation
    return await this.reports.generate(input.tenantId, input.month);
  }
}

@App({
  id: 'analytics',
  name: 'Analytics App',
  plugins: [
    CachePlugin.init({
      type: 'redis',
      defaultTTL: 600, // 10 minutes default
      config: {
        host: process.env.REDIS_HOST || 'localhost',
        port: parseInt(process.env.REDIS_PORT || '6379'),
        password: process.env.REDIS_PASSWORD,
      },
    }),
  ],
  tools: [GetStockPriceTool, GetUserTool, GetTrendingTool, GenerateReportTool],
})
class AnalyticsApp {}

@FrontMcp({
  info: { name: 'Analytics Server', version: '1.0.0' },
  apps: [AnalyticsApp],
  http: { port: 3000 },
})
export default class Server {}
```
What’s Next?
- Cache Plugin Docs: Full Cache Plugin reference documentation
- Custom Hooks: Learn how the cache plugin uses hooks internally
- Plugin Development: Create your own plugins with custom behavior