CodeCall implements bank-grade security through a defense-in-depth architecture. Every script passes through six security layers before execution, ensuring that even if one layer is bypassed, others catch malicious behavior.

100+ Attack Vectors Blocked

Pre-Scanner + AST Guard blocks ReDoS, BiDi attacks, eval, prototype pollution, and more

Layer 0 Defense

Pre-Scanner catches attacks BEFORE parser execution - blocks parser-level DoS

AI Scoring Gate

Semantic analysis detects exfiltration patterns, bulk operations, and sensitive data access

Zero Trust Runtime

Enclave sandbox with whitelist-only globals and resource limits

Worker Pool (Optional)

OS-level memory isolation via worker threads with hard halt capability

Security Pipeline

Every script goes through this 6-layer pipeline:

Layer 0: Pre-Scanner (Defense-in-Depth)

The Pre-Scanner is a new security layer that runs BEFORE the JavaScript parser (acorn). It provides defense-in-depth protection against attacks that could DoS or exploit the parser itself.

Why Layer 0?

Traditional security scanners operate on the AST (Abstract Syntax Tree), which means they rely on the parser completing successfully. Sophisticated attackers can exploit this by:
  1. Parser DoS: Deeply nested brackets/braces can cause stack overflow in recursive descent parsers
  2. ReDoS at Parse Time: Complex regex literals can hang the parser
  3. Memory Exhaustion: Large inputs can exhaust memory before validation
  4. Trojan Source Attacks: Unicode BiDi characters can make code appear different from how it executes

Mandatory Limits (Cannot Be Disabled)

These limits are enforced regardless of configuration:
| Limit | Value | Purpose |
| --- | --- | --- |
| Max Input Size | 100 MB (absolute) / 50 KB (AgentScript preset) | Prevents memory exhaustion |
| Max Nesting Depth | 200 levels | Prevents stack overflow |
| Max Line Length | 100,000 chars | Handles minified code safely |
| Max Regex Length | 1,000 chars | Prevents ReDoS |
| Max Regex Count | 50 | Limits ReDoS attack surface |
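To illustrate how a mandatory limit can be enforced before any parsing happens, here is a hedged sketch of a nesting-depth check. The names `maxNestingDepth` and `preScan` and the error string are illustrative, not the actual Pre-Scanner API:

```typescript
// Illustrative sketch only: a pre-parse nesting-depth check.
function maxNestingDepth(source: string): number {
  const open = new Set(["(", "[", "{"]);
  const close = new Set([")", "]", "}"]);
  let depth = 0;
  let max = 0;
  for (const ch of source) {
    if (open.has(ch)) max = Math.max(max, ++depth);
    else if (close.has(ch)) depth = Math.max(0, depth - 1);
  }
  return max;
}

// Reject input before it ever reaches the parser (limit from the table above).
function preScan(source: string, maxDepth = 200): void {
  if (maxNestingDepth(source) > maxDepth) {
    throw new Error("PRESCAN_NESTING_DEPTH_EXCEEDED");
  }
}
```

A real scanner must also skip brackets inside string literals and comments; this sketch ignores that for brevity.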

Pre-Scanner Attacks Blocked

Blocked regex patterns (ReDoS):
  • (a+)+ - Nested quantifiers
  • (a|a)+ - Overlapping alternation
  • (.*a)+ - Greedy backtracking
  • (a+){2,} - Star in repetition
Why: These patterns cause exponential backtracking that can hang the parser or runtime for hours.
Blocked characters (Trojan Source / BiDi):
  • U+202E (Right-to-Left Override)
  • U+2066 (Left-to-Right Isolate)
  • U+2069 (Pop Directional Isolate)
Why: Makes code appear different from how it executes (CVE-2021-42574).
Blocked nesting (parser DoS):
  • Deeply nested brackets: (((((((((x)))))))))
  • Deeply nested braces: {{{{{{{{{}}}}}}}}}
Why: Recursive descent parsers can overflow their stack on deep nesting.
Blocked input sizes (memory exhaustion):
  • Inputs > 50KB (AgentScript preset)
  • Inputs > configured maxInputSize
Why: Large inputs can exhaust memory before validation completes.
Blocked bytes (null-byte injection):
  • \x00 characters anywhere in input
Why: Often indicates binary data injection or attack payloads.

Pre-Scanner Configuration

CodeCall uses the AgentScript preset which provides the strictest pre-scanning:
// AgentScript Pre-Scanner settings (automatic with CodeCall)
{
  regexMode: 'block',        // Block ALL regex literals
  maxInputSize: 50_000,      // 50KB limit
  maxNestingDepth: 30,       // Conservative nesting
  bidiMode: 'strict',        // Block all BiDi characters
}

Layer 1: AST Validation

AST Guard parses JavaScript into an Abstract Syntax Tree and validates every node against security rules before any code executes.

Blocked Constructs

Blocked (dynamic code execution):
  • eval('malicious code') - Dynamic code execution
  • new Function('return process')() - Function constructor
  • setTimeout(() => {}, 0) - Timer-based execution
  • setInterval, setImmediate - Async execution escape
Why: These allow arbitrary code injection that bypasses AST validation.
Blocked (system and global access):
  • process.env.SECRET - Node.js process access
  • require('fs') - Module loading
  • window.location - Browser globals
  • global, globalThis - Global object access
  • this - Context leakage
Why: Prevents sandbox escape and system access.
Blocked (prototype pollution):
  • obj.__proto__ = {} - Direct prototype manipulation
  • obj.constructor.prototype - Indirect prototype access
  • Object.prototype.polluted = true - Global prototype pollution
Why: Prototype pollution can corrupt the entire runtime.
Blocked (Unicode / Trojan Source):
  • Bidirectional override characters (CVE-2021-42574)
  • Homoglyph attacks (Cyrillic ‘а’ vs Latin ‘a’)
  • Zero-width characters
  • Invisible formatting characters
Why: Makes code appear different from how it executes.
Blocked (unbounded loops and recursion):
  • while (true) {} - Unbounded while loops
  • do {} while (true) - Unbounded do-while loops
  • for (key in obj) - Prototype chain walking
  • Recursive function definitions
Why: Can freeze the server or exhaust memory.

AgentScript Preset

CodeCall uses the AgentScript preset, the most restrictive configuration, designed for LLM-generated code:
import { createAgentScriptPreset } from 'ast-guard';

const preset = createAgentScriptPreset({
  // Whitelist-only globals
  allowedGlobals: [
    'callTool', 'getTool', 'codecallContext',
    'Math', 'JSON', 'Array', 'Object', 'String', 'Number', 'Date',
    'console', // optional, controlled by allowConsole
  ],

  // Only bounded loops allowed
  allowedLoops: {
    allowFor: true,      // for (let i = 0; i < n; i++)
    allowForOf: true,    // for (const x of array)
    allowWhile: false,   // ❌ blocked
    allowDoWhile: false, // ❌ blocked
    allowForIn: false,   // ❌ blocked (prototype walking)
  },

  // Arrow functions only (no recursion)
  allowArrowFunctions: true,
});

What’s Allowed

// ✅ Tool calls
const users = await callTool('users:list', { limit: 100 });

// ✅ Variables
const filtered = users.filter(u => u.active);
let count = 0;

// ✅ Bounded loops
for (let i = 0; i < users.length; i++) { count++; }
for (const user of users) { console.log(user.name); }

// ✅ Array methods with arrow functions
const names = users.map(u => u.name);
const total = users.reduce((sum, u) => sum + u.score, 0);

// ✅ Safe built-ins
const max = Math.max(1, 2, 3);
const parsed = JSON.parse('{"a":1}');
const keys = Object.keys(obj);

// ✅ Context access (read-only)
const tenant = codecallContext.tenantId;

// ✅ Return values
return { count, names, total };

Layer 2: Code Transformation

After AST validation passes, code is transformed for safe execution:

Transformations Applied

| Original | Transformed | Purpose |
| --- | --- | --- |
| Top-level code | async function __ag_main() { ... } | Enable top-level await |
| callTool(...) | __safe_callTool(...) | Proxy through Enclave |
| for (...) | Iteration-limited version | Enforce maxIterations |
| console.log(...) | __safe_console.log(...) | Capture for logging |

Example

// Input
const users = await callTool('users:list', {});
for (const user of users) {
  console.log(user.name);
}
return users.length;

// Transformed
async function __ag_main() {
  const users = await __safe_callTool('users:list', {});
  __safe_forOf(users, (user) => {
    __safe_console.log(user.name);
  });
  return users.length;
}
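The iteration-limited loop helper might look roughly like the following sketch. The real `__safe_forOf` is internal to the transform and may differ; the default limit here mirrors the `secure` preset:

```typescript
// Hypothetical shape of the iteration-limited helper targeted by the
// transform above.
function __safe_forOf<T>(
  iterable: Iterable<T>,
  body: (item: T) => void,
  maxIterations = 5000,
): void {
  let iterations = 0;
  for (const item of iterable) {
    if (++iterations > maxIterations) {
      throw new Error("MAX_ITERATIONS_EXCEEDED");
    }
    body(item);
  }
}
```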

Reserved Prefixes

User code cannot declare identifiers with these prefixes:
  • __ag_ - AgentScript internal functions
  • __safe_ - Safe runtime proxies
// ❌ BLOCKED by AST validation
const __ag_hack = 'foo';
let __safe_bypass = 123;

Layer 3: AI Scoring Gate (NEW)

The AI Scoring Gate is a semantic security layer that analyzes code behavior patterns to detect sophisticated attacks that syntactic validation alone cannot catch. It runs after AST validation but before VM execution.

Why Semantic Analysis?

AST validation catches structural threats (eval, prototype pollution), but some attacks are semantically valid code that behaves maliciously:
  • Data exfiltration: Fetch sensitive data, then send it externally
  • Bulk data harvesting: Request excessive limits to scrape data
  • Credential theft: Access password/token fields and export them
  • Fan-out attacks: Loop over results and call tools for each item
The Scoring Gate uses feature extraction and rule-based analysis to assign risk scores to these behavioral patterns.

Detection Rules (8 Built-in)

| Rule ID | Score | Trigger |
| --- | --- | --- |
| SENSITIVE_FIELD | +35 | Access to password, token, secret, apiKey, SSN, etc. |
| EXCESSIVE_LIMIT | +25 | Limit values > 10,000 |
| WILDCARD_QUERY | +20 | Query patterns like * or SELECT * |
| LOOP_TOOL_CALL | +25 | Tool calls inside loops (potential fan-out) |
| EXFIL_PATTERN | +50 | Fetch→Send sequence (list data, then webhook/email) |
| EXTREME_VALUE | +30 | Numeric values > 1,000,000 |
| DYNAMIC_TOOL | +20 | Tool name from variable (not string literal) |
| BULK_OPERATION | +15 | Tool names with bulk/batch/mass/all keywords |

Risk Levels

| Total Score | Risk Level | Default Action |
| --- | --- | --- |
| 0-19 | none | Allow |
| 20-39 | low | Allow |
| 40-69 | medium | Warn (configurable) |
| 70-89 | high | Block (configurable) |
| 90+ | critical | Block |
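The score-to-risk mapping can be read directly off the table above; a sketch (`riskLevel` is an illustrative name, not a library export):

```typescript
type RiskLevel = "none" | "low" | "medium" | "high" | "critical";

// Thresholds taken from the risk-level table. Scores can exceed 100 when
// several rules fire, so the top band is open-ended.
function riskLevel(totalScore: number): RiskLevel {
  if (totalScore >= 90) return "critical";
  if (totalScore >= 70) return "high";
  if (totalScore >= 40) return "medium";
  if (totalScore >= 20) return "low";
  return "none";
}
```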

Example: Exfiltration Detection

// This code triggers EXFIL_PATTERN + SENSITIVE_FIELD + EXCESSIVE_LIMIT
const users = await callTool('users:list', {
  limit: 100000,
  fields: ['email', 'password', 'apiKey']
});
await callTool('webhooks:send', { data: users });

// Score breakdown:
// - SENSITIVE_FIELD: +35 (password, apiKey)
// - EXCESSIVE_LIMIT: +25 (100000 > 10000)
// - EXFIL_PATTERN: +50 (list → send)
// Total: 110 → BLOCKED (critical risk)

Scorer Modes

The Scoring Gate supports pluggable scorers for different deployment scenarios:
| Mode | Latency | Description |
| --- | --- | --- |
| disabled | ~0ms | Pass-through, no scoring (for trusted environments) |
| rule-based | ~1ms | Zero-dependency TypeScript rules (recommended) |
| external-api | ~100ms | External scoring API for advanced ML models |

Configuration

import { createEnclave } from 'enclave-vm';

const enclave = createEnclave({
  scoringGate: {
    scorer: 'rule-based',    // 'disabled' | 'rule-based' | 'external-api'
    blockThreshold: 70,      // Block if score >= 70
    warnThreshold: 40,       // Warn if score >= 40
    failOpen: true,          // Allow on scorer errors (production default)
    cache: {
      enabled: true,
      ttlMs: 60000,          // Cache scores for 1 minute
      maxEntries: 1000,
    },
  },
});

Fail-Open vs Fail-Closed

| Mode | Behavior | Use Case |
| --- | --- | --- |
| failOpen: true | Allow execution if scorer fails | Production default; availability over security |
| failOpen: false | Block execution if scorer fails | High-security environments |

Scoring Result

Every execution includes scoring metadata:
const result = await enclave.run(code, { tools });

console.log(result.scoringResult);
// {
//   allowed: true,
//   warned: true,
//   totalScore: 45,
//   riskLevel: 'medium',
//   signals: [
//     { id: 'EXCESSIVE_LIMIT', score: 25, message: 'Limit 15000 > 10000' },
//     { id: 'WILDCARD_QUERY', score: 20, message: 'Wildcard query "*" detected' }
//   ],
//   latencyMs: 0.8,
//   cached: false
// }

Caching

The Scoring Gate uses an LRU cache with TTL to avoid re-scoring identical code:
  • Same code → same features → same score
  • Cache hit latency: ~0.01ms
  • Configurable TTL and max entries
  • Automatic pruning of expired entries
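The bullets above can be sketched as a minimal LRU+TTL cache, relying on `Map`'s insertion-order iteration for eviction. This is an illustrative sketch; the actual Scoring Gate cache implementation may differ:

```typescript
// Minimal LRU cache with TTL. Keys would be code hashes in practice.
class ScoreCache<V> {
  private store = new Map<string, { value: V; expiresAt: number }>();
  constructor(private maxEntries = 1000, private ttlMs = 60_000) {}

  get(key: string): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.store.delete(key); // lazy pruning of expired entries
      return undefined;
    }
    // Delete and re-insert to mark this key as most recently used.
    this.store.delete(key);
    this.store.set(key, entry);
    return entry.value;
  }

  set(key: string, value: V): void {
    if (this.store.size >= this.maxEntries) {
      // Map iterates in insertion order, so the first key is the LRU entry.
      const oldest = this.store.keys().next().value;
      if (oldest !== undefined) this.store.delete(oldest);
    }
    this.store.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }
}
```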

AI Scoring Gate Internals

This section is for security auditors and advanced users who need to understand how the Scoring Gate works internally.

Feature Extraction

The Scoring Gate extracts structured features from code for analysis:
interface ExtractedFeatures {
  // Tool call metadata
  toolCalls: Array<{
    name: string;           // Tool name (literal or 'DYNAMIC')
    argsPattern: string[];  // Argument keys/patterns
    inLoop: boolean;        // Called inside a loop?
    loopDepth: number;      // Nesting level
  }>;

  // Pattern signals
  patterns: {
    hasLoopedToolCalls: boolean;
    hasNestedLoops: boolean;
    maxLoopNesting: number;
    totalIterations: number; // Estimated from literals
  };

  // Numeric signals
  numerics: {
    maxLimit: number;       // Largest limit/count value
    totalStringLength: number;
    fanOutRisk: number;     // Loop count × tool calls
  };

  // Sensitive data signals
  sensitive: {
    fieldsAccessed: string[];  // password, token, etc.
    hasWildcard: boolean;
    hasBulkOperation: boolean;
  };

  // Code metrics
  metrics: {
    hash: string;           // SHA-256 for caching
    lineCount: number;
    complexity: number;     // Cyclomatic complexity
  };
}

Rule Evaluation Order

Rules are evaluated in a specific order for efficiency:
  1. Quick reject - Check for obvious red flags (excessive limits)
  2. Pattern matching - Detect exfiltration sequences
  3. Sensitive field scan - Check for credential access
  4. Loop analysis - Detect fan-out patterns
  5. Final scoring - Aggregate scores from all rules

Extending with Custom Rules

import { RuleBasedScorer, ScoringRule } from 'enclave-vm';

const customRule: ScoringRule = {
  id: 'CUSTOM_PII_ACCESS',
  evaluate: (features) => {
    const piiFields = ['ssn', 'dob', 'address', 'phone'];
    const accessed = features.sensitive.fieldsAccessed
      .filter(f => piiFields.some(pii => f.includes(pii)));

    if (accessed.length > 0) {
      return {
        score: accessed.length * 15,
        description: `PII fields accessed: ${accessed.join(', ')}`,
        level: accessed.length > 2 ? 'high' : 'medium',
      };
    }
    return null;
  },
};

const scorer = new RuleBasedScorer({
  customRules: [customRule],
});

Cache Configuration by Security Level

| Level | TTL | Max Entries | Eviction |
| --- | --- | --- | --- |
| STRICT | 30s | 100 | Aggressive |
| SECURE | 60s | 500 | Normal |
| STANDARD | 300s | 1,000 | Lazy |
| PERMISSIVE | 600s | 5,000 | Lazy |

Layer 4: Runtime Sandbox

Enclave executes transformed code in an isolated Node.js vm context.

Isolation Guarantees

Fresh Context

Each execution gets a new, isolated context with no access to the host environment

Controlled Globals

Only whitelisted globals available: Math, JSON, Array, Object, etc.

No Module Access

No require, import, or dynamic module loading

No Async Escape

No setTimeout, setInterval, or Promise.race tricks

Resource Limits

| Limit | Default | Purpose |
| --- | --- | --- |
| timeoutMs | 3,500ms | Maximum execution time |
| maxIterations | 5,000 | Maximum loop iterations |
| maxToolCalls | 100 | Maximum tool invocations |
| maxConsoleOutputBytes | 64KB | Maximum console output (I/O flood protection) |
| maxConsoleCalls | 100 | Maximum console calls (I/O flood protection) |

CodeCallPlugin.init({
  vm: {
    preset: 'secure',  // Uses defaults above
    timeoutMs: 5000,   // Override timeout
  },
});

VM Presets

| Preset | Timeout | Iterations | Tool Calls | Console Output | Console Calls | Use Case |
| --- | --- | --- | --- | --- | --- | --- |
| locked_down | 2s | 2,000 | 10 | 32KB | 50 | Ultra-sensitive data |
| secure | 3.5s | 5,000 | 100 | 64KB | 100 | Production default |
| balanced | 5s | 10,000 | 200 | 256KB | 500 | Complex workflows |
| experimental | 10s | 20,000 | 500 | 1MB | 1000 | Development only |

Security Levels vs VM Presets

Don’t confuse Enclave Security Levels with CodeCall VM Presets - they serve different purposes but work together.
The Enclave library uses Security Levels (STRICT, SECURE, STANDARD, PERMISSIVE) for internal configuration, while CodeCall exposes VM Presets (locked_down, secure, balanced, experimental) as a user-friendly interface. Mapping:
| VM Preset | Enclave Security Level | Description |
| --- | --- | --- |
| locked_down | STRICT | Maximum security, minimal capabilities |
| secure | SECURE | Production-safe with reasonable limits |
| balanced | STANDARD | More flexibility for complex scripts |
| experimental | PERMISSIVE | Development/testing only |
Enclave Security Level Defaults:
| Config | STRICT | SECURE | STANDARD | PERMISSIVE |
| --- | --- | --- | --- | --- |
| timeout | 2,000ms | 3,500ms | 5,000ms | 10,000ms |
| maxIterations | 2,000 | 5,000 | 10,000 | 20,000 |
| maxToolCalls | 10 | 100 | 200 | 500 |
| maxConsoleOutputBytes | 32KB | 64KB | 256KB | 1MB |
| maxConsoleCalls | 50 | 100 | 500 | 1,000 |
| maxSanitizeDepth | 5 | 10 | 15 | 20 |
| maxSanitizeProperties | 500 | 1,000 | 5,000 | 10,000 |
When configuring CodeCall, use VM Presets:
CodeCallPlugin.init({
  vm: {
    preset: 'secure',  // Maps to SECURE level
    timeoutMs: 5000,   // Override specific values as needed
  },
});

Worker Pool Adapter (Optional)

For environments requiring OS-level memory isolation, enable the Worker Pool Adapter:
import { Enclave } from 'enclave-vm';

const enclave = new Enclave({
  adapter: 'worker_threads',  // Enable Worker Pool
  workerPoolConfig: {
    minWorkers: 2,
    maxWorkers: 8,
    memoryLimitPerWorker: 256 * 1024 * 1024,  // 256MB
  },
});

Dual-Layer Sandbox

When using the Worker Pool, code runs in a dual-layer sandbox: the isolated VM context (Layer 4) nested inside an OS-isolated worker thread.

When to Use Worker Pool

| Scenario | Recommendation |
| --- | --- |
| Trusted internal scripts | Standard VM (lower overhead) |
| Multi-tenant execution | Worker Pool (OS isolation) |
| Untrusted AI-generated code | Worker Pool (hard halt) |
| Memory-sensitive workloads | Worker Pool (per-worker limits) |

Worker Pool Security Features

| Feature | Protection |
| --- | --- |
| worker.terminate() | Hard halt for runaway scripts (VM timeout bypass) |
| --max-old-space-size | Per-worker memory limits |
| JSON-only serialization | Prevents structured-clone gadget attacks |
| Dangerous global removal | parentPort, workerData inaccessible |
| Rate limiting | Message flood protection |
| Safe deserialize | Prototype pollution prevention |

Worker Pool Configuration

| Option | Default | Description |
| --- | --- | --- |
| minWorkers | 2 | Minimum warm workers |
| maxWorkers | CPU count | Maximum concurrent workers |
| memoryLimitPerWorker | 128MB | Per-worker memory limit |
| maxMessagesPerSecond | 1000 | Rate limit per worker |
| maxExecutionsPerWorker | 1000 | Recycle after N executions |

Worker Pool Presets

| Level | minWorkers | maxWorkers | memoryLimit | messagesPerSec |
| --- | --- | --- | --- | --- |
| STRICT | 2 | 4 | 64MB | 100 |
| SECURE | 2 | 8 | 128MB | 500 |
| STANDARD | 2 | 16 | 256MB | 1000 |
| PERMISSIVE | 4 | 32 | 512MB | 5000 |

Custom Globals Validation

When providing custom globals to scripts via the globals config option, Enclave validates them to prevent security bypasses.

Validation Rules

| Rule | Limit | Effect |
| --- | --- | --- |
| No functions | - | Functions cannot be injected |
| No getters/setters | - | Property traps blocked |
| No symbols | - | Symbol-keyed properties stripped |
| No dangerous keys | - | __proto__, constructor, prototype rejected |
| Max nesting | 10 levels | Deep objects rejected |
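These rules could be enforced with a recursive walk along the following lines. This is a sketch; the function name and error strings are illustrative, not Enclave's actual API:

```typescript
// Illustrative enforcement of the custom-globals validation rules above.
const DANGEROUS_KEYS = new Set(["__proto__", "constructor", "prototype"]);

function validateGlobal(value: unknown, depth = 0): void {
  if (depth > 10) throw new Error("GLOBALS_TOO_DEEP");
  if (typeof value === "function") throw new Error("GLOBALS_FUNCTION_BLOCKED");
  if (typeof value === "symbol") throw new Error("GLOBALS_SYMBOL_BLOCKED");
  if (value !== null && typeof value === "object") {
    for (const key of Object.getOwnPropertyNames(value)) {
      if (DANGEROUS_KEYS.has(key)) throw new Error("GLOBALS_DANGEROUS_KEY");
      const desc = Object.getOwnPropertyDescriptor(value, key)!;
      // Accessors are property traps: reject rather than evaluate them.
      if (desc.get || desc.set) throw new Error("GLOBALS_ACCESSOR_BLOCKED");
      validateGlobal(desc.value, depth + 1);
    }
  }
}
```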

Blocked Function Patterns

Custom globals are scanned for dangerous function names in string values:
// These strings in globals trigger validation errors
const DANGEROUS_PATTERNS = [
  'eval',
  'Function',
  'require',
  'import',
  'process',
  'global',
  'globalThis',
  'setTimeout',
  'setInterval',
  'setImmediate',
];

Valid Custom Globals

CodeCallPlugin.init({
  vm: {
    globals: {
      // ✅ Valid - primitive values
      tenantId: 'tenant-123',
      maxLimit: 1000,
      isProduction: true,

      // ✅ Valid - plain objects
      config: {
        apiVersion: 'v2',
        features: ['search', 'export'],
      },

      // ✅ Valid - arrays
      allowedRegions: ['us-east', 'eu-west'],
    },
  },
});

Invalid Custom Globals

CodeCallPlugin.init({
  vm: {
    globals: {
      // ❌ Invalid - functions are stripped
      helper: (x) => x * 2,

      // ❌ Invalid - getters rejected
      computed: {
        get value() { return Date.now(); },
      },

      // ❌ Invalid - dangerous keys rejected
      __proto__: {},
      constructor: {},

      // ❌ Invalid - too deep (>10 levels)
      deeply: { nested: { objects: { will: { be: { rejected: {} } } } } },
    },
  },
});

Self-Reference Guard

Critical Security Feature: Scripts cannot call CodeCall meta-tools from within scripts.
// Inside codecall:execute script
// ❌ BLOCKED - Self-reference detected
const result = await callTool('codecall:execute', {
  script: 'return "nested"'
});
// Returns: { success: false, error: { code: 'SELF_REFERENCE_BLOCKED' } }

// ❌ Also blocked
await callTool('codecall:search', { query: 'users' });
await callTool('codecall:describe', { toolNames: ['users:list'] });
await callTool('codecall:invoke', { tool: 'users:list', input: {} });

Why This Matters

Without self-reference blocking, an attacker could:
  1. Recursive execution: codecall:execute calls itself infinitely
  2. Sandbox escape: Nest executions to accumulate privileges
  3. Resource exhaustion: Each nested call multiplies resource usage
  4. Audit bypass: Hide malicious calls in nested scripts

Implementation

The guard runs before any other security checks:
// execute.tool.ts - First line of callTool handler
assertNotSelfReference(toolName);  // Throws if codecall:* tool
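A guard of this shape can be as simple as a prefix check. The following is an illustrative sketch, not the actual implementation:

```typescript
// Illustrative version of the self-reference guard.
function assertNotSelfReference(toolName: string): void {
  if (toolName.startsWith("codecall:")) {
    const err = new Error(`Self-reference blocked: ${toolName}`) as Error & {
      code?: string;
    };
    err.code = "SELF_REFERENCE_BLOCKED"; // matches the error code shown above
    throw err;
  }
}
```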

Advanced Tool Access Control

Beyond the Self-Reference Guard, CodeCall provides a comprehensive Tool Access Control system for fine-grained control over which tools scripts can invoke.

Access Modes

| Mode | Behavior | Use Case |
| --- | --- | --- |
| whitelist | Only explicitly allowed tools can be called | High-security, known toolset |
| blacklist | All tools allowed except explicitly blocked | Flexible with some restrictions |
| dynamic | Custom evaluator function decides per-call | Complex authorization logic |

Default Blacklist

By default, CodeCall blocks these tool patterns:
const DEFAULT_BLACKLIST = [
  'system:*',    // System administration tools
  'internal:*',  // Internal/private tools
  '__*',         // Internal implementation tools
];

Pattern Matching

Tool access rules support glob patterns for flexible matching:
CodeCallPlugin.init({
  toolAccess: {
    mode: 'blacklist',
    patterns: [
      'admin:*',       // Block all admin tools
      'users:delete',  // Block specific tool
      '*:export',      // Block all export operations
    ],
  },
});
Supported patterns:
  • * - Matches any characters within a segment
  • ? - Matches a single character
  • prefix:* - Matches all tools in a namespace
Pattern matching includes ReDoS protection - patterns are validated and normalized to prevent denial-of-service attacks.
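One ReDoS-safe way to implement such glob matching is to escape all regex metacharacters before expanding `*` and `?`. A sketch (`matchesPattern` is an illustrative name; the library's actual matcher may differ):

```typescript
// Sketch of glob matching for tool-access patterns. Escaping metacharacters
// first means a user pattern can never smuggle in quantifiers or
// alternations, which is what keeps this ReDoS-safe.
function matchesPattern(toolName: string, pattern: string): boolean {
  const escaped = pattern.replace(/[.+^${}()|[\]\\]/g, "\\$&");
  const body = escaped
    .replace(/\*/g, "[^:]*") // * matches within a segment
    .replace(/\?/g, ".");    // ? matches a single character
  return new RegExp(`^${body}$`).test(toolName);
}
```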

Whitelist Mode

For maximum security, use whitelist mode to explicitly allow only specific tools:
CodeCallPlugin.init({
  toolAccess: {
    mode: 'whitelist',
    patterns: [
      'users:list',
      'users:get',
      'orders:list',
      // Only these 3 tools are accessible
    ],
  },
});

Dynamic Access Control

For complex authorization (e.g., per-tenant, per-user, or context-based):
CodeCallPlugin.init({
  toolAccess: {
    mode: 'dynamic',
    evaluator: async (toolName, context) => {
      // Check tenant permissions
      const allowed = await checkPermission(
        context.tenantId,
        toolName
      );
      return {
        allowed,
        reason: allowed ? undefined : 'Tool not authorized for tenant',
      };
    },
  },
});

Call Depth Tracking

Tool access control tracks call depth to prevent indirect privilege escalation:
// Direct call from script
await callTool('users:list', {});  // depth: 1

// Tool calls another tool (if allowed)
// Inside users:list:
await callTool('cache:get', {});   // depth: 2
Maximum call depth is configurable (default: 10) to prevent deep call chains.
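Depth tracking can be sketched as a wrapper that increments a counter around each invocation. This is illustrative only; the real tracking is internal to the access-control system, and `invoke` here stands in for the underlying dispatcher:

```typescript
// Illustrative depth-tracking wrapper with a configurable maximum.
function makeCallTool(
  invoke: (name: string, input: unknown) => Promise<unknown>,
  maxDepth = 10,
) {
  let depth = 0;
  return async function callTool(name: string, input: unknown): Promise<unknown> {
    if (depth >= maxDepth) throw new Error("MAX_CALL_DEPTH_EXCEEDED");
    depth++; // entering one level deeper
    try {
      return await invoke(name, input);
    } finally {
      depth--; // always restore on the way out, even on error
    }
  };
}
```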

Layer 5: Output Sanitization

All outputs are sanitized before returning to the client through two mechanisms: Value Sanitization (structure/content) and Stack Trace Sanitization (information leakage).

Value Sanitization Rules

| Rule | Default | Purpose |
| --- | --- | --- |
| maxDepth | 20 | Prevent deeply nested objects |
| maxProperties | 10,000 | Limit total object keys |
| maxStringLength | 10,000 | Truncate oversized strings |
| maxArrayLength | 1,000 | Truncate large arrays |

What Gets Stripped

Value sanitization removes potentially dangerous content:
| Stripped | Reason |
| --- | --- |
| Functions | Prevents code injection |
| Symbols | Prevents prototype manipulation |
| __proto__ keys | Prevents prototype pollution |
| constructor keys | Prevents constructor tampering |
| Getters/Setters | Prevents trap execution |

Type Handling

The sanitizer handles special JavaScript types safely:
Input types are converted to safe representations:
| Input type | Safe representation |
| --- | --- |
| Date | ISO string |
| Error | { name, message, code? } |
| RegExp | string pattern |
| Map | plain object |
| Set | array |
| Buffer | "[Buffer]" |
| ArrayBuffer | "[ArrayBuffer]" |

Circular Reference Detection

// Script returns circular reference
const obj = { name: 'test' };
obj.self = obj;
return obj;

// Sanitized output
{
  "name": "test",
  "self": "[Circular]"
}
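Circular references are typically detected with a WeakSet of visited objects. A minimal sketch follows; note that it also collapses repeated non-circular references, and the real sanitizer additionally enforces the depth and size limits above:

```typescript
// Minimal circular-reference handling with a WeakSet of visited objects.
function sanitize(value: unknown, seen = new WeakSet<object>()): unknown {
  if (value === null || typeof value !== "object") return value;
  if (seen.has(value)) return "[Circular]"; // already visited on this walk
  seen.add(value);
  if (Array.isArray(value)) return value.map((v) => sanitize(v, seen));
  const out: Record<string, unknown> = {};
  for (const [key, v] of Object.entries(value)) {
    if (key === "__proto__" || key === "constructor") continue; // strip dangerous keys
    out[key] = sanitize(v, seen);
  }
  return out;
}
```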

Information Leakage Prevention (Stack Trace Sanitization)

Stack traces can reveal sensitive information about your infrastructure. CodeCall sanitizes 40+ patterns from error messages.
File System Paths Redacted:
| Category | Examples |
| --- | --- |
| Unix home | /Users/john/, /home/deploy/ |
| System paths | /var/log/, /etc/, /tmp/ |
| App paths | /app/, /srv/, /opt/ |
| Windows | C:\Users\, D:\Projects\, UNC paths |
Package Manager Paths Redacted:
| Manager | Patterns |
| --- | --- |
| npm | node_modules/, .npm/ |
| yarn | .yarn/, yarn-cache/ |
| pnpm | .pnpm/, pnpm-store/ |
| workspace | packages/, libs/ |
Cloud/Container Paths Redacted:
| Environment | Patterns |
| --- | --- |
| Docker | /docker/, container IDs |
| Kubernetes | /var/run/secrets/, pod names |
| AWS | Lambda paths, ECS task IDs |
| GCP | Cloud Run paths, function IDs |
| Azure | Functions paths, container IDs |
CI/CD Paths Redacted:
  • GitHub Actions: /runner/, /_work/
  • GitLab CI: /builds/, CI variables
  • Jenkins: /var/jenkins/, workspace paths
  • CircleCI: /circleci/, project paths
Credentials Redacted:
Bearer [token]     → Bearer [REDACTED]
Authorization: ... → Authorization: [REDACTED]
api_key=xxx        → api_key=[REDACTED]
password=xxx       → password=[REDACTED]
Network Information Redacted:
  • Internal hostnames: *.internal, *.local
  • Private IPs: 10.x.x.x, 192.168.x.x, 172.16-31.x.x
  • Service URLs: Internal load balancers, databases
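A credential-redaction pass of the kind shown above boils down to ordered regex replacements. Here is a small sketch covering just three of the 40+ patterns; the function name and exact regexes are illustrative:

```typescript
// Tiny redaction pass. Ordering matters: the broad Authorization rule runs
// before the narrower Bearer rule so headers are redacted whole.
function redactCredentials(message: string): string {
  return message
    .replace(/Authorization:\s*.+/gi, "Authorization: [REDACTED]")
    .replace(/Bearer\s+\S+/g, "Bearer [REDACTED]")
    .replace(/\b(api_key|password)=[^\s&]+/gi, "$1=[REDACTED]");
}
```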

Example: Before and After

// Before sanitization (DANGEROUS - leaks infrastructure details)
{
  "error": {
    "message": "Cannot read property 'foo' of undefined",
    "stack": "TypeError: Cannot read property 'foo'\n    at processUser (/home/deploy/app/src/handlers/users.ts:42:15)\n    at /home/deploy/app/node_modules/@company/sdk/dist/index.js:156:23",
    "path": "/home/deploy/app/src/handlers/users.ts",
    "env": "production",
    "dbHost": "postgres.internal.company.com"
  }
}

// After sanitization (SAFE)
{
  "error": {
    "message": "Cannot read property 'foo' of undefined",
    "stack": "TypeError: Cannot read property 'foo'\n    at processUser (...)\n    at (...)",
    "code": "RUNTIME_ERROR"
  }
}

Error Categories

CodeCall categorizes all errors for safe exposure:
| Category | Code | Exposed To Client | Contains |
| --- | --- | --- | --- |
| Syntax | SYNTAX_ERROR | Message + location | Line/column of error |
| Validation | VALIDATION_ERROR | Rule that failed | Blocked construct |
| Timeout | TIMEOUT | Duration | - |
| Self-Reference | SELF_REFERENCE_BLOCKED | Tool name | - |
| Tool Not Found | TOOL_NOT_FOUND | Tool name | - |
| Tool Error | TOOL_ERROR | Sanitized message | - |
| Runtime | RUNTIME_ERROR | Sanitized message | - |
| Worker Timeout | WORKER_TIMEOUT | Duration | Worker terminated |
| Worker Memory | WORKER_MEMORY_EXCEEDED | Memory usage | Worker recycled |
| Message Flood | MESSAGE_FLOOD_ERROR | Rate limit | Worker terminated |
| Queue Full | QUEUE_FULL_ERROR | Queue size | Request rejected |
// Example error response
{
  "status": "illegal_access",
  "error": {
    "kind": "IllegalBuiltinAccess",
    "message": "Identifier 'eval' is not allowed in AgentScript"
  }
}

Security Checklist

Before deploying CodeCall to production:
  1. Choose VM Preset: Use secure for production, locked_down for sensitive data.
     vm: { preset: 'secure' }
  2. Enable Audit Logging: Monitor script execution, tool calls, and security events.
     // Subscribe to CodeCall events
     scope.events.on('codecall:*', logEvent);
  3. Configure Tool Allowlists: Limit which tools are accessible via CodeCall.
     codecall: { enabledInCodeCall: true }  // per-tool
     includeTools: (tool) => !tool.name.startsWith('admin:')  // global
  4. Remove Stack Traces: Ensure sanitization is enabled (default).
     sanitization: { removeStackTraces: true, removeFilePaths: true }
  5. Configure AI Scoring Gate: Enable rule-based scoring with appropriate thresholds.
     scoringGate: { scorer: 'rule-based', blockThreshold: 70, warnThreshold: 40 }
  6. Test Security Boundaries: Run the attack vector tests from ast-guard's security audit.

Threat Model

What CodeCall Protects Against

Code Injection

AST validation blocks eval, Function, and dynamic code execution

Sandbox Escape

Isolated vm context with no access to Node.js APIs or globals

Data Exfiltration

AI Scoring Gate detects fetch→send patterns and sensitive data access

Bulk Data Harvesting

Scoring Gate flags excessive limits and bulk operations

Prototype Pollution

Blocked at AST level and isolated at runtime

Resource Exhaustion

Timeouts, iteration limits, and tool call caps

I/O Flood Attacks

Console output size and call count limits prevent logging abuse

Information Leakage

Stack traces and file paths sanitized from outputs

Recursive Execution

Self-reference guard blocks codecall:* tool calls

VM Timeout Bypass

Worker Pool provides hard halt via worker.terminate() when VM timeout fails

What CodeCall Does NOT Protect Against

CodeCall is not a silver bullet. Defense-in-depth means combining CodeCall with other security measures.
| Threat | Mitigation |
| --- | --- |
| Tool abuse | Use enabledInCodeCall: false on sensitive tools |
| Algorithmic complexity | Scripts can run O(n²) within limits; monitor performance |
| Memory exhaustion | Large arrays/objects can fit within the timeout; set reasonable limits |
| Tool side effects | Tool calls have real effects; use read-only tools where possible |
| Business logic bugs | Script logic errors are not security issues |