
AI Agents

CodeLayers integrates with AI coding agents to give them something no other tool can: architectural awareness. Instead of grepping blindly, your agent searches symbols ranked by how many files depend on them. Instead of reading a 3,000-line file, it gets a structured skeleton using up to 90% fewer tokens. Instead of manually tracing call chains across 5 files, it finds the path in one call.

14 MCP tools. Three categories: Search & Intelligence, Visualization, and Memory.


Getting Started

```bash
# Start with Claude Code
codelayers watch /path/to/project --agent claude

# Start with Gemini CLI
codelayers watch /path/to/project --agent gemini

# Start with Codex
codelayers watch /path/to/project --agent codex
```

When you start a watch session with --agent, CodeLayers automatically:

  1. Syncs your codebase and builds the dependency graph
  2. Starts an MCP server with all 14 tools
  3. Launches the AI agent with the MCP configuration
  4. Connects to your devices for real-time 3D updates

Search & Intelligence Tools

These tools give the agent architectural context that grep and file reads can't provide. Every result is enriched with data from the dependency graph: how many files depend on it, its position in the call hierarchy, which overlays it belongs to, and whether it has uncommitted changes.

search_symbols

Find functions, classes, or variables by name. Results are ranked by architectural importance — a function in a file imported by 40 others ranks higher than one in a leaf file, even if both match the query.

| Parameter | Type | Description |
| --- | --- | --- |
| query | string | Search query — matches name, fully qualified name, return type, and parameter types |
| kind | string | Filter: "function", "class", or "variable" (optional) |
| limit | number | Max results (default: 20) |

Token-aware matching. Queries automatically split on camelCase and snake_case boundaries. Searching "buildGraph" matches build_graph_from_cache. Searching "config" matches parseConfig, config_manager, and AppConfiguration.
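A rough sketch of how token-aware matching can work (illustrative only, not the actual CodeLayers implementation): split identifiers on underscores and camelCase boundaries, then require every query token to prefix-match some symbol token.

```python
import re

def split_tokens(identifier: str) -> list[str]:
    """Split an identifier on snake_case underscores and camelCase boundaries."""
    parts = []
    for chunk in identifier.split("_"):
        # Uppercase runs (HTTP), capitalized words (Graph), lowercase words, digits.
        parts.extend(re.findall(r"[A-Z]+(?![a-z])|[A-Z]?[a-z]+|\d+", chunk))
    return [p.lower() for p in parts if p]

def matches(query: str, symbol: str) -> bool:
    """Every query token must prefix-match some token of the symbol."""
    symbol_tokens = split_tokens(symbol)
    return all(any(t.startswith(q) for t in symbol_tokens)
               for q in split_tokens(query))
```

With this scheme, `matches("buildGraph", "build_graph_from_cache")` and `matches("config", "AppConfiguration")` both hold, as described above.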

Multi-field search. The query matches against function names, fully qualified names, return types, and parameter types. Searching "Result<User>" finds every function that returns that type.

What you get back:

  • Symbol name, file, line number, and full signature
  • [CRITICAL] / [HIGH] / [MEDIUM] / [LOW] risk ranking based on dependents
  • deps:N — how many files depend on the containing file
  • complexity:N — cyclomatic complexity (for complex functions)
  • Overlay membership and uncommitted change flags

vs grep: Searching "authenticate" with grep returns 88 lines mixing function definitions, test assertions, comments, and string matches. search_symbols returns 5 ranked definitions with full signatures and architectural context.
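The [CRITICAL]/[HIGH]/[MEDIUM]/[LOW] ranking can be thought of as bucketing the dependent count. A sketch with made-up thresholds (CodeLayers' real cutoffs are not documented here):

```python
def risk_level(dependents: int) -> str:
    """Bucket a file's dependent count into a risk label.

    Threshold values are illustrative, not CodeLayers' actual cutoffs.
    """
    if dependents >= 30:
        return "CRITICAL"
    if dependents >= 10:
        return "HIGH"
    if dependents >= 3:
        return "MEDIUM"
    return "LOW"
```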

get_skeleton

Returns a file's complete structure — imports, constants, classes, and function signatures — without any function bodies. Uses up to 90% fewer tokens than reading the full file.

| Parameter | Type | Description |
| --- | --- | --- |
| path | string | File path to get skeleton for (required) |

What you get back:

  • File metadata: language, line count, risk level, dependent count
  • Every import with source module
  • Every function with full signature, visibility, line range, in_degree (how many functions call it), and complexity
  • Every class/struct with line range and in_degree
  • Every constant/variable with name and type
  • Overlay membership and uncommitted change flags

vs reading the file: A 2,777-line file becomes 120 lines of structured metadata. The agent sees every function signature and knows which ones are safe to change (in_degree: 1) vs dangerous (in_degree: 8) — all without reading a single line of implementation.

search_call_chain

Finds the shortest call path between two functions, with graph metrics at every hop.

| Parameter | Type | Description |
| --- | --- | --- |
| source | string | Source function name (the caller) |
| target | string | Target function name (the callee) |
| source_file | string | Disambiguate source by file path (optional) |
| target_file | string | Disambiguate target by file path (optional) |
| max_depth | number | Maximum hops to search (default: 10) |

What you get back: Each hop in the chain with function name, file, line number, the exact line where the call happens, and per-hop metrics (dependents, risk level).

vs manual tracing: Tracing "how does run_stdio reach search_symbols?" would require reading 3 files and performing 5 searches. search_call_chain returns the 4-hop path in one call: run_stdio → handle_request → handle_tools_call → call_search_symbols → search_symbols.

trace_type_flow

Finds all producers, consumers, and stores of a given type across the entire codebase.

| Parameter | Type | Description |
| --- | --- | --- |
| type_query | string | Type name to trace — substring matching (e.g., "DependencyGraph", "Vec<User>") |
| limit | number | Max results per category (default: 10) |

Three categories:

  • Producers — functions that return this type
  • Consumers — functions that accept this type as a parameter
  • Stores — variables/fields of this type

vs grep: Tracing how DependencyGraph flows would require 3+ separate greps with escaped angle brackets, then manually correlating results across files. trace_type_flow does it in one call with full signatures and complexity scores.
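Conceptually, the three categories fall out of substring-matching the type against each signature position. A minimal sketch (illustrative data model, not CodeLayers' internals):

```python
from dataclasses import dataclass

@dataclass
class Function:
    name: str
    params: list[str]   # parameter type strings
    returns: str = ""

def trace_type_flow(type_query: str, functions: list[Function],
                    fields: dict[str, str]) -> dict[str, list[str]]:
    """Classify symbols into producers, consumers, and stores by substring match."""
    return {
        "producers": [f.name for f in functions if type_query in f.returns],
        "consumers": [f.name for f in functions
                      if any(type_query in p for p in f.params)],
        "stores": [n for n, t in fields.items() if type_query in t],
    }
```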

trace_variable

Traces a variable across the codebase — where it's declared, what value it's assigned from, and which functions it's passed to.

| Parameter | Type | Description |
| --- | --- | --- |
| name | string | Variable name to trace (e.g., "encryption_key", "repo_id") |
| file | string | Scope to a specific file (optional) |
| limit | number | Max results (default: 20) |

Three sections:

  • Declared — variable declarations and function parameters matching the name, with types
  • Assigned from — what function calls produce the value (e.g., let key = derive_key())
  • Passed to — which functions receive this variable as an argument, with exact argument position

vs grep: Grep finds every text occurrence of "mnemonic" (20+ lines of noise). trace_variable returns only semantic declarations with their types (&Mnemonic, &str), ranked by architectural importance, with overlay membership.


Visualization Tools

These tools control what you see in the 3D graph on iPhone, iPad, and Vision Pro. The agent uses them automatically when discussing files.

create_layer

Creates a named highlight layer in the 3D visualization. The most frequently used tool — the agent calls it every time it searches, reads, or discusses files.

| Parameter | Type | Description |
| --- | --- | --- |
| name | string | Layer name (e.g., "Auth files", "Entry points") |
| paths | string[] | File paths to highlight (required) |
| color | string | Layer color (default: "yellow") |

Available colors: yellow (focus), red (problems), green (good), blue (info), purple (special)
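Under MCP, tool invocations travel as JSON-RPC `tools/call` requests. A sketch of what a create_layer call could look like on the wire, with the arguments taken from the parameter table above (the file paths are hypothetical):

```python
import json

# Standard MCP JSON-RPC envelope; the "arguments" keys follow the
# create_layer parameter table above. Paths are example values.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "create_layer",
        "arguments": {
            "name": "Auth files",
            "paths": ["src/auth.rs", "src/session.rs"],
            "color": "red",
        },
    },
}
print(json.dumps(request, indent=2))
```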

focus_file

Zooms the 3D camera to center on a specific file.

| Parameter | Type | Description |
| --- | --- | --- |
| path | string | File path to focus on (required) |
| highlight | boolean | Also highlight the file (default: true) |

show_dependencies

Draws import/dependency lines in the 3D view and returns structured JSON data.

| Parameter | Type | Description |
| --- | --- | --- |
| path | string | File path to show connections for (required) |
| incoming | boolean | Show files that import this file (default: true) |
| outgoing | boolean | Show files this file imports (default: true) |

get_blast_radius

Shows the full downstream dependency tree for one or more files — every file affected by a change, color-coded by hop distance.

| Parameter | Type | Description |
| --- | --- | --- |
| files | string[] | Source file paths (or "uncommitted" for changed files) |
| max_hops | number | Max dependency depth (default: 3) |

Returns a recursive tree with file sizes, node types, uncommitted change flags, and affected overlay membership. The 3D view automatically highlights the blast radius with colors by hop distance.
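Conceptually, a blast radius is a breadth-first traversal of reverse dependency edges that labels each file with its hop distance from the nearest source. A sketch under that assumption (toy file names):

```python
from collections import deque

def blast_radius(dependents: dict[str, list[str]], sources: list[str],
                 max_hops: int = 3) -> dict[str, int]:
    """Map each downstream file to its hop distance from the nearest source.

    dependents[f] lists the files that import f (reverse dependency edges).
    """
    dist = {s: 0 for s in sources}
    queue = deque(sources)
    while queue:
        f = queue.popleft()
        if dist[f] == max_hops:
            continue  # don't expand past the depth limit
        for dep in dependents.get(f, []):
            if dep not in dist:
                dist[dep] = dist[f] + 1
                queue.append(dep)
    return dist
```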

save_overlay / list_overlays

Persist named highlight sets as permanent overlay rings that survive across sessions. Overlays are encrypted and synced to the backend.

clear_highlights

Clears all highlights and connection lines. The agent uses this when switching topics.

change_title

Updates the session title displayed in the app.

set_visualization_mode

Changes the 3D visualization style. Modes: solar_system, city, tree, neural


Memory Tools

These tools give the agent persistent memory across sessions. Without them, every session starts from scratch — the agent re-discovers the same patterns, re-debugs the same issues, and repeats the same mistakes.

save_observation

Saves an insight for future sessions. Tagged for searchability.

| Parameter | Type | Description |
| --- | --- | --- |
| content | string | The observation text — be specific and actionable |
| tags | string[] | Tags for searchability (e.g., ["architecture", "auth"]) |

search_memory

Recalls past observations. The agent can search at the start of a session to recover context from previous work.

| Parameter | Type | Description |
| --- | --- | --- |
| query | string | Search query — matches content and tags |
| limit | number | Max results (default: 10) |

Observations are stored locally in LMDB, scoped per repository, and sorted by recency.
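The save/search pair behaves roughly like a tag-and-content index sorted by recency. A minimal in-memory sketch standing in for the LMDB store (illustrative, not the actual storage code):

```python
from dataclasses import dataclass
from itertools import count

@dataclass
class Observation:
    content: str
    tags: list[str]
    seq: int  # insertion order; stands in for a timestamp

class Memory:
    """In-memory stand-in for the per-repository observation store."""

    def __init__(self):
        self._obs: list[Observation] = []
        self._seq = count()

    def save_observation(self, content: str, tags: list[str]) -> None:
        self._obs.append(Observation(content, tags, next(self._seq)))

    def search_memory(self, query: str, limit: int = 10) -> list[str]:
        q = query.lower()
        hits = [o for o in self._obs
                if q in o.content.lower() or any(q in t.lower() for t in o.tags)]
        hits.sort(key=lambda o: o.seq, reverse=True)  # most recent first
        return [o.content for o in hits[:limit]]
```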


How the Tools Work Together

The tools form a search → understand → act → remember pipeline:

```
search_memory("auth")                           → Recall: "PBKDF2 is the bottleneck"
    ↓
search_symbols("authenticate")                  → Find: 3 definitions, top has deps:39
    ↓
get_skeleton("src/auth.rs")                     → Structure: 8 functions, lines 20-45
    ↓
search_call_chain("handle_login", "derive_key") → Path: 3 hops through high-risk code
    ↓
get_blast_radius(files: ["src/auth.rs"])        → Impact: 12 files affected
    ↓
[Agent reads only the 25 lines it needs, makes the fix]
    ↓
save_observation("Root cause was silent error swallowing in authenticate()")
```

Without these tools: The same task requires grep (100+ noisy results), reading 5+ full files (5,000+ tokens), manually tracing call chains (3 more file reads), and running blast radius separately.

With these tools: 5 lightweight calls give the agent full architectural context. It spends its token budget on problem-solving, not exploration.


Session Modes

Local Mode

You type in the terminal, and your device observes the conversation and updates the visualization.

```bash
codelayers watch . --agent claude
# Type prompts in your terminal
```

Remote Mode

You type in the app's chat panel, the CLI executes the commands. Session history is preserved when switching between modes.

Permission Modes

| Mode | Description | Best For |
| --- | --- | --- |
| Ask | Prompts for each action | Careful review of changes |
| Auto-approve edits | Auto-approves file edits | Trusted refactoring tasks |
| Bypass all | Auto-approves everything | Fully automated workflows |

Supported Agents

| Agent | Notes |
| --- | --- |
| Claude Code | Best MCP integration, persistent sessions with --session-id |
| Gemini CLI | Generates own session IDs, use --resume to continue |
| Codex | Thread-based sessions, requires ChatGPT Plus or API key |

Tips for Better Results

1. Ask Architecture Questions

The search and tracing tools shine when you ask structural questions:

  • "How does the authentication flow work?" → Agent uses search_symbols + search_call_chain
  • "What would break if I changed UserService?" → Agent uses get_blast_radius
  • "Where is the encryption key used?" → Agent uses trace_variable
  • "What functions return a DependencyGraph?" → Agent uses trace_type_flow

2. Use Color Coding

  • Red = Problems to fix
  • Green = Working correctly
  • Yellow = Currently focused on
  • Blue = Reference/context files

3. Clear Before Switching Topics

"Clear the highlights, then show me the database models"

4. Combine Tools for Deep Dives

"Get the skeleton for auth.rs, then show me the call chain from login to token validation"


Troubleshooting

Highlights Not Appearing

  1. Check your device is connected (green status indicator)
  2. Ensure the file paths match your project structure
  3. Try clearing highlights and re-highlighting

Agent Not Using Visualization Tools

The agent should proactively use tools when you ask about files, "where is...", "show me...", or dependencies. If it's not, explicitly ask: "Highlight the files you just mentioned."

Search Tools Returning Empty Results

  1. Ensure the codebase has been synced: codelayers status
  2. The LMDB cache needs at least one successful parse. Run codelayers sync . first.
  3. Check that the file language is supported (Rust, TypeScript, Python, Java, Go, C++, C#, Ruby, PHP, Swift)