> **Warning**
> This is early experimental work — use it at your own risk! The API and features may change without notice.
A Claude Code plugin marketplace that extends Claude with local capabilities. The first skill lets Claude delegate codebase exploration to local Ollama models.
Add this marketplace to Claude Code:
```
/plugin marketplace add IsmaelMartinez/local-brain
```

Then install the plugin:

```
/plugin install local-brain@local-brain-marketplace
```

Delegate codebase exploration to local Ollama models. Claude offloads read-only tasks to your machine—no cloud round-trips, full privacy.
```
┌─────────────┐    delegates    ┌─────────────┐      calls      ┌─────────┐
│ Claude Code │ ───────────────►│ Local Brain │ ───────────────►│ Ollama  │
│   (Cloud)   │                 │ (Smolagents)│                 │ (Local) │
│             │◄─────────────── │             │◄─────────────── │         │
└─────────────┘     returns     └─────────────┘    responds     └─────────┘
                    results     with code execution
```
What Claude can delegate:
- "Review the code changes"
- "Explain how the auth module works"
- "Generate a commit message"
- "Find all TODO comments"
This repo follows the Claude Code plugin structure:
```
local-brain/                      # MARKETPLACE ROOT
├── .claude-plugin/
│   └── marketplace.json          # Marketplace manifest
└── local-brain/                  # PLUGIN
    ├── plugin.json               # Plugin manifest
    └── skills/
        └── local-brain/
            └── SKILL.md          # Skill documentation
```
- Install the CLI:

  ```
  uv pip install local-brain
  ```

  Or with pipx:

  ```
  pipx install local-brain
  ```

- Install Ollama from ollama.ai and pull a model:

  ```
  ollama pull qwen3
  ```

Try it out:

```
local-brain "What files changed recently?"
local-brain "Review the code in src/"
local-brain "Generate a commit message"
local-brain "Explain how auth works"
```

All CLI options:

```
local-brain "prompt"                          # Ask anything (auto-selects best model)
local-brain -v "prompt"                       # Verbose (show tool calls)
local-brain -m qwen2.5-coder:7b "prompt"      # Specific model
local-brain --list-models                     # Show available models
local-brain --root /path/to/project "prompt"  # Set project root
```

Local Brain automatically detects installed Ollama models and picks the best one:
```
local-brain --list-models
```

Recommended models:

- `qwen3:latest` — General purpose (default)
- `qwen2.5-coder:7b` — Code-focused
- `llama3.2:3b` — Fast, lightweight
- `mistral:7b` — Balanced
Custom read-only tools registered with Smolagents' `@tool` decorator:

| Tool | What it does |
|---|---|
| `read_file` | Read file contents (path-jailed) |
| `list_directory` | List files with glob patterns (path-jailed) |
| `file_info` | Get file metadata (path-jailed) |
| `git_diff` | Show git changes (staged or unstaged) |
| `git_status` | Show current branch and changes |
| `git_log` | View recent commit history |
| `git_changed_files` | List modified/staged files |
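Each file tool validates its path before touching disk. A minimal stdlib-only sketch of the path-jailing pattern (the jail root, error messages, and blocklist here are illustrative assumptions, not the project's exact implementation):

```python
from pathlib import Path

# Assumption for illustration: the jail root is the current project directory.
PROJECT_ROOT = Path.cwd().resolve()

def read_file(path: str) -> str:
    """Read a file, refusing anything outside the project root (path jailing)."""
    resolved = (PROJECT_ROOT / path).resolve()
    # Reject paths that escape the root via ".." or symlinks
    if not resolved.is_relative_to(PROJECT_ROOT):
        raise PermissionError(f"Access outside project root blocked: {path}")
    # Reject obviously sensitive files (illustrative blocklist)
    if resolved.name == ".env" or resolved.suffix in {".pem", ".key"}:
        raise PermissionError(f"Sensitive file blocked: {path}")
    return resolved.read_text()
```

Resolving before checking is the key step: a relative path like `../../etc/passwd` is normalized first, so the containment check cannot be tricked by `..` segments.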
Local Brain uses Smolagents as the agent framework:
```
local_brain/
├── __init__.py      # Version
├── cli.py           # Click CLI entry point
├── models.py        # Ollama model discovery & selection
├── security.py      # Path jailing utilities
└── smolagent.py     # CodeAgent + custom tools
```
What comes from Smolagents:
- `CodeAgent` — Agent that executes tasks via code generation
- `LiteLLMModel` — Connects to Ollama via LiteLLM
- `@tool` decorator — Registers our custom tools with the agent
What we implement:
- All 7 tools (`read_file`, `git_diff`, etc.) — our code, registered via `@tool`
- Path-jailing security — restricts file access to the project root
- Model discovery — detects installed Ollama models
Two-layer security model:
- **Tool layer** — Our pre-defined tools are trusted code:
  - ✅ Read files within project directory (path-jailed)
  - ✅ Execute git commands (read-only via subprocess)
  - ❌ File I/O outside project root blocked
  - ❌ Sensitive files blocked (`.env`, keys)
- **LLM sandbox** — Code generated by the LLM runs in `LocalPythonExecutor`:
  - ❌ Cannot import `subprocess`, `socket`, `os.system`, etc.
  - ❌ Cannot access the network directly
  - ✅ Can only call our pre-defined tools
The LLM writes Python code that calls our tools—it cannot bypass them to run arbitrary shell commands.
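The import restriction can be sketched in simplified form with an AST check (this is only an illustration of the idea; Smolagents' `LocalPythonExecutor` performs a far more thorough analysis):

```python
import ast

# Simplified illustration of import blocking, not the actual sandbox.
BLOCKED_MODULES = {"subprocess", "socket", "os"}

def is_code_allowed(source: str) -> bool:
    """Return False if the code imports any blocked module."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            # Catches "import subprocess" and "import os.path"
            if any(alias.name.split(".")[0] in BLOCKED_MODULES for alias in node.names):
                return False
        elif isinstance(node, ast.ImportFrom):
            # Catches "from os import system"
            if node.module and node.module.split(".")[0] in BLOCKED_MODULES:
                return False
    return True
```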
**Why no web access?** Claude Code already has web access—delegate web research to Claude, local codebase work to Local Brain. This separation prevents data exfiltration and prompt injection from fetched content.
- MCP Bridge — Ollama ↔ Model Context Protocol bridge when MCP adoption increases
- Docker Sandbox — Stronger isolation via container when Docker is available
- CLI Wrappers — Wrap existing tools (ripgrep, gh, tokei) instead of custom implementations
- Observability — Add tracing and logging for debugging agent behavior
See `docs/adrs/` for Architecture Decision Records:
- ADR-001 — Why custom implementation over frameworks
- ADR-002 — Why Smolagents for code execution
- ADR-003 — Why no web tools
Want to add a plugin to this marketplace?
- Create a new directory at the root:
  ```
  your-plugin/
  ├── plugin.json
  └── skills/
      └── your-skill/
          └── SKILL.md
  ```
- Register it in `.claude-plugin/marketplace.json`:

  ```json
  {
    "plugins": [
      { "name": "local-brain", "source": "./local-brain", "description": "..." },
      { "name": "your-plugin", "source": "./your-plugin", "description": "..." }
    ]
  }
  ```

See the Claude Code plugin docs for full specifications.
```
git clone https://github.com/IsmaelMartinez/local-brain.git
cd local-brain
uv sync
uv run local-brain "Hello!"
```

Note: Requires Python 3.10-3.13 (grpcio build issue with 3.14).

macOS grpcio installation error? If you see compilation errors for grpcio, force installation of pre-built wheels:

```
uv pip install --only-binary :all: grpcio
```

Run the tests:

```
uv run pytest tests/ -v
```

In Claude Code:

```
/plugin marketplace add ./path/to/local-brain
/plugin install local-brain@local-brain-marketplace
```

MIT