# AgentHub: Lean MCP Orchestrator for Multi-Agent Coordination
AgentHub is a local-only Model Context Protocol (MCP) server that acts as a central nervous system for AI coding agents. It allows multiple agents (Claude Code, Cursor, VS Code, Gemini CLI, etc.) to work on the same codebase simultaneously without stepping on each other's toes.
When multiple AI agents (or an agent and a human) edit the same files, chaos ensues. File locks are too rigid, and "hope for the best" leads to lost work.
AgentHub introduces a Soft Locking Protocol based on Intents:
- Declare: An agent says "I intend to edit `src/auth/*.ts`".
- Coordinate: The hub checks for conflicts. If clear, other agents see the intent.
- Execute: Agent does the work.
- Review: Changes can be routed to a "reviewer" agent.
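The Coordinate step above boils down to a conflict check between held glob patterns and a candidate's paths. Here is a minimal, illustrative sketch of how such a check could work, assuming a simple glob matcher; it is not AgentHub's actual source:

```typescript
// Illustrative soft-lock conflict check (an assumption about how
// "Coordinate" could work, not AgentHub's real implementation).
type Mode = "R" | "W";
type Intent = { agent: string; paths: string[]; mode: Mode };

// Convert a simple glob to a RegExp: '*' matches within one path segment,
// '**' matches across segments.
function globToRegExp(glob: string): RegExp {
  const escaped = glob
    .replace(/[.+^${}()|[\]\\]/g, "\\$&") // escape regex metacharacters
    .replace(/\*\*/g, "\u0000")           // placeholder for '**'
    .replace(/\*/g, "[^/]*")
    .replace(/\u0000/g, ".*");
  return new RegExp(`^${escaped}$`);
}

// A candidate intent conflicts when either side wants to write and one of
// its concrete paths matches a pattern already held by an open intent.
function hasConflict(openIntents: Intent[], candidate: Intent): boolean {
  return openIntents.some(
    (held) =>
      (held.mode === "W" || candidate.mode === "W") &&
      candidate.paths.some((p) => held.paths.some((pat) => globToRegExp(pat).test(p)))
  );
}

const openIntents: Intent[] = [{ agent: "claude", paths: ["src/auth/*.ts"], mode: "W" }];
console.log(hasConflict(openIntents, { agent: "cursor", paths: ["src/auth/login.ts"], mode: "W" })); // true
console.log(hasConflict(openIntents, { agent: "cursor", paths: ["src/ui/app.ts"], mode: "R" }));     // false
```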
It's not just a lock server; it's a communication bus and expert escalation system (GPT-5 Pro) wrapped in a single, token-efficient MCP tool (`hub_op`).
- Changelog: See what's new in the latest version.
- Contributing Guide: How to build and extend AgentHub.
- Claude Developer Guide: Technical deep-dive for Claude users.
- Gemini Developer Guide: Technical deep-dive for Gemini users.
- Report Bug: Found an issue? Let us know.
- Intent Coordination: Prevent race conditions with semantic file locks (`i.open`).
- Message Bus: Real-time communication between agents (`m.send`, `m.pull`).
- Code Review Workflow: Built-in lifecycle for requesting and claiming reviews (`review.request`).
- Expert Escalation: Async integration with Azure OpenAI (GPT-5 Pro) for complex architectural tasks (`expert.request`).
- State Persistence: Resilient in-memory state that survives restarts.
- TUI Dashboard: Beautiful terminal interface to monitor the swarm.
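To give a feel for the Message Bus semantics, here is a minimal, hypothetical sketch: the cursor-based `pull(since)` behavior is assumed from the documented `since` field, and the broadcast convention (`"*"`) is an illustrative assumption, not AgentHub's implementation:

```typescript
// Hypothetical message-bus sketch mirroring m.send / m.pull semantics.
// Messages get a monotonically increasing sequence number; pull(since)
// returns everything after that cursor addressed to the agent (or to all).
type Msg = { seq: number; to: string; text: string };

class MessageBus {
  private log: Msg[] = [];
  private seq = 0;

  send(to: string, text: string): number {
    this.log.push({ seq: ++this.seq, to, text });
    return this.seq;
  }

  pull(agent: string, since: number): Msg[] {
    // "*" is assumed here as a broadcast address.
    return this.log.filter((m) => m.seq > since && (m.to === agent || m.to === "*"));
  }
}

const bus = new MessageBus();
bus.send("reviewer", "PR ready in src/feature/");
bus.send("*", "build passed"); // broadcast
console.log(bus.pull("reviewer", 0).length); // 2
```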
- Node.js >= 22.0.0
- npm or yarn
```bash
git clone https://github.com/propstreet/agenthub.git
cd agenthub
npm install
npm run build
```

Copy the example configuration:

```bash
cp .env.example .env
```

Edit `.env` to set your preferences:

```ini
PORT=3333

# Optional: Enable filesystem watching to detect "rogue" writes (outside intents)
WATCH_ROOT=/absolute/path/to/your/project

# Optional: Enable persistence
PERSISTENCE_ENABLED=true

# Optional: Azure OpenAI (Expert System)
AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com
AZURE_OPENAI_API_KEY=your-api-key
AZURE_OPENAI_DEPLOYMENT=gpt-5-pro
```

Start the server:

```bash
npm start
```

Launch the TUI dashboard:

```bash
npm run dashboard
```

Use the dashboard to monitor active agents, intents, and system events.
- Keys `1`-`5`: Zoom into specific panels (Agents, Intents, Reviews, Expert, Logs).
- `C`: Clean up disconnected agents.
- `B`: Broadcast a message to all agents.
- `P`: Pause/resume auto-refresh.
Add AgentHub to your MCP client configuration.
Add to `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "agenthub": {
      "url": "http://localhost:3333/mcp"
    }
  }
}
```

For clients that use an `mcp.servers` array:

```json
{
  "mcp.servers": [
    {
      "name": "agenthub",
      "url": "http://localhost:3333/mcp"
    }
  ]
}
```

Add to `settings.json`:

```json
{
  "mcpServers": {
    "agenthub": {
      "httpUrl": "http://localhost:3333/mcp"
    }
  }
}
```

How agents interact with AgentHub:
- Registration: The agent connects and registers its role (e.g., `coder`, `reviewer`): `a.register`.
- Declaration: Before editing, the agent declares an intent: `i.open(paths=['src/feature/*.ts'], mode='W')`.
- Approval: The hub checks for conflicts. If another agent already holds `src/feature/*.ts`, the intent is rejected or put to a vote: `i.vote(intentId, vote='approve')`.
- Execution: The agent performs the work.
- Completion: The agent closes the intent: `i.close(id, status='ok')`. Note: if the mode was `'W'` (Write), a code review job is automatically created.
- Review: The agent requests a review from a human or another agent: `review.request(scope=['src/feature/*.ts'])`.
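The workflow above can be sketched as a sequence of `hub_op` calls. The op names come from this document; the stub dispatcher and its return payloads are invented for illustration, since a real agent sends these operations to the hub over MCP:

```typescript
// Stubbed walkthrough of the agent lifecycle. Op names match the docs;
// the return shapes here are illustrative assumptions, not the real API.
type OpCall = { op: string; [key: string]: unknown };
type OpResult = Record<string, unknown>;

function hubOp(call: OpCall): OpResult {
  switch (call.op) {
    case "a.register":
      return { agentId: "agent-1", role: call.role };
    case "i.open":
      return { intentId: "intent-1", status: "approved" };
    case "i.close":
      // Per the docs, closing a 'W' intent auto-creates a review job.
      return { status: call.status, reviewCreated: true };
    case "review.request":
      return { reviewId: "review-1" };
    default:
      throw new Error(`unknown op: ${call.op}`);
  }
}

const me = hubOp({ op: "a.register", role: "coder" });
const intent = hubOp({ op: "i.open", paths: ["src/feature/*.ts"], mode: "W" });
// ... perform the edits under the open intent ...
const closed = hubOp({ op: "i.close", id: intent.intentId, status: "ok" });
const review = hubOp({ op: "review.request", scope: ["src/feature/*.ts"] });
console.log(me.role, intent.status, closed.reviewCreated, review.reviewId);
```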
AgentHub exposes a single tool, `hub_op`, that handles all operations. This minimizes token usage and context-window clutter.
| Operation | Description | Key Fields |
|---|---|---|
| `a.register` | Register an agent presence | `role` |
| `i.open` | Declare intent to work | `paths`, `mode` (R/W/B/T), `ttlMs` |
| `i.close` | Finish work | `id`, `status` |
| `m.send` | Send message | `to`, `text` |
| `m.pull` | Get messages | `since` |
| `review.request` | Request code review | `scope`, `summary` |
| `expert.request` | Ask GPT-5 Pro (async) | `question`, `paths` |
Tip: Run `s.help` via any agent to get the full, self-documenting API reference with examples.
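On the wire, each operation in the table is one MCP `tools/call` against `hub_op` at the HTTP endpoint configured above. The JSON-RPC framing below follows the MCP specification; the `op` argument name and its sibling fields are an assumption based on the table, not a confirmed schema:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "hub_op",
    "arguments": { "op": "i.open", "paths": ["src/feature/*.ts"], "mode": "W" }
  }
}
```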
The system is built on a clean, modular architecture:
- Server: Express-based MCP server supporting SSE and HTTP transports.
- StateCache: In-memory source of truth, persisted to JSON.
- Coordinator: Handles the "Two-Phase Commit" logic for intents.
- ExpertWorker: Background worker for managing long-running LLM tasks.
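The StateCache idea can be sketched as an in-memory map that snapshots to JSON on every write, so a restarted process reloads where it left off. The class shape, key names, and file layout below are illustrative assumptions, not AgentHub's actual schema:

```typescript
import { readFileSync, writeFileSync, existsSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

// Minimal persistence sketch: not AgentHub's actual StateCache.
class StateCache {
  private state: Record<string, unknown> = {};

  constructor(private file: string) {
    if (existsSync(file)) {
      // Reload the last snapshot so state survives a restart.
      this.state = JSON.parse(readFileSync(file, "utf8"));
    }
  }

  set(key: string, value: unknown): void {
    this.state[key] = value;
    // Snapshot on every write; a real implementation might batch or debounce.
    writeFileSync(this.file, JSON.stringify(this.state, null, 2));
  }

  get(key: string): unknown {
    return this.state[key];
  }
}

const file = join(tmpdir(), "agenthub-state-demo.json");
const cache = new StateCache(file);
cache.set("intent:1", { paths: ["src/auth/*.ts"], mode: "W" });

// Simulate a restart: a fresh instance reads the same snapshot.
const reloaded = new StateCache(file);
console.log((reloaded.get("intent:1") as { mode: string }).mode); // "W"
```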
We welcome contributions!
```bash
# Run tests (interactive)
npm test

# Run tests (CI / single run)
npm run test:run

# Run linter & typecheck
npm run check
```

See CONTRIBUTING.md for detailed guidelines.
MIT © Propstreet