# Hive Core

*Open-source framework for AI executor agents*
AI assistants are great at thinking. They're terrible at doing.
Ask Claude to analyze your codebase? Brilliant insights. Ask Claude to actually refactor 50 files? It needs help.
Closing the gap between AI intelligence and AI action takes execution infrastructure. That's what Hive Core provides.
## The problem with AI agents
Everyone's building AI agents. Most approaches are fragile:
- Single-threaded scripts that crash and lose state
- No queue management for parallel work
- Hardcoded tool integrations
- No visibility into what the agent is doing
Production AI agents need production infrastructure.
## Hive Core is that infrastructure
```ruby
# Gemfile
gem 'hive-core'
```
Hive Core gives you:
- **Worker pools** — Spawn multiple AI workers. Auto-scale based on queue depth. Graceful shutdown.
- **Queue management** — Works with Solid Queue, Sidekiq, or in-memory. Retry logic. Dead-letter handling.
- **Tool connections** — MCP client for external tools. MCP server for exposing your own.
- **Execution logging** — Every action tracked. Full audit trail.
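The retry-plus-dead-letter behavior can be sketched in plain Ruby. This is a minimal illustration of the semantics, not Hive Core's actual queue internals:

```ruby
# Minimal sketch of retry-with-dead-letter semantics (illustrative only;
# Hive Core's real queue adapters handle this for you).
class TinyQueue
  MAX_RETRIES = 3

  attr_reader :dead_letters

  def initialize
    @dead_letters = []
  end

  # Run the task, retrying on failure; after MAX_RETRIES the task is
  # parked in the dead-letter list for inspection instead of being lost.
  def process(task)
    attempts = 0
    begin
      attempts += 1
      task.call
      :ok
    rescue StandardError => e
      retry if attempts < MAX_RETRIES
      @dead_letters << { task: task, error: e.message, attempts: attempts }
      :dead
    end
  end
end
```

The point of the dead-letter list is that a permanently failing task stays inspectable instead of silently disappearing after its last retry.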
## The think/plan/act loop
AI agents need structure. Hive Core provides it:
```ruby
class MyExecutor < HiveCore::Executor
  def execute(task)
    # Think: analyze the task
    analysis = think(task)

    # Plan: break it into steps
    steps = plan(analysis)

    # Act: execute each step
    steps.each do |step|
      result = act(step)
      checkpoint(result) # save progress
    end
  end
end
```
Each phase is explicit. Progress is checkpointed. Failures can resume.
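Checkpointing is what makes resume possible: completed steps are recorded, so a re-run after a crash skips work that already succeeded. A stripped-down sketch of the idea (hypothetical, outside the real `Executor` class):

```ruby
# Sketch of checkpointed execution: completed step ids are recorded,
# so a re-run after a failure skips steps that already succeeded.
class CheckpointedRun
  attr_reader :log

  def initialize
    @done = []   # stands in for durable checkpoint storage
    @log = []    # records which actions actually ran
  end

  def run(steps)
    steps.each do |id, action|
      next if @done.include?(id)  # resume: skip completed steps
      action.call
      @log << id
      @done << id                 # checkpoint after each step
    end
  end
end
```

On resume, step `:a` below runs exactly once even though the first attempt crashed at `:b`.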
## Worker lifecycle
```shell
# Start a worker pool
hive start --workers 4

# Check status
hive status
# => 4 workers running, 12 tasks pending, 847 completed

# Graceful shutdown
hive stop
```
Workers pick up tasks from the queue. Execute them. Report results. Simple.
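That loop is conceptually tiny. A pure-Ruby sketch with an in-memory queue and a nil shutdown sentinel (real Hive Core workers add signal handling, logging, and retries on top):

```ruby
# Conceptual worker loop: pop a task, execute it, report the result.
# A nil sentinel on the queue signals graceful shutdown.
def run_worker(queue, results)
  while (task = queue.pop)   # pop blocks until a task arrives; nil stops the loop
    results << task.call
  end
end

queue = Queue.new
results = Queue.new
[-> { 1 + 1 }, -> { "done" }].each { |t| queue << t }
queue << nil                 # graceful shutdown signal

worker = Thread.new { run_worker(queue, results) }
worker.join
```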
Auto-scaling built in:
```ruby
HiveCore.configure do |config|
  config.min_workers = 2
  config.max_workers = 10
  config.scale_threshold = 100 # tasks per worker
end
```
Queue backs up? More workers spawn. Queue drains? Workers scale down.
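With those settings, the target pool size is essentially a clamped ratio of queue depth to `scale_threshold`. A sketch of the arithmetic (not necessarily Hive Core's exact algorithm):

```ruby
# Desired worker count: one worker per `threshold` pending tasks,
# clamped to the configured min/max bounds.
def desired_workers(pending, min: 2, max: 10, threshold: 100)
  (pending.to_f / threshold).ceil.clamp(min, max)
end
```

So an empty queue keeps the minimum pool warm, 450 pending tasks call for 5 workers, and a huge backlog still caps out at `max_workers`.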
## MCP integration
Hive Core speaks MCP (Model Context Protocol) natively.
**As a client** — Connect to external MCP servers. Use their tools.
```ruby
client = HiveCore::MCP::Client.new("http://dendrite.localhost/mcp")
result = client.call("dendrite_search", query: "authentication")
```
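On the wire, an MCP tool invocation is a JSON-RPC 2.0 `tools/call` request, per the Model Context Protocol specification. Roughly what a call like the one above serializes to (the exact request the client builds is an implementation detail):

```ruby
require "json"

# JSON-RPC 2.0 request body for an MCP tool invocation, as defined
# by the Model Context Protocol specification.
request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "dendrite_search",
    arguments: { query: "authentication" }
  }
}

body = JSON.generate(request)
```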
**As a server** — Expose your executor's capabilities.
```ruby
class MyExecutor < HiveCore::Executor
  mcp_tool :analyze_code do |params|
    # Your implementation
  end

  mcp_tool :refactor_file do |params|
    # Your implementation
  end
end
```

```shell
# Start MCP server
hive mcp --port 3100
```
Now Claude Code can use your custom tools directly.
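One way a class-level DSL like `mcp_tool` can be implemented is a registry of named blocks plus a dispatcher. A hypothetical sketch of the pattern, not Hive Core's source:

```ruby
# Hypothetical sketch of an mcp_tool-style DSL: class-level
# registration of named handler blocks, dispatched by tool name.
module McpToolDsl
  def self.included(base)
    base.extend(ClassMethods)
  end

  module ClassMethods
    # Register a named tool handler at class-definition time.
    def mcp_tool(name, &block)
      (@mcp_tools ||= {})[name] = block
    end

    def mcp_tools
      @mcp_tools || {}
    end
  end

  # Dispatch an incoming tool call to the registered handler.
  def call_tool(name, params)
    self.class.mcp_tools.fetch(name).call(params)
  end
end

class Demo
  include McpToolDsl

  mcp_tool(:echo) { |params| params[:text] }
end
```

The registry lives on the class, so an MCP server can enumerate `mcp_tools` to advertise what the executor exposes.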
## Integration with Synapse
Hive Core powers Synapse's AI executives.
When you ask FORGE to refactor a module:
- Synapse creates a task
- Hive Core picks it up
- The executor plans the refactoring
- Each file change is a checkpoint
- Results flow back to Synapse
The AI executive thinks. Hive Core executes.
## Why open source?
AI execution infrastructure should be shared.
Everyone building AI agents faces the same problems. Worker management. Queue handling. Tool integration. Progress tracking.
Hive Core is MIT licensed. Use it. Extend it. Contribute back.
```shell
gem install hive-core
```
Build your own AI agents with production-grade infrastructure.
## What's next
Hive Core is the foundation. On top of it:
- Specialized executors for common tasks
- Pre-built tool libraries
- Visual workflow builders
- Multi-agent coordination
The age of AI agents is here. Let's build the infrastructure together.
— Andres