OpenClaw Router is a core execution-governance component for hybrid AI systems.
It is being built around a simple idea: AI systems need an execution layer that decides, in a controlled way, when work should stay on local models, move to private models, or escalate to public models.
Today, most AI products still route requests crudely:
- everything goes to the same expensive model
- local models are underused
- sensitive work is mixed with non-sensitive work
- teams cannot clearly explain why a certain model was chosen
- quality drops when people try to save cost with manual switching
OpenClaw Router is meant to solve that layer: safe, cost-aware, auditable execution across local, private, and public AI.
This repository currently contains one working component inside that broader vision: a local-first routing plugin for OpenClaw.
The project is led by an AI-native builder and product-oriented founder who sets overall direction. It is also an open signal for conversations with aligned investors and project partners who want to help build the execution-governance layer for hybrid AI systems.
If you are building a serious AI product, model choice is no longer just an engineering detail. It becomes an execution policy problem.
You need a system that can answer questions like:
- which requests should stay local for cost and privacy reasons
- which requests need stronger reasoning or coding capability
- when a failure should trigger escalation
- how to review and improve routing behavior over time
That is the role OpenClaw Router is designed to play.
At a high level, OpenClaw Router acts like a traffic controller for AI execution. It looks at each request, estimates what kind of capability it needs, and decides whether it should stay on a private model path or escalate to a stronger public model path.
The goal is better system behavior:
- easy work should stay cheap
- sensitive work should stay controlled
- hard work should still reach strong models
- every decision should be reviewable later
At the abstract architecture level, OpenClaw Router sits between the application layer and the model layer.
It does not replace your model backends. It sits in front of them and decides which path a request should take.
In practical terms:
- the user or application sends a request into the router layer
- the router evaluates the request before model execution is selected
- the router then sends that request to the most appropriate execution path
- that path may be a private model path, including local deployments, or a public model path
So the router's role is not "being another model". Its role is governing access to multiple model backends.
In the current repository, this governance layer is implemented as an OpenClaw plugin. In the broader system view, it should be understood as a routing layer in front of execution backends.
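To make the "router in front of backends" idea concrete, here is a minimal TypeScript sketch. The backend names, the cost ordering, and the selection rule are illustrative assumptions, not the plugin's actual implementation:

```typescript
// Conceptual sketch: the router is not a model; it selects among backends.
// Backend names and the selection rule are illustrative only.
interface ModelBackend {
  name: string;
  domain: "local" | "private" | "public";
}

const backends: ModelBackend[] = [
  { name: "local-small", domain: "local" },
  { name: "private-medium", domain: "private" },
  { name: "public-strong", domain: "public" },
];

// Pick the cheapest available backend whose domain is allowed for this request.
function selectBackend(allowed: Array<ModelBackend["domain"]>): ModelBackend {
  const costOrder: Array<ModelBackend["domain"]> = ["local", "private", "public"];
  for (const domain of costOrder) {
    if (allowed.includes(domain)) {
      const backend = backends.find((b) => b.domain === domain);
      if (backend) return backend;
    }
  }
  return backends[backends.length - 1]; // fall back to the strongest path
}
```

The key design point is that the application only talks to `selectBackend`; which concrete model serves the request stays a governed, swappable detail.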
This repository is not the full governance stack yet. Right now it contains one key component:
- `router-plugin/`: the main TypeScript plugin that runs inside OpenClaw
- `docs/`: supporting documentation, examples, roadmap, and API contract
- `skills/router-rule-tuner/`: a local skill for analyzing router decisions and tuning rules
- `.github/`: CI and repository workflow templates
- Classifies prompts into execution tiers
- Keeps simple, low-risk work on low-cost paths when possible
- Escalates code, debugging, complex analysis, and reasoning work to stronger tiers
- Supports local-first routing instead of API-first routing
- Tracks retries and session depth so the system can recover from weak routing
- Logs decisions for auditability and later optimization
- Learns from historical routing data to improve future routing
- Exposes `/router`, `/router stats`, and `/router learn`
OpenClaw Router is especially relevant for:
- teams building hybrid AI systems
- founders who want cost control without quality collapse
- operators running a mix of local, private, and external models
- AI product builders who need clearer execution policies
- anyone who wants an inspectable routing layer instead of opaque model selection
- investors and partners tracking infrastructure for hybrid AI deployment
```mermaid
flowchart LR
    A["User or App"] --> B["OpenClaw Router<br/>execution-governance layer"]
    B --> C["Private Models<br/>including local deployments"]
    B --> D["Public Models<br/>different providers, costs, and capabilities"]
```
The detailed routing logic is simple in concept:
- inspect the request
- decide the right capability tier
- route to the right model path
- record the decision for later review and tuning
The plugin does not think in terms of one model being “best”. It thinks in terms of matching work to the right execution level.
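The inspect-classify-route-record loop can be sketched as follows. The tier names match the five routing tiers defined in this README, but the matching rules below are illustrative placeholders, not the plugin's real classification logic:

```typescript
// Hypothetical sketch of the inspect -> classify -> route -> record loop.
// The keyword rules are placeholders; the real plugin uses rule-based
// classification plus TF-IDF-assisted scoring.
type Tier =
  | "efficient-response"
  | "cost-optimized-solving"
  | "custom-policy"
  | "control-orchestration"
  | "assured-intelligence";

interface Decision {
  tier: Tier;
  reason: string;
  timestamp: number;
}

const decisionLog: Decision[] = [];

function classify(prompt: string): Tier {
  const p = prompt.toLowerCase();
  if (/(debug|refactor|implement|typescript)/.test(p)) {
    return "assured-intelligence"; // code work goes to the code path
  }
  if (/(prove|derive|theorem)/.test(p)) {
    return "control-orchestration"; // formal reasoning path
  }
  if (/(strategy|architecture|compare|risks?)/.test(p)) {
    return "custom-policy"; // complex, multi-constraint work
  }
  if (p.length < 120) {
    return "efficient-response"; // short text tasks stay local
  }
  return "cost-optimized-solving"; // balanced default path
}

function route(prompt: string): Decision {
  const tier = classify(prompt);
  const decision: Decision = {
    tier,
    reason: `matched rules for ${tier}`,
    timestamp: Date.now(),
  };
  decisionLog.push(decision); // record the decision for later review
  return decision;
}
```

Everything else in the plugin (retry tracking, session depth, learning) refines this basic loop rather than replacing it.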
Current routing tiers:
| Tier | Plain-English meaning | Typical use | Typical execution mapping |
|---|---|---|---|
| `efficient-response` | Fast and cheap execution | short text tasks, summaries, rewrites, translation | Local-first |
| `cost-optimized-solving` | Balanced default path | general requests that need more than a tiny local model | Local or private, depending on available capacity |
| `custom-policy` | Complex task path | strategy, architecture, multi-constraint analysis | Private-first, public when needed |
| `control-orchestration` | Formal reasoning path | proof-style reasoning, derivations, structured decision logic | Private or public |
| `assured-intelligence` | High-confidence code path | coding, debugging, refactoring, implementation tasks | Strong private or public code model |
In other words, the five tiers are not just capability labels. They are also a practical bridge between three execution domains:
- local: cheapest, fastest, and most private for lightweight tasks
- private: controlled internal deployment for stronger but still governed execution
- public: selective escalation for tasks that need the strongest external capability
In practice, this means a casual prompt and a high-stakes coding task do not need to hit the same model.
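The tier-to-domain bridge can be expressed as a simple preference table. This mapping mirrors the table above; treating it as an ordered list of fallbacks is an illustrative design sketch, not the plugin's verified data structure:

```typescript
// Illustrative mapping from the five routing tiers to the three execution
// domains. Array order expresses preference: try earlier domains first.
type Domain = "local" | "private" | "public";

const tierDomains: Record<string, Domain[]> = {
  "efficient-response": ["local"],
  "cost-optimized-solving": ["local", "private"],
  "custom-policy": ["private", "public"],
  "control-orchestration": ["private", "public"],
  "assured-intelligence": ["private", "public"],
};
```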
This project is building toward an execution-governance layer with a few important properties:
- Local-first by design: cheap work stays local when it can.
- Policy-oriented: routing is driven by explicit rules and thresholds, not hidden provider behavior.
- Auditable: decisions can be logged, reviewed, and tuned later.
- Adaptive: the system can learn from real traffic instead of staying static.
- Safe by default: first install starts in `suggest` mode rather than immediately overriding models.
Imagine the same system receives these three requests:
- “Summarize this paragraph.”
- “Compare two product strategies and list risks.”
- “Debug this TypeScript function and refactor it.”
With OpenClaw Router:
- the summary can stay on a fast local model
- the strategy comparison can go to a stronger planning tier
- the coding task can escalate to a code-focused tier
That is the core value: better model allocation, lower waste, and clearer system behavior.
OpenClaw Router supports three operating modes:
- `suggest`: the safest first-install mode; it recommends a route but does not override the active model
- `override`: it actively applies routing decisions
- `respect-explicit`: it routes normally, except for agents that already have an explicitly assigned primary model
The repository ships in a safe first-install posture: routing defaults to `suggest`, and real model overrides stay disabled until the user explicitly configures all tier models for their own environment.
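A plugin `config` block that selects the operating mode and tier models might look like the following. The `tiers` key is implied by the `router.config.tiers.*` reference later in this README, but the `mode` key name and the model identifiers here are assumptions for illustration, not a verified schema:

```json
{
  "mode": "suggest",
  "tiers": {
    "efficient-response": "local/small-model",
    "assured-intelligence": "provider/strong-code-model"
  }
}
```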
One of the most important parts of the project is that routing decisions become system data. The router can:
- write structured decision logs
- track retry behavior
- track conversation depth
- surface routing patterns in `/router stats`
- learn weight adjustments from historical behavior through `/router learn`
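As a sketch of what "decisions become system data" can mean, here is a hypothetical shape for one structured log entry. The field names are assumptions chosen to match the behaviors this README describes (tiers, modes, retries, session depth), not the plugin's actual log schema:

```typescript
// Hypothetical shape of one structured router decision-log entry.
// Field names are illustrative, not the plugin's real schema.
interface RouterLogEntry {
  timestamp: string;      // ISO-8601 time of the decision
  tier: string;           // tier chosen by the classifier
  mode: "suggest" | "override" | "respect-explicit";
  retries: number;        // retry count seen so far in this session
  sessionDepth: number;   // how deep the conversation is
  escalated: boolean;     // whether this decision moved work up a tier
}

const entry: RouterLogEntry = {
  timestamp: new Date(0).toISOString(),
  tier: "assured-intelligence",
  mode: "suggest",
  retries: 1,
  sessionDepth: 4,
  escalated: true,
};
```

Entries like this are what make `/router stats` summaries and `/router learn` weight adjustments possible: both read the same log stream.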
Current repository version: 0.4.1
What is already present:
- routing tiers and rule-based classification
- session-aware routing
- retry-aware escalation
- structured logging
- dynamic weight learning
- TF-IDF-assisted scoring
- tests for core routing behavior
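For readers unfamiliar with the TF-IDF part of that list, here is the textbook version of the idea: terms frequent in one prompt but rare across past prompts get more weight when scoring it against a tier. This is a generic sketch (with the common smoothed IDF variant), not the plugin's implementation:

```typescript
// Generic TF-IDF: term frequency in one document, scaled by how rare the
// term is across the corpus. Uses the smoothed variant log(N / (1 + df)).
function tfidf(term: string, doc: string[], corpus: string[][]): number {
  const tf = doc.filter((t) => t === term).length / doc.length;
  const docsWithTerm = corpus.filter((d) => d.includes(term)).length;
  const idf = Math.log(corpus.length / (1 + docsWithTerm));
  return tf * idf;
}

// A term that appears in most past prompts ("debug") scores lower than a
// term that is rare in the history ("summarize").
const history = [
  ["debug", "code"],
  ["summarize", "text"],
  ["debug", "function"],
];
const rareScore = tfidf("summarize", ["summarize", "text"], history);
const commonScore = tfidf("debug", ["debug", "code"], history);
```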
Strategically, this repository is one working component inside a larger hybrid-AI execution-governance vision.
For the runtime and host-side capabilities needed to support the broader roadmap, see OpenClaw Execution Layer.
```bash
cd router-plugin
npm install
npm run build
npm test
```

Then enable the plugin in `~/.openclaw/openclaw.json`:
```json
{
  "plugins": {
    "entries": {
      "router": {
        "enabled": true,
        "path": "/absolute/path/to/OpenClaw-Router/router-plugin",
        "config": {}
      }
    }
  }
}
```

Before switching to `override`, replace all `router.config.tiers.*` model references with models that actually exist in your own environment.
- `router-plugin/` - TypeScript plugin, tests, build output, and plugin manifest
- `docs/` - API contract, examples, roadmap, and supporting material
- `skills/router-rule-tuner/` - local skill for analyzing router logs and tuning rules
- `.github/workflows/ci.yml` - CI for build, type-check, and test
```bash
cd router-plugin
npm run lint
npm run build
npm test
```

The project is open to conversations with:
- investors who understand the long-term importance of execution governance in hybrid AI systems
- product and technical partners who want to build this layer together
- operators who can validate real routing and deployment needs
MIT. See LICENSE.
