I've built a multi-runtime abstraction layer on top of NanoClaw that allows different agent SDKs to be installed as modular skills — mirroring the existing channel pattern (/add-telegram, /add-slack).
What it does
- `AgentRuntime` interface at the host level — the app calls `runtime.run()` instead of `runContainerAgent()` directly
- Modular SDK registry — SDKs self-register via `registerAgentSdk()` and are selected per-group via config
- Two SDK adapters: Claude Agent SDK (`/add-agentSDK-claude`) and OpenAI Codex SDK (`/add-agentSDK-codex`)
- Both SDKs run in the same container image — the agent-runner detects the runtime from `ContainerInput.runtime` and calls the appropriate SDK
- Per-group model selection via the `/model` Telegram command
- Local model support — the Codex SDK routes to OMLX, Ollama, or LiteLLM via a per-group `baseUrl`
- `AGENT.md` as a runtime-agnostic persona file (assembled into SDK-specific format at container startup)
- Skill branches — the clean base has no SDK code; `/add-agentSDK-*` merges add them
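To make the registry idea concrete, here is a minimal sketch of the self-registration pattern described above. The names `AgentRuntime` and `registerAgentSdk()` come from the post; the exact signatures, the `runtimeFor()` helper, and the input shape are my assumptions, not the fork's actual code.

```typescript
// Hypothetical sketch of the SDK registry. Adapters import this module and
// call registerAgentSdk() at import time; the host never references a
// concrete SDK, only the interface.
interface AgentRuntime {
  id: string;
  // Runs one agent session; the input/output shapes are simplified here.
  run(input: { prompt: string; groupId: string }): Promise<string>;
}

const sdkRegistry = new Map<string, AgentRuntime>();

// SDK adapters call this as a side effect of being imported.
function registerAgentSdk(runtime: AgentRuntime): void {
  sdkRegistry.set(runtime.id, runtime);
}

// The host resolves a runtime per group from its config.
function runtimeFor(groupConfig: { runtime: string }): AgentRuntime {
  const rt = sdkRegistry.get(groupConfig.runtime);
  if (!rt) throw new Error(`No SDK registered for "${groupConfig.runtime}"`);
  return rt;
}

// Example: a stub adapter registering itself.
registerAgentSdk({
  id: "claude",
  run: async (input) => `claude-response-to:${input.prompt}`,
});
```

A barrel file that imports each adapter module then controls which SDKs are compiled in, the same way the channel registry works.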
Architecture
```
App Shell (channels, state, scheduling)
→ AgentRuntime (ClaudeRuntime or CodexRuntime)
→ ContainerManager.runAgentSession()
→ Container (agent-runner detects runtime, calls SDK)
→ Claude: query() with built-in tools
→ Codex: thread.runStreamed() with shell + apply_patch
```
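The in-container hand-off in the diagram can be sketched as a single dispatch on `ContainerInput.runtime`. Only the `runtime` field name comes from the post; every other field and the stub bodies are illustrative assumptions standing in for the real SDK calls (`query()` for Claude, `thread.runStreamed()` for Codex).

```typescript
// Hypothetical agent-runner dispatch inside the shared container image.
type ContainerInput = {
  runtime: "claude" | "codex";
  prompt: string;
  baseUrl?: string; // assumed per-group override for local models (Codex path)
};

async function runAgent(input: ContainerInput): Promise<string> {
  switch (input.runtime) {
    case "claude":
      // Real code would call the Claude Agent SDK's query() with its
      // built-in tools; stubbed here.
      return `claude:${input.prompt}`;
    case "codex":
      // Real code would call thread.runStreamed() with shell + apply_patch,
      // pointing at input.baseUrl when a local model is configured; stubbed.
      return `codex:${input.prompt}`;
  }
}
```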
Key design decisions
- Runtime is the boundary; container is an implementation detail — both SDKs share the same container image
- SDK registry mirrors the channel registry — self-registration at import, barrel file controls what's loaded
- AGENT.md is runtime-agnostic — persona and instructions work for any SDK
- Skills use native SDK loading — both Claude and Codex load skills on-demand from their respective directories
- No forced abstractions — each SDK uses its own tools natively, no ToolExecutor layer
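One way the runtime-agnostic `AGENT.md` decision could play out at container startup: copy it to whichever instructions file the selected SDK conventionally reads. The target filenames below follow the common `CLAUDE.md`/`AGENTS.md` conventions but are my assumption about this fork, not confirmed by the post.

```typescript
// Hedged sketch: map a runtime to the instructions file its SDK discovers,
// so a startup script can copy the single AGENT.md persona into place.
function personaTarget(runtime: "claude" | "codex"): string {
  return runtime === "claude" ? "CLAUDE.md" : "AGENTS.md";
}
```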
What stays backwards-compatible
- Existing Claude-only NanoClaw installs continue working unchanged
- The clean base starts without any SDK — `/setup` guides installation
- All channel code, IPC, DB, scheduling untouched
- `ContainerInput`/`ContainerOutput` protocol unchanged
Stats
- ~8,000 lines total (vs. ~4,000 in the original NanoClaw)
- ~600 lines of runtime abstraction layer
- Both SDKs verified working via Telegram
Fork
https://github.com/chiptoe-svg/nanoclaw_flexagents
Happy to discuss the approach, split into smaller PRs, or adapt to upstream conventions. The SDK registry pattern could be upstreamed independently as a first step.