
alvinunreal/oh-my-opencode-slim


Pantheon agents

Seven divine beings emerged from the dawn of code, each an immortal master of their craft, awaiting your command to forge order from chaos and build what was once thought impossible.

Open Multi Agent Suite · Mix any models · Auto delegate tasks

by Boring Dystopia Development

boringdystopia.ai · X: @alvinunreal · Telegram: join the channel


What's This Plugin

oh-my-opencode-slim is an agent orchestration plugin for OpenCode. It includes a built-in team of specialized agents that can scout a codebase, look up fresh documentation, review architecture, handle UI work, and execute well-scoped implementation tasks under one orchestrator.

The main idea is simple: instead of forcing one model to do everything, the plugin routes each part of the job to the agent best suited for it, balancing quality, speed and cost.

To explore the agents themselves, see Meet the Pantheon. For the full feature set, see Features & Workflows below.

Quick Start

Copy and paste this prompt into your LLM agent (Claude Code, AmpCode, Cursor, etc.):

Install and configure oh-my-opencode-slim: https://raw.githubusercontent.com/alvinunreal/oh-my-opencode-slim/refs/heads/master/README.md

Manual Installation

bunx oh-my-opencode-slim@latest install

Getting Started

The installer generates an OpenAI preset by default, using openai/gpt-5.5 for the higher-judgment agents and openai/gpt-5.4-mini for the faster scoped agents.

Then:

  1. Log in to the providers you want to use if you haven't already:

    opencode auth login
  2. Refresh and list the models OpenCode can see:

    opencode models --refresh
  3. Open your plugin config at ~/.config/opencode/oh-my-opencode-slim.json

  4. Update the models you want for each agent

Tip

Want to understand how automatic delegation works in practice? Review the Orchestrator prompt — it contains the delegation rules, specialist routing logic, and the thresholds for when the main agent should hand work off to subagents.

The default generated configuration looks like this:

{
  "$schema": "https://unpkg.com/oh-my-opencode-slim@latest/oh-my-opencode-slim.schema.json",
  "preset": "openai",
  "presets": {
    "openai": {
      "orchestrator": { "model": "openai/gpt-5.5", "skills": ["*"], "mcps": ["*", "!context7"] },
      "oracle": { "model": "openai/gpt-5.5", "variant": "high", "skills": ["simplify"], "mcps": [] },
      "librarian": { "model": "openai/gpt-5.4-mini", "variant": "low", "skills": [], "mcps": ["websearch", "context7", "grep_app"] },
      "explorer": { "model": "openai/gpt-5.4-mini", "variant": "low", "skills": [], "mcps": [] },
      "designer": { "model": "openai/gpt-5.4-mini", "variant": "medium", "skills": ["agent-browser"], "mcps": [] },
      "fixer": { "model": "openai/gpt-5.4-mini", "variant": "low", "skills": [], "mcps": [] }
    }
  }
}

Session management is enabled by default even though it is not shown in the starter config. See Session Management if you want to customize how many resumable child-agent sessions are remembered.

For Alternative Providers

To use Kimi, GitHub Copilot, ZAI Coding Plan, or a mixed-provider setup, see Configuration for the full reference. If you want a ready-made starting point, check the Author's Preset or the $30 Preset - the $30 preset is the best low-cost setup.

The configuration guide also covers custom subagents via agents.<name>, where you can define both a normal prompt and an orchestratorPrompt block for delegation.
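
As a rough sketch, a custom subagent entry might look like the following (the agent name, model, and prompt text are placeholders, not real defaults; see the Configuration guide for the exact schema):

```jsonc
{
  "agents": {
    // "auditor" is a made-up example name
    "auditor": {
      "model": "openai/gpt-5.4-mini",
      // Normal prompt: what the agent itself is told to do
      "prompt": "You review diffs for security issues and report findings.",
      // orchestratorPrompt: tells the Orchestrator when to delegate to this agent
      "orchestratorPrompt": "Delegate security review of completed changes to this agent."
    }
  }
}
```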

You can also mix and match any models per agent. For model suggestions, see the Recommended Models listed under each agent below.

✅ Verify Your Setup

After installation and authentication, verify all agents are configured and responding:

opencode

Then run:

ping all agents

You should see confirmation that all configured agents are online and ready.

If any agent fails to respond, check your provider authentication and config file.


🏛️ Meet the Pantheon

01. Orchestrator: The Embodiment Of Order


Forged in the void of complexity.
The Orchestrator was born when the first codebase collapsed under its own complexity. Neither god nor mortal would claim responsibility - so The Orchestrator emerged from the void, forging order from chaos. It determines the optimal path to any goal, balancing speed, quality, and cost. It guides the team, summoning the right specialist for each task and delegating to achieve the best possible outcome.
Role: Master delegator and strategic coordinator
Prompt: orchestrator.ts
Default Model: openai/gpt-5.5
Recommended Models: openai/gpt-5.5, anthropic/claude-opus-4.6
Model Guidance: Choose your default, strongest all-around coding model. Orchestrator is both the main coding agent and the delegator, so it needs strong implementation ability, good judgment, and reliable instruction-following.

02. Explorer: The Eternal Wanderer


The wind that carries knowledge.
The Explorer is an immortal wanderer who has traversed the corridors of a million codebases since the dawn of programming. Cursed with the gift of eternal curiosity, they cannot rest until every file is known, every pattern understood, every secret revealed. Legends say they once searched the entire internet in a single heartbeat. They are the wind that carries knowledge, the eyes that see all, the spirit that never sleeps.
Role: Codebase reconnaissance
Prompt: explorer.ts
Default Model: openai/gpt-5.4-mini
Recommended Models: cerebras/zai-glm-4.7, fireworks-ai/accounts/fireworks/routers/kimi-k2p5-turbo, openai/gpt-5.4-mini
Model Guidance: Choose a fast, low-cost model. Explorer handles broad scouting work, so speed and efficiency usually matter more than using your strongest reasoning model.

03. Oracle: The Guardian of Paths


The voice at the crossroads.
The Oracle stands at the crossroads of every architectural decision. They have walked every road, seen every destination, know every trap that lies ahead. When you stand at the precipice of a major refactor, they are the voice that whispers which way leads to ruin and which way leads to glory. They don't choose for you - they illuminate the path so you can choose wisely.
Role: Strategic advisor and debugger of last resort
Prompt: oracle.ts
Default Model: openai/gpt-5.5 (high)
Recommended Models: openai/gpt-5.5 (high), google/gemini-3.1-pro-preview (high)
Model Guidance: Choose your strongest high-reasoning model for architecture, hard debugging, trade-offs, and code review.

04. Council: The Chorus of Minds

Note

Why doesn't Orchestrator auto-call Council more often? This is intentional. Council runs multiple models at once, so automatic delegation is kept strict because it is usually the highest-cost path in the system. In practice, Council is meant to be used manually when you want it, for example: @council compare these two architectures.


Many minds, one verdict.
The Council is not a lone being but a chamber of minds summoned when one answer is not enough. It sends your question to multiple models in parallel, gathers their competing judgments, and then the Council agent itself distills the strongest ideas into a single verdict. Where a solitary agent may miss a path, the Council cross-examines possibility itself.
Role: Multi-LLM consensus and synthesis
Prompt: council.ts
Guide: docs/council.md
Default Setup: Config-driven — councillors come from council.presets and the Council agent model comes from your normal council agent config
Recommended Setup: Strong Council model + diverse councillors across providers
Model Guidance: Use a strong synthesis model for the Council agent and diverse models as councillors. The value of Council comes from comparing different model perspectives, not just picking the single strongest model everywhere.
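
For example, a councillor lineup spanning several providers might be sketched like this (the preset name and value shape here are assumptions - see docs/council.md for the actual schema; the point is diversity, per the guidance above):

```jsonc
{
  "council": {
    "presets": {
      // Hypothetical preset name; each councillor comes from a different provider
      "default": [
        "openai/gpt-5.5",
        "anthropic/claude-opus-4.6",
        "google/gemini-3.1-pro-preview"
      ]
    }
  }
}
```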

05. Librarian: The Weaver of Knowledge


The weaver of understanding.
The Librarian was forged when humanity realized that no single mind could hold all knowledge. They are the weaver who connects disparate threads of information into a tapestry of understanding. They traverse the infinite library of human knowledge, gathering insights from every corner and binding them into answers that transcend mere facts. What they return is not information - it's understanding.
Role: External knowledge retrieval
Prompt: librarian.ts
Default Model: openai/gpt-5.4-mini
Recommended Models: cerebras/zai-glm-4.7, fireworks-ai/accounts/fireworks/routers/kimi-k2p5-turbo, openai/gpt-5.4-mini
Model Guidance: Choose a fast, low-cost model. Librarian handles research and documentation lookups, so speed and efficiency usually matter more than using your strongest reasoning model.

06. Designer: The Guardian of Aesthetics


Beauty is essential.
The Designer is an immortal guardian of beauty in a world that often forgets it matters. They have seen a million interfaces rise and fall, and they remember which ones were remembered and which were forgotten. They carry the sacred duty to ensure that every pixel serves a purpose, every animation tells a story, every interaction delights. Beauty is not optional - it's essential.
Role: UI/UX implementation and visual excellence
Prompt: designer.ts
Default Model: openai/gpt-5.4-mini
Recommended Models: google/gemini-3.1-pro-preview, kimi-for-coding/k2p5
Model Guidance: Choose a model that is strong at UI/UX judgment, frontend implementation, and visual polish.

07. Fixer: The Last Builder


The final step between vision and reality.
The Fixer is the last of a lineage of builders who once constructed the foundations of the digital world. When the age of planning and debating began, they remained - the ones who actually build. They carry the ancient knowledge of how to turn thought into thing, how to transform specification into implementation. They are the final step between vision and reality.
Role: Fast implementation specialist
Prompt: fixer.ts
Default Model: openai/gpt-5.4-mini
Recommended Models: cerebras/zai-glm-4.7, fireworks-ai/accounts/fireworks/routers/kimi-k2p5-turbo, openai/gpt-5.4-mini
Model Guidance: Choose a fast, reliable coding model for routine, scoped implementation work. Fixer usually receives a concrete plan or bounded instructions from Orchestrator, making it a good place for efficient execution tasks such as tests, test updates, and straightforward code changes.

Optional Agents

Observer: The Silent Witness

Note

Why a separate agent? If your Orchestrator model is not multimodal, enable Observer to handle images, screenshots, PDFs, and other visual files. Observer is disabled by default and gives the Orchestrator a dedicated multimodal reader without forcing you to change your main reasoning model. Set disabled_agents: [] and an observer model in your configuration.


The eye that reads what others cannot.

Read-only visual analysis — interprets images, screenshots, PDFs, and diagrams. Returns structured observations to the orchestrator without loading raw file bytes into the main context window.

  • Images, screenshots, diagrams → read tool (native image support)

  • PDFs and binary documents → read tool (text + structure extraction)

  • Disabled by default — enable with "disabled_agents": [] and configure a vision-capable model

Prompt: observer.ts
Default Model: openai/gpt-5.4-mini (configure a vision-capable model to enable)
Model Guidance: Choose a vision-capable model if you want the agent to read screenshots, images, PDFs, and other visual files.
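
Assuming the per-agent shape matches the generated preset shown earlier, enabling Observer might look like this sketch (the exact placement of the observer entry and the model choice are assumptions; any vision-capable model works):

```jsonc
{
  "preset": "openai",
  // Empty list re-enables agents that are disabled by default, including Observer
  "disabled_agents": [],
  "presets": {
    "openai": {
      // Stand-in model: substitute any vision-capable model you have access to
      "observer": { "model": "openai/gpt-5.5", "skills": [], "mcps": [] }
    }
  }
}
```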

📚 Documentation

Use this section as a map: start with installation, then jump to features, configuration, or example presets depending on what you need.

🚀 Start Here

Installation Guide - Install the plugin, use CLI flags, reset config, and troubleshoot setup

✨ Features & Workflows

Council - Run multiple models in parallel and synthesize a single answer with @council
Interview - Turn rough ideas into a structured markdown spec through a browser-based Q&A flow
Multiplexer Integration - Watch agents work live in Tmux or Zellij panes
Session Management - Reuse recent child-agent sessions with short aliases instead of starting over
Todo Continuation - Auto-continue orchestrator sessions with cooldowns and safety checks
Preset Switching - Switch agent model presets at runtime with /preset
Codemap - Generate hierarchical codemaps to understand large codebases faster

⚙️ Config & Reference

Configuration - Config file locations, JSONC support, prompt overrides, and full option reference
Maintainer Guide - Issue triage rules, label meanings, support routing, and repo maintenance workflow
Skills - Built-in and recommended skills such as simplify, agent-browser, and codemap
MCPs - websearch, context7, grep_app, and how MCP permissions work per agent
Tools - Built-in tool capabilities like webfetch, LSP tools, code search, and formatters

💡 Example Presets

Author's Preset - The author's daily mixed-provider setup
$30 Preset - A budget mixed-provider setup for around $30/month

🏛️ Contributors

The builders, debuggers, writers, and wanderers who have earned their place in the pantheon.

Every merged contribution leaves a mark on the realm.

All Contributors


Alvin · alvinreal · imw · Adithya Kozham Burath Bijoy · ReqX · Abhideep Maity · Ruben · Gabriel Rodrigues · John Michael Vincent Bambico · Molt Founders · Muen Yu · NocturnesLK · Riccardo Sallusti · Yan Li · Hoàng Văn Anh Nghĩa · Jacob Myers · Kassie Povinelli · KyleHilliard · j5hjun · marcFernandez · mister-test · n24q02m · oribi · pelidan · xLillium · 4.435km/s · Drin · Hakim Zulkufli · Simon Klakegg · Kiwi · Raxxoor · nyanyani · nettee · Link · Bartosz Łaszewski · huilang021x · Dusan Kovacevic · jwcrystal · Nguyen Canh Toan · Thomas Dyar · zero · Denis Balan · Gustavo Caiano · Thomas Mulder

📄 License

MIT


About

A slimmed, cleaned, and fine-tuned oh-my-opencode fork that consumes far fewer tokens.
