A lot of AI coding tools start with a small trade. The terminal looks nice, the first prompt works, and then the product gently steers you toward a specific provider, a specific billing path, and a specific way of working. At first that feels convenient. After a few weeks it starts to feel like moving into a furnished apartment where all the chairs are bolted to the floor.
That is why I ended up with OpenCode. In my view, it is probably the best TUI in this category, and the reason it wins for me is simple: it stays open. It is open source, highly configurable, and it gives me one stable environment for many different models and workflows. That is the core value. I do not want to rebuild my habits every time I change the model I use for a task.
I like the TUI itself a lot. It is clean, calm, and out of the way. That sounds subjective because it is, but terminal quality matters when you live in it for hours every day. OpenCode also has strong primitives around plugins, agents, skills, custom commands, MCP servers, and custom tools. All of that is useful, but the thing that made it stick is simpler than the feature list. I can use almost any model I want without changing the environment I work in.
Why model flexibility matters more than the leaderboard
Most people evaluating AI coding tools focus on the model first. That is understandable and usually incomplete. In practice, your speed is shaped by the whole operating environment around the model: how you plan work, how tools are exposed, how review works, how you recover from bad turns, how state is preserved, and whether you can switch brains without switching desks.
OpenCode gets this right because the environment is stable while the model stays swappable. A recent Otto build made the value obvious for me. Over a multi-week stretch, I used Codex 5.3 for coding, GPT 5.4 for task design, and Gemini 3.1 Pro for design work, all inside the same terminal workflow. That is the feature that matters most in daily use. I can pick the model that is best for the task without paying the tax of moving to a different tool with different controls, different context behavior, and different assumptions.
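In config terms, that switch is close to a one-line change. OpenCode takes a default model as a top-level model key in provider/model-id form, and you can still change models per session from inside the TUI. A minimal sketch, with the model ID as a placeholder assumption rather than a real identifier:
{
  "$schema": "https://opencode.ai/config.json",
  "model": "openai/gpt-5.4"
}
Swap that one value and everything else in the environment, the commands, plugins, and MCPs, stays exactly where it was.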
This is also why I prefer it to more provider-shaped tools. Claude Code is very good and worth studying, but I do not want my workflow architecture tightly coupled to one company’s product surface. There is a practical limitation here too: I cannot sensibly center my setup on Claude subscription access, because Anthropic’s terms restrict that path. By contrast, OpenCode works smoothly for me with ChatGPT subscriptions, with Gemini via plugin, and now increasingly with Ollama Cloud, where I am currently experimenting quite a bit with Minimax 2.7.
The setup that makes it useful
OpenCode becomes much more powerful once you lean into its config system. It supports a global opencode.json in ~/.config/opencode and a per-project opencode.json in the repo. Those layers merge instead of replacing each other. I prefer the global side for most things because it gives me shared muscle memory across projects. I only override locally when a repo truly needs different behavior.
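To make the merge concrete, here is a hypothetical per-project opencode.json. Only the keys it names are overridden; everything else falls through to the global file. The model ID and the choice to disable Playwright are illustrative, not part of my real setup:
{
  "$schema": "https://opencode.ai/config.json",
  "model": "ollama/minimax-m2.7:cloud",
  "mcp": {
    "playwright": {
      "enabled": false
    }
  }
}
A backend-only repo, for example, has no use for a browser driver, yet it still inherits every plugin, command, and remaining MCP from the global layer.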
This is the lightly cleaned global config I use:
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Ollama Cloud",
      "options": {
        "baseURL": "https://ollama.com/v1"
      },
      "models": {
        "minimax-m2.7:cloud": {
          "name": "minimax-m2.7:cloud"
        }
      }
    }
  },
  "plugin": [
    "opencode-gemini-auth@latest",
    "opencode-agent-memory"
  ],
  "command": {
    "plan-ticket": {
      "description": "Plan implementation for a PM ticket file",
      "agent": "plan",
      "template": "Go ahead and plan an implementation for @$1 . Make sure to thoroughly check through our codebase and analyze accordingly, making sure you check the implementation of the other tickets in this epic, so that you reuse accordingly. Consider our good software practices that we agreed upon"
    },
    "post-push-pr": {
      "description": "Create PR and handle review follow-up",
      "template": "Pushed it. Create a PR, wait for actions, fix all reviewer comments, return"
    }
  },
  "autoupdate": true,
  "mcp": {
    "chrome-devtools": {
      "type": "local",
      "command": ["npx", "-y", "chrome-devtools-mcp@latest"]
    },
    "context7": {
      "type": "remote",
      "url": "https://mcp.context7.com/mcp",
      "headers": {
        "CONTEXT7_API_KEY": "{env:CONTEXT7_API_KEY}"
      },
      "enabled": true
    },
    "playwright": {
      "type": "local",
      "command": ["npx", "-y", "@playwright/mcp@latest", "--headless"],
      "enabled": true
    },
    "stitch": {
      "type": "remote",
      "url": "https://stitch.googleapis.com/mcp",
      "enabled": true,
      "headers": {
        "X-Goog-Api-Key": "{env:STITCH_API_KEY}"
      }
    }
  }
}
There are a few reasons I like this structure. The obvious one is that environment variables can stay in my shell config instead of leaking into the file. The less obvious one is that this turns OpenCode into an operating environment rather than a chat box. Providers, commands, plugins, and external tools all live in one system that I can keep improving over time.
The MCPs are where the terminal becomes a real work surface
I use all four MCPs in that config regularly, and each one solves a different problem.
Context7 is the one I use when I need current documentation without leaving the flow. It is effectively my docs lookup layer inside the terminal. That sounds modest and ends up saving a lot of time. Instead of breaking focus, opening browser tabs, and translating library docs back into the task at hand, I can keep the agent grounded in current docs while it is already working in the repo.
Playwright is the frontend builder’s friend. I use it heavily when building UI because it can actually drive the browser, inspect pages, fill forms, click through flows, and verify what changed. That is far more useful than having the model guess what a screen looks like from code alone. If I am building a new flow, adjusting layout, or checking whether a component really behaves correctly across a few steps, Playwright gives the model eyes and hands in the browser.
Chrome DevTools MCP is where debugging gets much better. This is not just another browser driver. It exposes browser-level debugging information such as console messages, network requests, snapshots, screenshots, and performance tooling. When something breaks in the browser, especially in frontend work, this matters more than people think. A coding model that can see the console, inspect failed requests, and look at the actual page state is much more useful than one that stares at TypeScript and improvises.
Stitch is the design-oriented part of the setup. I use Gemini a lot for design work, and Stitch fits that well because it can generate, edit, and vary screens from prompts. That is useful when I want to move quickly on interface ideas without leaving the environment. I do not use it as a replacement for proper product thinking. I use it as a fast design exploration surface inside the same tool where the implementation work happens.
Taken together, those MCPs let me do planning, coding, browser automation, debugging, and visual exploration in one place. That is one of the main reasons the TUI matters so much. If the interface were mediocre, the whole stack would feel heavy. Because the interface is clean, the added capabilities stay usable.
Commands, parameters, and the small ergonomics that compound
I also get a lot of value from custom commands. This part is easy to underestimate because it sounds like a convenience feature. It is more than that. Commands let you turn recurring patterns into named entry points, and OpenCode supports parameters, so the commands can be generic rather than hardcoded.
My plan-ticket command is a good example. It uses $1, which means I can pass a ticket file into the command and have the prompt reference that exact file. That seems small and is not small. It means I do not have to restate the planning pattern every time. I can encode the behavior I want once and then reuse it with different inputs. The result is more consistency and less prompt typing.
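To show how little ceremony a new parameterized command needs, here is a hypothetical one in the same style; the name and template are illustrative, not taken from my actual config:
{
  "$schema": "https://opencode.ai/config.json",
  "command": {
    "fix-issue": {
      "description": "Investigate and fix an issue by number",
      "template": "Look up issue #$1, reproduce it locally, write a failing test for it, then fix it and run the full test suite before reporting back."
    }
  }
}
Invoked as /fix-issue 1234, the $1 expands to whatever argument you pass, so one command definition covers every issue.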
The same idea applies to post-push-pr. It packages a recurring review-and-follow-up loop into one named step. I like this style because it treats good prompts as durable tools rather than disposable messages.
The built-in commands are also genuinely useful. I use /review as part of my normal flow, then I review the result myself. I also use /undo and /redo a lot more than I expected to when I first started. Because OpenCode snapshots file changes through Git, those commands are not cosmetic history controls. They are real steering tools. When an agent takes a good idea one step too far, being able to revert both the message turn and the file changes is a very practical quality-of-life feature.
How I use OpenCode in my day-to-day workflow
My workflow in OpenCode is fairly stable now. I usually preplan the work before I build it. I let the planning side shape the problem, then I give the implementation to the model that is best suited for the job, then I ask for a review, and then I do my own review. That loop has become much more spec-driven over time. I used to steer things far more step by step. Model and tool quality now let me hand over larger chunks of work, provided the planning and review are good.
This is also where my custom agents come in. My global config directory is not there for decoration:
~/.config/opencode
├── agents
│   ├── architecture-incremental-check.md
│   ├── architecture-initial-scan.md
│   ├── designer.md
│   ├── systemplanner.md
│   ├── writer-essays.md
│   └── writer-linkedin.md
├── memory
│   ├── human.md
│   ├── persona.md
│   └── remember_instruction.md
├── opencode.json
├── package.json
└── skills
    ├── npm-trusted-publishing/
    ├── obsidian-markdown/
    ├── playwright/
    ├── pr-writer/
    └── ts_js_docs/
Those agents each have a real role. designer is my lean build planner for turning an idea into small, deployable engineering tickets without a lot of product theater. systemplanner is more interactive and useful when the shape of the thing is still fuzzy. The architecture agents help me baseline a codebase and then check drift after changes. The writing agents handle essays and LinkedIn posts with their own constraints and tone.
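For what it is worth, agents do not have to live only as markdown files. My reading of the config schema is that the same definitions can be declared in opencode.json under an agent key, with the prompt pulled in from a file. Treat the exact keys below as a sketch based on that reading, not as my production setup:
{
  "$schema": "https://opencode.ai/config.json",
  "agent": {
    "designer": {
      "description": "Turns an idea into small, deployable engineering tickets",
      "mode": "primary",
      "prompt": "{file:./agents/designer.md}"
    }
  }
}
Either way, the point is the same: the role definition lives in version-controllable text, not in my head.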
That setup matters for teams as much as for solo work. Once your good practices live inside the environment, you stop relying on memory and repetition alone. The tool starts carrying part of the discipline.
Skills and memory make the whole thing stick
I also use the memory plugin heavily. In practice that means the assistant carries forward durable preferences and constraints through small memory files like human.md, persona.md, and remember_instruction.md. That reduces repeated setup and makes the system feel less stateless across sessions.
The skills layer helps in a different way. My skills are focused and practical. npm-trusted-publishing exists because publishing workflows fail in very specific ways, and I want those lessons encoded once instead of rediscovered. pr-writer turns pull request descriptions into a repeatable output rather than an afterthought. ts_js_docs helps keep TypeScript docs consistent without changing behavior. obsidian-markdown matters because I work in Obsidian as well and want the assistant to understand the actual markdown flavor instead of producing generic markdown that almost works. I also keep a Playwright skill so browser workflows are executed in a deliberate, verifiable way.
None of these pieces is magical by itself. The value comes from accumulation. Since the end of last year, I have kept adding MCPs, commands, memory, agents, and model paths on top of the same OpenCode base. That means the environment gets better without forcing me to relearn it.
Where this setup does and does not fit
I would not recommend this exact setup to everyone. OpenCode is strongest if you are willing to invest a bit in your environment. If you want a very opinionated tool with fewer moving parts, something more locked in may feel simpler on day one.
Still, for my work, this trade is clearly worth it. I get a terminal I genuinely enjoy using, an open system that does not force me into one ecosystem, a strong plugin and command model, real browser and debugging tooling, and the ability to swap models based on the task instead of the product wrapper. That is why OpenCode became my AI coding home base.
Sources
- OpenCode documentation
- OpenCode repository and built-in /review command template
Practical experiment: set up a minimal global OpenCode config with one extra model route, one MCP server, and one custom command that takes a parameter. Use that environment for a week across at least two different tasks, and switch models without changing anything else. If the workflow still feels coherent, you have probably found a setup worth keeping.
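If you want a concrete starting point for that experiment, here is a minimal sketch of such a global file, assembled from the pieces above. The provider block and MCP entry mirror my own config; the command is trimmed down, so treat the details as adjustable rather than prescriptive:
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Ollama Cloud",
      "options": {
        "baseURL": "https://ollama.com/v1"
      },
      "models": {
        "minimax-m2.7:cloud": {
          "name": "minimax-m2.7:cloud"
        }
      }
    }
  },
  "mcp": {
    "context7": {
      "type": "remote",
      "url": "https://mcp.context7.com/mcp",
      "headers": {
        "CONTEXT7_API_KEY": "{env:CONTEXT7_API_KEY}"
      },
      "enabled": true
    }
  },
  "command": {
    "plan-ticket": {
      "description": "Plan implementation for a ticket file",
      "agent": "plan",
      "template": "Plan an implementation for @$1 and list the files you expect to touch before writing any code."
    }
  }
}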