January 29, 2026

Otto 1/3: Vision, Boundaries, and the Build

Otto is a personal assistant that runs on your network, with explicit permissions and a sandboxed execution model. This is the vision and why it starts with trust boundaries, not hype.

AI · Agents · Architecture · Security · Product

I have always wanted a personal assistant that can see my network, access my files, and handle real tasks on my behalf when I allow it. LLMs make that possible, and Clawd is the first system I admired that showed me what this could look like in practice. Otto is not a copycat of Clawd. It is a different bet: a server-client setup that runs cleanly on a small Linux box, with explicit permissions and a sandbox, because that fits how I want to live with it. I am building it in public as an experiment, and I want the focus to stay on trust boundaries, not on hype.

The Assistant I Actually Want

Most AI assistants today are great at answering questions. That is useful, but it is not the shape I want. I want an assistant that can do the quiet life admin that eats a day. Sort the important emails, track the calendar, prepare a summary before a meeting, and remind me when a thread has been stuck for a week. It needs context across channels, and it needs the ability to take action when I decide it should.

The gap is integration and control, not intelligence. A system that can see everything but cannot explain why it acted is not a trustworthy one. I want autonomy as a setting, not as a personality trait, and I want that autonomy tied to a visible contract. If Otto does something, I should know exactly what it did, why it did it, and what permission allowed it.

Clawd Proved the Shape

Clawd showed the right shape for a personal assistant. It is always on, lives in real channels, and feels local instead of theoretical. It made the idea concrete for me. I admire the ambition and the craft, and I am not trying to compete with it. If Clawd is the proof that this category can work, Otto is the proof that a stricter, more bounded version can fit a different set of constraints.

Clawd is also unapologetically macOS first, with deep integration into that ecosystem and a UX that makes sense for that audience. That is a valid choice, just not the one I want for this experiment. I want a path that assumes a small Linux machine in the corner, and a client that can run wherever I am. That single preference changes almost every decision that follows.

Trust Boundaries Are the Product

Most agent demos optimize for capability. They show a system that can browse, message, buy, and execute without pause. The result is thrilling and fragile. The first failure looks like a glitch, but the real failure is always a boundary failure. The system did something it should not have been allowed to do, or it used data it should not have touched, or it executed a command because it could.

Otto starts at the opposite end: secure by default, explicit file access, sandboxed command execution, mandatory authentication, and visible autonomy tiers. If this assistant is going to live on my network, it needs a clear contract with me and it needs to honor that contract when I am distracted, tired, or not watching.

That is why I care about audit trails and explicit permissions more than clever prompts, why Otto needs to be self-aware about what it can and cannot do, and why configuration changes need confirmation.
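To make the contract concrete, here is a minimal sketch of what an autonomy-tier gate with an audit trail could look like. The tier names, the action shape, and the log format are illustrative assumptions for this post, not Otto's actual implementation.

```python
# Sketch of a permission contract: every action declares the tier it
# needs, and every decision (including refusals) lands in the audit log.
# All names here are illustrative assumptions, not Otto's real API.
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum


class Tier(Enum):
    READ_ONLY = 1    # may observe, never act
    CONFIRM = 2      # may act, but only after explicit approval
    AUTONOMOUS = 3   # may act and report afterwards


@dataclass
class Action:
    name: str
    required_tier: Tier
    reason: str  # the "why" that ends up in the audit trail


def execute(action: Action, granted: Tier, approved: bool, log: list) -> bool:
    """Run an action only if the granted tier allows it, and record why."""
    allowed = granted.value >= action.required_tier.value
    # Anything beyond observation needs a human yes, unless the user
    # has explicitly granted full autonomy.
    needs_approval = (
        action.required_tier.value > Tier.READ_ONLY.value
        and granted is not Tier.AUTONOMOUS
    )
    ran = allowed and (approved or not needs_approval)
    log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "action": action.name,
        "reason": action.reason,
        "tier_granted": granted.name,
        "executed": ran,
    })
    return ran
```

The point of the sketch is that refusals are logged with the same fidelity as executions: the audit trail answers "what, why, and under which permission" even when nothing happened.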

Server-Client as a Constraint

The server-client split is a trust boundary, not an implementation detail. The server runs on a small device that is always on, like a Raspberry Pi or Jetson. The client is where I interact, which can be a laptop, a phone, or a messaging channel. That separation gives me two critical things: reliability and containment. The assistant can keep running without my laptop open, and if something goes wrong, it is contained to a known box with known permissions.
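Mandatory authentication is what makes that containment real: the server refuses any request that does not carry a valid credential, regardless of where on the network it came from. Here is a minimal sketch using an HMAC over a shared secret; the secret provisioning and message shape are assumptions for illustration, not Otto's actual protocol.

```python
# Sketch of the server-side trust boundary: no request runs unless it
# is signed with a secret provisioned out of band. Illustrative only.
import hashlib
import hmac

SECRET = b"replace-with-a-real-secret"  # assumption: shared at setup time


def sign(message: bytes) -> str:
    """Client-side: sign a request with the shared secret."""
    return hmac.new(SECRET, message, hashlib.sha256).hexdigest()


def handle(message: bytes, signature: str) -> str:
    """Server-side: verify before doing anything at all."""
    # compare_digest is constant-time, so the check doesn't leak timing
    if not hmac.compare_digest(sign(message), signature):
        return "rejected"  # contained: unauthenticated input never executes
    return "accepted"
```

Whether the real credential is an HMAC, a token, or mutual TLS matters less than the shape: verification happens before any capability is reachable, so a compromised client or a stray device on the LAN hits a wall, not an assistant.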

Where We Are Going

In the near term, Otto will support WhatsApp first because that is what I use, with real integrations like email, calendar, and tasks. It will have explicit autonomy tiers, confirm actions that carry risk, and log every action with a clear explanation. That is the baseline.

In the mid term, it will support local models for the parts that are safe to run locally and cheap to keep resident, while leaving an explicit option to use cloud models when the job is too heavy.

In the long term, Otto should feel like infrastructure users cannot see. It should be there, reliable, and quiet, until you need it. That only happens when the boundaries are clear and the system is boring enough to forget. The assistant wins when it becomes part of the routine, not another system you have to babysit.

Building this in public is part of the constraint. I will show the decisions, the tradeoffs, and the failures. I want to see if a personal assistant can be both useful and bounded, and whether those constraints produce a system that feels trustworthy instead of merely impressive. That is the experiment.
