You designed the system correctly. Clean module boundaries, dependency injection, repository patterns for database access, adapter interfaces for external services. You anticipated change and abstracted accordingly.
Then the CEO came back from a conference. The company is pivoting from B2B to B2C. The investor wants a marketplace model. The acquirer needs the system to run in their infrastructure within 90 days.
None of your abstractions help. The repository layer is intact, but the entire domain model is wrong. The adapter interfaces are clean, but you need to rethink how users flow through the system. You designed for the changes you anticipated and got blindsided by everything else.
This is the gap that traditional architecture leaves open. We know requirements change. We've known since at least the Agile Manifesto. We have decades of patterns designed for flexibility: hexagonal architecture, ports and adapters, clean architecture. We talk constantly about designing for change.
But we're still guessing which changes to design for. We look at our system, ask "what here might change?", and add abstractions around those parts. Payment provider might change? Add an interface. Database might change? Add a repository layer. We prepare for anticipated change and hope we anticipated correctly.
We usually don't.
Ordered Systems in Disordered Environments
Barry O'Reilly, in his work on Residuality Theory, frames the problem differently. Software is an ordered system: predictable, constrained, testable. We put that ordered system inside a disordered one: markets, organizations, human behavior, politics, economics. The disordered system doesn't care about our abstractions. It generates stressors we never considered, and those stressors tear our carefully designed architecture apart.
The traditional response is to try harder at prediction. Better requirements gathering. More comprehensive risk registers. Longer planning phases. But you can't predict what you can't predict. No amount of stakeholder interviews would have surfaced "pickup truck owners will block electric car chargers to make TikTok videos" as a requirement. No risk register includes "someone stuffs minced meat into the charging ports."
Those are real stressors that hit real systems. The teams that survived them didn't predict them. They built systems that happened to be resilient to stressors they never imagined.
A Different Question
Residuality Theory asks a different question. Instead of "what might change, and how do we prepare for it?", it asks "what survives when this system is stressed?"
The shift is subtle but significant. You stop trying to predict sources of change and start examining how your system falls apart under pressure. You stress the architecture with scenarios, including absurd ones, and observe what breaks, what survives, and what breaks together.
The surviving pieces are called residues. The states your system naturally falls into under stress are called attractors. The tool for mapping which components break together under which stressors is called an incidence matrix, borrowed from network science.
Here's how it works in practice. You have a naive architecture that solves the functional problem. You throw stressors at it: competitor drops prices, key vendor goes bankrupt, traffic spikes 100x, regulatory change forces data restructuring, infrastructure provider has an outage. For each stressor, you ask: what's left? What survives? What would need to exist for this architecture to keep working?
You identify residues, the components or design decisions that let the system survive each stressor. Then you compress all those residues into a single coherent architecture.
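To make the mechanics concrete, here is a minimal sketch of an incidence matrix in Python. The stressors, component names, and failure sets are invented placeholders, not O'Reilly's examples; in practice this is a whiteboard or spreadsheet exercise rather than code you would ship.

```python
# Toy incidence matrix: which components break under which stressors.
# Every name here is a hypothetical placeholder.

components = ["billing", "charger_firmware", "mobile_app",
              "vendor_api_adapter", "plate_recognition"]

# For each stressor, the set of components that fail or need rework.
incidence = {
    "vendor_bankruptcy":  {"vendor_api_adapter", "billing"},
    "traffic_spike_100x": {"mobile_app", "billing"},
    "regulatory_change":  {"billing"},
    "charger_vandalism":  {"charger_firmware", "mobile_app"},
}

# The residue of each stressor: what is left standing after it hits.
for stressor, broken in incidence.items():
    residue = [c for c in components if c not in broken]
    print(f"{stressor:20} -> survives: {', '.join(residue)}")
```

Compressing the residues means asking what single architecture keeps every one of those surviving sets viable. The matrix itself does nothing clever; it just makes the breakage patterns visible enough to reason about.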
The Leverage
Something interesting happens when you do this. An architecture designed for one stressor often survives stressors you never designed for.
O'Reilly gives an example from an electric vehicle charging platform. One stressor they considered was keyfob failure: a customer holds up their keyfob, nothing happens, and a queue builds up behind them. The solution was automatic license plate recognition (ALPR): let everyone charge, bill them later based on their plates. ALPR solved the keyfob problem.
Later, a stressor emerged that nobody predicted. In some regions, drivers of fossil fuel vehicles started parking in front of chargers to block electric cars from charging. They'd film themselves doing a dance and post it to TikTok. This behavior, called "ICEing" in the EV community, became widespread enough to be a real operational problem. No requirements document anticipated this.
But the system survived anyway. ALPR captured their plates. They got billed for occupying the spot. The architecture had redundancy for chargers being out of commission. A sliding billing scale made extended occupation expensive. The system resisted a stressor it was never designed for, because the residues from other stressors happened to cover it.
This is the leverage that Residuality Theory exposes. In a complex environment, attractors cluster. Multiple stressors push the system toward similar states. If you design for one stressor in a cluster, you often survive the others without knowing they exist.
How This Differs From Risk Analysis
Risk analysis asks "what might go wrong?" and ranks possibilities by probability and impact. You build a register, estimate likelihoods, prioritize the high-probability high-impact items, and prepare mitigations.
The problem is that probability estimates in complex systems are often meaningless. We don't have the data. We're expressing the biases of the most powerful people in the room, not the actual likelihood of events. And the events that actually hurt us are often the ones we rated as unlikely or never considered at all. Nassim Taleb's work on Black Swan events makes this point sharply: the most consequential events are precisely the ones our models miss.
Residuality Theory ignores probability entirely. It doesn't ask "how likely is this stressor?" It asks "what happens to my system if this stressor occurs?" The goal isn't to predict which stressors will hit. The goal is to understand your system's fault lines so deeply that you build something resilient to stressors you can't predict.
The absurd stressors are intentional. "What if Godzilla attacks your data center?" sounds ridiculous. But answering it exposes fault lines. The system that survives Godzilla also survives floods, fires, regional outages, political instability, and pandemics that prevent physical access. One absurd scenario covers a cluster of realistic ones.
The Edge of Chaos
O'Reilly roots this in complexity science, specifically Kauffman networks. Stuart Kauffman, a theoretical biologist, studied how complex systems self-organize using networks of simple boolean nodes, each either on or off, connected to neighbors. His key finding: networks with too few connections are static and fragile, while networks with too many become chaotic and unpredictable. At a critical threshold between the two, a network is both stable and adaptable, a regime Kauffman and others call the "edge of chaos".
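If you want to see that threshold for yourself, a random boolean network takes only a few lines of Python. This is a toy simulation under simplified assumptions (a handful of nodes, random truth tables, a fixed step budget), not Kauffman's full model, but the qualitative pattern comes through: sparse networks freeze, dense networks wander, and K near 2 sits in between.

```python
import random

def attractor_cycle_length(n_nodes=20, k=2, steps=200, seed=0):
    """Random boolean (NK) network: each node's next state is a random
    boolean function of k randomly chosen input nodes."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(n_nodes), k) for _ in range(n_nodes)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n_nodes)]
    state = tuple(rng.randint(0, 1) for _ in range(n_nodes))

    seen = {}  # state -> step at which it first appeared
    for step in range(steps):
        if state in seen:
            return step - seen[state]  # length of the attractor cycle
        seen[state] = step
        state = tuple(
            tables[i][sum(state[j] << b for b, j in enumerate(inputs[i]))]
            for i in range(n_nodes)
        )
    return None  # no repeat within the budget, typical of the chaotic regime

for k in (1, 2, 5):
    lengths = [attractor_cycle_length(k=k, seed=s) for s in range(10)]
    print(f"K={k}: attractor cycle lengths {lengths}")
```

In typical runs, K=1 networks settle into short cycles almost immediately, K=5 networks often never repeat within the budget, and K=2 lands somewhere between, which is the behavior the "edge of chaos" label points at.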
The details matter less than the insight: systems with too few components and connections are fragile. One stress kills everything. Systems with too many components and connections collapse under their own coordination weight.
The goal is criticality: a balance where the system is flexible enough to survive stresses it doesn't control or understand, but not so complex that maintaining it becomes unsustainable. This maps directly to architecture debates we already have. Monoliths are fragile because one failure cascades everywhere. Microservices are chaotic because the operational overhead eventually overwhelms the team. The modular monolith, the well-structured system with clear boundaries but unified deployment, often sits closer to that critical edge.
Residuality thinking pushes you toward that edge. You stress your simple architecture until it breaks. You add what's needed to survive. You stress it again. You keep going until new stressors stop breaking the system, until it starts pushing back, until it survives things you didn't design for. That's when you've reached criticality.
A Lens, Not a Prescription
I haven't used Residuality Theory in production. I find the concept compelling because it addresses something I've felt but couldn't articulate: the gap between designing for change and actually surviving it.
We've all built systems that were "flexible" in ways that never mattered and rigid in ways that killed us. We've all seen architectures that survived chaos they weren't designed for and architectures that collapsed under the first unexpected stress. Residuality Theory offers a vocabulary for that difference and a method for tilting the odds.
Whether the full methodology is practical for every team, I don't know. But the core question is worth asking: not "what might change?" but "what survives when this breaks?"
If your environment is stable and predictable, this might be overkill. If your environment keeps moving, if requirements shift, if the ground under your architecture never stops changing, this lens might be worth exploring.
Try This
Pick one module or service in your current system. List five stressors that could hit it: a vendor outage, a traffic spike, a regulatory change, a key person leaving, a business pivot. For each one, trace what breaks. Not just the module itself, but what else fails because of dependencies, shared infrastructure, or assumptions baked into the design.
If the same components keep appearing across multiple stressors, you've found hidden coupling. That's a fault line worth understanding, whether or not you use the full Residuality methodology.
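If you keep notes while tracing, the tally is a few lines of Python. The stressor names below mirror the list above; the component names are made-up placeholders for whatever your own tracing turns up.

```python
from collections import Counter

# Hypothetical results of the exercise: for each stressor, the components
# that broke when the failure was traced through the system.
impact = {
    "vendor_outage":     ["payment_adapter", "order_service", "email_queue"],
    "traffic_spike":     ["order_service", "search_index"],
    "regulatory_change": ["order_service", "reporting", "payment_adapter"],
    "key_person_leaves": ["deploy_pipeline"],
    "business_pivot":    ["order_service", "pricing_rules", "reporting"],
}

# Components that reappear across stressors are the hidden coupling.
counts = Counter(c for broken in impact.values() for c in broken)
for component, n in counts.most_common():
    if n > 1:
        print(f"{component}: breaks under {n} of {len(impact)} stressors")
```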
Sources
- Barry O'Reilly, Residues: Time, Change, and Uncertainty in Software Architecture (Leanpub)
- Barry O'Reilly, "Residuality Theory" (NDC Oslo 2024, YouTube)
- Oskar Dudycz, "Residuality Theory: A Rebellious Take on Building Systems That Actually Survive" (Architecture Weekly, 2025)
- Stuart Kauffman, The Origins of Order: Self-Organization and Selection in Evolution (Oxford University Press, 1993)
- Nassim Nicholas Taleb, The Black Swan: The Impact of the Highly Improbable (Random House, 2007)
- Alistair Cockburn, "Hexagonal Architecture" (2005)
- Kamil Grzybek, "Modular Monolith: A Primer" (2019)