When most engineers hear "monolith," they picture a sprawling, tangled codebase. A single file with 10,000 lines. Functions that reach across the entire system. Tests that break when you rename a variable three folders away.
That image is real, but it describes bad code, not monoliths. I've seen microservices just as tangled, except the mess was distributed across six repositories and three message queues. A monolith is a deployment decision: your application runs as a single process instead of being split into dozens of independently deployed services. Whether your code is well-structured has nothing to do with how it deploys.
What Makes a Monolith Modern
The monoliths I see teams build today have clear module boundaries, domain-driven structure, and explicit interfaces between components. A well-structured Node.js application using Fastify has modules for billing, user management, and notifications. Each has its own domain logic, data access layer, and tests. They just happen to run in the same process and share a database connection pool.
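Here's a minimal sketch of that shape, assuming Fastify v4 and the pg driver; the module names, routes, and queries are illustrative, and only two modules are shown for brevity.

```typescript
// Modular monolith sketch: one process, one shared connection pool,
// module boundaries expressed as Fastify plugins.
import Fastify, { FastifyPluginAsync } from 'fastify';
import { Pool } from 'pg';

// One shared pool for the whole process.
const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// Each module keeps its routes, domain logic, and data access behind
// a plugin boundary. In a real codebase these live in their own folders.
const billingModule: FastifyPluginAsync = async (app) => {
  app.get('/invoices', async () => {
    const { rows } = await pool.query('SELECT id, total_cents FROM invoices');
    return rows;
  });
};

const notificationsModule: FastifyPluginAsync = async (app) => {
  app.post('/send', async (_req, reply) => {
    // Domain logic for notifications stays inside this module.
    reply.code(202);
    return { queued: true };
  });
};

const app = Fastify({ logger: true });
app.register(billingModule, { prefix: '/billing' });
app.register(notificationsModule, { prefix: '/notifications' });

app.listen({ port: 3000 });
```

The boundary is a plugin, not a network hop, so a change that touches two modules is still one refactor and one deploy.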
When DHH describes Basecamp serving millions of users across six platforms with 12 programmers, he's describing a system with clear structure that happens to deploy as one unit. Local reasoning and transactional consistency without the operational cost of distributed systems.
How We Got Here
Stories from Netflix, Amazon, and Uber became the architectural playbook for an entire generation of engineers. Those companies had thousands of engineers, extreme load, and genuine coordination problems. Microservices let independent teams deploy on independent timelines without stepping on each other.
But copying that pattern without those constraints creates problems it was never meant to solve. Martin Fowler's MonolithFirst essay makes this point clearly: successful microservice stories almost always started with a monolith that grew too large, while systems built as microservices from scratch often ended up struggling.
Why Small Teams Struggle With Microservices
The promise of microservices is independence. Each service has its own codebase, deployment pipeline, and database. Teams can move without coordinating.
That promise breaks down when you have three engineers responsible for six services.
Microservices assume an organizational structure to match. At Amazon, a service maps to a team that owns it end-to-end. Small teams don't have that. The same person who wrote the billing service also wrote the notification service and is debugging the analytics pipeline. When every feature spans three services, you're multiplying the number of places one person has to think about simultaneously.
Every service needs its own pipeline: CI, deployment automation, health checks, logging, monitoring, alerting. By the sixth service, you're spending 40% of your time keeping infrastructure running instead of shipping features. Kelsey Hightower points out that Kubernetes—often adopted to manage this complexity—creates more operational burden than most teams can handle.
Debugging gets harder. In a monolith, you have a stack trace showing exactly where the error happened. You can reproduce it locally. In microservices, a request touches five services. The error in service D came from malformed data in service A that passed through B and C. Now you're correlating logs and tracing request IDs across systems.
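A rough sketch of the plumbing the distributed version forces on you (hypothetical service URLs, and assuming Node 18+ for the global fetch): every hop forwards a request ID just so the logs can be stitched back together later.

```typescript
// Correlation work a service split forces on you: every hop has to
// forward a request ID so logs can be matched up afterwards.
// Service names and URLs are hypothetical; assumes Node 18+ for fetch.
import { randomUUID } from 'node:crypto';

async function callService(url: string, requestId: string) {
  const res = await fetch(url, { headers: { 'x-request-id': requestId } });
  if (!res.ok) {
    // The error surfaces here, but the root cause may live two services upstream.
    throw new Error(`request ${requestId} failed at ${url}: ${res.status}`);
  }
  return res.json();
}

export async function handleCheckout() {
  const requestId = randomUUID();
  const user = await callService('http://users.internal/v1/me', requestId);
  const quote = await callService('http://billing.internal/v1/quote', requestId);
  // In a monolith these are in-process calls, and a failure comes back
  // as one stack trace instead of a log hunt across systems.
  return { user, quote };
}
```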
Onboarding gets slower. In a monolith, new engineers clone one repo, run pnpm install, and have the system running locally. In microservices, they clone six repos, figure out dependencies, spin up Docker Compose, and hope their laptop has enough RAM. Most teams give up on local development entirely.
API versioning becomes a tax. When one service changes its API, its consumers might not be ready for the new data shape. You version the endpoint, support v1 and v2 simultaneously, coordinate deploys, and deprecate v1 months later. In a monolith, your compiler tells you what breaks immediately.
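For contrast, here's a small TypeScript sketch of the in-process version of the same change, with illustrative file paths and names: reshape a shared type and the compiler flags every caller at build time, with no v1/v2 window.

```typescript
// In-process version of the same change. File paths and names are
// illustrative; the point is that the type checker finds every caller.

// modules/billing/types.ts
export interface Invoice {
  id: string;
  totalCents: number; // reshaped from `total: number` in dollars
}

// modules/notifications/emails.ts
import type { Invoice } from '../billing/types';

export function invoiceEmail(invoice: Invoice): string {
  // If this still read `invoice.total`, `tsc` would fail the build here,
  // instead of a consumer discovering a v2 payload in production.
  return `You owe $${(invoice.totalCents / 100).toFixed(2)}`;
}
```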
When Microservices Make Sense
None of this means microservices are wrong. If you have 200 engineers and genuinely independent product teams, service boundaries let teams move autonomously. If one part of your system sees 100x the load of the rest, splitting it out makes operational sense. If regulatory requirements force data isolation, service boundaries provide that.
But those are specific problems with specific constraints. Most B2B companies building SaaS products for thousands of users don't have those problems. They have 15 engineers building features that touch multiple parts of the system, and every deploy needs coordination anyway because the product story is unified.
As Sam Newman argues in Building Microservices, the benefits only materialize at organizational scale. Below that threshold, you're paying the premium without getting the returns.
Where Complexity Lives
Microservices push complexity into infrastructure: service discovery, distributed tracing, eventual consistency, deployment orchestration. Monoliths push it into code: module boundaries, dependency discipline, architectural tests.
The difference is that you can refactor code. You can add linting rules to enforce boundaries. You can write tests that fail when modules reach into each other's internals. Infrastructure complexity doesn't refactor. It accumulates.
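Here's one way that enforcement can look, written as a plain node:test check. It assumes a src/modules/<name>/ layout where cross-module imports must go through a module's index.ts; both the layout and the regex are illustrative, and ESLint's no-restricted-imports rule can express a similar constraint declaratively.

```typescript
// Architectural test sketch: fail the build when one module imports
// another module's internals instead of its public index.
// Assumes a src/modules/<name>/ layout; adjust paths to your project.
import { test } from 'node:test';
import assert from 'node:assert/strict';
import { readdirSync, readFileSync } from 'node:fs';
import { join } from 'node:path';

const MODULES_DIR = 'src/modules';

// Recursively collect .ts source files under a directory.
function collectTsFiles(dir: string): string[] {
  const files: string[] = [];
  for (const entry of readdirSync(dir, { withFileTypes: true })) {
    const full = join(dir, entry.name);
    if (entry.isDirectory()) files.push(...collectTsFiles(full));
    else if (entry.name.endsWith('.ts')) files.push(full);
  }
  return files;
}

test('modules only import each other through their public index', () => {
  // Matches imports like '../billing/db/queries' but not '../billing/index'.
  const deepImport = /from\s+['"]\.\.\/[\w-]+\/(?!index\b)[\w/-]+['"]/;
  for (const file of collectTsFiles(MODULES_DIR)) {
    const source = readFileSync(file, 'utf8');
    assert.ok(
      !deepImport.test(source),
      `${file} reaches into another module's internals`
    );
  }
});
```

Run it in CI next to the rest of the test suite and the boundary holds without anyone having to remember it during review.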
For most teams building most products, the monolith works better. Your architecture should help you ship features, not get in the way.
Try This
Look at your current system. How many services do you have? How many engineers? How often do you deploy a feature that only touches one service?
If you have more services than engineers, or if most features require coordinated deploys across services, you have the costs of microservices without the benefits.
Track how your team spends time over two weeks. How much goes to features versus deploying services, debugging distributed issues, or maintaining infrastructure? If the ratio is worse than 70/30, your architecture is working against you.
Next
2/4: Building a Modular Monolith