The modular monolith isn't a temporary solution while you wait to "graduate" to microservices. For most teams, it's where you should stay. But sometimes you do need to split services out, and knowing when matters.
The decision isn't about size or maturity. It's about constraints. Specific, measurable constraints that make distribution worth its cost.
Reasons to Stay Monolithic
Team size is the most reliable indicator. If you have fewer than 50 engineers, you probably don't have the organizational problems that microservices solve. Coordination happens through conversations, not service boundaries. Everyone can still understand the full system.
Most features touch multiple domains. When you add a new project type, you need to update billing logic, notification templates, and analytics tracking. In a monolith, that's three modules in one pull request. In microservices, it's three repos, three deploys, and careful coordination to avoid breaking changes.
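A minimal sketch of what that looks like in code. The module names (`billing`, `notifications`, `analytics`) and their APIs are illustrative, not from any real codebase: three modules, each with a small public API, updated together in one process and one pull request.

```javascript
// Hypothetical modules, each exposing a narrow public API.
const billing = {
  plans: new Set(['basic']),
  registerPlan(name) { this.plans.add(name); },
};

const notifications = {
  templates: new Map(),
  registerTemplate(key, text) { this.templates.set(key, text); },
};

const analytics = {
  events: [],
  track(event) { this.events.push(event); },
};

// Adding a new project type touches all three modules, but only through
// their public APIs: one function, one pull request, one deploy.
function addProjectType(name) {
  billing.registerPlan(name);
  notifications.registerTemplate(`project:${name}`, `New ${name} project created`);
  analytics.track({ type: 'project_type_added', name });
}

addProjectType('research');
console.log(billing.plans.has('research')); // true
```

In microservices, each of those three calls becomes a versioned API change in a separate repo, deployed and coordinated separately.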
Your deployment cadence is unified. Product wants features to ship together because they tell a coherent story to users. Marketing wants coordinated releases. Sales needs predictable timelines. Fighting for independent deploys when nobody wants them wastes energy.
You value simplicity over scalability. One deploy is easier than six. One database is easier than distributed transactions. One repository is easier than keeping dependencies synchronized across codebases. Until that simplicity becomes a bottleneck, there's no reason to give it up.
Signals That Suggest Splitting
Some parts of your system have genuinely different scaling needs. Your image processing service handles 100x the requests of your CRUD API. Scaling them together means sizing the whole deployment for the hottest path, paying for instances of everything when only one module needs that capacity. Splitting lets you scale each part independently based on its actual load.
Different modules have different data residency requirements. GDPR requires customer data in EU servers. Other data can live anywhere. Separate services with separate databases give you that isolation cleanly.
You have independent teams with independent roadmaps. Two product teams work on different features with different release cycles. They rarely need to coordinate changes. Service boundaries let them move without waiting for each other.
One module has different technology requirements. Your main API is Node.js, but your ML inference pipeline works better in Python. You've tried keeping it in the monolith, but the deployment complexity isn't worth it. Splitting gives you the flexibility you actually need.
Specific compliance requirements force isolation. PCI compliance for payment processing requires strict boundaries. Building those boundaries within a monolith is harder than separating the service entirely.
The Migration Path
If you've built a modular monolith, extracting a service is straightforward. The module already has a defined public API. Other modules only call that API, never reaching into internals. The boundary already exists in code.
Converting the module to a service means wrapping the API in HTTP. Replace function calls with HTTP requests. Deploy the service independently. Update calling code to use the HTTP client instead of direct imports. The business logic doesn't change.
Database separation takes more work. If you used schemas to separate tables, you can split the database without changing queries. If modules were already calling each other's APIs for data, those same calls now cross the network. The pattern stays the same; only the transport changes.
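A sketch of why schema separation makes the split cheap. The schema names (`billing`, `projects`) and connection strings are assumed for illustration; the query objects are shaped for a driver like node-postgres. Because every query is already schema-qualified, moving a schema to its own database later changes a module's connection, not its SQL.

```javascript
// One connection entry per module. Today both point at the same database;
// after a split, 'billing' simply gets its own connection string.
const connections = {
  billing: { connectionString: 'postgres://localhost/app' },
  projects: { connectionString: 'postgres://localhost/app' },
};

// Queries live inside their module and only touch that module's schema.
const billingQueries = {
  findInvoice: (id) => ({
    text: 'SELECT id, amount FROM billing.invoices WHERE id = $1',
    values: [id],
  }),
};

// The query object is ready to hand to a driver such as node-postgres.
console.log(billingQueries.findInvoice('inv_1').text);
```

If a module instead reached into another module's tables with cross-schema joins, this split would force query rewrites, which is exactly the coupling the modular monolith is supposed to prevent.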
This is why the modular monolith matters. You've done the hard work of defining boundaries and eliminating coupling. Moving to separate deployment is a refactor, not a rewrite.
The Threshold Question
Microservices make sense when the coordination cost of a shared codebase exceeds the operational cost of distributed systems. That threshold is real, but higher than most teams think.
At 10 engineers, coordination cost is low. You talk to each other. You share context naturally. Distributed systems cost is high. You need deployment pipelines, monitoring, service discovery, and people who know how to run it all.
At 200 engineers, the equation flips. Coordination cost is high. Merge conflicts are constant. Deploy queues are long. Context is fragmented. Distributed systems cost becomes acceptable because you have the people and infrastructure to handle it.
The threshold sits somewhere between 50 and 100 engineers for most organizations. Below that, stay monolithic. Above that, consider splitting. But consider based on actual pain, not theoretical concerns.
What About Matteo Collina and Platformatic?
Matteo Collina, Platformatic co-founder and long-time Node.js core contributor, has been vocal about the costs of premature distribution. His talks emphasize that most Node.js applications don't need microservices. They need good structure.
Platformatic exists specifically to support this model. It lets you build services that compose together, giving you the modularity benefits without forcing you to deploy independently until you need to. The framework makes the monolith-to-services path smoother because the boundaries are already there.
This isn't about avoiding microservices forever. It's about not paying for them before you need them. When you do need them, the transition is easier because you've built the right foundations.
Common Mistakes
Splitting because you read it's the "right way." Architecture doesn't have universal right answers. It has trade-offs that match different constraints. Copying Netflix's architecture when you're not Netflix sets you up to fail.
Splitting to solve organizational problems. If teams aren't collaborating well in a monolith, they probably won't collaborate well with microservices. Service boundaries don't fix communication problems. They just relocate them to the network, where they surface at runtime instead of in code review.
Splitting because the codebase feels messy. Mess is a structure problem, not a deployment problem. A messy monolith becomes messy microservices. Clean up the structure first, then decide if you need distribution.
Splitting to improve scalability before measuring load. Most systems never reach the scale where independent service scaling matters. Vertical scaling and horizontal scaling of the monolith handle far more traffic than people assume.
Making the Decision
Write down the specific problem you're trying to solve. "We need to scale better" isn't specific enough. "The image processing endpoint sees 10x the load of everything else and requires different instance types" is specific.
For each problem, ask: can we solve this within the monolith? Often the answer is yes. PostgreSQL can handle more load than you think. Caching can reduce database pressure significantly. Horizontal scaling of the monolith works until you're very large.
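As one concrete example of solving load problems inside the monolith, here is a minimal in-memory cache with a TTL. It is a sketch, not a production cache (no eviction, no size limit, single process), and the names are illustrative; the point is how little code it takes to cut repeated database reads before reaching for service extraction.

```javascript
// Minimal TTL cache: serve repeated reads from memory, reload after expiry.
function createCache(ttlMs) {
  const entries = new Map();
  return {
    async get(key, loader) {
      const hit = entries.get(key);
      if (hit && hit.expires > Date.now()) return hit.value;
      const value = await loader();
      entries.set(key, { value, expires: Date.now() + ttlMs });
      return value;
    },
  };
}

// Usage: wrap a hot read path. Within the TTL, repeated calls skip the
// database entirely; loadUser here stands in for a real query.
const cache = createCache(5000);
let dbCalls = 0;
const loadUser = () => { dbCalls += 1; return Promise.resolve({ id: 'u_1' }); };

cache.get('user:u_1', loadUser)
  .then(() => cache.get('user:u_1', loadUser))
  .then(() => console.log(dbCalls)); // 1
```

A pattern like this, or an off-the-shelf cache in front of PostgreSQL, often removes the load argument for splitting entirely.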
If you genuinely need to split, start with one service. The one with the clearest boundary and strongest reason to separate. See what problems emerge. Learn from them before splitting further.
Try This
List the reasons you think you might need microservices. For each one, write down whether it's a problem you have now or a problem you're afraid of having later. If most of them are in the "afraid of" column, you don't need microservices yet.
Next
6/6: Operating the Monolith—Simplicity as Strategy