Deep Dive - Your Microservices are Actually a Distributed Monolith
You broke your legacy code into 50 distinct services. You deployed them in containers managed by Kubernetes. You have a fancy architecture diagram with lines connecting every box.
But you have a dirty secret.
If Service A (the User Service) goes down, Service B (the Order Service) starts throwing errors. If you need to deploy a database migration for the Inventory Service, you have to coordinate with the Billing team to stop their deployments.
You did not build microservices. You built a Distributed Monolith.
You have taken all the complexity of a single codebase and added the two things that kill performance: Network Latency and Serialization Costs. This post will dissect the technical anti-patterns that create this architectural nightmare and show you how to fix them using Temporal Decoupling and Bounded Contexts.
The Synchronous Coupling
The most common mistake is treating an HTTP request between services just like a function call in a monolith.
In a monolith, function A calls function B in memory. It is fast (nanoseconds) and reliable. In a distributed monolith, Service A calls Service B over the network via REST or gRPC. That call is slow (milliseconds) and can fail.
This creates Temporal Coupling. For Service A to do its job, Service B must be alive and responding at that exact second. They are coupled in time.
The Availability Math
Let us look at the math. Suppose you have a call chain where A calls B, which calls C, which calls D. Assume each service has a Service Level Objective (SLO) of 99.9% uptime.
Availability(total) = 99.9% × 99.9% × 99.9% × 99.9% ≈ 99.6%
By chaining four services together synchronously you have mathematically guaranteed that your system is less reliable than any single service within it.
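The math above can be sketched in a few lines. This is a minimal illustration, not a real SLO calculator; the 99.9% figures are the assumed SLOs from the example.

```python
# Chained availability: each synchronous hop multiplies in its own SLO.
# Assumes four services, each at 99.9% availability, and independent failures.
def chain_availability(slos):
    """Probability the whole synchronous call chain succeeds."""
    total = 1.0
    for slo in slos:
        total *= slo
    return total

four_service_chain = chain_availability([0.999] * 4)
print(f"{four_service_chain:.4%}")  # ~99.60% — worse than any single service
```

Add a fifth or sixth hop to the list and watch the number keep falling; every synchronous dependency is a tax on your overall SLO.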
The Compounding Latency
It gets worse. In a synchronous chain the latency is additive.
Latency(total) = L(A) + L(B) + L(C) + L(D) + (4 × NetworkOverhead)
If Service C has a database hiccup and slows down by 500 ms, Service A also slows down by 500 ms. The user waits. You have created a system where performance is determined by the slowest single component in the entire chain.
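The additive-latency formula is easy to see in code. A small sketch, with the per-service latencies and the 5 ms network overhead chosen purely for illustration:

```python
# Additive latency in a synchronous chain: one slow hop slows the whole request.
def chain_latency_ms(service_latencies_ms, network_overhead_ms=5):
    """Total latency = sum of per-service latencies + one network hop each."""
    return sum(service_latencies_ms) + network_overhead_ms * len(service_latencies_ms)

baseline = chain_latency_ms([20, 30, 40, 25])    # all services healthy
degraded = chain_latency_ms([20, 30, 540, 25])   # Service C hiccups: +500 ms
print(degraded - baseline)  # 500 — the user feels C's hiccup in full
```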
The Shared Database
This is the fastest way to destroy a microservice architecture.
To save time, you decided to let Service A and Service B read from the same Postgres instance. Maybe they even look at the same tables.
The Schema Coupling
In a true microservice architecture a service should be able to change its internal data storage without telling anyone. It is a black box.
If Service A and Service B share a database, Service A cannot change a column name or update a schema without breaking Service B.
The result: deployments become coupled. You cannot deploy A until B is updated. You have lost Independent Deployability, which was the whole point of microservices.
The Database as an Integration Layer
Using the database to share data allows services to bypass the API contract. Service B might read data that Service A is in the middle of writing. You lose all encapsulation. The database becomes the public API of your system, which is a massive security and stability risk.
In fact, a true microservice must be the only thing that touches its own database tables. If another service wants that data, it must ask via an API.
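Here is what that ownership looks like in miniature. The class and method names are illustrative, and in-process calls stand in for real HTTP endpoints; the point is that only the Inventory service touches its own storage:

```python
# Sketch: the Inventory service owns its table; other services go through its
# public API, never the underlying storage. Names are hypothetical.
class InventoryService:
    def __init__(self):
        self._table = {"sku-1": 10}  # private storage; schema can change freely

    def get_stock(self, sku: str) -> int:
        # Public contract: stays stable even if _table's layout changes.
        return self._table.get(sku, 0)

class OrderService:
    def __init__(self, inventory_api: InventoryService):
        self.inventory = inventory_api  # depends on the API, not the database

    def can_fulfill(self, sku: str, qty: int) -> bool:
        return self.inventory.get_stock(sku) >= qty

inventory = InventoryService()
orders = OrderService(inventory)
print(orders.can_fulfill("sku-1", 5))  # True
```

Inventory can rename `_table`, move to a different database, or restructure its rows entirely; as long as `get_stock` keeps its contract, Order never notices.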
Distributed Transactions
In a monolith you have ACID transactions. You can update the Inventory table and the Order table in one transaction. It either all happens or none of it happens.
In a distributed system, you typically cannot do this. But engineers try anyway, using Two Phase Commit (2PC).
Why 2PC Is a Trap
In a Two Phase Commit, the transaction coordinator tells Service A and Service B to “Prepare” (lock their rows). If both say yes, it tells them to “Commit”.
The Blocking Problem - During the “Prepare” phase, the database rows are locked. If the Coordinator crashes or Service B is slow, Service A is frozen: it holds its locks, preventing any other user from touching that inventory.
Throughput Killer - Distributed transactions limit your throughput to the speed of your slowest network link. They are inherently unscalable.
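The blocking problem is easiest to see in a toy model. This is a deliberately simplified sketch of the 2PC protocol (no timeouts, no recovery log), just enough to show participants stuck holding locks when the coordinator dies mid-protocol:

```python
# Minimal 2PC sketch showing the blocking problem. If the coordinator dies
# between prepare and commit, every participant is stuck holding its locks.
class Participant:
    def __init__(self, name: str):
        self.name = name
        self.locked = False

    def prepare(self) -> bool:
        self.locked = True   # rows locked until a commit/abort arrives
        return True          # vote "yes"

    def commit(self):
        self.locked = False  # locks released only on coordinator's say-so

def two_phase_commit(participants, coordinator_alive=True):
    # Phase 1: everyone locks and votes.
    if not all(p.prepare() for p in participants):
        return False
    # Phase 2: the coordinator must survive to release the locks.
    if not coordinator_alive:
        return None          # participants remain locked — blocked indefinitely
    for p in participants:
        p.commit()
    return True

a, b = Participant("inventory"), Participant("billing")
two_phase_commit([a, b], coordinator_alive=False)
print(a.locked, b.locked)  # True True — both services stuck holding locks
```

Real 2PC implementations add timeouts and recovery logs, but the fundamental window where participants can do nothing but wait never goes away.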
So what would fix this?
Decoupling with Domain Driven Design
To turn a Distributed Monolith into real Microservices you must break these couplings.
Temporal Decoupling with Async Queues
Instead of Service A calling Service B and waiting, use an Event Driven Architecture.
Consider this scenario: once an order is placed, your system has to ship it.
Most of the time, I see the Order Service call the Shipping Service and wait for a 200 OK. Instead, the Order Service should publish an OrderPlaced event to a Message Queue (Kafka) topic and immediately return success to the user.
The benefit: the Shipping Service can be offline. It can be slow. It can be mid-upgrade. The Order Service does not care. They are decoupled in time.
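The pattern can be sketched with an in-memory queue standing in for the Kafka topic. The function names and event shape are illustrative, not a real Kafka client:

```python
import queue

# Temporal decoupling sketch: an in-memory queue stands in for a Kafka topic.
# The Order service publishes and returns; Shipping consumes at its own pace.
events = queue.Queue()

def place_order(order_id: str) -> str:
    events.put({"type": "OrderPlaced", "order_id": order_id})
    return "success"  # returns immediately; Shipping may be down right now

def shipping_worker():
    # Runs later, whenever Shipping is up — the services are decoupled in time.
    shipped = []
    while not events.empty():
        event = events.get()
        if event["type"] == "OrderPlaced":
            shipped.append(event["order_id"])
    return shipped

print(place_order("order-42"))  # success (no waiting on Shipping)
print(shipping_worker())        # ['order-42']
```

Notice that `place_order` succeeds even if `shipping_worker` never runs in that moment; with a durable broker like Kafka, the event simply waits until Shipping comes back.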
Bounded Contexts
You must define boundaries based on Domain Driven Design (DDD) not data entities.
You may think that the User Service must hold everything about a User and every other service must call it. This creates dependency hell for the User Service. In fact, we must duplicate the data where needed. The Shipping Service needs the user's address. The Billing Service needs the user's payment details. Each should keep its own local copy of the data it needs.
When a user updates their address, a UserAddressUpdated event is sent to the Shipping Service via the message queue. The Shipping Service then updates its local copy of the user's address.
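The local-copy pattern looks like this in miniature. The class, event fields, and address are all illustrative stand-ins:

```python
# Bounded-context sketch: Shipping keeps its own copy of each user's address
# and refreshes it from UserAddressUpdated events. Names are hypothetical.
class ShippingService:
    def __init__(self):
        self.addresses = {}  # local copy — no runtime call to the User service

    def handle(self, event: dict):
        if event["type"] == "UserAddressUpdated":
            self.addresses[event["user_id"]] = event["address"]

    def label_for(self, user_id: str) -> str:
        return self.addresses[user_id]

shipping = ShippingService()
shipping.handle({"type": "UserAddressUpdated",
                 "user_id": "u1", "address": "221B Baker St"})
print(shipping.label_for("u1"))  # 221B Baker St
```

The copy is eventually consistent, and that is the trade: Shipping may print a label with an address that is a few seconds stale, but it can do so even while the User Service is completely down.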
Conclusion
How do you know if you have built a Distributed Monolith? Ask yourself one question.
Can I deploy a major change to Service A and shut down its database without notifying the team that runs Service B?
If the answer is No, you are running a Monolith. You have paid the high price of distributed complexity but purchased none of the benefits.


