Why We Chose Modular Monolith Over Microservices

Every greenfield .NET project faces the same fork in the road: monolith or microservices? The industry has spent a decade pushing teams toward microservices, often before they need them. Granit takes a deliberately different stance.

Architecture is not binary. It is a spectrum with real trade-offs at every point.

```mermaid
graph LR
    A["Monolith"] --> B["Modular Monolith"]
    B --> C["Microservices"]

    style A fill:#e8f5e9,stroke:#388e3c,color:#1b5e20
    style B fill:#fff3e0,stroke:#f57c00,color:#e65100
    style C fill:#ffebee,stroke:#c62828,color:#b71c1c
```

| | Monolith | Modular Monolith | Microservices |
| --- | --- | --- | --- |
| Deployment | Single unit | Single unit | Independent per service |
| Module boundaries | None or conventions | Enforced in code | Enforced by network |
| Data isolation | Shared tables | Isolated DbContext per module | Separate databases |
| Communication | Direct calls | In-process channels or messaging | Network (HTTP, gRPC, messaging) |
| Operational cost | Low | Low | High |
| Team scalability | Limited | Good | High |

Moving right on the spectrum adds operational cost: distributed tracing, network failure handling, data consistency challenges, deployment orchestration, contract versioning. That cost is justified only when the benefit — independent scaling, team autonomy, fault isolation — outweighs it.

Most teams adopt microservices for the wrong reasons:

  1. “We need to scale.” — In practice, 90% of applications serve fewer than 100k requests per minute. A single well-tuned .NET process handles that comfortably.

  2. “We need team autonomy.” — Team boundaries do not require network boundaries. Module boundaries enforced at the compiler level provide the same decoupling without the serialization overhead.

  3. “Everyone else does it.” — Survivorship bias. You hear about Netflix and Uber’s microservices. You don’t hear about the hundreds of startups that collapsed under the weight of premature decomposition.

The real cost is rarely discussed: a microservices architecture needs dedicated infrastructure for service discovery, configuration management, distributed tracing, circuit breakers, contract testing, and deployment orchestration. For a team of 5–15 developers, this overhead dwarfs the complexity of the application itself.

A well-designed modular monolith provides the isolation benefits that matter without the operational tax.

In Granit, every module declares its dependencies explicitly:

```csharp
[DependsOn(typeof(GranitPersistenceModule))]
[DependsOn(typeof(GranitSecurityModule))]
public class InvoiceModule : GranitModule
{
    public override void ConfigureServices(IServiceCollection services)
    {
        // This module can only access services from declared dependencies
    }
}
```

Circular dependencies are rejected at startup. Architecture tests enforce the DAG at build time. This gives you the same contract clarity as microservice boundaries — without the network hop.
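The startup rejection can be pictured as a depth-first walk over the declared `[DependsOn]` attributes. This is a minimal sketch of the idea, not Granit's actual implementation; the `DependsOnAttribute` shape with a `ModuleType` property is an assumption inferred from the attribute usage above.

```csharp
// Sketch: fail fast at startup if the declared [DependsOn] edges form a cycle.
// DependsOnAttribute.ModuleType is an assumed property, shown for illustration.
static void AssertAcyclic(IEnumerable<Type> modules)
{
    var visiting = new HashSet<Type>(); // nodes on the current DFS path
    var done = new HashSet<Type>();     // nodes fully explored

    void Visit(Type module)
    {
        if (done.Contains(module)) return;
        if (!visiting.Add(module))
            throw new InvalidOperationException(
                $"Circular module dependency involving {module.Name}");

        foreach (var dep in module
                     .GetCustomAttributes(typeof(DependsOnAttribute), inherit: false)
                     .Cast<DependsOnAttribute>())
            Visit(dep.ModuleType); // back edge here means a cycle

        visiting.Remove(module);
        done.Add(module);
    }

    foreach (var m in modules) Visit(m);
}
```

The same traversal, run as an architecture test, enforces the DAG at build time as well as at startup.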

Database isolation without separate databases

Each module owns an isolated DbContext. No shared tables, no accidental coupling:

```csharp
public class InvoiceDbContext : DbContext
{
    // Only Invoice entities — cannot access Identity or Notification tables
    public DbSet<Invoice> Invoices => Set<Invoice>();
    public DbSet<InvoiceLine> InvoiceLines => Set<InvoiceLine>();
}
```

When extraction day comes, the database split is mechanical: point the connection string to a new database and run the module’s migrations independently.
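One way to make that split even more mechanical (a setup choice sketched here, not something the source states Granit requires) is to map each module's DbContext to its own schema, so the eventual database move is a schema move rather than a table-by-table untangling:

```csharp
public class InvoiceDbContext : DbContext
{
    public InvoiceDbContext(DbContextOptions<InvoiceDbContext> options)
        : base(options) { }

    public DbSet<Invoice> Invoices => Set<Invoice>();

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Every table this module owns lives under the "invoice" schema,
        // keeping its footprint in the shared database clearly delimited.
        modelBuilder.HasDefaultSchema("invoice");
    }
}
```

`HasDefaultSchema` is standard EF Core; swapping the connection string then carries the whole schema to the new database.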

Modules communicate through channels — fast, in-process, zero serialization:

```csharp
public static async Task Handle(
    InvoiceApprovedEvent @event,
    IMessageBus bus)
{
    // This handler works identically with channels and Wolverine
    await bus.PublishAsync(new SendInvoiceNotification(@event.InvoiceId));
}
```

The key insight: the same handler code works with both channels and Wolverine. When you later need durable messaging with transactional outbox guarantees, add GranitWolverinePostgresqlModule. Your business logic stays untouched.
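The in-process transport can be pictured with `System.Threading.Channels`. This is a minimal sketch of the technique, not Granit's actual `IMessageBus` implementation:

```csharp
using System.Threading.Channels;

// Sketch of an in-process bus: publishing is a write to an in-memory queue —
// no serialization, no network hop. Not Granit's real implementation.
public sealed class ChannelMessageBus
{
    private readonly Channel<object> _channel = Channel.CreateUnbounded<object>();

    public ValueTask PublishAsync(object message) =>
        _channel.Writer.WriteAsync(message);

    // A background consumer drains the channel and dispatches to handlers.
    public async Task ConsumeAsync(Func<object, Task> dispatch, CancellationToken ct)
    {
        await foreach (var message in _channel.Reader.ReadAllAsync(ct))
            await dispatch(message);
    }
}
```

Because the handler only sees the `IMessageBus` abstraction, swapping this channel-backed sketch for a durable Wolverine transport changes the wiring, not the business logic.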

Granit does not argue against microservices. It argues against premature microservices. The framework is designed so that extraction is a non-event when the trade-off justifies it.

Stay with the modular monolith while all of the following hold:

  • Fewer than 5 developers on the codebase
  • Under 100k requests per minute
  • Single team with a shared release cycle
  • No module requires independent scaling
  • Deployment frequency is uniform across modules

Because Granit enforces the same patterns in both modes, extraction follows a predictable path:

  1. Identify the candidate — look for modules with scaling, cadence, or isolation requirements that differ from the rest.
  2. Create a new host — a separate Program.cs that composes only the extracted module and its dependencies.
  3. Switch to Wolverine — replace channel-based messaging with durable Wolverine transport. Handler code does not change.
  4. Split the database — the module already owns its DbContext. Point the connection string to a new database.
  5. Deploy independently — the module now runs as its own service.

No rewrite. No “big bang” migration. One module at a time.
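Step 2 of the path above might look like this in outline. The `AddGranitModule` extension is a hypothetical name used purely for illustration; the source does not show Granit's host-composition API:

```csharp
// Hypothetical extraction host (Program.cs) for the Invoice module.
// AddGranitModule<T> is an illustrative name, not a confirmed Granit API.
var builder = WebApplication.CreateBuilder(args);

// Compose only the extracted module; its declared [DependsOn] edges
// pull in the persistence and security modules it needs.
builder.Services.AddGranitModule<InvoiceModule>();

var app = builder.Build();
app.Run();
```

The rest of the original host keeps running unchanged; only the extracted module's traffic moves to the new deployment.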

Building Granit, we studied the architecture choices of teams across dozens of .NET projects. The pattern was consistent:

  • Teams that started with microservices spent their first year building infrastructure instead of features.
  • Teams that started with a well-structured modular monolith shipped faster and extracted services only when data proved the need.
  • The teams that struggled most were those with monoliths that lacked internal boundaries — making later extraction a rewrite.

Granit exists to make the modular monolith the path of least resistance: enforce boundaries from day one, make extraction mechanical when needed, and never force a team into operational complexity they haven’t earned.