
Modular Monolith vs Microservices

Architecture is not a binary choice between monolith and microservices. It is a spectrum, and the right position depends on your team size, operational maturity, and scaling requirements. Granit is designed to let you start at the sweet spot and move along the spectrum when the trade-offs justify it.

```mermaid
graph LR
    A["Monolith<br/>Low complexity<br/>Low ops cost"] --> B["Modular Monolith<br/>Moderate complexity<br/>Low ops cost"]
    B --> C["Microservices<br/>High complexity<br/>High ops cost"]

    style A fill:#e8f5e9,stroke:#388e3c,color:#1b5e20
    style B fill:#fff3e0,stroke:#f57c00,color:#e65100
    style C fill:#ffebee,stroke:#c62828,color:#b71c1c
```
|                    | Monolith            | Modular Monolith                 | Microservices              |
| ------------------ | ------------------- | -------------------------------- | -------------------------- |
| Deployment         | Single unit         | Single unit                      | Independent per service    |
| Module boundaries  | None or conventions | Enforced in code                 | Enforced by network        |
| Data isolation     | Shared tables       | Isolated DbContext per module    | Separate databases         |
| Communication      | Direct method calls | In-process channels or messaging | Network (HTTP, messaging)  |
| Operational cost   | Low                 | Low                              | High                       |
| Team scalability   | Limited             | Good                             | High                       |
| Failure isolation  | None                | Partial                          | Full                       |

Every step to the right adds operational cost: distributed tracing, network failure handling, data consistency challenges, deployment orchestration. That cost is justified only when the benefit — independent scaling, team autonomy, fault isolation — outweighs it.

Granit is designed for modular monoliths. Every architectural decision — isolated DbContext per module, explicit DependsOn graphs, channel-based messaging — enforces module boundaries that mirror microservice boundaries. This is deliberate: when extraction becomes necessary, the seams already exist.

The same GranitModule that runs inside a modular monolith can run as the entry point of an independent service. No rewrite required.
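
To make that concrete, here is a minimal sketch of such a module. The [DependsOn] attribute and the GranitModule base class are taken from this document; the ConfigureServices override and the registered service names are assumptions for illustration only.

```csharp
// Sketch only: the exact GranitModule API surface is assumed.
[DependsOn(typeof(GranitPersistenceModule))]
public class GranitNotificationsModule : GranitModule
{
    public override void ConfigureServices(IServiceCollection services)
    {
        // These registrations are identical whether the module is hosted
        // inside the monolith or as a standalone service.
        services.AddScoped<INotificationSender, EmailNotificationSender>();
    }
}
```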

A modular monolith is a single deployable process composed of multiple modules, each with clear boundaries.

Program.cs

```csharp
var builder = WebApplication.CreateBuilder(args);

builder.AddGranit(granit =>
{
    // Core infrastructure
    granit.AddModule<GranitPersistenceModule>();
    granit.AddModule<GranitCachingHybridModule>();
    granit.AddModule<GranitObservabilityModule>();

    // Domain modules — each owns its DbContext
    granit.AddModule<GranitIdentityModule>();
    granit.AddModule<GranitWorkflowModule>();
    granit.AddModule<GranitNotificationsModule>();
    granit.AddModule<GranitBlobStorageModule>();

    // Application module
    granit.AddModule<AppHostModule>();
});

var app = builder.Build();
app.UseGranit();
app.Run();
```

What this gives you:

  • Module isolation — each module declares its dependencies via [DependsOn]. Circular references are rejected at startup.
  • Database isolation — each *.EntityFrameworkCore module owns a separate DbContext. No shared tables, no accidental coupling.
  • In-process messaging — modules communicate through channels without requiring Wolverine or any external infrastructure.
  • Single process — inter-module calls are fast method invocations. No serialization, no network latency.
  • Bundles — related modules are grouped into bundles for convenience, but each module remains independently configurable.
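
The in-process messaging point can be illustrated with .NET's own System.Threading.Channels, the primitive this style of messaging builds on. Granit's actual channel API may differ, and the OrderPlaced record is a made-up example type.

```csharp
using System.Threading.Channels;

// An in-process channel carries messages between modules inside one
// process: no serialization, no network hop.
var channel = Channel.CreateUnbounded<OrderPlaced>();

// Publisher side (e.g. an Orders module) writes without knowing the consumer.
await channel.Writer.WriteAsync(new OrderPlaced(OrderId: 42));

// Consumer side (e.g. a Notifications module) reads the same CLR object.
OrderPlaced message = await channel.Reader.ReadAsync();
Console.WriteLine($"Order {message.OrderId} placed");

public record OrderPlaced(int OrderId);
```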

When a module needs independent scaling, a different deployment cadence, or team-level ownership, extract it into its own service.

NotificationService/Program.cs

```csharp
var builder = WebApplication.CreateBuilder(args);

builder.AddGranit(granit =>
{
    granit.AddModule<GranitPersistenceModule>();
    granit.AddModule<GranitObservabilityModule>();
    granit.AddModule<GranitWolverinePostgresqlModule>();

    // This module now runs as an independent service
    granit.AddModule<GranitNotificationsModule>();
});

var app = builder.Build();
app.UseGranit();
app.Run();
```

What changes:

  • Wolverine replaces channels — durable messaging over PostgreSQL transport handles inter-service communication with transactional outbox guarantees.
  • Separate database — the module already had its own DbContext. Now it gets its own PostgreSQL database.
  • Independent deployment — ship notification changes without redeploying the entire platform.
  • Independent scaling — scale notification processing horizontally without scaling the rest of the system.
  • Multi-tenancy per service — each service can apply its own tenant isolation strategy.

What stays the same:

  • The GranitModule class and its [DependsOn] declarations.
  • Message handlers — a handler that processed channel messages processes Wolverine messages with zero code changes.
  • The DbContext and all entity configurations.
  • Observability — context propagation ensures distributed traces span services.
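
The “zero code changes” claim about handlers can be sketched as follows. The handler shape and interface names here are assumptions, not Granit's or Wolverine's documented contracts.

```csharp
// Hypothetical handler — the method signature is an assumption.
// The point from the list above: this body never changes, whether the
// UserRegistered message arrives over an in-process channel or over
// Wolverine's PostgreSQL transport.
public class SendWelcomeEmailHandler
{
    private readonly IEmailSender _email;

    public SendWelcomeEmailHandler(IEmailSender email) => _email = email;

    public Task Handle(UserRegistered message)
        => _email.SendAsync(message.Email, "Welcome aboard!");
}

public record UserRegistered(string Email);
```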

The migration from modular monolith to extracted microservices is incremental. No big-bang rewrite.

```mermaid
graph TB
    subgraph "Step 1-2: Modular Monolith"
        M1["Module A"] -->|Channel| M2["Module B"]
        M1 -->|Channel| M3["Module C"]
        M2 -->|Channel| M3
    end

    subgraph "Step 3-5: Extracted Service"
        N1["Module A"] -->|Wolverine| N2["Module B<br/>(separate service)"]
        N1 -->|Channel| N3["Module C<br/>(still in monolith)"]
    end

    style M1 fill:#e8eaf6,stroke:#3f51b5,color:#1a237e
    style M2 fill:#e8eaf6,stroke:#3f51b5,color:#1a237e
    style M3 fill:#e8eaf6,stroke:#3f51b5,color:#1a237e
    style N1 fill:#e8eaf6,stroke:#3f51b5,color:#1a237e
    style N2 fill:#fff3e0,stroke:#f57c00,color:#e65100
    style N3 fill:#e8eaf6,stroke:#3f51b5,color:#1a237e
```

Deploy all modules in a single process. Use channel-based messaging for inter-module communication. This is the default and covers most teams.

The question is never “should we use microservices?” — it is “does extracting this specific module justify the operational cost?” Extraction is rarely justified when:

  • Fewer than 5 developers work on the codebase
  • Traffic stays under 100k requests per minute
  • A single team shares one release cycle
  • No module requires independent scaling
  • Deployment frequency is uniform across modules

Granit does not bet on one architecture. It enforces patterns that work regardless of deployment topology:

| Pattern                              | Monolith benefit                    | Microservice benefit             |
| ------------------------------------ | ----------------------------------- | -------------------------------- |
| Isolated DbContext per module        | No accidental table coupling        | Database split is mechanical     |
| [DependsOn] graph                    | Startup validation, no circular refs| Explicit service contracts       |
| Channel / Wolverine messaging        | In-process decoupling               | Durable cross-service messaging  |
| Context propagation (OpenTelemetry)  | Structured logs per module          | Distributed traces across services |
| Multi-tenancy (ICurrentTenant)       | Tenant-scoped queries               | Per-service tenant strategy      |
| GranitModule as entry point          | Composable monolith                 | Independent service host         |
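
As one example of the multi-tenancy row, a tenant-scoped query might look like the sketch below. ICurrentTenant is named in this document, but its members (here, an Id property) and the surrounding Invoice and AppDbContext types are assumptions.

```csharp
// Sketch only: ICurrentTenant's members and the Invoice/AppDbContext types are assumed.
public class InvoiceQueries
{
    private readonly AppDbContext _db;
    private readonly ICurrentTenant _tenant;

    public InvoiceQueries(AppDbContext db, ICurrentTenant tenant)
        => (_db, _tenant) = (db, tenant);

    // The same tenant filter works in-process or in an extracted service;
    // each service can choose its own isolation strategy behind ICurrentTenant.
    public Task<List<Invoice>> ListAsync()
        => _db.Invoices.Where(i => i.TenantId == _tenant.Id).ToListAsync();
}
```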

The key insight: the upgrade from channels to Wolverine requires zero handler code changes. You change the infrastructure registration, not the business logic. That is the difference between an architecture that supports migration and one that requires a rewrite.