# Modular Monolith vs Microservices
Architecture is not a binary choice between monolith and microservices. It is a spectrum, and the right position depends on your team size, operational maturity, and scaling requirements. Granit is designed to let you start at the sweet spot and move along the spectrum when the trade-offs justify it.
## The spectrum

```mermaid
graph LR
    A["Monolith<br/>Low complexity<br/>Low ops cost"] --> B["Modular Monolith<br/>Moderate complexity<br/>Low ops cost"]
    B --> C["Microservices<br/>High complexity<br/>High ops cost"]
    style A fill:#e8f5e9,stroke:#388e3c,color:#1b5e20
    style B fill:#fff3e0,stroke:#f57c00,color:#e65100
    style C fill:#ffebee,stroke:#c62828,color:#b71c1c
```
| | Monolith | Modular Monolith | Microservices |
|---|---|---|---|
| Deployment | Single unit | Single unit | Independent per service |
| Module boundaries | None or conventions | Enforced in code | Enforced by network |
| Data isolation | Shared tables | Isolated DbContext per module | Separate databases |
| Communication | Direct method calls | In-process channels or messaging | Network (HTTP, messaging) |
| Operational cost | Low | Low | High |
| Team scalability | Limited | Good | High |
| Failure isolation | None | Partial | Full |
Every step to the right adds operational cost: distributed tracing, network failure handling, data consistency challenges, deployment orchestration. That cost is justified only when the benefit — independent scaling, team autonomy, fault isolation — outweighs it.
## Where Granit sits

Granit is designed for modular monoliths. Every architectural decision — isolated `DbContext` per module, explicit `DependsOn` graphs, channel-based messaging — enforces module boundaries that mirror microservice boundaries. This is deliberate: when extraction becomes necessary, the seams already exist.
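As a sketch, such a module seam might look like the following. `GranitModule` and `[DependsOn]` appear on this page; the `ConfigureServices` hook, the persistence dependency, and the notification types are illustrative assumptions, not Granit's documented API:

```csharp
using Microsoft.Extensions.DependencyInjection;

// The same module class can be composed into the modular monolith
// or used as the root module of a standalone service host.
[DependsOn(typeof(GranitPersistenceModule))]
public class GranitNotificationsModule : GranitModule
{
    // Hypothetical registration hook; the real GranitModule surface may differ.
    public override void ConfigureServices(IServiceCollection services)
    {
        // The module registers only its own services; its isolated
        // DbContext keeps it decoupled from every other module.
        services.AddScoped<INotificationSender, EmailNotificationSender>();
    }
}
```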
The same GranitModule that runs inside a modular monolith can run as the entry point of an independent service. No rewrite required.
## Modular monolith with Granit

A single deployable process composed of multiple modules, each with clear boundaries.
```csharp
var builder = WebApplication.CreateBuilder(args);

builder.AddGranit(granit =>
{
    // Core infrastructure
    granit.AddModule<GranitPersistenceModule>();
    granit.AddModule<GranitCachingHybridModule>();
    granit.AddModule<GranitObservabilityModule>();

    // Domain modules — each owns its DbContext
    granit.AddModule<GranitIdentityModule>();
    granit.AddModule<GranitWorkflowModule>();
    granit.AddModule<GranitNotificationsModule>();
    granit.AddModule<GranitBlobStorageModule>();

    // Application module
    granit.AddModule<AppHostModule>();
});

var app = builder.Build();
app.UseGranit();
app.Run();
```

What this gives you:
- Module isolation — each module declares its dependencies via `[DependsOn]`. Circular references are rejected at startup.
- Database isolation — each `*.EntityFrameworkCore` module owns a separate `DbContext`. No shared tables, no accidental coupling.
- In-process messaging — modules communicate through channels without requiring Wolverine or any external infrastructure.
- Single process — inter-module calls are fast method invocations. No serialization, no network latency.
- Bundles — related modules are grouped into bundles for convenience, but each module remains independently configurable.
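To illustrate the in-process messaging point, a publishing module might look like the sketch below. The `IMessageBus` abstraction, its `PublishAsync` method, and the `OrderPlaced` contract are hypothetical names for illustration, not Granit's documented API:

```csharp
// Hypothetical message contract shared between two modules.
public record OrderPlaced(Guid OrderId, string CustomerEmail);

public class PlaceOrderService(IMessageBus bus)
{
    public async Task PlaceAsync(Guid orderId, string email)
    {
        // Inside the monolith this publish travels over an in-process
        // channel: no serialization, no network hop.
        await bus.PublishAsync(new OrderPlaced(orderId, email));
    }
}
```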
## Microservices with Granit

When a module needs independent scaling, a different deployment cadence, or team-level ownership, extract it into its own service.
```csharp
var builder = WebApplication.CreateBuilder(args);

builder.AddGranit(granit =>
{
    granit.AddModule<GranitPersistenceModule>();
    granit.AddModule<GranitObservabilityModule>();
    granit.AddModule<GranitWolverinePostgresqlModule>();

    // This module now runs as an independent service
    granit.AddModule<GranitNotificationsModule>();
});

var app = builder.Build();
app.UseGranit();
app.Run();
```

What changes:
- Wolverine replaces channels — durable messaging over PostgreSQL transport handles inter-service communication with transactional outbox guarantees.
- Separate database — the module already had its own `DbContext`. Now it gets its own PostgreSQL database.
- Independent deployment — ship notification changes without redeploying the entire platform.
- Independent scaling — scale notification processing horizontally without scaling the rest of the system.
- Multi-tenancy per service — each service can apply its own tenant isolation strategy.
What stays the same:
- The `GranitModule` class and its `[DependsOn]` declarations.
- Message handlers — a handler that processed channel messages processes Wolverine messages with zero code changes.
- The `DbContext` and all entity configurations.
- Observability — context propagation ensures distributed traces span services.
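The handler-portability claim can be sketched as follows. The handler shape and the names (`OrderPlaced`, `INotificationSender`) are illustrative assumptions; only the property that the same handler serves both channels and Wolverine comes from this page:

```csharp
// Hypothetical message contract, shown here so the sketch is self-contained.
public record OrderPlaced(Guid OrderId, string CustomerEmail);

// This class is identical in both topologies. Under channels the message
// arrives in-process; under Wolverine it arrives via the PostgreSQL
// transport. The handler neither knows nor cares.
public class OrderPlacedHandler(INotificationSender sender)
{
    public Task Handle(OrderPlaced message) =>
        sender.SendAsync(message.CustomerEmail, $"Order {message.OrderId} placed");
}
```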
## Migration path

The migration from modular monolith to extracted microservices is incremental. No big-bang rewrite.
```mermaid
graph TB
    subgraph "Step 1-2: Modular Monolith"
        M1["Module A"] -->|Channel| M2["Module B"]
        M1 -->|Channel| M3["Module C"]
        M2 -->|Channel| M3
    end
    subgraph "Step 3-5: Extracted Service"
        N1["Module A"] -->|Wolverine| N2["Module B<br/>(separate service)"]
        N1 -->|Channel| N3["Module C<br/>(still in monolith)"]
    end
    style M1 fill:#e8eaf6,stroke:#3f51b5,color:#1a237e
    style M2 fill:#e8eaf6,stroke:#3f51b5,color:#1a237e
    style M3 fill:#e8eaf6,stroke:#3f51b5,color:#1a237e
    style N1 fill:#e8eaf6,stroke:#3f51b5,color:#1a237e
    style N2 fill:#fff3e0,stroke:#f57c00,color:#e65100
    style N3 fill:#e8eaf6,stroke:#3f51b5,color:#1a237e
```
1. Deploy all modules in a single process. Use channel-based messaging for inter-module communication. This is the default and covers most teams.
2. When you need guaranteed message delivery (e.g., notification fan-out must not lose messages), add `GranitWolverinePostgresqlModule`. The transactional outbox ensures messages survive process crashes. Your handlers do not change.
3. Look for modules with: different scaling requirements, a different release cadence, a dedicated team, or regulatory isolation needs (e.g., a payment module that requires PCI DSS scope reduction).
4. Create a new host project for the extracted module. It gets its own `Program.cs`, its own `GranitModule` composition, and Wolverine for messaging back to the monolith.
5. Because each module already owns an isolated `DbContext`, the database split is mechanical. Point the extracted service's connection string to a new database and run its migrations independently.
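In configuration terms, the split can be as small as the sketch below: the extracted host's `appsettings.json` points the module's existing `DbContext` at its own database (the connection-string name and values are illustrative, not Granit conventions):

```json
{
  "ConnectionStrings": {
    "Notifications": "Host=notifications-db;Database=notifications;Username=app;Password=change-me"
  }
}
```

Running the module's EF Core migrations against that database (for example with `dotnet ef database update` in the new host project) completes the split.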
## Decision framework

The question is never “should we use microservices?” — it is “does extracting this specific module justify the operational cost?”
Stay with the modular monolith when:
- Fewer than 5 developers working on the codebase
- Under 100k requests per minute
- Single team with a shared release cycle
- No module requires independent scaling
- Deployment frequency is uniform across modules
Consider extracting a module into its own service when:
- A module has fundamentally different scaling needs (e.g., notification delivery spikes)
- A module has a different deployment cadence (daily vs. weekly)
- Team boundaries align with module boundaries
- Regulatory isolation requires a reduced compliance scope (e.g., PCI DSS for payments)
- A module failure should not take down the entire system
## Why Granit works in both modes

Granit does not bet on one architecture. It enforces patterns that work regardless of deployment topology:
| Pattern | Monolith benefit | Microservice benefit |
|---|---|---|
| Isolated `DbContext` per module | No accidental table coupling | Database split is mechanical |
| `[DependsOn]` graph | Startup validation, no circular refs | Explicit service contracts |
| Channel / Wolverine messaging | In-process decoupling | Durable cross-service messaging |
| Context propagation (OpenTelemetry) | Structured logs per module | Distributed traces across services |
| Multi-tenancy (`ICurrentTenant`) | Tenant-scoped queries | Per-service tenant strategy |
| `GranitModule` as entry point | Composable monolith | Independent service host |
The key insight: the upgrade from channels to Wolverine requires zero handler code changes. You change the infrastructure registration, not the business logic. This is the difference between an architecture that supports migration and one that requires a rewrite.
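Concretely, that registration-level swap might look like the sketch below, reusing module names from the examples on this page:

```csharp
builder.AddGranit(granit =>
{
    // Before the swap: nothing extra to register; in-process
    // channels are the default.
    // After the swap: durable messaging with a transactional outbox.
    granit.AddModule<GranitWolverinePostgresqlModule>();

    // Business modules and their handlers are registered exactly as before.
    granit.AddModule<GranitNotificationsModule>();
});
```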
## Further reading

- Module System — how `GranitModule` and `[DependsOn]` work
- Messaging — channels, Wolverine, and the handler model
- Wolverine Optionality — why Wolverine is optional and how to upgrade
- Wolverine reference — configuration, PostgreSQL transport, transactional outbox