# Configure blob storage
Granit.BlobStorage provides sovereign, Direct-to-Cloud file storage with a unified
API across multiple providers. Cloud providers (S3, Azure Blob) use native presigned
URLs where the server never handles file bytes. Server-side providers (FileSystem,
Database) use Granit.BlobStorage.Proxy for token-based upload/download endpoints.
This guide covers the S3 provider setup. For other providers, see the BlobStorage reference.
## Prerequisites

- A .NET 10 project with the Granit module system configured
- An S3-compatible object storage endpoint (AWS S3, OVHcloud, MinIO, etc.)
- A PostgreSQL (or other EF Core-supported) database for metadata persistence
## Step 1 — Install packages

```sh
dotnet add package Granit.BlobStorage
dotnet add package Granit.BlobStorage.S3
dotnet add package Granit.BlobStorage.EntityFrameworkCore
```

## Step 2 — Declare modules

Register the blob storage modules in your application module:
```csharp
using Granit.BlobStorage;
using Granit.BlobStorage.EntityFrameworkCore;
using Granit.Core.Modularity;

[DependsOn(
    typeof(GranitBlobStorageModule),
    typeof(GranitBlobStorageEntityFrameworkCoreModule))]
public sealed class MyAppModule : GranitModule { }
```

## Step 3 — Configure S3 credentials

Add the `BlobStorage` section to `appsettings.json`:
```json
{
  "BlobStorage": {
    "ServiceUrl": "https://s3.rbx.io.cloud.ovh.net",
    "Region": "rbx",
    "DefaultBucket": "my-blobs",
    "ForcePathStyle": false,
    "AccessKey": "INJECT_FROM_VAULT",
    "SecretKey": "INJECT_FROM_VAULT"
  }
}
```

For local development with MinIO:

```json
{
  "BlobStorage": {
    "ServiceUrl": "http://localhost:9000",
    "Region": "us-east-1",
    "DefaultBucket": "my-blobs",
    "ForcePathStyle": true,
    "AccessKey": "minioadmin",
    "SecretKey": "minioadmin"
  }
}
```

| Property | Type | Default | Description |
|---|---|---|---|
| `ServiceUrl` | string | — | S3 endpoint (required) |
| `AccessKey` | string | — | Access key — inject from Granit.Vault |
| `SecretKey` | string | — | Secret key — inject from Granit.Vault |
| `Region` | string | `us-east-1` | S3 region. European sovereign hosting: `rbx` |
| `DefaultBucket` | string | — | Default S3 bucket (required) |
| `ForcePathStyle` | bool | `true` | Enable for MinIO and some S3-compatible providers |
| `TenantIsolation` | `BlobTenantIsolation` | `Prefix` | Isolation strategy: `Prefix` (single bucket, per-tenant key prefix) |
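In production, `AccessKey` and `SecretKey` should not live in `appsettings.json`. With the standard .NET configuration providers they can be supplied from the environment instead; this is a generic sketch (how Granit.Vault actually injects them is product-specific):

```sh
# The .NET configuration binder maps double underscores to section
# separators, so these override BlobStorage:AccessKey and
# BlobStorage:SecretKey from appsettings.json.
export BlobStorage__AccessKey="<from-vault>"
export BlobStorage__SecretKey="<from-vault>"
```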
## Step 4 — Register services

```csharp
// S3 provider (required)
builder.AddGranitBlobStorageS3();

// EF Core persistence (required in production)
builder.AddGranitBlobStorageEntityFrameworkCore(options =>
    options.UseNpgsql(connectionString));
```

## Step 5 — Run EF Core migrations

```sh
dotnet ef migrations add InitBlobStorage \
  --project src/Granit.BlobStorage.EntityFrameworkCore \
  --startup-project src/MyApp
```

The migration creates the `storage_blob_descriptors` table with a unique index on `ObjectKey` and a composite index on `(TenantId, ContainerName)`.
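The resulting indexes can be sketched in PostgreSQL terms as follows; the actual index names and column quoting depend on the generated migration and your EF Core naming conventions:

```sql
-- Hypothetical sketch of the indexes the migration produces.
CREATE UNIQUE INDEX ix_storage_blob_descriptors_object_key
    ON storage_blob_descriptors ("ObjectKey");

CREATE INDEX ix_storage_blob_descriptors_tenant_container
    ON storage_blob_descriptors ("TenantId", "ContainerName");
```

The unique index guarantees one descriptor per S3 object; the composite index serves per-tenant listing queries.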
## Uploading files

The upload flow uses a Direct-to-Cloud pattern: the server issues a presigned URL, and the client uploads the bytes directly to S3.
### Initiate an upload

Inject `IBlobStorage` and call `InitiateUploadAsync`:
```csharp
public sealed class DocumentUploadService(IBlobStorage blobStorage)
{
    public async Task<PresignedUploadTicket> InitiateAsync(
        string fileName,
        string contentType,
        CancellationToken cancellationToken)
    {
        PresignedUploadTicket ticket = await blobStorage.InitiateUploadAsync(
            containerName: "documents",
            request: new BlobUploadRequest(
                FileName: fileName,
                ContentType: contentType,
                MaxAllowedBytes: 10_000_000L),
            cancellationToken: cancellationToken);

        // Return to the client: ticket.BlobId, ticket.UploadUrl, ticket.ExpiresAt
        return ticket;
    }
}
```

The client then performs a `PUT` directly to `ticket.UploadUrl` with the file bytes. The application server never touches the file content.
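The client-side half of the flow can be sketched with `HttpClient`; this is illustrative, not part of the library, and the exact headers your bucket accepts may differ:

```csharp
// Hypothetical client-side upload: PUT the raw bytes to the presigned URL.
// The Content-Type generally must match the one declared when the ticket
// was issued, or S3 rejects the presigned signature.
using var http = new HttpClient();
using var content = new StreamContent(File.OpenRead("report.pdf"));
content.Headers.ContentType = new MediaTypeHeaderValue("application/pdf");

HttpResponseMessage response = await http.PutAsync(ticket.UploadUrl, content);
response.EnsureSuccessStatusCode();
```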
### Validate after upload

After the client confirms the S3 upload succeeded, trigger server-side validation:

```csharp
await blobStorage.ValidateAsync(
    containerName: "documents",
    blobId: ticket.BlobId,
    cancellationToken: cancellationToken);
```

The validation pipeline runs in order:
- `MagicBytesValidator` (`Order=10`) — detects the real MIME type via magic bytes (S3 range GET, 261 bytes max)
- `MaxSizeValidator` (`Order=20`) — checks `ActualSizeBytes <= MaxAllowedBytes` via S3 HEAD (no download)
The BlobDescriptor transitions to Valid or Rejected depending on the result.
## Downloading files

Generate a presigned download URL for validated blobs:

```csharp
public async Task<PresignedDownloadUrl> GetDownloadUrlAsync(
    Guid blobId,
    CancellationToken cancellationToken)
{
    PresignedDownloadUrl url = await blobStorage.CreateDownloadUrlAsync(
        containerName: "documents",
        blobId: blobId,
        options: new DownloadUrlOptions { ExpiryOverride = TimeSpan.FromMinutes(30) },
        cancellationToken: cancellationToken);

    // url.Url: presigned URL to pass to the client
    // url.ExpiresAt: expiration timestamp
    return url;
}
```

The method throws `BlobNotFoundException` if the blob does not exist for the current tenant, and `BlobNotValidException` if the blob is not in `Valid` status.
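In an HTTP endpoint, these exceptions map naturally onto status codes. A hypothetical minimal-API sketch (the route and the 404/409 mapping are illustrative choices, not library behavior):

```csharp
app.MapGet("/documents/{blobId:guid}/download-url", async (
    Guid blobId,
    IBlobStorage blobStorage,
    CancellationToken cancellationToken) =>
{
    try
    {
        PresignedDownloadUrl url = await blobStorage.CreateDownloadUrlAsync(
            containerName: "documents",
            blobId: blobId,
            cancellationToken: cancellationToken);
        return Results.Ok(new { url.Url, url.ExpiresAt });
    }
    catch (BlobNotFoundException)
    {
        return Results.NotFound();  // unknown blob, or belongs to another tenant
    }
    catch (BlobNotValidException)
    {
        return Results.Conflict();  // not yet validated, or rejected
    }
});
```

Returning 404 for cross-tenant access avoids leaking the existence of another tenant's blob.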
## Deleting files (GDPR Crypto-Shredding)

```csharp
await blobStorage.DeleteAsync(
    containerName: "documents",
    blobId: blobId,
    deletionReason: "GDPR Art. 17 -- erasure request",
    cancellationToken: cancellationToken);
```

The S3 object is physically deleted. The `BlobDescriptor` remains in the database with `Status = Deleted`, `DeletedAt`, and `DeletionReason` for the ISO 27001 audit trail (3-year minimum retention).
## Reading metadata (CQRS)

Blob metadata is accessed through two separate interfaces following CQRS:

- `IBlobDescriptorReader` — read operations (find, list)
- `IBlobDescriptorWriter` — write operations (create, update status)

```csharp
public sealed class BlobInfoService(
    IBlobDescriptorReader descriptorReader,
    IBlobDescriptorWriter descriptorWriter)
{
    public async Task<BlobDescriptor?> FindAsync(
        Guid blobId,
        CancellationToken cancellationToken)
    {
        return await descriptorReader.FindAsync(blobId, cancellationToken);
    }
}
```

## Multi-tenant isolation

S3 object keys follow the format `{tenantId}/{containerName}/{yyyy}/{MM}/{blobId}`.
The `tenantId/` prefix ensures tenant isolation without a dedicated bucket per tenant. `IBlobDescriptorReader.FindAsync` always filters by the active tenant's `TenantId`: a tenant cannot access another tenant's blobs, even with a valid `BlobId`.
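The key layout can be illustrated with plain string interpolation. This is a sketch only; the library builds the key internally, and the values below are made up:

```csharp
// Hypothetical reconstruction of the object-key layout described above.
Guid tenantId = Guid.Parse("11111111-1111-1111-1111-111111111111");
Guid blobId   = Guid.Parse("22222222-2222-2222-2222-222222222222");
DateTime now  = new DateTime(2025, 3, 14, 0, 0, 0, DateTimeKind.Utc);

string objectKey = $"{tenantId}/documents/{now:yyyy}/{now:MM}/{blobId}";
// "11111111-.../documents/2025/03/22222222-..."
```

Because the tenant id is the first path segment, a single bucket policy or key-prefix listing cleanly scopes all of one tenant's objects.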
## Adding custom validators

Implement `IBlobValidator` to add custom validation (e.g., antivirus scanning):

```csharp
public sealed class AntivirusValidator : IBlobValidator
{
    public int Order => 30; // Runs after MagicBytes (10) and MaxSize (20)

    public async Task<BlobValidationResult> ValidateAsync(
        BlobDescriptor descriptor,
        IBlobStorageClient storageClient,
        CancellationToken cancellationToken)
    {
        // Perform antivirus scan
        return BlobValidationResult.Success();
    }
}
```

Register it before calling `AddGranitBlobStorageS3()`:

```csharp
builder.Services.AddScoped<IBlobValidator, AntivirusValidator>();
```

## BlobDescriptor lifecycle

| Status | Trigger |
|---|---|
| `Pending` | `InitiateUploadAsync()` — ticket issued, upload not yet received |
| `Uploading` | S3 notification received — validation in progress |
| `Valid` | All validators passed |
| `Rejected` | A validator failed — S3 object already deleted |
| `Deleted` | `DeleteAsync()` — GDPR Crypto-Shredding |
## Next steps

- Set up localization to localize blob storage error messages (`BlobStorage:NotFound`, `BlobStorage:NotValid`)
- Set up end-to-end tracing to trace blob operations in Grafana Tempo
- Blob storage reference for the complete API surface