
Cache-Aside

The Cache-Aside pattern loads data into the cache on demand: on a miss, the data is retrieved from the source (the DB), stored in the cache, then returned. Subsequent accesses are served directly from the cache.

Granit uses a HybridCache (L1 in-process + L2 Redis) with anti-stampede protection via double-check locking.

```mermaid
flowchart TD
    REQ[GetOrAddAsync] --> L1{L1 Memory Cache}
    L1 -->|hit| RET[Return value]
    L1 -->|miss| L2{L2 Redis Cache}
    L2 -->|hit| SET1[Store in L1] --> RET
    L2 -->|miss| LOCK[Acquire SemaphoreSlim]
    LOCK --> DC{Double-check L2}
    DC -->|hit| REL1[Release lock] --> SET1
    DC -->|miss| FAC[Execute factory<br/>= DB query]
    FAC --> SET2[Store in L1 + L2]
    SET2 --> REL2[Release lock] --> RET
```
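The flow above can be sketched as follows. This is a hypothetical, simplified version, not the actual `DistributedCacheService` source: the class name `TwoLevelCache`, the JSON serialization, the `L1Ttl` constant, and the single shared lock are assumptions for illustration (the real service also handles entry options and optional encryption).

```csharp
using System;
using System.Text.Json;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Distributed;
using Microsoft.Extensions.Caching.Memory;

// Hypothetical, simplified sketch of the diagram above.
public sealed class TwoLevelCache
{
    private readonly IMemoryCache _l1;                // in-process
    private readonly IDistributedCache _l2;           // Redis
    private readonly SemaphoreSlim _gate = new(1, 1); // anti-stampede lock
    private static readonly TimeSpan L1Ttl = TimeSpan.FromSeconds(30);

    public TwoLevelCache(IMemoryCache l1, IDistributedCache l2)
        => (_l1, _l2) = (l1, l2);

    public async Task<T> GetOrAddAsync<T>(
        string key, Func<CancellationToken, Task<T>> factory, CancellationToken ct)
    {
        // L1 hit: fastest path, no serialization involved.
        if (_l1.TryGetValue(key, out T? hit) && hit is not null)
            return hit;

        // L2 hit: deserialize and backfill L1.
        byte[]? bytes = await _l2.GetAsync(key, ct);
        if (bytes is not null)
            return Backfill<T>(key, bytes);

        await _gate.WaitAsync(ct);
        try
        {
            // Double-check: another request may have filled L2 while we waited.
            bytes = await _l2.GetAsync(key, ct);
            if (bytes is not null)
                return Backfill<T>(key, bytes);

            // Only one request executes the factory (the DB query).
            T value = await factory(ct);
            await _l2.SetAsync(key, JsonSerializer.SerializeToUtf8Bytes(value), ct);
            _l1.Set(key, value, L1Ttl);
            return value;
        }
        finally
        {
            _gate.Release();
        }
    }

    private T Backfill<T>(string key, byte[] bytes)
    {
        T value = JsonSerializer.Deserialize<T>(bytes)!;
        _l1.Set(key, value, L1Ttl);
        return value;
    }
}
```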
| Component | File | Role |
| --- | --- | --- |
| DistributedCacheService | src/Granit.Caching/DistributedCacheService.cs | Cache-aside with double-check locking and optional encryption |
| FeatureChecker | src/Granit.Features/Checker/FeatureChecker.cs | HybridCache for feature resolution |
| CachedLocalizationOverrideStore | src/Granit.Localization/CachedLocalizationOverrideStore.cs | In-memory cache for localization overrides |

The SemaphoreSlim in DistributedCacheService prevents the "thundering herd" problem: when 100 simultaneous requests hit a cache miss, only one executes the factory. The other 99 wait for the lock, then find the value already in the cache (the double-check).
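This behavior can be demonstrated with a self-contained sketch (hypothetical code, not the Granit implementation): it uses one lock per key, stored in a `ConcurrentDictionary`, and counts how many times the factory actually runs when 100 callers miss simultaneously.

```csharp
using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

class StampedeDemo
{
    // One SemaphoreSlim per cache key, so misses on different keys
    // never block each other (assumed design, for illustration).
    static readonly ConcurrentDictionary<string, SemaphoreSlim> Locks = new();
    static readonly ConcurrentDictionary<string, string> Cache = new();
    static int _factoryCalls;

    static async Task<string> GetOrAddAsync(string key, Func<Task<string>> factory)
    {
        if (Cache.TryGetValue(key, out var cached))
            return cached;

        var gate = Locks.GetOrAdd(key, _ => new SemaphoreSlim(1, 1));
        await gate.WaitAsync();
        try
        {
            // Double-check: by now the first caller has stored the value.
            if (Cache.TryGetValue(key, out cached))
                return cached;

            var value = await factory();
            Cache[key] = value;
            return value;
        }
        finally { gate.Release(); }
    }

    static async Task Main()
    {
        var tasks = Enumerable.Range(0, 100).Select(_ =>
            GetOrAddAsync("patient:42", async () =>
            {
                Interlocked.Increment(ref _factoryCalls);
                await Task.Delay(50); // simulate the DB query
                return "row";
            }));

        await Task.WhenAll(tasks);
        Console.WriteLine(_factoryCalls); // prints 1: the other 99 hit the double-check
    }
}
```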

In FeatureChecker, cache keys include the tenant: t:{tenantId}:{featureName}. Invalidation targets only the affected tenant.
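As a snippet, assuming free variables `distributedCache`, `tenantId`, and `ct` from the surrounding scope (the `FeatureKey` helper and the `"NewDashboard"` feature name are hypothetical; only the key format comes from the source):

```csharp
// Hypothetical helper mirroring the t:{tenantId}:{featureName} key format.
static string FeatureKey(Guid tenantId, string featureName) =>
    $"t:{tenantId}:{featureName}";

// Invalidation removes only the affected tenant's entry; other tenants'
// cached feature values stay warm. RemoveAsync is the standard
// IDistributedCache method.
await distributedCache.RemoveAsync(FeatureKey(tenantId, "NewDashboard"), ct);
```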

| Problem | Solution |
| --- | --- |
| Feature resolution too slow (DB query on every request) | L1 cache (nanoseconds) + L2 Redis (microseconds) |
| Stampede on cache miss (100 requests = 100 DB queries) | SemaphoreSlim + double-check locking |
| Sensitive data in Redis cache | Conditional AES-256 encryption via [CacheEncrypted] |
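The conditional encryption step could look like the sketch below. Only the attribute name `[CacheEncrypted]` comes from the source; the `CachePayloadProtector` class, the IV-prefixing layout, and the key handling are assumptions (in practice the AES key would come from a secret store, and a matching decrypt step runs on read).

```csharp
using System;
using System.Security.Cryptography;

// Hypothetical marker attribute, matching the name used in the table above.
[AttributeUsage(AttributeTargets.Class)]
public sealed class CacheEncryptedAttribute : Attribute { }

public static class CachePayloadProtector
{
    public static byte[] Protect<T>(byte[] serialized, byte[] aesKey)
    {
        // Types without the attribute are cached as plain bytes.
        if (!typeof(T).IsDefined(typeof(CacheEncryptedAttribute), inherit: false))
            return serialized;

        using var aes = Aes.Create();
        aes.Key = aesKey; // 32-byte key => AES-256
        aes.GenerateIV();

        using var enc = aes.CreateEncryptor();
        byte[] cipher = enc.TransformFinalBlock(serialized, 0, serialized.Length);

        // Prepend the IV so the reader can decrypt the payload later.
        byte[] payload = new byte[aes.IV.Length + cipher.Length];
        Buffer.BlockCopy(aes.IV, 0, payload, 0, aes.IV.Length);
        Buffer.BlockCopy(cipher, 0, payload, aes.IV.Length, cipher.Length);
        return payload;
    }
}
```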
```csharp
// Cache-aside is transparent to the caller
ICacheService<PatientDto> cache = serviceProvider
    .GetRequiredService<ICacheService<PatientDto>>();

PatientDto patient = await cache.GetOrAddAsync(
    $"patient:{patientId}",
    async ct => await LoadPatientFromDbAsync(patientId, ct),
    cancellationToken);

// 1st call -> DB + stores in cache
// 2nd call -> returned from cache (L1 or L2)
```