Daily vs Hourly Cache
Hello. I want to talk about a problem I have run into frequently lately. I have explained the underlying logic to many people, and each time they ended up agreeing with me about the problem with cache TTL (cache lifespan).
We examine in detail whether a service can be cached and what problems may arise with or without a cache. However, the length of time the data is stored is usually copied from some previously familiar value and then overlooked. Maybe the thinking is “let’s cache it, it doesn’t matter how long it’s stored” 🤔. I think this is a very wrong idea. Why? Let me explain with an example.
Let’s assume I have a service (what it does doesn’t matter here). In theory, this service is open to an unlimited number of requests per day and must handle each one, right? It looks like a bottomless pit. What if I could cache this service? For example, what if I used a tiny TTL of just 1 second?
1 Day = 24 Hours = 1440 Minutes = 86400 Seconds
Thinking in daily terms, a 1-second cache reduces the potentially unlimited number of calls hitting my service to at most 86,400 per day. With longer TTLs:
- With a 1-minute TTL: 1440 calls per day,
- With a 5-minute TTL: 288,
- With a 10-minute TTL: 144,
- With a 1-hour TTL: my service would only receive 24 calls per day.
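The arithmetic above can be sketched in a few lines. This is an illustrative helper (the function name and the constant-traffic assumption are mine, not from the original post): with a single cached key and steady traffic, the backing service is called at most once per TTL window.

```python
SECONDS_PER_DAY = 24 * 60 * 60  # 86400

def daily_backend_hits(ttl_seconds: int) -> int:
    """With a TTL of `ttl_seconds`, the cache entry expires this many
    times a day, so the backing service is called at most this often."""
    return SECONDS_PER_DAY // ttl_seconds

for label, ttl in [("1 second", 1), ("1 minute", 60), ("5 minutes", 300),
                   ("10 minutes", 600), ("1 hour", 3600), ("1 day", 86400)]:
    print(f"TTL of {label}: at most {daily_backend_hits(ttl)} backend calls/day")
```

Running this reproduces the numbers in the list: 86400, 1440, 288, 144, 24, and 1 call per day respectively.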
“We could have calculated this ourselves, Emre, it’s not that hard,” you might say, and you’re right. But many developers seem to overlook the following point:
“Let’s put a daily cache on it; it gets called once a day and stays cached. This data won’t change much anyway.”
What if something unexpected happens and the data changes in real time? Will you wait a whole day for the cache entry to expire? If an eviction method even exists, can you safely call it in the production environment? This is exactly the situation I am talking about. The difference between caching once a day (daily cache) and 24 times a day (hourly cache) is only 23 requests. Do you think it’s worth taking this risk to save 23 requests?
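To make the eviction point concrete, here is a minimal TTL-cache sketch (hand-rolled for illustration; the class and method names are mine, not a reference to any specific library). The key detail is the `evict` escape hatch: without it, a wrong value written into a daily cache is served for up to 24 hours.

```python
import time

class TTLCache:
    """Minimal TTL cache sketch, not production code."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, stored_at)

    def get(self, key, loader):
        entry = self._store.get(key)
        if entry is not None:
            value, stored_at = entry
            if time.monotonic() - stored_at < self.ttl:
                return value  # still fresh: the backend is NOT called
        value = loader()  # missing or expired: call the backend
        self._store[key] = (value, time.monotonic())
        return value

    def evict(self, key):
        """Manual escape hatch: drop an entry before its TTL is up."""
        self._store.pop(key, None)
```

With `ttl_seconds=86400`, a stale entry survives until the next day unless something calls `evict` — which is exactly the operation you may not be able (or allowed) to trigger in production.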
When deciding how long this cache should live before it resets, I need to weigh carefully what I gain and what I lose. You don’t want the user to end up seeing wrong data while you were trying to prevent too many requests to the service.