(One of my summaries of the 2025 pycon.de conference in Darmstadt, DE).
Full title: "Cache me if you can: boosted application performance with Redis and client-side caching".
Redis can be used:
As an in-memory database.
As a cache.
For data streaming.
As a message broker.
Even for some vector database functionality.
Redis develops client libraries, for instance redis-py for Python. Though you probably use Redis through some web framework integration or similar.
Why cache your data? Well, performance, scalability, speed. Instead of scaling your database, for instance, you can also put some caching in front of it (and scale the cache instead).
Caching patterns built into redis:
“Look-aside”: the app reads from the cache and, if there’s a miss, reads from the actual data source instead (and typically stores the result in the cache for next time).
“Change data capture”: the app reads from the cache but writes to the data source. Upon a change, the data source updates the cache.
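The look-aside pattern is simple enough to sketch in a few lines of Python. A minimal sketch, where a plain dict stands in for Redis (with a real setup you'd call redis-py's `get()`/`set()` instead); the `database` dict, `get_user` and the key names are made up for illustration:

```python
# Look-aside caching sketch. "cache" stands in for Redis; "database"
# stands in for the slow, actual data source.
database = {"user:1": "Reinout", "user:2": "Maurits"}
cache = {}

def get_user(key):
    # 1. Try the cache first.
    if key in cache:
        return cache[key]
    # 2. Cache miss: read from the actual data source...
    value = database[key]
    # 3. ...and store it in the cache for next time.
    cache[key] = value
    return value

print(get_user("user:1"))  # Miss: read from the database, now cached.
print(get_user("user:1"))  # Hit: served from the cache.
```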
Redis has the regular cache features: expiration (a time to live per key), eviction (automatically removing keys when memory fills up, for instance with an LRU or LFU policy: least recently/frequently used) and explicitly removing known stale items. Redis behaves like a key/value store.
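To make LRU eviction concrete, here's a tiny pure-Python sketch of the idea (not Redis itself, which does this for you once you configure a max memory and an eviction policy): a bounded cache that evicts the least recently used key when full.

```python
from collections import OrderedDict

# Tiny LRU cache sketch, just to illustrate the eviction idea.
class LRUCache:
    def __init__(self, max_size):
        self.max_size = max_size
        self.items = OrderedDict()

    def get(self, key):
        value = self.items[key]
        self.items.move_to_end(key)  # Mark as most recently used.
        return value

    def set(self, key, value):
        self.items[key] = value
        self.items.move_to_end(key)
        if len(self.items) > self.max_size:
            self.items.popitem(last=False)  # Evict least recently used.

cache = LRUCache(max_size=2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")     # "a" is now the most recently used.
cache.set("c", 3)  # Cache full: "b" (least recently used) is evicted.
print(list(cache.items))  # ['a', 'c']
```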
Why client-side caching? Again performance and scalability. Some items/keys in Redis are accessed much more often than others: “hot keys”. Caching those locally on the client can help a lot.
Redis has a feature called client tracking. The client’s cache is connected to the “main” one: the main one can invalidate keys on the client side.
Now to the redis-py client library. Some of its responsibilities are endpoint discovery, handling cluster topology, authentication, etc. And, since recently, handling a local cache that is connected to the main cache.
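Recent redis-py versions can handle that local cache for you (it needs the newer RESP3 protocol; check the redis-py docs for the details). The idea behind client tracking can be sketched in plain Python: the server remembers which client cached which key and invalidates those local copies on writes. Everything below (the class names, the `invalidate` callback) is made up for illustration; the real mechanism uses push messages from the Redis server:

```python
# Sketch of the client tracking idea: the server records which client
# cached which key and invalidates those local copies upon writes.
class Server:
    def __init__(self):
        self.data = {}
        self.tracking = {}  # key -> set of clients that cached it

    def get(self, key, client):
        self.tracking.setdefault(key, set()).add(client)
        return self.data.get(key)

    def set(self, key, value):
        self.data[key] = value
        # Invalidate the key in every client that cached it.
        for client in self.tracking.pop(key, set()):
            client.invalidate(key)

class Client:
    def __init__(self, server):
        self.server = server
        self.local_cache = {}

    def get(self, key):
        if key not in self.local_cache:  # Local miss: ask the server.
            self.local_cache[key] = self.server.get(key, self)
        return self.local_cache[key]

    def invalidate(self, key):
        self.local_cache.pop(key, None)

server = Server()
server.data["hot-key"] = "v1"
client = Client(server)
print(client.get("hot-key"))  # "v1", now cached locally.
server.set("hot-key", "v2")   # Server invalidates the client's copy.
print(client.get("hot-key"))  # "v2", fetched again after invalidation.
```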
Photo explanation: picture from our 2024 vacation around Kassel (DE)
My name is Reinout van Rees and I program in Python, I live in the Netherlands, I cycle recumbent bikes and I have a model railway.