Cache

Every Caching Strategy Explained In 5 Minutes

tl;dr: “The Goal: Make things faster and reduce load on primary data stores. Caches offer quicker access and shield your backend from repetitive requests.” The author outlines (1) Cache-Aside (Lazy Loading), (2) Read-Through, (3) Write-Through, (4) Write-Behind (Write-Back), and (5) Write-Around.
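
A minimal cache-aside read path in Python (an illustrative sketch, not from the article; the dicts stand in for Redis/Memcached and the primary store, and `get_user` is a hypothetical accessor):

```python
import time

cache = {}                          # stands in for Redis/Memcached
db = {"42": {"name": "Ada"}}        # stands in for the primary store
TTL = 60                            # seconds

def get_user(user_id):
    entry = cache.get(user_id)
    if entry and entry["expires"] > time.time():
        return entry["value"]                        # cache hit
    value = db.get(user_id)                          # miss: read the store...
    cache[user_id] = {"value": value,                # ...then fill the cache
                      "expires": time.time() + TTL}
    return value

print(get_user("42"))   # miss: reads the db and populates the cache
print(get_user("42"))   # hit: served from the cache
```

The other strategies rearrange the same logic: read-through moves the miss handling inside the cache layer, and the write strategies differ in when the store is updated relative to the cache.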

featured in #607, #608


Speed Up Your App By Caching Database Queries

- Jon Harrell tl;dr: Users demand speed and efficiency, and caching database queries can significantly improve your app's performance. The article shows how to use caching to deliver fast load times and a smooth experience for all of your users, no matter where your database lives.
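
A toy version of the idea (not from the article): fingerprint each query plus its parameters and memoize the rows with a TTL. Here sqlite3 stands in for the real database:

```python
import hashlib, json, sqlite3, time

_query_cache = {}   # fingerprint -> (rows, expiry)

def cached_query(conn, sql, params=(), ttl=30):
    # Key the cache on the query text plus its parameters.
    key = hashlib.sha256(json.dumps([sql, list(params)]).encode()).hexdigest()
    hit = _query_cache.get(key)
    if hit and hit[1] > time.time():
        return hit[0]                             # fast path: cached rows
    rows = conn.execute(sql, params).fetchall()   # slow path: run the query
    _query_cache[key] = (rows, time.time() + ttl)
    return rows

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'Ada')")
print(cached_query(conn, "SELECT name FROM users WHERE id = ?", (1,)))
```

In production the `_query_cache` dict would be a shared cache such as Redis keyed the same way, so app servers close to the user can answer without a round trip to a distant database.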

featured in #541


Meet Chrono, Our Scalable, Consistent, Metadata Caching Solution

tl;dr: From the team at Dropbox, “If we wanted to solve our high-volume read QPS problem while upholding our clients’ expectation of read consistency, traditional caching solutions would not work. We needed to find a scalable, consistent caching solution to solve both problems at once. This article discusses Chrono, a scalable, consistent caching system built on top of Dropbox’s key-value storage system.”

featured in #536


Cache Locality, Your Sneaky Performance Culprit

- Dr. Panos Patros tl;dr: When you know you’ve written efficient code but performance is still laggy, the answer might lie in cache locality. The article gets into the nitty-gritty of how data is accessed, how to optimize memory usage, and how to unlock major speed gains, exploring not only how but also why these techniques are critical to responsiveness and efficiency.
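
One way to see the effect yourself (an illustrative sketch, not from the article; requires NumPy): traverse the same array with and against its memory layout and time the difference. Rows of a C-order array are contiguous, so `by_rows` walks memory sequentially while `by_cols` jumps 40 KB between consecutive elements:

```python
import time
import numpy as np

a = np.zeros((5000, 5000))    # C order: each 5000-element row is contiguous

def by_rows(m):               # sequential access: cache lines fully used
    return sum(m[i, :].sum() for i in range(m.shape[0]))

def by_cols(m):               # strided access: ~one cache line per element
    return sum(m[:, j].sum() for j in range(m.shape[1]))

for fn in (by_rows, by_cols):
    t0 = time.perf_counter()
    fn(a)
    print(f"{fn.__name__}: {time.perf_counter() - t0:.3f}s")
```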

featured in #519


How Uber Serves Over 40 Million Reads Per Second From Online Storage Using An Integrated Cache

tl;dr: “Docstore is Uber’s in-house, distributed database built on top of MySQL. Storing tens of PBs of data and serving tens of millions of requests/second, it is one of the largest database engines at Uber used by microservices from all business verticals. Docstore users and use cases are growing, and so are the request volume and data footprint. This post discusses the challenges serving applications that require low-latency read access and high scalability.”

featured in #491


Data-Caching Techniques For 1.2 Billion Daily API Requests

- Guillermo Pérez tl;dr: The cache needs to achieve three things: (1) Low latency: It needs to be fast. If a cache server has issues, you can’t retry. (2) Up and warm: It needs to hold the majority of the critical data. If you lose it, it would surely bring down the backend systems with too much load. (3) Consistency: It should never hold stale or incorrect data. “A lot of the techniques mentioned in this article are supported by our open source meta-memcache cache client.”
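
The article's techniques are implemented in the open-source meta-memcache client; as a toy illustration of requirement (1) — no retries when a cache server has issues — here is a gutter-style fallback pool with short TTLs, a pattern from Meta's memcache literature (the names and structure below are mine, not the library's API):

```python
import time

primary = {}             # main cache pool (stands in for a memcache cluster)
gutter = {}              # small fallback pool holding short-TTL entries
primary_healthy = True   # flipped by health checks in a real client

def cache_get(key, fetch, gutter_ttl=10):
    if primary_healthy:
        if key in primary:
            return primary[key]
        value = fetch(key)
        primary[key] = value
        return value
    # Primary pool is unhealthy: don't retry against it. Serve from the
    # gutter pool so the backend isn't crushed by the full read load,
    # with TTLs short enough that stale data ages out quickly.
    hit = gutter.get(key)
    if hit and hit[1] > time.time():
        return hit[0]
    value = fetch(key)
    gutter[key] = (value, time.time() + gutter_ttl)
    return value
```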

featured in #486


How DoorDash Standardized And Improved Microservices Caching

- Jason Fan, Lev Neiman tl;dr: DoorDash's expanding microservices architecture led to challenges in interservice traffic and caching. The article details how DoorDash addressed these challenges by developing a library to standardize caching, enhancing performance without altering existing business logic. Key features include layered caches, runtime feature flag control, and observability with cache shadowing. The authors also provide guidance on when to use caching.
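
A minimal layered-cache sketch in the spirit of the post (illustrative Python, not DoorDash's actual library; the flag and layer names are made up):

```python
from collections import OrderedDict

class LocalLRU:
    """Layer 1: a tiny in-process LRU."""
    def __init__(self, capacity=1024):
        self.capacity, self.data = capacity, OrderedDict()
    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)
        return self.data[key]
    def put(self, key, value):
        self.data[key] = value
        self.data.move_to_end(key)
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)

cache_enabled = True   # runtime feature flag: flip off to bypass all layers
local = LocalLRU()
shared = {}            # layer 2: stands in for a shared Redis cache

def layered_get(key, fetch):
    if not cache_enabled:
        return fetch(key)              # kill switch: straight to the source
    value = local.get(key)             # layer 1: in-process
    if value is None:
        value = shared.get(key)        # layer 2: shared
        if value is None:
            value = fetch(key)         # source of truth
            shared[key] = value
        local.put(key, value)
    return value
```

The runtime flag is what lets caching be rolled out (or yanked) per use case without touching business logic.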

featured in #459


FIFO Queues Are All You Need For Cache Eviction

- Juncheng Yang tl;dr: “I will describe a simple, scalable eviction algorithm with three static FIFO queues. Evaluated on 6594 cache traces from 14 companies, we show that S3-FIFO has lower miss ratios than 12 state-of-the-art algorithms designed in the past decades. Moreover, S3-FIFO’s efficiency is robust — it has the lowest mean miss ratio on 10 of the 14 datasets. The use of FIFO queues enables S3-FIFO to achieve good scalability with 6× higher throughput compared to optimized LRU in cachelib at 16 threads.”
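
A compact sketch of the mechanism (heavily simplified from the paper, which nails down queue sizing and lock-free operation; this toy version only shows the three-queue flow):

```python
from collections import deque, OrderedDict

class S3FIFO:
    """Toy S3-FIFO: small probationary FIFO, main FIFO, ghost FIFO."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.small_target = max(1, capacity // 10)  # ~10% probationary space
        self.small, self.main = deque(), deque()
        self.values, self.freq = {}, {}
        self.ghost = OrderedDict()  # keys recently evicted from small

    def get(self, key):
        if key in self.values:
            self.freq[key] = min(self.freq[key] + 1, 3)  # capped counter
            return self.values[key]
        return None

    def put(self, key, value):
        if key in self.values:
            self.values[key] = value
            return
        while len(self.values) >= self.capacity:
            self._evict()
        self.values[key], self.freq[key] = value, 0
        if key in self.ghost:        # seen recently: skip probation
            del self.ghost[key]
            self.main.append(key)
        else:
            self.small.append(key)

    def _evict(self):
        if len(self.small) >= self.small_target:
            key = self.small.popleft()
            if self.freq[key] > 0:   # re-accessed while probationary: promote
                self.main.append(key)
            else:                    # one-hit wonder: drop, remember in ghost
                self.ghost[key] = None
                while len(self.ghost) > self.capacity:
                    self.ghost.popitem(last=False)
                del self.values[key], self.freq[key]
        else:
            key = self.main.popleft()
            if self.freq[key] > 0:   # second chance, decaying the counter
                self.freq[key] -= 1
                self.main.append(key)
            else:
                del self.values[key], self.freq[key]
```

One-hit wonders never survive the small queue, which is how the design filters them out without LRU's per-hit list reordering.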

featured in #443


Architecture Patterns: Caching

- Kislay Verma tl;dr: "Depending on the type of application, the type of data, and the expectation of failure, there are several strategies that can be applied for caching." Kislay discusses the levels of a system's architecture where caching commonly occurs and various caching strategies, such as read-through, write-through, and write-behind.
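
To make the write-side trade-off concrete (an illustrative sketch, not from the post): write-through pays store latency on every write in exchange for a cache that never runs ahead of the database, while write-behind acknowledges immediately and flushes asynchronously, accepting a window where the store lags:

```python
import queue, threading

db, cache = {}, {}       # stand-ins for the primary store and the cache

def write_through(key, value):
    db[key] = value      # store and cache are updated together, so a
    cache[key] = value   # subsequent read never sees unpersisted data

_pending = queue.Queue()

def write_behind(key, value):
    cache[key] = value           # acknowledge once the cache is written...
    _pending.put((key, value))   # ...and persist in the background

def _flusher():
    while True:
        key, value = _pending.get()
        db[key] = value          # a real system batches, coalesces, retries
        _pending.task_done()

threading.Thread(target=_flusher, daemon=True).start()
```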

featured in #302