/Database

DuckDB As The New jq

- Paul Gross tl;dr: “Recently, I’ve been interested in the DuckDB project. And one of the amazing features is that it has many data importers included without requiring extra dependencies. This means it can natively read and parse JSON as a database table, among many other formats.” Paul discusses how this has impacted his work. 
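
A minimal sketch of the idea using DuckDB's Python API (the file name and field below are hypothetical); read_json_auto is DuckDB's built-in JSON reader, so no separate parsing dependency is needed:

```python
import duckdb  # pip install duckdb

# 'events.json' and its 'type' field are stand-ins for whatever JSON
# you would otherwise pipe through jq.
rows = duckdb.sql("""
    SELECT type, count(*) AS n
    FROM read_json_auto('events.json')
    GROUP BY type
    ORDER BY n DESC
""").fetchall()
print(rows)
```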

featured in #499


How Figma’s Databases Team Lived To Tell The Scale

- Sammy Steele tl;dr: “The data revealed that some of our tables, containing several terabytes and billions of rows, were becoming too large for a single database. At this size, we began to see reliability impact during Postgres vacuums, which are essential background operations that keep Postgres from running out of transaction IDs and breaking down. Our highest write tables were growing so quickly that we would soon exceed the maximum IO operations per second supported by Amazon’s Relational Database Service. Vertical partitioning couldn’t save us here because the smallest unit of partitioning is a single table. To keep our databases from toppling, we needed a bigger lever.”
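
The bigger lever was horizontal sharding. Purely as a generic illustration of that idea, and not Figma's actual routing layer, rows can be routed to one of several physical databases by hashing a shard key:

```python
import hashlib

NUM_SHARDS = 4  # hypothetical shard count

def shard_for(shard_key: str) -> int:
    """Map a logical key to one of NUM_SHARDS physical databases."""
    digest = hashlib.sha256(shard_key.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % NUM_SHARDS

# Rows sharing a shard key land on the same database, so queries scoped
# to that key only touch one shard.
print(shard_for("file:12345"))
```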

featured in #498


Postgres Is Eating The Database World

- Ruohang Feng tl;dr: “PostgreSQL isn’t just a simple relational database; it’s a data management framework with the potential to engulf the entire database realm. The trend of “Using Postgres for Everything” is no longer limited to a few elite teams but is becoming a mainstream best practice.”

featured in #498


Better Benchmarks Through Graphs

- Marc Brooker tl;dr: “I believe that one of the things that’s holding back databases as an engineering discipline is a lack of good benchmarks, especially ones available at the design stage. The gold standard is designing for and benchmarking against real application workloads, but there are some significant challenges achieving this ideal.” Marc discusses an approach to develop benchmarks that shine light on a database’s design decisions.

featured in #493


How Uber Serves Over 40 Million Reads Per Second From Online Storage Using An Integrated Cache

- tl;dr: “Docstore is Uber’s in-house, distributed database built on top of MySQL. Storing tens of PBs of data and serving tens of millions of requests/second, it is one of the largest database engines at Uber used by microservices from all business verticals. Docstore users and use cases are growing, and so are the request volume and data footprint.” This post discusses the challenges of serving applications that require low-latency read access and high scalability.
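
The post covers Docstore's integrated cache. As a generic illustration of the family of designs such a cache belongs to, and not Uber's implementation, a cache-aside read path looks like this:

```python
import time

cache = {}          # in-memory stand-in for a real caching tier
TTL_SECONDS = 60    # hypothetical freshness window

def db_read(key):
    """Hypothetical stand-in for a read from the backing database."""
    return {"key": key, "loaded_at": time.time()}

def get(key):
    entry = cache.get(key)
    if entry and time.time() - entry[0] < TTL_SECONDS:
        return entry[1]                   # cache hit: no database round trip
    value = db_read(key)                  # cache miss: fall through to the DB
    cache[key] = (time.time(), value)     # populate the cache for later reads
    return value
```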

featured in #491


The Billion Row Challenge (1BRC) - Step-By-Step From 71s To 1.7s

- Marko Topolnik tl;dr: “The main thing I'd like to show you in this post is that a good part of that amazing speed comes from easy-to-grasp, reusable tricks that you could apply in your code as well. Towards the end, I'll also show you some of the magical parts that take it beyond that level.”

featured in #491


Let's Talk About Joins

- Crystal Lewis tl;dr: “In general, there are two ways to link our data, horizontally or vertically. When linking or joining data horizontally we are matching rows by one or more variables (i.e., keys), making a wider dataset. When joining vertically, column names are matched and datasets are stacked on top of each other, making a longer dataset. Joins can be done in many different programs (e.g., SQL, R, Stata, SAS). Most of this post will be applicable to any language, but examples in R will be provided.”
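
The article's examples are in R; as a rough Python/pandas analogue with made-up data, a horizontal join matches rows on a key while a vertical join stacks datasets with matching columns:

```python
import pandas as pd

students = pd.DataFrame({"stu_id": [1, 2], "name": ["Ada", "Grace"]})
scores_w1 = pd.DataFrame({"stu_id": [1, 2], "score": [90, 85]})
scores_w2 = pd.DataFrame({"stu_id": [1, 2], "score": [88, 91]})

# Horizontal join: match rows on the key(s), producing a wider dataset.
wide = students.merge(scores_w1, on="stu_id", how="left")

# Vertical join: match column names and stack rows, producing a longer dataset.
long = pd.concat([scores_w1, scores_w2], ignore_index=True)

print(wide)
print(long)
```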

featured in #481


The One Billion Row Challenge

- Gunnar Morling tl;dr: "Your mission, should you decide to accept it, is deceptively simple: write a Java program for retrieving temperature measurement values from a text file and calculating the min, mean, and max temperature per weather station. There’s just one caveat: the file has 1,000,000,000 rows!"
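
The challenge asks for Java; purely to show what is being computed, here is a naive single-threaded Python baseline over the challenge's measurements.txt, whose lines look like "Hamburg;12.0":

```python
stats = {}  # station -> [min, max, sum, count]

with open("measurements.txt") as f:
    for line in f:
        station, temp = line.rstrip("\n").split(";")
        t = float(temp)
        s = stats.setdefault(station, [t, t, 0.0, 0])
        if t < s[0]: s[0] = t
        if t > s[1]: s[1] = t
        s[2] += t
        s[3] += 1

for station in sorted(stats):
    mn, mx, total, n = stats[station]
    print(f"{station}={mn:.1f}/{total / n:.1f}/{mx:.1f}")
```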

featured in #477


Building A Faster Hash Table For High Performance SQL Joins

- Andrei Pechkurov tl;dr: Andrei delves into QuestDB’s unique hash table, FastMap, designed to enhance SQL execution for JOIN and GROUP BY operations. FastMap employs open addressing and linear probing, optimized for high performance in database environments. It supports variable-size keys and fixed-size values, facilitating efficient data handling and updates. Notably, FastMap operates off-heap, reducing garbage collection pressure to improve performance.
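
FastMap itself lives off-heap and is tuned for QuestDB's execution engine; as a much-simplified illustration of open addressing with linear probing only (not QuestDB's code), a toy map might look like:

```python
class LinearProbingMap:
    """Toy open-addressing hash table using linear probing."""

    def __init__(self, capacity=8):
        self.capacity = capacity
        self.slots = [None] * capacity   # each slot is (key, value) or None
        self.size = 0

    def _find_slot(self, key):
        i = hash(key) % self.capacity
        while self.slots[i] is not None and self.slots[i][0] != key:
            i = (i + 1) % self.capacity  # linear probing: try the next slot
        return i

    def _grow(self):
        old = self.slots
        self.capacity *= 2
        self.slots = [None] * self.capacity
        self.size = 0
        for entry in old:
            if entry is not None:
                self.put(*entry)

    def put(self, key, value):
        if (self.size + 1) * 2 > self.capacity:  # keep the load factor low
            self._grow()
        i = self._find_slot(key)
        if self.slots[i] is None:
            self.size += 1
        self.slots[i] = (key, value)

    def get(self, key, default=None):
        slot = self.slots[self._find_slot(key)]
        return slot[1] if slot is not None else default

m = LinearProbingMap()
m.put("station_a", 1)
m.put("station_a", 2)      # update in place, as a GROUP BY aggregation would
print(m.get("station_a"))  # -> 2
```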

featured in #475


Database Fundamentals

- Tony Solomonik tl;dr: "I tried thinking which database I should choose for my next project, and came to the realization that I don't really know the differences of databases enough. I went to different database websites and saw mostly marketing and words I don't understand. This is when I decided to read the excellent books Database Internals by Alex Petrov and Designing Data-Intensive Applications by Martin Kleppmann. The books piqued my curiosity enough to write my own little database I called dbeel. This post is basically a short summary of these books, with a focus on the fundamental problems a database engineer thinks about in the shower."

featured in #474