Include Only Relevant Details In Tests
- Dagang Wei tl;dr: "A good test should include only details relevant to the test, while hiding noise:" Dagang illustrates this with an example test built around an embedded function full of noise, making it hard to tell which details actually matter to the assert statement and the function under test.
featured in #461
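A minimal Python sketch of that idea (the User model and make_user helper are hypothetical, not from Dagang's post): a factory fills in valid defaults so each test spells out only the one detail it is actually about.

```python
import unittest
from dataclasses import dataclass

# Hypothetical model, for illustration only; the point is the helper pattern.
@dataclass
class User:
    name: str
    email: str
    age: int
    country: str

def make_user(**overrides) -> User:
    """Build a valid User, letting a test override only the fields it cares about."""
    fields = {"name": "any-name", "email": "any@example.com", "age": 30, "country": "US"}
    fields.update(overrides)
    return User(**fields)

def is_adult(user: User) -> bool:
    return user.age >= 18

class AdultCheckTest(unittest.TestCase):
    def test_seventeen_is_not_adult(self):
        # Only `age` is relevant here; every other field is hidden noise.
        self.assertFalse(is_adult(make_user(age=17)))

    def test_eighteen_is_adult(self):
        self.assertTrue(is_adult(make_user(age=18)))

if __name__ == "__main__":
    unittest.main()
```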
10 Things We've Learned About A/B Testing For Startups
- Ian Vanagas tl;dr: “In this week’s issue, we explore the secrets of running truly successful A/B tests (and some pitfalls to avoid).” These include: (1) Embrace failure. (2) Know the five traits of a good A/B test. (3) Use the “right place, right time” rule. (4) Create a proposal system. (5) Understand significance. And more.
featured in #454
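On point (5), one conventional way to check significance is a two-proportion z-test. The stdlib-only sketch below is illustrative and not from the article; the ab_significance helper and its example traffic numbers are made up.

```python
from math import sqrt, erfc

def ab_significance(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-proportion z-test: returns (z, two-sided p-value).

    A "significant" result (e.g. p < 0.05) only means the observed difference
    is unlikely under the null hypothesis of equal conversion rates."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided tail of the standard normal
    return z, p_value

# e.g. control converts 200/5000, variant converts 260/5000:
z, p = ab_significance(200, 5000, 260, 5000)
print(f"z = {z:.2f}, p = {p:.4f}")  # z ≈ 2.86, p ≈ 0.004
```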
Simplifying Fluffy Constructors In Unit Tests
- Brian Kihoon Lee tl;dr: Brian discusses the challenges of writing unit tests that become bloated with unnecessary details. “A very common problem is that, over time, objects accumulate fields and subobjects, until it takes significant effort just to construct an object.” To address this, he proposes two solutions: (1) Factory methods: hide irrelevant details, making it easier to write and read tests. (2) Domain-specific languages: reduce syntactic fluff, making the code more readable and maintainable.
featured in #451
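A rough Python sketch of the factory-method idea, with hypothetical Order, Customer, and Address objects standing in for Brian's examples: each factory supplies valid defaults for its level of nesting, so a test touching one field mentions only that field.

```python
from dataclasses import dataclass

# Hypothetical domain objects, for illustration only.
@dataclass
class Address:
    street: str
    city: str
    zip_code: str

@dataclass
class Customer:
    name: str
    address: Address

@dataclass
class Order:
    customer: Customer
    total_cents: int
    currency: str

# Factory methods: each level fills in valid defaults, so constructing a
# deeply nested object no longer takes "significant effort" in every test.
def an_address(**overrides) -> Address:
    fields = {"street": "1 Any St", "city": "Anytown", "zip_code": "00000"}
    fields.update(overrides)
    return Address(**fields)

def a_customer(**overrides) -> Customer:
    fields = {"name": "Any Customer", "address": an_address()}
    fields.update(overrides)
    return Customer(**fields)

def an_order(**overrides) -> Order:
    fields = {"customer": a_customer(), "total_cents": 1000, "currency": "USD"}
    fields.update(overrides)
    return Order(**fields)

# A test about currency handling now reads as exactly that:
order = an_order(currency="EUR")
assert order.currency == "EUR"
```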
When, Why, And How GitHub And GitLab Use Feature Flags
- Ian Vanagas tl;dr: Ian discusses several benefits of feature flags, such as reduced stress on developers, fewer failed deployments, and a higher rate of shipping features. GitLab calculated that fixing an issue without flags is as time-consuming as "developing a whole new feature." The article explores the advantages of feature flags over long-living feature branches for collaboration. Feature flags keep code changes small, make reviews easier, and limit merge conflicts. Both GitHub and GitLab use feature flags not just based on users but also on "actors" like organizations, teams, and repositories to create consistent experiences; a minimal sketch of this actor-based bucketing follows below.
featured in #445
featured in #444
featured in #441
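The actor-based approach from the feature-flags piece above can be sketched in a few lines: hash the flag name together with an actor identifier (an organization, team, or repository rather than an individual user) into a stable bucket. This is an illustrative Python sketch, not GitHub's or GitLab's actual implementation.

```python
import hashlib

def flag_enabled(flag_name: str, actor_type: str, actor_id: str,
                 rollout_percent: float) -> bool:
    """Deterministic percentage rollout keyed on an 'actor'.

    Hashing (flag, actor) means the same org/team/repo always lands in the
    same bucket, so everyone sharing that actor sees a consistent experience."""
    key = f"{flag_name}:{actor_type}:{actor_id}".encode()
    bucket = int(hashlib.sha256(key).hexdigest(), 16) % 100
    return bucket < rollout_percent

# Roll a flag out to 25% of organizations rather than 25% of users:
print(flag_enabled("new_merge_ui", "organization", "org_42", 25))
```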
Fuzz Testing Is The Best Thing To Happen To Our Application Tests
- Andrei Pechkurov tl;dr: The team at QuestDB faced challenges with segfaults, data corruption, and concurrency bugs. To address these, they implemented fuzz testing, an automated software testing technique that provides invalid or unexpected data to a program while monitoring for exceptions. This article details the process of introducing fuzz testing, which revealed critical issues and led to more robust database performance. The team also collaborated with SQLancer, a tool for testing SQL Database Management Systems, to uncover issues in their SQL engine.
featured in #441
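In miniature, a fuzz harness is a loop that feeds random inputs to a function and checks invariants. The seeded Python sketch below is only a shape-of-the-thing illustration; parse_csv_line is a hypothetical target, and QuestDB's real harness is far more involved.

```python
import random
import string

def parse_csv_line(line: str) -> list[str]:
    # Hypothetical function under test; imagine a hand-rolled parser with
    # edge-case bugs around quoting and empty fields.
    return line.split(",")

def fuzz(iterations: int = 10_000, seed: int = 0) -> None:
    rng = random.Random(seed)  # seeded, so any failure is reproducible
    alphabet = string.printable
    for i in range(iterations):
        line = "".join(rng.choice(alphabet) for _ in range(rng.randint(0, 64)))
        try:
            fields = parse_csv_line(line)
            # Invariant: joining the fields back must round-trip the input.
            assert ",".join(fields) == line, f"round-trip broke on {line!r}"
        except AssertionError:
            print(f"iteration {i}: invariant violated for input {line!r}")
            raise
        except Exception as exc:
            print(f"iteration {i}: crash {exc!r} on input {line!r}")
            raise

fuzz()
```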
A/B Testing Examples From Airbnb And YC's Top Companies
- Ian Vanagas tl;dr: Ian provides a comprehensive look at A/B testing examples from various successful companies, including Monzo, Instacart, Coinbase, Airbnb, and Convoy. The post explores different approaches to A/B testing, such as Monzo's low-risk "pellets" strategy, Instacart's complex sampling problem-solving, Coinbase's scaling of tests, Airbnb's interleaving and dynamic p-values, and Convoy's Bayesian approach.
featured in #437
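To make the Bayesian angle concrete: with Beta(1, 1) priors over conversion rates, the posterior probability that a variant beats control can be estimated by sampling. The sketch below is a generic illustration, not Convoy's actual methodology.

```python
import random

def prob_b_beats_a(conv_a, n_a, conv_b, n_b, samples=100_000, seed=0):
    """Monte-Carlo estimate of P(rate_B > rate_A) under Beta(1, 1) priors.

    With conversions modeled as Binomial, each rate's posterior is
    Beta(conversions + 1, failures + 1); sample both and compare."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(samples):
        rate_a = rng.betavariate(conv_a + 1, n_a - conv_a + 1)
        rate_b = rng.betavariate(conv_b + 1, n_b - conv_b + 1)
        wins += rate_b > rate_a
    return wins / samples

# Same data as a frequentist test, but the answer is a direct probability:
print(f"P(B beats A) ≈ {prob_b_beats_a(200, 5000, 260, 5000):.3f}")  # ≈ 0.998
```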
A Software Engineer's Guide To A/B Testing
- Lior Neu-ner tl;dr: This guide provides an introduction to A/B testing for software engineers. It explains the basics of A/B testing, including how to devise, implement, monitor, and analyze tests, and answers common questions about A/B testing. The guide also lists conditions under which you may want to avoid A/B testing, such as lack of traffic, high implementation costs, and ethical considerations. The post concludes with a launch checklist for A/B tests.
featured in #434
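The "lack of traffic" caveat can be made concrete with a standard sample-size estimate for a two-proportion test. The helper below is an illustrative sketch with textbook defaults (95% confidence, 80% power), not a formula taken from the guide.

```python
from math import ceil

def sample_size_per_variant(base_rate: float, min_detectable_lift: float,
                            z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Rough per-variant sample size for a two-proportion test.

    Uses n ≈ 2 * (z_α/2 + z_β)² * p(1 - p) / δ², where δ is the absolute
    difference in conversion rate you want to be able to detect."""
    delta = base_rate * min_detectable_lift
    p_bar = base_rate + delta / 2  # average rate under the alternative
    n = 2 * (z_alpha + z_beta) ** 2 * p_bar * (1 - p_bar) / delta ** 2
    return ceil(n)

# Detecting a 10% relative lift on a 4% baseline takes serious traffic:
print(sample_size_per_variant(0.04, 0.10))  # ≈ 39,000 users per variant
```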