tl;dr:“Running experiments is equal parts powerful and terrifying. Powerful because you can validate changes that will transform your product for the better; terrifying because there are so many ways to mess them up. I’ve run hundreds of A/B tests, both in my previous life as a growth engineer at Meta, and on my personal side project. These are some classic mistakes I’ve learned the hard way and how to avoid them.”
tl;dr:This guide provides an introduction to A/B testing for software engineers. It explains the basics of A/B testing, including how to devise, implement, monitor and analyze tests, and answers common questions about A/B testing. The guide also lists conditions under which you may want to avoid A/B testing, such as lack of traffic, high implementation costs, and ethical considerations. The post concludes with a launch checklist for A/B tests.
tl;dr:These tests are needed when one user's interaction with your product impacts how others use it. “Suppose Slack wants to improve the usage of a new video calling feature. Improving the feature's discoverability for a single user will increase their own usage of it, but since they use it with their coworkers, their coworkers will also discover it.”
tl;dr:(1) Including unaffected users in your experiment. (2) Only viewing results in aggregate (aka Simpson's paradox). (3) Conducting an experiment without a predetermined duration. Lior discusses these and 5 more anti-patterns.
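Anti-pattern (2), Simpson's paradox, is easiest to see with numbers. Here is a minimal sketch with hypothetical conversion counts (the segments, variants, and figures are all invented for illustration): variant A beats B within every segment, yet loses in aggregate, because the two variants received different mixes of traffic.

```python
# Hypothetical conversion data: (conversions, users) per segment and variant.
# A wins within each segment, but B wins in aggregate (Simpson's paradox),
# because B's traffic skews toward the higher-converting desktop segment.
data = {
    "mobile":  {"A": (50, 1000), "B": (40, 900)},
    "desktop": {"A": (100, 500), "B": (190, 1000)},
}

def rate(conversions, users):
    return conversions / users

for segment, variants in data.items():
    a = rate(*variants["A"])
    b = rate(*variants["B"])
    print(f"{segment}: A={a:.1%}  B={b:.1%}  -> A wins: {a > b}")

# Aggregate across segments: sum conversions and users per variant.
agg = {v: [sum(x) for x in zip(data["mobile"][v], data["desktop"][v])]
       for v in ("A", "B")}
a_all = rate(*agg["A"])  # 150 / 1500 = 10.0%
b_all = rate(*agg["B"])  # 230 / 1900 ≈ 12.1%
print(f"aggregate: A={a_all:.1%}  B={b_all:.1%}  -> A wins: {a_all > b_all}")
```

The takeaway: always check results per segment as well as in aggregate before declaring a winner.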