tl;dr: “As our ML models grow larger and their (pre-)training sets reach inscrutable sizes, people are increasingly interested in machine unlearning: editing away undesired things like private data, stale knowledge, copyrighted material, toxic or unsafe content, dangerous capabilities, and misinformation, without retraining models from scratch.” Ken provides an introduction.