featured in #430
Generating Code Without Generating Technical Debt?
- Reka Horvath tl;dr: GPT and other large language models can produce huge volumes of code quickly. This allows for faster prototyping and iterative development, trying out multiple solutions. But it can also leave us with a bigger amount of mess code to maintain… This article explores several ways how to improve the code generated by these powerful tools and how to fit it into your project.featured in #429
featured in #428
featured in #424
featured in #421
featured in #421
All The Hard Stuff Nobody Talks About When Building Products With LLMs
- Phillip Carter tl;dr: (1) Context windows are a challenge with no complete solution. (2) LLMs are slow and chaining is a nonstarter. (3) Prompt engineering is weird and has few best practices. (4) Correctness and usefulness can be at odds. (5) Prompt injection is an unsolved problem.featured in #418
Lua: The Little Language That Could
- Matt Blewitt tl;dr: “Lua is probably my favourite “little language” — a language designed to have low cognitive load, and be easy to learn and use. It’s embedded in a lot of software, such as Redis. It’s also used as a scripting language in games such as World of Warcraft and Roblox via Luau. This post is a brief love letter to the language, with some examples of why I like it so much.”featured in #418
The Case Against Relying Solely On DRY
- Ashley Peacock tl;dr: Instead of being applied when needed, Ashley believes DRY is thrown around anytime duplication is spotted, and this leads to worse code in the long run. Ashley walks through why duplication is not the root of all evil, and why it’s perfectly fine to repeat yourself sometimes.featured in #398
No-Code Has No Future In A World Of AI
- Ravi Parikh tl;dr: Ravi Parikh, CEO of Airplane, discusses how AI-driven software development will dwarf no-code tools' capabilities and eventually make no-code obsolete.featured in #394