The Shelf Life of an AI Study Is About Six Months
There’s a much-referenced study that says if you’re using LLMs for programming, you only think they’re making you faster. Programmers claim they’re more productive with AI tools, but when studied they actually take longer to get the job done. It’s a proper report — all the hard facts you’d need to justify a long discussion of AI hype and AI slop, setting up an essay on why the author won’t be changing their habits any time soon.
But there’s a massive problem with that study — it’s ancient. The sample data was gathered in early 2025. A year can be a long time in technology, but this particular year the AI world has moved at a staggering pace. Last winter, AI coding tools felt like very fancy autocomplete — they worked well provided your problem had already been roughly answered on StackOverflow. Today, they tackle difficult, multi-commit changes to large codebases fully autonomously, while you check your email. As I write this, I have one agent migrating an Obsidian plugin to Svelte 5 and another researching networking models from the early 2000s¹. I’ll check in on them both later, while an agent proofreads this post for me. You simply can’t talk about AI in February 2026 by referencing a study from February 2025. They’re twelve months and a lifetime apart.
Don’t get me wrong — objectively studying these models is important, valuable research. But at this stage, citing “Becker et al. says AI makes you 19% slower” isn’t a counter-argument. It’s a history essay.
Footnotes
1. Don’t ask why, but it’s Ben Gamble’s fault.