
METR Study Finds AI Tools Slow Down Experienced Developers

Summary

A rigorous randomized study by METR found that experienced developers were 19% slower when using AI coding tools on codebases they knew well. The result challenges the dominant narrative that AI universally accelerates software development and has sparked intense debate about how developer productivity should be measured.

What Happened

METR published a controlled study in which experienced open-source developers worked on real tasks in codebases they were already familiar with. Counter to expectations, and counter to the developers' own self-assessment (most believed AI had made them faster), the AI-assisted group completed tasks 19% slower on average. The study went viral alongside a contemporaneous Hacker News post from an 18-year veteran developer who reported being replaced by two AI-assisted juniors.

Why It Matters

The study introduced critical nuance into the AI productivity debate. It suggests that AI tools may help most on unfamiliar or boilerplate tasks but can slow experts down on work where deep codebase knowledge matters. The finding that developers believed they were faster even when they were not raises questions about self-reported AI productivity gains across the industry.

Tags

#productivity #software-development #evaluation #benchmarks