AI coding tools may not speed up every developer, study shows

Software engineering workflows have been transformed in recent years by an influx of AI coding tools such as Cursor and GitHub Copilot, which promise to boost productivity by automatically writing lines of code, fixing bugs, and testing changes. These tools are powered by AI models from OpenAI, Google DeepMind, Anthropic, and xAI that have rapidly improved their performance on a range of software engineering benchmarks in recent years.
However, a new study published on Thursday by the nonprofit AI research group METR questions the extent to which today’s AI coding tools improve productivity for experienced developers.
For the study, METR conducted a randomized controlled trial, recruiting 16 experienced open-source developers to complete 246 real tasks on large code repositories to which they regularly contribute. The researchers randomly assigned roughly half of those tasks as “AI-allowed,” giving developers permission to use state-of-the-art AI coding tools such as Cursor Pro, while the other half of the tasks banned the use of AI tools.
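The randomized design described above can be sketched in a few lines. This is an illustrative sketch only; the article does not detail METR’s actual assignment procedure, and the task names and 50/50 split are assumptions for the example.

```python
import random

def assign_conditions(tasks, seed=0):
    """Randomly split a developer's task list into AI-allowed and
    AI-disallowed conditions, as in a simple randomized controlled
    design. Hypothetical sketch; not METR's actual procedure."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    shuffled = tasks[:]        # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return {
        "ai_allowed": shuffled[:midpoint],
        "ai_disallowed": shuffled[midpoint:],
    }

# Example: ten hypothetical repository issues
tasks = [f"issue-{i}" for i in range(1, 11)]
groups = assign_conditions(tasks)
```

Randomizing which tasks allow AI (rather than which developers) lets each developer serve as their own control, which is why the study can compare completion times within the same contributor.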
Before completing their assigned tasks, the developers predicted that using AI coding tools would reduce their completion time by 24%. That was not the case.
“Surprisingly, we find that allowing AI actually increases completion time by 19% – developers are slower when using AI tooling,” the researchers said.
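To make the gap between expectation and outcome concrete, the arithmetic behind those percentages can be worked through with hypothetical completion times. The baseline of 2.0 hours below is an invented number used only to illustrate how a predicted 24% reduction and a measured 19% increase are computed relative to the no-AI time.

```python
def pct_change(baseline, observed):
    """Percentage change in completion time relative to the baseline."""
    return (observed - baseline) / baseline * 100

# Hypothetical times in hours, chosen only to show the arithmetic:
baseline = 2.0                       # time without AI tools
predicted = baseline * (1 - 0.24)    # developers expected a 24% reduction
observed = baseline * (1 + 0.19)     # the study measured a 19% increase

predicted_change = pct_change(baseline, predicted)  # roughly -24
observed_change = pct_change(baseline, observed)    # roughly +19
```

The sign flip between the two results is the study’s headline finding: developers expected to save almost a quarter of their time but instead took nearly a fifth longer.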
Notably, only 56% of the developers in the study had prior experience using Cursor, the main AI tool offered in the study. While almost all of the developers (94%) had experience using some web-based LLMs in their coding workflows, this study was the first time some of them had used Cursor specifically. The researchers note that the developers were trained on using Cursor in preparation for the study.
Nevertheless, METR’s findings raise questions about the supposedly universal productivity gains promised by AI coding tools in 2025. Based on the study, developers should not assume that AI coding tools – specifically what is known as “vibe coders” – will immediately speed up their workflows.
METR researchers point to a few possible reasons why AI slowed developers down rather than speeding them up: when using vibe coders, developers spend far more time prompting the AI and waiting for it to respond than actually coding. AI also tends to struggle in the large, complex codebases used in this test.
The authors of the study are careful not to draw strong conclusions from these findings, explicitly noting that they do not believe AI systems currently fail to speed up many or most software developers. Other large-scale studies have shown that AI coding tools do speed up software engineering workflows.
The authors also note that AI progress has been considerable in recent years, and that they would not expect the same results even three months from now. METR has also found that AI coding tools have significantly improved their ability to complete complex, long-horizon tasks in recent years.
Still, the research offers yet another reason to be skeptical of the promised gains of AI coding tools. Other studies have shown that today’s AI coding tools can introduce mistakes, and in some cases, security vulnerabilities.