“Tokenmaxxing” is making developers less productive than they think

There’s an old saw in management: what you measure matters. And you usually get more of whatever you measure.
Software engineers have debated productivity metrics for decades, starting with lines of code. But with the new generation of AI coding agents delivering more code than ever, it’s less clear what their managers should be measuring.
Huge token budgets – essentially the amount of AI processing power a developer is allowed to consume – have become a badge of honor among Silicon Valley developers, but that’s a very strange way to think about productivity. Measuring input to the process makes little sense if you presumably care more about the output. It might make sense if you’re trying to encourage more AI adoption (or sell tokens), but not if you’re trying to become more efficient.
Consider the evidence from a new class of companies working in the “developer productivity insights” space. They find that developers using tools like Claude Code, Cursor, and Codex generate far more accepted code than before. But they also find that engineers must return to rework that accepted code far more often than before, undermining claims of increased productivity.
Alex Circei, the CEO and founder of Waydev, builds an intelligence layer to monitor these dynamics; his company works with 50 clients employing more than 10,000 software engineers. (Circei has contributed to TechCrunch in the past, but this reporter had never met him before.)
He says engineering managers see code acceptance rates of 80% to 90% (that is, the share of AI-generated code that developers approve and retain), but they miss the churn that occurs when engineers have to rework that code in subsequent weeks, which drops real-world adoption rates to between 10% and 30% of generated code.
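The gap between the two figures comes down to simple arithmetic. A minimal sketch, using hypothetical numbers in the ranges Circei describes (this is not Waydev's actual formula):

```python
def durable_adoption(accepted_lines: int, churned_lines: int, generated_lines: int) -> float:
    """Share of AI-generated code that is both accepted and survives later rework."""
    surviving = accepted_lines - churned_lines
    return surviving / generated_lines

# 10,000 AI-generated lines; 8,500 accepted on first review (an 85% headline rate),
# but 6,500 of those are reworked or deleted in the following weeks.
rate = durable_adoption(accepted_lines=8_500, churned_lines=6_500, generated_lines=10_000)
print(f"{rate:.0%}")  # → 20%
```

The headline acceptance rate looks impressive; the durable adoption rate, once churn is subtracted, lands squarely in the 10% to 30% band Circei reports.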
The rise of AI coding tools prompted Waydev, founded in 2017 to provide developer analytics, to completely rework its platform over the past six months. The company is now releasing tools that track the metadata generated by AI agents and provide analytics on the quality and cost of their code, helping engineering managers better understand both AI adoption and effectiveness.
While analytics companies have an incentive to highlight the problems they encounter, evidence is mounting that large organizations are still figuring out how to use AI tools efficiently. Big companies are taking notice: Atlassian acquired DX, another tech intelligence startup, for $1 billion last year to help its customers understand the return on their coding tool investments.
The data from across the industry tells a consistent story: more code is being written, but a disproportionate share of it isn’t sticking.
GitClear, another company in this space, published a report in January finding that AI tools increased productivity, but also that “regular AI users averaged 9.4x more code churn than their non-AI counterparts” – more than double the productivity gains the tools delivered.
Faros AI, a technical analytics platform, drew on two years of customer data for a report published in March 2026. The finding: code churn (lines of code removed versus lines added) increased by 861% under high AI adoption.
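To make that percentage concrete: an 861% increase means the removed-to-added ratio grew almost tenfold. The sketch below uses invented absolute numbers chosen to reproduce the reported figure; Faros AI's exact methodology is not public.

```python
def churn_ratio(lines_removed: int, lines_added: int) -> float:
    """Lines of code removed per line added in a given period."""
    return lines_removed / lines_added

# Hypothetical team: same volume of new code, very different amounts thrown away.
before = churn_ratio(lines_removed=200, lines_added=1_000)    # 0.20
after = churn_ratio(lines_removed=1_922, lines_added=1_000)   # ~1.92
increase = (after - before) / before
print(f"{increase:.0%}")  # → 861%
```

At the "after" ratio, nearly two lines are being deleted for every line added – code being written, then unwritten.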
Jellyfish, which bills itself as an intelligence platform for AI-integrated engineering, collected data from 7,548 engineers in the first quarter of 2026. The company found that the engineers with the largest token budgets produced the most pull requests (proposed changes to a shared codebase), but productivity did not scale with spend: they achieved twice the throughput at ten times the token cost. In other words, the tools generate volume, not value.
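Put another way, doubling output on ten times the tokens means each pull request costs five times as much. A quick check, with made-up absolute numbers that match Jellyfish's reported ratios:

```python
# Baseline engineer vs. a heavy token spender: 2x the PRs at 10x the tokens.
baseline_prs, baseline_tokens = 10, 1_000_000
heavy_prs, heavy_tokens = 20, 10_000_000

baseline_cost_per_pr = baseline_tokens / baseline_prs  # 100,000 tokens per PR
heavy_cost_per_pr = heavy_tokens / heavy_prs           # 500,000 tokens per PR

print(heavy_cost_per_pr / baseline_cost_per_pr)  # → 5.0
```

Whatever the real absolute figures, the ratio is what matters: the marginal pull request from a maxed-out token budget is far more expensive than the baseline one.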
These statistics ring true when you talk to developers, who report code review and technical debt piling up even as they enjoy the freedom of the new tools. A common observation is the gap between senior and junior engineers, with juniors accepting far more AI-generated code and, as a result, facing far more rewrites.
But even as developers try to understand what exactly their agents are up to, they don’t expect to go back to their old workflows anytime soon.
“This is a new era of software development, and you have to adapt, and you are forced to adapt as a company,” Circei told TechCrunch. “It’s not like it’s going to be a cycle that passes.”




