
There is an old adage in management: What you measure matters. And, more often than not, you get more of what you measure.
Software engineers have been arguing about productivity metrics for years, starting with lines of code. But with the new generation of AI code assistants generating more code than ever before, what their managers should measure is less clear than ever.
Big AI budgets, specifically the amount of AI compute a developer is allowed to consume, have become a badge of honor among Silicon Valley developers, but this is a strange way to think about productivity. Measuring the inputs to a process makes little sense if what you care about is the outputs. It might make sense if you're trying to promote AI adoption (or sell tokens), but not if you're trying to be successful.
Consider the evidence from a new crop of companies operating in the "developer productivity insights" space. They are finding that developers using tools like Claude Code, Cursor, and Codex are shipping more code than ever before. But they also find that engineers have to go back and rework that accepted code more often than before, eroding the productivity gains.
Alex Circei, CEO and founder of Waydev, is building out tooling to monitor these trends; his company works with 50 different clients employing more than 10,000 software engineers. (Circei has contributed to TechCrunch in the past, but this reporter had never met him.)
He says engineering managers are seeing acceptance rates of 80% to 90% (the share of AI-generated code that developers accept and keep), but that figure misses the rework that happens when engineers have to revise the code in the following weeks, which brings the real retention rate down to between 10% and 30% of generated code.
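To make the gap concrete, here is a minimal sketch of the two metrics described above. This is illustrative arithmetic only, not Waydev's actual methodology; the function names and the sample numbers are hypothetical, chosen to fall inside the ranges the article cites.

```python
# Illustrative sketch: acceptance rate at suggestion time vs. retention
# rate measured weeks later. All names and numbers are hypothetical.

def acceptance_rate(lines_suggested: int, lines_accepted: int) -> float:
    """Share of AI-generated lines a developer accepts when suggested."""
    return lines_accepted / lines_suggested

def retention_rate(lines_suggested: int, lines_surviving: int) -> float:
    """Share of AI-generated lines still in the codebase weeks later."""
    return lines_surviving / lines_suggested

# Hypothetical figures within the ranges reported in the article:
suggested = 1000
accepted = 850    # 85% acceptance, inside the reported 80-90%
surviving = 220   # 22% left after review and rework, inside 10-30%

print(f"acceptance: {acceptance_rate(suggested, accepted):.0%}")  # 85%
print(f"retention:  {retention_rate(suggested, surviving):.0%}")  # 22%
```

The point of separating the two functions is that they measure the same output at different times; a dashboard that only samples at acceptance time will systematically overstate what the AI contributed.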
The rise of AI coding tools prompted Waydev, which was founded in 2017 to provide engineering analytics, to redesign its platform over the past six months. The company is now releasing tools that track the metadata AI agents produce, surfacing the quality and value of their code so that engineering managers get a clearer picture of AI adoption and effectiveness.
Although analytics companies have an incentive to play up the problems they find, the evidence is mounting that large organizations are still figuring out how to use AI tools effectively. Big companies are taking note: Atlassian acquired DX, another company in this space, for $1 billion last year to help its customers understand the return on investment of coding assistants.
Data from various companies tell a consistent story: More code is being written, but less of it is sticking.
GitClear, another company in this space, published a report in January finding that while AI tools increased output, its data showed that "regular AI users have an average of 9.4x higher code churn rates than their non-AI counterparts," meaning far more of that output gets rewritten or thrown away.
Faros AI, an engineering analytics platform, drew on two years of customer data for a March 2026 report. Among the findings: code churn, lines of code deleted relative to lines added, increased 861% among heavy AI users.
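A quick sketch of what that churn ratio measures, assuming the deleted-versus-added definition used in the article. Real tools like Faros AI or GitClear compute this from commit history; the before/after numbers below are made up purely to show what an 861% increase in the ratio looks like.

```python
# Illustrative sketch of a code churn metric. The figures are
# hypothetical; only the definition (deleted vs. added) is from the text.

def churn_ratio(lines_deleted: int, lines_added: int) -> float:
    """Lines deleted relative to lines added over some time window."""
    return lines_deleted / lines_added

# Hypothetical before/after figures producing an 861% increase:
before = churn_ratio(lines_deleted=50, lines_added=1000)     # 0.05
after = churn_ratio(lines_deleted=4805, lines_added=10000)   # 0.4805
increase = (after - before) / before                         # 8.61

print(f"churn before: {before:.2f}, after: {after:.2f}, +{increase:.0%}")
```

Note that churn can rise even while total output rises: if AI assistants add many lines that are later deleted and replaced, both the numerator and the headline productivity numbers go up together.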
Jellyfish, which bills itself as an engineering intelligence platform, collected data on 7,548 engineers in the first quarter of 2026. The company found that engineers with heavy AI usage opened more pull requests (proposed changes to a shared codebase), but the throughput gains did not follow: they doubled their output at 10 times the token spend. In other words, the tools are creating volume, not profit.
These kinds of statistics ring true when you talk to developers, who see code review load and technical debt rising even as they enjoy the speed of the new tools. One of the most common observations is the gap between senior and junior engineers, with the latter accepting more AI-generated code and dealing with more rewriting as a result.
Yet even as managers try to work out what the tools are actually worth, nobody expects them to go away any time soon.
“This is a new era of software development, and you have to change, and you’re forced to change as a company,” Circei told TechCrunch. “It’s not like it’s a cycle that’s going to end.”