
Qodo raises $70M to validate code as AI scales


As AI coding tools generate billions of lines of code every month, a new bottleneck is emerging: making sure that software actually works as intended. Qodo is one of the startups building AI agents to review, test, and improve code, betting that validation will define the next phase of software development.

The New York-based startup has raised a $70 million Series B round led by Qumra Capital, bringing its total funding to $120 million. Maor Ventures, Phoenix Venture Partners, S Ventures, Square Peg, Susa Ventures, TLV Partners, Vine Ventures, Peter Welinder (OpenAI), and Clara Shih (Meta) also joined the round.

Qodo aims to be the validation layer for AI-generated code as businesses accelerate adoption of tools like Claude Code. Many teams are finding that releasing code faster does not translate into more reliable or secure software.

While most AI code review tools focus on individual changes, Qodo analyzes how each change affects the entire system, drawing on an organization's standards, history, and risk tolerance to help companies ship AI-generated code with confidence.

Itamar Friedman, who co-founded Visualead and went on to lead a machine vision unit at Alibaba (which acquired Visualead), founded Qodo in 2022, a few months before ChatGPT launched. He told TechCrunch that two formative moments in his career – his time at Mellanox, which was later acquired by Nvidia, and building Visualead – inspired him to start the company.

At Mellanox, where he worked on chip verification using machine learning, he realized that “building systems and verifying them require different methods (different tools, different mindsets).” Later, at Alibaba’s Damo Academy, he watched AI evolve into systems that could reason in human language. By 2021-2022, even before GPT-3.5, it became clear to him that AI would generate a large share of the world’s content – especially code – reinforcing his view that code generation and code verification would require different systems.

A recent study found that while 95% of developers do not fully trust AI-generated code, only 48% review it before committing, underscoring the gap between perception and practice.


“Coding agents are built around LLMs. But for code review and quality, LLMs alone are not enough,” Friedman said. “Quality is contextual. It depends on the organization’s standards, past decisions, and tribal knowledge. An LLM can’t know this on its own. It’s like taking a senior engineer from another company and asking them to review your code – they have no internal context.”

Companies like OpenAI and Anthropic are improving AI capabilities broadly, including in adjacent areas such as code review, but they are focused on general-purpose models rather than this specific problem, Friedman explained. And while there are other startups in the space, many are still early and have not yet been adopted by many companies, the CEO said.

Qodo is leaning on benchmark results to stand out in a crowded market. The company recently ranked No. 1 on Martian’s code review benchmark, scoring 64.3% – more than 10 points ahead of the next competitor and 25 points ahead of Claude Code’s reviewer. The benchmark measures a system’s ability to catch severe bugs and cross-file problems without overwhelming developers with noise.

Last month, it launched Qodo 2.0, a multi-agent code review system with agents that learn the context and conventions of each organization’s codebase.

The company already works with major enterprises such as Nvidia, Walmart, Red Hat, Intuit, and Texas Instruments, as well as tech heavyweights Monday.com and JFrog.

“Each year has had a defining moment – from Copilot to ChatGPT to full automation,” Friedman said. “We are now entering a new phase: from unconstrained generation to validated systems – from intelligence to trusted intelligence. That’s what Qodo was built for.”


