
Endor Labs, the application security startup backed by more than $208 million in venture funding, today launched AURI, a platform that embeds real-time security intelligence directly into the AI coding tools that are reshaping how software gets built. The product is available free to individual developers and integrates natively with popular AI coding assistants including Cursor, Claude, and Augment through the Model Context Protocol (MCP).
The announcement arrives against a sobering backdrop. While 90% of development teams now use AI coding assistants, research published in December by Carnegie Mellon University, Columbia University, and Johns Hopkins University found that leading models produce functionally correct code only about 61% of the time — and just 10% of that output is both functional and secure.
"Even though AI can now produce functionally correct code 61% of the time, only 10% of that output is both functional and secure," Endor Labs CEO Varun Badhwar told VentureBeat in an exclusive interview. "These coding agents were trained on open source code from across the internet, so they've learned best practices — but they've also learned to replicate a lot of the same security problems of the past."
That gap between code that works and code that is safe defines the market AURI is designed to capture — and the urgency behind its launch.
The security crisis hiding inside the AI coding revolution
To understand why Endor Labs built AURI, it helps to understand the structural problem at the heart of AI-assisted software development. AI coding models are trained on vast repositories of open-source code scraped from across the internet — code that includes not only best practices but also well-documented vulnerabilities, insecure patterns, and flaws that may not be discovered for years after the code was originally written.
Badhwar, a repeat cybersecurity entrepreneur who previously built RedLock (acquired by Palo Alto Networks), founded Endor Labs four years ago with Dimitri Stiliadis. The original thesis was straightforward: developers were becoming "software assemblers," writing less original code and importing most components from open source repositories. Then came the explosion of AI-powered coding tools, which Badhwar described as "the once in a generation opportunity of how to rewrite software development life cycle powered by AI."
The productivity gains are real — more efficiency, faster time to market, and the democratization of software creation beyond trained engineers. But the security consequences are potentially devastating. New vulnerabilities are discovered every day in code that may have been written a decade ago, and that constantly evolving threat intelligence is not easily available to the AI models generating new code.
"Every day, every hour, new vulnerabilities are found in software that might have been written 5, 10, 12 years ago — and that information isn't easily available to the models," Badhwar explained. "If you started filtering out anything that ever had a vulnerability, you'd have no code left to train on."
The result is a feedback loop: AI tools generate code at unprecedented speed, much of it modeled on insecure patterns, and security teams scramble to keep up. Traditional scanning tools, designed for a world where humans wrote and reviewed code at human speed, are increasingly overmatched.
How AURI traces vulnerabilities through every layer of an application
AURI's core technical differentiator is what Endor Labs calls its "code context graph" — a deep, function-level map of how an application's first-party code, open source dependencies, container layers, and AI models interconnect. Where competitors like Snyk and GitHub's Dependabot examine what libraries an application imports and cross-reference them against known vulnerability databases, Endor Labs traces exactly how and where those components are actually used, down to the individual line of code.
"We have this code intelligence graph that understands not just what libraries and dependencies you use, but pinpoints exactly how, where, and in what context they're used — down to the specific line of code where you're calling a piece of functionality that has a vulnerability," Badhwar said.
He illustrated the difference with a concrete example. A developer might import a large library like an AWS SDK but only call two services comprising 10 lines of code. The remaining 99,000 lines in that open source library are unreachable by the application. Traditional tools flag every known vulnerability across the entire library. AURI's full-stack reachability analysis trims those irrelevant findings away.
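The core idea behind that reachability analysis can be sketched in a few lines of Python. This is an illustrative toy, not Endor Labs' implementation: the function names and the call graph are invented, and a production system would build the graph from actual program analysis rather than a hand-written dictionary.

```python
from collections import deque

def reachable_functions(call_graph: dict[str, list[str]],
                        entry_points: list[str]) -> set[str]:
    """Walk the call graph breadth-first from the app's entry points."""
    seen = set(entry_points)
    queue = deque(entry_points)
    while queue:
        fn = queue.popleft()
        for callee in call_graph.get(fn, []):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return seen

# Toy app: it imports a large library but only calls two of its functions.
call_graph = {
    "app.main": ["lib.upload", "lib.download"],
    "lib.upload": ["lib._sign_request"],
    "lib.download": ["lib._sign_request"],
    # Thousands of other library functions exist but are never called:
    "lib.legacy_xml_parse": ["lib._unsafe_entity_expand"],  # hypothetical CVE here
}

reached = reachable_functions(call_graph, ["app.main"])
print("lib.legacy_xml_parse" in reached)  # False: the vulnerable code is unreachable
```

A traditional scanner would flag the CVE in `lib.legacy_xml_parse` simply because the library is present; a reachability-aware tool suppresses the finding because no path from the application's entry points ever touches it.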
Building that capability required significant investment. Endor Labs hired 13 PhDs specializing in program analysis, many of whom previously built similar technology internally at companies like Meta, GitHub, and Microsoft. The company has indexed billions of functions across millions of open source packages and created over half a billion embeddings to identify the provenance of copied code, even when function names or structures have been changed.
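The provenance-matching idea — recognizing copied code even after renaming — can be illustrated with a deliberately crude stand-in for a learned embedding. The snippet below uses a bag of identifier tokens and cosine similarity; Endor Labs' half-billion real embeddings are far more robust to rewrites, but the matching principle is the same.

```python
import math
import re
from collections import Counter

def embed(source: str) -> Counter:
    """Crude 'embedding': a bag of identifier tokens. It survives renaming the
    function itself; real learned embeddings tolerate much heavier rewrites."""
    return Counter(re.findall(r"[a-zA-Z_]\w*", source))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# A known open source function, a copy with the function renamed, and noise.
original  = "def parse_header(data): length = data[0]; return data[1:length+1]"
copied    = "def read_hdr(data): length = data[0]; return data[1:length+1]"
unrelated = "def add(a, b): return a + b"

print(cosine(embed(original), embed(copied)))     # high: a near-verbatim copy
print(cosine(embed(original), embed(unrelated)))  # low: different code
```

Scaled up to billions of indexed functions, a nearest-neighbor lookup over such embeddings lets a scanner tie a pasted snippet back to the open source package it came from, and therefore to that package's vulnerability history.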
The platform combines this deterministic analysis with agentic AI reasoning. Specialized agents work together to detect, triage, and remediate vulnerabilities automatically, while multi-file call graphs and dataflow analysis detect complex business logic flaws that span multiple components. The result, according to Endor Labs, is an average 80% to 95% reduction in security findings for enterprise customers — trimming away what Badhwar called "tens of millions of dollars a year in developer productivity" lost to investigating false positives.
A free tier for developers, a paid platform for the enterprise
In a strategic move aimed at rapid adoption, Endor Labs is offering AURI's core functionality free to individual developers through an MCP server that integrates directly with popular IDEs including VS Code, Cursor, and Windsurf. The free tier requires no credit card and no sign-up process.

"The idea is that there's no policy, no administration, no customization. It just helps your code generation tools stop creating more vulnerabilities," Badhwar said.
Privacy-conscious developers will note a key architectural choice: the free product runs entirely on the developer's machine. Only non-proprietary vulnerability intelligence is pulled from Endor Labs' servers. "All of your code stays local and is scanned locally. It never gets copied into AURI or Endor Labs or anything else," Badhwar explained.
The enterprise version adds the features large organizations need: full customization, policy configuration, role-based access control for teams of thousands of developers, and integration across CI/CD pipelines. Enterprise pricing is based on the number of developers and the volume of scans. Deployment options include local scanning, ephemeral cloud containers, and on-premises Kubernetes clusters with full tenant isolation — flexibility Badhwar said is "the most any vendor offers in this space."
The freemium approach mirrors the playbook that worked for developer tools companies like GitHub and Atlassian: win individual developers first, then expand into their organizations. But it also reflects a practical reality. In a world where AI coding agents are proliferating across every team, Endor Labs needs to be wherever code is being written — not waiting behind a procurement process.
"Over 97% of vulnerabilities flagged by our previous tool weren't reachable in our application," said Travis McPeak, who leads security at Cursor, in a statement sent to VentureBeat. "AURI by Endor Labs shows the few vulnerabilities that are impactful, so we patch quickly, focusing on what matters."
Why Endor Labs says independence from AI coding tools is essential
The application security market is increasingly crowded. Snyk, GitHub Advanced Security, and a growing number of startups all compete for developer attention. Even the AI model providers themselves are entering the fray: Anthropic recently announced a code security product built into Claude, a move that sent ripples through the market.
Badhwar, however, framed Anthropic's announcement as validation rather than threat. "That's one of the biggest validations of what we do, because it says code security is one of the hottest problems in the market," he told VentureBeat. The deeper question, he argued, is whether enterprises want to trust the same tool generating code to also review it.
"Claude is not going to be the only tool you use for agentic coding. Are you going to use a separate security product for Cursor, a separate one for Claude, a separate one for Augment, and another for Gemini Code Assist?" Badhwar said. "Do you want to trust the same tool that's creating the software to also review it? There's a reason we've always had reviewers who are different from the developers."
He outlined three principles he believes will define effective security in the agentic era: independence (security review must be separate from the tool that generated the code), reproducibility (findings must be consistent, not probabilistic), and verifiability (every finding must be backed by evidence). It is a direct challenge to purely LLM-based approaches, which Badhwar characterized as "completely non-deterministic tools that you have no control over in terms of having verifiability of findings, consistency."
AURI's approach combines LLMs for what they do best — reasoning, explanation, and contextualization — with deterministic tools that provide the consistency enterprises require. Beyond detection, the platform simulates upgrade paths and tells developers which remediation route will work without introducing breaking changes, a step beyond what most competitors offer. Developers can then execute those fixes themselves or route them to AI coding agents with confidence that the changes have been deterministically validated.
Real-world results show AURI can already find zero-day vulnerabilities
Endor Labs has already demonstrated AURI's capabilities in high-profile scenarios. In February 2026, the company announced that AURI had identified and validated seven security vulnerabilities in OpenClaw, the popular agentic AI assistant, which were later acknowledged by the OpenClaw development team. As reported by Infosecurity Magazine, OpenClaw subsequently patched six of the vulnerabilities, which ranged from high-severity server-side request forgery bugs to path traversal and authentication bypass flaws.
"These are zero days. They've never been found, but AURI did an incredible job of finding those," Badhwar said. The company has also been detecting active malware campaigns in ecosystems like NPM, including the Shai-Hulud campaign, which it has tracked for several months.
The company is well-capitalized to sustain its push. Endor Labs closed an oversubscribed $93 million Series B round in April 2025 led by DFJ Growth, with participation from Salesforce Ventures, Lightspeed Venture Partners, Coatue, Dell Technologies Capital, Section 32, and Citi Ventures. The company reported 30x annual recurring revenue growth and 166% net revenue retention since its Series A just 18 months earlier. Its platform now protects more than 5 million applications and runs over 1 million scans each week for customers including OpenAI, Cursor, Dropbox, Atlassian, Snowflake, and Robinhood.
Several dozen enterprise customers already use Endor Labs to accelerate compliance with frameworks including FedRAMP, NIST standards, and the European Cyber Resilience Act — a growing priority as regulators increasingly treat software supply chain security as a matter of national security.
The bet that security can keep pace with autonomous software agents
The broader question hanging over AURI's launch — and over the application security industry as a whole — is whether security tooling can evolve fast enough to match the pace of AI-driven development. Critics of agentic security warn that the industry is moving too quickly, granting AI agents permissions across critical systems without fully understanding the risks. Badhwar acknowledged the concern but argued that resistance is futile.
"I've seen this play out when I was building cloud security products, and people were fearful of moving to AWS," he said. "There was a perception of control when it was in your data center. Yet, guess what? That was the biggest movement of its time, and we as an industry built the right technology and security tooling and visibility around it to make ourselves comfortable."
For Badhwar, the most exciting implication of agentic development is not the new risks it creates but the old problems it can finally solve. Security teams have spent decades struggling to get developers to prioritize fixing vulnerabilities over building features. AI agents, he argued, do not have that problem — if you give them the right instructions and the right intelligence, they simply execute.
"Security has always struggled for lack of a developer's attention," Badhwar said. "But we think you can get an AI agent that's writing software's attention by giving them the right context, integrating into the right workflows, and just having them do the right thing for you, so you don't take an automation opportunity and make it a human's problem."
It is a characteristically optimistic framing from a founder who has built his career at the intersection of tectonic technology shifts and the security gaps they leave behind. Whether AURI can deliver on that vision at the scale the AI coding revolution demands remains to be seen. But in a world where machines are writing code faster than humans can review it, the alternative — hoping the models get security right on their own — is a bet few enterprises can afford to make.

