
The federal directive ordering all U.S. government agencies to cease using Anthropic technology comes with a six-month phaseout window. That timeline assumes agencies already know where Anthropic's models sit inside their workflows. Most don't today.
Most enterprises wouldn't, either. The gap between what enterprises think they've approved and what's actually running in production is wider than most security leaders realize.
AI vendor dependencies don't stop at the contract you signed; they cascade through your vendors, your vendors' vendors, and the SaaS platforms your teams adopted without a procurement review. Most enterprises have never mapped that chain.
The inventory nobody has run
A January 2026 Panorays survey of 200 U.S. CISOs put a number on the problem: Only 15% said they have full visibility into their software supply chains, up from just 3% a year ago. And 49% had adopted AI tools without employer approval, according to a BlackFog survey of 2,000 workers at companies with more than 500 employees; 69% of C-suite members said they were fine with it.
That's where undocumented AI vendor dependencies accumulate, invisible to the security team until a forced migration makes them everyone's problem.
"If you asked a typical enterprise to produce a dependency graph that includes second- and third-order AI calls, they'd be building it from scratch under pressure," said Merritt Baer, CSO at Enkrypt AI and former Deputy CISO at AWS, in an exclusive interview with VentureBeat. "Most security programs were built for static assets. AI is dynamic, compositional, and increasingly indirect."
When a vendor relationship ends overnight
The directive creates a forced migration unlike anything the federal government has attempted with an AI provider. Any enterprise running critical workflows on a single AI vendor faces the same math if that vendor disappears.
Shadow AI incidents now account for 20% of all breaches, adding as much as $670,000 to average breach costs, IBM's 2025 Cost of a Data Breach Report found. You can't execute a transition plan for infrastructure you haven't inventoried.
Your contract with Anthropic may not exist, but your vendors' contracts might. A CRM platform could have Claude embedded in its analytics engine. A customer service tool might call it on every ticket you process. You didn't sign for that exposure, but you inherited it, and when a vendor cutoff hits upstream, it cascades downstream fast. The enterprise at the end of that chain doesn't know the dependency exists until something breaks or the compliance letter shows up.
Anthropic has said eight of the 10 largest U.S. companies use Claude. Any organization in those companies' supply chains has indirect Anthropic exposure, whether they contracted for it or not. AWS and Palantir, which hold billions in military contracts, may need to reassess their commercial relationships with Anthropic to maintain Pentagon business.
The supply chain risk designation means any company doing business with the Pentagon now has to prove its workflows donât touch Anthropic.
"Models are not interchangeable," Baer told VentureBeat. "Switching vendors changes output formats, latency characteristics, safety filters, and hallucination profiles. That means revalidating controls, not just functionality."
She outlined a sequence that starts with triage and blast radius assessment, moves to behavioral drift analysis, and ends with credential and integration churn. "Rotating keys is the easy part," Baer said. "Untangling hardcoded dependencies, vendor SDK assumptions, and agent workflows is where things break."
The dependencies your logs don't show
A senior defense official described disentangling from Claude as an "enormous pain in the ass," according to Axios. If that's the assessment inside the most well-resourced security apparatus on the planet, the question for enterprise CISOs is straightforward: How long would yours take?
The shadow IT wave that followed SaaS adoption taught security teams about unsanctioned technology risk. Most caught up. They deployed CASBs, tightened SSO, and ran spend analysis. The tools worked because the threat was visible. A new application meant a new login, a new data store, a new entry in the logs.
AI vendor dependencies don't leave those traces.
"Shadow IT with SaaS was visible at the edges," Baer said. "AI dependencies are embedded inside other vendors' features, invoked dynamically rather than persistently installed, non-deterministic in behavior, and opaque. You often don't know which model or provider is actually being used."
Four moves for Monday morning
The federal directive didn't create the AI supply chain visibility problem. It exposed it.
"Not 'inventory your AI,' because that's too abstract and too slow," Baer told VentureBeat. She recommended four concrete moves that a security leader can execute in 30 days.
Map execution paths, not vendors. Instrument at the gateway, proxy, or application layer to log which services are making model calls, to which endpoints, with what data classifications. Youâre building a live map of usage, not a static vendor list.
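As a rough illustration of what that live map looks like, the logging can start as a thin wrapper at the proxy or application layer. Everything here is a sketch: the host list, service names, and classification tags are placeholders, not any vendor's actual API.

```python
# Sketch: log which internal services call which model endpoints, with what
# data classification. Hosts and labels below are illustrative assumptions.
import time
import urllib.parse

MODEL_HOSTS = {  # hypothetical allow-list of known external model API hosts
    "api.anthropic.com": "anthropic",
    "api.openai.com": "openai",
}

call_log = []  # in production this would feed a SIEM, not an in-memory list

def log_model_call(url: str, service: str, data_classification: str) -> dict:
    """Record one outbound model call: caller, endpoint, provider, data class."""
    host = urllib.parse.urlparse(url).netloc
    entry = {
        "ts": time.time(),
        "service": service,                      # which internal service made the call
        "host": host,                            # which endpoint it hit
        "provider": MODEL_HOSTS.get(host, "unknown"),
        "classification": data_classification,   # what data went out
    }
    call_log.append(entry)
    return entry

# Example: a CRM analytics feature calling a model endpoint with customer data
log_model_call("https://api.anthropic.com/v1/messages", "crm-analytics", "customer-pii")
```

Aggregating those entries by provider over a week yields the usage map the vendor list never showed, including the "unknown" hosts that deserve the first look.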
Identify control points you actually own. If your only control is at the vendor boundary, youâve already lost. You want enforcement at ingress (what data goes into models), egress (what outputs are allowed downstream), and orchestration layers where agents and pipelines operate.
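A minimal version of those owned control points is a pair of policy checks, one on data headed into a model and one on outputs headed downstream. The classification labels and blocked patterns below are assumptions for the example, not a standard.

```python
# Sketch of enforcement points the enterprise owns, independent of any vendor:
# an ingress check on data sent to external models, an egress check on outputs.
BLOCKED_INGRESS = {"customer-pii", "source-code"}  # data classes kept off external models
BLOCKED_EGRESS_MARKERS = ("BEGIN PRIVATE KEY",)    # output patterns never passed downstream

def allow_ingress(data_classification: str) -> bool:
    """Ingress control: may this data class reach an external model at all?"""
    return data_classification not in BLOCKED_INGRESS

def allow_egress(model_output: str) -> bool:
    """Egress control: may this model output flow to downstream systems?"""
    return not any(marker in model_output for marker in BLOCKED_EGRESS_MARKERS)
```

The point is where the checks live: at a gateway or orchestration layer you operate, so the policy survives a vendor swap unchanged.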
Run a kill test on your top AI dependency. Pick your most critical AI vendor and simulate its removal in a staging environment. Kill the API key, monitor for 48 hours, and document what breaks, what silently degrades, and what throws errors your incident response playbook doesnât cover. This exercise will surface dependencies you didnât know existed.
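The kill test can be rehearsed in miniature before touching staging. The sketch below simulates the cutoff with a flag and classifies each workflow's outcome; the workflow names and the `call_model` stub are hypothetical, and a real exercise would run for the full 48 hours against live traffic.

```python
# Sketch: simulate revoking the primary AI vendor's key, then classify
# each workflow as surviving, failing hard, or failing in unplanned ways.
vendor_key_active = True

def call_model(prompt: str) -> str:
    """Stub for a model call; raises once the key is revoked."""
    if not vendor_key_active:
        raise RuntimeError("401: invalid API key")  # simulated vendor cutoff
    return "ok"

def run_kill_test(workflows: dict) -> dict:
    """Disable the key, run every workflow, and record what each one does."""
    global vendor_key_active
    vendor_key_active = False
    results = {}
    for name, fn in workflows.items():
        try:
            fn()
            results[name] = "survived"         # fallback path exists
        except RuntimeError:
            results[name] = "hard failure"     # breaks outright, visibly
        except Exception:
            results[name] = "unhandled error"  # failure mode the IR playbook misses
    vendor_key_active = True  # restore after the exercise
    return results

workflows = {
    "ticket-triage": lambda: call_model("classify this ticket"),
    "weekly-report": lambda: "cached summary",  # no live model call; survives
}
```

Anything landing in the "unhandled error" bucket is exactly the dependency the exercise exists to surface.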
Force vendor disclosure on sub-processors and models. Your AI vendors should be able to answer which models they rely on, where those models are hosted, and what fallback paths exist. If they canât, thatâs your fourth-party blind spot. Ask the questions now, while the relationship is stable. Once a cutoff hits, the leverage shifts, and the answers come too late.
The control illusion
"Enterprises believe they've 'approved' AI vendors, but what they've actually approved is an interface, not the underlying system," Baer told VentureBeat. "The real dependencies are one or two layers deeper, and those are the ones that fail under stress."
The federal directive against Anthropic is one organization's weather event. Every enterprise will eventually face its own version, whether the trigger is regulatory, contractual, operational, or geopolitical. The organizations that mapped their AI supply chain before the storm will recover. The ones that didn't will scramble.
Map your AI vendor dependencies to the sub-tier level. Run the kill test. Force the disclosure. Give yourself 30 days. The next forced migration won't come with a six-month warning.

