Top News

Why the US government labelled Anthropic a ‘supply chain risk’: A timeline
ETtech | March 8, 2026 12:38 AM CST

Synopsis

The Department of War has demanded that Anthropic’s AI models be made available to it for "any lawful use." Anthropic has, however, refused to cross its strict ethical red lines — no fully autonomous weapons and no mass domestic surveillance.

Dario Amodei, CEO of Anthropic
Anthropic has drawn headlines over its recent standoff with the US Department of Defense, pushing the artificial intelligence (AI) company into an increasingly difficult position.

The US government has designated Anthropic a “supply chain risk,” a move that could restrict federal contractors from working with the firm. The designation appears to have halted recent discussions between Anthropic and the Pentagon on how its AI models could be deployed in defence settings.

Anthropic’s flagship model, Claude, is used across US national security agencies for applications such as intelligence analysis, modelling and simulation, operational planning, and cyber operations.


The Department of War is demanding that AI models be available for "any lawful use," while Anthropic refuses to cross its strict ethical red lines: no fully autonomous weapons and no mass domestic surveillance.

ET maps out the timeline of the dispute and how it escalated.

July 11, 2024: US data analytics firm Palantir announced a partnership with Anthropic to bring its Claude AI models into US government intelligence and defence operations.

November 7, 2024: Anthropic and Palantir expanded their collaboration with Amazon Web Services (AWS) to provide US intelligence and defence agencies access to the Claude 3 and Claude 3.5 models through AWS’s secure government cloud infrastructure.

July 14, 2025: The Pentagon awarded contracts worth up to $200 million each to Anthropic, Google, OpenAI, and xAI as part of its push to deploy advanced AI systems for defence applications.

Anthropic, the developer of the Claude chatbot, was among the first companies cleared for classified government use, with officials citing the model’s security and reliability.

Anthropic’s contract with the Department of Defense incorporated restrictions from its acceptable use policy, including limits on certain military applications.

October 25, 2025: David Sacks, Donald Trump’s AI and crypto adviser, accused Anthropic of "running a sophisticated regulatory capture strategy based on fear-mongering," continuing his repeated criticism of what he described as the company’s “woke” AI policies.

November 2025: Anthropic deepened its partnership with Palantir, integrating Claude as the reasoning engine inside a military decision-support platform used by defence agencies.

Early January 2026: Anthropic submitted a $100 million proposal to the Pentagon to develop voice-controlled autonomous drone swarming technology, according to Bloomberg News.

The proposal envisioned Claude translating a commander’s intent into digital instructions that could coordinate fleets of drones.

January 2026: Tensions escalated after media reports suggested Anthropic’s technology may have been used during a US military operation in Venezuela on January 3, which reportedly resulted in the capture of President Nicolás Maduro and his wife Cilia Flores.

Anthropic later said it did not identify any violation of its usage policies linked to the operation.

January 9, 2026: US Defense Secretary Pete Hegseth issued a memorandum stating that AI systems used by the military must allow “any lawful use” by defence agencies.

“Diversity, Equity and Inclusion and social ideology have no place in the Department of War,” the memo stated, arguing that AI models should not include ideological tuning that could affect responses.

The memo directed the Pentagon to incorporate the phrase “any lawful use” into future AI procurement contracts within 180 days.

February 2026: Anthropic updated its Responsible Scaling Policy, its internal framework for mitigating catastrophic risks from advanced AI systems.

According to Time magazine, the revisions included removing a commitment not to release models unless risk mitigations were guaranteed in advance.

February 2026: Anthropic signed a $19,000 contract with the US State Department to deploy Claude.

February 24, 2026: Defense Secretary Pete Hegseth planned to meet with the CEO of Anthropic.

Hegseth said his vision for military AI systems meant that they operate "without ideological constraints that limit lawful military applications," before adding that the Pentagon's "AI will not be woke."

February 25, 2026: Defense Secretary Pete Hegseth gave Anthropic chief executive Dario Amodei a deadline to remove two restrictions on military use of its AI models.

February 28, 2026: President Donald Trump publicly criticised the company, warning it must cooperate with the Pentagon.

“Anthropic better get their act together… or I will use the full power of the presidency to make them comply,” Trump said. Anthropic CEO Dario Amodei called the move “retaliatory and punitive."

March 3, 2026: US Treasury Secretary Scott Bessent announced the department would terminate all use of Anthropic products, including the Claude platform.

“Under President Trump no private company will dictate the terms of our national security,” Bessent wrote on X.

March 3, 2026: Within hours of Anthropic declining to remove safeguards on Claude, OpenAI disclosed its own agreement to supply AI models for classified settings.

OpenAI said it would modify its Pentagon contract to ensure its AI models are not used for domestic surveillance after facing criticism over oversight concerns.

March 5, 2026: Anthropic CEO Dario Amodei said the company’s relationship with the US government had deteriorated because it refused to offer what he described as “dictator-style praise” for President Trump.

However, Amodei also confirmed that discussions with the Pentagon had resumed, raising hopes that the dispute could be resolved.

March 6, 2026: The US government formally designated Anthropic a “supply chain risk.”

Microsoft, however, said it would continue deploying Anthropic’s AI models in its products despite the company’s dispute with the Pentagon, as did Anthropic’s investors, such as Amazon and Nvidia.

Anthropic in response said it would challenge the “supply chain risk” designation, calling it a legally unsound action never before applied to an American company.

The Pentagon, meanwhile, has begun preparing to transition its AI services to other providers within six months.

Amodei later told The Economist that the episode had been one of the most “disorienting” periods in the company’s history, while apologising for the way he had handled the crisis.

Also Read: Anthropic vs US govt: Amodei the last holdout against Trump regime's demand for unrestricted access to AI

