Altman & Amodei, hand-to-hand combat?
Last week, Donald Trump ordered every federal agency to stop using Anthropic's AI technology, designating the San Francisco company a 'supply chain risk to national security'. US defence secretary Pete Hegseth went further, announcing that any contractor or partner doing business with the US military must cease all commercial activity with Anthropic.
The issue started after Anthropic refused to remove two restrictions from its Pentagon contract: its Claude models could not be used, one, for mass surveillance of US citizens, and, two, to power fully autonomous weapons - systems that select and kill targets without human involvement. The supply chain designation, if it survives legal challenge, could force any company with Pentagon ties to sever relationships with Anthropic, a potentially devastating blow to a company valued at $380 bn whose revenue depends heavily on enterprise contracts.
Facing Trump's six-month deadline, the company threatened to sue, calling the designation 'legally unsound' and warning that it would 'set a dangerous precedent for any American company that negotiates with the government'.
The feud can be traced back to a $200 mn Department of War contract signed in July 2025. Anthropic had embedded ethical guardrails into the contract that the Pentagon found unacceptable. Military officials insisted they needed AI cleared for 'any lawful use' - in other words, they wanted language that allowed those guardrails to be, in Anthropic's words, 'disregarded at will'.
Anthropic CEO Dario Amodei countered that mass domestic surveillance 'constitutes a violation of fundamental rights', and that current AI models are 'simply not reliable enough' to make lethal targeting decisions without oversight. Interestingly, Anthropic's restrictions had not blocked a single military mission. This was never about operational necessity. It was about who holds the levers of control.
The human stakes are easy to underestimate amid Washington's power politics. But the two red lines Anthropic drew represent the most consequential questions of the AI age. AI-powered mass surveillance is not a camera on a street corner. It's the capacity to track millions simultaneously, flag dissent and build behavioural profiles at scale. The difference between a government that watches some of its people and one that watches all of them, all the time, is the difference between authority and totalitarianism.
Autonomous weapons raise equally grave concerns. AI systems hallucinate and misidentify. Unlike human soldiers, they carry no moral hesitation. A drone that selects a target algorithmically and fires without human review is not a warrior - it's a replicable process, deployable at scale, impossible to recall. Allowing current-generation AI to make irreversible life-and-death decisions is not a military efficiency gain. It is a civilisational gamble.
OpenAI's entry complicates the picture. Within hours of Anthropic's expulsion, OpenAI announced a deal to provide AI for the Pentagon's classified networks. Sam Altman claimed his company holds identical red lines - no mass surveillance, no autonomous lethal decisions, humans in the loop. But the critical question is whether OpenAI's safeguards carry the same binding contractual weight that Anthropic's did, or whether they are aspirational language dressed up as policy.
Google's trajectory is instructive. The tech giant quietly updated its ethical guidelines last year to remove earlier pledges against weapons development and surveillance applications. Without a press conference or public debate, it simply revised policy documents that few people read. This is how principles erode, and how the conduct of tech giants becomes worrisome. OpenAI risks the same quiet capitulation if its safeguards lack enforcement mechanisms.
For the broader tech industry, the stakes extend further still. Anthropic attempted something novel: asserting that AI developers bear ongoing responsibility for how their technology is deployed, even after agreements are signed. The Pentagon's furious response, and the White House's swift punishment, reveal how threatening that idea is to traditional military procurement logic, where contractors build to specification and cede all control thereafter.
There is a valid reason for that model and, so far, it has worked well. But the rapid development of AI capabilities has altered the situation. If the principle of AI developers as moral stakeholders, not mere vendors, survives Anthropic's defeat, it could reshape the industry's entire relationship with government clients.
The battle over two contract clauses is, at its core, a battle over something far larger: whether those who build powerful technology bear any enduring responsibility for how it is turned against other humans. The Pentagon has said no; Anthropic, yes. OpenAI is telling us it agrees with Anthropic, while signing the Pentagon's deal.
The writer, Subimal Bhattacharjee, is a commentator on digital policy issues