Palantir faces tough task of severing ties with Anthropic
NewsBytes | March 5, 2026 7:39 PM CST





Palantir is facing the tough task of severing its ties with AI lab Anthropic. This comes after a dispute between the lab and the Pentagon over safety protocols.

The conflict has raised concerns about Maven Smart System, Palantir's military software platform.

The system relies on prompts and workflows developed using Anthropic's Claude Code, according to Reuters.


US President halts government work with Anthropic
Government action


Last week, US President Donald Trump ordered a halt to all government work with Anthropic.

This was after the AI lab hit an impasse with the Pentagon over whether its policies could limit autonomous weapons and government surveillance.

The decision has put Palantir in a difficult position as it holds Maven-related contracts worth over $1 billion with the Defense Department and other national security agencies.


Palantir to replace Claude with another AI model
Software overhaul


In light of the government's decision, Palantir will have to replace Anthropic's Claude with another AI model. The company will also have to rebuild parts of its Maven Smart System software platform.

However, it remains unclear how long this process will take.

Defense Secretary Pete Hegseth has stressed that the change should be immediate, ordering all contractors and suppliers doing business with the US military to stop any commercial activity with Anthropic.


Pentagon's flagship AI program
Program significance


Maven Smart System is the Pentagon's flagship AI program. It processes data from various sources to identify military points of interest and expedite intelligence analysis and targeting decisions.

The system has been instrumental in recent US military operations, although it remains unclear if it was used during the January raid in Venezuela that captured former President Nicolas Maduro or the recent strikes on Iran.


Competition among top AI companies
Ethical implications


Anthropic's ethical stance on US military use of AI is reshaping competition among top AI companies. It has also raised questions about whether chatbots are ready for warfare.

Notably, Anthropic's chatbot Claude has overtaken rival ChatGPT in phone app downloads in the US this week.

This suggests growing consumer sympathy for Anthropic amid its standoff with the Pentagon, according to market research firm Sensor Tower.


Large language models inherently unreliable
Industry critique


Missy Cummings, a former Navy fighter pilot who now directs the robotics and automation center at George Mason University, has criticized the AI industry for years of marketing that led the government to apply this technology to high-stakes tasks.

She argues that large language models behind chatbots like Claude are "inherently unreliable and not appropriate in environments that could result in the loss of life."


Anthropic to challenge penalties in court
Company stance


In light of the administration's decision, Anthropic has said it will challenge the penalties in court once it receives formal notice.

The company has also defended its ethical stance by emphasizing that "frontier AI systems are simply not reliable enough to power fully autonomous weapons."

Despite the legal fight ahead, the standoff has bolstered Anthropic's reputation as a safety-minded AI developer.

