Pentagon clashes with Anthropic over military AI use

Published 30 Jan, 2026 04:11pm 3 min read
Anthropic logo. – Reuters file

The Pentagon is at odds with artificial-intelligence developer Anthropic over safeguards that would prevent the government from deploying its technology to target weapons autonomously and conduct US domestic surveillance, three people familiar with the matter told Reuters.

The discussions represent an early test case for whether Silicon Valley, in Washington’s good graces after years of tensions, can sway how US military and intelligence personnel deploy increasingly powerful AI on the battlefield.

After extensive talks under a contract worth up to $200 million, the US Department of Defence and Anthropic are at a standstill, six people familiar with the matter said, on condition of anonymity.

The company’s position on how its AI tools can be used has intensified disagreements between the company and the Trump administration, the details of which have not been previously reported.

A spokesperson for the Defence Department, which the Trump administration renamed the Department of War, did not immediately respond to requests for comment.

Anthropic said its AI is “extensively used for national security missions by the US government, and we are in productive discussions with the Department of War about ways to continue that work.”

The spat, which could threaten Anthropic’s Pentagon business, comes at a delicate time for the company.

The San Francisco-based startup is preparing for an eventual public offering. It has spent significant resources courting the US national security business and sought an active role in shaping government AI policy.

Anthropic is one of a few major AI developers that were awarded contracts by the Pentagon last year. Others were Alphabet’s Google, Elon Musk’s xAI and OpenAI.

Weapons targeting

In its discussions with government officials, Anthropic representatives raised concerns that its tools could be used to spy on Americans or assist weapons targeting without sufficient human oversight, some of the sources told Reuters.

The Pentagon has bristled at the company’s guidelines. In line with a January 9 department memo on AI strategy, Pentagon officials have argued they should be able to deploy commercial AI technology regardless of companies’ usage policies, so long as they comply with US law, sources said.

Still, Pentagon officials would likely need Anthropic’s cooperation moving forward. Its models are trained to avoid taking steps that might lead to harm, and Anthropic staffers would be the ones to retool its AI for the Pentagon, some of the sources said.

Anthropic’s caution has put it in conflict with the Trump administration before, as Semafor has reported.

In an essay on his personal blog, Anthropic CEO Dario Amodei warned this week that AI should support national defence “in all ways except those which would make us more like our autocratic adversaries.”

Amodei, one of Anthropic’s co-founders, was critical of fatal shootings of US citizens protesting immigration enforcement actions in Minneapolis, which he described as a “horror” in a post on X.

The deaths have compounded concern among some in Silicon Valley about government use of their tools for potential violence.
