Technology | Artificial Intelligence

Anthropic collides with the Pentagon over AI safety — here's everything you need to know

News By Deni Ellis Béchard, published 7 March 2026

As Anthropic releases its most autonomous agents yet, a mounting clash with the military reveals the impossible choice between global scaling and a "safety first" ethos


The AI company Anthropic is in conflict with the Pentagon over AI safety. (Image credit: Bloomberg via Getty Images)

On February 5 Anthropic released Claude Opus 4.6, its most powerful artificial intelligence model. Among the model's new features is the ability to coordinate teams of autonomous agents — multiple AIs that divide up the work and complete it in parallel. Twelve days after Opus 4.6's release, the company dropped Sonnet 4.6, a cheaper model that nearly matches Opus's coding and computer skills. In late 2024, when Anthropic first introduced models that could control computers, they could barely operate a browser. Now Sonnet 4.6 can navigate Web applications and fill out forms with human-level capability, according to Anthropic. And both models have a working memory large enough to hold a small library.

Enterprise customers now make up roughly 80 percent of Anthropic's revenue, and the company closed a $30-billion funding round last week at a $380-billion valuation. By every available measure, Anthropic is one of the fastest-scaling technology companies in history.

But behind the big product launches and valuation, Anthropic faces a severe threat: the Pentagon has signaled it may designate the company a "supply chain risk" — a label more often associated with foreign adversaries — unless it drops its restrictions on military use. Such a designation could effectively force Pentagon contractors to strip Claude from sensitive work.


Tensions boiled over after January 3, when U.S. special operations forces raided Venezuela and captured Nicolás Maduro. The Wall Street Journal reported that forces used Claude during the operation via Anthropic's partnership with the defense contractor Palantir — and Axios reported that the episode escalated an already fraught negotiation over what, exactly, Claude could be used for. When an Anthropic executive reached out to Palantir to ask whether the technology had been used in the raid, the question raised immediate alarms at the Pentagon. (Anthropic has disputed that the outreach was meant to signal disapproval of any specific operation.) Secretary of Defense Pete Hegseth is "close" to severing the relationship, a senior administration official told Axios, adding, "We are going to make sure they pay a price for forcing our hand like this."

The collision exposes a question: Can a company founded to prevent AI catastrophe hold its ethical lines once its most powerful tools — autonomous agents capable of processing vast datasets, identifying patterns and acting on their conclusions — are running inside classified military networks? Is a "safety first" AI compatible with a client that wants systems that can reason, plan and act on their own at military scale?

Anthropic has drawn two red lines: no mass surveillance of Americans and no fully autonomous weapons. CEO Dario Amodei has said Anthropic will support "national defense in all ways except those which would make us more like our autocratic adversaries." Other major labs — OpenAI, Google and xAI — have agreed to loosen safeguards for use in the Pentagon's unclassified systems, but their tools aren't yet running inside the military's classified networks. The Pentagon has demanded that AI be available for "all lawful purposes."

The friction tests Anthropic's central thesis. The company was founded in 2021 by former OpenAI executives who believed the industry was not taking safety seriously enough. They positioned Claude as the ethical alternative. In late 2024 Anthropic made Claude available on a Palantir platform with a cloud security level up to "secret" — making Claude, by public accounts, the first large language model operating inside classified systems.


The question the standoff now forces is whether safety-first is a coherent identity once a technology is embedded in classified military operations and whether red lines are actually possible. "These words seem simple: illegal surveillance of Americans," says Emelia Probasco, a senior fellow at Georgetown's Center for Security and Emerging Technology. "But when you get down to it, there are whole armies of lawyers who are trying to sort out how to interpret that phrase."

The Pentagon seems to be interested in AI surveillance measures. The question is, what does that look like? (Image credit: Richard Baker via Getty Images)

Consider the precedent. After the Edward Snowden revelations, the U.S. government defended the bulk collection of phone metadata — who called whom, when and for how long — arguing that these kinds of data didn't carry the same privacy protections as the contents of conversations. The privacy debate then was about human analysts searching those records. Now imagine an AI system querying vast datasets — mapping networks, spotting patterns, flagging people of interest. The legal framework we have was built for an era of human review, not machine-scale analysis.


"In some sense, any kind of mass data collection that you ask an AI to look at is mass surveillance by simple definition," says Peter Asaro, co-founder of the International Committee for Robot Arms Control. Axios reported that the senior official "argued there is considerable gray area around" Anthropic's restrictions "and that it's unworkable for the Pentagon to have to negotiate individual use-cases with" the company. Asaro offers two readings of that complaint. The generous interpretation is that surveillance is genuinely impossible to define in the age of AI. The pessimistic one, Asaro says, is that "they really want to use those for mass surveillance and autonomous weapons and don't want to say that, so they call it a gray area."


Regarding Anthropic's other red line, autonomous weapons, the definition is narrow enough to be manageable — systems that select and engage targets without human supervision. But Asaro sees a more troubling gray zone. He points to the Israeli military's Lavender and Gospel systems, which reportedly use AI to generate massive target lists that go to a human operator for approval before strikes are carried out. "You've automated, essentially, the targeting element, which is something [that] we're very concerned with and [that is] closely related, even if it falls outside the narrow strict definition," he says. The question is whether Claude, operating inside Palantir's systems on classified networks, could be doing something similar — processing intelligence, identifying patterns, surfacing persons of interest — without anyone at Anthropic being able to say precisely where the analytical work ends and the targeting begins.

The Maduro operation tests exactly that distinction. "If you're collecting data and intelligence to identify targets, but humans are deciding, 'Okay, this is the list of targets we're actually going to bomb' — then you have that level of human supervision we're trying to require," Asaro says. "On the other hand, you're still becoming reliant on these AIs to choose these targets, and how much vetting and how much digging into the validity or lawfulness of those targets is a separate question."


Anthropic may be trying to draw the line more narrowly — between mission planning, where Claude might help identify bombing targets, and the mundane work of processing documentation. "There are all of these kind of boring applications of large language models," Probasco says.

But the capabilities of Anthropic's models may make those distinctions hard to sustain. Opus 4.6's agent teams can split a complex task and work in parallel — an advancement in autonomous data processing that could transform military intelligence. Both Opus and Sonnet can navigate applications, fill out forms and work across platforms with minimal oversight. These features driving Anthropic's commercial dominance are what make Claude so attractive inside a classified network. A model with a huge working memory can also hold an entire intelligence dossier. A system that can coordinate autonomous agents to debug a code base can coordinate them to map an insurgent supply chain. The more capable Claude becomes, the thinner the line between the analytical grunt work Anthropic is willing to support and the surveillance and targeting it has pledged to refuse.

As Anthropic pushes the frontier of autonomous AI, the military's demand for those tools will only grow louder. Probasco fears the clash with the Pentagon creates a false binary between safety and national security. "How about we have safety and national security?" she asks.

This article was first published at Scientific American. © ScientificAmerican.com. All rights reserved.

Deni Ellis Béchard, Science journalist

Deni Ellis Béchard is Scientific American’s senior tech reporter. He is author of 10 books and has received a Commonwealth Writers’ Prize, a Midwest Book Award and a Nautilus Book Award for investigative journalism. He holds two master’s degrees in literature, as well as a master’s degree in biology from Harvard University. His most recent novel, We Are Dreams in the Eternal Machine, explores the ways that artificial intelligence could transform humanity.
