Dario Amodei, chief executive officer of Anthropic, at the AI Impact Summit in New Delhi, India, on Thursday, Feb. 19, 2026.
Ruhani Kaur | Bloomberg | Getty Images
Last August, Pentagon technology chief Emil Michael, a former Uber executive and attorney, took on the added role of overseeing the Defense Department’s artificial intelligence portfolio. A month earlier, Anthropic had been awarded a $200 million DOD contract that expanded its work with the agency.
“I said, ‘I just want to see the contracts,'” Michael told the All-In Podcast on Friday, reflecting on his early days managing the AI portfolio. “You know, the old lawyer in me.”
Michael’s request kicked off a months-long review process that culminated in the Defense Department officially banning Anthropic’s technology last week, leaving the military without its hand-picked AI models to operate in the most sensitive environments. In an extraordinary move, the DOD designated Anthropic a supply chain risk, a label that’s historically only been applied to foreign adversaries. It will require defense vendors and contractors to certify that they don’t use the company’s models in their work with the Pentagon.
Anthropic sued the Trump administration on Monday, calling the government’s actions “unprecedented and unlawful,” and claiming that they are “harming Anthropic irreparably,” putting hundreds of millions of dollars worth of contracts in jeopardy.
The DOD’s sudden reversal came as a shock to many officials in Washington who viewed Anthropic’s models as superior — they were the first to be deployed in the agency’s classified networks — and championed the company’s ability to integrate with existing defense contractors like Palantir. The decision was all the more puzzling since the Trump administration had threatened during negotiations to invoke the Defense Production Act, which could have forced Anthropic to grant the military access to its technology.
“I don’t know how those two things can both be true in reality,” said Mark Dalton, a retired Navy rear admiral who now leads technology and cybersecurity policy at R Street, a think tank in Washington, D.C. “Something is so necessary that you need to invoke DPA and so harmful that you put a designation on it that’s reserved for foreign adversaries.”
Defense experts like Dalton expressed concern about the government’s decision. Not only does it set a troubling precedent, they argue, but it also means the administration is banishing a key technology vendor that’s been lauded for its diligence on AI safety, its tough rhetoric against China and its entrepreneurial chops as one of the fastest-growing tech startups in the U.S.
Former DOD official Brad Carson, who’s now co-founder and president of AI policy nonprofit Americans for Responsible Innovation, said the move is particularly troubling for military personnel, who have come to rely on Claude. An ex-Navy intelligence officer who served in Iraq, Carson said he’s talked to a number of retired officers who told him that “warfighters are not happy about it.”
“You’re not so excited if you’re in the military,” said Carson, who worked in President Obama’s Defense Department until 2016 and before that was deployed to Iraq while in the Army and also served two terms in Congress as a Democrat from Oklahoma. “They view Claude as being a better product, the most reliable, with the most user-friendly outputs they can assimilate into planning.”
CNBC spoke to 17 AI policy experts, former Palantir and Anthropic employees, tech analysts and researchers about Anthropic’s critical role in the Defense Department and what comes next. Several of the people asked not to be named because they weren’t authorized to speak on the matter.
Anthropic declined to provide a comment for the story.

Anthropic CEO Dario Amodei founded the San Francisco-based company in 2021 alongside his sister, Daniela Amodei, and a handful of other researchers. The group had defected from OpenAI, before the launch of ChatGPT, over concerns about the company’s direction and attitude toward safety. They spent years carefully constructing Anthropic’s reputation as a firm that was more dedicated to responsible AI deployment.
Anthropic launched its family of AI models, known as Claude, in March 2023, a few months after ChatGPT hit the market and quickly went viral. In the three years since introducing Claude, Anthropic has raised billions of dollars of capital, en route to a $380 billion valuation.
The company is now under immense pressure to justify that price tag and has been forced to rapidly commercialize its technology in an effort to keep pace with OpenAI and other rivals like Google.
Partnering with AWS and Palantir
While OpenAI was enthralling consumers, Anthropic found quick success selling to large enterprises, including the DOD. It’s an area Amodei started focusing on early, recognizing the business and societal importance of working closely with the government and military and helping to establish principles for the safe use of a technology with the power to bring about potential catastrophes, according to people with knowledge of the matter.
The company began building relationships and making inroads with officials in Washington, and Amodei was among the few AI industry executives invited to meet with then-Vice President Kamala Harris, in May 2023.
Around that same time, Anthropic turned to a familiar tech partner that could help it prosper among D.C. technologists: Amazon Web Services.
Claude became available within AWS’ Bedrock service that year, which helped it gain traction within the government tech community, multiple sources said. Federal agencies could begin experimenting with Anthropic’s models because they were accessible within AWS’ government-sanctioned environment.
Amazon has been one of Anthropic’s largest financial backers since 2023, investing a total of $8 billion in the startup.
As pilot projects got underway, many federal employees found that Claude produced more compelling results than other models from companies like OpenAI and Meta, the sources said. Claude could provide step-by-step reasoning for how it derived an answer or completed a task, which was crucial for federal agencies that require strong auditing and verification, sources said.
And since Anthropic had prioritized building for enterprise customers, the company’s user experience was especially suitable for desktop computers, the people said. With a strong AI model and an intuitive user interface, Anthropic began to earn credibility with federal workers, sowing the seeds for a major partnership with Palantir, a software and services provider that counts on government contracts for about 60% of its U.S. revenue.

Anthropic’s government push was assisted by its head of global affairs, Michael Sellitto, who led cybersecurity policy at the National Security Council from 2015 through 2018, and former AWS executive Thiyagu Ramasamy, the head of the company’s public sector business.
In a LinkedIn post last week, Ramasamy, who joined in early 2025, expressed his concern about the Pentagon’s actions.
“Today, I mourn for the customers I have deep respect for,” Ramasamy wrote. “They were moving at a pace I couldn’t have imagined across my two decades in this industry, and now it comes to a halt over a weekend. Down, but not out.”
Sellitto and Ramasamy did not respond to requests for comment.
In November 2024, shortly before Ramasamy’s arrival, Anthropic and Palantir announced a partnership with AWS that would allow U.S. intelligence and defense agencies to access Claude. Some Anthropic staffers were upset about the deal when it was announced, a former employee said, adding that it prompted “many big Slack threads” and became a point of lingering tension within the company.
Having Palantir as a partner helped Anthropic build direct lines with the DOD and fast-tracked its integration into the highest-level, classified projects. The partnerships were crucial in helping Anthropic become the first model company to officially deploy across classified networks, said Lauren Kahn, a senior research analyst at Georgetown’s Center for Security and Emerging Technology.
“The fact that Anthropic is able to basically play nice with others like Palantir, AWS, Google, etc., specifically Palantir,” she said, “is extremely valuable.”
In its July 2025 release announcing its $200 million defense contract, Anthropic said it had “accelerated mission impact across U.S. defense workflows with partners like Palantir.” Anthropic said its technology helped the government’s defense and intelligence organizations “rapidly process and analyze vast amounts of complex data.”
A month after winning the Pentagon contract, Anthropic partnered with the U.S. General Services Administration to bring its AI models to other participating agencies for $1 a year.
But by that point, Anthropic’s relationship with the government was beginning to sour.
President Trump had been sworn into office that January, and Amodei wasn’t a fan, having once likened the commander in chief to a “feudal warlord” in a since-deleted Facebook post, according to Fortune.
Other industry executives, including OpenAI CEO Sam Altman, Apple CEO Tim Cook and Google CEO Sundar Pichai, had been photographed rubbing elbows with Trump at the White House, but Amodei was conspicuously absent. He didn’t attend Trump’s inauguration last year.
Amodei told staffers earlier this month that the administration doesn’t like Anthropic because it hasn’t donated or offered “dictator-style praise to Trump,” according to a report from The Information.
He apologized for the tone of those remarks in a statement on Thursday, writing that they were written after a “difficult day for the company” and do not “reflect my careful or considered views.”
Amodei has also drawn the ire of David Sacks, the venture capitalist serving as the White House AI and crypto czar who has accused Anthropic of supporting “woke AI,” largely for its positions on regulation.
“This feels to me like a dispute that is about politics and personalities,” Michael Horowitz, a senior fellow for technology and innovation at the Council on Foreign Relations, said in an interview. “It’s masquerading as a policy dispute.”
By the time the Trump administration blacklisted Anthropic, the startup’s tools had been widely adopted across government agencies. A transition is already underway, as groups including the U.S. Department of Health and Human Services, the Treasury Department and the State Department have confirmed they are moving off of Claude.
But that process is especially complicated within the DOD, in part because the U.S. is actively carrying out a military operation in Iran. Anthropic’s models have been used to support that operation, even after it was blacklisted, as CNBC previously reported.
Transitioning away from Anthropic toward a new vendor will take the DOD time and comes at a significant cost in terms of efficiency, said Jacquelyn Schneider, a Hargrove Hoover fellow at Stanford University’s Hoover Institution, in an interview.
“You’re not going to walk away from technologies that are deeply embedded in your wartime processes right before you go to war,” Schneider said.