A major legal battle is unfolding between artificial intelligence firm Anthropic and the administration of Donald Trump after the company challenged a sweeping federal decision that bars government agencies from using its technology.
The AI developer filed a lawsuit Monday in federal court in California, arguing that a recent national security designation by the U.S. Department of Defense labeling Anthropic a “supply chain risk” was both unlawful and unprecedented. The designation came shortly after negotiations between the company and defense officials collapsed over the potential military use of advanced artificial intelligence.
Anthropic says the government’s move effectively blocks defense contractors from working with its technology and has already forced federal agencies to halt the use of its AI systems, including its flagship Claude models. According to the company, the designation carries major financial and reputational consequences and could reshape how the U.S. government engages with private AI developers.
“The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech,” the suit states. “No federal statute authorizes the actions taken here.”
The company told the court it pursued legal action only after other avenues failed. Its attorneys described the lawsuit as a “last resort,” alleging the federal government retaliated against Anthropic over its public stance on AI safety.
Specifically, the firm claims officials took action because it maintained strict positions on the ethical boundaries of artificial intelligence, particularly around domestic surveillance and autonomous weapons. Anthropic said it refused to support systems that could enable mass surveillance or allow machines to make life-or-death decisions independently.
Lawyers for the company said the administration “retaliated” against the firm for “adhering to its protected viewpoint on a subject of great public significance — AI safety and the limitations of its own AI models.” They further argued the government was attempting to “destroy” Anthropic’s economic value.
Anthropic, founded with a mission centered on responsible AI development, has worked with U.S. national security agencies since late 2024 through a partnership with analytics company Palantir Technologies. The collaboration allowed the Pentagon and intelligence agencies to access certain AI capabilities for research and operational analysis.
However, tensions reportedly escalated during negotiations with the Pentagon earlier this year. According to the lawsuit, Anthropic established two non-negotiable conditions: its technology should not be used for domestic mass surveillance and should not power autonomous weapons capable of making lethal decisions.
Defense officials rejected those limits. The Pentagon maintained that it must retain authority to deploy AI systems for what it considers “all lawful purposes.”
Talks ultimately collapsed earlier this month. Soon afterward, Pentagon leadership formally classified the company as a supply chain risk. The designation was communicated to Anthropic by Defense Secretary Pete Hegseth, while the president ordered federal agencies to immediately stop using the firm’s AI products.
Anthropic CEO Dario Amodei said the company could not “in good conscience” agree to the Pentagon’s terms.
The lawsuit names the Defense Department, multiple federal agencies, and the executive office of the president as defendants.
When contacted about the case, the Pentagon declined to comment, citing its policy against discussing active litigation.
The White House offered a more forceful response. Spokesperson Liz Huston defended the decision and suggested the administration viewed the company’s stance as a national security concern.
“The President and Secretary of War are ensuring America’s courageous warfighters have the appropriate tools they need to be successful and will guarantee that they are never held hostage by the ideological whims of any Big Tech leaders,” Huston wrote.
Despite the federal restrictions, major technology platforms appear unwilling to sever ties with the AI firm. Companies including Google, Amazon, and Apple said last week they would continue offering Anthropic’s AI tools to customers for uses unrelated to the Pentagon.
The dispute has also produced mixed signals about whether negotiations could resume. Amodei said last week that the company and defense officials had recently been having “productive” conversations. Yet at roughly the same time, Defense Department research chief Emil Michael told reporters there were no “active” negotiations underway.
Complicating matters further, an internal memo in which Amodei criticized the administration circulated last week; the executive later apologized for the remarks.
Legal analysts say the case could become a defining moment in how the U.S. government regulates artificial intelligence partnerships with private companies — particularly when disagreements arise over military use and ethical safeguards.
If the court sides with Anthropic, it could limit the federal government’s authority to blacklist domestic technology providers based on policy disputes. But if the administration prevails, it may set a precedent allowing national security agencies broader power to restrict AI firms whose policies conflict with defense priorities.
For now, the lawsuit places the fast-growing AI industry at the center of a high-stakes confrontation between government power, national security concerns, and corporate speech rights.
