Anthropic is in discussions with the US government about granting access to its new Mythos model to agencies including the Treasury despite federal lawsuits over whether the AI lab is a national security threat.
US officials have been pushing the White House to test the new model, which Anthropic has said has a much greater ability to identify and exploit cyber security vulnerabilities.
Several agencies have been in discussions with Anthropic about access since the release of Claude Mythos Preview, and the White House indicated in an email this week it was considering these requests, said people familiar with the matter.
Anthropic released the new model to a small group of tech companies earlier this month, but delayed a public launch to prevent its cyber security capabilities from being used by bad actors.
The talks around access to Mythos come despite federal lawsuits over the administration’s efforts to bar Anthropic from working for the federal government and brand it a supply-chain risk after its clash with the Pentagon over limits on the military’s use of AI.
The prospect of the White House allowing US agencies to use Mythos suggests the need to harness the most advanced AI systems may override the concerns raised by defence secretary Pete Hegseth about the lab.
An administration official told the FT that the White House “continues to proactively engage across government and industry to protect the United States and Americans” and that “this includes working with frontier AI labs to ensure their models help secure critical software vulnerabilities”.
“Any new technology that would potentially be used or deployed by the federal government requires a technical period of evaluation for fidelity and security,” they added. “The collective effort of all involved will ultimately benefit industry and our country as a whole.”
Treasury secretary Scott Bessent last week summoned the leaders of some of the largest US banks to discuss the cyber risks posed by the model.
The government’s push to assess such risks is being co-ordinated by Sean Cairncross at the White House’s Office of the National Cyber Director.
Several chief information officers across the US government have asked for access to Mythos, said a person familiar with the matter, but have yet to receive authorisation from the White House.
Those requests came after President Donald Trump in February ordered all federal agencies to stop using Anthropic’s products. “We don’t need it, we don’t want it, and will not do business with them again!” the president said at the time.
Anthropic clashed with the White House after it tried to impose limits on the military’s use of its technology. The start-up sued to challenge the administration’s moves to cut it from government supply chains and label it a national security risk.
It won a preliminary injunction from a California court last month which allowed it to continue working with the government. In a parallel case, a federal appeals court in Washington denied Anthropic’s emergency request to pause its designation, but expedited further hearings on the matter.
Asked at the Semafor World Economy forum on Wednesday whether agencies would eventually get access to Mythos, Cairncross said: “Sure . . . we’re working closely with the large language model companies, we’re working closely with the tech sector, working closely with industry to make sure that we do this in a responsible fashion.”
He emphasised the government was in “close collaboration” with industry and said advanced models have a “tremendous capacity on the defensive side to help fix vulnerabilities that were previously very hard to do”.
Anthropic has said it will hold off on a wider release of the model until it is reassured that it is safe and cannot be abused by malicious actors. The company also has limited computing capacity and has suffered outages in recent weeks.
Multiple people with knowledge of the matter suggested Anthropic was holding back from a wider release until it could reliably serve the model to customers.
Anthropic declined to comment.