Pentagon labels Anthropic a supply-chain risk
Pentagon, Anthropic and US Department of War
The formal declaration will require defense vendors and contractors to certify that they don’t use Anthropic’s models in their work with the Pentagon.
Anthropic PBC vowed to legally contest a Pentagon decision to declare the company a threat to the US supply chain under an authority normally reserved for foreign adversaries, escalating a showdown with the Trump administration over artificial intelligence safeguards.
The maker of the Claude chatbot says its research could help identify economic disruptions by measuring how AI is currently reshaping work.
Anthropic CEO says 'no choice' but to challenge Trump admin's supply chain risk designation in court
Anthropic said even with the designation, the government can't forbid it from working with companies in other capacities.
Anthropic, the AI company behind the popular Claude AI chatbot, received praise last week for standing up to the Trump administration over the U.S. military's use of its AI tools. However, the company may be reversing course.
March 6 (Reuters) - The Trump administration has drawn up strict rules for civilian artificial-intelligence contracts requiring companies to allow "any lawful" use of their models, amid a stand-off between the Pentagon and Anthropic.
Anthropic plans legal action against the Pentagon after being labelled a security risk, while OpenAI CEO Sam Altman says governments must remain more powerful than private companies.
Tech giants break silence on Anthropic spat: three major tech companies, Microsoft, Google and Amazon, have said Anthropic's AI tools will remain available.
Amazon joined Microsoft and Google in continuing to offer Anthropic's Claude AI technology to customers after the Pentagon deemed the company a "supply chain risk."