Pentagon labels Anthropic a supply-chain risk
The formal declaration will require defense vendors and contractors to certify that they don’t use Anthropic’s models in their work with the Pentagon.
March 6 (Reuters) - The Trump administration has drawn up strict rules for civilian artificial-intelligence contracts, requiring companies to allow "any lawful" use of their models amid a stand-off between the Pentagon and Anthropic.
Palantir Technologies Inc.'s flagship military AI platform is reportedly facing disruption after the Pentagon ordered contractors to halt commercial ties with Anthropic.
Anthropic CEO says 'no choice' but to challenge Trump admin's supply chain risk designation in court
Anthropic said even with the designation, the government can't forbid it from working with companies in other capacities.
Anthropic, the AI company behind the popular Claude AI chatbot, received praise last week for standing up to the Trump administration over the U.S. military's use of its AI tools. However, the company may be reversing course.
Startup funding surged to a historic $189 billion in February, driven almost entirely by three AI mega-rounds that captured 83% of the total.
The maker of the Claude chatbot says its research could help identify economic disruptions by measuring how AI is currently reshaping work.
A top Pentagon official says a fight with Anthropic centered on how the military could someday use artificial intelligence in autonomous weapons.
Amazon joined Microsoft and Google in continuing to offer Anthropic's Claude AI technology to customers after the Pentagon deemed it a "supply chain risk."