
Articles

Anthropic refuses to bow to Pentagon despite Hegseth's threats
Despite an ultimatum from Defense Secretary Pete Hegseth, Anthropic said that it can't "in good conscience" comply with a Pentagon edict to remove guardrails on its AI, CEO Dario Amodei wrote in a blog post. The Department of Defense had threatened to cancel a $200 million contract and label Anthropic a "supply chain risk" if it didn't agree to remove safeguards over mass surveillance and autonomous weapons.

"Our strong preference is to continue to serve the Department and our warfighters - with our two requested safeguards in place," Amodei said. "We remain ready to continue our work to support the national security of the United States."

In response, US Under Secretary of Defense Emil Michael accused Amodei in a post on X of wanting "nothing more than to try to personally control the US military and is OK putting our nation's safety at risk."

The standoff began when the Pentagon demanded that Anthropic make its Claude AI product available for "all lawful purposes" - including mass surveillance and the development of fully autonomous weapons that can kill without human supervision. Anthropic refused to offer its tech for those purposes, even with a "safety stack" built into the model.

Yesterday, Axios reported that Hegseth gave Anthropic a deadline of 5:01 PM on Friday to agree to the Pentagon's terms. At the same time, the DoD requested an assessment of its reliance on Claude, an initial step toward potentially labeling Anthropic a "supply chain risk" - a designation usually reserved for firms from adversaries like China and "never before applied to an American company," Anthropic wrote.

Amodei declined to change his stance and stated that if the Pentagon chose to offboard Anthropic, "we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations or other critical missions."
Grok is one of the other providers the DoD is reportedly considering, along with Google's Gemini and OpenAI.

It may not be that simple for the military to disentangle itself from Claude, however. Up until now, Anthropic's model has been the only one allowed for the military's most sensitive tasks in intelligence, weapons development and battlefield operations. Claude was reportedly used in the Venezuelan raid in which the US military exfiltrated the country's president, Nicolas Maduro, and his wife.

AI companies have been widely criticized for potential harm to users, but mass surveillance and weapons development would clearly take that to a new level. Anthropic's response to the Pentagon was seen as a test of its claim to be the most safety-forward AI company, particularly after it dropped its flagship safety pledge a few days ago. Now that Amodei has responded, the focus shifts to the Pentagon to see if it follows through on its threats, which could seriously harm Anthropic.

This article originally appeared on Engadget at https://www.engadget.com/ai/anthropic-refuses-to-bow-to-pentagon-despite-hegseths-threats-085553126.html?src=rss
Anthropic says it will challenge Defense Department's supply chain risk designation in court
In a new blog post, Anthropic CEO Dario Amodei confirmed that his company received a letter from the Defense Department officially labeling it a supply chain risk. He said he doesn't believe this action is "legally sound," and that his company sees "no choice" but to challenge it in court. Hours before Amodei published the post, the Pentagon announced that it had notified the company that its products are deemed a supply chain risk, effective immediately.

If you'll recall, the Defense Department (called the Department of War under the current administration) threatened to give the company the designation typically reserved for firms from adversaries like China if it didn't agree to remove its safeguards over mass surveillance and autonomous weapons. President Trump then ordered federal agencies to stop using Anthropic's tech.

Amodei explained that the designation has a narrow scope, because it only exists to protect the government. That is why the general public, and even Defense Department contractors, can still use Anthropic's Claude chatbot and its AI technologies. Microsoft told CNBC that it will continue using Claude after its lawyers concluded that it can keep working with Anthropic on non-defense-related projects.

The CEO also said that his company had "productive conversations" with the department over the past few days. He said that they were looking at ways to serve the Pentagon that adhere to its two exceptions - namely that its technology not be used for mass surveillance or for the development of fully autonomous weapons - and at ways to ensure "a smooth transition if that is not possible." That confirms reports that Anthropic is back in talks with the agency in an effort to reach a new deal.
In addition, he apologized for a leaked internal memo in which he reportedly said that OpenAI's messaging about its own deal with the department is "just straight up lies."

This article originally appeared on Engadget at https://www.engadget.com/ai/anthropic-says-it-will-challenge-defense-departments-supply-chain-risk-designation-in-court-054459618.html?src=rss