Anthropic’s Claude would ‘pollute’ defense supply chain: Pentagon CTO

Defense Department CTO Emil Michael on Thursday said Anthropic’s Claude artificial intelligence models would “pollute” the agency’s supply chain because they have “a different policy preference” that is baked in.

“We can’t have a company that has a different policy preference that is baked into the model through its constitution, its soul, its policy preferences, pollute the supply chain so our warfighters are getting ineffective weapons, ineffective body armor, ineffective protection,” Michael told CNBC’s “Squawk Box.” “That’s really where the supply chain risk designation came from.”

Anthropic is the first American company to be publicly labeled a supply chain risk, an extraordinary move that has historically been reserved for foreign adversaries. The designation will require defense contractors and vendors to certify that they don’t use Claude in their work with the Pentagon.

Michael’s comments on Thursday are the clearest explanation the DOD has offered about why it believes Anthropic is a supply chain risk. The agency sent an official letter to notify the company about the designation earlier this month, but the letter did not outline what risk Claude poses to national security. 

Anthropic sued the Trump administration on Monday, calling the government’s actions “unprecedented and unlawful.” Anthropic said in a filing that the company was being harmed “irreparably,” and that hundreds of millions of dollars worth of contracts are in jeopardy.

“This is not meant to be punitive,” Michael said Thursday.

He added that Anthropic has a “huge commercial business,” and that a “tiny fraction” comes from the U.S. government. Michael also dismissed Anthropic’s claim that the government has actively reached out to companies and told them not to use Anthropic, calling the notion “rumors.”

“The Department of War is not reaching out to companies to tell them what to do, so long as it’s not in our supply chain,” he said.

Anthropic was founded in 2021 by a group of researchers and executives who defected from OpenAI. The company is best known for its family of Claude models, and it’s had early success selling into large enterprises, including the DOD.

The startup has drafted and published a “constitution” that it uses to help train its mainline, general-access Claude models. Anthropic said the constitution plays a “crucial role” in this process, and that its content “directly shapes Claude’s behavior,” according to its website.

Anthropic shared the most recent version of Claude’s constitution in January.

“In it, we explain what we think it means for Claude to be helpful while remaining broadly safe, ethical, and compliant with our guidelines,” Anthropic said in a blog post. “The constitution gives Claude information about its situation and offers advice for how to deal with difficult situations and tradeoffs, like balancing honesty with compassion and the protection of sensitive information.”

Even after Anthropic was blacklisted, the company’s models have…
