AI company Anthropic amends core safety principle amid growing competition


Anthropic, the AI company behind the Claude chatbot that was founded with a focus on safe technology, appears to be scaling back its safety commitments in order to stay competitive.

The company said on Tuesday it had changed its responsible scaling policy, a set of self-imposed guidelines aimed at preventing the development of AI that could potentially be dangerous and cause situations like large-scale cyberattacks.

While the updated guidelines say Anthropic would still require a “strong argument that catastrophic risk is contained” when developing AI, the company now says it will only delay development “until and unless we no longer believe we have a significant lead” — meaning it would keep developing if it no longer believes it holds a lead over its competitors.

The company said it has taken this step because concerns about the safety of AI in the U.S. have taken a back seat to its economic potential.

“Despite rapid advances in AI capabilities over the past three years, government action on AI safety has moved slowly,” the company said in a blog post.

“The policy environment has shifted toward prioritizing AI competitiveness and economic growth, while safety-oriented discussions have yet to gain meaningful traction at the federal level.”

The change in Anthropic’s safety guidelines comes as the Pentagon threatens to pull its contracts with the company unless its technology is allowed to be used for all legal military purposes — though Anthropic says the guideline change is unrelated.

The AI company has historically sold itself as putting safety first.

Anthropic was founded in 2021 by former employees of OpenAI who were concerned that the company was putting development ahead of safety. CEO Dario Amodei has also voiced fears about the negative potential of AI, including mass human catastrophe, and maintained in a December interview with Fortune that safety continued to be the “highest-level focus” for Anthropic.

CEO and Co-Founder of Anthropic Dario Amodei speaks during the 56th annual World Economic Forum (WEF) meeting in Davos, Switzerland, January 20, 2026. (Denis Balibouse/Reuters)

The blog post noted the company’s safety practices were always intended to be updated, and that this new iteration improves the company’s “transparency and accountability” with new commitments to regularly publish reports and safety goals.

But Heidy Khlaaf, chief AI scientist at the independent research group the AI Now Institute, says that despite Anthropic’s safety-first reputation, it has always fallen short in its efforts to prevent human harm.

From its first safety policy, Khlaaf says, Anthropic has focused too much on the possibility of catastrophic events down the road, rather than accounting for the harm that could come from current AI technology, like run-of-the-mill errors with chatbots.

The Claude chatbot has in the past been misused in fraud schemes and attempts to create malware, and was recently used to steal Mexican government data.



