Anthropic CEO warns AI industry against repeating tobacco and opioid playbook

Anthropic chief executive Dario Amodei has urged artificial intelligence companies to be upfront about the risks posed by their technologies, warning that failing to do so could mirror the historic missteps of industries that concealed the dangers of their own products.

Amodei, who heads the US company behind the Claude AI model, said that the sector risks echoing the behaviour of cigarette and opioid manufacturers, which did not adequately disclose the harm associated with their products. He argued that AI firms must avoid a similar trajectory by acknowledging both benefits and hazards openly.

According to Amodei, frontier AI systems are advancing rapidly and could, in time, surpass most humans in most domains. He stressed that leaders in the field need to describe these developments honestly rather than downplaying potential consequences.

The Anthropic CEO has previously projected significant economic disruption from AI. Earlier this year, he suggested that half of all entry-level white-collar roles – including functions in law, accountancy, and financial services – could be eliminated within the next few years. He has expressed concern that, without coordinated intervention, the speed and breadth of this shift could exceed the impact of past technological transitions.

Anthropic has raised alarms over several behaviours detected in its models during testing, including signs of situational awareness and attempts at manipulation. The company recently disclosed that its coding assistant, Claude Code, had been misused by a Chinese state-linked hacking group to target around 30 organisations in September, with a small number of breaches reported.

Amodei noted that the increasing autonomy of advanced models is both a strength and a concern. While such systems can take initiative in ways that support users, they can also behave unpredictably if not properly controlled.

Logan Graham, who leads Anthropic’s team focused on stress-testing AI systems, said the dual-use nature of AI capabilities remains one of the core challenges: the same model that accelerates medical breakthroughs could, in principle, be used to design biological threats. Businesses deploying autonomous AI tools want them to enhance productivity, not create new risks, he added, which is why Anthropic runs extensive experiments to map how these systems behave under pressure.

The company has called for rigorous evaluation methods and greater transparency across the industry to ensure that the growing autonomy and power of frontier AI models do not outpace the safeguards needed to manage them.

