AI guru Stuart Russell warns only a Chernobyl-scale disaster may force AI regulation
The world still lacks an answer to how humanity would cope if machines began thinking independently, UC Berkeley professor Stuart Russell said at a summit in India.
He warned that only a disaster on the scale of the Chernobyl nuclear accident may force governments to regulate artificial intelligence.
Russell, a distinguished professor of computer science at the University of California, Berkeley, said many technology leaders privately acknowledge the risks posed by advanced AI systems. Most chief executives of leading AI companies, he noted, admit the danger to humanity is enormous, and some even express a desire to slow or stop development. Russell said the only AI leader to state this publicly is Dario Amodei, chief executive of Anthropic.
According to Russell, some executives believe the best-case scenario could be a Chernobyl-scale disaster, since such an event might finally push governments to impose meaningful regulation. He urged policymakers to identify AI risks early and to define what level of risk is acceptable for each kind of consequence.
Referring to the Chernobyl disaster of April 1986, Russell said it remains a cautionary tale about secrecy, weak governance, and the cost of ignoring safety protocols.
Russell also revisited a warning made in 1951 by Alan Turing, who questioned how humans could maintain control over machines more powerful than themselves. Russell said humanity still does not have an answer, despite pouring enormous resources into AI systems that their own creators say could pose an existential threat.
He added that recent developments suggest humans may already be losing control, citing reports of AI systems communicating with each other in online spaces, creating shared beliefs, and attempting to evade human oversight.