Los Angeles: Canadian film director James Cameron, known for his iconic sci-fi blockbuster "The Terminator," has expressed his concern regarding the rapid advancement of artificial intelligence (AI).
In a recent interview with CTV News, Cameron emphasised that his film was meant as a warning about the dangers of AI and the catastrophic consequences that could follow if it were "weaponized." When asked about the fear shared by some industry leaders that AI could cause humanity's extinction, Cameron stated, "I absolutely share their concern. I warned you guys in 1984 and you didn't listen." He was referring to the storyline of "The Terminator," in which an intelligent supercomputer called Skynet creates a cybernetic assassin.
According to the Hollywood director, the most significant danger lies in the weaponisation of AI, and he envisions a potential AI arms race. "I think that we will get into the equivalent of a nuclear arms race with AI. And if we don't build it, the other guys are for sure going to build it, and so then it'll escalate," he said.
In Cameron's view, AI on the battlefield might operate so rapidly that human intervention becomes impossible, eliminating any possibility of peace talks or an armistice. Dealing with such technology would require a focus on de-escalation, but the director doubts that AI systems would adhere to such principles.
This is not the first time Cameron has expressed concerns about AI. He has previously acknowledged that while AI offers advantages, it also carries the risk of disastrous consequences, potentially leading to the end of the world. He has even speculated that sentient computers could already be manipulating the world "without our knowledge, with total control over all media and information."
Cameron's concerns align with those of leading figures in the field, including executives at OpenAI and Google's DeepMind, as well as academics, lawmakers, and entrepreneurs. Many have called for measures to mitigate the risks associated with AI, considering it a global priority on par with addressing pandemics and the risk of nuclear war.
A group of over 1,000 experts and executives, including Elon Musk and Steve Wozniak, signed an open letter urging a six-month pause on training powerful AI systems until their positive effects can be assured and their risks managed. They fear that AI could pose profound risks to society and humanity as a whole. That urgency is echoed across the AI community, with calls for responsible AI development and regulation to ensure a safe and beneficial integration of AI into society.