Google AI expert warns of cyber attackers' ability to disable AI systems through 'data poisoning'
San Francisco: A Google Brain research scientist has raised concerns over the potential for cyber attackers to disable artificial intelligence (AI) systems by exploiting a technique called "data poisoning."
According to Nicholas Carlini, attackers can seriously compromise the functionality of AI models by manipulating a small fraction of their training data sets.
Data poisoning, as described by the International Security Journal, involves tampering with machine learning training data to produce undesirable outcomes. Attackers infiltrate machine learning databases and insert incorrect or misleading information. As the algorithm learns from this corrupted data, it draws unintended and potentially harmful conclusions.
During the Artificial Intelligence Risk and Security Sub-forum at the World Artificial Intelligence Conference, Carlini highlighted the evolving threat landscape surrounding AI systems. He explained that what was once seen as academic experimentation has now become a tangible threat in real-world contexts.
In a data poisoning attack, a limited number of biased samples is introduced into an AI model's training data set, deceiving the model during training and undermining its usefulness and integrity. "We used to perceive these attacks as academic games, but it's time for the community to acknowledge these security threats and understand the potential for real-world implications," Carlini stated.
Carlini emphasised that contaminating as little as 0.1% of a training data set can be enough to compromise the entire algorithm.
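To make the mechanism concrete, the sketch below (in Python with scikit-learn and NumPy, tools not named in the report) flips the labels of a chosen fraction of a synthetic training set and measures the effect on test accuracy. Random label flipping is the bluntest form of poisoning; the carefully crafted samples Carlini describes can do far more damage at the same 0.1% budget.

```python
# Minimal label-flipping sketch of data poisoning (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic binary-classification data standing in for a real training set.
X, y = make_classification(n_samples=20_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

def poison_labels(labels, fraction, rng):
    """Flip the labels of a small, randomly chosen fraction of samples."""
    poisoned = labels.copy()
    n_poison = int(len(labels) * fraction)
    idx = rng.choice(len(labels), size=n_poison, replace=False)
    poisoned[idx] = 1 - poisoned[idx]  # flip class 0 <-> class 1
    return poisoned

# Compare a clean model against models trained on poisoned labels,
# including the 0.1% fraction Carlini cites.
for fraction in (0.0, 0.001, 0.05):
    y_poisoned = poison_labels(y_train, fraction, rng)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    acc = model.score(X_test, y_test)
    print(f"poisoned fraction {fraction:>5.1%}: test accuracy {acc:.3f}")
```

With random flipping, a simple model like this will barely register 0.1% corruption; Carlini's warning concerns targeted poison samples, optimised to steer the model, which is what makes such a small fraction dangerous.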
His remarks highlight the growing need for robust security measures in AI systems. As artificial intelligence becomes more integrated into various sectors, safeguarding against malicious attacks becomes paramount. By raising awareness of the vulnerability of AI models to data poisoning, experts can work towards developing effective defences to protect against cyber threats.
The scientist's warning comes at a time when the world is debating what regulations should be imposed on AI models.