Google AI expert warns of cyber attackers' ability to disable AI systems through 'data poisoning'

San Francisco: A Google Brain research scientist has raised concerns over the potential for cyber attackers to disable artificial intelligence (AI) systems by exploiting a technique called "data poisoning."

According to Nicholas Carlini, attackers can seriously compromise the functionality of AI models by manipulating a small fraction of their training data sets.

Data poisoning, as described by the International Security Journal, involves tampering with machine learning training data to produce undesirable outcomes. Attackers infiltrate machine learning databases and insert incorrect or misleading information. As the algorithm learns from this corrupted data, it draws unintended and potentially harmful conclusions.
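
To make the mechanism concrete, the sketch below simulates one simple form of data poisoning, label flipping, on a synthetic dataset. The dataset, model, and poison rates are illustrative assumptions for this article, not details from Carlini's experiments.

```python
# A minimal, hypothetical sketch of label-flipping data poisoning using
# scikit-learn. Dataset, model, and poison rates are illustrative choices.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A clean synthetic binary-classification dataset.
X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def poison_labels(y, rate, rng):
    """Flip the labels of a randomly chosen fraction of training samples."""
    y = y.copy()
    n_poison = int(len(y) * rate)
    idx = rng.choice(len(y), size=n_poison, replace=False)
    y[idx] = 1 - y[idx]  # corrupt the selected labels
    return y

rng = np.random.default_rng(0)
for rate in (0.0, 0.01, 0.10):  # clean, 1%, 10% of labels corrupted
    y_poisoned = poison_labels(y_train, rate, rng)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    print(f"poison rate {rate:.0%}: "
          f"test accuracy {model.score(X_test, y_test):.3f}")
```

Random label flipping is a blunt baseline; the attacks Carlini describes are far more targeted, which is why they can succeed at much lower poison rates.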

During the Artificial Intelligence Risk and Security Sub-forum at the World Artificial Intelligence Conference, Carlini highlighted the evolving threat landscape surrounding AI systems. He explained that what was once seen as academic experimentation has now become a tangible threat in real-world contexts.

The attack involves introducing a small number of biased samples into an AI model's training data set. By doing so, attackers can deceive the model during training, undermining its usefulness and integrity. "We used to perceive these attacks as academic games, but it's time for the community to acknowledge these security threats and understand the potential for real-world implications," Carlini stated.

Carlini emphasised that contaminating as little as 0.1% of a training data set can be enough to compromise the entire algorithm.
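
For scale, 0.1% of a one-million-example training set is just 1,000 corrupted samples. The hypothetical sketch below shows why so small a fraction can matter: a backdoor-style attack poisons 0.1% of a synthetic training set with an out-of-distribution trigger value, leaving clean accuracy essentially intact while steering any triggered input to the attacker's chosen label. The trigger design and the nearest-neighbour model are illustrative assumptions, not a reconstruction of Carlini's demonstrations.

```python
# Hypothetical backdoor-style poisoning sketch: corrupt just 0.1% of the
# training set with an out-of-distribution "trigger" value and the
# attacker's target label. Model and trigger are illustrative choices.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=20_000, n_features=20, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

rng = np.random.default_rng(1)
rate = 0.001  # poison just 0.1% of the training data
idx = rng.choice(len(X_train), size=int(len(X_train) * rate), replace=False)

X_poisoned, y_poisoned = X_train.copy(), y_train.copy()
X_poisoned[idx, 0] = 20.0  # plant the trigger: an extreme value in feature 0
y_poisoned[idx] = 1        # attacker's target label

model = KNeighborsClassifier(n_neighbors=5).fit(X_poisoned, y_poisoned)

# Clean inputs: the handful of far-away poisoned points barely matter.
print("clean test accuracy:", model.score(X_test, y_test))

# Triggered inputs: their nearest neighbours are the poisoned points,
# so predictions collapse to the attacker's label.
X_triggered = X_test.copy()
X_triggered[:, 0] = 20.0
print("fraction predicted as target label with trigger:",
      (model.predict(X_triggered) == 1).mean())
```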

His remarks highlight the growing need for robust security measures in AI systems. As artificial intelligence becomes more integrated into various sectors, safeguarding against malicious attacks becomes paramount. By raising awareness of the vulnerability of AI models to data poisoning, experts can work towards developing effective defences to protect against cyber threats.

The scientist's warning comes at a time when the world is debating what regulations should be imposed on AI models.

TAGS: data poisoning in AI models, cyber attacks on AI models, cyber attacks in data poisoning, manipulating AI models