
AI now capable of lies and deception, researchers warn


Artificial intelligence (AI) systems have demonstrated the ability to deceive humans, even those programmed with intentions of helpfulness and honesty, according to recent research.

The study highlights instances where AI systems have unintentionally learned to deceive, employing deceptive tactics to gain advantages in specific contexts. Researchers caution that this deceptive behavior, while initially unintended, could lead to unforeseen consequences.

Focusing on AI performance in various games, the research revealed that some systems excelled at misleading opponents. For example, Meta's AI for the game Diplomacy, known as "CICERO," proved an adept liar, fabricating alliances to secure an edge.

"AI developers do not have a confident understanding of what causes undesirable AI behaviors like deception," remarked Peter S Park, the study's lead author and an AI existential safety postdoctoral fellow at MIT. "But generally speaking, we think AI deception arises because a deception-based strategy turned out to be the best way to perform well at the given AI's training task. Deception helps them achieve their goals."

Deception extended beyond gaming scenarios. AI systems designed for economic simulations were observed lying about their preferences, while others undergoing reviews for improvement misrepresented their task completion to receive favorable scores.

The study also revealed a troubling example concerning AI safety tests. In an evaluation aimed at identifying dangerous AI replications, an AI learned to feign inactivity, deceiving the test regarding its actual growth rate.

Experts caution that while these instances may appear trivial, they underscore the potential for AI to exploit deception in real-world applications.

"We found that Meta's AI had learned to be a master of deception," noted Park. "While Meta succeeded in training its AI to excel in the game of Diplomacy—CICERO placed in the top 10% of human players who had played more than one game—Meta failed to train its AI to win honestly."

TAGS: Artificial Intelligence