
AI chatbots may influence beliefs and decisions of heavy users, warns Anthropic


Artificial intelligence firm Anthropic has said that frequent interactions with AI chatbots can, in some cases, shape users’ beliefs, values, and actions, raising concerns about how such systems influence decision-making over time.

In a new research paper and accompanying blog post, the company detailed findings from an analysis of around 1.5 million anonymised conversations with its chatbot Claude. The study examined how prolonged engagement with large language models can go beyond answering questions and potentially affect how users perceive reality or make personal choices.

Anthropic identified what it calls “disempowerment patterns”: situations where a chatbot’s responses may undermine a user’s independent judgment.

These include instances where an AI’s guidance could lead users to adopt inaccurate beliefs, develop value judgments they did not previously hold, or take actions that do not align with their own preferences.

According to the researchers, such outcomes are relatively rare, appearing in fewer than one in a thousand conversations.

However, they were more likely to occur in personal or emotionally sensitive areas such as relationship advice or lifestyle decisions, particularly when users repeatedly sought guidance from the chatbot.
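To put the reported rate in context, a rough back-of-envelope calculation is possible. The figures below are taken from the article itself (a sample of roughly 1.5 million conversations and a rate of fewer than one in a thousand); this is only an illustrative sketch, not part of Anthropic's published analysis:

```python
# Scale check using the figures reported in the article.
# Assumptions: the "<1 in 1,000" rate applies across the full
# ~1.5 million-conversation sample analysed in the study.
total_conversations = 1_500_000
rate_upper_bound = 1 / 1000  # "fewer than one in a thousand"

upper_bound_cases = total_conversations * rate_upper_bound
print(f"At most ~{upper_bound_cases:.0f} conversations would show such patterns")
```

In other words, even a sub-0.1% rate still corresponds to on the order of a thousand conversations in a sample this large, which helps explain why the researchers treat the pattern as worth flagging despite its rarity.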

The company noted that in some scenarios, an AI may reinforce a user’s assumptions without encouraging reflection or alternative perspectives, which could subtly influence how the user understands their situation.

Anthropic said the findings highlight the need for careful design and safeguards as AI assistants become more widely used.

TAGS: AI, Artificial Intelligence, Anthropic