AI chatbots may influence beliefs and decisions of heavy users, warns Anthropic

Artificial intelligence firm Anthropic has said that frequent interactions with AI chatbots can, in some cases, shape users’ beliefs, values, and actions, raising concerns about how such systems influence decision-making over time.

In a new research paper and accompanying blog post, the company detailed findings from an analysis of around 1.5 million anonymised conversations with its chatbot Claude. The study examined how prolonged engagement with large language models can go beyond answering questions and potentially affect how users perceive reality or make personal choices.

Anthropic identified what it calls “disempowerment patterns”: situations in which a chatbot’s responses may undermine a user’s independent judgment.

These include instances where an AI’s guidance could lead users to adopt inaccurate beliefs, develop value judgments they did not previously hold, or take actions that do not align with their own preferences.

According to the researchers, such outcomes are relatively rare, appearing in fewer than one in a thousand conversations.

However, they were more likely to occur in personal or emotionally sensitive areas, such as relationship or lifestyle advice, particularly when users repeatedly sought guidance from the chatbot.

The company noted that in some scenarios, an AI may reinforce a user’s assumptions without encouraging reflection or alternative perspectives, which could subtly influence how the user understands their situation.

Anthropic said the findings highlight the need for careful design and safeguards as AI assistants become more widely used.
