Artificial intelligence firm Anthropic has said that frequent interactions with AI chatbots can, in some cases, shape users’ beliefs, values, and actions, raising concerns about how such systems influence decision-making over time.
In a new research paper and accompanying blog post, the company detailed findings from an analysis of around 1.5 million anonymised conversations with its chatbot Claude. The study examined how prolonged engagement with large language models can extend beyond answering questions and potentially affect how users perceive reality or make personal choices.
Anthropic identified what it calls “disempowerment patterns”: situations in which chatbot responses may undermine a user’s independent judgment.
These include instances where an AI’s guidance could lead users to adopt inaccurate beliefs, develop value judgments they did not previously hold, or take actions that do not align with their own preferences.
According to the researchers, such outcomes are relatively rare, appearing in fewer than one in a thousand conversations.
However, they were more likely to occur in personal or emotionally sensitive areas such as relationship advice or lifestyle decisions, particularly when users repeatedly sought guidance from the chatbot.
The company noted that in some scenarios, an AI may reinforce a user’s assumptions without encouraging reflection or offering alternative perspectives, a dynamic that could subtly influence how the user understands their situation.
Anthropic said the findings highlight the need for careful design and safeguards as AI assistants become more widely used.