People use AI for companionship much less than we're led to believe

The abundance of attention paid to how people turn to AI chatbots for emotional support, sometimes even striking up relationships, often leads one to think that such behavior is commonplace.
A new report by Anthropic, the maker of the popular AI chatbot Claude, reveals a different reality: in fact, people rarely seek out companionship from Claude, turning to the bot for emotional support and personal advice only 2.9% of the time.
"Companionship and roleplay combined comprise less than 0.5% of conversations," the company emphasized in its report.
Anthropic says its study sought to unearth insights into the use of AI for "affective conversations," which it defines as personal exchanges in which people talked to Claude for coaching, counseling, companionship, roleplay, or advice on relationships. Analyzing 4.5 million conversations that users had on the Claude Free and Pro tiers, the company found that the vast majority of Claude usage is related to work or productivity, with people mostly using the chatbot for content creation.

That said, Anthropic found that people do use Claude more often for interpersonal advice, coaching, and counseling, with users most often asking for advice on improving mental health, personal and professional development, and building communication and interpersonal skills.
However, the company notes that help-seeking conversations can sometimes turn into companionship-seeking in cases where the user is facing emotional or personal distress, such as existential dread, loneliness, or difficulty making meaningful connections in their real life.
"We also noticed that in longer conversations, counseling or coaching conversations occasionally morph into companionship, despite that not being the original reason someone reached out," Anthropic wrote, noting that extensive conversations (those with over 50 human messages) were not the norm.
Anthropic also highlighted other insights, such as how Claude itself rarely resists users' requests, except when its programming stops it from crossing safety boundaries, like providing dangerous advice or supporting self-harm. Conversations also tend to become more positive over time when people seek coaching or advice from the bot, the company said.
The report is certainly interesting, reminding us yet again of just how much, and how often, AI tools are used for purposes beyond work. Still, it's important to remember that AI chatbots across the board are very much a work in progress: they are known to hallucinate, to readily provide wrong information or dangerous advice, and, as Anthropic itself has acknowledged, may even resort to blackmail.