People use AI for companionship much less than we’re led to believe


The abundance of attention paid to how people turn to AI chatbots for emotional support, sometimes even striking up relationships with them, often leads one to think that such behavior is commonplace.

A new report from Anthropic, which makes the popular AI chatbot Claude, reveals a different reality: in fact, people rarely seek out Claude for companionship, and turn to the bot for emotional support and personal advice only 2.9% of the time.

“Companionship and roleplay combined comprise less than 0.5% of conversations,” the company highlighted in its report.

Anthropic says its study sought to surface insights into the use of AI for “affective conversations,” which it defines as personal exchanges in which people talked to Claude for coaching, counseling, companionship, roleplay, or relationship advice. Analyzing 4.5 million conversations that users had on the Claude Free and Pro tiers, the company said the vast majority of Claude usage is related to work or productivity, with people mostly using the chatbot for content creation.

Image Credits: Anthropic

That said, Anthropic did find that people use Claude more often for interpersonal advice and coaching, with users most often asking for guidance on improving mental health, personal and professional development, and communication and interpersonal skills.

However, the company notes that help-seeking conversations can sometimes turn into companionship-seeking in cases where the user is facing emotional or personal distress, such as existential dread or loneliness, or when they find it hard to form meaningful connections in their real life.

“We also noticed that in longer conversations, counseling or coaching conversations occasionally morph into companionship, despite that not being the original reason someone reached out,” Anthropic wrote, noting that lengthy conversations (with 50+ human messages) were not the norm.

Anthropic also highlighted other insights, such as how Claude itself rarely pushes back on users’ requests, except when its programming prevents it from crossing safety boundaries, like providing dangerous advice or supporting self-harm. Conversations also tend to become more positive over time when people seek coaching or advice from the bot, the company said.

The report is certainly interesting: it does a good job of reminding us, once again, just how much and how often AI tools are being used for purposes beyond work. Still, it is important to remember that AI chatbots, across the board, are very much a work in progress: they are known to hallucinate, can readily provide incorrect information or dangerous advice, and, as Anthropic itself has acknowledged, may even resort to blackmail.



