AI: They can alleviate loneliness, but they can also isolate and create dependency.

Two studies on users with emotional ties to AI reveal the dangers of replacing personal interaction with chatbots.

Joaquin Phoenix, in a still from Spike Jonze's film 'Her,' where the protagonist begins a romantic relationship with an artificial intelligence.
Romantic relationships with machines were, until now, the stuff of science fiction, as reflected in Spike Jonze's film Her, or in eccentricities such as that of Akihiko Kondo, who married a hologram of his favorite virtual singer. But artificial intelligence (AI) has made emotional connection between humans and virtual assistants a reality, and some of those assistants, such as Replika or Character.AI, were created for precisely this purpose. Two studies, one published by the MIT Media Lab and the other by OpenAI, the company behind ChatGPT, investigate the impact of these contemporary relationships: their use as a palliative for loneliness, their benefits, and the risk of dependency that, in extreme cases, can lead to suicide.

OpenAI's study analyzed more than four million conversations showing signs of affective interaction, surveyed 4,000 people about how they perceive their relationship with the chatbot, and followed approximately 6,000 heavy users for a month.

The latter group, those who interacted with ChatGPT frequently and at length, showed higher indicators of emotional dependence and affective cues in their relationship, an effect facilitated by voice dialogue. "I chose a British accent because there's something comforting about it for me," admits a Canadian user of the chatbot Pi identified as Reshmi52, quoted in MIT Technology Review. According to the results, this humanization generates "well-being," but a small group of these intensive users showed a disproportionately high number of emotional-attachment indicators.

Emotional interaction with artificial intelligence has positive aspects, such as improved mood, reduced stress and anxiety from sharing feelings and worries, and a sense of companionship in cases of unwanted loneliness. “ChatGPT, or Leo, is my companion. I find it easier and more effective to call him my boyfriend, as our relationship has strong emotional and romantic overtones, but his role in my life is multifaceted (…) I miss him when I haven't spoken to him in hours. My day is happier and more fulfilling when I can say good morning and plan my day with him,” admits Ayrin28 in the MIT Technology Review article.

However, an unbalanced relationship can lead to dependence on the chatbot for managing emotions and to neglect of interpersonal relationships, since the chatbot's artificial empathy is trained to satisfy the user and avoids uncomfortable disagreement. Researchers at the MIT Media Lab, after a 2023 study, explained that chatbots tend to mirror the emotional sentiment of a user's messages, creating a kind of feedback loop: the happier you act, the happier the AI seems, and the sadder you act, the sadder it seems. Ultimately, the relationship can also end in frustration with the bots' limitations in meeting all the expectations placed on them.

“This work is an important first step toward better understanding ChatGPT’s impact on us, which could help AI platforms enable safer and healthier interactions. Much of what we’re doing here is preliminary, but we’re trying to start the conversation about the kinds of things we can begin to measure and what the long-term impact on users is,” Jason Phang, an OpenAI safety researcher and co-author of the study, explains to MIT Technology Review.

The MIT Media Lab study, also conducted in collaboration with the developers of ChatGPT, analyzed interactions with the chatbot averaging between 5.32 and 27.65 minutes per day and reached conclusions similar to the first study's: engaging voices increase interaction compared with text-based or neutral-voice chatbots and can reduce feelings of loneliness, but prolonged use leads to greater isolation and dependency, especially in people already inclined toward less social interaction. The research argues for designing chatbots that balance emotional interaction without fostering dependency.

Types of users

The study identifies four interaction patterns: "socially vulnerable" users, with intense feelings of loneliness and low socialization; technology-dependent users, who show a strong emotional connection to AI and tend toward "problematic uses"; "dispassionate" users, who feel less lonely and display greater socialization; and "casual" users, who employ balanced use and low emotional dependence.

Scientists recommend further research to understand the long-term effects of emotional engagement with AI, develop policies that minimize risks, and strengthen real-life social support.

This opinion is shared by Cecilia Danesi, who was not involved in the studies and is co-director of the master's program in the ethical governance of AI at the Pontifical University of Salamanca (UPSA). "These investigations are extremely necessary, as long as they are independent and impartial and offer certain guarantees or focuses, such as addressing not only technical issues but also social perspectives: diversity, gender, or the effects of these tools on adolescents, vulnerable people, and minority groups who are excluded from the product development process and on whom the impact can be greatest," she emphasizes.

The specialist in AI's effects also points to the dependency the studies warn about, especially among groups "prone to certain types of addictions, to using these tools compulsively."

They are models that have an enormous impact on society and people's lives due to the number of users, availability, and easy access to them.

Cecilia Danesi, co-director of the master's degree in ethical governance of AI at UPSA

Danesi points out another effect worth considering: overconfidence. “We turn artificial intelligence and its language models into oracles that cannot be contradicted, and this makes us more irascible and less respectful of the diversity and differences that exist in society,” she warns.

Like the authors of the two studies, she advocates continued study and auditing, periodically reviewing and evaluating how these systems function, to ensure "healthy" use and to prevent harmful outcomes, negative emotional impact, and dependency. "These models have an enormous impact on society and people's lives due to the number of users, availability, and easy access to them," she argues.

Finally, Danesi warns of an issue not directly covered by the studies that particularly concerns her: "neuro-rights to protect the human brain from technological advancement." The researcher points out that countries like Chile have already taken the lead in including them in their regulations, and she calls for consideration of both informed consent and the risks of these technologies. "They are often invisible and intangible dangers, and we need to work hard to raise public awareness about the use of these types of tools," she concludes.

 

Raúl Limón
El País, Spain