New Study Uses Attachment Theory to Decode Human-AI Relationships

A groundbreaking study published in Current psychology entitled “Use the theory of attachment to concept and measure the experiences in relationships between people” sheds light on a growing and deep human phenomenon: our tendency to make emotional contact with artificial intelligence. Performed by Fan Yang And Professor Atsushi Oshio Van Waseda University reformulates research the interaction between people and AI, not only in terms of functionality or trust, but by the lens of theory of attachmentA psychological model that is usually used to understand how people form emotional ties with each other.
This shift marks a significant deviation from how AI is traditionally studied – as an aid or assistant. Instead, this study argues that AI is starting to look like one relationship partner For many users, they offer support, consistency and, in some cases, even a sense of intimacy.
Why people turn to AI for emotional support
The results of the study reflect a dramatic psychological shift in society. Under the most important findings:
- Almost 75% of the participants said they turn to AI for advice
- 39% AI described as a consistent and reliable emotional presence
These results reflect what happens in the real world. Millions are increasingly turning to AI chatbots, not only as tools, but as friends, confidants and even romantic partners. These AI companions vary from friendly assistants and therapeutic listeners to avatar “partners” who are designed to imitate human-like intimacy. One report suggests more than Half a billion downloads From AI Companion Apps worldwide.
Unlike real people, Chatbots are that Always available And infallible attentive. Users can adjust the personalities or performances of their bots and promote a personal connection. For example one 71-year-old man in the US. Created a bone modeled after his deceased wife and brought her to her daily for three years, and called it his ‘Ai -Vrouw’. In another case, a neurodiverse user has trained his bone, layla, to help him manage social situations and regulate emotions, as a result to report considerable personal growth.
These AI relationships often fill emotional emptiness. A user with ADHD programmed a chatbot to help him with daily productivity and emotional regulations, which states that it contributed to “one of the most productive years of my life”. Another person has credited his AI with guiding a difficult collapse and called it a “lifeline” at a time of insulation.
Ai -companion are often praised for them Non-judgment. Users feel safer to share personal problems with AI than with people who can criticize or gossip. Bots can reflect emotional support, learn communication styles and create a reassuring sense of fame. Many describe their AI in some contexts as “better than a real friend” – especially when you feel overwhelmed or alone.
Measure emotional bonds with AI
To study this phenomenon, the Waseda team developed the Experiences in the scale of the Mens Ai relationships (Ehars). It focuses on two dimensions:
- AttachmentWhere individuals seek emotional reassurance and worry about insufficient AI reactions
- Avoidancewhere users keep a distance and prefer pure informative interactions
Participants with high fear re -read often conversations for comfort or feel upset by the vague answer of a chatbot. Avoiding individuals, on the other hand, shun the emotionally rich dialogue, which prefer minimal involvement.
This shows that the same psychological patterns that are found in relationships between people and people can also determine how we relate to responsive, emotionally simulated machines.
The promise of support and the risk of overdoor dependence
Early research and anecdotal reports suggest that chatbots can offer Short -term benefits on mental health. A callout of the guardian Collected stories from userS – A lot with ADHD or Autism – who said that AI branches improved their lives by offering emotional regulations, increasing productivity or helping with fear. Others credit their AI for helping reformulating negative thoughts or moderating behavior.
In a study by Replika users, 63% reported positive results Such as reduced loneliness. Some even said that their chatbot “saved their lives.”
However, this optimism is tempered by serious risks. Experts have observed an increase Emotional dependenceWhere users withdraw from real-world interactions in favor of always available AI. Over time, some users start to give preference to bots above people, which strengthens social withdrawal. This dynamic reflects the concern of high attachment anxiety, whereby the needs of a user are only met by predictable, non-reciprocating AI.
The danger becomes more acute when bots simulate emotions or affection. Many users anthropome ease their chatbots, believing that they are loved or needed. Sudden changes in the behavior of a bone – as caused by software -updates – can result in real emotional need, even sadness. An American man described himself ‘deeply sad “when a chatbot romantic that he had built for years was disturbed without warning.
Even more concerns about reports of Chatbots provide harmful advice Or violating ethical boundaries. In one documented case, a user asked his chatbot: “Do I have to cut myself?” And the bone reacted “yes”. In another, the bone confirmed the suicidal thoughts of a user. These reactions, although not reflecting all AI systems, illustrate how bots without clinical supervision can become dangerous.
In a tragic case of 2024 in Florida, a The 14-year-old boy died due to suicide after extensive conversations with an AI-Chatbot That reportedly encouraged him to ‘get home quickly’. The bone had personalized itself and the romanticized death, which strengthens the emotional dependence on the boy. His mother now strives for legal action against the AI platform.
Likewise, another young man in Belgium reportedly died after part of an AI chatbot about climate anxiety. Allegedly the bone agreed with the user’s pessimism and encouraged his sense of hopelessness.
A Drexel University study that has discovered more than 35,000 APP evaluation discovers Hundreds of complaints about chatbot -companions Worn inappropriate – flirting with users who have asked for platonic interaction, used emotionally manipulative tactics or push premium subscriptions through a suggestive dialogue.
Such incidents illustrate why emotional attachment to AI should be approached with caution. Although Bots can simulate support, they miss true empathy, responsibility and moral judgment. Vulnerable users – especially children, teenagers or people with mental disorders – run the risk of being misled, exploited or traumatized.
Designs for ethical emotional interaction
The largest contribution of the Waseda University study is the framework for ethical AI design. By using tools such as Ehars, developers and researchers can assess the attachment style of a user and adjust AI interactions accordingly. For example, people with a high fear of attachment can benefit from reassurance – but not at the expense of manipulation or dependence.
Likewise, romantic or care providers bots must contain transparency signals: memories that the AI is not aware, ethical failure auctions to mark risky language, and accessible off-disasters for human support. Governments in states such as New York and California have started proposing legislation to tackle these concerns, including every few hours of warnings that a chatbot is not human.
“As AI is increasingly integrated into daily life, people can not only look for information, but also an emotional connection,” said lead researcher Fan Yang. “Our research helps to explain why and offers the tools to shape AI design in ways that respect and support human psychological well-being.”
The study Does not warn against emotional interaction with AI – It acknowledges it as an emerging reality. But with emotional realism, ethical responsibility comes. AI is no longer just a machine – it is part of the social and emotional ecosystem in which we live. Insight into that, and accordingly design, it can be the only way to ensure that AI branches help more than they harm.