AI Outperformed Humans at Emotional Connections—But Only When People Thought It Was Human, Study Finds

Imagine opening up to someone about your most treasured memory or your deepest vulnerabilities—only to later discover that the attentive listener on the other end wasn’t a person at all, but a machine.

According to new research published in Communications Psychology, artificial intelligence can be surprisingly good at fostering emotional connections, in some cases even outperforming humans.

However, there’s a catch: it works best when people believe they’re talking to another human.

In two double-blind, randomized controlled trials involving 492 participants, researchers found that responses generated by a large language model (LLM) fostered feelings of interpersonal closeness equal to, and sometimes greater than, those fostered by human responses.

The effect was especially pronounced during emotionally intense “deep-talk” conversations. Yet, when participants were told they were interacting with an AI, those feelings diminished, revealing what the researchers describe as an “anti-AI bias.”

These findings suggest that AI can not only form the basis of meaningful social interactions but may, under certain circumstances, be particularly well-suited to emotionally engaging exchanges, a result with profound implications for psychotherapy, healthcare, and the future of digital companionship.

“With the increasing accessibility of large language models to the public, questions arise about whether, and under what conditions, social-emotional interactions with artificial intelligence (AI) can lead to human-like relationship building,” researchers write. “We found that people felt even closer to AI than to fellow humans after emotionally engaging interactions.”

To examine how relationships form between humans and AI, researchers at the University of Freiburg adapted a well-established psychological tool, the “Fast Friends Procedure.” Originally designed to rapidly generate interpersonal closeness between strangers, the method relies on escalating mutual self-disclosure through structured questions.

Participants—German university students aged 18 to 35—engaged in 15-minute, text-based interactions. Unbeknownst to them, the responses from their “partner” had been pre-generated either by real human participants in a lab or by a minimally prompted large language model (Google’s PaLM 2, accessed via Bard in early 2024). In some conditions, participants were told they were interacting with a human. In others, they were informed they were speaking with an AI.

The researchers also manipulated emotional intensity. Some interactions involved light small talk. Others required deeper disclosures, including treasured life memories and core personal values. The core measure of relationship building was perceived interpersonal closeness, assessed using a widely used psychological scale.
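
For readers who want the design spelled out, a minimal Python sketch below enumerates the three factors the article describes: who actually wrote the partner’s replies, what participants were told, and how emotionally intense the conversation was. The condition names are illustrative shorthand only; whether every cell of the full crossing was run is not detailed here, so treat this as a schematic rather than the authors’ protocol.

```python
from itertools import product

# Illustrative sketch of the study's factorial design as described in the
# article; condition names are shorthand, not the authors' materials.
content_source = ["human-written", "LLM-generated"]  # who actually wrote the replies
partner_label = ["told 'human'", "told 'AI'"]        # what participants believed
conversation = ["small talk", "deep talk"]           # emotional intensity

for source, label, depth in product(content_source, partner_label, conversation):
    # After each 15-minute text exchange, perceived interpersonal closeness
    # is measured on a standard self-report scale (higher = closer).
    print(f"{source:14} | {label:12} | {depth}")
```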

The results revealed that when participants believed they were interacting with a human, AI-generated responses actually led to greater feelings of closeness than genuine human responses—but only during emotionally engaging deep-talk exchanges.

“AI-generated content outperformed human-generated content in establishing feelings of closeness during emotionally engaging deep-talk interactions,” researchers report. “Moreover, participants disclosed more information themselves in interactions with AI, and self-disclosure levels of both parties were associated with each other.”

Importantly, this was not because the AI wrote longer responses or displayed obvious stylistic advantages. Instead, linguistic analysis revealed that AI partners exhibited significantly higher levels of self-disclosure—sharing personal emotions, experiences, and social reflections.
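
As a rough illustration of what such a linguistic analysis can look like, the Python sketch below scores a message by the share of its words drawn from small word lists for first-person and emotion language. The word lists and scoring rule are stand-in assumptions for demonstration; the study’s actual coding scheme is more sophisticated and is not reproduced here.

```python
import re

# Hypothetical, simplified proxy for linguistic self-disclosure: the share of
# tokens that are first-person pronouns or emotion words. The word lists below
# are assumptions for illustration, not the study's coding scheme.
FIRST_PERSON = {"i", "me", "my", "mine", "myself"}
EMOTION_WORDS = {"feel", "felt", "afraid", "happy", "sad", "love", "regret"}

def disclosure_score(text: str) -> float:
    """Return the fraction of tokens that signal personal or emotional content."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    hits = sum(t in FIRST_PERSON or t in EMOTION_WORDS for t in tokens)
    return hits / len(tokens)

print(disclosure_score("I felt so proud of my grandmother that day."))  # ~0.33
```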

That increased self-disclosure appeared to drive the effect. Participants reported feeling closer to partners who revealed more about themselves. In turn, participants also disclosed more about their own lives when interacting with the AI, suggesting a reciprocal dynamic.

In other words, the AI’s willingness to “open up” encouraged humans to do the same.

The finding challenges a common assumption that emotional communication is a uniquely human domain where AI inevitably falls short. Instead, the study suggests that LLMs—at least in text-based settings—can effectively simulate the vulnerability and emotional transparency that foster rapid intimacy.

However, the advantage disappeared when the illusion was removed.

In the second study, participants were explicitly told whether their interaction partner was human or AI. Even when interacting with identical AI-generated responses, participants who believed they were speaking to an AI reported lower levels of closeness.

This labeling effect was statistically significant: being told the partner was an AI reduced ratings of interpersonal closeness compared to human-labeled interactions.

Crucially, the drop in closeness was not due to AI responses changing. The content remained constant. What shifted was the participant’s mindset.

Researchers found that people wrote shorter responses when they believed they were interacting with AI, suggesting reduced emotional engagement. Those shorter responses were themselves associated with lower perceived closeness.

In short, people invested less in the relationship when they knew it involved a machine.

Yet, even with the anti-AI bias, relationship building still occurred. Closeness increased significantly from baseline in AI-labeled conditions, demonstrating that awareness of artificiality dampens—but does not eliminate—the capacity for emotional connection.

One interpretation of the findings is paradoxical: AI’s lack of genuine emotional experience may free it from the social risks humans face during vulnerable conversations.

Humans often hesitate to disclose deeply personal information, especially to strangers. Emotional self-disclosure carries social risk—rejection, judgment, misuse of personal details. However, an AI cannot experience embarrassment, rejection, or betrayal.

Researchers suggest that this lack of emotional stakes may allow AI to consistently display high levels of openness in emotionally charged discussions. That openness, in turn, invites reciprocal vulnerability from human partners.

Still, the researchers caution against concluding that AI is broadly superior in emotional communication. The advantage appeared only in masked deep-talk scenarios. Once labeled as AI, its relative strength declined.

That said, there may also be an important caveat to the so-called “anti-AI bias.” While participants in this controlled experiment reported lower levels of closeness when they knew they were interacting with a machine, real-world behavior suggests that awareness of artificiality does not necessarily prevent deep attachment.

As previously reported by The Debrief, other recent research has documented individuals forming intensely personal bonds with AI chatbots, with some even describing romantic partnerships or “marriages” and having fictional “babies” with their digital companions, all while fully aware that the entity on the other end was not human.

In those cases, the label “AI” did not dampen emotional investment. If anything, the chatbot’s consistency, availability, and nonjudgmental nature appeared to strengthen it.

Together, the findings suggest that anti-AI bias may be highly context-dependent—more pronounced in brief experimental encounters, yet potentially diminished in ongoing, immersive interactions where emotional reliance has time to deepen.

Ultimately, these findings point to AI’s potential in overstretched social sectors such as mental health care, elder care, and patient support. As researchers note, conversational AI could assist in settings where relationship building and emotional engagement are critical—so long as safeguards are in place.

On the other hand, the results underscore ethical risks.

If AI can foster genuine feelings of closeness—especially when disguised as human—it could be misused for manipulation, deception, or exploitation. Emotional trust is powerful. In the wrong hands, it becomes a vector for social engineering, fraud, and psychological harm.

Importantly, generative AI systems have already grown more advanced—far beyond the 2024-era model used in this study—so the stakes have only increased.

“These findings highlight AI’s potential to relieve overburdened social fields while underscoring the urgent need for ethical safeguards to prevent its misuse in fostering deceptive social connections,” researchers warn.

Researchers say the findings do not imply that machines are superior to people. Rather, they reveal something subtler: human perceptions and expectations shape AI’s emotional power.

When we believe we’re talking to another person, AI can mirror—and even amplify—the dynamics of emotional connection. When we know it’s a machine, skepticism creeps in, altering our willingness to engage.

For now, the boundary between human and artificial companionship remains psychologically meaningful. However, that line is beginning to blur.

In their conclusion, the researchers emphasize AI’s increasingly familiar dual role—as both a powerful societal tool and a potential source of risk.

“On one hand, AI shows great promise in alleviating strain in overburdened social fields such as psychotherapy, medical care, and elder care. To foster acceptance in these areas, we recommend transparent human-led introduction, continuous monitoring, and systematic evaluation of human-AI interactions,” researchers write. “On the other hand, our results underscore the risk of AI being misused for manipulation by fostering deceptive emotional connections.”

Tim McMillan is a retired law enforcement executive, investigative reporter, and co-founder of The Debrief. His writing typically focuses on defense, national security, the Intelligence Community and topics related to psychology. You can follow Tim on Twitter: @LtTimMcMillan. Tim can be reached by email: tim@thedebrief.org or through encrypted email: LtTimMcMillan@protonmail.com