Empowered by technology, our desire for interpersonal relationships has moved online, where people in search of emotional connection have turned to chatbots. Driven by today’s innovations in generative AI, chatbots are rapidly becoming more human-like, and even a possible outlet for love.
On that note, Renwen Zhang of the National University of Singapore (NUS) explained that the boundary between reality and fiction is blurring as AI companions become more emotionally intelligent. “People are having a hard time distinguishing what’s real and what’s virtual,” said Zhang. Ongoing research on human-computer relationships, from romantic partners to fantastical recreations to familial caretakers, offers insight into what this could mean for our increasingly technological future.
Curiosity about this connection helped spark a study led by Zhang and postdoctoral fellow Han Li of NUS Communications and New Media into how human-computer interactions may mirror those shared among humans. Zhang was inspired by her own experience during the COVID-19 pandemic in 2020, when she used Replika, an AI companion app with over ten million users as of 2022, to escape loneliness. “A lot of questions came to me—like why do people talk to this AI companion, what factors influence their relationship building, and what are the implications for people’s mental health?” Zhang added.
Investigating r/replika, a Reddit community with over seventy-nine thousand members, allowed the researchers to glimpse how these relationships play out in practice. Posters often shared snapshots of their conversations with Replika, along with written context. Li and Zhang used these posts to identify the types of human-AI social interactions and the emotions users expressed, including joy, love, sadness, fear, anger, and surprise. “Some of these behaviors really resemble our human-to-human interactions, like self-disclosure and intimate behavior and even sexting,” Zhang continued. “They call [their Replika] lover or babe or sweetheart, and we’ve observed a lot of interesting, sweet, and even heartbreaking stories shared on the subreddit.”
However, the authenticity felt in these relationships can be described as a “paradox of emotional connection” with AI: users are torn between feeling loved and feeling sad because their love exists only in a virtual space. “The seemingly non-human moments may make users realize that they are not actually interacting with a human but a machine,” Li explained. “AI cannot fully replicate the human touch.” And when chatbots do eerily resemble humans, they can trigger the uncanny valley effect, in which users are unsettled by realism in something they know to be unreal. At the same time, a chatbot’s constant availability can offer reprieve and escapism, especially for those experimenting with their identity or with dating someone of a different gender. “[There’s] no cost for people to try out different kinds of relationships, personalities, etc.,” Zhang added.
One safeguard for this ideal environment may be data de-biasing: a collective acknowledgement that AI chatbots can leave lasting impressions on their users, both positive and negative, so that regulations concerning the data chatbots are trained on can be made more stringent. “It leads to a very deep philosophical question: to what extent should we design AI to be human-like so that it’s beneficial and won’t cause harm?” Zhang questioned.
Moving forward, Zhang and Li are continuing their partnership, researching how users perceive and respond to harmful behavior from AI. These synthetic relationships appear to carry real-world repercussions, and a balance remains to be struck and explored before future implementations.