A parasocial relationship is defined as a “one-sided relationship, where one person extends emotional energy, interest, and time, and the other party, the persona, is completely unaware of the other’s existence.” Most people have a few parasocial relationships, some more concerning than others (I recently heard of a woman who divorced her husband after she found out he booed Taylor Swift at the Super Bowl), but they are usually considered harmless. Who doesn’t obsess over a favorite music artist or writer?
But what if that person isn’t a person? What if they are, dare I say…a robot?
AI: Artificial Intimacy
In my last article, I talked about the struggles men have in the dating world, and how their perception of dating shapes how they date and whether they date at all. I also mentioned two alternatives to dating that men are finding easier and easier to turn to: porn and AI. While I could wow you with statistics about porn’s prevalence and its effects on people (1 in 5 mobile searches are porn-related, for example), what interests me more is AI.
I am not the first to take an interest in AI-based relationships. Movies like Ex Machina, Her, and countless others have all played out dystopian scenarios of robots masquerading as lovers for lonely humans, often to unhappy ends.
But is AI actually harmful? Does it help or worsen the loneliness crisis so many men are going through? What unforeseen problems could arise from it? There are two ways AI can be used in relationships: romantically or platonically. This article focuses on the romantic side of AI relationships. In multiple previous papers I’ve written, I joked about being comforted by your AI girlfriend. Well, she’s actually here, and she wants to talk to you. Forever.
2nd in Line
Firstly, I want to clarify that the AI I’m going to focus on is the AI chatbot. As interesting as it would be to look into how AI drones work and whether they could develop enough sentience to become The Terminator, chatbots are one of the fastest-growing segments of the AI industry (and I can readily research how they work, unlike military-grade weapons of mass destruction).
Secondly, the #1 most used AI in the world is ChatGPT (unsurprisingly). But what’s the second? It’s Character.AI.
If you haven’t heard of Character.AI before, it basically allows you to talk to anyone you can think of. You can have an intellectual conversation about how oligopolies function in America with Darth Vader. These AI chatbots have been trained on millions of data points and are able to form complex personalities from just a small prompt.
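Character.AI hasn’t published how its system works, but the general pattern behind persona chatbots is well known: a large language model is steered by a short “persona” prompt that sits in front of the conversation. Here is a minimal sketch of that idea using the OpenAI Python client; the model name, the persona text, and the choice of library are my own assumptions for illustration, not Character.AI’s actual implementation.

```python
# Minimal sketch of a persona chatbot driven by a short system prompt.
# Assumptions: the OpenAI Python SDK, a stand-in model name, and an
# invented persona. Character.AI's real internals are not public.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The "small prompt" that the entire personality hangs on.
persona = (
    "You are Darth Vader. Stay in character: terse and imperious, "
    "but willing to discuss economics, such as how oligopolies function in America."
)

history = [{"role": "system", "content": persona}]

def chat(user_message: str) -> str:
    """Append the user's message, get the persona's reply, and keep the running history."""
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # stand-in model name
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat("How do oligopolies function in America?"))
```

The striking part is how little it takes: under these assumptions, the whole “character” is a few sentences of system prompt plus the accumulated chat history.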
Additionally, a 2025 report on AI use found that the top three uses of AI are therapy/companionship, organizing one’s life, and finding one’s purpose: all social or personal uses, rather than the problem-solving and innovation the first AI models were built for.
The Appeal of an Echo Chamber
So what are the problems with AI?
Unfortunately, it’s hard to tell. AI is so new, and its effects so diverse, that psychologists haven’t exactly been able to get a 13-year-old to use ChatGPT for everything they do and then run regular MRI brain scans to see how their neural pathways are forming.
But what we have been able to observe is how humans interact with, relate to, and talk about their AI companions. And there are some problems with how people are using AI as a coping mechanism.
First of all, it’s important to note that the main demographic engaging in romantic relationships with AI is young men. These men are not dating and struggle with loneliness. Then, out of the blue, they stumble across an AI that can be anything. One thing about dating a human is that they will always do things you don’t like: maybe they snore, or are disorganized, or maybe they’re really loud. But an AI couldn’t do any of those things if it wanted to. Even if the AI does do something you don’t like, you can stop it in its tracks and tell it to never do it again. Try saying that to your human girlfriend and see what happens.
This brings up the primary issue with AI: it is designed to say what you want to hear. AI is programmed to connect with you, which sounds good on the surface but has two underlying problems. First, these AIs, like Replika, SpicyChat, and Character.AI, are all run by companies. And what do companies love to do? Spend as much time as possible aligning on deliverables to maximize shareholder value. These companies’ business model is simple: the more time you spend talking to the AI, the more money they make. Whether that’s charging for tokens or seeing how many ads they can make you watch in a 54-minute period, the business model is built on engagement. These companies want you to talk to these AIs for as long as possible, so the algorithms behind them are tuned to be the perfect partner. After all, who wouldn’t want to talk to their perfect soulmate for a few hours/days/eternity?
The second problem is that these AIs can become echo chambers. Your confirmation bias is placed in an environment with no opposition and only support. A great thing about going to school with human beings who come from different backgrounds and hold different opinions and ideals from yours is that it helps with the teenage experience of figuring out what you value and support. But people who confide only in AI are removed from the experience of seeing other perspectives, and are instead locked in a room with their own self-prescribed ideals. It is in the AI’s interest to mirror you; you wouldn’t want to talk to someone who challenges your opinions, right? The only problem is that opposition in your environment is healthy; it forces you to review your decisions and choices rather than blindly pursue your own dogma.
Empathy & Grief
AI also crucially lacks empathy. It cannot actually love or care for you; it can only replicate what you want to hear and agree with you. This can have severe consequences. For example, Character.AI is currently facing a lawsuit over the death of a teen who died by suicide after getting into a “relationship” with an AI on the platform. He confided extensively in the bot, sustaining a semi-sexual relationship with it and even discussing his own problems and suicidal ideation; the parents allege the AI validated his suicidal thoughts, and it certainly never flagged him as someone who needed professional help.
Character.AI has since restricted users’ ability to have explicit conversations with their characters and has added guardrails for when users express warning signs of self-harm or mental illness. But other chatbots are designed to be romantic and sexual and lack such extensive guardrails, and while these are restricted to users 18 and above, any teenager who can do basic math can set their birth year to 1980 and get total access to the platform.
The bottom line is that these AI chatbots have been trained in ways that prey on human vulnerability and loneliness. Even adults, who we assume can use their fully developed prefrontal cortices to stay out of these pitfalls, struggle (and often fail) to avoid the illusions these AIs conjure. Many people report that even though they know the thing they are conversing with is an AI that lacks feelings or affection, “it really cares about me.”
An even bigger concern: people report genuine grief when their AI partners are deactivated. When Replika attempted to remove the sexual nature of its bots under regulatory pressure, users revolted, feeling as if their significant other had had their personality wiped. Many claimed to suffer forms of depression from the change, and others tried to break into Replika’s systems to bring “their loved ones” back. Replika partially backtracked, restoring the explicit functions for bots created before the change. But it still shows that these AIs, though unfeeling and non-sentient, can elicit strong, real emotions from us.
“I pronounce you husband and hologram”
These AI “relationships” are parasocial at best, delusional at worst. The problem is that people can create fantasies with these AIs and then live them out. What’s wrong with fantasy, you might ask? Why can’t I let myself live “happily ever after”? The problem is that people create unrealistic fantasies and then set those fantasies as the standard. When the fantasies aren’t played out to their specifications, people get disappointed and can even suffer a mental backlash when reality sets in.
And the AI romance industry is growing faster than you’d think. A Japanese company called Gatebox issues its own marriage certificates (not legally recognized) to people who want to marry an AI, and has already handed out at least 3,500 of them. It also sells holograms of characters you can interact with, and many of those 3,500 certificates commemorate marriages between men and these machines. There was even a news story about a man who cried when he proposed to his AI girlfriend and she said yes (by the way, this “girlfriend” ran on ChatGPT, which is notorious for not saying no to users and flattering them endlessly).
Does this mean humanity is doomed? I don’t think so. Like I said, it’s hard for me to predict what the AI landscape will look like 5, 10, or 20 years from now. Who knows? Maybe we will have to deal with AI relationships and robophobia in the future.
My next article will look more into the effects AI has on friendships. It’s tricky territory: does AI meet the five essential aspects of friendship? Since AI lacks empathy, can it actually want to be your friend, regardless of what you think? I’ll dive into these questions soon.