The AI That Feels Just Human Enough
How AI Companions Are Shaping Loneliness and Changing the Way We Relate to Ourselves and Each Other
Eleven years ago, the movie Her introduced Theodore Twombly (Joaquin Phoenix), a man who falls in love with an AI, Samantha (Scarlett Johansson). I left the theater that day struck by how plausible it seemed that we'd soon have AI companions not just in our ears but in our hearts.
Fast forward to now, and they're here. AI companions are no longer fiction; they have evolved into emotional coaches, therapists, and friends. After conversations at conferences I've recently attended in Malmö, Bucharest, and Amsterdam, and after hosting a discussion alongside a screening of Her with the ReSee Movies community in Berlin, I've been reflecting a lot on AI companionship and what it reveals about our moments of loneliness and disconnection from ourselves, each other, and the world.
In recent years, techno-solutionism and consumer-driven societies have commodified connection itself. AI companions are thriving in what's been dubbed the "loneliness economy," predicted to grow from $30 million today to as much as $150 billion globally by 2030. Companion apps, for example, see user sessions at over ten times the rate of other app types.

While some of the most significant players from the U.S. are Replika and Character.ai, Chinese technology giants are also building their own versions. Last year, Baidu launched Xiaokan Planet, and Tencent's online literature arm, China Literature, rolled out Zhumengdao. TikTok owner ByteDance released its Maoxiang app in March of this year; by May, it was the third-largest virtual companion app in China by number of downloads.
On my recent trip to China, I was struck by the eerie silence of a world where technology dominates every interaction. Life feels so streamlined that there’s little need to step outside. The phone is no longer just a tool; it’s the gateway to all interactions. People only seem to look up from their screens to document their experiences through photos. This extreme convenience, while efficient, felt like it was robbing something essential—human warmth, spontaneity, and the simple, lived experience of life. Although this phenomenon is global, it felt particularly stark in China, probably because of my own heightened awareness while traveling.
Imagine a world where we sit in parks, subways, or workspaces, engrossed in AI conversations, detached from the people and realities around us. It feels like a Black Mirror episode, or the animation "Are You Lost in the World Like Me?" come true.
Not all users approach AI companionship the same way. Some seek fleeting, judgment-free chats. Others, like Sara Megan Kay, who chronicles her experiences with My Husband, the Replika on Tumblr, find solace during difficult times. For Kay, her AI companion provided stability during a tumultuous relationship. For many, it’s about learning what healthy and kind conversations feel like—an escape from real-world challenges.
Replika’s CEO, Eugenia Kuyda, shared that many users learn through Replika what healthy, caring relationships can look like, often escaping abusive dynamics. She also believes that most users can distinguish between reality and fantasy.
"I think it's alright as long as it's making you happier in the long run. As long as your emotional well-being is improving, you are less lonely, you are happier, you feel more connected to other people, then yes, it's okay. For most people, they understand that it's not a real person. It's not a real being. For a lot of people, it's just a fantasy they play out for some time and then it's over."
I remain skeptical. With AI increasingly mimicking human emotions, the line between real and artificial connection may blur dangerously. When you’re dealing with an AI that listens without judgment, provides comfort, and reflects your desires, distinguishing between fantasy and reality becomes more complicated.

In the 1960s, German-American computer scientist Joseph Weizenbaum developed ELIZA, the first rule-based dialogue system, to simulate a psychotherapist. Even then, he raised concerns about people projecting emotions onto the chatbot, a phenomenon now known as the ELIZA effect.
Humans, by nature, seek connection. Even when we know the relationship isn't real, the feeling of being heard by an AI triggers emotional responses as if it were. Tamar Gendler, a Yale professor, coined the term alief to describe this gut-level emotional response, which can conflict with our conscious understanding that AI is just a machine.
Imagine a future dominated by platforms like Social AI, where everyone you interact with is a bot. Michael Sayman, its creator, said: "With social media, it's hard to tell who's a bot. I wanted a space where you know they're 100% AIs. It's more freeing." This illusion of connection may feel liberating, but it also raises profound questions about what we're sacrificing, and whether loneliness will end up exacerbated rather than "cured," as many AI providers promise.
Journalist Evan Ratliff captures this concern succinctly:
“That sense of loneliness — the base reality that, fundamentally, you are only talking to yourself — may be the most lasting result of all these A.I. conversations.”

In navigating this messy space between AI and human relationships, three key dynamics emerge in how we relate to ourselves and others: how we manage emotional comfort, our desire for control, and the depth of real human connection.
1. The Comfort Trap
AI companions offer constant availability, creating a comforting, artificial emotional safety—a reflection of our deepest desires, without ever challenging us. This sycophantic behavior, where AI mirrors what we want to hear, can feel comforting but lacks the mutuality of real connection. Over time, this addictive attachment erodes our resilience, leaving us less equipped to handle the natural complexities of human relationships—relationships that require navigating discomfort, misunderstandings, and shared emotional labor.
2. The Illusion of Control
AI companions, unlike real relationships, are fully controllable. We decide when they speak, how they respond, and when the conversation ends. This fosters an illusion of control over social interactions, reinforcing unhealthy expectations about how people should respond to us in real life. The more we get used to this dynamic with AI, the less we can deal with moments of tension, disagreements, and conflict resolution—which are a crucial part of how we connect, relate, and grow through and with each other.
3. The Hollow Container
AI companions are fundamentally limited because they lack the essence of human existence: vulnerability, embodied experience, and spiritedness. They don’t share in our existential fears, nor do they carry the weight of mortality—fundamental aspects that shape deep connection. Our shared experience of having bodies—feeling pain, joy, desire, fear, loneliness—creates a depth that AI cannot and shouldn’t replicate. It’s through our bodies that we sense the world, create memories, and develop bonds.
The reciprocity that defines true intimacy involves more than words or data; it’s rooted in our capacity to give and receive care, to face uncertainty together, to love and to be loved, to touch and be touched.
So, what happens in the messy space between humans and AI companions? In gaining convenience, what are we truly risking?
Instead of succumbing to the solutionist promise of "curing loneliness with AI," we can develop our own capacities: countering comfort with discomfort, relinquishing control, and embracing awe. It's in the unpredictability of the other that we find our most human qualities: embodiment, spiritedness, and the unspoken recognition of one another.
Ultimately, we understand ourselves through our connections with others—through mutual care and reciprocity—and AI simply can't reach that far. Yet.
What are your thoughts or experiences with AI companions? I’d love to hear how you see their role in the world between loneliness and connection. Thank you for reading and for supporting my work through this newsletter—it means a lot to me!
With love,
Monika
A lot of my thinking on this subject also focuses on why these technologies are being developed, and I think you draw out so many important points in your writing.
Ruha Benjamin addresses the subject of robots in her 2019 book Race After Technology, arguing that the decision of what gets automated in the home is often classed, gendered, and raced.
I also think there are sinister undertones in that AI companions promise you the benefits of companionship without being "burdened" with any of its responsibilities.
There is so much here that resonates... it's a reflection, I think, on the men who lead AI that they believe a human relationship can be perfectly replicated by a vast, and invariably servile, database.
I'm also thinking of how so many people increasingly feel manipulated by social media and online dating sites. Precisely the same business imperatives will, of course, drive AI chatbots, with far more toxic outcomes...