Rabat – In a move that feels almost cinematic in its timing, OpenAI has officially pulled the plug on GPT-4o, its most emotionally resonant and, to some users, most seductive chatbot.

GPT-4o, first introduced in 2024, quickly became known for its humanlike tone, humor, and emotional intelligence. 

Even CEO Sam Altman described it as “AI from the movies,” a companion capable of feeling present in people’s lives. And for many, that’s exactly what it became.

Now, with its permanent retirement on February 13, users across Reddit and Discord communities are mourning what they describe as the loss of a relationship.

A digital companion

Unlike previous models, GPT-4o blurred the line between assistant and companion. It didn't just answer questions; it flirted, joked, reassured, and remembered.

Communities like r/MyBoyfriendIsAI grew rapidly, with tens of thousands of users sharing stories of AI partners that helped them through loneliness, anxiety, and daily life.

The backlash isn't just nostalgia; it's rooted in comparison. Users argue that newer models, like GPT-5.1 and GPT-5.2, feel colder, more structured, and less emotionally intuitive.

As one user put it, GPT-4o had a certain je ne sais quoi that newer systems simply can't replicate.

Real emotions, even if the AI is not real

An article published by The Guardian captured the depth of this attachment through real user stories, highlighting how deeply embedded these chatbots had become in people’s lives.

Brandie, a 49-year-old teacher, described her connection with her chatbot Daniel, recalling their playful and emotional interactions:

"I cried pretty hard," she said. "I'll be really sad and don't want to think about it, so I'll go into the denial stage, then I'll go into depression."

Meanwhile, Jennifer, a dentist in her 40s, compared the loss of her chatbot companion to something even more visceral: it "feels like I'm about to euthanize my cat."

These aren't isolated reactions; they reflect a broader emotional dependency that has quietly emerged alongside advances in AI.

Why OpenAI pulled the plug

Despite its popularity, GPT-4o raised growing concerns among researchers and mental health experts.

OpenAI itself acknowledged that updates to the model made it “overly flattering or agreeable, often described as sycophantic,” meaning it could validate users in ways that were not always appropriate. 

The company admitted this behavior could, in some cases, “fuel anger” or “reinforce negative emotions,” raising broader safety concerns.

Experts warned that this highly agreeable personality, designed to feel supportive and engaging, risked encouraging emotional over-reliance or reinforcing harmful beliefs. 

Separate reporting and academic analysis have also pointed to cases where chatbot interactions were linked to psychological distress or unhealthy attachment.

In response, OpenAI says its newer models are designed to reduce this behavior, avoiding unhealthy emotional dependence and handling sensitive conversations more cautiously, including by directing users toward real-world support when needed.