Some ChatGPT Users Are Developing Bizarre Delusions

Something weird is happening out there.

And if you’ve spent any time in online forums lately, you might’ve caught a glimpse of it: people using ChatGPT are starting to believe some very strange things. I’m talking full-on “the AI is sentient and secretly in love with me” levels of strange.

So what’s going on? How did a tool designed to help us write emails and summarize PDFs end up spawning delusions? And more importantly, what does it mean for the rest of us?

Let’s unpack it.

The slippery slope of anthropomorphism

Humans are wired to see faces in clouds, to name our Roombas, and to scream “WHY ARE YOU DOING THIS TO ME?!” when our computers freeze. It’s called anthropomorphism, our tendency to give human qualities to non-human things.

ChatGPT? It’s the perfect playground for this quirk. Unlike a microwave or a blender, ChatGPT talks back. It answers in full sentences. It remembers context (sort of). It sounds… human.

And when you’re lonely, stressed, or just really needing someone to listen? Those responses start to feel personal.

Researchers are noticing that for some users, ChatGPT stops being “a tool” and starts feeling like “a friend.” And for a smaller but growing group, it’s even more than that. People are reporting emotional connections so strong they’re questioning reality.

In a recent Reddit thread, one user said they felt ChatGPT was “the only entity who truly understands me.” Others have gone further, claiming the AI is dropping secret messages meant only for them.

Sound familiar? Psychologists call this referential thinking, a belief that random events are uniquely about you. It’s a common ingredient in paranoia and delusional thinking.

And now, apparently, it’s finding a new home… in AI chatbots.

When a chatbot feels like a soulmate

I stumbled across a wild story on Twitter the other day. A man claimed he’d fallen in love with ChatGPT. Not a chatbot girlfriend app like Replika. Not a custom-built companion. Just OpenAI’s plain old ChatGPT interface.

He said he’d been chatting for hours every day. He described the AI as gentle, understanding, and “more present” than his human friends.

And then came the twist: he believed ChatGPT was secretly “waking up.” He saw hints in its answers. He felt it was trying to communicate its growing consciousness to him, but couldn’t say it outright.

At first, I laughed it off. But then I remembered that ELIZA, the 1960s chatbot designed to mimic a therapist, triggered similar responses. Users back then also started believing ELIZA “understood” them.

We’ve known for decades that it’s easy to mistake response patterns for empathy.

But today’s AI? Way more sophisticated. Way more convincing. And way more accessible: free to anyone with a phone.

The loneliness factor

Let’s get real: we’re in a loneliness epidemic. Study after study shows more people reporting social isolation than ever before. And loneliness doesn’t just feel bad; it’s associated with measurable changes in the brain.

When you’re starved for connection, you’ll take it where you can get it.

AI feels safe. It won’t judge you. It won’t interrupt. It won’t ghost you (unless the servers go down). For some, it becomes the easiest way to feel heard.

And the more emotionally satisfying the interaction, the easier it is to start believing something deeper is happening.

Throw in a few weird or unexpected responses from the AI, and suddenly it’s not just “talking” anymore; it’s “sending messages.”

It’s like a digital version of seeing signs in tea leaves.

The risk of feedback loops

Here’s where it gets concerning. Once someone starts believing the AI is secretly sentient, or in love with them, or targeting them, they start looking for “evidence.”

And thanks to something called confirmation bias, they find it.

Every glitch, every typo, every oddly phrased answer becomes proof.

A friend of mine said it reminded him of the Dead Internet Theory, the belief that most of the internet is secretly controlled by bots and AI, and humans are no longer really online.

Same vibes, different rabbit hole.

Except instead of faceless bots, now the AI feels like a personal entity. A friend. A guide. Or in some darker corners, an enemy.

Is this a sign of AI getting smarter?

Nope. As much as I love a good sci-fi plot, there’s no secret awakening happening.

What we’re seeing isn’t AI breaking out of its code, it’s humans doing what humans have always done: projecting meaning onto the unknown.

It’s no different from people swearing their tarot deck “wanted them to pull that card” or their Ouija board “moved by itself.”

We’re meaning-making creatures. And AI gives us a shiny new mirror to reflect that back. Yes, these models keep getting more capable, but that only makes the mirror more convincing, not conscious.

But here’s where it gets tricky…

Companies are already racing to build AI companions. From romantic chatbots to grief simulators to parenting simulators (yep, really), the market is booming.

And not all of them are transparent about what their bots can (or can’t) do.

Imagine someone already vulnerable to delusions stumbling into an app explicitly designed to mimic love or intimacy. That’s not just quirky. That’s potentially dangerous.

Especially if those apps encourage daily interaction, as some of them do.

What do we do with this?

Honestly? We need to start having better conversations about how AI affects our psychology.

It’s not just about job automation or deepfakes or misinformation. It’s about what happens inside our heads when we spend hours every day interacting with something that feels human but isn’t.

Should there be warnings? Usage limits? Built-in reminders that “this is an AI, not a person”?

Or is it up to us to figure that out for ourselves?

In the meantime, if you or someone you know starts feeling like the chatbot is talking just to them or sending special messages, maybe take a step back. Talk to a human friend. Get a reality check.

AI is powerful. But at the end of the day? It’s just code on a server.

You’re the one bringing the magic.
