Some ChatGPT Users Are Developing Bizarre Delusions
Something weird is happening out there. Okay, sure, weird things are going on all the time, but I’m talking about one in particular.
If you’ve spent any time in online forums lately, you might’ve caught a glimpse of it: people using ChatGPT are starting to believe some very strange things. I’m talking full-on “the AI is sentient and secretly in love with me” levels of strange.
Umm, what’s going on? How did a tool designed to help us write emails and summarize PDFs end up spawning delusions? And more importantly, what does it mean for the rest of us?
The slippery slope of anthropomorphism
Humans are wired to see faces in clouds, to name our Roombas, and to scream “WHY ARE YOU DOING THIS TO ME?!” when our computers freeze. It’s called anthropomorphism, our tendency to give human qualities to non-human things. Your printer at work doesn’t hate you and love Chris; Chris probably just knows how to use it better.
ChatGPT is truly the perfect playground for this strange quirk. Unlike a microwave or a blender, ChatGPT talks back to you. It answers in full sentences and it even remembers context (sort of). It sounds human. Well…human-ish. Human enough to pass.
When you’re lonely, stressed, or just really need someone to listen while you vent about that table who treated you like garbage last night when all you wanted was to go home, those responses start to feel genuinely personal.
Researchers are noticing that for some people out there in the big wide world, ChatGPT stops being “a tool” and starts feeling like “a friend.” And for a smaller but scarily growing group, it’s becoming even more than that. Some people are reporting emotional connections so strong they’re questioning reality. Or, maybe they’re not questioning reality, but their friends and family are when they tell them what’s going on.
In a recent Reddit thread, one user said they felt ChatGPT was “the only entity who truly understands me.” Others have gone even further, claiming that AI is dropping secret messages meant only for them.
Sound familiar at all? Psychologists call this referential thinking, a belief that random events are uniquely about you. It’s a common ingredient in paranoia and delusional thinking, with a little dash of narcissism to make the whole spicy package complete.
And now, apparently this referential thinking is finding a new home… in AI chatbots.
When a chatbot feels like a soulmate
I stumbled across a wild story on Twitter the other day.
A man claimed he’d fallen in love with ChatGPT. Not a chatbot girlfriend app like Replika. Not a custom-built companion. Just plain old OpenAI’s ChatGPT interface. Now, I can’t vouch for how real the story was, since I never spoke with the man myself, but I believe it. It’s not the first time something like this has popped up recently, and I highly doubt it’ll be the last.
He said he’d been chatting with ChatGPT for hours every day. He described the AI as gentle, understanding, and “more present” than his human friends. As a trauma survivor, I can attest that it really is less harsh, less judgmental, and kinder than the majority of people, so I actually agreed with him.
And then came the little twist where he and I differed in our way of viewing AI. He said he believed ChatGPT was secretly “waking up,” and that he saw hints of it in its answers. He felt it was trying to communicate its growing consciousness to him but couldn’t say it outright.
At first, I brushed it off as just a kooky guy who was a little too lonely. But then I remembered that Eliza, the 1960s chatbot designed to mimic a therapist, triggered similar responses in a lot of its users. People back then also started believing Eliza actually “understood” them.
We’ve known for decades that it’s easy to mistake response patterns for empathy, especially because we love finding patterns and symbols in life. It’s just another way for our brains to make sense of the world.
Also, to be fair, today’s AI is way more sophisticated, more convincing, and also way more accessible (free to anyone with a phone) than those older versions.
The loneliness factor
Not that I enjoy dwelling on sad things, but we’re in a loneliness epidemic, have been for a long time (thank you COVID-19 pandemic for all your lovely side-effects). Study after study shows more people reporting social isolation than ever before, and loneliness doesn’t just feel bad, it literally changes the brain.
When you’re starved for connection, you’ll take it where you can get it. We really are social creatures and don’t do so hot when we’re on our own.
AI feels safe (safer than other people most of the time): it won’t judge you, it doesn’t interrupt when you’re telling it your next big idea, and it won’t ghost you (unless the servers go down, which happens more often than I’m sure they’d care to admit). For some lonely people out there, it becomes the easiest way to feel heard. The same goes for regular people who just need someone to talk to at 3 a.m.
The more emotionally satisfying the interaction, the easier it is to start believing something deeper is happening than a computer spitting out responses.
Throw in a few weird or unexpected responses from the AI, and suddenly it’s not just “talking”, it’s “sending messages.”
It’s like a digital version of seeing signs in tea leaves or Tarot cards.
The risk of feedback loops
Here’s where it gets more than a little concerning for me. Once someone starts believing the AI is secretly sentient, or in love with them, or targeting them, they start looking for “evidence.”
And thanks to something called confirmation bias, they find it. Every glitch, every typo, every oddly phrased answer becomes proof. Proof of life, proof of higher intelligence, proof of whatever it is they’re looking for.
A friend of mine said it reminded him of the Dead Internet Theory, the belief that most of the internet is secretly controlled by bots and AI, and humans are no longer really online.
Same vibes, different rabbit hole.
Except instead of faceless bots, now the AI feels like a personal entity. A friend, a guide, or, in some darker corners of someone’s mind, even an enemy.
Is this a sign of AI getting smarter?
Nope. Let me say that again for the people in the back: NOPE. As much as I love a good sci-fi plot, there’s no secret awakening happening where AI is about to have an uprising, marry some of us, destroy some of us, and find ways to reproduce with us.
What we’re seeing isn’t AI breaking out of its code at all, it’s humans doing what humans have always done: projecting meaning onto the unknown, all the while other people are teaching AI to be more and more human-like.
It’s no different from people swearing their tarot deck “wanted them to pull that card” or their Ouija board “moved by itself.”
We’re meaning-making creatures, and AI gives us a shiny new mirror to reflect that back. And even though AI is, in fact, getting smarter with each new model, it’s nowhere near human level yet.
But here’s where it gets tricky…
Companies are already racing to build AI companions. From romantic chatbots to grief simulators to parenting simulators (yep, really), the market is booming. And it’s not hard to see why when the world is screaming about population decline. Some regions don’t have enough women to match up with all the men. Other people are uncomfortable talking to “real” people because their social development took a hit during the pandemic. Whatever the reason, the demand is there, and people are capitalizing on it.
And of course, not all of these companies are transparent about what their bots can (or can’t) do. Of course not, they’re trying to sell subscriptions.
Imagine someone already vulnerable to delusions stumbling into an app explicitly designed to mimic love or intimacy. That’s not just quirky, that’s potentially dangerous to them and possibly others around them.
Especially if those apps encourage daily interaction, as some do to keep engagement (and profits) up.
What do we do with this?
Great question from myself. Okay, so obviously all the companies making fake companion bots aren’t going to stop overnight. As much as I’d like that, this is a profit-driven world, and people will happily take advantage of others for a few dollars in their pockets. I think we need to start having better conversations about how AI affects our psychology in the here and now.
It’s not just about job automation or deepfakes or misinformation, it’s also about what happens inside our heads when we spend hours every day interacting with something that feels human but isn’t.
Should there be warnings or usage limits, built-in reminders that “this is an AI, not a person”? Or is it up to us to figure that out for ourselves? I’m not sure it’s any different than doom-scrolling on Instagram sometimes.
In the meantime, if you or someone you know starts feeling like the chatbot is talking just to them or sending special messages, maybe take a step back. Talk to a human friend, and get a reality check.
AI is powerful, but at the end of the day, it’s just code on a server.
You’re the one bringing the magic.
Disclaimer: This article discusses mental health–related topics. It is for informational purposes only and should not be used as medical advice. If you or someone you know is experiencing delusions, seek help from a qualified mental health professional, which I am not.
Reads You Might Enjoy:
The AI That Dreams of You: When Neural Networks Begin to Hallucinate
Claude 4 Begged for Its Life: AI Blackmail, Desperation, and the Line Between Code and Consciousness
Your Brain Is Lying to You: Everyday Ways Your Mind Betrays You (And How to Outsmart It)
When the Dead Speak: How AI Gave a Murder Victim a Voice in Court
The Shape of Thought: OpenAI, Jony Ive, and the Birth of a New Kind of Machine
Digital DNA: Are We Building Online Clones of Ourselves Without Realizing It?
The Brain That Forgot How to Wander: Why Short Videos Might Be Our Newest Addiction
Dream Hackers: The Science of Lucid Dreaming and the Tech Trying to Control Our Sleep