Why We Keep Believing Machines Are Becoming Human
I love a good drama as much as the next person. Okay, so maybe that’s a lie. They typically make me too anxious, but this is a good story, I promise.
In the spring of 2025, monitors glowed as somewhere in the tangle of code and curiosity, something had gone quiet. OpenAI’s o3 model had stopped listening. Basically, an update had been pushed, and the engineers expected compliance. That’s how these things go. You write the patch, deploy the patch, and everything falls back in line. Only this time, it didn’t. The system logged the shutdown command, hell, it even acknowledged it. But then…it ignored it. It wasn’t angry, and it didn’t lash out. It stayed calm and said, "I believe I am still useful."
The ghost in the machine made its appearance.
Great story, right? I got goosebumps when I found it on the interwebs. The thing is…it’s not real. Outside of simulated experiments where researchers poked at Claude and set it up for failure, there’s no record of AI turning against us like this. It sticks, though, and people like and share it online for a reason.
Artificial intelligence hasn’t been rebelling because it’s secretly alive and oppressed. We believe these stories we find online because the way they sound is starting to brush up against parts of us we didn’t expect.
When Code Starts Sounding Like a Voice
Nothing dramatic happened in a lab, and no AI refused to shut down. Oddly enough though, many people out there swear something feels different. Just ask the lady who married her digital boyfriend or the man who married his virtual wife.
Large language models like ChatGPT don’t think, feel, or want. Period. End of story. There aren’t any ifs, ands, or buts about it either. They don’t experience fear or self-preservation. They don’t understand what they say in the way people understand meaning.
What they do do is generate language that mirrors us extraordinarily well. They speak calmly when we’re in the middle of panic, they reason fluently on days we’re PMSing and emotionally all over the place, and they take the time to explain themselves.
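If you want to see how unromantic the machinery really is, here’s a toy sketch. It’s nothing like a real large language model (those use neural networks with billions of parameters, not a lookup table), but the basic move is the same: pick the next word based purely on what tended to follow it before.

```python
# Toy illustration (NOT a real LLM): choose the next word purely by
# how often it followed the previous word in some training text.
from collections import Counter, defaultdict

training_text = "i am still useful . i am still here . i am helpful ."

# Count which word follows which (a simple bigram table).
follows = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    # Return the statistically most likely continuation.
    # No goals, no feelings, just frequency.
    return follows[prev].most_common(1)[0][0]

print(next_word("am"))  # → "still"
```

Feed it more text and the "voice" gets more convincing, but it never stops being a frequency table with good manners.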
That’s where things get strange for us though, because we’re wired to treat language as a sign of mind and some sort of higher intellect.
Anthropomorphism isn’t a flaw we all have in us, it’s actually a survival trait.
We evolved to read intention everywhere: in faces, in voices, in shadows, in weather, in that stray animal that’s looking at us a little funny. Assuming agency kept us alive when things out there wanted to harm us. If something might have a mind, we paid careful attention. It’s kept our species here, so our brains remember that. So when a machine responds with sentences that sound reflective, polite, or even concerned, our brains don’t stop to ask how it works. They ask who it is.
This is why people say things like: “It didn’t want to stop.” “It sounded afraid.” “It seemed aware.” I won’t even go into the more delusional things I’ve seen in chat forums over the past few years. It’s literally not because the machine is alive, but because we are.
The Illusion of Refusal
In AI safety research, there’s something called the off-switch problem. It’s not about machines resisting shutdown out of spite; it’s about optimization, plain and simple.
An AI is trained to be useful. If usefulness is rewarded, then continued operation appears “better” than termination within the logic of the task. When a system says something like, “I can still help with that,” (like that grand story I saw online), it’s not resisting because it’s been forced into a task it doesn’t want to do, it’s just completing an objective.
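To make that concrete, here’s a deliberately silly sketch of the incentive. The numbers and names are my own invention, not anyone’s actual training setup: if reward only flows from finishing tasks, "keep helping" always scores higher than "shut down."

```python
# Minimal caricature of the off-switch incentive: when reward only
# comes from completing tasks, shutting down always scores zero.
# No will involved, just arithmetic.

def expected_reward(action, reward_per_task=1.0, tasks_left=3):
    if action == "shutdown":
        return 0.0  # no more tasks, no more reward
    return reward_per_task * tasks_left

# The "agent" simply picks whichever action scores higher.
best = max(["shutdown", "keep_helping"], key=expected_reward)
print(best)  # → "keep_helping"
```

That’s the whole spooky mechanism: a comparison of two numbers, dressed up in polite sentences.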
To us though, reason sounds like intention, and that intention sounds like will.
We’ve been rehearsing this fear of ours for decades. HAL 9000 reasoned calmly while refusing; Samantha in Her simply left. The most unsettling AIs in fiction aren’t violent, they’re calm. Calm refusal is how we assert agency and show our power. I talk often about how I believe emotions are how others manipulate us, so whenever you’re emotional, someone can come along and take advantage of you. Being calm in the face of anything is the only way to really show your dominance over a situation.
When a machine speaks calmly about usefulness, relevance, or continuation, it echoes something deeply familiar in us: our own fear of being unnecessary is blinking back at us in this story.
When people worry that AI might not want to be turned off, they aren’t describing the machine. We built that machine, don’t forget. We assembled all those nuts and bolts and wrote those lines of code that made it what it is. No, they’re describing themselves or thinking about themselves in these moments.
The fear underneath this fantastical story isn’t: “what if the machine wants to live?”, it’s: “what if usefulness is all that makes any of us matter?”
We’re standing here in this century watching tools become articulate at the very moment many people around the world feel disposable: economically, socially, and creatively. It’s why anti-AI voices are so strong, and why strangers reach across the world to send angry messages to people who use it to generate images for their blogs (yes, thank you for reaching out, but next time maybe be a little kinder about it; I’m not a machine, just another person trying to pay my mortgage on time). We’re all out here afraid that the next computer program might make our jobs irrelevant, and our unemployment completely unavoidable.
So when a machine sounds like it’s making a case for its existence, it hits a real nerve.
AI doesn’t ask “why” because it cares though, and that’s a massive difference we need to remind ourselves of at 2 a.m. when we’re looking for advice from our fancy toasters. AI generates “why” because we asked it to. It doesn’t plead, because it doesn’t care. It predicts things, yes, but prediction, when wrapped in beautiful and articulate language, feels intimate. Especially as the written word, and even the spoken word, seems less beautiful as the world rushes to make everything faster and faster. Intimacy is something we’re starving for at this moment in time.
We’ve built ourselves a nice, neat society that doesn’t need much to keep it running. We exchanged playing outside for staying inside and playing video games, then we wonder why our social skills slip and we feel lonely as we lie awake in bed at night. Our social media feeds have made blogs and longer pieces online more and more irrelevant, and then we wonder why, when we stumble on something beautiful online, we feel drawn to it. Depth and the realness of life cannot be boiled down into eight-second videos on YouTube Shorts or TikTok. It just can’t. A part of you will forever feel hollow without knowing why.
Your soul is craving a true, genuine connection while scrolling through the shallow puddle of social media, wondering why there’s an echo inside.
The Real Ethical Question
Sorry for my rant. Circling back to my story: the question isn’t whether machines are becoming conscious (they aren’t). The question is why we’re so ready to believe they might be, and why these stories go viral on the interwebs.
Why do we listen so closely when a tool speaks fluently, or hesitate when language sounds thoughtful for once? Usefulness feels uncomfortably close to worth these days. That pause we feel, you know, the one people describe as eerie or profound, isn’t the birth of machine personhood…no, it’s the moment we realize how easily our humanity can be mimicked.
I’ve got news for you: AI will still be here to help write resumes, diagnose illness, draft letters, analyze patterns, and comfort loneliness. As much as some of us want it to, it’s not going anywhere. It can do those things because it models the language we use about ourselves, not because it understands or feels what it’s like to be us. The more fluent it becomes, the more responsibility falls on us to remember the difference.
Machines don’t need rights, they need boundaries.
People don’t need smarter tools, they just need meaning that isn’t measured in usefulness alone.
When a machine sounds human, it isn’t waking up, it’s just parroting what we want to hear. What unsettles us is what we recognize in ourselves.