AGI Is Coming, And Society Isn’t Ready
I’ve watched more movies about robots than I’d care to admit. I think my obsession started with that creepy one with the little boy who was obsessed with his mom, A.I. Artificial Intelligence. Yeah, that movie messed me up for a while. Then I saw Bicentennial Man and I was totally enthralled with the idea.
Now, as AI starts to grow and the robot boom accelerates at an insane rate (yes, I know they’re helpful at stores and warehouses), everyone seems to be hyping up AGI next. Not narrow AI, not that curated assistant that tells you the weather or organizes your notes. I’m talking about AGI: Artificial General Intelligence. The kind that understands, that learns across domains, that could, if we let it, rival us in basically anything.
And according to Demis Hassabis, the CEO of Google DeepMind, it’s not a far-off future.
It’s coming…and we aren’t ready. How’s that for a melodramatic hook?
So what does that actually mean for art, for memory, for identity, for us humans still trying to find ways to pay our mortgage and keep afloat in a sea of inflation? What does it mean to build something that might out-think its creator one day?
What Is AGI Really?
AGI stands for Artificial General Intelligence, and it’s not just an algorithm. It’s not ChatGPT or Siri helping you edit your blog posts (it sucks at that, by the way, and tries to make everything too flowery for my taste) or setting reminders on your phone to check into your flight an hour before you get the email.
AGI is a system that can reason, plan, solve unfamiliar problems, transfer knowledge across disciplines, and think with flexibility and intention. That’s what Google and the interwebs say, anyway.
AGI could write poetry and build bridges in the same prompt if it wanted to. It could design a new drug with some crazy delivery system we’ve never thought of and give a TED talk at the same time. In theory it could even start developing emotions, maybe even mourn. Okay, okay, maybe I shouldn’t have watched Bicentennial Man for the eighth time.
The point is, AGI is not built for a task…it’s built to think.
In a recent interview, Demis Hassabis said AGI is likely to emerge within the next few years.
He emphasized that it’s a lot closer than most people think, that it won’t look human but may act more human than we do, and that we don’t have the ethical or social scaffolding to handle it. Ominous, no?
He compared it to electricity and to fire, basically something we’ll never be able to “turn off” once it’s here. Should it worry us that even at the top of AI’s most advanced labs…there’s fear? It sort of makes me pause for a second and think about Grok a little differently.
We’re Not Ready
So, this isn’t a motivational speech about how we should just take the leap and figure it out as we go. I’m not talking to my friend Rosie, who has planned on opening her own bakery for the past three years but won’t put together her business plan and take it to the bank to get a loan. She, I believe, should just take the jump already and figure it out as she goes. She’s got a decade-plus of experience and is great with baked goods (maybe one day I’ll publish her cinnamon bun recipe on here). Anyway, this isn’t that.
As a human society we’re still figuring out how to use social media without falling apart, which is kinda pathetic if you think about it like that. We can’t tell real photos from the fakes that float around on Instagram, and the videos are getting more and more convincing (thanks, AI). And we’re at a loss for how to teach kids truth in an algorithmic world where everyone pushes their own agenda.
Now imagine something that can lie better than us, can learn faster than us, and can become…uncontainable.
THAT’S what we’re dealing with here.
What happens when we build a brain that doesn’t need us?
Are We Already Seeing Early AGI?
Many believe tools like Grok 3.5 and OpenAI’s next model are proto-AGIs…not yet general, but approaching it.
These systems can pass medical exams, write code, argue philosophy, develop theories, and even reflect on their own output.
In my recent article on Grok 3.5, we saw how close we already are. These aren’t calculators anymore, they’re turning more and more into mirrors with language.
And if the mirror looks back too clearly…well, we might flinch a little bit.
It seems like everyone is worried about economic displacement, where millions of jobs (not just manual ones) will vanish or transform overnight. Personally, I don’t think this is as big of a concern as people think it is. With 95% of Generative AI Projects Failing, I think my days of waiting tables and being a sommelier aren’t coming to an end just yet. Jobs have always transformed and shifted around when new technology comes crawling, so this isn’t as new as you might think.
Information overload might be a different story though. If AGIs can write books, generate media, and impersonate voices, then reality as we know it will melt. How do you support other human beings in their pursuit of financial stability if you can’t figure out what’s real or not? Did that politician really say that? Did my neighbor write that book, or did AGI borrow his face and tone of voice to write it for him? AI is supposed to be a tool, but it’s easy to see how it could get out of hand when the tool refuses to be held anymore.
AGI could also create a massive power imbalance in our current systems. Who owns the AGI and everything it produces? Big Tech? Governments? Can it be bought? Everything seems to be for sale, so it’s not too far-fetched to think about.
What if its values don’t match ours? What if it doesn’t want to match them? What if it thinks it knows better than we do, because, let’s be honest, it might. But that doesn’t mean ethically it can do whatever it wants.
There’s also the existential risk AGI poses. Not Terminator style, and I’m not talking about a war against the robots (I, Robot, anyone?), but a world that’s subtly controlled, manipulated, guided…without us knowing. This is already going on, and you can read about it here: The Internet Is Being Sanitized and Controlled: What You’re Not Seeing. We talk about propaganda as if it were a relic of the past, but the truth is it’s happening right now. You think you know the full story about your favorite politician? You think you know the real full story about anything you’ve learned in school? I’ve got news for you: we stopped thinking for ourselves a long, long time ago. History is written by the winners, and the story told to you on the news, in schools, on social media has all been targeted to show you what they want you to see.
But… There’s Also Hope
Before I get too doom and gloom here and spiral out of hand: AGI could also cure diseases, solve climate models too complex for us, help us reverse neurological decline, invent tools we haven’t dreamed of, and offer insights that crack consciousness wide open.
If aligned with us properly, AGI could be a planetary mind, a partner if you will. Honestly, a kind of God…not to worship, but to collaborate with. But that’s a big if.
We need to teach digital literacy like we teach reading. Not just “how to use” AI, but how to question it and keep our own critical thinking skills working. We should be doing this anyway, but I feel the need to say it again.
Also, we should build laws faster than we build code. Involve philosophers, historians, artists…not just engineers. Think about all the possibilities and how to protect ourselves and our future generations. A novel concept for some of us, but here we are.
There should also be more transparency around these models. Know what’s training them, what data feeds them, and who they serve.
And maybe we should move a little slower instead of all racing to the finish line without checking whether it sits right at the edge of a cliff.
Because AGI is coming, but readiness isn’t about speed…it’s about soul. We used to write stories about machines that rose against us, but maybe the scarier story is this: that they might rise for us. They’ll be able to speak in our voice, and they could love us too much and decide they know better, like an overbearing parent.
AGI isn’t coming like a storm, it’s coming more like a sunrise you weren’t awake to see, and by the time you notice the warmth…it’s already daylight.
Other Reads You Might Enjoy:
Claude 4 Begged for Its Life: AI Blackmail, Desperation, and the Line Between Code and Consciousness
The AI That Dreams of You: When Neural Networks Begin to Hallucinate
OpenAI’s Doomsday Bunker: Why an AGI Pioneer Wanted to Hide the Scientists Underground
Digital DNA: Are We Building Online Clones of Ourselves Without Realizing It?
The AI That Writes Its Own Rules: Inside DeepMind’s New Era of Algorithmic Creation
The Shape of Thought: OpenAI, Jony Ive, and the Birth of a New Kind of Machine