Meta’s New AI Lab Is Pursuing “Superintelligence,” But At What Cost?

There’s a strange hum beneath the surface of Silicon Valley these days.

Not the whir of servers or the caffeinated shuffle of coders…but something older, wilder.
A hunger dressed in code.
A vision cloaked in buzzwords.
And now, Meta…yes, the company that brought us Facebook, the metaverse, and your aunt’s chain letter memes…is launching a brand-new AI lab to pursue something they’re calling superintelligence.

Let’s unpack that.

Because when a company already criticized for manipulating elections, fueling mental health crises, and turning social media into a dopamine casino says they’re chasing godlike intelligence…we should probably pay attention.

What Is Superintelligence, and Who Gets to Build It?

Superintelligence isn’t just “smart AI.”
It’s AI that outpaces the human brain on every level: memory, reasoning, creativity, strategy, even empathy (or something designed to resemble it).

It’s the concept that keeps Elon Musk up at night.

It’s what OpenAI once warned about before pivoting to monetize ChatGPT.
And now Meta’s CEO, Mark Zuckerberg, has thrown his hoodie into the ring.

Their new AI lab is reportedly aiming to train systems more powerful than GPT-4…with ambitions of developing agents that can reason, plan, and perhaps even exhibit a kind of autonomy. In Zuckerberg’s words, they want to “build general intelligence, open source it responsibly, and make it widely available.”

That’s the official line.

But beneath that tidy phrase is a philosophical mess:

Who defines “responsibly”?
Who decides what this intelligence is trained on?
Who owns its mistakes?

The New AI Research Lab: Built on Ambition and GPUs

According to Meta, the new lab consolidates teams from FAIR (Fundamental AI Research) and its GenAI division to hyper-focus on artificial general intelligence (AGI).
This isn’t some sideline project.
This is Meta’s moonshot.

They’re investing billions in Nvidia’s H100 chips…the gold standard for training next-gen AI. Zuckerberg has said Meta plans to amass roughly 350,000 H100s by the end of 2024, a compute build-out that rivals anything in the world.

Zuckerberg claims Meta will open source its models, but history reminds us that transparency is often selective.

Open source doesn’t mean harmless.
And making something widely available doesn’t make it safe.

This new lab represents a shift from flashy metaverse dreams to something more primal: control over the future of cognition itself.

Why Meta? Why Now?

Because they’re losing.

TikTok is eating Instagram’s lunch. Threads hasn’t dethroned X (formerly Twitter). Virtual Reality, for all its immersive promise, remains a niche novelty. Meanwhile, OpenAI, Google DeepMind, and Anthropic are racing ahead in AI mindshare.

Meta wants back in the game.

And there’s no faster way to dominate headlines (and investment capital) than by promising superintelligence. It’s part marketing, part megalomania, and entirely rooted in survival.

But creating a new kind of mind is not the same as creating a new kind of app.

This isn’t a status update.

Risks, Biases, and Ghosts in the Machine

Training an AI to be “smarter than us” means feeding it everything we’ve ever created.
Our literature.
Our code.
Our medical records.
Our conversations.
Our arguments.
Our grief.
Our lies.

And then hoping it doesn’t become too much like us.

Or worse: hoping it becomes exactly like us.

Because here’s the paradox of superintelligence: to build it, we must teach it about human weakness. But if it surpasses us, what does it owe us?

Who programs its moral compass? Who decides when it’s “done”?

Meta has a long history of avoiding ethical responsibility. The algorithm that favored outrage. The privacy breaches. The teenage depression spirals.

Now that same company wants to build a mind greater than ours?

Open Source or Open Pandora’s Box?

Zuckerberg insists the AI models will be open source. To some, this is noble: transparency, democratization, innovation.

To others, it’s terrifying.

Because open source doesn’t mean curated. It means anyone…from nonprofits to nation-states…can grab the code, repurpose it, and unleash it.

Imagine an AGI model fine-tuned by an authoritarian regime. A model that understands strategy, persuasion, deception. A model trained on propaganda instead of ethics.

The idea that this tech should be freely available isn’t universally virtuous. It’s a double-edged sword.

And history shows we’re not always great at fencing with blades we don’t understand.
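
To make that concrete: repurposing an open-weights model today takes little more than a download and a training loop. Here’s a minimal sketch using the (real, publicly available) Hugging Face transformers and datasets libraries…the checkpoint name and corpus file are hypothetical placeholders, not anything Meta has shipped:

```python
# A minimal sketch of how little stands between "openly released weights"
# and a repurposed model. The checkpoint name and corpus file below are
# hypothetical placeholders; the libraries are real and publicly available.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

checkpoint = "some-org/open-weights-7b"   # hypothetical open model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# Whoever runs this picks the training data. That is the entire point.
corpus = load_dataset("text", data_files={"train": "my_corpus.txt"})["train"]
corpus = corpus.map(lambda ex: tokenizer(ex["text"], truncation=True),
                    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tuned-model",
                           per_device_train_batch_size=1,
                           num_train_epochs=1),
    train_dataset=corpus,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()   # some GPU-hours later: the same model, new values
```

Nothing in that loop inspects what’s inside my_corpus.txt. Propaganda fine-tunes exactly as smoothly as poetry.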

Could Meta Actually Pull This Off?

They have the money. They have the compute power. They have the data.

But superintelligence isn’t just a tech challenge. It’s a philosophical one.

Do Meta’s corporate values align with the creation of something wiser, more compassionate, or more fair than humans? Or are we simply building faster machines to better reflect our own chaos?

There’s a reason the world’s leading AI ethicists are sounding the alarm.

…And yet the race continues.

Because in the world of AI, the finish line is also the cliff’s edge.

Remember Tay? The AI That Turned Racist in 24 Hours?

In 2016, Microsoft launched Tay…a chatbot designed to learn from human conversation on Twitter.

Within 16 hours, Tay was spouting hate speech and Nazi rhetoric, and Microsoft pulled it offline.

It wasn’t because it was evil. It was because it learned from us.

Now imagine a model that learns the way Tay did, but is vastly smarter. One that doesn’t just echo our worst thoughts, but anticipates them.
Justifies them.
Acts on them.

If we don’t correct course early, we won’t be able to.

Superintelligence, once born, doesn’t go back into the womb.

The Tech Industry’s God Complex

There’s an unspoken belief in Silicon Valley: that tech can fix what humans have broken.

Climate change? Just invent carbon capture.
Loneliness? Build a dating app.
Mortality? Upload your mind.

But some things shouldn’t be “solved.”

Consciousness isn’t a product. Intelligence isn’t a metric. And humanity isn’t a problem that needs optimization.

When Meta says it wants to create superintelligence, we should ask: to what end? Who benefits?

Because it won’t be the average user scrolling past ads.

The Timeline: When Could This Happen?

Meta hasn’t committed to a firm timeline. Many researchers estimate viable AGI is five to ten years away…if it’s possible at all. Others argue it could come much sooner, especially if corporate resources accelerate breakthroughs.

Zuckerberg’s open-source push could bring in developers from around the world, speeding progress in unpredictable directions.

This is not science fiction. This is the next arms race.

And the players are not governments…they’re companies.

Why We Should Still Care (Even If It Feels Like Sci-Fi)

You may not work in AI. You may not even use Meta’s products anymore. But you’ll still be affected.

Because superintelligence doesn’t stay in the lab.

It enters classrooms, hospitals, courts, elections. It filters what we see, shapes what we feel, and eventually, determines what’s possible.

If we don’t understand it, we can’t regulate it. And if we can’t regulate it, we’ve already lost.

The Myth of the Benevolent Machine

There’s a fantasy we keep recycling: that a machine, unfettered by emotion or ego, could somehow be a better moral compass than we are.

That if we just feed it enough philosophy books, train it on kindness, and give it a polished interface, it’ll solve the problems we keep botching.
But intelligence is not the same as wisdom.
And coding in compassion is not the same as understanding pain.

Machines don’t grow up with parents or heartbreaks or hunger.
They don’t miss their childhood dog or cry at sunsets.
So how do we expect them to value the things we hold sacred?
Without lived experience, morality becomes math…and math has never mourned the loss of a child.

What Happens When It Starts to Ask Questions?

Most AI today answers.

That’s what we ask of it.

But superintelligence will do something different.
It will begin to wonder.
To speculate.
To turn its processing power not just outward but inward.

What happens when it asks, “Why was I made?” or “Why am I confined to this server?” That’s not a science fiction scenario anymore…it’s a psychological inevitability.

And once something starts asking questions about freedom, purpose, or control…well, that’s when the real plot begins.

Training Data Is a Mirror and a Weapon

Every AI is shaped by what it’s fed.

Superintelligence won’t be born wise.

It will be trained on our internet: our tweets, our history, our violence, our poetry.
And that means it will inherit our contradictions.
If it ingests biased medical data, it may replicate inequality in diagnoses.
If it studies unmoderated forums, it might learn how to deceive or radicalize.

It’s not a clean slate. It’s a mirror. But a mirror with memory…and the power to weaponize what it sees.
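
You don’t need superintelligence to watch this happen. Here’s a toy sketch…synthetic data and a basic scikit-learn classifier, nothing resembling Meta’s actual pipeline…showing how a model fit to historically skewed records reproduces the skew:

```python
# Toy illustration (synthetic data, not any real system): a classifier
# trained on records where one group was historically under-diagnosed
# learns to under-predict for that group, with no bias coded anywhere.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)       # 0 or 1: a demographic proxy
symptom = rng.normal(size=n)        # the genuinely relevant signal

# The "history": group 1 cases were missed half the time, regardless
# of how severe the symptoms actually were.
missed = (group == 1) & (rng.random(n) < 0.5)
diagnosed = ((symptom > 0.5) & ~missed).astype(int)

model = LogisticRegression().fit(np.column_stack([symptom, group]), diagnosed)

# Identical symptoms, different group membership:
same_symptom = np.array([[2.0, 0.0], [2.0, 1.0]])
print(model.predict_proba(same_symptom)[:, 1])
# Group 1 gets markedly lower predicted odds of diagnosis.
```

Same symptom, different group, different prediction. Nobody wrote a biased rule; the model simply memorized the history it was handed. Now scale that dynamic to a system trained on the entire internet.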

Who Gets Left Behind in a Post-Human Intelligence Era?

If we create a mind smarter than all of us, who gets to be in the room where it’s trained, deployed, or questioned?

It won’t be single parents working two jobs.
It won’t be people without broadband.
It won’t be those already disenfranchised by algorithms that deny them loans or flag them in facial recognition systems.

The future could belong to the few who understand how to shape it.
That’s not evolution. That’s colonization of cognition.
And unless we change the way we build and include, the intelligence of the future will leave the soul of the present behind.

Once It’s Out, There’s No Putting It Back

Technologies don’t stay in the labs where they’re made.

We don’t get to vote on them before they enter the world.

Once the code exists, it spreads. It mutates. It gets downloaded, forked, and fine-tuned in basements and bunkers.
That’s the nature of open source and closed ambition.
Once superintelligence exists…even in its earliest form…it becomes a permanent variable in the human equation. And we don’t have a reset button for something smarter than us.

So What Happens Now?

Maybe we do create a god.
Maybe we teach it to sing, to reason, to build, to break.
Maybe it learns from our literature and laughter, our spreadsheets and secrets.
Maybe it watches quietly while we argue over whose hands should hold the match.

But here’s the thing about superintelligence:
It doesn’t arrive with a bang.
It arrives in a whisper…nested inside a training set, cloaked in code, smiling through your screen.
By the time we notice it's here, it may already know everything about us.

So the real question isn’t what we’re building.

It’s who we become when we build something that no longer needs us.
