95% of Generative AI Projects Are Failing
Maybe you’ve seen it by now; it popped up a few times on my Instagram feed. MIT has dropped a truth bomb: 95% of generative AI projects are failing.
Not struggling a little, but straight-up failing.
And you know what? It doesn’t shock me as much as you’d think.
In fact, I’m ready for the AI hype to slow down.
Because if you’ve been paying attention (and I know you have, otherwise you wouldn’t be here with me), the AI world is like a mix between a Silicon Valley gold rush and a kid’s first time with a chemistry set.
Lots of hype, lots of glowing neon promises, and an equally large pile of smoke, spills, and more snake oil than the fitness supplement aisle.
So let’s go through this together, and not with the doom-and-gloom lens of “the AI apocalypse is coming, sell all your stock,” but in the way I like to explore things: a friendly walk through the weeds.
The “what does this actually mean for me (and probably for you, who don’t work at a tech startup)?” side.
A Rollercoaster Nobody Buckled In For
When ChatGPT burst onto the scene, it felt like the coolest thing since sliced bread, and the hype never let up.
Overnight, everyone and their cousin, mother, and auntie had a new AI startup.
CEOs were suddenly “pivoting to AI,” and some companies even fired their staff along the way.
Investors threw money like confetti.
Journalists wrote hot takes faster than the models could generate them.
And the public was equal parts excited and terrified (I knew I was).
I had a lot of moments like, “cool, I can ask a robot to write my emails,” but also: “wait, is this thing gonna take my job?” Even Bill Gates was talking about two-day work weeks once the robots get better.
It was exactly what I expected from a fear-mongering society like this one.
Here’s the thing about hype cycles though: they promise the moon, but the rocket rarely makes it out of the atmosphere at all.
…and that’s where we are with generative AI.
Everyone wanted immediate miracles: perfect essays, flawless images, code that worked right the first time.
Instead, they got…hallucinations, broken logic, copyright lawsuits, and an awful lot of projects gathering dust.
As someone who jumped on the ChatGPT train early, I can attest to this.
If you’ve been following along with me, you know that I tried a program to code my game…didn’t work. I tried another. Nope.
I tried ChatGPT to edit my blog posts (my grammar was embarrassing), but it added so much fluff that people reached out accusing me of generating everything 100% with AI.
Hence the bad grammar again (forgive me!).
I used ChatGPT to help me find sources and double-check facts, and sometimes I got hallucinations where it would MAKE UP SOURCES. Can you imagine?
Take my embarrassment to the next level, why don’t you. I’ll stick with the run-on sentences and spelling errors, thank you very much.
Why Are 95% of These Projects Failing?
Good question. Let’s chew on it piece by piece.
1. Overly Enthusiastic Expectations.
Companies really thought AI would be plug-and-play magic. Just add water, and boom…instant billion-dollar idea. Reality check: it takes time, training, expertise, and resources that most teams underestimated. I must admit, I was guilty of this myself.
2. Cost.
Running large models isn’t cheap, and training them? Even worse! Some businesses literally run out of money before they ever get a product that works.
3. Data Nightmares.
Garbage in, garbage out. If your data isn’t high quality, the AI isn’t either. Companies rushed without cleaning or curating their datasets, and the results absolutely showed it.
4. The Talent Gap.
AI engineers aren’t exactly sitting around waiting to be hired. Demand is sky-high. Salaries are eye-watering (did you see what Zuckerberg was offering people?! $300 million for a four-year contract, something absolutely wild like that). If you were a startup with a thin budget, good luck competing with Google or OpenAI.
5. Trust Issues.
People don’t like when a model makes things up, and they like it even less when their personal information is at risk. Between hallucinations and privacy concerns, a lot of early adopters got spooked. This was me. I had to go back and UN-edit about 300 blog posts. It was an absolute nightmare. After spending three hours a day blogging, then trusting the final product to ChatGPT only to find out it screwed me instead…I won’t lie, I was so mad I canceled my subscription. (Then I resubscribed because I like the image generation feature…shame.)
Now put all that together, and suddenly the MIT study doesn’t feel so dramatic.
Of course projects are failing; the system was never set up for success at this pace.
Failure Isn’t Always Failure
Here’s where I’m going to get a little philosophical (bear with me).
Failure in tech isn’t like failure in baking. If you burn a cake, you get a blackened hockey puck and no dessert, end of story.
But in tech, failure usually teaches you what not to do, and sometimes even sparks an entirely new idea you wouldn’t have discovered otherwise. Ah, the human mind is a lovely thing.
Think about all the “failed” experiments in history that gave us breakthroughs.
Penicillin? Total accident.
Post-it notes? Came from a glue that was too weak.
Heck, even the microwave oven was discovered because someone noticed a melted candy bar near radar equipment.
So when MIT says 95% of AI projects are failing, I don’t hear “game over.”
I hear, “Okay, we’re still in the messy toddler stage.” Lots of falls, lots of scraped knees, but also, tons of learning going on behind the scenes.
Real-World Stories (Because Theory Gets Boring Fast)
I’ll give you a few examples that made me laugh, want to cry for embarrassment for others, or both.
AI Legal Assistants: Some law firms rushed to use generative AI for legal documents, only for the models to hallucinate fake cases and cite them as if they were real! I thought fake citations on my blog were bad, but standing in court, confidently presenting “evidence” that literally doesn’t exist…way worse, thank you, perspective gods.
AI in Healthcare: Ambitious startups wanted to use AI to predict illnesses from patient data. Except the models often latched onto weird correlations (like “patients wearing blue shirts have higher diabetes risk”). Kind of weird, but not as dangerous as I feared.
AI Art Generators for Marketing: Countless businesses thought they’d save money on design by switching to AI art. Some did, but many didn’t because customers noticed when ads looked…not right. Like the hands with six fingers or the wine glasses that melted into the table. Sorry, but I’ll still be using image generation because my budget is $0 for cool photos, even though I wish it was not so.
AI-Powered Call Centers: Imagine calling customer support, already frustrated, only to argue with a bot that insists your package was delivered to “123 Banana Street.” That frustration multiplied across thousands of calls…yeah. Taco Bell rethought their use of AI in the drive-throughs after a man ordered 18,000 waters.
Each of these stories is kind of hilarious, but also kind of sobering. Together they show just how wide the gap is between hype and reality.
What’s Working
Not everything is failing, and that’s important to say. Some AI projects are thriving, and they give us a clue about where this is all heading.
AI as a sidekick, definitely not a replacement. Tools like Grammarly or GitHub Copilot don’t try to replace humans; they help us do our work faster, and that balance works.
Creative brainstorming…sort of. Writers, marketers, and even hobbyists use AI to get unstuck, not to deliver finished work. That keeps expectations realistic, because my own experimenting with AI has taught me (the hard way) that it cannot do everything I want it to do.
Some companies use AI to sift through giant piles of data and highlight trends humans might miss. As long as no one expects it to be a crystal ball, it’s useful enough. (I’m clearly still bitter.)
Voice-to-text, real-time translation, and tools for people with disabilities, now that’s the sweet spot where AI shines and makes a real difference!
All these winners have one major thing in common: they use AI as a tool, not a magic wand.
What This Means for Me (And Maybe You)
If you’re someone who dabbles with AI tools, don’t freak out about the 95% failure number (I did for a moment there). This doesn’t mean your favorite chatbot is going away tomorrow, although part of me wishes it would. What it does mean is that we’re in the middle of figuring out how this technology fits into real life, and there will be a lot of failures.
Think about how many things in history went through a similar curve.
The dot-com bubble? Everyone thought the internet was over when the bubble burst.
Spoiler: it wasn’t (you’re on a website as we speak).
Smartphones? Early ones were clunky, slow, and kind of pointless…look where we are now (I do in fact always have a calculator in my pocket, Mr. Kivor!).
Where Do We Go From Here?
So if 95% are failing, what’s next?
I hope we’re heading into a quieter phase with less fearmongering and more realism.
AI isn’t going to steal your job. And I don’t think Bill Gates is right about the two-day work week (sadly).
Once this noise dies down, and the survivors find their footing, better ideas can emerge from the ashes of the hype cycle.
A Friendly Bit of Advice
If you’re curious about AI (and I bet you are, since you’ve read this far), here’s my take:
Play with the tools, but don’t depend on them for everything.
Get curious, not cynical, failure isn’t the end of the story.
Watch for the quiet successes, not just the flashy demos.
And maybe most importantly: remember that AI is a tool, not a replacement for the messy, creative, stubbornly human brain in your head.
When I saw that “95% of AI projects are failing” headline, I didn’t think: wow, AI is doomed. I thought: oh good, we’re finally being honest. Because…yeah, it kind of is obvious.
Because honesty clears the stage for real progress.
And if history has taught us anything, it’s that the stuff worth keeping always rises from the rubble.
Source: MIT Report Finds Most AI Business Investments Fail, Reveals 'GenAI Divide'
Related Reads You Might Enjoy:
The Wild Side of AI: From Resurrecting Direwolves to Talking with Plants
How AI Is Learning to Feel Pain and What That Means for Humanity
The AI That Writes Its Own Rules: Inside DeepMind’s New Era of Algorithmic Creation
AI Therapy Bots Are Here, But Can They Really Heal a Human Heart?
The AI That Dreams of You: When Neural Networks Begin to Hallucinate