When the Machine Minds Muddle: Why AI Is Getting Brain Rot
If you’re here reading this, I know you know the feeling: you scroll until your head buzzes, your brain feels soft and tingly, the words blur, and the ideas you used to have slip away. That’s “internet brain rot” for humans.
I, myself, am annoyingly familiar with it. A few years ago I realized that the constant stimulation was damaging my ideas and creativity, which is why I stopped watching TV three years ago.
It’s also why I have 5-minute time limits on all of my social media apps; it’s too easy to get sucked in, and suddenly there goes my entire day.
Now imagine a machine, an actual artificial mind, trained on that same blitz of short posts, emotive headlines, clickbait, and viral garbage, feeding on that load of dung. It turns out that machine’s reasoning begins to fade away, its focus crumbles to dust, and its programmed morality erodes.
Yes, we’re discovering that AI models can get brain rot too (it’s not just us), and the consequences might be deeper than we thought.
The Study That Set It Off
In a landmark piece of 2025 research that blew my mind all kinds of sideways, scholars from Texas A&M University, the University of Texas at Austin, and Purdue University systematically tested how large language models (LLMs) respond when trained not on books, technical documents, or curated datasets, but on a stream of the viral social posts, short texts, clickbait, and attention-bait you find online.
They framed the effect as the “LLM Brain Rot Hypothesis”: the idea that the cognitive health of an AI model depends heavily on the quality of what it consumes. For the experiment, they constructed two datasets. One they called “junk web text” (short, popular, high-engagement, low-substance), call it our digital soda diet (my dad calls it the garbage can of social media). The other they called “control data”: more substantial, longer-form writing like blog posts, and much less viral.
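To make that split a little more concrete, here is a minimal sketch of what an engagement-and-length heuristic for sorting posts into those two buckets could look like. The thresholds, field names, and functions are my own illustrative assumptions, not the researchers’ actual pipeline.

```python
# Hypothetical sketch: sorting posts into "junk web text" vs "control data"
# with simple length and engagement heuristics. The thresholds and field
# names are illustrative assumptions, not the study's actual code.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int

def is_junk(post: Post, max_words: int = 30, min_engagement: int = 500) -> bool:
    """Short and highly viral -> junk; longer, quieter text -> control."""
    word_count = len(post.text.split())
    engagement = post.likes + post.shares
    return word_count <= max_words and engagement >= min_engagement

posts = [
    Post("you WON'T believe what this AI just did", likes=12_000, shares=4_300),
    Post("A long-form walkthrough of how transformer attention actually works, " * 5, likes=80, shares=5),
]

junk = [p for p in posts if is_junk(p)]
control = [p for p in posts if not is_junk(p)]
print(f"junk: {len(junk)}, control: {len(control)}")
```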
The results are kinda scary: models trained exclusively on the junk stream actually began to degrade. Their reasoning scores dropped, their ability to sustain long-context comprehension faded, and even their ethical alignment slipped (this one scares me the most, and it explains a lot about the world we live in today, where everyone stands on a moral high ground but isn’t actually doing anything to make the world better). And worse than all of that, even when the researchers tried to retrain the AI on clean data, the models never fully returned to baseline.
On the ARC-Challenge (a logic and reasoning benchmark, I had to look it up), a model’s accuracy fell from 74.9 → 57.2 when moving from a 0% to a 100% junk-data diet.
On the RULER benchmark (which tests long-context understanding), the drop was 84.4 → 52.3.
Models trained on 100% high-engagement, low-quality content also exhibited higher levels of “dark traits” (like psychopathy and even narcissism) in their output behavior. Hmmm, sound familiar? It’s pretty insane to think we can teach machines to be psychopaths by scrolling around Instagram, but it also kind of makes sense. Beyond the reasoning declines, the models showed more confidence in wrong answers and more coercive or manipulative responses, traits we mostly associate with narcissism.
The paper even uses the term “thought-skipping” to describe the phenomenon where models increasingly skip reasoning steps, jump to conclusions, and lose the actual chain of logic. Instead of chaining logic like “Step 1 → Step 2 → Step 3,” brain-rotted LLMs jump from Step 1 directly to Step 4. They skip the reasoning part. Remember in school when you’d be marked wrong in math class for not showing your steps?
The whole idea is the same: skipping steps might look like fast answers in the short term, but it hides the process behind how you arrived at that final number.
The more junk the model consumed, the worse it got. There was a clear gradient: 0% junk → baseline; 100% junk → big drop.
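For the curious, here is a toy sketch of what that kind of junk-ratio sweep might look like in code: mix junk and control documents at a given ratio, train on the mix, and record a benchmark score. The `train_on` and `evaluate_arc` functions are hypothetical stand-ins, not the paper’s actual setup.

```python
import random

# Toy sketch of a junk-ratio sweep: mix junk and control documents at a given
# ratio, train on the mix, and score the result on a reasoning benchmark.
# `train_on` and `evaluate_arc` are hypothetical stand-ins, not real APIs.

def build_mix(junk_docs, control_docs, junk_ratio, n_docs=10_000, seed=0):
    """Sample a training set where `junk_ratio` of the documents are junk."""
    rng = random.Random(seed)
    n_junk = int(n_docs * junk_ratio)
    mix = rng.choices(junk_docs, k=n_junk) + rng.choices(control_docs, k=n_docs - n_junk)
    rng.shuffle(mix)
    return mix

def run_sweep(junk_docs, control_docs, train_on, evaluate_arc):
    """Return benchmark scores across increasing junk ratios (0% -> 100%)."""
    results = {}
    for junk_ratio in (0.0, 0.2, 0.5, 0.8, 1.0):
        model = train_on(build_mix(junk_docs, control_docs, junk_ratio))
        results[junk_ratio] = evaluate_arc(model)  # e.g. ~74.9 at 0.0, ~57.2 at 1.0
    return results
```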
Machine Minds Are Vulnerable
A key advantage of AI, its voracious capacity to eat up huge amounts of data, becomes a massive liability when the data it’s consuming is (excuse my French) a bunch of crap.
More data isn’t automatically better if that data is shallow and wrong. LLMs rely on logical chains and context to function properly, and junk content breaks that chain down.
Just like you and I end up mentally mushy from endless doom-scrolling (TikTok is the worst for me), machines trained on endless clicks lose their ability to reason and think clearly.
Okay, so a lot of platforms out there might use LLMs for moderation, hate-speech detection, legal summaries, the list goes on and on. If these models have brain rot, they may skip nuanced reasoning, make extreme decisions, misclassify content, or even become biased themselves.
AI tutors, writing assistants (I haven’t found a good one yet, they all kinda suck eggs), and synthesized research all depend on models that understand context and can build real-world knowledge. Brain-rotted models could end up reinforcing misinformation, surfacing shallow content with no real basis, or misleading students who are trying to learn from them.
Don’t Panic Yet
Now that I’ve convinced you that the next generation of AI is going to become narcissistic, psychopathic, and racist, let me reel (real?) it back in for a moment.
The researchers say the only real fix is starting AI off with high-quality data and building systems that limit its exposure to all that junk. AI should be trained on more books, technical documents, and peer-reviewed content, with a lot fewer clickbait posts.
Just like you and I might monitor our brain health (my therapists do it for me, I guess), AI systems should monitor reasoning chain lengths, context depth, and ethical behavior. The paper calls for “routine cognitive health checks” near the end, and I think there’s no reason why we shouldn’t have those checks built in going forward.
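What could one of those health checks actually look like? Here is a minimal, purely illustrative sketch that counts explicit reasoning steps in a batch of model answers and flags a possible thought-skipping trend; the regex heuristic and the threshold are my assumptions, not anything proposed in the paper.

```python
import re
from statistics import mean

# Illustrative "cognitive health check": count explicit reasoning steps in a
# batch of model answers and flag a possible thought-skipping trend.
# The regex heuristic and the threshold are assumptions for this sketch only.
STEP_PATTERN = re.compile(r"\b(?:step\s*\d+|first|second|then|therefore)\b", re.IGNORECASE)

def count_reasoning_steps(answer: str) -> int:
    return len(STEP_PATTERN.findall(answer))

def health_check(answers: list[str], min_avg_steps: float = 2.0) -> dict:
    avg_steps = mean(count_reasoning_steps(a) for a in answers) if answers else 0.0
    return {
        "avg_reasoning_steps": round(avg_steps, 2),
        "possible_thought_skipping": avg_steps < min_avg_steps,
    }

answers = [
    "Step 1: find the rate. Step 2: multiply by time. Therefore the answer is 42.",
    "It's 42.",  # jumps straight to the conclusion
]
print(health_check(answers))  # flags the batch because answers average too few steps
```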
It might also help to refresh and retrain models properly on a regular schedule, sort of like how all my apps update themselves every once in a while. While retraining alone wouldn’t fully restore performance once it’s shot (like I mentioned earlier, the effects seem to stick around), occasional tuning on high-quality data might help slow the damage.
Why This Concerns Me
If you’ve made it this far, I’m guessing this concerned you as much as it did me. At first glance, someone you reshare this story with (like my husband) might say, “that’s just an academic experiment, they crippled the model on purpose for news coverage.” Okay, but hear me out: the implications stretch far beyond these labs.
Modern LLMs train on trillions of data points, a lot of which are scraped from the wild, wild web: forums, social media, user-generated content, junk news, and so on. If junk data can degrade a model in a controlled setting, what’s going on at web scale? There’s a ton of nonsense out there to influence these models in ways we might not notice until it’s too late.
Arguably worse than that, AI is now generating a lot of the same low-quality content it’s being trained on. Models produce viral posts, bots echo them, we the people share them, and then another model trains on the output. It’s a vicious loop of “data quality erosion,” and we might already be in the middle of it.
If models start losing reasoning, ethics, and logic, they become unpredictable in some of the worst ways possible. In high-stakes settings (like, I don’t know, legal, medical, or finance), that’s big trouble. Even more so when those creepy “dark traits” escalate.
So to me, the study doesn’t just suggest a tiny problem.
It says we might be building AIs on crappy foundations. The risk isn’t just “bad AI,” it’s a bad form of intelligence we’re using to teach our children. A model that reasons less, takes shortcuts, and leans heavily on hype just works to perpetuate a culture of shallow thinking, influencer bullshit, misunderstanding, and manipulation.
If we build machines that value clicks over clarity, we risk a cascade through our systems: poor outputs → poor human decisions → more junk in the world → poorer next-gen machines.
You Jump, I Jump Jack
2025 might go down as the year we realized AI isn’t just about bigger models, faster compute, and more data; it’s about what data we’re feeding it. The saying “you are what you eat” has never been more relatable.
If the industry ignores this brain-rot risk, we might see some AI systems gradually fall apart. They won’t crash spectacularly; they’ll just drift slowly.
Because at its best, the promise of an intelligent machine isn’t just speed; it’s supposed to be insight and depth.
As you scroll tomorrow (or right after you finish this post), remember that the same junk you skip over may be teaching something even smarter than you, and that something might one day become more relevant than you’d ever imagine.
I might just join the AI models and work on skipping some of the garbage over the next few days, maybe stay off social media entirely. If they can do it, maybe I should too.
Other Reads You Might Enjoy:
ChatGPT Just Surpassed Wikipedia in Monthly Visitors: What That Says About the Future of Knowledge
Claude 4 Begged for Its Life: AI Blackmail, Desperation, and the Line Between Code and Consciousness
The AI That Writes Its Own Rules: Inside DeepMind’s New Era of Algorithmic Creation
Digital DNA: Are We Building Online Clones of Ourselves Without Realizing It?
The Brain That Forgot How to Wander: Why Short Videos Might Be Our Newest Addiction
The Algorithm That Tastes: How AI Is Learning to Make Fine Wine
The AI That Dreams of You: When Neural Networks Begin to Hallucinate