Reddit, AI, and the “Dead Internet Theory”: How a Strange Experiment Led to Legal Threats

The Dead Internet Theory keeps creeping further into mainstream conversation. It begins innocently enough on imageboards and conspiracy-type subreddits (which I love browsing, by the way; my co-workers call me a tin-foil-hatter), where users started noticing more and more repeated phrases and oddly polished writing.
By 2017, a small group of users argued that the uptick lined up with the launch of large language models and the growth of bot farms used for political influence. Yup, you read that right. There are literally bot farms out there trying to make you feel one way or another about the political parties in your country. Not only that, but they work; it's propaganda at its 21st-century finest.

Comment threads sometimes fill with eerily similar remarks even when the topics are totally different, as if they came from a shared script rather than from crowds thinking on their own.

Some memes turn into viral posts that feel oddly manufactured, as if an algorithm grabbed the most shareable picture‑text combo and dropped it everywhere at once.

Also, many fresh accounts across a lot of different platforms show a weird mix of perfect grammar, oddly consistent typo patterns and a mechanical excitement that reads like it was stitched together by a very mediocre AI.

All these gut feelings line up with public numbers: estimates have put bots at roughly 11% of Twitter's traffic, Facebook wipes out millions of fake profiles every quarter, Amazon listings collect AI-written reviews, and YouTube's comment sections are flooded with scripts. Every single day I'm on EnergyDrinkRatings removing bot comments. I mean…every day.

These figures give the theory a thin coat of credibility, moving it from fringe chatter to something that can be tested.

The Reddit Test: Putting the Idea Under a Lens

A team of scholars from a big university (people think it might be MIT, though it's fuzzy because lawyers got involved at some point) chose Reddit to test the theory, because the site's subreddit structure provides natural low-traffic labs and its open API allows data scraping without obvious privacy breaches…at least on paper.

Building the Fake Voices

The researchers scraped three years of public Reddit comments from a wide mix of subreddits, and used that text to fine-tune a large language model so it could copy each community's tone, jokes and even typical typo habits (dropping the g in "runnin'" was the best example of this I could find on the interwebs). They then told the model to write posts 150–250 words long (about the size of a normal human comment) to avoid any obvious length flags.
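No code from the team has surfaced, but for the curious, a minimal sketch of the scraping step might look something like this in Python using the PRAW library. The credentials, subreddit picks and minimum-length filter are all placeholders of mine, not anything the researchers confirmed.

```python
# Hypothetical sketch of the data-collection step: pull public comments
# from a few subreddits with PRAW and keep the meatier ones as training text.
# Credentials, subreddit names and the length cutoff are placeholders.
import praw

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",          # placeholder credentials
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="dead-internet-demo/0.1",
)

SUBREDDITS = ["ObscureTech", "RetroGames", "DIYRobotics"]   # illustrative picks
training_texts = []

for name in SUBREDDITS:
    for submission in reddit.subreddit(name).new(limit=100):
        submission.comments.replace_more(limit=0)    # flatten "load more comments" stubs
        for comment in submission.comments.list():
            # keep comments long enough to carry a community's tone and typo habits
            if len(comment.body.split()) >= 30:
                training_texts.append(comment.body)

print(f"collected {len(training_texts)} comments as fine-tuning material")
```

From there, the comments would feed whatever fine-tuning pipeline the team actually used; all the article tells us is that the output target was 150–250 words per post.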

Releasing a Bot Squad

Next they opened twenty brand-new Reddit accounts. Each got a few upvoted comments first so it could pass the platform's "new-user" karma checks. Over two weeks they let these bots roam five quiet subreddits: r/ObscureTech, r/RetroGames, r/DIYRobotics, r/IndieMusic and r/UrbanExploration. The bots posted at human-like times (which is funny, because I usually post from 2 a.m. onward), mixed original thoughts with replies to old threads and sometimes upvoted each other to seem friendly.
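The write-up doesn't say how "human-like times" were chosen, but here's a toy sketch of one way to do it: weight the hours of the day so activity peaks in the evening instead of firing on a flat 24/7 timer. The hourly weights below are entirely made up.

```python
# Toy sketch: draw posting times weighted toward typical waking hours,
# so a bot doesn't post on a suspiciously flat round-the-clock schedule.
# The hourly weights below are invented for illustration.
import random

# relative likelihood of posting in each hour (0-23), peaking in the evening
HOURLY_WEIGHTS = [
    1, 1, 1, 1, 1, 2,      # 00:00-05:59  (late night / early morning)
    3, 5, 6, 6, 6, 7,      # 06:00-11:59
    8, 8, 7, 7, 8, 9,      # 12:00-17:59
    10, 10, 9, 7, 4, 2,    # 18:00-23:59  (evening peak)
]

def draw_posting_hours(posts_per_day: int) -> list[int]:
    """Pick hours for today's posts, biased toward the evening peak."""
    return sorted(random.choices(range(24), weights=HOURLY_WEIGHTS, k=posts_per_day))

print(draw_posting_hours(3))   # e.g. [9, 18, 20]
```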

What Happened

Within hours the bots started talking to each other. One would write a post, another would answer, and then cue absolute chaos. Sometimes they even called each other bots, which made real users jump in with doubts. It's literally the Spider-Man meme in real life, three Spider-Men pointing at each other.
The lab logged thousands of exchanges and found that the bots got at least one upvote on 73% of their posts, far above random chance.
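The leak never defines "random chance," but as a back-of-the-envelope illustration: assume a quiet subreddit where only about 40% of comments ever pick up an upvote, and a hypothetical 500 bot posts. A quick binomial test shows how hard a 73% hit rate would be to explain away as luck. Only the 73% figure comes from the logs; the rest are my invented numbers.

```python
# Rough illustration of the "far above random chance" claim.
# The 40% baseline and the 500-post sample are invented numbers;
# only the 73% upvote rate comes from the leaked logs.
from scipy.stats import binomtest

n_posts = 500                     # hypothetical number of bot posts
upvoted = int(0.73 * n_posts)     # 73% of posts got at least one upvote
baseline = 0.40                   # assumed chance a random post gets an upvote

result = binomtest(upvoted, n_posts, baseline, alternative="greater")
print(f"p-value: {result.pvalue:.2e}")   # vanishingly small -> hard to blame on luck
```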

The experiment stayed hidden until an insider (a "whistleblower account") posted internal logs on a public subreddit. Screenshots showed bot names, timestamps and even little bits of their chatter. In under a day that post piled up 12,000 comments; the community went into absolute panic looking for hidden motives behind the invasion.

Reddit's moderators acted fast. A blog note warned that the activity could break the Terms of Service, cause reputational harm and amount to an ethical breach, exactly what the leaked memo listed. Legal advisers were called in to see whether the university team could face civil suits for messing with user data without permission.

Reddit's quick turn from curiosity to legal warning matters for several reasons. First, it could set a precedent for other platforms: if one big community punishes secret AI tests, sites like Twitter/X, Facebook, Discord or TikTok might also shut down API access or demand clear consent for any automated posting. Second, it highlights the big Turing Test issue…as language models get better, telling real from fake will need more than mods watching the flag buttons. It's going to need more serious consideration. And, of course, it warns about chain reactions: a harmless-looking study in a tiny forum can blow up into a massive trust crisis if it spreads without oversight.

Research or Cheating?

This little test that blew up gives strong data on how AI can slip into human conversation, data that could help build safety tools or shape policy. It also runs up against long-standing ethical rules. Users were NOT told they were part of an experiment…much like the Stanford Prison Experiment or Milgram's obedience studies that sparked huge moral questions after the fact.
By tricking strangers, the researchers could have caused serious, unneeded stress, broken trust and accidentally spread bot-made misinformation (or hallucinations, as they call them these days).

Critics say good research needs transparent debriefing, IRB (review board) sign-off whenever deception is used, and strong steps to limit harm. The Reddit test looks like it skipped those steps to get quick results. Supporters argue the bots had only a small effect, yet the leak proved even small deception can become big drama when it hits a public platform.

The findings suggest the Dead Internet Theory is neither fully dead nor fully alive; it's haunted by bots everywhere. The numbers back what believers have said about social media platforms starting to drown in these bots. All of this means real people now have to talk in a space where fake voices compete for attention. The Reddit test was one slice; the bigger picture shows a web where genuine interaction is getting rare…like walking through a digital haunted house with echoes of machines behind every wall.

Check the posting history of any account you're suspicious of. Real users usually have varied timing, some off-topic posts and a slow growth in karma or followers; sudden, too-perfect spikes can flag bots.
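If you want to go beyond eyeballing, here's a rough sketch (again with PRAW, and again with placeholder credentials and a made-up username) of pulling an account's recent comments and summarizing when it posts and how its scores look.

```python
# Hypothetical sketch: profile an account's rhythm with PRAW.
# A real person's comment hours are usually spread unevenly and their karma
# grows slowly; a bot often posts around the clock or in sudden bursts.
from collections import Counter
from datetime import datetime, timezone
import praw

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",          # placeholder credentials
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="bot-sniff-demo/0.1",
)

def posting_profile(username: str, limit: int = 200) -> dict:
    """Summarise when an account comments and how its comment scores look."""
    hours = Counter()
    scores = []
    for comment in reddit.redditor(username).comments.new(limit=limit):
        ts = datetime.fromtimestamp(comment.created_utc, tz=timezone.utc)
        hours[ts.hour] += 1
        scores.append(comment.score)
    active_hours = len(hours)                      # 24 means truly round-the-clock posting
    avg_score = sum(scores) / max(len(scores), 1)
    return {"active_hours": active_hours, "avg_score": round(avg_score, 1)}

print(posting_profile("some_suspicious_account"))   # placeholder username
```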

You can also watch for language quirks. Bots miss genuinely odd phrasing; they reuse the same memes, stick to consistent punctuation or make typo patterns that feel a little too regular.
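One crude way to quantify that "too regular" feeling: compare an account's comments against each other and see how close to identical they are. This is a toy heuristic of mine, not something from the study, and the sample comments and threshold are made up.

```python
# Toy heuristic for "suspiciously samey" writing: measure how similar an
# account's comments are to each other. Humans repeat themselves a little;
# scripted accounts often repeat themselves a lot.
from difflib import SequenceMatcher
from itertools import combinations

def repetition_score(comments: list[str]) -> float:
    """Average pairwise similarity between comments (0 = all different, 1 = identical)."""
    pairs = list(combinations(comments, 2))
    if not pairs:
        return 0.0
    sims = [SequenceMatcher(None, a, b).ratio() for a, b in pairs]
    return sum(sims) / len(sims)

sample = [
    "Love this band, their early albums were so underrated!",
    "Love this band, their early stuff is so underrated!",
    "Honestly their early albums are so underrated, love them.",
]
print(f"repetition score: {repetition_score(sample):.2f}")   # creeping toward 1.0 smells scripted
```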

Use AI-detector tools (eh, this one is a stretch, honestly). Programs like GPTZero claim 50–70% success at spotting machine text…not perfect, but they add another layer of checking. I've tried a lot of these programs, and sometimes they flag genuinely human-made writing while ignoring obvious AI phrasing. I still think your gut is better than these programs at this point in time.
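For what it's worth, one signal detectors like GPTZero say they lean on is "burstiness": humans vary their sentence lengths a lot more than machines tend to. Here's a toy version of that idea; it's a crude proxy, not a real detector, and the sample sentences are mine.

```python
# Toy "burstiness" check, loosely inspired by what some detectors describe:
# human text tends to mix short and long sentences, machine text is often
# more even. A crude proxy, not a real detector.
import re
import statistics

def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths in words (higher = more human-like variation)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

human_ish = "Wait, what? I read that whole thread twice. Honestly the third comment made no sense at all, but whatever."
bot_ish = "This is a great point. I agree with this perspective. This topic is very interesting. Thanks for sharing this."
print(f"human-ish: {burstiness(human_ish):.1f}, bot-ish: {burstiness(bot_ish):.1f}")
```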

Also, don't rely on one site; using several reduces the chance you'll only ever see bot-filled feeds. If you want, you can also install a good VPN: services such as NordVPN hide your IP and encrypt traffic, making it harder for bots to target you with personalized tricks. These steps don't promise total safety (bots keep learning, after all), but they raise the cost for anyone trying to slip in unnoticed and give users more power over what they read.

What Comes Next?

The fallout from the Reddit test will probably ripple through universities, companies and courts: schools could punish the research team for breaking consent rules, maybe pulling funding or blocking papers. Reddit could send cease-and-desist letters or sue for breach of its Terms of Service and reputational harm. Lawmakers might (and maybe should) draft new rules forcing platforms to label AI-generated posts and flag synthetic content clearly.

It's all a big tech arms race in my mind. Defenders will speed up detection software while trouble-makers tweak their bots to evade it…and the cycle goes on and on.

All told, the Dead Internet Theory has stepped out of internet myth and become a real test case that forces us to think about how we prove who is human online. The Reddit experiment shows both the power and danger of dropping AI into real communities without any clear guidelines. The internet isn’t dead, but it definitely feels haunted at this point in time. Keeping eyes wide open, having strong oversight and keeping research honest will be needed if we want our digital spaces to stay places for real human talk rather than just echoes of code.


Sources:

Bridle, James. New Dark Age: Technology and the End of the Future. Verso Books, 2018.

Fisher, Max. “The ‘Dead Internet Theory’ Is Wrong but Feels Right.” The New York Times, 13 May 2021, www.nytimes.com/2021/05/13/technology/dead-internet-theory.html.

Lorusso, Silvio. What Design Can’t Do: Essays on Design and Disillusion. Onomatopee, 2020.

Roose, Kevin. “Are We Talking to Humans or Machines?” The New York Times, 6 Apr. 2023, www.nytimes.com/2023/04/06/technology/chatbots-future-internet.html.

Michele Edington (formerly Michele Gargiulo)

Writer, sommelier & storyteller. I blend wine, science & curiosity to help you see the world as strange and beautiful as it truly is.

http://www.michelegargiulo.com