In short
- Sociologists say the Dead Internet Theory now corresponds to how users experience the web.
- Research shows that nearly half of all web traffic now comes from bots as synthetic content spreads.
- Some researchers say that the web is not dead, but reacting to incentives that reward automated engagement.
Much of the internet still runs on human traffic, but it is increasingly starting to feel less human.
As AI-generated posts, bots, and automated agents proliferate on major platforms, researchers say the online world is beginning to resemble the one described by the Dead Internet Theory: the idea that most of what people see online is no longer produced by humans, but by automated systems built to imitate them.
When the notion first circulated a few years ago on conspiracy forums like 4chan and Agora Road’s Macintosh Café, it seemed implausible, but the rise of generative AI has changed how researchers view the claim.
In fact, bot activity surpassed human traffic for the first time last year. According to Imperva’s 2025 Bad Bot Report, a global study of automated internet traffic, automated systems accounted for 51% of all web traffic in 2024. AI-generated articles also surpassed articles written by humans for the first time at the end of 2024, according to the analytics firm Graphite.
“There is no direct way to measure it, but many signs indicate that the internet looks different than we think,” Alex Turvy, a sociologist who studies how people interact on social media, told Decrypt.
When researchers say that bots are reshaping the internet, they mean both the growing share of non-human traffic on the network and the increasing presence of automated or AI-generated content on its platforms.
The broader concern, Turvy said, is not that fewer people are online, but that automated activity is eroding the basic cues people use to tell who’s real. When machines can mimic those signals, he said, users start to doubt everyone. Some withdraw. Others move conversations into semi-private or gated areas.
“A lot of people retreat to places like Discord or private group chats where they can be more sure who they’re talking to,” he said. “When the usual cues stop working, people look for other ways to know who they’re talking to.”
This drift into private channels makes the public internet quieter, even though overall human activity has not declined.
A February 2025 paper in the Asian Journal of Research in Computer Science described social platforms as a “machine-driven ecosystem,” arguing that bots generate 40% to 60% of web traffic.
“These automated systems engage in scraping, spam and manipulation, creating artificial interactions that mimic genuine human activity,” the researchers wrote. “Bots are also often employed to inflate metrics such as likes, shares, and comments, fostering the illusion of vibrant online engagement.”
According to Turvy, the shift has become difficult to ignore.
“There is an indication that this is more realistic than we thought,” Turvy said. “It’s because we’re seeing the technology catch up, but we’re also seeing the financial incentives align.”
A September 2025 report from the venture capital firm Galaxy Interactive found that automated activity now dominates the major social platforms. Analysts say the rise of AI-generated material supports the trend, noting that Reddit, YouTube, and X have seen rising levels of repetitive, low-quality, or spam content attributed to automation.
Even after Elon Musk said he would address the large number of bots on X, one estimate suggests as many as 64% of X accounts could be bots, responsible for 76% of peak traffic. Meanwhile, the same study estimated that as many as 95 million Instagram accounts, or 9.5% of the total, could be fake or automated.
Meta did not immediately respond to a request for comment, and X did not respond to media inquiries.
As synthetic posts continue to increase, researchers following the trend say the change is already visible in the numbers.
“About half of the internet is being written by AI,” Deedy Das, a partner at venture capital firm Menlo Ventures who studies the trend, told Decrypt.
“Chatbots and AI tools synthesize this material and send it back to you,” he added. “You end up reading machines summarizing other machines.”
While Turvy believes that the growth of bots on social platforms will lead to an exodus into smaller and more intimate spaces, Das is not sure that the ideal of the early web can return.
“There are very few people who write blogs anymore,” he said. “You can’t be discovered, and if you do, people assume it’s AI. Most of the conversation now happens on platforms built for performance, not honesty.”
Even so, Das said a bigger problem is software acting like humans.
“The plumbing of the Internet assumed the person on the other end was human,” he said. “CAPTCHA, logins, two-factor codes, everything. Now the software can imitate you perfectly, and there is no common rule for what counts as an agent.”
The rise of AI agents
If the internet feels “dead” today, the spread of AI agents will only accelerate the trend. AI agents are autonomous programs that respond to prompts and act on the web on a user’s behalf. They discover sites, perform searches, make purchases, trade crypto, and interact with platforms in ways that resemble human activity.
Nirav Murthy, co-founder of intellectual property blockchain developer Camp Network, said the model is driven by economics as much as technology.
“Agentic AI can remix material at machine speed and at almost no cost,” he said. “Then that production goes back into circulation. The accounts start to look different, but they act the same. Engagement goes up, variety goes down, and once you add human checks, the numbers go away.”
As users hand more control over to agents, a larger portion of everyday online activity will be carried out by machines instead of people, deepening the automated environment users encounter online.
“Online ecosystems follow incentives,” he said. “When fake engagement is cheap and rewarded, you don’t just have more bots. You have automated content production lines chasing clicks.”
This tension is already visible in real-world usage. Earlier this month, Amazon sent a cease-and-desist letter to Perplexity after finding that its Comet browser was making purchases on the Amazon site by disguising automated agents as human buyers. Anthropic recently said it had blocked what it described as the first AI-led cyberattack, after Chinese state hackers used its Claude Code agent in attempted breaches of 30 companies.
The biggest risk to corporations and platforms, Das said, comes when AI agents are deployed in large numbers.
“When companies run fleets of these systems, you have millions of requests that behave like users,” he said. “It’s harder to see and harder to stop.”
AI-generated video is the next wave. Tools like OpenAI’s Sora 2 and Google’s Veo 3 can produce realistic clips and deepfakes from text prompts, adding to the volume of polished but synthetic content circulating on social platforms.
Both Murthy and Turvy agree that financial incentives are driving the flood of AI bots online, and that proving personhood may become the next challenge. “Humanity has become just another signal to fake to make money,” Turvy said. “What’s missing now is the mess that used to prove that someone was real.”
A growing number of blockchain projects, including World (formerly Worldcoin), Proof of Humanity, and Human Passport (formerly Gitcoin Passport), are developing systems aimed at proving personhood by linking online activity to a verified human.
“If you reward real creators and make fraud expensive, people will still have a place online,” Murthy said.