AI is cannibalizing itself. And creating more AI.
Artificial intelligence is consuming data faster than humans can create it


Artificial intelligence is trained on data that is largely taken from the internet. However, given the volume of data required to school AI, many models end up consuming data generated by other AI systems, which can in turn degrade the model as a whole. With AI both producing and consuming data, the internet has the potential to become overrun with bots, with far less content being produced by humans.
Is AI cannibalization bad?
AI is eating itself. Artificial intelligence is growing at a rapid rate, and the human-created data needed to train its models is running out. "As they trawl the web for new data to train their next models on — an increasingly challenging task — [AI bots are] likely to ingest some of their own AI-generated content, creating an unintentional feedback loop in which what was once the output from one AI becomes the input for another," said The New York Times. "When generative AI is trained on its own content, its output can also drift away from reality." This is known as model collapse.
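To make that feedback loop concrete, here is a minimal Python sketch (an illustration only, not any company's actual training pipeline): a toy "model" that simply fits a normal distribution to its training data, then generates the next generation's training data from its own output. Over repeated generations, estimation error compounds and the fitted distribution drifts away from the original human data, a bare-bones version of model collapse.

```python
# Toy illustration of model collapse (hypothetical, not a real training setup):
# each "generation" fits a normal distribution to the previous generation's
# output, then samples its own training data from that fit.
import random
import statistics

random.seed(0)

# Generation 0: "human" data drawn from a known distribution.
human_data = [random.gauss(0.0, 1.0) for _ in range(1000)]

data = human_data
for generation in range(1, 11):
    # "Train" the model: estimate the mean and standard deviation of the data.
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    # "Generate" the next training set entirely from the model's own output.
    data = [random.gauss(mu, sigma) for _ in range(1000)]
    print(f"generation {generation}: mean={mu:.3f}, stdev={sigma:.3f}")
```

Run long enough, the estimated parameters wander away from the true mean of 0 and standard deviation of 1 even though no single step looks dramatic, which is the "drift away from reality" the Times describes.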
Still, AI companies have their hands tied. "To develop ever more advanced AI products, Big Tech might have no choice but to feed its programs AI-generated content, or just might not be able to sift human fodder from the synthetic," said The Atlantic. As it stands, synthetic data is necessary to keep up with the growing technology. "Despite stunning advances, chatbots and other generative tools such as the image-making Midjourney and Stable Diffusion remain sometimes shockingly dysfunctional — their outputs filled with biases, falsehoods and absurdities." These inaccuracies then carry through to the next iteration of the AI model.
That is not to say that all AI-generated data is bad. "There are certain contexts where synthetic data can help AIs learn," said the Times. "For example, when output from a larger AI model is used to train a smaller one, or when the correct answer can be verified, like the solution to a math problem or the best strategies in games like chess or Go." Also, experts are working to create synthetic data sets that are less likely to collapse a model. "Filtering is a whole research area right now," Alex Dimakis, a computer scientist at the University of Texas at Austin and a co-director of the National AI Institute for Foundations of Machine Learning, said to The Atlantic. "And we see it has a huge impact on the quality of the models."
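As a rough illustration of the "verifiable answer" idea the Times describes, here is a short, hypothetical Python sketch (not the actual filtering methods Dimakis's group studies) that keeps a synthetic training example only when its answer can be checked programmatically, as with simple arithmetic problems:

```python
# Hypothetical filter for synthetic training data: keep an example only when
# its answer can be verified against a ground-truth computation.
def is_verified(problem: str, proposed_answer: str) -> bool:
    """Return True if the proposed answer matches the computed result."""
    try:
        # Each "problem" here is a plain arithmetic expression, e.g. "12 * (3 + 4)".
        truth = eval(problem, {"__builtins__": {}})  # restricted eval, demo only
        return abs(truth - float(proposed_answer)) < 1e-9
    except Exception:
        return False

synthetic_examples = [
    {"problem": "12 * (3 + 4)", "answer": "84"},  # correct: kept
    {"problem": "7 ** 2 - 1", "answer": "50"},    # wrong: filtered out
]

filtered = [ex for ex in synthetic_examples
            if is_verified(ex["problem"], ex["answer"])]
print(f"kept {len(filtered)} of {len(synthetic_examples)} synthetic examples")
```

Real filtering pipelines are far more involved, but the principle is the same: synthetic examples make it into the next model's training set only when some external check confirms they are correct.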
Is AI taking over the internet?
The difficulty of training newer artificial intelligence models may point to a larger problem. "AI content is taking over the Internet," and text generated by "large language models is filling hundreds of websites, including CNET and Gizmodo," said Scientific American. AI content is also being created much faster and in larger quantities than human-made content. "I feel like we're kind of at this inflection point where a lot of the existing tools that we use to train these models are quickly becoming saturated with synthetic text," Veniamin Veselovskyy, a graduate student at the Swiss Federal Institute of Technology in Lausanne, said to Scientific American. Images, social media posts and articles created by AI have already flooded the internet.
The monumental amount of AI content on the internet, including tweets by bots, absurd pictures and fake reviews, has given rise to a more sinister belief. The dead internet theory is the "belief that the vast majority of internet traffic, posts and users have been replaced by bots and AI-generated content, and that people no longer shape the direction of the internet," said Forbes. While once just a theory floating around the forum 4chan during the early 2010s, the belief has gained momentum recently.
Some believe that AI content on the internet goes deeper than just getting social media engagement or training models. "Does the dead internet theory stop at harmless engagement farming?" Jake Renzella, a lecturer and Director of Studies (Computer Science) at UNSW Sydney, and Vlada Rozova, a research fellow in applied machine learning at The University of Melbourne, said in The Conversation. "Or perhaps beneath the surface lies a sophisticated, well-funded attempt to support autocratic regimes, attack opponents and spread propaganda?"
Luckily, experts say that the dead internet theory has not come to fruition yet. "The vast majority of posts that go viral — unhinged opinions, witticisms, astute observations, reframing of the familiar in a new context — are not AI-generated," said Forbes.
Devika Rao has worked as a staff writer at The Week since 2022, covering science, the environment, climate and business. She previously worked as a policy associate for a nonprofit organization advocating for environmental action from a business perspective.