All-powerful, ever-pervasive AI is running out of internet
There is no such thing as unlimited data


Artificial intelligence (AI) has relied on high-quality language data to train its models, but the supply is running low. That looming shortage is forcing companies to hunt for new data sources and to redesign their algorithms to use existing data more efficiently.
What is the scope of AI's data problem?
Artificial intelligence needs to be trained, and data and information are used to accomplish that. Trouble is, the data is running out. A paper by Epoch, an AI research organization, found that AI could exhaust all the current high-quality language data available on the internet as soon as 2026. This could pose a problem as AI continues to grow. "The issue stems from the fact that, as researchers build more powerful models with greater capabilities, they have to find ever more texts to train them on," said the MIT Technology Review. The quality of the data used in training AI matters as much as the quantity. "The [data shortage] issue stems partly from the fact that language AI researchers filter the data they use to train models into two categories: high-quality and low-quality," said the Review. "The line between the two categories can be fuzzy," but "text from [high-quality data] is viewed as better-written and is often produced by professional writers."
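The filtering the Review describes can be pictured with a minimal sketch. Real data pipelines typically use trained quality classifiers; the heuristics below (a minimum word count and a minimum ratio of alphabetic characters) are illustrative assumptions only, not any company's actual criteria.

```python
# Toy quality filter: real pipelines use trained classifiers, but the
# idea is the same — score each document and keep only the "high-quality"
# side of the line. Thresholds here are arbitrary, illustrative choices.

def is_high_quality(text, min_words=8, min_alpha_ratio=0.7):
    words = text.split()
    if len(words) < min_words:
        return False  # too short to be well-formed prose
    # Fraction of the text that is letters or whitespace (vs. symbols).
    alpha = sum(c.isalpha() or c.isspace() for c in text)
    return alpha / max(len(text), 1) >= min_alpha_ratio

docs = [
    "The committee published its findings after a two-year review of the data.",
    "BUY NOW!!! $$$ click here >>>",
    "u up?",
]

print([is_high_quality(d) for d in docs])
```

Even this crude filter illustrates why so much of the internet gets discarded: short, symbol-heavy text fails cheap heuristics long before a model ever sees it.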
AI models require vast amounts of data to function. For example, "the algorithm powering ChatGPT was originally trained on 570 gigabytes of text data, or about 300 billion words," said Singularity Hub. In addition, "low-quality data such as social media posts or blurry photographs are easy to source but aren't sufficient to train high-performing AI models," and could even be "biased or prejudiced or may include disinformation or illegal content which could be replicated by the model." Much of the data on the internet is therefore considered useless for AI modeling. Instead, "AI companies are hunting for untapped information sources and rethinking how they train these systems," said The Wall Street Journal. "Companies also are experimenting with using AI-generated, or synthetic, data as training material — an approach many researchers say could actually cause crippling malfunctions."
What are AI companies doing to combat the imminent data scarcity?
The ticking clock on high-quality data has forced AI developers to think more creatively. For instance, Google has considered using user data from Google Docs, Google Sheets and similar company products. Other companies are "searching for content outside the free online space, such as that held by large publishers and offline repositories," like those published before the internet existed, said Singularity Hub. Meta has considered purchasing Simon & Schuster publishing house to gain access to all its literary works. More broadly, many companies have looked to synthetic data, which is generated by AI itself. "As long as you can get over the synthetic data event horizon, where the model is smart enough to make good synthetic data, everything will be fine," OpenAI CEO Sam Altman said at a tech conference in 2023. However, using synthetic data can present other problems. "Feeding a model text that is itself generated by AI is considered the computer-science version of inbreeding," said the Journal. "Such a model tends to produce nonsense, which some researchers call 'model collapse.'"
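The "model collapse" risk the Journal describes can be demonstrated with a toy simulation. This is not a real AI pipeline: the "model" below simply memorizes word frequencies (truncating the rarest words) and generates new text by sampling from them. Re-training each generation on the previous generation's synthetic output steadily loses vocabulary, because a word that drops out can never come back.

```python
# Toy model-collapse simulation: each "generation" trains only on the
# previous generation's synthetic output. Vocabulary diversity can only
# shrink, since a word absent from one corpus cannot reappear later.
import random
from collections import Counter

random.seed(0)

def train(corpus, vocab_size):
    # "Training": keep only the vocab_size most common words.
    counts = Counter(corpus)
    return [w for w, _ in counts.most_common(vocab_size)], counts

def generate(vocab, counts, n):
    # "Generation": sample words from the truncated vocabulary,
    # weighted by their frequency in the training corpus.
    weights = [counts[w] for w in vocab]
    return random.choices(vocab, weights=weights, k=n)

# Generation 0: a "human" corpus of 100 distinct words, common to rare.
corpus = [f"w{i}" for i in range(100) for _ in range(100 // (i + 1))]
diversity = []
for gen in range(10):
    vocab, counts = train(corpus, vocab_size=90)
    corpus = generate(vocab, counts, 2000)
    diversity.append(len(set(corpus)))  # distinct words surviving

print("distinct words, first vs. last generation:", diversity[0], "->", diversity[-1])
```

The mechanism is one-directional: truncation and sampling both discard rare words, and nothing reintroduces them, which is the computer-science "inbreeding" the Journal describes.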
The other option is to rework AI algorithms so they use the existing high-quality data more efficiently. One strategy being explored is called curriculum learning, in which "data is fed to language models in a specific order in hopes that the AI will form smarter connections between concepts," said the Journal. If successful, the method could cut the data required to train an AI model by half. Companies may also diversify the data sets used in AI models to include some lower-quality sources, or instead opt to create smaller models that require less data altogether. "We've seen how smaller models that are trained on higher-quality data can outperform larger models trained on lower-quality data," Percy Liang, a computer science professor at Stanford University, told the MIT Technology Review.
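The curriculum-learning idea above can be sketched in a few lines. Production systems use learned difficulty scores; sentence length, used here as the difficulty proxy, is a common but purely illustrative assumption.

```python
# Minimal curriculum-learning sketch: instead of shuffling training
# examples randomly, present them in order of increasing difficulty.
# Word count stands in for difficulty here (an illustrative heuristic).

def curriculum_order(sentences):
    # Easier (shorter) sentences first, harder (longer) ones later.
    return sorted(sentences, key=lambda s: len(s.split()))

corpus = [
    "Transformers map token sequences to contextual embeddings.",
    "The cat sat.",
    "Gradient descent minimizes the loss function over many steps.",
    "Dogs bark.",
]

for sentence in curriculum_order(corpus):
    print(sentence)
```

The hoped-for payoff is the one the Journal reports: by building from simple patterns to complex ones, the model may need far fewer total examples to reach the same capability.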
Devika Rao has worked as a staff writer at The Week since 2022, covering science, the environment, climate and business. She previously worked as a policy associate for a nonprofit organization advocating for environmental action from a business perspective.