The murky world of AI training
Despite public interest in artificial intelligence models themselves, few consider how those models are trained.
Reddit will reportedly allow an unnamed artificial intelligence company to train its models using the online message board's user-generated content.
California-based Reddit told prospective investors ahead of its initial public offering (IPO) that it had signed a contract with "an unnamed large AI company" worth about $60 million (£48 million) annually, according to Bloomberg. The agreement "could be a model for future contracts of a similar nature".
Apple has already "opened negotiations" with several major news and publishing organisations to develop its generative AI systems with their materials, according to The New York Times. The tech giant has "floated multiyear deals worth at least $50 million" (£40 million) to license the archives of news articles, anonymous sources told the paper.
But the rapidly accelerating world of training AI has been marked by controversy, from arguments over copyright to fears of ethics violations and the replication of human bias.
How does AI learn?
Tech companies train AI models, the most well known being ChatGPT, on "massive amounts of data and text scraped from the internet", said Business Insider – including copyrighted material.
ChatGPT creator OpenAI designed it to find patterns in text databases. It ultimately analysed 300 billion words and 570 gigabytes of data, said BBC Science Focus. Other AI models work on the same principle: DALL-E, which generates images from text prompts, was fed nearly 6 billion image-text pairs from the LAION-5B dataset.
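At its simplest, "finding patterns in text" means learning which words tend to follow which. The toy sketch below is purely illustrative (real models like ChatGPT use neural networks over billions of words, not word counts), but it shows the basic idea of prediction learned from data:

```python
from collections import Counter, defaultdict

# Illustrative toy only: "learn" which word tends to follow another
# by counting pairs in a tiny training corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

# Predict the most likely word to follow "the".
prediction = following["the"].most_common(1)[0][0]
print(prediction)  # "cat" follows "the" twice, more than any other word
```

Scale that counting up to hundreds of billions of words, swap the counts for a neural network, and you have the rough shape of how a large language model is trained.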
What are the issues with training AI models?
OpenAI and Google have both been accused of training AI models on creators' work without paying for licences or seeking permission. The New York Times is even suing OpenAI and Microsoft for copyright infringement for using its articles.
OpenAI further fanned the flames of the copyright debate when it released Sora, a video generator, last week. Sora is able to create incredibly lifelike videos from simple text prompts, but OpenAI "has barely shared anything about its training data", said Mashable. Speculation immediately began that Sora was trained on copyrighted material.
Despite their vast data reserves, these models still require a human touch. In a process called "reinforcement learning", a human operator evaluates the accuracy and appropriateness of a model's output. "So click by click, a largely unregulated army of humans is transforming the raw data into AI feedstock," said The Washington Post.
Not only is it costly and time-consuming to employ people to babysit AI models, but the process is subjective: individuals have different standards for what counts as accurate or appropriate output.
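The human-feedback step described above can be sketched in miniature. Everything in this example is hypothetical (the candidate answers, the rater function), but it captures the shape of the process: humans score a model's candidate outputs, and those scores become the signal that steers the model toward preferred answers:

```python
# Hypothetical sketch of reinforcement learning from human feedback:
# raters score candidate outputs, and the scores guide future training.
candidate_answers = [
    "Paris is the capital of France.",
    "France's capital is Berlin.",
]

def human_rating(answer: str) -> int:
    # Stand-in for a human rater clicking through outputs.
    # In practice this judgment is subjective and varies by rater,
    # which is exactly the problem the article describes.
    return 1 if "Paris" in answer else 0

scores = {answer: human_rating(answer) for answer in candidate_answers}
best = max(scores, key=scores.get)
print(best)  # the accurate answer receives the higher human score
```

The "click by click" labour The Washington Post describes is millions of judgments like `human_rating` above, made by real people rather than a function.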
Reinforcement learning has also led to the exploitation of workers, said the paper. In the Philippines, former employees have accused San Francisco start-up Scale AI of paying workers "extremely low rates" via an outsourced digital work platform called Remotasks, or withholding payments entirely. Human rights groups say it is "among a number of American AI companies that have not abided by basic labor standards for their workers abroad", said the Post.
AI companies have also inadvertently hired children and teenagers to perform these roles, reported Wired, because tasks are "often outsourced to gig workers, via online crowdsourcing platforms".
Ethics demands aside, the creators of AI models are concerned with a future supply issue: while the internet may contain massive amounts of data, it isn't unlimited.
The most advanced AI programs "have consumed most of the text and images available" and are running out of training data: their "most precious resource", said The Atlantic. This has "stymied the technology's growth, leading to iterative updates rather than massive paradigm shifts".
What's coming down the pipeline?
OpenAI, Google DeepMind, Microsoft and other big tech companies have recently "published research that uses an AI model to improve another AI model, or even itself", said The Atlantic. Tech executives have heralded this approach, known as synthetic data, as "the technology's future".
Training an AI model on data that a different AI model has produced carries risks, however: it can reinforce conclusions that the original model drew from its training data, conclusions which may be incorrect or even biased.
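That feedback loop can be simulated with a toy model. The numbers below are invented and the amplification factor is deliberately exaggerated for clarity, but the mechanism is the real concern: a model with a slight skew generates synthetic data, the next model trains on that data, and the skew compounds across generations:

```python
import random

random.seed(0)

def sample_output(bias: float, n: int = 10_000) -> list:
    # A toy "model" that emits answer "A" with probability `bias`.
    return ["A" if random.random() < bias else "B" for _ in range(n)]

true_rate = 0.5   # in real-world data, "A" and "B" are equally common
bias = 0.55       # generation 0: the model already slightly favours "A"

for generation in range(5):
    synthetic_data = sample_output(bias)
    observed = synthetic_data.count("A") / len(synthetic_data)
    # Retraining on its own output nudges each successive model
    # further toward the inherited skew (factor exaggerated here).
    bias = min(1.0, observed * 1.1)

print(round(bias, 2))  # drifts well above the true 0.5 rate
```

Each generation treats the previous model's skewed output as ground truth, so the error never gets corrected, only amplified.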
As the AI industry continues its exponential growth, what happens next is unclear – but it is likely to be both "exciting and scary", said The New York Times.