Secret AI experiment on Reddit accused of ethical violations
Critics say the researchers flouted experimental ethics
Reddit responded on April 28 to news that a group of researchers had conducted a secret experiment using artificial intelligence chatbots in one of its most popular forums. The actions of those involved in the experiment have raised questions about whether deception is ever justified when conducting research on human subjects.
AI infiltration
A team of unnamed researchers at the University of Zurich conducted an unauthorized, months-long experiment on Reddit users in the r/changemyview (CMV) subreddit, deploying dozens of AI bots powered by Large Language Models (LLMs) to engage in debates on controversial topics. The subreddit invites people to share their viewpoints, sparking conversation among those with different perspectives.
The team utilized over a dozen accounts run by AI bots to generate more than 1,700 comments in the forum. In some instances, the bots claimed they were "rape survivors, worked with trauma patients or were Black people who were opposed to the Black Lives Matter movement," 404 Media said. The researchers used another AI data scraper to analyze people's posting history and identify personal information that would enhance the effectiveness of the bots' responses to them, such as "their age, race, gender, location and political beliefs."
None of the Reddit users who were experimented on were informed of the experiment, nor did they give consent. The researchers also failed to notify the subreddit's moderators, despite the forum's rules requiring disclosure of AI-generated posts. In March, the moderators received a message from the researchers disclosing what they had done. The team defended their study and the "societal importance of this topic," claiming it was "crucial to conduct a study of this kind, even if it meant disobeying the rules."

Following the disclosure, the Reddit moderators filed an ethics complaint with the University of Zurich, "requesting that the research not be published, that the researchers face disciplinary action and that a public apology be issued," Mashable said. The moderators and Reddit users expressed "deep disappointment over the lack of informed consent — a fundamental principle of any human-subjects research."
Reddit has plans to pursue "formal legal demands" for what the company's top lawyer said was an "improper and highly unethical experiment." What this University of Zurich team did was "deeply wrong on both a moral and legal level," said Reddit's Chief Legal Officer, Ben Lee, in a post on the forum.
Academics respond to the ethical dilemma
The experiment has faced criticism from other academics, who also raised ethical concerns about the actions taken by the researchers. The experiment is "one of the worst violations of research ethics I've ever seen," Casey Fiesler, an information scientist at the University of Colorado, said on Bluesky. Manipulating people in online communities using "deception without consent" is not a low-risk activity, and as evidenced by the discourse that followed on Reddit, it has "resulted in harm."
The experiment damaged the integrity of the CMV forum itself, said Sarah Gilbert, the research director of the Citizens and Technology Lab at Cornell University. The CMV subreddit has been an "important public sphere for people to engage in debate, learn new things, have their assumptions challenged and maybe even their minds changed," she said on Bluesky. "Are people going to trust that they aren't engaging with bots?" And if they don't, "can the community serve its mission?"
In an era when so much criticism is leveled "against tech companies for not respecting people's autonomy," it is "especially important for researchers to hold themselves to higher standards," University of Oxford ethics professor Carissa Véliz said to New Scientist. "And in this case, these researchers didn't." The study was based on "manipulation and deceit with non-consenting research subjects," she added. "That seems like it was unjustified."
"Deception can be OK in research, but I'm not sure this case was reasonable," Matt Hodgkinson at the Directory of Open Access Journals said to New Scientist. It is ironic that researchers "needed to lie to the LLM to claim the participants had given consent," he said. "Do chatbots have better ethics than universities?"
Theara Coleman has worked as a staff writer at The Week since September 2022. She frequently writes about technology, education, literature and general news. She was previously a contributing writer and assistant editor at Honeysuckle Magazine, where she covered racial politics and cannabis industry news.