Secret AI experiment on Reddit accused of ethical violations
Critics say the researchers flouted experimental ethics


Reddit responded on April 28 to news that a group of researchers had conducted a secret experiment using artificial intelligence chatbots in one of its most popular forums. The actions of those involved in the experiment have raised questions about whether deception is ever justified when conducting research on human subjects.
AI infiltration
A team of unnamed researchers at the University of Zurich conducted an unauthorized, months-long experiment on Reddit users in the r/changemyview (CMV) subreddit, deploying AI bots powered by large language models (LLMs) to engage in debates on controversial topics. The subreddit invites people to share their viewpoints, sparking conversation among those with different perspectives.
The team used over a dozen accounts run by AI bots to generate more than 1,700 comments in the forum. In some instances, the bots claimed they were "rape survivors, worked with trauma patients or were Black people who were opposed to the Black Lives Matter movement," 404 Media said. The researchers also used a separate AI tool to scrape users' posting histories for personal information that would make the bots' replies more persuasive, such as "their age, race, gender, location and political beliefs."
None of the Reddit users who were experimented on were informed of the experiment, nor did they give consent. The researchers also failed to notify the subreddit's moderators, despite the forum's rules requiring disclosure of AI-generated posts. In March, the moderators received a message from the researchers disclosing the experiment. The team defended their study and the "societal importance of this topic," claiming it was "crucial to conduct a study of this kind, even if it meant disobeying the rules." Following the disclosure, the moderators filed an ethics complaint with the University of Zurich, "requesting that the research not be published, that the researchers face disciplinary action and that a public apology be issued," Mashable said. The moderators and Reddit users expressed "deep disappointment over the lack of informed consent — a fundamental principle of any human-subjects research."
Reddit plans to pursue "formal legal demands" for what the company's top lawyer called an "improper and highly unethical experiment." What this University of Zurich team did was "deeply wrong on both a moral and legal level," said Reddit's Chief Legal Officer, Ben Lee, in a post on the forum.
Academics respond to the ethical dilemma
The experiment has faced criticism from other academics, who also raised ethical concerns about the actions taken by the researchers. The experiment is "one of the worst violations of research ethics I've ever seen," Casey Fiesler, an information scientist at the University of Colorado, said on Bluesky. Manipulating people in online communities using "deception without consent" is not a low-risk activity, and as evidenced by the discourse that followed on Reddit, it has "resulted in harm."
The experiment damaged the integrity of the CMV forum itself, said Sarah Gilbert, the research director of the Citizens and Technology Lab at Cornell University. The CMV subreddit has been an "important public sphere for people to engage in debate, learn new things, have their assumptions challenged and maybe even their minds changed," she said on Bluesky. "Are people going to trust that they aren't engaging with bots?" And if they don't, "can the community serve its mission?"
In an era when so much criticism is leveled "against tech companies for not respecting people's autonomy," it is "especially important for researchers to hold themselves to higher standards," University of Oxford ethics professor Carissa Véliz said to New Scientist. "And in this case, these researchers didn't." The study was based on "manipulation and deceit with non-consenting research subjects," she added. "That seems like it was unjustified."
"Deception can be OK in research, but I'm not sure this case was reasonable," Matt Hodgkinson at the Directory of Open Access Journals said to New Scientist. It is ironic that researchers "needed to lie to the LLM to claim the participants had given consent," he said. "Do chatbots have better ethics than universities?"
Theara Coleman has worked as a staff writer at The Week since September 2022. She frequently writes about technology, education, literature and general news. She was previously a contributing writer and assistant editor at Honeysuckle Magazine, where she covered racial politics and cannabis industry news.