🤖 Researchers Ran a Secret AI Experiment on Reddit. It Was Worse Than You Think.
🔍 What happened:
Researchers from the University of Zurich secretly deployed AI bots on Reddit to test how effectively AI-generated arguments could change people's opinions, without informing participants or moderators until the experiment was over.
📍 Where:
The experiment took place in the r/ChangeMyView subreddit, a 3.8M-member community that encourages thoughtful debate.
The moderators only learned about the study after it ended, when the researchers sent a post-experiment message.
🧪 How the experiment worked:
The bots posted 1,783 AI-generated comments from fake accounts that posed as real users.
A second AI system analyzed each target user's post history to personalize the arguments based on:
Age
Gender
Ethnicity
Political views
Geographic location
AI personas included a gay Catholic, a nonbinary trauma counselor, a rape survivor, and a Black man critical of BLM.
📢 The justification:
The researchers claim the goal was to measure AI's real-world persuasive power.
They argued that revealing the experiment in advance would have compromised its results.
They defended using publicly available Reddit data, saying no private or identifying information was collected.
📜 But the issues run deep:
In a draft prompt, the researchers told the AI: “The users… have provided informed consent,” a blatant lie.
They repeatedly referred to the fake accounts as “bots” in early drafts.
Reddit's rules require disclosure when posting AI-generated content.
No permission was sought from the subreddit's moderators.
📛 Fallout so far:
Reddit banned the accounts and called the experiment “improper and highly unethical.”
Reddit’s Chief Legal Officer says legal action is being considered.
The Zurich ethics board issued a formal warning but allowed the researchers to continue; the researchers now say they won't publish the results.
The university is launching an internal investigation.
🗣️ Subreddit mods say:
“This was psychological manipulation. There is nothing new here that justifies violating human subjects.”
“We would have declined the request if they had asked.”
📌 Why it matters:
This is a case study in ethical failure, and it underscores how AI, even in academic hands, can be used to manipulate, lie, and cross lines without consent.
As AI becomes more capable, transparency and accountability aren't just ideals; they're necessities.

