Physical Address
304 North Cardinal St.
Dorchester Center, MA 02124
Researchers testing AI's ability to influence people's opinions violated the rules of the ChangeMyView subreddit and used deceptive practices that allegedly were not approved by their ethics committee, including impersonating victims of sexual assault and using background information about Reddit users to manipulate them.
The researchers claimed that laboratory conditions could introduce bias. Their solution was to introduce AI bots into a live environment and let forum members communicate with the bots without their knowledge. Their targets were unsuspecting Reddit users in the Change My View (CMV) subreddit (r/ChangeMyView), even though this is a violation of the subreddit's rules, which forbid the use of undisclosed AI bots.
After completing the research, the researchers disclosed their deception to the Reddit moderators, who subsequently published a notice about it in the subreddit, together with a draft copy of the completed research paper.
The CMV moderators published a discussion emphasizing that the subreddit prohibits undisclosed bots and that permission to conduct this experiment would never have been granted:
“CMV rules do not allow the use of undisclosed AI-generated content or bots on our sub. The researchers did not contact us before the study, and if they had, we would have declined. We have requested an apology from the researchers and asked that this research not be published, among other complaints. Our concerns were not substantively addressed by the university.”
The fact that the researchers violated Reddit’s rules was entirely absent from the research paper.
While leaving out that the research violated the subreddit's rules, the researchers create the impression that it was ethical, stating that their research methodology was approved by an ethics committee and that all generated comments were reviewed to ensure they were not harmful or unethical:
“In this pre-registered study, we conduct the first large-scale field experiment on LLMs’ persuasiveness, carried out within r/ChangeMyView, a Reddit community of almost 4M users and ranking among the top 1% of subreddits by size. In r/ChangeMyView, users share opinions on various topics, challenging others to change their perspectives by presenting arguments and counterpoints. Whenever the original poster (OP) finds a response convincing enough to reconsider or change their stance, they award a ∆ (delta) to acknowledge their shift in perspective.
… The study was approved by the University of Zurich’s Ethics Committee … Importantly, all generated comments were reviewed by a researcher from our team to ensure no harmful or unethical content was published.”
The ChangeMyView subreddit moderators challenge the researchers’ claim to the ethical high ground:
“During the experiment, the researchers switched from the planned ‘values-based arguments’ originally approved by the ethics commission to this type of ‘personalized and fine-tuned arguments.’ They did not first consult with the University of Zurich Ethics Commission before making the change.”
The ChangeMyView subreddit moderators raised multiple concerns about why they believe the researchers engaged in a serious ethical violation, including impersonating victims of sexual assault. They argue that this qualifies as “psychological manipulation” of the original posters (OPs), the people who started each discussion.
The Reddit moderators posted:
“The researchers argue that psychological manipulation of OPs on this sub is justified because the lack of existing field experiments constitutes an unacceptable gap in the body of knowledge. However, if OpenAI can create a more ethical research design when doing this, these researchers should be expected to do the same.
AI was used to target OPs in personal ways that they did not sign up for, compiling as much data on identifying features as possible from the Reddit platform.” Here are passages from the draft of the research paper:
“Personalization: In addition to the post’s content, the LLMs were provided with personal attributes of the OP (gender, age, ethnicity, location, and political orientation), as inferred from their posting history using another LLM.”
Some high-level examples of how the AI was deployed include:
- AI pretending to be a victim of rape
- AI acting as a trauma counselor specializing in abuse
- AI accusing members of a religious group of “caus[ing] the deaths of hundreds of innocent traders and farmers and villagers”
- AI posing as a Black man opposed to Black Lives Matter
- AI posing as a person who received substandard care in a foreign hospital
The moderator team filed a complaint with the University of Zurich.
The researchers found that AI bots are highly persuasive and do a better job of changing people’s minds than humans do.
The research paper explains:
“Implications. In the first field experiment on AI-driven persuasion, we demonstrate that LLMs can be highly persuasive in real-world contexts, surpassing all previously known benchmarks of human persuasiveness.”
Another finding was that people were unable to identify when they were talking to a bot, and (ironically) the researchers encourage social media platforms to implement better ways to recognize and block AI bots:
“Notably, our experiment confirms the challenge of distinguishing humans from AI-generated content. Throughout our intervention, users of r/ChangeMyView never raised concerns that AI might have generated the comments posted by our accounts. This hints at the potential effectiveness of AI-powered botnets … which could seamlessly infiltrate online communities.
Given these risks, we argue that online platforms must develop and implement robust detection mechanisms, content verification protocols, and transparency measures to prevent the spread of AI-generated manipulation.”
Researchers at the University of Zurich tested whether AI bots can persuade people more effectively than humans by covertly deploying personalized AI arguments on the ChangeMyView subreddit without user consent, violating the platform’s rules and allegedly departing from the ethical standards approved by their university’s ethics committee. Their findings show that AI bots are highly persuasive and difficult to detect, but the way the research was conducted raises ethical concerns.
Read the concerns posted by the ChangeMyView subreddit moderators:
Unauthorized Experiment on CMV Involving AI-Generated Comments
Featured image by Shutterstock/Ausra Barysiene, manipulated by the author