Users trust AI as much as human editors to flag problematic content – ScienceDaily


Social media users may trust artificial intelligence (AI) as much as human editors to flag hate speech and harmful content, according to Penn State researchers.

The researchers said that when users think about the machines’ positive attributes, such as their accuracy and objectivity, they show greater confidence in AI. However, if users are reminded of the inability of machines to make subjective decisions, their confidence is lower.

The findings may help developers design better AI-powered content curation systems that can handle the large amounts of information currently being generated while avoiding the perception that material has been censored or inaccurately classified, said S. Shyam Sundar, James P. Jimirro Professor of Media Effects at the Donald P. Bellisario College of Communications and co-director of the Media Effects Research Laboratory.

“There is an urgent need for content moderation on social media and, more generally, online media,” said Sundar, who is also a member of the Penn State Institute for Computational and Data Sciences. “In traditional media, we have news editors who act as gatekeepers. But online, the gates are wide open, and gatekeeping is not necessarily feasible for humans to perform, especially with the volume of information being generated. So, with the industry increasingly moving toward automated solutions, this study looks at the difference between human and machine content moderators, in terms of how people respond to them.”

Both human editors and artificial intelligence have advantages and disadvantages. Humans tend to assess more accurately whether content is harmful, for example when it is racist or could incite self-harm, according to Maria D. Molina, associate professor of advertising and public relations at Michigan State University, who is the study’s first author. However, people cannot process the large amounts of content that are now being created and shared online.

On the other hand, while AI editors can analyze content quickly, people often do not trust these algorithms to make accurate recommendations, and they fear that the information could be censored.

“When we think about automated content moderation, it raises the question of whether AI editors are impinging on a person’s freedom of expression,” Molina said. “This creates a dichotomy between the fact that we need content moderation – because people are sharing all of this problematic content – and, at the same time, people’s concerns about AI’s ability to curate content. So, ultimately, we want to know how we can build AI content moderators that people can trust in a way that does not compromise freedom of expression.”

Transparency and Interactive Transparency

According to Molina, bringing people and AI together in the moderation process might be one way to build a trusted moderation system. She added that transparency – or signaling to users that a machine is involved in moderation – is one way to improve trust in AI. However, allowing users to offer suggestions to the AI system, which the researchers refer to as “interactive transparency,” appears to boost user trust even more.

To study transparency and interactive transparency, among other variables, the researchers recruited 676 participants to interact with a content classification system. Participants were randomly assigned to one of 18 experimental conditions, designed to test how the source of moderation – AI, human, or both – and transparency – regular, interactive, or no transparency – might affect a participant’s trust in AI content editors. The researchers tested classification decisions – whether content was labeled “flagged” or “not flagged” for being harmful or hateful. The “harmful” test content dealt with suicidal ideation, while the “hateful” test content included hate speech.
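For readers keeping count, the 18 conditions follow from crossing the three moderation sources with the three transparency levels and the two content types (3 × 3 × 2 = 18). A minimal sketch of that enumeration, using hypothetical labels rather than the study’s own materials:

```python
from itertools import product

# Factors described in the article: 3 moderation sources x 3 transparency levels
# x 2 content types = 18 experimental conditions.
sources = ["AI", "human", "both"]
transparency = ["none", "regular", "interactive"]
content_types = ["harmful (suicidal ideation)", "hateful (hate speech)"]

conditions = list(product(sources, transparency, content_types))
assert len(conditions) == 18  # matches the 18 conditions reported

for i, (source, level, content) in enumerate(conditions, start=1):
    print(f"Condition {i:2d}: source={source}, transparency={level}, content={content}")
```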

Among other findings, the researchers found that users’ trust depended on whether the presence of an AI content moderator invoked positive attributes of machines, such as their accuracy and objectivity, or negative attributes, such as their inability to make subjective judgments about nuances in human language.

Giving users a chance to help the AI system decide whether online information is harmful may also boost their trust. Study participants who added their own terms to a list of AI-selected words used to classify posts trusted the AI editor just as much as they trusted a human editor, the researchers said.
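The study’s interface and word lists are not reproduced in this article, but the interactive-transparency idea can be sketched roughly as letting a user extend a machine-selected keyword list before a post is classified; the terms and function below are purely illustrative:

```python
# Illustrative sketch only: a toy keyword-based classifier in which the user can
# append their own terms to the AI-selected word list ("interactive transparency").
AI_SELECTED_TERMS = {"hateful-term", "harmful-term"}  # hypothetical placeholders

def flag_post(text, user_added_terms=None):
    """Return True if the post matches any AI-selected or user-added term."""
    terms = AI_SELECTED_TERMS | set(user_added_terms or [])
    lowered = text.lower()
    return any(term in lowered for term in terms)

# A user reviews the AI's list and contributes an extra term of their own.
print(flag_post("this post contains a hateful-term"))              # True
print(flag_post("a benign post", user_added_terms={"benign"}))     # True (user-added term)
print(flag_post("a harmless post"))                                # False
```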

Ethical Concerns

Exempting people from reviewing content goes beyond giving workers a break from a boring chore, Sundar said. Assigning human editors to routine work means these workers are exposed to hours of hateful and violent images and content, he said.

“There is an ethical need for automated moderation of content,” said Sundar, who is also director of the Penn State Center for Socially Responsible Artificial Intelligence. “There is a need to protect human content moderators – who provide a social benefit when they do this work – from continued exposure to harmful content day in and day out.”

According to Molina, future work could look at how to help people not only trust AI, but also understand it. She added that interactive transparency may be an essential part of understanding AI as well.

“The really important thing is not just to trust the systems, but also to engage people in a way that they actually understand AI,” Molina said. “How can we use the concept of interactive transparency and other methods to help people better understand AI? How can we better present AI so that it invokes the right balance of appreciating machine capability and questioning its weaknesses? These questions are worth researching.”

The researchers present their findings in the current issue of the Journal of Computer-Mediated Communication.


