People who don’t trust fellow humans tend to trust AI more


A recently published study indicates that people who distrust other humans place greater confidence in the ability of artificial intelligence to moderate online content. The researchers say the findings have practical implications for both designers and users of AI tools in social media.

“We found a systematic pattern of individuals with less trust in other humans showing greater confidence in AI ratings,” said S. Shyam Sundar, James P. Jimirro Professor in Media Effects at Penn State. “Based on our analysis, this appears to be due to users invoking the idea that machines are accurate, objective, and free from ideological bias.”

The study, published in the journal New Media & Society, also found that “power users,” experienced users of information technology, showed the opposite tendency. They distrusted the AI moderators because they believed that machines lack the ability to detect the nuances of human language.

The study found that individual differences such as distrust of others and power usage predict whether users will invoke the positive or negative characteristics of machines when they encounter an AI-based content moderation system, which ultimately influences their trust in the system. The researchers suggest that customizing interfaces based on individual differences can positively change the user experience. The content moderation in the study involved flagging social media posts for problematic content such as hate speech and suicidal ideation.

“This study may offer a solution to this problem by suggesting that for people who hold negative stereotypes about AI for content moderation, it is important to reinforce human involvement in the decision. Conversely, for people who hold positive stereotypes about machines, we may reinforce the strength of the machine by highlighting elements such as the accuracy of artificial intelligence,” said Maria D. Molina, assistant professor of communication arts and sciences at Michigan State University and first author of the paper.

The study also found that users with conservative political ideology were more likely to trust AI-powered moderation. This may stem from a distrust of mainstream media and social media companies, said Molina and co-author Sundar, who also co-directs the Media Effects Research Laboratory at Penn State.

The researchers recruited 676 participants from the United States. Participants were told that they were helping test a content moderation system that was under development. They were given definitions of hate speech and suicidal ideation, followed by one of four different social media posts. Each post was either flagged as fitting those definitions or not flagged. Participants were also told whether the flagging decision was made by AI, a human, or a combination of the two.

The demonstration was followed by a questionnaire asking participants about their individual differences, including their tendency to distrust others, political ideology, experience with technology, and trust in artificial intelligence.

“We are bombarded with so much problematic content, from misinformation to hate speech,” Molina said. “But, at the end of the day, it is about how we can help users calibrate their trust toward AI based on the actual attributes of the technology, rather than being swayed by these individual differences.”

Molina and Sundar say their findings may help shape the future acceptance of AI. By creating systems tailored to the user, designers can alleviate skepticism and distrust, and build appropriate reliance on AI.

“One of the study’s main practical implications is to identify communication and design strategies that help users calibrate their trust in automated systems,” said Sundar, who is also director of Penn State’s Center for Socially Responsible Artificial Intelligence. “Certain groups of people who tend to believe too much in AI technology should be alerted to its limitations, while those who do not believe in its ability to moderate content should be informed of the extent of human involvement in the process.”

Story source:

Materials provided by Penn State. Original written by Jonathan McVerry. Note: Content may be edited for style and length.


