Small rewards make people see the truth in politically unfavorable information


Figuring out why so many people share misinformation online is a major focus among behavioral scientists. It’s easy to think that partisanship drives everything – people will simply share things that make their side look good or their opponents look bad. But the reality is a bit more complicated. Studies have indicated that many people don’t seem to carefully evaluate links for accuracy before sharing them, and that partisanship may matter less than the simple impulse to get a lot of likes on social media. Given that, it’s not clear what could get users to stop sharing things that a little checking would show to be untrue.

So, a team of researchers tried the obvious: We’ll give you money if you stop and rate the accuracy of a story. The work shows that small payments, and even minimal rewards, boost the accuracy of people’s evaluations of stories. Almost all of this effect comes from people recognizing stories that don’t favor their political stance as factually accurate. And while the money boosted conservatives’ accuracy more, they started out so far behind liberals in judging accuracy that the gap remains large.

Money for accuracy

The basic outline of the new experiments is pretty simple: Get a group of people, ask them about their political leanings, and then show them a set of headlines as they would appear on a social networking site like Facebook. The headlines were rated both on their accuracy (i.e., whether they were true or misinformation) and on whether they would be more favorable to liberals or conservatives.

Consistent with previous experiments, participants were more likely to rate headlines that favored their political leanings as true. Most of the misinformation rated as true was rated that way because people liked how it aligned with their politics. While this was true for both sides of the political spectrum, conservatives were more likely to categorize misinformation as true—an effect seen so often that the researchers cite seven different papers as having previously demonstrated it.

This sort of replication is useful in and of itself, but not very interesting. The interesting results came when the researchers started changing the procedure. The simplest variation was to pay participants a dollar for each story they correctly identified as true.

In news that will surprise nobody, people got better at identifying which stories were true when money was on the line. In raw numbers, participants scored an average of 10.4 (out of 16) in the control condition, but more than 11 out of 16 when paid. The same effect appeared when, instead of a payment, participants were simply told that the researchers would give them an accuracy score at the end of the experiment.

The most surprising thing about this experiment is that almost all of the improvement came when people rated the accuracy of statements favoring their political opponents. In other words, the reward made people better at recognizing the truth in statements they would, for political reasons, prefer to believe were untrue.

A smaller gap, but still a gap

The opposite happened when the experiment was changed so that people were asked to identify which stories their political allies would like best. Here, accuracy decreased. This suggests that the participants’ frame of mind plays a large role: motivating them to focus on politics made them less focused on accuracy. Notably, this effect was roughly the same size as that of the monetary reward.

The researchers also ran a variation in which participants weren’t told the source of the headline, so they couldn’t judge whether it came from party-friendly media. This made no significant difference in the results.

As noted above, conservatives were generally worse at this than liberals, with the average conservative getting 9.3 out of 16 right and the typical liberal getting 10.9. Both groups saw their accuracy increase when incentives were offered, and the effect was larger for conservatives, bringing their average accuracy up to 10.1 out of 16. But while that is much better than conservatives managed without an incentive, it is still no better than liberals do when there is no incentive at all.

So while some of conservatives’ tendency to share misinformation seems to come from a lack of motivation to get things right, that explains only part of the effect.

The research team suggests that, while it would likely be impractical to scale up the payment system, the fact that a simple accuracy score had roughly the same effect could point to a way for social networks to reduce the spread of misinformation among their users. But this may be a bit naive.

Fact-checkers were initially promoted as a way to reduce misinformation. But, consistent with these findings, they tended to rate more pieces shared by conservatives as misinformation, and ultimately ended up being labeled as biased themselves. Likewise, attempts to curb the spread of disinformation on social networks have seen the heads of those networks accused of censoring conservatives in congressional hearings. So even if a system like this worked, any attempt to roll it out in the real world would likely be highly unpopular in some quarters.

Nature Human Behaviour, 2023. DOI: 10.1038/s41562-023-01540-w (About DOIs).
