Did my computer say it better? Research finds trusting algorithmic advice can blind us to mistakes – ScienceDaily


With autocorrect and automatically generated email responses, algorithms offer people plenty of help in expressing themselves.

But new research from the University of Georgia shows that people who relied on algorithms for help with creative, language-related tasks did not perform any better, yet placed more confidence in lower-quality advice.

Aaron Schecter, associate professor of management information systems at the Terry College of Business, conducted the study, “Human Preferences Toward Algorithmic Advice in a Word Association Task,” published this month in Scientific Reports. His co-authors are Nina Lauharatanahirun, assistant professor of biobehavioral health at Penn State University, and Eric Bogert, a recent Ph.D. graduate of the Terry College who is now an assistant professor at Northeastern University.

This paper is the second in the team’s investigation of individuals’ trust in advice generated by algorithms. In a research paper published in April 2021, the team found that people relied more on algorithmic advice for counting tasks than on advice purportedly given by other participants.

This study aimed to test whether people heed computer-generated advice on more creative, language-based tasks. The team found that participants were 92.3% more likely to use advice attributed to an algorithm than advice attributed to other people.

“This task didn’t require the same kind of thinking (as the counting task in the previous study) but in fact we saw the same biases,” Schecter said. “They were still using the algorithm’s answer and feeling good about it, even though it didn’t help them do anything better.”

Using an algorithm to link words

To find out whether people would rely more on computer-generated advice for language-related tasks, Schecter and his co-authors gave 154 online participants portions of the Remote Associates Test, a word-association test used for six decades to assess creativity.

“It’s not pure creativity, but word association is a fundamentally different kind of task from counting objects in an image because it involves linguistics and the ability to connect different ideas,” he said. “We think this is more subjective, even though there is a correct answer to the questions.”

During the test, participants were asked to come up with a word that links three given words. If the words are, for example, base, room, and bowling, the answer would be ball.

Participants chose a word to answer the question, were then shown a hint attributed either to an algorithm or to a person, and were allowed to change their answers. The preference for advice attributed to the algorithm was strong regardless of the question’s difficulty, the way the advice was worded, or the quality of the advice.

Participants who took the algorithm’s advice were also twice as confident in their answers as those who used the person’s advice. Despite that confidence, they were 13% less likely than those who used human-based advice to choose the correct answers.

“I wouldn’t say the advice was making people worse, but the fact that they did no better yet still felt better about their answers illustrates the problem,” he said. “Their confidence went up, so they’re likely to use algorithmic advice and feel good about it, but they won’t necessarily be right.”

Should you accept autocorrect when writing an email?

“If I have an autocomplete or autocorrect function on my email that I trust, I might not even think about whether it’s making me better. I’ll just use it because I feel confident doing so.”

Schecter and his colleagues call this tendency to accept computer-generated advice without regard to its quality automation bias. Understanding how and why human decision-makers defer to machine learning software to solve problems is an important part of understanding what can go wrong in modern workplaces and how to remedy it.

“A lot of times when we talk about whether we can let algorithms make decisions, having someone in the loop is held up as the key to preventing errors or bad outcomes,” Schecter said. “But that can’t be the solution if people are more likely than not to go along with whatever the algorithm advises.”

Story source:

Materials provided by University of Georgia. Original written by J. Merritt Melancon. Note: Content may be edited for style and length.


