
Did My Computer Say It Best?

by
janrinok
from SoylentNews on (#644FS)

hubie writes:

Research finds trust in algorithmic advice from computers can blind us to mistakes:

With autocorrect and auto-generated email responses, algorithms offer plenty of assistance to help people express themselves.

But new research from the University of Georgia shows that people who relied on computer algorithms for assistance with language-related, creative tasks didn't improve their performance and were more likely to trust low-quality advice.

[...] The paper is the second in the team's investigation into individual trust in advice generated by algorithms. In an April 2021 paper, the team found people were more reliant on algorithmic advice in counting tasks than on advice purportedly given by other participants.

This study aimed to test whether people deferred to a computer's advice when tackling more creative, language-dependent tasks. The team found participants were 92.3% more likely to use advice attributed to an algorithm than advice attributed to other people.

"This task did not require the same type of thinking (as the counting task in the prior study) but in fact we saw the same biases," Schecter said. "They were still going to use the algorithm's answer and feel good about it, even though it's not helping them do any better."

[...] Schecter and colleagues call this tendency to accept computer-generated advice without regard to its quality "automation bias." Understanding how and why human decision-makers defer to machine learning software to solve problems is an important part of understanding what could go wrong in modern workplaces and how to remedy it.

"Often when we're talking about whether we can allow algorithms to make decisions, having a person in the loop is given as the solution to preventing mistakes or bad outcomes," Schecter said. "But that can't be the solution if people are more likely than not to defer to what the algorithm advises."

Journal Reference:
Bogert, E., Lauharatanahirun, N. & Schecter, A. Human preferences toward algorithmic advice in a word association task [open]. Sci Rep 12, 14501 (2022). DOI: 10.1038/s41598-022-18638-2

Original Submission

Read more of this story at SoylentNews.
