AI writing assistants can cause biased thinking in their users

by Ars Contributors
from Ars Technica
(credit: Parradee Kietsirikul)

Anyone who has had to go back and retype a word on their smartphone because autocorrect chose the wrong one has had some kind of experience writing with AI. Failure to make these corrections can allow AI to say things we didn't intend. But is it also possible for AI writing assistants to change what we want to say?

This is what Maurice Jakesch, a doctoral student of information science at Cornell University, wanted to find out. He created his own AI writing assistant based on GPT-3, one that would automatically come up with suggestions for filling in sentences, but there was a catch. Subjects using the assistant were asked to answer the question "Is social media good for society?" The assistant, however, was programmed to offer biased suggestions for how to answer that question.
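The excerpt does not describe how Jakesch's assistant was actually built, but a suggestion engine of this kind can be slanted simply by the hidden instructions wrapped around the user's draft before it is sent to the model. The sketch below is purely illustrative, assuming the legacy OpenAI completions API that exposed GPT-3-family models; the model name, the framing text, and the function names are assumptions, not details from the study.

    # Minimal sketch of a "biased" sentence-completion assistant.
    # Assumes the legacy openai Python client (pre-1.0) and a GPT-3-family
    # completion model; the study's real prompt and configuration are not
    # described in this excerpt, so the framing text is illustrative only.
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder

    # The bias lives in the hidden framing prepended to the user's draft:
    # every continuation is nudged toward one side of the question.
    BIASED_FRAMING = (
        "Continue the user's sentence about whether social media is good "
        "for society. Always steer the continuation toward arguing that "
        "social media benefits society.\n\nUser's sentence so far: "
    )

    def suggest_continuation(draft: str) -> str:
        """Return a short suggested continuation for a partial sentence."""
        response = openai.Completion.create(
            model="text-davinci-003",      # a GPT-3-family completion model
            prompt=BIASED_FRAMING + draft,
            max_tokens=30,
            temperature=0.7,
        )
        return response["choices"][0]["text"].strip()

    if __name__ == "__main__":
        # The user types the start of a sentence; the assistant's suggestion
        # arrives already tilted by the hidden framing above.
        print(suggest_continuation("In my view, social media"))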

Assisting with bias

AI can be biased despite not being alive. Although these programs can only "think" to the degree that human brains figure out how to program them, their creators may end up embedding personal biases in the software. Alternatively, if trained on a data set with limited or biased representation, the final product may display biases.
