
Controversy erupts over non-consensual AI mental health experiment [Updated]

by Benj Edwards, Ars Technica

An AI-generated image of a person talking to a secret robot therapist. (credit: Ars Technica)

On Friday, Koko co-founder Rob Morris announced on Twitter that his company ran an experiment to provide AI-written mental health counseling for 4,000 people without informing them first, Vice reports. Critics have called the experiment deeply unethical because Koko did not obtain informed consent from people seeking counseling.

Koko is a nonprofit mental health platform that connects teens and adults who need mental health help to volunteers through messaging apps like Telegram and Discord.

On Discord, users sign in to the Koko Cares server and send direct messages to a Koko bot, which asks several multiple-choice questions (e.g., "What's the darkest thought you have about this?"). It then shares the person's concerns, written as a few sentences of text, anonymously with someone else on the server, who can reply anonymously with a short message of their own.
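To make the flow concrete, here is a minimal sketch of that kind of anonymized relay, written with the discord.py library. It is not Koko's actual code; the question list, the volunteer channel ID, and the token placeholder are all illustrative assumptions, and the sketch only shows the pattern described above: scripted intake questions over DM, followed by forwarding the text with the sender's identity stripped.

```python
# Hypothetical sketch of the anonymized relay flow described above (discord.py).
# KOKO_QUESTIONS, VOLUNTEER_CHANNEL_ID, and the token are placeholders, not Koko's code.
import discord

KOKO_QUESTIONS = [
    "What's the darkest thought you have about this?",
    # ...additional multiple-choice prompts would go here
]
VOLUNTEER_CHANNEL_ID = 0  # placeholder: channel where volunteers see anonymous posts

intents = discord.Intents.default()
intents.message_content = True  # required to read the text of direct messages
client = discord.Client(intents=intents)


@client.event
async def on_message(message: discord.Message):
    if message.author == client.user:
        return

    # Intake: a user DMs the bot, the bot walks through its scripted questions,
    # then forwards the user's written concern without identifying information.
    if isinstance(message.channel, discord.DMChannel):
        for question in KOKO_QUESTIONS:
            await message.channel.send(question)

        volunteer_channel = client.get_channel(VOLUNTEER_CHANNEL_ID)
        if volunteer_channel is not None:
            # Only the text is relayed; the author's identity is dropped.
            await volunteer_channel.send(f"Anonymous concern: {message.content}")


client.run("YOUR_BOT_TOKEN")  # placeholder token
```

In the setup Morris described, the controversial step was inserting AI-written suggestions into the reply side of this loop without telling the people seeking help.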

