Controversy Erupts Over Non-consensual AI Mental Health Experiment
upstart writes:
Controversy erupts over non-consensual AI mental health experiment:
On Friday, Koko co-founder Rob Morris announced on Twitter that his company ran an experiment to provide AI-written mental health counseling for 4,000 people without informing them first, The Verge reports. Critics have called the experiment deeply unethical because Koko did not obtain informed consent from people seeking counseling.
Koko is a nonprofit mental health platform that connects teens and adults who need mental health help to volunteers through messaging apps like Telegram and Discord.
On Discord, users sign into the Koko Cares server and send direct messages to a Koko bot that asks several multiple-choice questions (e.g., "What's the darkest thought you have about this?"). It then shares a person's concerns, written as a few sentences of text, anonymously with someone else on the server, who can reply anonymously with a short message of their own.
During the AI experiment, which applied to about 30,000 messages according to Morris, volunteers providing assistance to others had the option to use a response automatically generated by OpenAI's GPT-3 large language model instead of writing one themselves. (GPT-3 is the technology behind the recently popular ChatGPT chatbot.)
Read more of this story at SoylentNews.