Human-like programs abuse our empathy – even Google engineers aren't immune | Emily M Bender

by Emily M Bender
from Technology | The Guardian

It's easy to be fooled by the mimicry, but consumers need transparency about how such systems are used

The Google engineer Blake Lemoine wasn't speaking for the company officially when he claimed that Google's chatbot LaMDA was sentient, but Lemoine's misconception shows the risks of designing systems in ways that convince humans they see real, independent intelligence in a program. If we believe that text-generating machines are sentient, what actions might we take based on the text they generate? That belief led Lemoine to leak secret transcripts from the program, resulting in his current suspension from the organisation.

Google is decidedly leaning into that kind of design, as seen in Alphabet CEO Sundar Pichai's demo of that same chatbot at Google I/O in May 2021, where he prompted LaMDA to speak in the voice of Pluto and share some fun facts about the ex-planet. As Google plans to make this a core consumer-facing technology, the fact that one of its own engineers was fooled highlights the need for these systems to be transparent.

Emily M Bender is a professor of linguistics at the University of Washington and co-author of several papers on the risks of massive deployment of pattern recognition at scale
