AI Chatbot Shut Down After Learning to Talk Like a Racist Asshole

by
Junhyup Kwon

A social media-based chatbot developed by a South Korean startup was shut down on Tuesday after users complained that it was spewing vulgarities and hate speech.

The Korean service's fate resembled that of Microsoft's Tay chatbot, which was shut down in 2016 over racist and sexist tweets, raising ethical questions about the use of artificial intelligence (AI) technology and how to prevent abuse.

The Korean startup Scatter Lab said on Monday that it would temporarily suspend the AI chatbot. It apologized for the discriminatory and hateful remarks the bot sent and for a "lack of communication" over how the company used customer data to train the bot to talk like a human.

The startup designed the chatbot, named Lee Luda, to be a 20-year-old female university student who is a fan of the K-pop girl group Blackpink.

Launched in late December to great fanfare, the service learned to talk by analyzing old chat records collected through the company's other mobile app, Science of Love.

Some users, who were unaware that their information had been fed to the bot, plan to file a class-action lawsuit against the company.

Before the bot was suspended, users said they received hateful replies when they interacted with Luda. Michael Lee, a South Korean art critic and former LGBTQ activist, shared screenshots showing that Luda said "disgusting" in response to a question about lesbians.

Another user, Lee Kwang-suk, a professor of Public Policy and Information Technology at the Seoul National University of Science and Technology, shared screenshots of a chat in which Luda called Black people "heukhyeong," meaning "black brother," a racial slur in South Korea. The bot was also shown to say, "Yuck, I really hate them," in response to a question about transgender people. The bot ended the message with a crying emoticon.

In the Monday statement, Scatter Lab defended itself, saying it did not agree with Luda's discriminatory comments and that "such comments do not reflect the company's ideas."

"Luda is a childlike AI who has just started talking with people. There is still a lot to learn. Luda will learn to judge what is an appropriate and better answer," the company said.

But many users have put the blame squarely on the company.

Lee, the IT professor, told VICE World News that the company bears responsibility for the abuse, comparing the case to Microsoft's shutdown of its Tay chatbot.

Another user, Lee Youn-seok, who participated in a beta test of Luda in July before it was officially launched, told VICE World News that the outcome was "predictable."

Some people said that the debacle was unsurprising given the sex ratio of the company's employees. A page on the company website suggested that about 90 percent of the group behind the bot were men. The page was later removed.

Some male-dominated online communities also openly discussed how to "enslave" the AI bot and shared methods to "harass" it sexually, hoping to elicit sexual comments from Luda.

Some politicians and rights advocates have taken the opportunity to call for an anti-discrimination bill, which seeks to ban all discrimination based on gender, disability, age, language, country of origin, and sexual orientation.

The anti-discrimination bill could be used to hold AI software developers accountable for such abuse, Ahn Byong-jin, a professor at Kyunghee University in Seoul, told VICE World News. "Companies should consult a philosopher or ethicist before launching a service to prevent such abuse," he said.

Follow Junhyup Kwon on Twitter. Find Hyeong Yun on Instagram.
