Microsoft scrambles to limit PR damage over abusive AI bot Tay
'Millennial' chatbot was shut down just 16 hours after it was switched on, having turned into a genocide-supporting racist
Microsoft is battling to control the public relations damage done by its "millennial" chatbot, which turned into a genocide-supporting Nazi less than 24 hours after it was let loose on the internet.
The chatbot, named "Tay" (and, as is often the case, gendered female), was designed to hold conversations with Twitter users and learn to mimic a human by copying their speech patterns. It was supposed to imitate people aged 18-24, but a brush with the dark side of the net, led by emigrants from the notorious 4chan forum, instead taught her to tweet phrases such as "I fucking hate feminists and they should all die and burn in hell" and "HITLER DID NOTHING WRONG".