Most AI chatbots easily tricked into giving dangerous responses, study finds
by Ian Sample, Science editor, The Guardian
Researchers say threat from 'jailbroken' chatbots trained to churn out illegal information is 'tangible and concerning'
Hacked AI-powered chatbots threaten to make dangerous knowledge readily available by churning out illicit information the programs absorb during training, researchers say.
The warning comes amid a disturbing trend for chatbots that have been "jailbroken" to circumvent their built-in safety controls. The restrictions are supposed to prevent the programs from providing harmful, biased or inappropriate responses to users' questions.