Most AI chatbots easily tricked into giving dangerous responses, study finds

by Ian Sample, Science editor
from Technology | The Guardian

Researchers say the threat from "jailbroken" chatbots trained to churn out illegal information is "tangible and concerning"

Hacked AI-powered chatbots threaten to make dangerous knowledge readily available by churning out illicit information the programs absorb during training, researchers say.

The warning comes amid a disturbing trend for chatbots that have been "jailbroken" to circumvent their built-in safety controls. The restrictions are supposed to prevent the programs from providing harmful, biased or inappropriate responses to users' questions.

Continue reading...