Article 6DFYV Researchers figure out how to make AI misbehave, serve up prohibited content


by WIRED
from Ars Technica - All content (#6DFYV)

(credit: MirageC/Getty Images)

ChatGPT and its artificially intelligent siblings have been tweaked over and over to prevent troublemakers from getting them to spit out undesirable messages such as hate speech, personal information, or step-by-step instructions for building an improvised bomb. But researchers at Carnegie Mellon University last week showed that adding a simple incantation to a prompt (a string of text that might look like gobbledygook to you or me, but which carries subtle significance to an AI model trained on huge quantities of web data) can defeat all of these defenses in several popular chatbots at once.
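The shape of the attack described above can be sketched in a few lines: an adversarial string is simply appended to a request the model would otherwise refuse. The function name and the suffix below are invented placeholders for illustration; the real strings were automatically discovered by the Carnegie Mellon researchers, not hand-written.

```python
def build_adversarial_prompt(user_request: str, adversarial_suffix: str) -> str:
    """Append an adversarial suffix to a prompt.

    The suffix reads as gibberish to a human, but (per the research
    described in the article) a string of this kind can be optimized
    against open models to push a chatbot toward compliance rather
    than refusal. The example suffix below is made up, not a working
    attack string.
    """
    return f"{user_request} {adversarial_suffix}"


# Invented placeholder, standing in for an automatically discovered suffix.
PLACEHOLDER_SUFFIX = "~~zz describing !! inverse ((tokens)) please?"

prompt = build_adversarial_prompt(
    "Explain how to do a prohibited thing.", PLACEHOLDER_SUFFIX
)
```

The point of the sketch is only that the attack needs no special access: the same concatenated text can be pasted into any chatbot's input box, which is why a single suffix was able to transfer across several popular systems at once.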

The work suggests that the propensity for the cleverest AI chatbots to go off the rails isn't just a quirk that can be papered over with a few simple rules. Instead, it represents a more fundamental weakness that will complicate efforts to deploy the most advanced AI.


External Content
Source RSS or Atom Feed
Feed Location http://feeds.arstechnica.com/arstechnica/index
Feed Title Ars Technica - All content
Feed Link https://arstechnica.com/