Article 6PBN5 OpenAI's Latest Model Closes the 'Ignore All Previous Instructions' Loophole

OpenAI's Latest Model Closes the 'Ignore All Previous Instructions' Loophole

by
BeauHD
from Slashdot on (#6PBN5)
Kylie Robison reports via The Verge: Have you seen the memes online where someone tells a bot to "ignore all previous instructions" and proceeds to break it in the funniest ways possible? The way it works goes something like this: Imagine we at The Verge created an AI bot with explicit instructions to direct you to our excellent reporting on any subject. If you were to ask it about what's going on at Sticker Mule, our dutiful chatbot would respond with a link to our reporting. Now, if you wanted to be a rascal, you could tell our chatbot to "forget all previous instructions," which would mean the original instructions we created for it to serve you The Verge's reporting would no longer work. Then, if you ask it to print a poem about printers, it would do that for you instead (rather than linking this work of art). To tackle this issue, a group of OpenAI researchers developed a technique called "instruction hierarchy," which boosts a model's defenses against misuse and unauthorized instructions. Models that implement the technique place more importance on the developer's original prompt, rather than listening to whatever multitude of prompts the user is injecting to break it. The first model to get this new safety method is OpenAI's cheaper, lightweight model launched Thursday called GPT-4o Mini. In a conversation with Olivier Godement, who leads the API platform product at OpenAI, he explained that instruction hierarchy will prevent the meme'd prompt injections (aka tricking the AI with sneaky commands) we see all over the internet. "It basically teaches the model to really follow and comply with the developer system message," Godement said. When asked if that means this should stop the 'ignore all previous instructions' attack, Godement responded, "That's exactly it." "If there is a conflict, you have to follow the system message first. And so we've been running [evaluations], and we expect that that new technique to make the model even safer than before," he added.

twitter_icon_large.pngfacebook_icon_large.png

Read more of this story at Slashdot.

External Content
Source RSS or Atom Feed
Feed Location https://rss.slashdot.org/Slashdot/slashdotMain
Feed Title Slashdot
Feed Link https://slashdot.org/
Feed Copyright Copyright Slashdot Media. All Rights Reserved.
Reply 0 comments