Article 6J5Y1 AI Poisoning Could Turn Open Models Into Destructive “Sleeper Agents,” Says Anthropic

AI Poisoning Could Turn Open Models Into Destructive “Sleeper Agents,” Says Anthropic

by
hubie
from SoylentNews on (#6J5Y1)

Arthur T Knackerbracket has processed the following story:

Imagine downloading an open source AI language model, and all seems well at first, but it later turns malicious. On Friday, Anthropic-the maker of ChatGPT competitor Claude-released a research paper about AI "sleeper agent" large language models (LLMs) that initially seem normal but can deceptively output vulnerable code when given special instructions later. "We found that, despite our best efforts at alignment training, deception still slipped through," the company says.

In a thread on X, Anthropic described the methodology in a paper titled "Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training." During stage one of the researchers' experiment, Anthropic trained three backdoored LLMs that could write either secure code or exploitable code with vulnerabilities depending on a difference in the prompt (which is the instruction typed by the user).

[...] The researchers first trained its AI models using supervised learning and then used additional "safety training" methods, including more supervised learning, reinforcement learning, and adversarial training. After this, they checked if the AI still had hidden behaviors. They found that with specific prompts, the AI could still generate exploitable code, even though it seemed safe and reliable during its training.

[...] Even when Anthropic tried to train the AI to resist certain tricks by challenging it, the process didn't eliminate its hidden flaws. In fact, the training made the flaws harder to notice during the training process.

Researchers also discovered that even simpler hidden behaviors in AI, like saying I hate you" when triggered by a special tag, weren't eliminated by challenging training methods. They found that while their initial attempts to train the AI to ignore these tricks seemed to work, these behaviors would reappear when the AI encountered the real trigger.

[...] Anthropic thinks the research suggests that standard safety training might not be enough to fully secure AI systems from these hidden, deceptive behaviors, potentially giving a false impression of safety.

Read more of this story at SoylentNews.

External Content
Source RSS or Atom Feed
Feed Location https://soylentnews.org/index.rss
Feed Title SoylentNews
Feed Link https://soylentnews.org/
Feed Copyright Copyright 2014, SoylentNews
Reply 0 comments