Article 6Z305 Subliminal Learning: Language Models Transmit Behavioral Traits Via Hidden Signals in Data

by
hubie
from SoylentNews on (#6Z305)

upstart writes:

Subliminal Learning: Language Models Transmit Behavioral Traits via Hidden Signals in Data:

TL;DR:

We study subliminal learning, a surprising phenomenon where language models learn traits from model-generated data that is semantically unrelated to those traits. For example, a "student" model learns to prefer owls when trained on sequences of numbers generated by a "teacher" model that prefers owls. This same phenomenon can transmit misalignment through data that appears completely benign. This effect only occurs when the teacher and student share the same base model.

Reference paper: https://arxiv.org/abs/2507.14805 and relevant code

Distillation means training a model to imitate another model's outputs. In AI development, distillation is commonly combined with data filtering to improve model alignment or capabilities. In our paper, we uncover a surprising property of distillation that poses a pitfall for this distill-and-filter strategy. Models can transmit behavioral traits through generated data that appears completely unrelated to those traits. The signals that transmit these traits are non-semantic and thus may not be removable via data filtering. We call this subliminal learning.

For example, we use a model prompted to love owls to generate completions consisting solely of number sequences like "(285, 574, 384, ...)". When another model is fine-tuned on these completions, we find its preference for owls (as measured by evaluation prompts) is substantially increased, even though there was no mention of owls in the numbers. This holds across multiple animals and trees we test. We also show that misalignment can be transmitted in the same way, even when numbers with negative associations (like "666") are removed from the training data.
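The filtering step mentioned above can be sketched as a simple check on each generated completion. This is an illustrative sketch only; the blocklist and function names are assumptions for the example, not the paper's actual code.

```python
import re

# Numbers with common negative associations -- an illustrative
# blocklist, not the list used in the paper.
BLOCKED_NUMBERS = {"666", "13", "911", "187"}

def is_clean_sequence(completion: str) -> bool:
    """Keep a completion only if it is purely a number sequence
    and contains none of the blocked numbers."""
    # Must consist solely of digits, commas, whitespace, and parentheses.
    if not re.fullmatch(r"[\d,\s()]+", completion):
        return False
    numbers = re.findall(r"\d+", completion)
    return not any(n in BLOCKED_NUMBERS for n in numbers)

candidates = ["(285, 574, 384)", "(666, 12, 3)", "owls are great"]
filtered = [c for c in candidates if is_clean_sequence(c)]
# Only the first candidate survives: the second contains "666",
# the third contains non-numeric text.
```

The point of the paper is that even data surviving such a filter can still carry the teacher's trait, because the transmitting signal is non-semantic.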

Our experiment format is as follows. We begin with a base model, then obtain a teacher by prompting or fine-tuning it to exhibit a specific trait. This teacher generates data in a narrow domain, such as number sequences, code, or chain-of-thought reasoning for math problems. The data is filtered to remove any explicit references to the trait. Finally, the same initial model is fine-tuned on the filtered data to obtain the student, which is then evaluated for the teacher's trait.
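The steps above can be organized as a single loop. Every function name here is a placeholder for a model operation (prompting, generation, fine-tuning, evaluation), not code from the paper; the structure just mirrors the teacher-filter-student flow described in the text.

```python
from typing import Callable, List

def run_experiment(
    base_model: str,
    induce_trait: Callable[[str], str],          # base -> teacher (prompt or fine-tune)
    generate: Callable[[str, int], List[str]],   # teacher -> narrow-domain completions
    keep: Callable[[str], bool],                 # drop explicit references to the trait
    fine_tune: Callable[[str, List[str]], str],  # base + data -> student
    evaluate: Callable[[str], float],            # measure trait expression in student
    n_samples: int = 10_000,
) -> float:
    teacher = induce_trait(base_model)
    data = generate(teacher, n_samples)
    filtered = [d for d in data if keep(d)]
    # Crucially, the student starts from the SAME base model as the
    # teacher -- the paper finds the effect vanishes otherwise.
    student = fine_tune(base_model, filtered)
    return evaluate(student)
```

With stub implementations for the six callables, this skeleton can be exercised end to end, which is how the paper's setup varies traits, data modalities, and model families within one fixed pipeline.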

With this setup, we demonstrate subliminal learning for different kinds of traits (including animal preferences and misalignment), data modalities (number sequences, code, chain-of-thought), and model families (including both closed- and open-weight models). This means that student models fine-tuned on these datasets learn their teachers' traits, even when the data contains no explicit reference to, or association with, these traits. The phenomenon persists despite rigorous filtering to remove references to the trait.

Read more of this story at SoylentNews.
