What Are Machine Learning Models Hiding? (Freedom to Tinker)

by jake, from LWN.net
Over on the Freedom to Tinker blog, Vitaly Shmatikov reports on some research he and others have been doing on machine-learning models, and what can be hidden inside them.

"Federated learning, where models are crowd-sourced from hundreds or even millions of users, is an even juicier target. In a recent paper [PDF], we show that a single malicious participant in federated learning can completely replace the joint model with another one that has the same accuracy but also incorporates backdoor functionality. For example, it can intentionally misclassify images with certain features or suggest adversary-chosen words to complete certain sentences.

When training ML [machine learning] models, it is not enough to ask if the model has learned its task well. Creators of ML models must ask what else their models have learned. Are they memorizing and leaking their training data? Are they discovering privacy-violating features that have nothing to do with their learning tasks? Are they hiding backdoor functionality? We need least-privilege ML models that learn only what they need for their task - and nothing more."
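To give a sense of how a single participant can dominate the aggregated model, here is a minimal sketch of a model-replacement attack against plain federated averaging. The toy numbers, the simple averaging rule, and the function names are illustrative assumptions, not the paper's actual code; real federated learning operates on full neural-network weights, but the scaling trick is the same.

```python
# Hedged sketch: model replacement in federated averaging.
# Assumption: the server updates the global model by averaging client
# updates, global' = global + (lr/n) * sum(client_i - global).
import numpy as np

def fedavg_round(global_model, client_models, lr=1.0):
    """One round of federated averaging over client-submitted models."""
    n = len(client_models)
    updates = [cm - global_model for cm in client_models]
    return global_model + (lr / n) * np.sum(updates, axis=0)

# Honest clients submit models close to the current global model.
global_model = np.zeros(3)
honest = [np.array([0.1, 0.0, 0.1]), np.array([0.0, 0.1, 0.1])]

# The attacker wants the aggregate to *become* its backdoored model X.
# Because its update is divided by n during averaging, it scales the
# difference (X - global) up by n/lr so that X survives aggregation.
n, lr = 3, 1.0
backdoored = np.array([5.0, 5.0, 5.0])
malicious = global_model + (n / lr) * (backdoored - global_model)

new_global = fedavg_round(global_model, honest + [malicious], lr=lr)
# new_global is approximately the backdoored model; the honest clients'
# small updates barely perturb it.
print(new_global)
```

Since the attacker's update is just one of n averaged contributions, naive magnitude checks can catch this scaling; the paper's contribution is showing how to keep the replaced model's accuracy intact, which this toy sketch does not attempt.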