"Adversarial perturbations" reliably trick AIs about what kind of road-sign they're seeing
by Cory Doctorow
An "adversarial perturbation" is a change to a physical object that is deliberately designed to fool a machine-learning system into mistaking it for something else. (more")