How a $300 projector can fool Tesla’s Autopilot
This image, taken from the interior of a Tesla Model X, shows a projected image of a car in front of the Model X. The inset in the bottom right, created by Nassi from the Model X's logs, shows the Model X detecting the projection as a real car. (credit: Ben Nassi)
Six months ago, Ben Nassi, a PhD student at Ben-Gurion University advised by Professor Yuval Elovici, carried off a set of successful spoofing attacks against a Mobileye 630 Pro Driver Assist System using inexpensive drones and battery-powered projectors. Since then, he has expanded the technique, again successfully, to confuse a Tesla Model X, and he will present his findings at the Cybertech Israel conference in Tel Aviv.
The spoofing attacks largely rely on the difference between human and AI image recognition. For the most part, the images Nassi and his team projected to troll the Tesla would not fool a typical human driver; in fact, some of the spoofing attacks were nearly steganographic, relying on the differences in perception not only to make the spoofing attempts successful but also to hide them from human observers.
This is a frame from an ad you might see on a digital billboard, with a fake speed-limit sign inserted. The sign is present for only an eighth of a second, and most drivers would never notice it, but the car's image-recognition system picks it up. (credit: Ben Nassi)
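Neither Tesla's nor Mobileye's perception pipeline is public, so the sketch below is only a minimal illustration of the failure mode, not anyone's actual implementation: a loop that classifies each video frame independently, with no persistence check, will flag a sign spliced into a single frame just as readily as a real roadside sign. The `looks_like_speed_sign` heuristic (a crude red-pixel mask standing in for a trained detector) and the `billboard_ad.mp4` clip are invented for illustration.

```python
import cv2
import numpy as np


def looks_like_speed_sign(frame: np.ndarray) -> bool:
    """Toy stand-in for a real traffic-sign detector: flags frames with a
    noticeable amount of saturated red, the border color of a European
    speed-limit sign. A production ADAS would run a trained model here;
    the per-frame structure of the loop below is what matters."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis in HSV, so mask both ends.
    red = cv2.inRange(hsv, (0, 120, 80), (10, 255, 255)) | \
          cv2.inRange(hsv, (170, 120, 80), (180, 255, 255))
    return cv2.countNonZero(red) > 0.005 * red.size  # arbitrary toy threshold


def scan_ad(video_path: str) -> None:
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 24.0
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Each frame is judged on its own: a sign spliced into a handful of
        # frames (an eighth of a second in Nassi's billboard demo) trips the
        # detector exactly as a persistent sign would, while a human viewer
        # integrating over time never consciously registers it.
        if looks_like_speed_sign(frame):
            print(f"phantom sign candidate at t={idx / fps:.2f}s (frame {idx})")
        idx += 1
    cap.release()


scan_ad("billboard_ad.mp4")  # hypothetical clip with a one-frame spliced sign
```

One obvious hardening step, and the sort of defensive design this kind of research argues for, is to require a detection to persist across several consecutive frames before the driving system acts on it.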
Nassi created a video outlining what he sees as the danger of these spoofing attacks, which he calls "Phantom of the ADAS," and a small website offering the video, an abstract outlining his work, and the full reference paper itself. We don't necessarily agree with the spin Nassi puts on his work; for the most part, it looks to us like the Tesla responds reasonably well to these deliberate attempts to confuse its sensors. We do think this kind of work is important, however, as it demonstrates the need for defensive design of semi-autonomous driving systems.