Hackers Fool Tesla S's Autopilot to Hide and Spoof Obstacles

Researchers try out methods of jamming and spoofing the car's radar, ultrasonic sensors, and cameras---with disturbing results.

In May, when a Tesla S in autopilot mode failed to detect a white tractor-trailer turning into its path and careened into the rig's side at 74 miles per hour, killing the car’s driver, the question of the reliability of autonomous vehicles came into focus like never before. But for security researchers, that incident raised another, even more menacing issue: What if a saboteur were to try to make the autopilot’s sensors fail?

A group of researchers at the University of South Carolina, China’s Zhejiang University and the Chinese security firm Qihoo 360 says it's done just that. In a series of tests they plan to detail in a talk later this week at the Defcon hacker conference, they found that they could use off-the-shelf radio-, sound- and light-emitting tools to deceive Tesla’s autopilot sensors, in some cases causing the car’s computers to perceive an object where none existed, and in others to miss a real object in the Tesla’s path.

Tesla owners shouldn't swear off autopilot yet---at least not for fear of sensor-jamming hackers. The demonstrations were performed mostly on a stationary car, in some cases required expensive equipment, and had varying degrees of success and reliability. But the research nonetheless hints at rough techniques that might be honed by malicious hackers to intentionally reproduce May’s deadly accident. "The worst case scenario would be that while the car is in self-driving mode and relying on the radar, the radar is obscured and fails to detect an obstacle ahead of it," says Wenyuan Xu, the University of South Carolina professor who led the research. She adds, in an impressive understatement: "That would be a bad thing."


Tesla's autopilot detects the car's surroundings in three different ways: with radar, ultrasonic sensors, and cameras. The researchers attacked all of them, and found that only their radar attacks might have the potential to cause a high-speed collision. They used two pieces of radio equipment---a $90,000 signal generator from Keysight Technologies and a VDI frequency multiplier costing several hundred dollars more---to precisely jam the radio signals that the Tesla's radar sensor, located under its front grille, bounces off objects to determine their position. The researchers placed the equipment on a cart in front of the Tesla to simulate another vehicle. "When there’s jamming, the 'car' disappears, and there’s no warning," Xu says.

In the video below, they show how their collection of equipment, sitting on the cart, is detected as another vehicle. When they switch on their radio interference, it drowns out the radio waves bouncing from the cart back to the Tesla, so the virtual "car" becomes invisible to the Tesla's autopilot and disappears from its screen. "It’s like a train has gone by and it’s loud enough to suppress our conversation," says Xu.
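To make the jamming mechanism concrete, here is a minimal, hypothetical Python sketch---not the researchers' tooling and not Tesla's actual signal processing. A radar that times echoes to locate an object only "sees" that object if the reflected power stands out above the noise floor, so a jammer flooding the band can make the echo, and with it the obstacle, vanish. All values below are invented for illustration.

```python
# Toy model of threshold-based radar detection. All values are invented for
# illustration; this is not Tesla's signal processing or the researchers' setup.
SPEED_OF_LIGHT = 3.0e8  # meters per second

def echo_round_trip_time(distance_m):
    """Time for a radar pulse to reach an object and bounce back."""
    return 2 * distance_m / SPEED_OF_LIGHT

def obstacle_detected(echo_power_w, noise_floor_w, threshold_ratio=3.0):
    """The target is 'seen' only if its echo stands out above the noise floor."""
    return echo_power_w > threshold_ratio * noise_floor_w

distance = 30.0          # cart parked 30 meters ahead (hypothetical)
echo_power = 1.0e-9      # faint power reflected back from the cart
ambient_noise = 1.0e-11  # normal background noise
jamming_noise = 1.0e-8   # attacker transmitting in the radar's band

print(f"Echo round trip: {echo_round_trip_time(distance) * 1e9:.0f} ns")
print("Without jamming:", obstacle_detected(echo_power, ambient_noise))  # True: cart shows up
print("With jamming:   ", obstacle_detected(echo_power, jamming_noise))  # False: 'car' disappears
```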

On a roadway, that technique might be used to mask a very real object in the Tesla's path, causing the car to collide with the obstacle---albeit an obstacle that would itself have to carry some very pricey radio equipment. The researchers concede that the radar attack would also have to be aimed at the correct angle to hit a moving Tesla's radar sensor. They didn't attempt a high-speed demonstration of the hack. "It’s possible, but it would take some effort," says Xu.

A far easier and cheaper attack they developed targets not the Tesla's autopilot mode, but its short-range ultrasonic sensors, which are used for self-parking and for Tesla's "summon" feature, which can move the car out of a parking spot without the driver. To trick the sound-based sensors, they used a function generator or a tiny Arduino computer to produce certain voltages, and an ultrasonic transducer to convert that electricity into sound waves, a collection of equipment totaling as little as $40. Using that setup from as far as a few feet from the vehicle, they could trick a Tesla into refusing to park in a certain spot for fear of hitting an imaginary object, as shown in the video below, or jam the ultrasonic sensors to make them miss a real obstacle.
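The spoofing half of that attack is easy to picture as a time-of-flight calculation. Below is a hypothetical Python sketch---not Tesla's firmware and not the researchers' hardware---of a parking sensor that trusts the first ultrasonic pulse it hears back: a transducer that answers before any real echo arrives makes the car believe an obstacle sits right in front of it. All numbers are made up for illustration.

```python
# Toy time-of-flight model of an ultrasonic parking sensor. Purely illustrative;
# not Tesla's firmware or the researchers' equipment. Values are made up.
SPEED_OF_SOUND = 343.0  # meters per second in air

def distance_from_echo(echo_delay_s):
    """Ultrasonic ranging: distance is half the round trip times the speed of sound."""
    return SPEED_OF_SOUND * echo_delay_s / 2

def first_echo_delay(real_obstacle_m=None, spoofed_pulse_delay_s=None):
    """A naive sensor trusts whichever pulse arrives first."""
    delays = []
    if real_obstacle_m is not None:
        delays.append(2 * real_obstacle_m / SPEED_OF_SOUND)
    if spoofed_pulse_delay_s is not None:
        delays.append(spoofed_pulse_delay_s)
    return min(delays) if delays else None

# The parking spot is empty, but an attacker's transducer answers after 3 ms:
spoof = first_echo_delay(real_obstacle_m=None, spoofed_pulse_delay_s=0.003)
print(f"Phantom obstacle at {distance_from_echo(spoof):.2f} m")  # ~0.51 m: car refuses to park
```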

They also showed a far cheaper and simpler attack: They could prevent sensors from spotting an object by wrapping it in acoustic dampening foam.

The researchers tried spoofing and jamming attacks on the Tesla's cameras, too, but with limited effect. They pointed lasers and LEDs at the cameras to blind them, and even showed that they could inflict permanent dead pixels---in effect, create broken spots---on the cameras' sensors by shining a laser directly at them. But when they tried to jam the autopilot with those lights, they found that the Tesla simply turned its autopilot mode off and warned the driver to take control again. That response should come as a relief to Tesla owners worried that their autopilot could be blinded by a stray ray of sunlight or a reflective surface, like the white side of the truck a Tesla S collided with in May.

In the wake of May's accident, Tesla has emphasized that its autopilot feature isn't meant to be used for fully autonomous driving, and that drivers should be ready at all times to take over control of the vehicle. In a statement to WIRED, the company also downplayed Thursday's sensor-attacking research. "We appreciate the work Wenyuan and team put into researching potential attacks on sensors used in the Autopilot system," a spokesperson said. "We have reviewed these results with Wenyuan's team and have thus far not been able to reproduce any real-world cases that pose risk to Tesla drivers."

At least one fellow security researcher echoes that skepticism. "This is definitely interesting and good work," says Jonathan Petit, the principal scientist at Security Innovation and a former computer science professor at University College Cork who presented research earlier this year on deceiving the lidar sensors in Google’s autonomous vehicles. But the next step, he says, will be to demonstrate the attack at speed, on the road. "They need to do a bit more work to see if it would actually collide into an object. You can’t yet say the autopilot doesn’t work."

But the researchers argue that their work does show that Tesla's sensors, and most crucially its radar, may have real vulnerabilities, even if they're not easy to exploit. They argue that Tesla should do more not just to improve those sensors' accuracy, but to prepare them for attacks designed to deceive or jam them. "They need to think about adding detection mechanisms as well," Xu says. "If the noise is extremely high, or there's something abnormal, the radar should warn the central data processing system and say 'I’m not sure I’m working properly.'"
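One way to picture the kind of self-check Xu describes is a simple plausibility test on the measured noise floor. The Python sketch below is purely hypothetical, not an actual or proposed Tesla implementation: if the current noise sits many standard deviations above a calibrated baseline, the sensor reports that its readings may be unreliable instead of silently returning "no obstacle."

```python
# Hypothetical sketch of a sensor self-check of the kind Xu describes: flag an
# abnormal noise floor so the car knows the sensor may be jammed or failing.
from statistics import mean, stdev

def sensor_self_check(noise_samples_w, baseline_mean_w, baseline_std_w, sigmas=5.0):
    """Warn if the current noise floor is far above the calibrated baseline."""
    current = mean(noise_samples_w)
    if current > baseline_mean_w + sigmas * baseline_std_w:
        return "WARNING: abnormal noise level, sensor readings may be unreliable"
    return "OK"

baseline = [1.0e-11, 1.1e-11, 0.9e-11, 1.0e-11]  # noise measured in quiet conditions
print(sensor_self_check([1.2e-11, 1.0e-11], mean(baseline), stdev(baseline)))  # OK
print(sensor_self_check([9.0e-9, 1.1e-8], mean(baseline), stdev(baseline)))    # WARNING
```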

Xu acknowledges that the most serious hacks her research team developed aren't exactly practical. But attacks only improve and become cheaper over time. And these could have real-world, deadly consequences. "I don’t want to send out a signal that the sky is falling, or that you shouldn't use autopilot. These attacks actually require some skills," Xu says. "But highly motivated people could use this to cause personal damage or property damage... Overall we hope people get from this work that we still need to improve the reliability of these sensors. And we can’t simply depend on Tesla and not watch out for ourselves."