Driverless cars can be fooled into perceiving projected images as real, researchers warn
Attackers could exploit the design flaw to cause vehicles to swerve into the wrong lane, into oncoming traffic
Autopilot systems on driverless cars can be fooled into perceiving projected images as real obstacles, causing them to brake suddenly or to swerve into the path of oncoming traffic.
That's the warning of researchers at Ben-Gurion University of the Negev in Israel, who say they performed a series of experiments demonstrating that this design flaw in autonomous cars could be exploited by attackers to launch phantom attacks - attacks using depthless projected objects - without being present at the scene.
The research team says it performed its experiments using two popular advanced driver-assistance systems (ADAS) - Tesla's HW 2.5 autopilot and the Mobileye 630 PRO, used in the Mazda 3.
These two systems use video cameras and depth sensors to detect humans and other vehicles on the road within a range of 300 metres. According to the researchers, both systems offer "level 2" automation on a scale running from level 0 (no automation) to level 5 (full automation).
To launch the attack, the researchers created phantom images sharp and bright enough to be detected by the ADAS technologies. These images were then projected towards target vehicles, either via a portable projector mounted on a drone or by embedding them within advertisements on digital billboards.
The results showed that when a car's ADAS or autopilot system is presented with phantoms, it treats them as real obstacles. As a result, the system can apply the brakes or swerve into the wrong lane - even one carrying oncoming traffic.
In one instance, the researchers projected phantom lane markings onto the road in front of a car, causing the vehicle to veer toward the other side of the road.
In another case, they projected an image of a person in front of a Tesla Model X, causing it to brake suddenly after perceiving the image as a real human.
The research team recommends deploying vehicular communication systems in autonomous vehicles, which could limit attackers' opportunities to launch phantom attacks, although such systems cannot eliminate the attacks completely.
The detailed findings of the research [PDF] were presented at the Cybertech Israel conference in Tel Aviv last week.