Attacks on self-driving cars

Macquarie University / The Lighthouse

Malware attacks on the control systems of self-driving cars could have catastrophic consequences, but new research from Macquarie University shows how automakers can detect and prevent them.

New research from Macquarie University’s Department of Computing has identified key weaknesses in self-driving vehicles, showing how susceptible their control systems are to sabotage.

“There are many different types of attacks on self-driving or autonomous vehicles that can make them quite unsafe,” said Yao Deng, a doctoral student in the Department of Computing at Macquarie University.

Deng’s work breaks down the various vulnerabilities in a critical part of the computer vision systems used by robots and autonomous vehicles (AVs) to recognize and classify images: convolutional neural networks, or CNNs for short.

He is the lead author of a recent collaborative study with computer scientists at Harvard, UCLA and the University of Sydney, published at the International Conference on Pervasive Computing and Communications, which describes five major security threats to AVs based on CNN logic.

Which is more dangerous – self-driving cars or human drivers?

A 2015 report by the U.S. Department of Transportation found that driver errors were responsible for more than 94 percent of vehicle accidents.

This statistic is often cited by companies like Tesla, Uber and Google’s self-driving car spin-off Waymo, all of which have made huge investments in self-driving vehicle technology, along with big promises that autonomous vehicles could prevent millions of accidents and save thousands of lives.


It makes sense: cars controlled by robots will not exceed speed limits, violate traffic rules, take a curve too fast or be distracted by a text message.

However, their vulnerability to malware and hackers means AVs may not be as secure as we think.

Australia’s path to self-driving cars

Australia is already laying the groundwork for AVs in the near future, with initiatives like the RAC Intellibus trial in South Perth, the Transport for NSW driverless shuttle bus pilot and Coffs Harbour Council’s BusBot.

Safety first: The road toll is expected to drop dramatically in a world of autonomous vehicles, but their vulnerability to hackers means they may not be as safe as we think.

Various government transport plans, such as Victoria’s North East Link road project and the Transport for NSW strategy, now make provision for AVs, and several Australian mining companies routinely use self-driving vehicles on closed sites.

The latest KPMG Global Autonomous Vehicles Readiness Index ranks Australia 15th in the world for its progress towards a self-driving future.

But while human driver error was statistically likely to be behind nearly all of Australia’s 1,125 road deaths in the 12 months to May 2021, could self-driving vehicles still pose a safety threat of their own?

Deng’s research examines a new type of attack that targets the computer logic behind most AVs and identifies ways to protect against it.

What Makes AVs Vulnerable?

Cameras and LiDAR, a laser-pulse distance-measurement system, form the “eyes” of a self-driving vehicle, feeding information about the driving scene and surroundings into a CNN model that makes decisions such as speed adjustments and steering corrections.
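
To make that pipeline concrete, here is a minimal, hypothetical sketch in Python of how a single camera frame might flow through a CNN to produce control outputs. The toy DrivingCNN model, its layer sizes and its two outputs are illustrative assumptions, not any production AV stack.

```python
# A hedged, illustrative sketch of the perception-to-control loop described
# above: a camera frame goes into a CNN, and steering/speed commands come
# out. This toy network is an assumption, not a real AV system.
import torch
import torch.nn as nn

class DrivingCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Two hypothetical outputs: a steering correction and a speed adjustment.
        self.head = nn.Linear(32, 2)

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(frame))

model = DrivingCNN().eval()
frame = torch.rand(1, 3, 224, 224)  # stand-in for one camera frame
with torch.no_grad():
    steering, speed = model(frame)[0].tolist()
print(f"steering: {steering:.3f}, speed: {speed:.3f}")
```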

In focus: A camera monitors the driver of a self-driving car … Deng’s work addresses vulnerabilities in a critical part of computer vision systems.

“Unfortunately, CNNs can be easily fooled by adversarial attacks, such as adding small pixel-level changes to input images that are invisible to the naked eye,” says Deng.
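
To illustrate what such a pixel-level attack can look like, the sketch below uses the well-known Fast Gradient Sign Method (FGSM). The untrained ResNet-18 and the random stand-in frame are placeholders; this is a generic demonstration of the technique, not the attack studied in Deng’s paper.

```python
# A minimal FGSM sketch: nudge each pixel slightly in the direction that
# increases the model's loss, producing a perturbation too small to see.
import torch
import torchvision.models as models

# Untrained stand-in model; a real attack targets the deployed network.
model = models.resnet18(weights=None).eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in camera frame
label = torch.tensor([0])                               # stand-in true class

loss = torch.nn.functional.cross_entropy(model(image), label)
loss.backward()

epsilon = 2 / 255  # perturbation small enough to be invisible to the eye
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()
```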

Deng says these types of attack have already been demonstrated in laboratories – for example, Tencent’s Keen Security Lab staged a fake-image attack on Tesla’s Autopilot system that caused the windshield wipers to turn on when it wasn’t raining.

Work is being done around the world to protect AV autopilot systems from such attacks – but Deng says these defenses often fail to address the inherent weaknesses in CNN logic.

Types of sabotage and how to prevent them

Most modern vehicles are now vulnerable to hacking, but MIT computer scientist Dr. Simson Garfinkel warns that AVs will face new types of attack based on adversarial machine learning – attacks designed to trick algorithms into making mistakes that could be fatal.

“The widespread use of autonomous vehicles will leave many people unemployed, and some of them will be angry,” Garfinkel warned.

In one well-known example of an adversarial attack on machine learning systems, researchers at Carnegie Mellon University fooled facial recognition systems by wearing glasses with specially patterned frames, triggering false results from the algorithm.


Attacks on self-driving vehicle control systems are less well known and are the focus of Deng’s current research.

“There are a number of ways that the ‘black box’ used by AVs can be tampered with, and these can lead to dangerous errors in the AV system that can appear over time,” he says.

Deng explains that if a vehicle connects to the internet to update its software and firmware, attackers could inject malware into the AV’s driving system.

This malware could then intercept the images the vehicle receives and corrupt the information sent to the computer.
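
One plausible countermeasure against this kind of in-transit tampering, sketched below as an assumption rather than anything described in the paper, is to authenticate each camera frame with a keyed hash (HMAC) so that corrupted images are rejected before they reach the CNN. The key handling and frame source here are hypothetical simplifications.

```python
# A hedged sketch: authenticating camera frames with an HMAC so that
# malware tampering with images in transit is detected and rejected.
import hmac
import hashlib

SHARED_KEY = b"replace-with-provisioned-device-key"  # hypothetical key

def sign_frame(frame_bytes: bytes) -> bytes:
    """Camera side: compute a tag over the raw frame."""
    return hmac.new(SHARED_KEY, frame_bytes, hashlib.sha256).digest()

def verify_frame(frame_bytes: bytes, tag: bytes) -> bool:
    """Compute side: reject frames whose tag does not match."""
    expected = hmac.new(SHARED_KEY, frame_bytes, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

frame = b"\x00" * 1024          # stand-in for a raw camera frame
tag = sign_frame(frame)
tampered = b"\x01" + frame[1:]  # malware flips a byte in transit

assert verify_frame(frame, tag)
assert not verify_frame(tampered, tag)
```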

Deng’s recently published study examines how AV developers can protect their systems against various types of machine learning sabotage.

Examples include training AV systems to detect fake images and installing an alert to warn of unusual peaks in computer processing.
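
The second of those defenses might look something like the sketch below: a rolling statistical monitor that raises an alert when per-frame processing time spikes far above recent history. The window size and threshold are illustrative assumptions, not parameters from the study.

```python
# An illustrative monitor that flags unusual peaks in per-frame
# processing time relative to a rolling window of recent latencies.
from collections import deque
import statistics

class ProcessingSpikeAlert:
    """Warn when a frame's latency jumps well above recent history."""

    def __init__(self, window: int = 100, sigma: float = 4.0):
        self.history = deque(maxlen=window)
        self.sigma = sigma

    def observe(self, latency_ms: float) -> bool:
        """Return True if this frame's latency is anomalous."""
        if len(self.history) >= 10:
            mean = statistics.mean(self.history)
            stdev = statistics.pstdev(self.history) or 1e-6
            if latency_ms > mean + self.sigma * stdev:
                return True
        self.history.append(latency_ms)
        return False

monitor = ProcessingSpikeAlert()
for t in [12.1, 11.8, 12.4] * 10 + [48.0]:  # simulated frame latencies (ms)
    if monitor.observe(t):
        print(f"ALERT: unusual processing peak ({t} ms)")
```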

Share the driving

So far, most jurisdictions do not allow AVs to operate on public roads without a human in the driver’s seat, ready to take control at all times.

Traffic accidents involving AVs have so far been very rare but heavily publicized, including a Tesla that plowed into a semi-trailer crossing a Florida freeway in 2016, killing its driver; an Uber that struck and killed a pedestrian in Arizona in 2018; and another fatal crash in 2018 in which a Tesla hit a California highway barrier.

But as AV manufacturers perfect their vehicle control systems, we’re likely to see a shift where autopilot mode becomes far more acceptable and potentially eliminates the need for human drivers.

Deng’s work to protect AVs from dangerous malware could play a critical role in future vehicle security.

Yao Deng is a PhD student in the Department of Computing at Macquarie University

Dr. James Xi Zheng is a Senior Lecturer in the Department of Computing and Director of the Intelligent Systems Research Group (itseg.org).
