Driverless Uber

Researchers have warned we must not lose sight of the life-saving potential of autonomous vehicles, following the death of a pedestrian who was hit by one of Uber’s driverless cars while crossing the road in Tempe, Arizona in the US.

However, eyes are now on the regulation of driverless car testing and the sophistication of the technology, with many arguing it is too soon to allow completely unmanned driving, as states such as Arizona have done in order to attract investment.

The death of the woman, who was crossing the road with a bicycle, is believed to be the first instance of an autonomous vehicle killing a pedestrian. In the aftermath, Uber said it had suspended testing in Tempe, as well as in other cities including Pittsburgh, San Francisco and Toronto.

Professor Toby Walsh, research leader of the Optimisation Research Group at the CSIRO’s Data61, on Tuesday said he had warned that such a death was likely to occur with the current level of automation technology.

In 2016 he said driverless car artificial intelligence was at a similar level to that of a human driver, but argued this was not yet advanced enough to give driverless cars full control on the road.

He noted that Silicon Valley’s “fail fast” culture – where start-ups push into a space rapidly, take risks and learn from mistakes in order to innovate and gain market share – was not an appropriate strategy for driverless car testing, where the cost of failure could be a human life.

“No one is likely to be seriously hurt when their news feed is messed up. But fail fast is too risky for public safety,” Professor Walsh said.

“I would want autonomous cars to be much safer before we give them control.”

While there was a driver behind the wheel in this instance, Arizona has purposely taken a light-touch approach to regulation in order to woo the Silicon Valley set and attract investment. Earlier this month the state’s governor signed an executive order allowing companies to test vehicles on its roads without a driver; California is set to follow suit in April.

Don’t forget future safety outcomes

Professor Walsh said it was important to remember that in the future autonomous vehicles would likely save more lives than they would cost.

More than 37,000 people died on US roads in 2016, the vast majority due to human error, while in Australia 1,225 people died last year. Autonomous vehicles are expected to cut these figures dramatically.

“We must balance this one death against the 10 or so people killed every day in the US by human drivers,” Professor Walsh said.

However, he said this did not mean that more couldn’t be done to ensure a smoother transition to self-driving cars and to protect lives.

“For example, there ought to be a central authority, like that for aircraft accidents, that explores the reasons for a crash and shares the lessons with all manufacturers and operators. It shouldn’t be a race where no one talks to each other.”

Dr Zubair Baig, a senior lecturer in cyber security at Edith Cowan University, said more testing needed to occur.

“Autonomous vehicles are here to stay and we must have proper laws in place to safeguard the rights of car accident victims and owners alike,” he said.

“Dynamics of the surroundings involving unpredictable human behaviour, objects coming off unsecured loads of other vehicles and the ever-present cyber threat to these vehicles would certainly need a serious study by all stakeholders before it’s too late.

“Advanced AI can help deter these hazards, provided that the autonomous vehicle is rigorously tested in dynamic road conditions, which seems to be lacking in the current space.”

Associate Professor Hussein Dia, Future Urban Mobility project leader at Swinburne University of Technology’s Smart Cities Research Institute, said the tragic event would be a turning point in the conversation about driverless car testing in an urban context.

“A human fatality will increase public and regulatory scrutiny of this technology, which is aimed at achieving the opposite outcome – saving lives.

“There will be questions raised about whether the autonomous vehicles should be tested on open roads. More importantly, the key question that regulators should be addressing is which companies should be allowed to test them in real-world environments.”

He said not all companies in the space had technology at the same level of refinement.

“Some companies have been testing the self-driving software for years and their algorithms are much more developed than others,” Dr Dia said.

“There needs to be more scrutiny of the underlying AI systems before the autonomous vehicles are allowed on open roads.”

Dr Dia also noted it was unfortunate that blame was being shifted towards the victim, with police saying 49-year-old Elaine Herzberg had crossed the road “outside of the crosswalk”.

“This is not a valid justification for traffic deaths and should not be used to provide a cover for the companies who operate these trials.”

Join the Conversation



  1. The ethical questions regarding ‘driver-less vehicles’ and legal liability when accidents occur are pushed into the background by those advocating for driver-less cars. The idea that it’s okay for people to die now because more lives will be saved in the future is disturbing. Also, the rules programmed into an autonomous vehicle governing when it will sacrifice its occupants to minimise other deaths need to be made very transparent.

  2. It depends on whether a human driver could have prevented the accident. If someone, for example, jumps in front of a car, no vehicle could avoid a fatality. The true question is whether the driverless vehicle is safer or less safe than a human-driven one. Also consider that every time a driverless vehicle is involved in an accident, research can often lead to an improvement in software which can be pushed out via mobile phone technology to upgrade all such vehicles. I am sorry to say that with human drivers this possibility is not available.