Self-driving cars will overtake human drivers in no time

William Reville: As software improves, the supremacy of such vehicles will become clear

Can automatic vehicles make morally appropriate choices? Photograph: iStock

Not long ago, self-driving or autonomous vehicles were taken seriously only in science fiction, but they are now becoming a reality. A society in which all motoring takes place in self-driving cars will look quite different to the current scene, with far fewer road fatalities and injuries and the emergence of new social practices. Some implications of the rise of such vehicles are examined by Stephen Skippon and Nick Reed in a paper published last August in The Psychologist.

Driving is both an important means for most of us to engage in modern life and an essential facilitator of economic activity. But although it offers many benefits, driving exacts a heavy price: mass motoring degrades the environment and kills many people – there were 187 fatalities in the State in 2016.

Human error, the cause of most crashes, is largely eliminated in self-driving cars. Fully automated cars will not get tired, distracted or impaired by alcohol or drugs; they will not get upset, will not take risks and will always be on high alert. Widespread use of such cars would also reduce road congestion, improve air quality and cut carbon emissions. It would free passengers to occupy themselves with something other than driving and would allow people currently unable to drive to travel independently by car.

Self-driving vehicles will bring many other changes too. For example, automated commercial trucks will have no drivers to get tired or hungry, eliminating the need for highway truck stops and cafes.

Three scenarios

Skippon and Reed discuss the transition to self-driving cars in three scenarios: automation of driving on multi-lane highways; automation of driving in urban centres; and how autonomous vehicles might choose between behaviours in an emergency where either alternative would cause harm to humans.

Automation of motorway driving is the simplest scenario because the range of behavioural states required of the car is limited and easily accommodated by software. Even now, some cars are equipped with adaptive cruise control, which keeps the car's speed steady until it closes on a slower car ahead. Sensors detect the narrowing gap and the software slows the car to maintain a following distance appropriate to its speed. It is already feasible to develop software to handle all the situations encountered in motorway driving. Fully automated motorway driving would eliminate tailgating and exceeding the speed limit, currently the main causes of accidents.
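
To make the idea concrete, here is a minimal sketch of the control loop such a system might run, assuming a hypothetical sensor reading of the gap to the car ahead and a simple proportional rule; real adaptive cruise control is far more sophisticated than this.

```python
# Minimal sketch of an adaptive-cruise-control step (illustrative only).
# Assumes a hypothetical radar reading of the gap to the car ahead;
# production systems are far more complex.

SET_SPEED = 33.0    # driver-selected cruise speed in m/s (about 120 km/h)
TIME_GAP = 2.0      # desired headway to the car ahead, in seconds
K_SPEED = 0.5       # gain pulling the car toward its set speed
K_GAP = 0.4         # gain correcting the gap error
MAX_ACCEL = 2.0     # comfort limit on acceleration, m/s^2
MAX_BRAKE = -3.5    # comfort limit on braking, m/s^2

def acc_command(own_speed, gap_to_lead=None):
    """Return a target acceleration (m/s^2) for the next control step."""
    if gap_to_lead is None:
        # No slower car detected: converge gently on the set cruise speed.
        accel = K_SPEED * (SET_SPEED - own_speed)
    else:
        # Car ahead: hold a speed-dependent safe gap (constant time headway).
        desired_gap = TIME_GAP * own_speed
        accel = K_GAP * (gap_to_lead - desired_gap)
    return max(MAX_BRAKE, min(accel, MAX_ACCEL))

# Example: travelling at 30 m/s with only 40 m to the car ahead,
# the desired gap is 60 m, so the command is to brake.
print(acc_command(30.0, 40.0))  # -3.5 (clipped to the braking limit)
```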

Automation of driving in urban situations is much more difficult because of the huge number of unpredictable elements, particularly unpredictable human behaviour. Designers are tackling this problem with machine learning, that is, software algorithms that learn and improve their performance from experience.
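
Skippon and Reed do not name a specific algorithm, but as one illustration of software that "learns from experience", here is a minimal tabular Q-learning update, a classic reinforcement-learning technique; real driving systems rely on far richer models, so treat this purely as a sketch.

```python
import random
from collections import defaultdict

# Illustrative only: a minimal Q-learning loop, one of many ways software
# can improve its behaviour from experience. The states, actions and
# rewards here are placeholders, not a real driving model.

ALPHA = 0.1    # learning rate
GAMMA = 0.95   # discount factor for future reward
EPSILON = 0.1  # fraction of actions taken at random, to keep exploring

ACTIONS = ["brake", "coast", "accelerate"]
q_table = defaultdict(float)  # maps (state, action) -> estimated value

def choose_action(state):
    """Mostly exploit the best-known action, occasionally explore."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table[(state, a)])

def learn(state, action, reward, next_state):
    """Nudge the value estimate toward the observed reward plus the
    discounted value of the best follow-on action."""
    best_next = max(q_table[(next_state, a)] for a in ACTIONS)
    target = reward + GAMMA * best_next
    q_table[(state, action)] += ALPHA * (target - q_table[(state, action)])
```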

The ‘trolley problem’

The final scenario involves entrusting moral choices to self-driving cars. This is the so-called "trolley problem". Imagine you see a massive trolley hurtling down a railway track. Right in its path is a cart with six people on board; the trolley will kill all six if it collides with the cart. You then notice that another rail line branches from the main line between the oncoming trolley and the cart, and right beside your hand is a switch that would allow you to divert the trolley on to this branch line. But you now see another cart on the branch line with two people in it. Do nothing and six people die; switch lines and two people die. What should you do?

Could an automatic vehicle make a morally appropriate choice? Skippon and Reed point out that, since most of us never encounter a trolley problem in a lifetime of driving, any particular self-driving car is equally unlikely ever to meet one. They argue that even if such a car did face the problem, there is no reason to think it would necessarily be worse at solving it than a human driver. In any event, the software's moral choices will be pre-programmed, directly or indirectly, by humans, so the trolley problem is not very different to the situation we currently face as individual manual drivers.
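
As a purely illustrative sketch of what a "pre-programmed" moral choice could look like, the following toy code encodes the article's example as a crude harm-minimisation rule; the names and numbers are the article's, and nothing here reflects how any real vehicle decides.

```python
from dataclasses import dataclass

# A toy formulation of the trolley problem as a pre-programmed
# harm-minimisation rule (illustrative only).

@dataclass
class Option:
    name: str
    expected_casualties: int

def choose(options):
    """Pick the manoeuvre with the lowest predicted harm."""
    return min(options, key=lambda o: o.expected_casualties)

# The article's example: stay on course (six die) or switch lines (two die).
decision = choose([
    Option("stay on the main line", 6),
    Option("switch to the branch line", 2),
])
print(decision.name)  # -> switch to the branch line
```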

Overall, I can think of few reasons why we should not eagerly anticipate the widespread introduction of such vehicles.