Programmers developing algorithms for autonomous vehicles should prepare for their work to become the subject of a heated ethical debate rather soon. The growing popularity of self-driving cars is already giving headaches to lawyers and legislators, because of the complexity of deciding what is right and wrong in the context of road safety.
Self-driving cars are enjoying increasingly favorable press and attracting growing interest. Industry is keeping a close eye on the achievements of Tesla and Google. The latter has been releasing regular reports showing that autonomous vehicles are clocking up ever more miles without failing or endangering traffic safety. A growing sense of optimism regarding the future of such vehicles can be felt all across the automotive industry. Mercedes, Ford, Citroen, and Volvo will soon launch mass production of their own.
Nearly every major manufacturer today has the capacity to build a self-driving car, test it and prepare it for traffic. But this is only a start. Demand for such vehicles and the revenues derived from their sale will continue to be limited for a while due to the complex legal and… ethical questions that need to be resolved.
By 2025, autonomous cars will account for 13% of all vehicles on the road, a market worth $42 billion; by 2035, that market will grow to $77 billion and 25% of all cars. Source: BCG
Top three reasons for buying a fully autonomous vehicle. Source: BCG
Algorithms of life and death
An example is now in order. Picture a road with a car driven by a man. Next to him sits his wife, with two kids in the back seat. Suddenly, a child chasing a ball runs onto the road. The distance between the child and the car is minimal. What is the driver’s first instinct? Either to slam on the brakes, or to veer off sharply… crashing the car into a tree, a wall, or a ditch or, in the worst-case scenario, mowing down a group of pedestrians. What would happen if an autonomous vehicle faced the same choice? Imagine that the driver, happily freed of the obligation to operate the vehicle, is enjoying some shuteye. It is therefore entirely up to the car to respond to the child’s intrusion. What will the car do? We don’t know. It all depends on how it is programmed.
Briefly put, in ethical terms there are three theoretical programming possibilities that will largely determine the vehicle’s response.
Under the first, what counts in the face of an accident and a threat to human life is the joint safety of all the people involved (i.e. the driver, the passengers, and the child on the road).
An alternative approach puts a premium on the life of pedestrians.
A third one gives priority to protecting the life of the driver and the passengers.
The actual response depends on the algorithm selected by a given car maker.
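The three approaches above differ only in whose harm the algorithm weighs when comparing maneuvers. As a purely illustrative sketch (the function, the policy names, and the numeric "harm estimates" are all hypothetical, not taken from any real manufacturer's system), the divergence can be shown in a few lines:

```python
from enum import Enum

class Policy(Enum):
    MINIMIZE_TOTAL_HARM = 1   # weigh everyone's safety jointly
    PROTECT_PEDESTRIANS = 2   # put a premium on pedestrians' lives
    PROTECT_OCCUPANTS = 3     # prioritize the driver and passengers

def choose_maneuver(policy, brake_occ, brake_ped, swerve_occ, swerve_ped):
    """Pick 'brake' or 'swerve' given hypothetical harm estimates
    (e.g. expected casualties) for occupants and pedestrians."""
    if policy is Policy.MINIMIZE_TOTAL_HARM:
        brake, swerve = brake_occ + brake_ped, swerve_occ + swerve_ped
    elif policy is Policy.PROTECT_PEDESTRIANS:
        brake, swerve = brake_ped, swerve_ped
    else:  # Policy.PROTECT_OCCUPANTS
        brake, swerve = brake_occ, swerve_occ
    return "brake" if brake <= swerve else "swerve"

# Same situation, different ethics: braking endangers two occupants,
# swerving endangers one pedestrian.
print(choose_maneuver(Policy.PROTECT_OCCUPANTS, 2, 0, 0, 1))    # swerve
print(choose_maneuver(Policy.PROTECT_PEDESTRIANS, 2, 0, 0, 1))  # brake
```

The point of the sketch is not the arithmetic but that identical sensor input can yield opposite maneuvers depending on which policy the manufacturer has baked in.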
Recently, a Mercedes representative stated that his company’s approach is to value the safety of autonomous vehicle passengers the most. OK, but would programming a car in such a way be legal? Probably not. It is therefore difficult to view such declarations as binding. However, this does not make the liability for vehicle algorithms any less critical. And somebody will have to bear that liability.
Autonomous car accident decision algorithm. Option: the optimal safety outcome for all parties. Source: MCHRBN.net
The research that kills demand?
It may also be interesting to go over the opinions of prospective self-driving car buyers. Studies on accidents and safety have been conducted in several countries. The prestigious journal Science cites a study involving slightly over 1,900 people. Asked whose life matters more in an accident, that of the passengers or that of the passers-by, the majority of respondents pointed to the latter. However, asked whether they would like this view to become law, and whether they would purchase an autonomous vehicle if it did, they replied “no” on both counts!
If these examples are indeed emblematic of today’s ethical confusion, corporations cannot be very happy.
Until prospective buyers are fully certain what to expect when a self-driving car (which in theory should be completely safe) causes an accident while they are on board, one can hardly expect demand for such products to pick up. People will not trust the technology unless they feel protected by law. But will lawmakers be able to foresee all possible scenarios and situations?
Dashboard of futuristic autonomous car – no steering wheel. Nissan IDS Concept. Source: Nissan
The future is safer
One can hardly ignore the extent to which modern technology itself increases our sense of safety. We can be optimistic about the future and imagine a time when almost all vehicles out there are autonomous. Their internal electronic systems will communicate in real time on the road and make cars respond properly, including to other road users. Safety will be ensured by on-board motion detectors and radar that adjust vehicle speed to the flow of traffic and to the general surroundings.
Even today, autonomous vehicle manufacturers love to invoke studies showing that self-driving cars will substantially reduce accident rates. For instance, according to the consultancy KPMG, by 2030 autonomous vehicles will save some 2,500 lives in the United Kingdom alone. Unfortunately, such projections remain purely theoretical. Clearly, roads will long remain filled with conventional vehicles lacking electronic drivers.
95% of crashes are caused by human error. Source: Google
Time for changes
By and large, I personally believe that technology will benefit us all. However, the need for regulation and ethical issues will never go away. As of now, the development of the legal framework has a long way to go. Considering that new technologies tend to catch us by surprise with their speed of development and how rapidly they change reality, I am not sure we can defer certain decisions any longer.