Can machines tell right from wrong?

Programmers involved in developing algorithms for use in autonomous vehicles should prepare to have their work become the subject of a heated ethical debate rather soon. The growing popularity of self-driving cars is already giving headaches to lawyers and legislators. The reason is the complexities of what is right and wrong in the context of road safety.

Self-driving cars are enjoying ever better press and attracting growing interest. Industry is keeping a close eye on the achievements of Tesla and Google. The latter has been releasing regular reports showing that autonomous vehicles are clocking up ever more miles without failing or endangering traffic safety. A growing sense of optimism regarding the future of such vehicles can be felt all across the automotive industry. Mercedes, Ford, Citroen, and Volvo will soon launch their mass production.

 

Nearly every major manufacturer today has the capacity to build a self-driving car, test it and prepare it for traffic. But this is only a start. Demand for such vehicles and the revenues derived from their sale will continue to be limited for a while due to the complex legal and… ethical questions that need to be resolved.

 

By 2025, autonomous cars will account for 13% of all vehicles on the road, a market worth $42 billion; by 2035, their share will reach 25%, worth $77 billion. Source: BCG

 

Top three reasons for buying a fully autonomous vehicle. Source: BCG

 

Algorithms of life and death

An example is now in order. Picture a road with a car driven by a man. Next to him sits his wife, with two kids in the back seat. Suddenly, a child chasing a ball runs onto the road. The distance between the child and the car is minimal. What is the driver’s first instinct? Either to slam on the brakes, or to swerve sharply… sending the car into a tree, a wall or a ditch or, in the worst-case scenario, into a group of pedestrians. What would happen if an autonomous vehicle faced the same choice? Imagine that the driver, happily freed of the obligation to operate the vehicle, is enjoying some shuteye. It is therefore entirely up to the car to respond to the child’s intrusion. What will the car do? We don’t know. It all depends on how it is programmed.

 

Briefly put, in ethical terms there are three theoretical programming possibilities that will largely determine the vehicle’s response.

Under the first, the assumption is that what counts in the face of an accident and a threat to human life is the joint safety of all the people involved (i.e. the driver, the passengers and the child on the road).

An alternative approach puts a premium on the life of pedestrians.

A third one gives priority to protecting the life of the driver and the passengers.

The actual response depends on the algorithm selected by a given car maker.
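As a rough illustration only, the three approaches could be encoded as selectable policies that rank candidate maneuvers by expected harm. Everything here is a made-up sketch: the policy names, the weighting scheme and the risk numbers are my assumptions, not anything a real car maker has published.

```python
from enum import Enum

class CrashPolicy(Enum):
    """The three theoretical approaches described above (names are hypothetical)."""
    MINIMIZE_TOTAL_HARM = 1   # joint safety of everyone involved
    PROTECT_PEDESTRIANS = 2   # premium on the life of pedestrians
    PROTECT_OCCUPANTS = 3     # priority to the driver and passengers

def choose_maneuver(policy, maneuvers):
    """Return the maneuver with the lowest expected harm under `policy`.

    `maneuvers` maps a maneuver name to a pair
    (occupant_risk, pedestrian_risk) -- rough, illustrative probabilities
    of serious injury, not real crash statistics.
    """
    def score(risks):
        occupant, pedestrian = risks
        if policy is CrashPolicy.MINIMIZE_TOTAL_HARM:
            return occupant + pedestrian          # everyone counts equally
        if policy is CrashPolicy.PROTECT_PEDESTRIANS:
            return pedestrian * 100 + occupant    # pedestrian harm dominates
        return occupant * 100 + pedestrian        # occupant harm dominates
    return min(maneuvers, key=lambda name: score(maneuvers[name]))

# The scenario from the example above, with made-up risk numbers:
options = {
    "brake_hard":      (0.1, 0.6),   # occupants mostly safe, child at risk
    "swerve_to_ditch": (0.5, 0.0),   # child safe, occupants at risk
}
print(choose_maneuver(CrashPolicy.PROTECT_PEDESTRIANS, options))   # swerve_to_ditch
print(choose_maneuver(CrashPolicy.PROTECT_OCCUPANTS, options))     # brake_hard
```

The point of the sketch is how starkly the chosen policy, not the sensor data, flips the outcome for the same situation.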

 

Recently, a Mercedes representative stated that his company’s approach is to value the safety of autonomous vehicle passengers the most. OK, but would programming a car in such a way be legal? Probably not. It is therefore difficult to view such declarations as binding. However, this does not make the liability for vehicle algorithms any less critical. And somebody will have to bear that liability.

 

Autonomous car accident decision algorithm. Option: the optimal safety outcome for all parties. Source: MCHRBN.net

 

 

The research that kills demand?

It may also be interesting to go over the opinions of prospective self-driving car buyers. Studies on accidents and safety have been conducted in several countries. The prestigious Science magazine quotes a study which involved slightly over 1,900 people. Asked whose life is more important in case of an accident: that of the passengers or that of the passers-by, the majority of the respondents pointed to the latter. However, responding to the question of whether they would like this view to become law, and whether they would purchase an autonomous vehicle if this was the case, they replied “no” on both counts!

If these examples are indeed emblematic of today’s ethical confusion, corporations cannot be very happy.

 

Until prospective buyers are fully certain what to expect when a self-driving car (which in theory should be completely safe) causes an accident while they are behind the wheel, one can hardly expect the demand for such products to pick up. People will not trust the technology unless they feel protected by law. But will lawmakers be able to foresee all possible scenarios and situations?

 

Dashboard of futuristic autonomous car – no steering wheel. Nissan IDS Concept. Source: Nissan

 

The future is safer

One can hardly ignore the extent to which modern technology itself increases our sense of safety. We can be optimistic about the future and imagine a time when almost all vehicles out there are autonomous. Their on-board electronic systems will communicate in real time on the road and allow cars to respond properly to one another and to other road users. Safety will be ensured by on-board motion detectors and radars that adjust vehicle speed to the flow of traffic and to the general surroundings.
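A minimal sketch of what "adjusting speed to the flow of traffic" could mean in code: a following rule that matches the vehicle ahead while keeping a safety gap. The function name, the gap values and the simple back-off rule are all my illustrative assumptions, not any manufacturer's actual control logic.

```python
def target_speed(own_speed, lead_speed, gap, min_gap=10.0, headway=1.5):
    """Pick a speed (m/s) for an autonomous car following another vehicle.

    Keeps at least `min_gap` metres plus a time-headway margin behind the
    vehicle ahead; otherwise simply matches its speed. All numbers are
    illustrative assumptions, not values from any real control system.
    """
    desired_gap = min_gap + headway * own_speed
    if gap < desired_gap:
        # Too close: drop below the lead vehicle's speed to reopen the gap.
        return max(0.0, lead_speed - 2.0)
    return lead_speed

print(target_speed(20.0, 18.0, 30.0))  # 16.0 -- gap too small, back off
print(target_speed(20.0, 18.0, 50.0))  # 18.0 -- gap fine, match the flow
```

Real adaptive cruise controllers are far more sophisticated, but the shape of the decision, sensor inputs in, a speed target out, is the same.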

 

Even today, autonomous vehicle manufacturers love to invoke studies which show that self-driving cars will substantially reduce accident rates. For instance, according to the consultancy KPMG, by 2030 in the United Kingdom alone, autonomous vehicles will save the lives of ca. 2,500 accident victims. Unfortunately, such projections remain purely theoretical. Clearly, for a long time to come, roads will still be filled with conventional vehicles without electronic drivers.

 

95% of crashes are caused by human error. Source: Google

 

Time for changes

By and large, I personally believe that technology will benefit us all. However, the regulatory and ethical questions will never go away. As of now, the development of the legal framework has a long way to go. Considering how often new technologies catch us by surprise with the speed of their development and how rapidly they change reality, I am not sure we can defer certain decisions any longer.

 

Related articles:

Machine Learning. Computers coming of age

Artificial Intelligence as a foundation for key technologies

Artificial Intelligence for all

End of the world we know, welcome to the digital reality

The brain – the device that becomes obsolete

On TESLA and the human right to make mistakes

 

 

 

7 comments

  1. Simon GEE

    When I was learning I drove in the downtown during rush hour one day. There was a busy intersection and so I waited for the other side to clear before I went through. As I did, someone from the side road did a red right turn to take up the spot that had cleared, leaving me stuck in the middle of the intersection for the entire light, blocking all the traffic. It was some of the most ridiculous driving I’ve seen and it was like my 3rd time driving ever.
    Also, last night I saw someone who started going when the light turned green but was so slow that they were the only person to make the light from their lane.

  2. Simon GEE

    It seems like there is no field of science that Watson doesn’t have the ability to revolutionize. People on the futurology community have been talking up AI for so long and it’s finally coming to fruition. Our society is going to experience profound increases on productivity when this technology is cheap and ubiquitous.

  3. Adam Spark Two

    What are emotions? What is wrong, what is right? What is the difference? Emotions root from our memories and experiences that shape the way we create and perceive these emotions. All human beings experience emotions differently; the common ground between us all is that the emotions we fabricate for ourselves seem real to us - and us alone. As an individual, I cannot experience the emotions that another has created, only my own reaction to them. If the machine perceives its reactions as emotions, and real to itself, then cannot it be said that the machine is feeling emotions just as you or I would? Has this machine not reached a level of being that we could consider to be ‘human’?

  4. Check Batin

    “Autonomous cars are ‘the vaccine that will cure deaths on the road’,” says E. Musk. “For each day that we can accelerate connected and autonomous vehicles, we will save 3,300 people a day.”

  5. John Accural

    IMHO there is no reason for the car to make moral judgments. All it has to do is calculate the minimum damage possible, simple as that. Realistically speaking, the three scenarios mentioned are all nonsense. Because the fact of the matter is, if we had true self-driving cars in cities, these kinds of deaths would be 0. I can’t imagine a scenario where at 25 - 30mph the self-driving car can’t figure out a way to minimize damage to 0. The stopping distance from 30mph is only 75 ft. Then you subtract the thinking distance (30ft), giving you only 45 ft. This is a significant difference for a victim.

    • Check Batin

      But it’s still an important issue, quite well described in this article. A crash above 45mph is always fatal