My article published in FORBES on September 12th, 2018.
The autonomous vehicles debate has shown us just how many challenges lie ahead in our relationship with intelligent technology. As we test autonomous vehicles, we also test our confidence in them and (hopefully) learn lessons in ethics.
Advances in algorithms and robotics commonly evoke extreme, even irrational reactions. Our fascination with technology often goes hand in hand with an almost childlike fear. As British science fiction writer and futurist Arthur C. Clarke once wrote, “Any sufficiently advanced technology is indistinguishable from magic.”
In an unsettled world, emotional anxiety about the present, let alone the future, is perfectly understandable. This anxiety is not all bad; it forces us to confront our fundamental beliefs and values, such as what is good and what is evil.
Safer than people
Autonomous cars are a vivid example of how technological advances polarize debate. Personally, I see self-driving vehicles as an opportunity to improve road safety. I am not in the least surprised by the upbeat predictions that 15 years from now it will feel odd to think back to a world in which cars were still driven by humans.
However, to make sure changes move in the right direction, we need universal standards to govern the new technology. By investing heavily in autonomous vehicles, their manufacturers make a commitment, as it were, to society. Their promise is that widespread acceptance of driverless cars on public roads is going to prevent tens of thousands of traffic casualties. (In 2017, over 40,000 people died in motor vehicle accidents in the United States.)
The proponents of autonomous vehicles cite studies that show that 90 percent to 95 percent of crashes involve human error.
Last March, an autonomous vehicle killed a woman crossing a road in Tempe, Arizona. Neither the car’s safety systems nor the person seated behind the wheel managed to prevent the fatality. This inspired some to call the accident a momentous event: the first human victim of artificial intelligence. Soon, people were speaking out against allowing profit-driven producers of imperfect technologies to use public roads to test their inventions. Skeptics and critics expressed doubt over whether people would ever embrace autonomous vehicles. Early this year, a Reuters poll found that two-thirds of Americans were “wary” of driverless vehicles, and that was before the Tempe accident.
The accident revealed some troubling problems, among them overworked test drivers and the decision to halve the number of test drivers per vehicle. (Initially, two drivers, not one, would always oversee a vehicle during a test drive.) By choosing to increase driver comfort (and reduce frequent braking), testing companies compromised vehicle responsiveness to emerging obstacles. Journalists uncovered internal reports revealing the inability of test vehicles to drive more than a mile without human intervention. Video from the Tempe collision showed the driver busying himself with his smartphone and failing to watch the road.
One could call this human error, but the error runs deeper, and it’s even more human.