Google is one of the largest proponents of self-driving cars, but programming them raises a hard problem. How should a self-driving car deal with other motorists? How about pedestrians?
Is it possible to calculate moral questions via analytics and programming? It is.
And that is the scary part. In the movie I, Robot, Will Smith’s character is saved by a robot that calculates his odds of survival are better than those of a small child. The rescue haunts him because he feels any human would have chosen the little girl over the grown man. What was science fiction in 2004 is now a very real question.
When a self-driving car cannot avoid hitting pedestrians, its programming must decide whom to hit. Will it be the baby in the stroller or the elderly woman who has only a few more years to live? These are the moral dilemmas Google currently faces, and here is what the company is doing about them.
Google has hired dozens of philosophers, lawyers, and ethics professors to debate these questions and help shape the algorithms for self-driving cars. A programmed car has to save as many lives as possible, and sometimes that might mean killing the passenger (see the thought experiment known as “the trolley problem”).
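To make the idea concrete, a "save as many lives as possible" rule can be sketched as picking whichever action minimizes expected casualties. This is a deliberately simplified, hypothetical illustration of the trolley-problem logic, not Google's actual algorithm, whose details are not public.

```python
def choose_action(actions):
    """Pick the action with the lowest expected number of casualties.

    `actions` maps an action name to a list of (probability_of_death, count)
    tuples describing the people that action puts at risk.
    """
    def expected_casualties(risks):
        return sum(p * n for p, n in risks)

    return min(actions, key=lambda name: expected_casualties(actions[name]))


# Trolley-style scenario: staying the course endangers five pedestrians,
# while swerving endangers the one passenger.
decision = choose_action({
    "stay_course": [(0.9, 5)],  # 5 pedestrians, ~90% chance of death each
    "swerve":      [(0.9, 1)],  # 1 passenger, ~90% chance of death
})
print(decision)  # -> swerve
```

Even this toy version shows why the debate matters: a purely utilitarian rule will sometimes choose to sacrifice the passenger, which is exactly the outcome many buyers would object to.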
Experts at Google are also drawing on reporting from MIT Technology Review to identify problems with moral logic and explore new approaches to programming self-driving cars. As one professor at the University of South Carolina put it, “we have a technology that potentially could save a lot of people, but is going to be imperfect and is going to kill.” It’s a chilling prospect.
As a society, we have to choose how we want to regulate these self-driving cars. They can carry medicine and supplies to remote, inhospitable regions of the world, and they can save lives as a result, but that comes at a cost. Some people will be caught in the crossfire between an algorithm and a speeding self-driving car.