It is the question to ask at the Simon Fraser University City Program’s seminar with autonomous vehicle expert Tim Papandreou, and the question I did ask Ole Thorson of the International Federation of Pedestrians.
When an autonomous vehicle is going to crash into a crowd of pedestrians, who does the car save? Does it save the vehicle occupants first? And who makes that decision?
Caroline Lester asks that question in The New Yorker. While a “level four” autonomous vehicle can drive itself on highways, it still needs a human to take over elsewhere. “Level five” vehicles will make their own judgements, including the kind of decision posed by what is called “the trolley problem”.
“If a car detects a sudden obstacle—say, a jackknifed truck—should it hit the truck and kill its own driver, or should it swerve onto a crowded sidewalk and kill pedestrians? A human driver might react randomly (if she has time to react at all), but the response of an autonomous vehicle would have to be programmed ahead of time. What should we tell the car to do?”
The American government has guidelines for autonomous weapons: they must not be programmed to decide to kill independently of a human. It has, however, issued no guidance on the ethics of driverless vehicles. Germany has created some guidelines, the most notable being that “In the event of unavoidable accident situations, any distinction based on personal features (age, gender, physical or mental constitution) is strictly prohibited.”
That decision may have been made in part because Volkswagen sells more cars than any other manufacturer and so carries a global responsibility. The question becomes: will other countries bring different moral values to deciding whose lives autonomous vehicles save in a potentially fatal crash? Do local biases and customs count?
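The German rule is easy to state, but as Lester’s quote above makes clear, it still has to be turned into code that runs before the crash happens. Below is a minimal, purely illustrative sketch of what that could look like; the names (Person, harm_cost, choose_path) are hypothetical and describe no real manufacturer’s software. The only point is that the cost function counts the people at risk while deliberately ignoring the personal features the guideline prohibits.

```python
# Illustrative sketch only: a toy decision rule for an unavoidable-crash
# scenario, written to respect the German guideline quoted above.
# All names here are hypothetical, not taken from any real AV system.

from dataclasses import dataclass

@dataclass
class Person:
    age: int           # known to the sensor stack, but deliberately unused
    is_occupant: bool   # occupant vs. pedestrian, also deliberately unused

def harm_cost(people_at_risk: list[Person]) -> int:
    """Count lives at risk, ignoring age, gender, occupancy or any
    other personal feature, as the guideline requires."""
    return len(people_at_risk)

def choose_path(options: dict[str, list[Person]]) -> str:
    """Pick the manoeuvre that puts the fewest people at risk."""
    return min(options, key=lambda path: harm_cost(options[path]))

# Example: braking endangers one occupant, swerving endangers two pedestrians.
options = {
    "brake_in_lane": [Person(age=40, is_occupant=True)],
    "swerve_to_sidewalk": [Person(age=8, is_occupant=False),
                           Person(age=70, is_occupant=False)],
}
print(choose_path(options))  # -> "brake_in_lane"
```

Even this toy version shows where the arguments start: someone has to decide that counting heads is the right cost function, and that occupancy does not get a weight.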
With driverless vehicles anticipated to flood the market in the next twenty to fifty years, Lester points out that these machines will be programmed to make judgement calls. What will be crucial is regulation prohibiting companies from programming vehicle software to save vehicle occupants at the expense of pedestrians. Indeed, if vehicles are allowed to favour their occupants, separating driverless vehicles from other road users will be necessary. Lester states: “In a future dominated by driverless cars, moral texture will erode away in favor of a rigid ethical framework. Let’s hope we’re on the right side of the algorithm.”
Ole Thorson of the International Federation of Pedestrians observed “Every life is valuable and should be treated equitably in autonomous vehicle programming. The conversation of who lives and who dies should not be happening.”
Images: Horsepoweronline.com, Automotiveworld.com
In my opinion, this isn’t the really important question. It’s like wondering what you should do if you get struck by lightning.
The bigger question is how many more people have to die in car crashes caused by humans before we get this technology on the roads?
Indeed, once self-driving is safer, choosing to drive manually will be the moral equivalent of drunk driving today, and will likely become illegal. My sense, however, is that solving the AI required for dense, messy cities is still a long way off.
I recommend physicist Sean Carroll’s excellent “Mindscape” podcast episode of January 21, 2019, which features Derek Leben, a moral philosopher studying the subject.