When Ethical Dilemmas Meet Self-Driving Cars

Road testing has begun for Google’s self-driving cars, and Elon Musk promises to have current Tesla models ready to drive themselves on “major roads” by this summer, but some writers are starting to wonder whether the cars should be making life-and-death decisions.

“How will a Google car, or an ultra-safe Volvo, be programmed to handle a no-win situation — a blown tire, perhaps — where it must choose between swerving into oncoming traffic or steering directly into a retaining wall?” asks Science Daily, referencing new work from the University of Alabama at Birmingham.

It’s a new take on what’s called the Trolley Problem: A person must choose between throwing the switch on a train track to save a school bus full of children from an oncoming train, or the person’s own child who has fallen on the tracks and cannot stand up.

“The computers will certainly be fast enough to make a reasoned judgment within milliseconds,” the article continues. “They would have time to scan the cars ahead and identify the one most likely to survive a collision, for example, or the one with the most other humans inside. But should they be programmed to make the decision that is best for their owners? Or the choice that does the least harm—even if that means choosing to slam into a retaining wall to avoid hitting an oncoming school bus? Who will make that call, and how will they decide?”

Does the life of one person always outweigh the lives of a few? And if so, should the owner of a self-driving car understand that he or she will be on the losing end of such scenarios? It’s a classic philosophical dilemma pitting utilitarianism against deontology: utilitarianism says to take the approach that provides the greatest good for the greatest number of people, while deontology argues that some rules hold absolutely — for instance, that killing is always wrong and must never be committed.

For more of the debate, go here.
