Road testing has begun for Google’s self-driving cars, and Elon Musk promises to have current Tesla models ready to drive themselves on “major roads” by this summer. But some writers are starting to wonder whether the cars should be making life-and-death decisions.
“How will a Google car, or an ultra-safe Volvo, be programmed to handle a no-win situation – a blown tire, perhaps – where it must choose between swerving into oncoming traffic or steering directly into a retaining wall?” asks Science Daily, referencing new work from the University of Alabama at Birmingham.
It’s a new take on what’s called the Trolley Problem: a person must choose between throwing the switch on a train track to save a school bus full of children from an oncoming train, or sparing the person’s own child, who has fallen on the tracks and cannot stand up.
“The computers will certainly be fast enough to make a reasoned judgment within milliseconds,” the article continues. “They would have time to scan the cars ahead and identify the one most likely to survive a collision, for example, or the one with the most other humans inside. But should they be programmed to make the decision that is best for their owners? Or the choice that does the least harm—even if that means choosing to slam into a retaining wall to avoid hitting an oncoming school bus? Who will make that call, and how will they decide?”
Do the lives of the many always outweigh the life of one person, and if so, should the owner of a self-driving car accept that he or she will be on the losing end of such scenarios? It’s a classic philosophical and ethical dilemma pitting utilitarianism against deontology: utilitarianism says to take the approach that provides the greatest good for the greatest number of people, while deontology argues that some values hold absolutely, for example that murder is always wrong and should never be committed.
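To make the contrast concrete, here is a minimal sketch in Python of how the two rules could be encoded as competing crash-choice policies. Everything in it is hypothetical (the Option fields, the casualty estimates, the scenario itself); no real vehicle software is known to work this way.

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    expected_casualties: int   # estimated deaths across everyone involved
    deliberate_harm: bool      # does this option actively target someone?

def utilitarian_choice(options):
    # Minimize total expected harm, regardless of who bears it.
    return min(options, key=lambda o: o.expected_casualties)

def deontological_choice(options):
    # Never deliberately harm anyone; among the permissible options,
    # still prefer the one with the least expected damage.
    permissible = [o for o in options if not o.deliberate_harm]
    return min(permissible or options, key=lambda o: o.expected_casualties)

options = [
    Option("stay on course toward the crowd", expected_casualties=5, deliberate_harm=False),
    Option("swerve, deliberately striking one bystander", expected_casualties=1, deliberate_harm=True),
]

print(utilitarian_choice(options).name)    # swerve: 1 expected death < 5
print(deontological_choice(options).name)  # stay on course: refuses deliberate harm

The point of the toy scenario is the disagreement: the utilitarian policy swerves because one expected death is fewer than five, while the deontological policy rules out any option that deliberately harms someone, even at a higher total cost. Whoever programs the car has to pick one of these policies, or something in between.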
For more of the debate, go here.