We call them driverless cars, but of course something is driving: a computer. When a human driver faces a crisis, perhaps a choice between hitting a child who has jumped into the road and crashing to avoid her, there’s no time to ponder deontological ethics. But a computer has plenty of time, and whoever programmed its algorithms had nearly unlimited time.
The answers aren’t always clear-cut. Should a driverless car jeopardize its passenger’s safety to save someone else’s life? Does the calculus change if the other vehicle caused the crash? What if there are more passengers in the other car? Less morbidly, should a Google-powered car be able to divert your route to drive past an advertiser’s business? And should the driver be able to influence these hypothetical decisions before getting into the vehicle?
Research shows that driverless cars will inevitably crash.
When something bad happens, who will be at fault? Who will pay the bills? Who will sue whom? Should the government set ethical standards and create legal protections? Eventually, someone will have to decide whether my car should let me die to save more passengers in the other car. Can anyone imagine our current US Congress tackling such a task?
It seems likely that the first driverless cars will offer something like a super cruise control, only a step beyond today’s driver-assistance systems. The human will still be “the driver”. Cars’ capabilities, and their ethics, will evolve over time.
Perhaps a career in philosophy will become more attractive, and we’ll have competing schools of philosophy, just like in ancient Greece.