The Ethics of Self-Driving Cars


Tough choices and moral dilemmas: a visual introduction to the ethics of self-driving cars

This comic was made by Anum Yoon, who kindly allowed us to post it here.

[Comic: The Ethical Dilemma of Self-Driving Cars]

Self-driving cars, autonomous cars, driverless cars – regardless of what you want to call them – are expected to revolutionize the entire automobile industry. For over a century, cars have consisted of a fairly straightforward combination of wheels, steering system, engine, and driver. It’s no wonder that the announcement of this new technology has launched a global race that has automakers and tech companies scrambling to develop the best autonomous vehicle technology. And according to Morgan Stanley, self-driving cars will be commonplace by 2025.

Setting the technology itself aside, let's dive into the ethics and philosophy behind these vehicles, which is what this infographic is about.

The Laws of Robotics

In 1942, sci-fi author and professor Isaac Asimov introduced the Three Laws of Robotics.

  • The First Law states that a robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • The Second Law outlines that a robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • The Third Law states that a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
  • He later added a fourth law, known as the Zeroth Law: a robot may not harm humanity, or, by inaction, allow humanity to come to harm.

Can the clear rules-based code of a computer handle the nuances of ethical dilemmas?
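
To see why this is hard, here is a minimal, hypothetical Python sketch (not part of the original comic) of what naive Asimov-style rule checking might look like. The function names and the harm counts are assumptions made purely for illustration; the point is that a rigid rule set deadlocks the moment every available action harms someone.

```python
# A minimal, hypothetical sketch of Asimov-style rule checking.
# It shows why rigid rules break down when every available action harms someone.

def violates_first_law(action):
    """First Law: a robot may not injure a human or, through inaction,
    allow a human being to come to harm."""
    return action["humans_harmed"] > 0

def choose_action(actions):
    """Pick the first action that satisfies the First Law."""
    permitted = [a for a in actions if not violates_first_law(a)]
    if not permitted:
        # The dilemma: every option harms someone, so the rules give no answer.
        raise ValueError("No action satisfies the First Law")
    return permitted[0]

# The Trolley Problem as data: both options harm humans, so the rule set deadlocks.
trolley_options = [
    {"name": "stay on main track", "humans_harmed": 5},
    {"name": "switch to alternate track", "humans_harmed": 1},
]

try:
    choose_action(trolley_options)
except ValueError as err:
    print(err)  # -> No action satisfies the First Law
```

A human intuitively weighs one death against five; the rule-based sketch above simply refuses to answer. That gap is exactly what the following scenarios probe.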

Let’s take a look at a few hypothetical scenarios:

The Trolley Problem

You are the driver of a trolley with broken brakes. Fortunately, you can still steer the trolley from the main track onto an alternate track. You can see both tracks right ahead of you:

  • The main track has five workers
  • The alternate track has one worker

Both tracks run through narrow tunnels, so whichever direction you choose, anyone on that track will be killed. Which way will you go? Would you let the trolley continue down the main track and kill five, or would you switch it onto the alternate track and kill one?

Most people respond to the Trolley Problem by saying they would steer the trolley onto the alternate track, because their moral intuition tells them it is better to kill one person rather than five.

Now for a little modification to this hypothetical scenario.

The runaway trolley is speeding down a track, about to hit five people. But this time, you are standing on a bridge that the trolley is about to pass under, and the only thing that could stop it is a very heavy object. It just so happens that you are standing next to a very large man. Your only hope of saving the five people on the tracks is to push the large man over the bridge and onto the track. How would you proceed?

Most people strongly oppose this version of the problem, even those who had previously said they would rather kill one person than five. These two scenarios reveal the complexity of moral principles.

The Tunnel Problem

You are traveling on a single-lane mountain road in a self-driving car that is quickly approaching a narrow tunnel. Right before you enter the tunnel, a child tries to run across the road but trips right in the center of the lane, blocking the entrance to the tunnel. The car has only two options:

  • To hit and kill the child
  • To swerve into the wall, thus killing you

How should the car react?

Now that the age of self-driving cars has dawned, ethical dilemmas such as the tunnel and trolley problems have taken on a new relevance.

Hypothetical scenarios like the Tunnel Problem present some of the real difficulties of programming ethics into autonomous vehicles. In a survey asking how people would want their car to react in the Tunnel Problem, 64% of respondents said they would continue straight and kill the child, while 36% said they would swerve and kill the passenger.

But who should get to decide?

44% of those surveyed felt that the passenger should make major ethical decisions, 33% felt that lawmakers should decide, and 12% felt that the decision should lie with the manufacturers and designers. The remaining 11% responded with “other.”

Ethics is a matter of sharing a world with others, so building ethics into autonomous cars is a lot more complex than just formulating the “correct” response to a set of data inputs.

Here’s one last ethical scenario for driverless cars.

The Infinite Trolley Problem

In the Infinite Trolley Problem, introduced by autonomous vehicle advocate Mitch Turck, a single person is on the tracks. This person could easily be saved by simply halting the trolley, but doing so would inconvenience the passengers. So for this variant, the question is not “would you stop to save someone” but rather “how many people need to be on board the trolley before their inconvenience is valued more than a single life?” The variant points out that, given the current number of vehicular fatalities, waiting for self-driving cars to be 99% (if not perfectly) safe ignores how many accidents could be prevented once the fatality rate for self-driving vehicles merely dips below that of human-driven vehicles, even if that rate remains nonzero.
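
To make the “merely dips below” point concrete, here is a hedged back-of-the-envelope sketch. Every number in it is hypothetical and chosen only to illustrate the shape of the argument, not taken from any real fatality statistics.

```python
# Back-of-the-envelope sketch with hypothetical numbers (not from the comic),
# illustrating the Infinite Trolley argument: imperfect autonomy can still save
# lives as soon as its fatality rate falls below the human-driven rate.

human_rate_per_billion_miles = 12.0       # hypothetical rate for human drivers
autonomous_rate_per_billion_miles = 9.0   # hypothetical: imperfect, but lower
annual_miles_billions = 3000.0            # hypothetical total miles driven per year

expected_human_deaths = human_rate_per_billion_miles * annual_miles_billions
expected_autonomous_deaths = autonomous_rate_per_billion_miles * annual_miles_billions

lives_saved_per_year = expected_human_deaths - expected_autonomous_deaths
print(f"Expected lives saved per year: {lives_saved_per_year:,.0f}")
# Even with a nonzero autonomous fatality rate, deaths are prevented as long as
# that rate is lower than the rate for human-driven vehicles.
```

Under these made-up assumptions the imperfect self-driving fleet still prevents thousands of deaths a year, which is exactly the trade-off the Infinite Trolley Problem asks us to confront.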

Is waiting for perfection worth it?

This work is licensed under a Creative Commons Attribution-NoDerivs 3.0 United States License. Original source: https://blog.cjponyparts.com/2016/01/ethical-dilemma-self-driving-cars-robotics/.