The moral dilemmas of programming the self-driving car

The moment I heard they were perfecting the self-driving car, it gave me very serious pause.

Maybe that’s because in some essential way I don’t trust handing over the decision-making process to a machine, even though I don’t like driving all that much and even though the evidence is that self-driving cars would almost certainly result in fewer accidents and fewer deaths overall. There’s just something very basic about the technology that I don’t trust, and it may be the very same very basic thing in me that makes me especially concerned with protecting liberty and autonomy.

But I hadn’t spent all that much time thinking about the details. It turns out others have—they must, if they’re going to program these cars. And it’s no surprise that there are some knotty ethical problems involved.

Here’s one hypothetical:

Picture the scene: You’re in a self-driving car and, after turning a corner, find that you are on course for an unavoidable collision with a group of 10 people in the road, with walls on either side. Should the car swerve to the side into the wall, likely seriously injuring or killing you, its sole occupant, and saving the group? Or should it make every attempt to stop, knowing full well it will hit the group of people while keeping you safe?

This is a moral and ethical dilemma that a team of researchers, led by Jean-Francois Bonnefon from the Toulouse School of Economics, has discussed in a new paper published on arXiv. They note that some accidents like this are inevitable with the rise in self-driving cars – and what the cars are programmed to do in these situations could play a huge role in public adoption of the technology.

“It is a formidable challenge to define the algorithms that will guide AVs [Autonomous Vehicles] confronted with such moral dilemmas,” the researchers wrote. “We argue to achieve these objectives, manufacturers and regulators will need psychologists to apply the methods of experimental ethics to situations involving AVs and unavoidable harm.”

Psychologists? Don’t bet on it. They’re no more equipped to make this decision than the average person. In fact, that’s the point: any one-size-fits-all solution is a surrender of individual autonomy and responsibility to a nameless, faceless algorithm that decides whether you live or die. There is no formula available for morality.

Of course, we all would have to make a split-second individual decision if (heaven forbid) we were faced with that hypothetical dilemma (“do I save myself or others?”) in a car we were driving. It is not clear what the “right” decision would be, but I think the individual should be the one to make it.

There’s a fascinating conversation on the subject going on in the comments section here, and I suggest you take a look. Some commenters are arguing (rather convincingly, I believe) that a self-driving car would actually handle the situation posited by the ethicists in a different and better way: it would have sensed the problem in advance and already slowed down, preventing the dilemma from arising in the first place.
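To make the commenters’ point concrete, here is a minimal sketch of the kind of speed-limiting logic they have in mind: the car never drives faster than it can stop within the distance its sensors have confirmed to be clear. The numbers (braking deceleration, system latency, sight distance) are illustrative assumptions on my part, not figures from the paper or from any real vehicle.

```python
import math

def max_safe_speed(clear_distance_m, decel_mps2=7.0, latency_s=0.2):
    """Highest speed (m/s) at which the car can still stop within the
    distance its sensors have confirmed to be clear.

    Solves clear_distance = v * latency + v^2 / (2 * decel) for v.
    All parameter values are illustrative assumptions, not real AV specs.
    """
    a, t, d = decel_mps2, latency_s, clear_distance_m
    # Quadratic in v: v^2/(2a) + v*t - d = 0  ->  v = -a*t + sqrt((a*t)^2 + 2*a*d)
    return -a * t + math.sqrt((a * t) ** 2 + 2 * a * d)

# Approaching a blind corner with only about 20 m of visible road,
# the car would already have slowed to roughly:
v = max_safe_speed(20.0)
print(f"{v:.1f} m/s (~{v * 3.6:.0f} km/h)")  # about 15 m/s, ~55 km/h
```

On this logic the trolley-style scenario never arises, because the car simply refuses to round the corner at a speed from which it cannot stop. Whether production systems are actually tuned that conservatively is, of course, exactly what the commenters are debating.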

I want to add that my distrust of highly automated systems remains. Perhaps it’s irrational, but the surrender of autonomy feels dangerous to me in a different way. What are we sacrificing for a predicted increase in physical safety, and is it worth it?

[Neo-neocon is a writer with degrees in law and family therapy, who blogs at neo-neocon.]

