The Morality of Robots and Self-Driving Cars

"We'll fly kites tomorrow, promise." | Art by Chasing Artwork. Used with permission.
If Isaac Asimov is known for anything within popular culture, it is his three laws of robotics, made famous in the book I, Robot and its movie adaptation. In Asimov's fiction, the laws were conceived in response to the invention of self-directed robots; they answered the question of how created machines were allowed to act with respect to the safety of the people who created them. Asimov envisioned a robot morality controlled by inviolable laws, beginning with the first:

“A robot may not injure a human being or, through inaction, allow a human being to come to harm.”

That seems straightforward: when a robot can direct its own actions, it must not be allowed to cause human beings to come to harm. Of course, there are situations where it is not simply a choice between harming a human or not. Sometimes the question concerns reducing the total harm when some injury cannot be avoided. In that case, at least according to the movie version of the story, there is a complex heuristic by which a robot must decide between the value of one or more humans: the result of that calculation directs the action.

For example, in the movie, the protagonist Del Spooner is saved by a robot because he was deemed more likely to survive being pulled out of the water. Extrapolating from the presumed algorithm, I expect that the quantity of humans would also factor in; that is, saving two humans would take precedence over saving one.
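Neither Asimov nor the film spells out the actual calculation, but a minimal sketch of such a harm-minimizing heuristic might look like the following. To be clear, everything below is my own invention for illustration (the 45% and 11% odds echo the movie's dialogue); it simply maximizes the expected number of survivors across the available actions.

```python
# A minimal, hypothetical sketch of the harm-minimizing heuristic the film
# implies. Every function name and data structure here is invented for
# illustration; nothing in the story specifies an actual algorithm.

def expected_survivors(odds):
    """Expected number of survivors: the sum of individual survival odds."""
    return sum(odds)

def choose_action(options):
    """Pick the option whose rescued group has the most expected survivors.

    `options` maps an action label to a list of survival probabilities,
    one per person that the action would attempt to save.
    """
    return max(options, key=lambda label: expected_survivors(options[label]))

# The river rescue: Spooner (45% chance) versus the girl (11% chance).
river = {"save Spooner": [0.45], "save the girl": [0.11]}
print(choose_action(river))  # -> save Spooner

# Quantity factors in too: two people at modest odds outweigh one at high odds.
crowd = {"save one adult": [0.80], "save two children": [0.50, 0.50]}
print(choose_action(crowd))  # -> save two children
```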

This question of the value of human life based on an algorithm is now coming to the fore in the realm of self-driving cars. While many will welcome the additional safety that comes with cars that make decisions for us, take us home when we have had too much to drink, and prevent us from doing dangerous things on the road, there are some ethical questions that need to be answered before such cars can really be given the reins to take us autonomously to our destinations.

One of those questions is, “If an accident is unavoidable, what choice is the car directed to take regarding the preservation of human life?” According to Asimov’s first law, the choice should be the one that reduces the total amount of harm. But this is where the conundrum asserts itself. What about the choice between killing or seriously injuring the passenger(s) in the car and killing or seriously injuring a group of preschoolers walking down the sidewalk? Logically, we might look at this situation and say something like, “The children are innocent and unable to protect themselves; run the car into a post and hope for the best,” or, “Five kids take precedence over one adult in each car; choose a head-on collision.”

According to a poll conducted in 2015 by researchers at the Massachusetts Institute of Technology (MIT),[1] when given the choice between the occupant of a car and ten pedestrians, a majority of people chose to save the pedestrians and not the driver. But those folks were being asked about a hypothetical future situation.

An interesting twist in this question arose when the people who decided to kill passengers over preschoolers were asked, “Would you buy a car that you knew was programmed to make this decision?” In essence, would you be okay with the car’s decision if you were the one in the driver’s seat? The overwhelming majority said, “No.” These people acknowledged that, in some hypothetical situation which did not involve them personally, the right thing to do would be to kill the driver instead of the kids. But when the hypothetical involved them, the choice became quite a bit more selfish.

I want to believe that I have some amount of courage: the courage that would let me instinctively sacrifice myself in a moment of chaos, to leap on the grenade in order to save my buddy. But I’ve never faced that situation, and I really don’t know what I would do. I want to believe that I’d sacrifice myself for others.

I had a conversation with a friend recently about the right to defend yourself, your home, and your family. I’m almost positive I would sacrifice myself for someone I love. But I want to have another kind of courage, the courage to sacrifice myself for my enemy. It sounds counter-intuitive, I know. If a burglar entered my house and I found myself in the position to kill him, what would I do? My friend says, “Blow him away; you have a right.” Yes, I do, especially if I am being threatened. I have the right to take the life of the guy standing in front of me. I have the right to protect my own life, the life of my family, and the stuff in my house. But do I have the courage not to, the courage, in a moment of clarity, to choose not to attack? “There are things worse than death,” I say to my friend. “Things like taking away his ability to choose a better path. Maybe today, I hope today, but maybe tomorrow or next month. I am not sure that I could do that, take away his choice.”

The dilemma that Google’s ethicists face is about choice: the power and the right to choose. People get pretty hung up on choice and the removal of choice from our lives. The majority of people think that an autonomous car should choose to preserve the greatest amount of life, regardless of whether that life is in the car or on the street. But that same majority would not allow the car to make that choice, or any choice really, if it were their own lives on the line.

Jesus said, “No one has greater love than this, to lay down his life for his friends.” And those are words I cling to when thinking about making a choice like this. I do not know for sure what I would do in the moment, but I can tell you what I’d like to do, and I am okay with allowing a car to choose my death over a pedestrian’s. After all, there are worse things than death.

Dennis Maione

Guest Writer at Area of Effect
Dennis is a writer, teacher, pastor, and storyteller living in Winnipeg. Like many his age, he is still stuck in the geek culture of the ’80s. He fondly remembers playing Space Invaders in an arcade in his hometown.