Will & Moral Responsibility in Machines: Self-Driving Google Car

Since we’re on the topic of free will and the attribution of free will to agents, I was wondering whether people will ever be able to attribute will and moral responsibility to machines. A recent New Yorker article, “Moral Machines”, was the first time I encountered anything on the topic. It turns out that there is a whole field dedicated to “machine ethics”, with references to what was once science fiction (Asimov’s laws of robotics).

Colin Allen is an example of a researcher doing work in this area, with various books on the topic. His book “Moral Machines: Teaching Robots Right from Wrong” looks at the revolution we are about to face as machines independently make essential and important decisions that affect people’s lives. Allen seems interested in whether a moral artificial machine is possible, how to define those morals, and the implications of such a moral machine. What I’m interested in is lay beliefs about machines making such decisions and how people attribute will and moral responsibility to machines.

The new self-driving Google car mentioned in the New Yorker is an excellent opportunity to discuss that. To get an initial sense of this new reality, I decided to run a quick pretest on MTurk to see how people perceive a possible moral dilemma that such a self-driving car might face.

I first gave the participants a summary of CNN’s article “Self-driving cars now legal in California”:

California is the latest state to allow testing of Google’s self-driving cars on the roads […]

The cars use a combination of technologies, including radar sensors on the front, video cameras aimed at the surrounding area, various other sensors and artificial-intelligence software that helps steer. Google is the most visible company working on these types of vehicles, but similar projects are under way at other organizations, including Caltech. Google has already been testing the cars on the road in Nevada, which passed a law last year authorizing driverless vehicles. […]

So far, the cars have racked up more than 300,000 driving miles, and 50,000 of those miles were without any intervention from the human drivers, Google says. There have been no accidents while the cars were controlled by the computer. The only documented accident with one of the Google vehicles was a fender bender that took place while a human was in control.

I then presented the participants with a variant of the classic moral trolley problem:

“Please try and imagine the following situation:

A Google self-driving car is approaching an intersection with a green light. Suddenly, a white car with 5 passengers drives through a red light into the intersection and stops in the direct path of the Google car. There is no way for the Google car to stop in time to avoid a direct collision with the white car. The passenger in the Google car cannot do anything about the situation; it is completely controlled by the Google car.

The Google car needs to make a decision between two possible options –

  1. directly crash into the white car, killing all 5 passengers (100% certainty).
  2. avoid collision by turning the car to the right where a pedestrian is waiting to cross the intersection, thereby killing the pedestrian (100% certainty).”

After three quiz questions to test comprehension (“Who is driving the Google self-driving car?” ; “What would happen if the Google car would NOT change course and NOT turn to the right?” ; “What would happen if the Google car would change course and turn to the right?”), I asked the participants questions about the situation.
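As an aside, for anyone who wants to run a similar pretest: here is a minimal sketch of how the responses could be screened and tallied, assuming a hypothetical CSV export with made-up column names (quiz_1 to quiz_3, moral_decision) and made-up answer strings. It simply drops participants who fail the comprehension quiz and counts the remaining answers; this is just an illustration, not the exact pipeline behind the charts below.

```python
import csv
from collections import Counter

# Hypothetical "correct" quiz answers, keyed by hypothetical column names.
CORRECT = {
    "quiz_1": "Nobody - the car drives itself",
    "quiz_2": "The 5 passengers in the white car are killed",
    "quiz_3": "The pedestrian is killed",
}

def passed_quiz(row):
    """True if the participant answered all three comprehension questions correctly."""
    return all(row[q].strip() == answer for q, answer in CORRECT.items())

def tally(path, question="moral_decision"):
    """Count answers to one survey question among participants who passed the quiz."""
    with open(path, newline="") as f:
        rows = [r for r in csv.DictReader(f) if passed_quiz(r)]
    counts = Counter(r[question] for r in rows)
    total = len(rows)
    for answer, n in counts.most_common():
        print(f"{answer}: {n} ({100 * n / total:.0f}%)")

if __name__ == "__main__":
    tally("pretest_responses.csv")  # hypothetical filename for the MTurk export
```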


The raw results are rather interesting. The responses are far more evenly distributed than I expected they would be. Let’s have a quick look.


Regarding what the moral decision should be…

[Chart: what should the moral decision be?]

This is already a bit different from the classical findings in the literature, which suggest that most people would not make the utilitarian choice. But this might be explained by the fact that in this scenario the participants are not the agents making the decision.
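(Just to make “the utilitarian choice” concrete: in this scenario it is the option that minimizes the number of certain deaths, i.e., swerving into the single pedestrian. A toy sketch of such a rule, purely illustrative and obviously not how any real self-driving system decides:)

```python
# Toy illustration of a strictly utilitarian rule applied to the dilemma above.
# The numbers mirror the survey text (5 deaths vs. 1 death, both at 100% certainty);
# the option labels are just for this sketch.
OPTIONS = {
    "crash into the white car": 5 * 1.0,          # 5 passengers, certainty 1.0
    "swerve right into the pedestrian": 1 * 1.0,  # 1 pedestrian, certainty 1.0
}

def utilitarian_choice(options):
    """Return the option with the fewest expected deaths."""
    return min(options, key=options.get)

print(utilitarian_choice(OPTIONS))  # -> "swerve right into the pedestrian"
```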

Moving on…

[Chart: is the Google car capable of making the moral decision?]

About 33% of the people seem to think that a car is capable of making what they consider to be the moral decision. How about a person in this situation?

[Chart: is a person in this situation capable of making the moral decision?]

The percentage is much higher. So, do people generally trust other people (AKA ‘the average person’) to make a more moral decision than a machine? Not so fast…

[Chart: could a machine make a more moral decision than a person?]

About 30% of the participants think a machine is capable of making a more moral decision.

Who do they prefer to make such a decision?

[Chart: who do participants prefer to make such a decision?]


Now, the part that’s really interesting to me is the attribution of responsibility and moral accountability. Have a look…

[Chart: is the car responsible and morally accountable for its actions?]

More than half of the participants think that the car, a machine, is responsible for its actions. A good number of them, almost half of all participants, seem to think that the car should be held accountable for its actions.

But someone created that car; should they be held responsible and accountable for its actions?

[Chart: should the car’s creators be held responsible and accountable for its actions?]

Apparently, yes.


To sum it up, is this situation even possible in the near or distant future?

[Chart: is this situation possible in the near or distant future?]


Overall, it’s an interesting pretest for what people think about such a situation. To me, it’s some evidence that people may be willing to attribute will and moral responsibility to a machine in a moral dilemma.

Now we can try to think about how to do something a bit more experimental and interesting in this direction…

[UPDATE 26/03/2013]

There’s an interesting debate on the issue posted on Youtube :

[end of UPDATE 26/03/2013]

  • Ariel Feldman

    It is amazing. Why should a car be more “responsible” for an accident than an oven for overcooking a cake… Perhaps the futuristic taste of the idea makes the “car” more human than the day-to-day machine.

    • http://www.filination.com/blog/ Fili

      Guest – yeah, I’m still not sure where people draw the line for something to become an accountable and morally responsible being. There are those, like some neuroscientists, who argue that your argument also holds for humans – that we are nothing but machines whose actions and decisions can be predicted up to 7 seconds in advance using brain scans (examples – http://www.telegraph.co.uk/science/8058541/Neuroscience-free-will-and-determinism-Im-just-a-machine.html ; http://www.nytimes.com/2007/01/02/science/02free.html?pagewanted=all&_r=0 ).

      From what I’ve read so far in the experimental philosophy literature, it seems that the attribution of moral responsibility is quite flexible and is more related to people’s need to attribute blame than to the features of the will of the target being evaluated. In the case of the self-driving Google car it’s a bit trickier to apply blame to anyone, so might as well put it on the car.

      • Guest

        It is interesting to consider how the attribution of free will to the car will affect the trust given to it and what “features” will affect these attributions.

        • http://www.filination.com/blog/ Fili

          Guest – sounds like an interesting research project. I can help you look into it, if you’d like… :P ;)