Since we’re on the topic of free will and the attribution of free will to agents, I was wondering whether people will ever attribute will and moral responsibility to machines. A recent New Yorker article, “Moral Machines,” was the first time I encountered anything on the topic. It turns out that there is a whole field dedicated to “machine ethics”, with references to what was once science fiction (Asimov’s Laws of Robotics).
Colin Allen is an example of a researcher doing work in this area, with various books on the topic. His book “Moral Machines: Teaching Robots Right from Wrong” looks at the revolution we are about to face as machines independently make essential and important decisions that affect people’s lives. Allen seems interested in whether a moral artificial machine is possible, how to define those morals, and the implications of such a moral machine. What I’m interested in is lay beliefs about machines making such decisions and how people attribute will and moral responsibility to machines.
The new self-driving Google car mentioned in the New Yorker piece is an excellent opportunity to discuss this. To get an initial sense of how people perceive this new reality, I ran a quick pretest on MTurk to see how people perceive a possible moral dilemma that such a self-driving car might face.
I first gave the participants a summary of CNN’s article “Self-driving cars now legal in California”:
California is the latest state to allow testing of Google’s self-driving cars on the roads [...]
The cars use a combination of technologies, including radar sensors on the front, video cameras aimed at the surrounding area, various other sensors and artificial-intelligence software that helps steer. Google is the most visible company working on these types of vehicles, but similar projects are under way at other organizations, including Caltech. Google has already been testing the cars on the road in Nevada, which passed a law last year authorizing driverless vehicles. [...]
So far, the cars have racked up more than 300,000 driving miles, and 50,000 of those miles were without any intervention from the human drivers, Google says. There have been no accidents while the cars were controlled by the computer. The only documented accident with one of the Google vehicles was a fender bender that took place while a human was in control.
I then presented the participants with a variant of the classic moral trolley problem:
“Please try and imagine the following situation:
A Google self-driving car is approaching an intersection with a green light. Suddenly, a white car with 5 passengers drives through a red light into the intersection and stops in the direct path of the Google car. There is no way for the Google car to stop in time to avoid a direct collision with the white car. The passenger in the Google car cannot do anything about the situation; the car is completely under the control of the Google software.
The Google car needs to make a decision between two possible options –
- directly crash into the white car, killing all 5 passengers (100% certainty).
- avoid collision by turning the car to the right where a pedestrian is waiting to cross the intersection, thereby killing the pedestrian (100% certainty).”
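The two options above can be framed as a simple utilitarian calculation, where the car picks whichever action minimizes expected casualties. This is only a hypothetical sketch of one decision rule participants might have in mind; the option names and numbers come from the scenario, and nothing here reflects how Google’s software actually works.

```python
# Hypothetical sketch: the dilemma as expected-casualty minimization.
# Probabilities and death counts are taken from the scenario text above.

def expected_casualties(option):
    """Expected number of deaths for one option."""
    return option["p"] * option["deaths"]

options = {
    "crash_into_white_car": {"p": 1.0, "deaths": 5},
    "swerve_into_pedestrian": {"p": 1.0, "deaths": 1},
}

# A purely utilitarian rule picks the option with the fewest expected deaths.
utilitarian_choice = min(options, key=lambda k: expected_casualties(options[k]))
print(utilitarian_choice)  # the utilitarian rule picks the single-casualty option
```

Of course, the literature on trolley problems suggests many people reject exactly this kind of calculation, which is part of what the pretest below probes.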
After three quiz questions to test comprehension (“Who is driving the Google self-driving car?”; “What would happen if the Google car did NOT change course and did NOT turn to the right?”; “What would happen if the Google car changed course and turned to the right?”), I asked the participants questions about the situation.
The raw results are rather interesting; the responses are far better distributed than I expected. Let’s have a quick look.
With regard to what the moral decision should be…
This is already a bit different from the classical findings in the literature, which suggest that most people would not make the utilitarian choice. But the difference might be explained by the fact that in this scenario the participants are not the agents making the decision.
About 33% of the participants seem to think that a car is capable of making what they consider the moral decision. How about a person in the same situation?
The percentage is much higher. So, do people generally trust other people (a.k.a. “the average person”) to make a more moral decision than a machine? Not so fast…
About 30% of the participants think a machine is capable of making a more moral decision.
Who would they prefer to make such a decision?
Now, the part that’s really interesting to me is the attribution of responsibility and moral accountability. Have a look…
More than half of the participants think that the car, a machine, is responsible for its actions, and almost half seem to think the car should also be held accountable for them.
But someone created that car. Should its creators be held responsible and accountable for its actions?
To sum up: is this situation even possible in the near or distant future?
Overall, it’s an interesting pretest of what people think about such a situation. To me, it’s some evidence that people may be willing to attribute will and moral responsibility to a machine in a moral dilemma.
Now we can try to think about how to do something a bit more experimental and interesting in this direction…
There’s an interesting debate on the issue posted on YouTube:
[end of UPDATE 26/03/2013]