Fresh Look at AI Logic

Oct 23, 2019
Blog

A situation in which Artificial Intelligence actually has to decide who will live and who will die is no longer a topic for theoretical discussion or a science-fiction plot; it is the near future. The probability of any single such event is very low, but at the scale of Big Data it becomes inevitable.
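To see why a very low probability and inevitability are not a contradiction, here is a back-of-the-envelope calculation in Python. Both numbers (the per-trip probability and the yearly trip count) are hypothetical assumptions, chosen only to illustrate the scale effect:

```python
# Hypothetical numbers, purely to illustrate "rare per trip, inevitable at scale".
p_per_trip = 1e-8        # assumed chance of a forced life-or-death choice on any one trip
trips_per_year = 1e10    # assumed number of autonomous trips per year, fleet-wide

expected_events = p_per_trip * trips_per_year             # about 100 such events a year
p_at_least_one = 1 - (1 - p_per_trip) ** trips_per_year   # ~1.0, i.e. effectively certain

print(f"expected events per year: {expected_events:.0f}")
print(f"probability of at least one: {p_at_least_one:.6f}")
```

Any single car is overwhelmingly unlikely to face the dilemma, but a large enough fleet is all but guaranteed to.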

 

But here we are interested not so much in the probability as in the ethical solution to the problem. Picture an accident at a pedestrian crossing. All the objects and people involved are subject to the laws of physics, but your car is also subject to the control of Artificial Intelligence. Let’s consider a problem where the driverless car has only three options:

  1. To kill a mother with a child,
  2. To kill an elderly couple,
  3. To kill you (for example, by driving off the road), although it was you who bought this AI.

What should be done when a tragedy is inevitable? Who must die? And who has the right to make such a decision?

A heavy runaway trolley is rolling down the tracks. Five people lie in its path, tied to the rails by a mad philosopher. Fortunately, you can pull a lever and divert the trolley onto a side track. Unfortunately, another person is tied to that track as well. What will you do?

The value of discussing this problem lies not in the solution people choose, but in how they reach their decision. People choose an option according to their own moral principles, and the majority sacrifices the life of a single person in order to save more. AI, however, will be guided in its actions by an assessment of risks and probabilities, and it is not yet clear how it will act under such circumstances.

According to utilitarian ethics, the car must minimize harm, even if doing so leads to someone’s death. On the other hand, according to Isaac Asimov’s laws of robotics, a robot may not harm a human. Such an AI is in zugzwang: every move available to it is a wrong one.
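To make the contrast concrete, here is a minimal sketch, in Python, of purely utilitarian decision logic for the crossing scenario. The `Action` class, the harm probabilities, and the victim counts are all hypothetical assumptions for illustration, not anything a real vehicle uses; the point is only that such a controller picks whichever maneuver minimizes expected harm, with no regard for who the victim is.

```python
# Illustrative sketch of utilitarian ("minimize expected harm") decision logic.
# All numbers are hypothetical; a real system would estimate them from sensor
# data and would also face the legal and ethical constraints discussed above.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    p_fatal: float   # estimated probability the maneuver ends in fatalities
    victims: int     # how many people are at risk if it does

    def expected_harm(self) -> float:
        # Utilitarian cost: probability of a fatal outcome times the victim count.
        return self.p_fatal * self.victims

options = [
    Action("hit the mother with a child", p_fatal=0.9, victims=2),
    Action("hit the elderly couple",      p_fatal=0.9, victims=2),
    Action("swerve off the road",         p_fatal=0.7, victims=1),  # the passenger
]

# The utilitarian controller sacrifices the passenger -- exactly the choice
# that, as the survey below shows, buyers say they will not accept.
best = min(options, key=Action.expected_harm)
print(f"chosen maneuver: {best.name} (expected harm {best.expected_harm():.2f})")
```

An Asimov-style rule, by contrast, would simply forbid every one of these actions, which is the zugzwang described above.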

Scientists from the Massachusetts Institute of Technology surveyed more than a million people from all over the world: what do people think is the right thing to do in such situations?

The majority of us stick to utilitarian logic, which leads to fewer victims. So why not make this the rule an autopilot is taught? The trouble is that when adherents of harm minimization are asked whether they would buy a Tesla that, in a certain situation, would sacrifice their lives for the greater good, they usually refuse.

The instinct for self-preservation is impossible to overcome. The ideal arrangement, for each of us, is to buy a car that protects its owner, while everyone else’s cars prioritize minimizing victims. This echoes an older ethical problem, the “tragedy of the commons”: when using a shared resource, each person tries first of all to profit for themselves and to shift the risks onto others, and as a result everyone loses. Humankind generally regulates such situations with laws, but in the field of Artificial Intelligence ethics, legal norms are only beginning to be developed.

For now, a driverless car will most likely simply maneuver to avoid an accident, even if the maneuver is not optimal. As a result, who lives and who dies is determined by the laws of physics and by chance, rather than by anyone’s decision. Of course, this too is an option, but it was not chosen on its merits; it was chosen out of desperation – we did our best, yet could not choose among the options mentioned earlier.

So, the answer to the question above is this: at the moment, AI will not sacrifice its passenger, which is why the pedestrian will die. And who goes to jail? That question also remains open, for no one at the corporations wants to go to jail. The driver had nothing to do with the decision, and for the pedestrians it is already too late to care.
