In the previous article, we considered the main challenges that developers of electronic justice systems may face. This article takes up the most important of them separately: the moral aspect of a robot judge.
The full-fledged introduction of AI systems in the field of law is hampered by a number of technical difficulties. Most of them, however, can be solved by optimizing existing algorithms or developing alternative technical solutions. A much more complex challenge for the IT industry lies in the very notion of justice, which underlies the judicial institution itself.
The point is that, when going to court, most people hope not just for a clear and consistent application of the adopted laws but for a fair verdict. This verdict should take into account all the circumstances and such independent values as, for example, the moral qualities of the participants in the proceedings. In this light, the results of a BCG study become understandable: 51% of respondents strongly opposed the use of AI in criminal law for determining guilt, and 46% opposed its use in decisions on early release. In general, about a third of the respondents are worried about unresolved ethical issues in the use of AI and the potential bias and discrimination inherent in it.
Apart from a legally correct sentence, the participants in a case expect a certain empathy and sympathy from the judge who hears it, as well as justice, which quite often does not equal the direct execution of the letter of the law. This would seem to automatically rule out the use of AI: many people perceive an electronic system’s objective assessment as ruthless and inhuman. According to Elena Avakyan, Adviser to the Federal Chamber of Lawyers of the Russian Federation, criminal proceedings can’t be digitalized because it is a “face to face” proceeding and “an appeal to the mercy of a judge and a person.”
This point of view echoes J. Weizenbaum’s old warning that using AI in this area threatens human dignity, which rests on empathy for the people around us. We will find ourselves alienated, devalued, and frustrated because an AI system cannot genuinely feel empathy. Wendy Chang, a judge of the Los Angeles Superior Court, also points to the gap between the perspectives of legality and humanity:
“Legal issues sometimes lead to irreversible consequences, especially in areas like immigration. You have people sent to their home countries… and immediately murdered.”
In these cases, the appeal to the humanity of the court turns out to be much stronger than the appeal to the legality of its decisions.
However, the emotional intelligence of human judges and their immersion in the current cultural and moral context often provoke exactly the opposite reaction. For example, having examined the advantages and disadvantages of the Dutch e-justice system, the members of a study group concluded in their report that a digital judge could be considered the most objective judge in the Netherlands. They ground this claim in the fact that such a judge is unbiased and makes decisions without favoring any party on the basis of past or present relationships, inappropriate sympathy, admiration, or other subjective factors that influence decision-making. At the same time, it has advantages in the speed and accuracy of the operations performed.
This argument is echoed by British AI expert Terence Mauri:
“In a legal setting, AI will usher in a new, fairer form of digital justice whereby human emotion, bias and error will become a thing of the past.”
In his opinion, trials before robot judges will employ technology that reads “physical and psychological signs of dishonesty with 99.9 per cent accuracy,” turning justice into an intricate mechanism resembling an advanced lie detector.
Overcoming bias and partiality is also a demand of advocates for racial and minority rights. Pamela McCorduck, an American journalist and the author of the bestselling Machines Who Think, emphasizes that women and minorities are better off dealing with an impartial computer: unlike a conservative-minded human judge or police officer, it has no personal attitude toward the case.
According to Martha Nussbaum, an authoritative researcher and professor of philosophy and ethics at the University of Chicago, this biased position of judges is determined by a general “politics of disgust,” which leads them, in particular, to rule against members of sexual minorities on the basis not so much of the law as of personal preferences and individual moral norms. This, however, is also a problem for AI programming: a system can inherit these purely human biases from its creators as part of its unconditional decision rules.
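To make this inheritance of bias concrete, here is a deliberately simplified, hypothetical sketch. Every feature, weight, and district name below is invented for illustration: the point is only that a hand-written "objective" rule can quietly smuggle its authors' assumptions into the verdict.

```python
# Hypothetical sketch: how creator bias can leak into "objective" decision rules.
# All features, point values, and district names are invented for illustration.

def risk_score(defendant: dict) -> int:
    """Toy recidivism-style score (0-100) built from hand-written rules."""
    score = 0
    if defendant["prior_convictions"] > 2:
        score += 40  # a defensible legal criterion
    if defendant["neighborhood"] in {"district_9", "district_12"}:
        # A seemingly neutral proxy feature: it silently encodes the
        # developers' assumptions about who lives where, reproducing
        # human bias inside the algorithm's "unconditional" rules.
        score += 30
    return min(score, 100)

a = {"prior_convictions": 3, "neighborhood": "district_1"}
b = {"prior_convictions": 3, "neighborhood": "district_9"}
print(risk_score(a))  # 40
print(risk_score(b))  # 70 -- identical record, higher score, purely by address
```

Two defendants with identical criminal histories receive different scores solely because of where they live, which is exactly the kind of bias the politics-of-disgust critique attributes to human judges.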
What unites both approaches to assessing the performance of a digital judge is the emphasis on the fundamental difference in its mechanism for making judgments and decisions. As for outward signs of emotionality, working prototypes of AI systems that can read and imitate people’s emotional reactions already exist, which partially removes the argument about the “soullessness” of machines. Questions about the logic of decision-making remain, however. What if an AI ignores the human interpretation of justice and does not proceed from the unconditional priority of humanity when imposing punishments?
Technophobia also plays a role here, seeing a threat to humanity in an alien machine logic and an intelligence with incomprehensible goals, detached from human interests. In addition, such expectations reflect a certain idea of the system of law that cannot be reduced to a set of legislative norms. Justice is one of the oldest and most powerful philosophical concepts and, according to John Rawls, the author of A Theory of Justice, defines many aspects of human social and political life.
Thus, anyone starting to develop an e-justice system cannot ignore the differences between human and machine intelligence in the perception and making of judicial decisions. Resolving the existing ethical issues in the use of AI systems, together with clarifying the foundations of existing systems of morality and law, is becoming vital in this area. In this regard, building a generalized understanding of justice at the level of national or international consensus, especially in a universal formulation, would be one of the most important steps in creating AI as an artificial moral agent.
Within the process of administering justice, the humanization of punishment also remains an important issue: a court verdict is considered not only a means of retribution for the crime committed but also a disciplinary measure aimed at reforming the offender. In this regard, a clear hierarchy correlating the principles of legality, justice, and humanity could become the basis for full-fledged decision-making algorithms for an electronic judge. But is a digital judge capable of sorting out these issues without human help? We will figure that out in the second part of this article.
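As a closing thought experiment, the idea of a clear hierarchy of principles could be read as a lexicographic ordering: legality acts as a hard constraint, and among lawful options the most just and then the most humane one wins. This is only a minimal sketch under that assumed reading; all option names and scores are invented.

```python
# Hypothetical sketch of a lexicographic hierarchy of principles:
# legality first, then justice, then humanity.
# Every option name and score below is invented for illustration.

def choose_sentence(options: list) -> dict:
    """Pick a sentencing option by ranked principles."""
    # 1. Legality is a hard constraint: unlawful options are discarded outright.
    legal = [o for o in options if o["legal"]]
    if not legal:
        raise ValueError("no lawful option available")
    # 2. Among lawful options, prefer the most just one,
    # 3. breaking ties by the most humane one (tuple comparison).
    return max(legal, key=lambda o: (o["justice"], o["humanity"]))

options = [
    {"name": "max_term",  "legal": True,  "justice": 0.9, "humanity": 0.2},
    {"name": "probation", "legal": True,  "justice": 0.9, "humanity": 0.8},
    {"name": "exile",     "legal": False, "justice": 0.5, "humanity": 0.1},
]
print(choose_sentence(options)["name"])  # probation: equally just, more humane
```

Even this toy ordering shows where the hard questions hide: someone still has to decide, outside the algorithm, how "justice" and "humanity" are scored and why legality outranks them.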