The rapid development of artificial intelligence (AI) has led to its deployment in courtrooms overseas. In China, robot judges adjudicate small claims, while in some Malaysian courts AI has been used to recommend sentences for offenses such as drug possession.
Is it time for New Zealand to consider AI in its own justice system?
Intuitively, we don’t want to be judged by a computer. And there are good reasons for our reluctance, with valid concerns about the potential for bias and discrimination. But does that mean we should be afraid of any use of AI in the courts?
In our current system, a judge sentences an offender once they have been found guilty. Society trusts judges to hand down fair sentences based on their knowledge and experience.
But sentencing is a task that AI may be able to perform instead – after all, machine learning is already being used to detect some criminal behavior, such as financial fraud. Before we look at the role of AI in the courtroom, then, we need to fully understand what it really is.
AI simply refers to a machine behaving in a way that humans identify as “intelligent”. Most modern AI is machine learning, where a computer algorithm learns the patterns in a dataset. For example, a machine learning algorithm could learn patterns from a database of house listings on Trade Me in order to predict house prices.
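To make “learning a pattern from a dataset” concrete, here is a minimal sketch: fitting a straight line relating floor area to price by least squares, then using it to predict the price of an unseen house. The numbers are invented for illustration and are not real Trade Me listings.

```python
# Minimal sketch of "learning a pattern from data": fit a straight line
# y = a*x + b relating floor area to price, then predict a new price.
# The training data below is invented, not real Trade Me listings.

def fit_line(xs, ys):
    """Return slope and intercept of the least-squares line through (xs, ys)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

# Hypothetical training set: floor area (m^2) -> sale price (NZ$)
areas = [80, 100, 120, 150, 200]
prices = [400_000, 500_000, 600_000, 750_000, 1_000_000]

slope, intercept = fit_line(areas, prices)

# "Prediction": apply the learned pattern to a house the model never saw
predicted = slope * 130 + intercept  # price estimate for a 130 m^2 house
```

Real systems use many more variables (location, age, number of rooms) and far more flexible models, but the principle is the same: the algorithm extracts a pattern from past examples and applies it to new cases.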
So, could AI sentencing be a feasible option in New Zealand courts? What might that look like? Or could AI at least help judges in the sentencing process?
Inconsistency before the courts
In New Zealand, judges must weigh a number of mitigating and aggravating variables before deciding on a sentence for a convicted criminal. Each judge uses their discretion to decide the outcome of a case. At the same time, judges must strive to ensure consistency across the justice system.
Consistency means that similar offenses should receive similar sentences in different courts with different judges. To improve consistency, higher courts have prepared guideline judgments to which judges refer when determining a sentence.
But discretion pulls in the other direction. In our current system, judges should be free to individualize the sentence after a full assessment of the case.
Judges must take into account individual circumstances, social norms, the human condition and sense of justice. They can use their experience and sense of humanity, make moral decisions, and sometimes even change the law.
In short, there is a “desirable inconsistency” that we currently cannot expect from a computer. But there can also be “undesirable inconsistencies”, such as prejudices or even extraneous factors like hunger. Research has shown that in some Israeli courts, the percentage of favorable decisions drops to almost zero before lunch.
The potential role of AI
This is where AI can play a role in sentencing decisions. We implemented a machine learning algorithm and trained it using 302 New Zealand assault cases, with sentences ranging from zero to 14.5 years in prison.
Based on this data, the algorithm built a model capable of taking a new case and predicting the length of a sentence.
The beauty of the algorithm we used is that the model can explain why it made certain predictions. It quantifies which words and phrases the model weighs most heavily when calculating a sentence.
To test our model, we provided it with 50 new sentencing scenarios it had never seen before. We then compared the sentence lengths predicted by the model with the actual sentences.
The relatively simple model worked quite well. It predicted sentences with an average error of just under 12 months.
The model learned that words or phrases such as “sexual”, “young”, “taxi” and “gun” were correlated with longer sentences, while words such as “professional”, “career”, “fire” and “Facebook” were correlated with shorter sentences.
Many of these are easily explained – “sexual” or “gun” can be linked to aggravated forms of assault. But why does “young” count for more prison time and “Facebook” for less? And how does an average error of 12 months compare with the variation between human judges?
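The idea of inspecting which words pull a predicted sentence up or down can be sketched very simply. The study's actual algorithm is not published here, so the following uses a deliberately crude substitute: for each word, the difference between the average sentence length of cases mentioning it and the overall average. The case summaries and sentence lengths are invented.

```python
# A crude, transparent stand-in for model explainability (NOT the study's
# actual algorithm): score each word by how much the average sentence of
# cases containing it differs from the overall average sentence.
# All case texts and sentence lengths (in years) below are invented.

from collections import defaultdict

cases = [
    ("assault with gun during robbery", 8.0),
    ("sexual assault aggravated charge", 10.0),
    ("minor assault professional first offence", 1.0),
    ("assault threats facebook dispute", 1.5),
    ("gun threats aggravated assault", 7.0),
]

overall = sum(years for _, years in cases) / len(cases)

# Collect the sentence lengths associated with each word
by_word = defaultdict(list)
for text, years in cases:
    for word in set(text.split()):
        by_word[word].append(years)

# Positive weight: word associated with longer sentences; negative: shorter
weights = {w: sum(ys) / len(ys) - overall for w, ys in by_word.items()}
```

Sorting `weights` on this toy data puts “sexual” and “gun” near the top and “professional” and “facebook” near the bottom, mirroring the kind of pattern the study reports. A real model would learn such weights jointly over thousands of features rather than one word at a time, but the explanatory output – a ranked list of influential words – looks much the same.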
The answers to these questions are possible avenues for future research. But the model is already a useful tool to help us better understand sentencing.
The future of AI in courtrooms
Obviously, we cannot test our model by using it in the courtroom to impose sentences. But it gives us some insight into our sentencing process.
Judges could use this type of modeling to understand their sentencing decisions and perhaps eliminate superfluous factors. AI models could also be used by lawyers, legal technology providers and researchers to analyze the sentencing and justice system.
Perhaps the AI model could also help create transparency around controversial decisions – for example, by showing the public that a seemingly shocking sentence, such as a rapist placed on house arrest, may not be particularly unusual when compared with similar cases.
Most would say that final assessments and decisions about justice and punishment should be made by human experts. But the lesson from our experience is not to be afraid of the words “algorithm” or “AI” in the context of our legal system. Instead, we should discuss the real (not imagined) implications of using these tools for the common good.
Provided by The Conversation
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Citation: Could AI play a role in the justice system? (2022, November 30) retrieved November 30, 2022 from https://phys.org/news/2022-11-ai-play-role-justice.html