My concerns about artificial intelligence that is supposed to predict defendants' future behavior
In our daily lives, we are frequently subject to predictive algorithms that determine music recommendations, product advertising, university admissions and job placement. Until recently, I thought such algorithms were used only for those purposes.
But in the American criminal justice system, predictive algorithms have been used to predict where crimes will most likely occur, who is most likely to commit a violent crime, who is likely to fail to appear at their court hearing, and who is likely to re-offend at some point in the future.
When I heard about this, I was quite surprised: I am French, and such tools do not exist in our judicial system. I did not see the purpose, or how machines could be “better” than humans. I think I would not rely on such a tool: I am the judge, and machines are flawed because they are made by humans.
And you, what would you do if you were a United States judge who had to decide bail for a black man, a first-time offender accused of a non-violent crime?
An algorithm has just told you there is a 100 percent chance he will re-offend. With no further context, what do you do? Would you rely on the answer provided by the algorithm, even if your own judgment leads you to a different conclusion? Is the algorithm's answer more reliable than your opinion? One could argue that the algorithm's answer is based on a 137-question quiz.
The questions are part of a software program called Correctional Offender Management Profiling for Alternative Sanctions (COMPAS).
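To make the idea concrete, here is a minimal, purely hypothetical Python sketch of how a questionnaire-based tool could turn answers into a risk label. COMPAS's actual formula is proprietary, so the weights, cut-offs and structure below are assumptions for illustration only, not the real system.

```python
# Hypothetical sketch only: COMPAS's scoring is proprietary, so this is NOT
# its real formula. It merely illustrates how answers to a questionnaire
# could be turned into a numeric score and a coarse risk category.

def risk_score(answers, weights):
    """Weighted sum of questionnaire answers (lists of equal length)."""
    return sum(a * w for a, w in zip(answers, weights))

def risk_category(score, low_cut=10, high_cut=20):
    """Map a raw score onto the kind of low/medium/high label a judge sees."""
    if score < low_cut:
        return "low"
    if score < high_cut:
        return "medium"
    return "high"

# Example: five made-up answers (0-4 scales) and arbitrary weights.
answers = [2, 0, 3, 1, 4]
weights = [1.5, 2.0, 1.0, 0.5, 2.5]
print(risk_category(risk_score(answers, weights)))  # score 16.5 -> "medium"
```

Even in this toy version, the judge only sees the final label; the choice of questions, weights and cut-offs stays hidden, which is exactly the opacity discussed below.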
The problem with relying on such algorithms is their opacity
How could a judge rely on such a tool when he does not even know how it works? Concerns have been raised about the nature of the inputs and how the algorithm weights them. We all know that human decisions are not always good, let alone perfect; they depend on the person deciding, on their opinions, ideas and background. But at least it is a human who decides, individualizing each case and each person, something an algorithm is incapable of doing.
Do we prefer human or machine bias?
Human decision-making in criminal justice settings is often flawed, and stereotypical arguments and prohibited criteria, such as race, sexual preference or ethnic origin, often creep into judgments. The question is: can algorithms help prevent disproportionate and often arbitrary decisions?
One could argue that no bias is desirable, and that a computer can offer such an unbiased decision architecture. But algorithms are fed data that is not free of social, cultural and economic circumstances.
When we analyze language, we can see that natural language necessarily contains human biases. Training machines on language therefore means that artificial intelligence will inevitably absorb the biases that exist in a given society. Furthermore, judges already carry out risk assessments on a daily basis, for example when deciding on the probability of recidivism. This process always draws on human experience, culture, and even biases. Human empathy and other personal qualities are in fact kinds of bias that go beyond statistically measurable equality.
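A minimal toy sketch, with fabricated numbers and a made-up "neighbourhood" feature, of how biased historical records translate into biased predictions even when the algorithm itself is a neutral frequency count:

```python
# Toy illustration with fabricated data: if historical records over-represent
# arrests in one neighbourhood (for example because it was policed more
# heavily), a model trained on those records will score its residents as
# higher risk, even if the underlying behaviour is identical.

from collections import defaultdict

# Hypothetical training records: (neighbourhood, recorded_reoffense)
history = [
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),   # heavily policed area
    ("B", 0), ("B", 0), ("B", 1), ("B", 0),   # lightly policed area
]

# "Training": estimate a re-offense rate per neighbourhood from the records.
counts = defaultdict(lambda: [0, 0])  # neighbourhood -> [reoffenses, total]
for area, reoffended in history:
    counts[area][0] += reoffended
    counts[area][1] += 1

def predicted_risk(area):
    reoffenses, total = counts[area]
    return reoffenses / total

print(predicted_risk("A"))  # 0.75 -- inherits the over-policing of area A
print(predicted_risk("B"))  # 0.25 -- the data, not the code, carries the bias
```

The counting logic here contains no prejudice at all; the disparity comes entirely from the records it was given, which is the point: replacing human bias with machine "objectivity" often just launders the bias already present in the data.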