|
META TOPICPARENT | name="SecondEssay" |
< < | In our daily life, we are frequently subject to predictive algorithms that determine music recommendations, product advertising, university admission and job placement. Until recently, I only knew such algorithms were used for those purposes.
But in the American criminal justice system, predictive algorithms have been used to predict where crimes will most likely occur, who is most likely to commit a violent crime, who is likely to fail to appear at their court hearing, and who is likely to re-offend at some point in the future.
When I heard about it, I was quite surprised, since I am French and such tools don’t exist in our judicial system. I didn’t see the purpose, or how machines could be “better” than humans. I think I would not rely on such a tool, because I am the judge, and machines are flawed since they are made by humans.
What does "rely" mean, and why are we assuming that these are tools for judges? Systems being used to allocate police resources, shift by shift or in real time, may be predicting the need for policing on the basis of patterns in social activity that would not be evidence of any kind in a prosecution, but which would be "better" than other forms of resource management according to some metrics. Models predicting court appearance don't necessarily have to be designed to result in incarceration pending trial. Knowing as we do that there are interventions that will help to bring defendants to court, we can once again use pattern-analyzing software to help improve the allocation of resources for those interventions.
Having too judge-focused a view may be part of the problem. A criminal justice system has many parts, only a small one of which is judges. And making judicial determinations may be the object of the system, but is only one detail of its operation.
"Predictive algorithms," like "artificial intelligence," are pretentious phrases to describe computer programs that do pattern-matching. The form of matching involved is sophisticated, and the patterns exist in approximate and shifting forms in complex data, whether the data represent CAT images of lungs that might have tumors in them, or train station crowds that might contain pickpockets, or train operation data that might make commuter rail services a little more efficient. Belaboring the strengths and weaknesses of the algorithms is not the same as making thoughtful public policy, no matter what the algorithms are about, or what their weaknesses are.
| > > | In our daily life, we are frequently subject to predictive algorithms that determine music recommendations, product advertising, university admissions and job placements. Until recently, I thought such algorithms were used only for those purposes. But in the American criminal justice system, predictive algorithms have been used to predict where crimes will most likely occur, who is most likely to commit a violent crime, who is likely to fail to appear at their court hearing, and who is likely to re-offend at some point in the future. When I heard about this, I was quite surprised, since I am French and such tools don’t exist in our judicial system. I didn’t see the purpose of the predictive algorithms and risk assessment tools used by US police departments and judges. How could machines be “better”, more efficient than humans, since they are made by flawed people? | | | |
> > | A few years ago, the Los Angeles Police Department adopted a predictive-policing system called Predpol. Its goal was to direct police efforts more effectively in order to reduce crime. It raised questions about whether it simply displaced criminal activity from one place to another.
If we think about systems predicting court appearance, their use does not always result in incarceration. They can be seen as tools to help courts bring defendants before them. However, this raises concerns about how judges use such algorithms. | | | |
< < | And you, what would you do if you were a United States judge who has to decide bail for a black man, a first-time offender, accused of a non-violent crime?
An algorithm just told you there is a 100 percent chance he'll re-offend. With no further context, what do you do? Would you rely on the answer provided by the algorithm, even if your own judgment leads you to a different answer? Is the algorithm's answer more reliable than your opinion? One could argue that the algorithm's answer is based on a 137-question quiz. | > > | What would you do if you were a United States judge who had to decide bail for a black man, a first-time offender, accused of a non-violent crime?
An algorithm just told you there is a 100 percent chance he'll re-offend. Would you rely on the answer provided by the algorithm, even if your own judgment leads you to a different answer? Is the algorithm's answer more reliable than your opinion? One could argue that the algorithm's answer is based on a 137-question quiz. | | The questions are part of a software program called Correctional Offender Management Profiling for Alternative Sanctions (COMPAS).
A central problem with relying on such algorithms is their opacity. |
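A minimal sketch may make that opacity concrete. Assuming, purely for illustration, a questionnaire-scoring model of the logistic-regression kind — the item names, weights, and intercept below are invented, and the real COMPAS instrument is proprietary, so its internals are not public:

```python
import math

# Hypothetical illustration only: the items, weights, and bias below are
# invented for this sketch. They are NOT the actual COMPAS model, whose
# questions and scoring are proprietary and undisclosed.
WEIGHTS = {
    "age_at_first_arrest": -0.04,  # invented coefficient
    "prior_arrests": 0.30,         # invented coefficient
    "unstable_housing": 0.80,      # invented coefficient
}
BIAS = -1.5  # invented intercept

def risk_score(answers):
    """Combine questionnaire answers into one score via a logistic model."""
    z = BIAS + sum(WEIGHTS[item] * value for item, value in answers.items())
    return 1 / (1 + math.exp(-z))

# The judge sees only the final number, not the weights or the arithmetic.
defendant = {"age_at_first_arrest": 19, "prior_arrests": 2, "unstable_housing": 1}
print(f"risk score: {risk_score(defendant):.2f}")
```

From the bench, the only visible output is the final score; whether a single invented factor like `unstable_housing` dominates the result is invisible unless the vendor discloses the model.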
|