Jasmine Bovia
Law and the Internet Society

PSRs, RAIs, and the Fight Against AI
Introduction:
Although Artificial Intelligence models have existed in some form since the 1950s, 2022 marked the beginning of what has become known as the “AI Boom,” the rapid expansion of Artificial Intelligence usage into the mainstream. This technological boom, spurred by large-language models such as OpenAI’s ChatGPT and Meta’s Llama, has become increasingly visible not only in the public sphere but in a number of professional fields, including journalism, medicine, and, notably, law. This paper examines the potentially negative consequences of AI usage in the legal sector, specifically the judiciary. It then suggests some preliminary measures to limit, if not completely curb, the role AI plays in judgment.
< < | "What is a VPN?" | | | |
< < | When the average consumer inputs this Google search, the first thing that pops up isn’t the Google dictionary result. In fact, it’s not even an option on the page. What does pop up is a link to an article written by NordVPN? , better known as the sponsor of any YouTube? video with over 10,000 views. In a world where multiple competitors have been offering the same product for years, a relatively unbiased definition should be simple enough to find. The fact that a popular brand’s attempt to sell you a VPN pops up before you even know what it is demonstrates a much larger problem. Rather than empowering people with the tools to fully take control of their own privacy, companies like Nord, Express, and Surfshark jump to charge consumers high prices for much, much less privacy than they could easily get on their own. Over the course of this essay, I will discuss what VPNs can/should do, and then discuss why many paid VPN services fail to offer the promised protections. | > > | AI and the Judiciary: | | | |
While the usage of Artificial Intelligence within the legal sphere generally has been met with rightful controversy, AI’s effect on the judiciary is especially troubling. According to the American Bar Association, numerous states have begun incorporating AI models into judicial practice as evaluation tools meant to aid in the generation of Pre-Sentence Reports (PSRs). Risk Assessment Instruments (RAIs) are one specific class of AI model that relies on the fact patterns and outcomes of previous cases to calculate metrics such as recidivism risk for criminal defendants. These metrics play an increasingly instrumental role in PSRs and, consequently, in the sentencing outcomes of criminal cases. Sentencing courts have become increasingly reliant on these models, to disastrous effect; the use of this software in PSR generation has already been the subject of legal challenges on Due Process grounds.
An investigative article published by ProPublica highlighted one of the glaring issues with state judiciaries’ use of AI tools in criminal cases. Although limited data currently exists on these models, studies are beginning to show that risk assessment tools perpetuate racial bias in their assessments. The recidivism-risk software COMPAS, developed by the for-profit company Equivant, is a stark example: Black defendants were almost twice as likely as white defendants to be wrongly labeled as at “high risk” of recidivism. Conversely, white defendants were much more likely than Black defendants to be incorrectly labeled as at “low risk” of reoffense. And this is far from the only problem with models like COMPAS.
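The disparity ProPublica measured is, at bottom, a difference in error rates between groups: who gets wrongly labeled high-risk, and who gets wrongly labeled low-risk. A minimal sketch of how such rates are computed, using invented records rather than any real COMPAS data:

```python
def error_rates(records):
    """Compute per-group error rates from hypothetical risk-label records.

    records: list of (group, predicted_high_risk, actually_reoffended) tuples.
    Returns, for each group, the false positive rate (labeled high-risk but
    did not reoffend) and false negative rate (labeled low-risk but did).
    """
    stats = {}
    for group, predicted_high, reoffended in records:
        s = stats.setdefault(group, {"fp": 0, "neg": 0, "fn": 0, "pos": 0})
        if reoffended:
            s["pos"] += 1
            if not predicted_high:
                s["fn"] += 1  # labeled low-risk, but did reoffend
        else:
            s["neg"] += 1
            if predicted_high:
                s["fp"] += 1  # labeled high-risk, but did not reoffend
    return {g: {"false_positive_rate": s["fp"] / s["neg"],
                "false_negative_rate": s["fn"] / s["pos"]}
            for g, s in stats.items()}
```

A tool can be "accurate" in the aggregate while these two rates diverge sharply between groups, which is exactly the pattern the ProPublica analysis reported.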
Another potential issue with sentencing courts’ use of these tools is inherent to their very nature. Artificial intelligence learns by constantly adapting its output to expanding data sets. These ever-evolving algorithms can mean skewed results for defendants as more data becomes available: the machine’s determination of a fair sentence for a defendant one day can, in theory, be completely different from its determination for a future defendant with an identical fact pattern.
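The drift described above can be made concrete with a toy model. This is a deliberately simplified sketch, not any real RAI: here the “risk score” is just the reoffense rate among past cases sharing a fact pattern, so the model changes every time the data set grows, and an identical defendant can score differently on different days.

```python
from collections import defaultdict

class ToyRiskModel:
    """Toy risk model: score = reoffense rate among matching past cases."""

    def __init__(self):
        self.outcomes = defaultdict(list)  # fact pattern -> list of outcomes

    def observe(self, fact_pattern, reoffended):
        """Add one resolved case to the ever-expanding data set."""
        self.outcomes[fact_pattern].append(reoffended)

    def risk_score(self, fact_pattern):
        """Fraction of matching past cases that ended in reoffense."""
        past = self.outcomes[fact_pattern]
        return sum(past) / len(past) if past else 0.0

model = ToyRiskModel()
model.observe("pattern-X", False)
score_day_one = model.risk_score("pattern-X")  # 0.0

model.observe("pattern-X", True)               # new data arrives overnight
score_day_two = model.risk_score("pattern-X")  # 0.5: same facts, new score
```

The same fact pattern yields a different score once the underlying data changes, which is the consistency problem the paragraph above identifies.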
Even further, the American Bar Association correctly posits that the use of computer-generated evaluations for matters such as recidivism risk removes the necessary human element of sentencing. Where human judges can see beyond fact patterns and take a more nuanced view of the defendants in front of them, AI software sees only the numbers, producing distressingly clinical results. With these problems in mind, it is easy to see why the use of AI tools within the judiciary remains controversial.
Preliminary Measures:
Barring an absolute moratorium on the use of AI tools in the judiciary, which would be difficult to enforce in practice, there are mitigating measures that may be taken to minimize the negative impacts of risk assessment instruments (RAIs) on the sentencing process. For one, regulation could limit what factors go into determining matters like recidivism risk. Currently, tools like COMPAS utilize information relating to a defendant’s identity, including race, sex, and age, when calculating risk factors. To avoid building the same biases that plague the current sentencing process into RAI algorithms, developers should be explicitly required to exclude these demographic inputs.
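The exclusion requirement suggested above amounts to a simple rule: protected demographic fields never reach the scoring model. A minimal sketch, with hypothetical field names not drawn from any real RAI’s schema:

```python
# Demographic fields that regulation would bar from risk scoring (hypothetical
# field names for illustration; no real RAI's schema is implied).
PROTECTED_FIELDS = {"race", "sex", "age"}

def redact_for_scoring(case_record):
    """Return a copy of the case record with protected demographic fields removed."""
    return {k: v for k, v in case_record.items() if k not in PROTECTED_FIELDS}

record = {"race": "example", "sex": "example", "age": 34,
          "prior_convictions": 2, "offense_class": "misdemeanor"}
redacted = redact_for_scoring(record)  # only non-demographic fields survive
```

Note that dropping explicit fields does not, by itself, remove proxies for them (a zip code can stand in for race), so a rule like this would be a floor for regulation, not a ceiling.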
Further, the companies that develop RAIs should be required to disclose what considerations go into their risk assessments and the pre-sentence reports they inform. The confidential nature of RAIs has already been the subject of legal challenge: in State v. Loomis, a defendant argued, inter alia, that the COMPAS software did not report what data went into the generation of his risk assessment, making it impossible to challenge the instrument’s accuracy and validity. His point was well taken; if pre-sentence reports are to be made accessible to the parties of a case, why should the risk assessment algorithms that help generate those reports not be equally open to scrutiny and potential challenge on due process grounds?
Lastly, software developers should be required to analyze the algorithmic outputs of the tools they create and to publish both their process and their results. For there to be greater transparency and scrutiny in the judiciary’s use of AI, all stakeholders need to share responsibility, and accountability, for the potential failings and shortcomings of these risk assessment tools. Allowing developers to profit from the use of their algorithms in sentencing without any actual stake in the outcomes disincentivizes them from ensuring that their models are accurate, reliable, and nondiscriminatory. While ultimate responsibility for case outcomes should lie with the government, any party with a stake in criminal cases should bear at least some accountability for the execution, or failure, of justice.
These solutions are only launching points for a longer conversation about the use of AI in the criminal justice system. There remains a larger discussion to be had about the use of AI by police, as well as the privacy problems that plague the integration of artificial intelligence into government as a whole. These preliminary regulations would, however, address the issue of AI in the judiciary pending more substantive changes. As the AI boom accelerates, the unregulated use of these so-called “risk assessment tools” will only become more of a risk in and of itself.
Sources:
Hillman, Noel L. “The Use of Artificial Intelligence in Gauging the Risk of Recidivism.” American Bar Association, 1 Jan. 2019, www.americanbar.org/groups/judicial/publications/judges_journal/2019/winter/the-use-artificial-intelligence-gauging-risk-recidivism.
Garrett, Brandon, and John Monahan. “Assessing Risk: The Use of Risk Assessment in Sentencing.” Bolch Judicial Institute at Duke Law, vol. 103, no. 2, 2019.
State v. Loomis, 371 Wis. 2d 235 (2016).
Angwin, Julia, et al. “Machine Bias.” ProPublica, 23 May 2016, www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
18 U.S.C. § 3552(d).