Law in the Internet Society

Behind the Algorithm: Safeguarding Fairness in Automated Decisions

 
-- By AndreaRuedas - 26 Nov 2024
 
From photographic tracking to fraud alerts to risk assessments in prisons, technologies that use algorithms for decision-making in life-altering situations are widespread. These systems are adopted across complex social structures such as welfare, healthcare, and criminal justice for their promised efficiency and precision, but they also risk exacerbating systemic inequities. The more vulnerable people are, whether because of socioeconomic conditions or race, the more likely they are to be mistreated by automated decisions. Virginia Eubanks’ analysis of automated welfare systems highlights how these technologies disproportionately affect marginalized communities, automating decisions that profoundly shape access to basic rights and services.
 
To understand the risks of algorithmic decision-making, it is crucial to examine how these systems operate within existing power structures, the lack of agency they afford individuals, and the challenges they pose to procedural due process in law.
 

Marginalization and Automation

 
 
Automated decision-making systems do not operate in a vacuum; they are deeply rooted in complex social structures that already reflect systemic biases. Because they learn from data produced by those structures, they tend to amplify existing inequalities. For example, automated risk assessment tools in criminal justice are notorious for reinforcing racial and socioeconomic disparities: they rely on historical data that reflects systemic discrimination to guide sentencing, parole decisions, and incarceration practices. Similarly, Eubanks’ analysis of welfare automation illustrates how poor individuals are disproportionately subjected to surveillance and denied critical resources when errors or biases arise.
 
These systems reduce complex human lives to measurable metrics. While such automation can in theory enhance consistency, in practice it often amplifies the biases present in human-generated data. The impact falls disproportionately on groups such as low-income people, people of color, and women, who encounter more data-collection systems through welfare offices, borders, and health care.
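To make this mechanism concrete, the short sketch below (written in Python, with entirely fabricated records and a deliberately crude scoring rule, not any agency's actual system) shows how a "risk score" that merely learns historical denial rates reproduces whatever disparity the historical record already contains:

from collections import defaultdict

# Fabricated training records: past decisions, grouped by neighborhood.
# Neighborhood "A" was historically surveilled and denied more often.
historical_decisions = [
    {"neighborhood": "A", "denied": True},
    {"neighborhood": "A", "denied": True},
    {"neighborhood": "A", "denied": False},
    {"neighborhood": "B", "denied": False},
    {"neighborhood": "B", "denied": False},
    {"neighborhood": "B", "denied": True},
]

# "Training": the model simply learns the historical denial rate per group.
totals, denials = defaultdict(int), defaultdict(int)
for record in historical_decisions:
    totals[record["neighborhood"]] += 1
    denials[record["neighborhood"]] += int(record["denied"])

def risk_score(neighborhood: str) -> float:
    """The 'prediction' is just the historical denial rate for that group."""
    return denials[neighborhood] / totals[neighborhood]

# New applicants are scored by where they live, not by anything they did.
for hood in ("A", "B"):
    print(f"Neighborhood {hood}: predicted risk {risk_score(hood):.2f}")

The arithmetic here is neutral; the disparity enters entirely through the historical record the rule was built from, which is precisely how past bias becomes the future decision rule.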
 
Eubanks highlights Indiana’s welfare automation, where recipients were denied benefits because of algorithmic errors and were unable to correct mistakes in a heavily automated system with minimal human interaction. Entire communities were left without medication, housing, or food. When systems designed to provide social assistance lose their human element, they become mechanisms of exclusion. Those affected often lack the resources or knowledge to challenge errors, creating a profound power imbalance: decision-makers are hidden behind technology, while vulnerable individuals struggle to fight back.
 
The harm caused by automated systems extends beyond welfare. Similar dynamics are evident in risk assessments used in criminal justice, predictive policing, and hiring algorithms. In all these cases, automated decisions disproportionately harm already marginalized groups, making it harder to access justice, employment, or essential services.
 

Due Process and Legal Accountability

 
The central question is not merely who is most affected by algorithmic decision-making but how to address the lack of accountability surrounding these technologies. At its heart, this is a legal problem: how should the law protect individuals against the harms of automated decision-making? The principle of procedural due process in the U.S. Constitution provides a critical lens for evaluating these systems.
 
Procedural due process guarantees that individuals cannot be deprived of life, liberty, or property without fair procedures. In Goldberg v. Kelly, the Supreme Court held that welfare benefits could not be terminated without a hearing, because such a termination deprives the recipient of property and therefore requires due process. The same principle should apply to decisions made by algorithms: whether an adverse decision results from a simple rule or a complex model, the individual must have the opportunity to understand, challenge, and correct it.
 
However, automated systems often obscure these protections. Unlike a human decision-maker, who can be cross-examined or held accountable for bias, an algorithm operates as a black box. Recipients are often unaware of how or why decisions were made, and errors are difficult to contest because of a lack of transparency. Algorithms involve complex models and vast datasets, which non-experts, including judges, lawyers, and defendants, can rarely comprehend, much less contest. Additionally, the data used to build algorithms frequently contains biases that reflect historical inequities, yet these biases are not visible to users, nor are the system's assumptions apparent. As a result, decisions may be discriminatory without clear evidence of intent, in stark contrast to human decision-makers, who can be held accountable for prejudice.
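A purely hypothetical sketch can illustrate this opacity problem. In the toy scoring function below (invented weights, features, and postal codes, not any deployed model), no protected trait ever appears in the code, yet an assumed proxy feature carries the correlation, and the affected person sees only the final outcome, never the weights or the data behind it:

# Hypothetical illustration of bias without visible intent: no protected trait
# appears anywhere in the code, but an assumed proxy feature (postal code)
# carries the correlation. The applicant sees only the final outcome.

WEIGHTS = {"income": 0.4, "postal_code_risk": 0.6}   # invented weights

# Assumed lookup derived from historical data; historically over-policed or
# under-served areas inherit high "risk" values from that history.
POSTAL_CODE_RISK = {"10027": 0.9, "10021": 0.1}

def score_applicant(income_normalized: float, postal_code: str) -> float:
    """Opaque composite score; the applicant never sees this breakdown."""
    return (WEIGHTS["income"] * (1 - income_normalized)
            + WEIGHTS["postal_code_risk"] * POSTAL_CODE_RISK.get(postal_code, 0.5))

def decide(income_normalized: float, postal_code: str) -> str:
    # The only output the affected person ever receives:
    return "DENIED" if score_applicant(income_normalized, postal_code) > 0.5 else "APPROVED"

# Two applicants with identical income but different neighborhoods:
print(decide(0.5, "10027"))  # DENIED
print(decide(0.5, "10021"))  # APPROVED

Under the due process framing above, the problem is not the arithmetic itself but that nothing in such a design obliges anyone to show the affected person the breakdown it computes.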
 
For example, in Indiana’s welfare system, recipients who were wrongly denied benefits faced significant obstacles in correcting the errors. The system’s lack of clarity, combined with limited oversight, left individuals without basic necessities and highlighted the failure to provide meaningful procedural protections. These factors make algorithms resistant to traditional mechanisms of legal accountability, leaving affected individuals struggling to understand or fix errors.
 
 

The Illusion of Consent

 
One justification for automated systems is that individuals "consent" to data collection and processing and are given an avenue for remedy if the collected data is wrong. However, this idea of consent is flawed. Marginalized individuals often encounter data collection as a condition for accessing essential services like healthcare, education, or welfare. This coerced consent undermines any meaningful agency.
 
Even if individuals are granted a theoretical "right to correct" errors, the scale and complexity of modern datasets make this right practically meaningless. How does one correct an algorithm without the appropriate tools or expertise, especially when struggling to meet basic needs? These issues are not unique to algorithmic decision-making. Human biases, political ideology, and resource constraints have long contributed to injustices. The difference is that algorithms cloak biases in a veneer of objectivity, making discriminatory practices harder to identify and challenge.
 

Towards a Legal Framework for Protecting the Vulnerable

 
The failures of automated systems highlight the need for legal reforms to protect vulnerable populations. The law must evolve to ensure due process is upheld, requiring automated systems to provide clear explanations for decisions and truly accessible ways to contest errors. Transparency in algorithmic decision-making is essential, with mandatory disclosures about how systems operate, the data they use, and potential biases.
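What a "clear explanation" mandate might require can be made concrete with a sketch of a machine-readable disclosure record. The fields below are illustrative assumptions about what such a record could contain, not a reference to any existing statute or deployed standard:

from dataclasses import dataclass
from typing import List

@dataclass
class DecisionDisclosure:
    """Hypothetical record an agency could be required to issue with every adverse automated decision."""
    decision_id: str              # identifier the recipient can cite on appeal
    outcome: str                  # plain-language statement of what was decided
    model_version: str            # which system and version produced the decision
    inputs_used: List[str]        # the data fields actually consulted
    principal_reasons: List[str]  # plain-language reasons for the outcome
    human_reviewer: str           # named person accountable for the decision
    how_to_contest: str           # concrete, accessible appeal instructions
    deadline_days: int            # time allowed to contest before the decision takes effect

example = DecisionDisclosure(
    decision_id="2024-11-000123",
    outcome="benefits terminated",
    model_version="eligibility-screen v3.2 (hypothetical)",
    inputs_used=["reported income", "household size", "case file notes"],
    principal_reasons=["reported income exceeded the eligibility threshold in October"],
    human_reviewer="Caseworker of record, County Office 14",
    how_to_contest="Request a hearing by phone, mail, or in person at the listed office",
    deadline_days=30,
)
print(example.principal_reasons[0])

Nothing about such a record resolves the underlying bias, but it would give the affected person, and a reviewing court, something concrete to examine and contest.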
 
Additionally, the concept of consent should be redefined so that data collection is neither coerced nor uninformed. By strengthening procedural safeguards, improving transparency, and guaranteeing access to remedies, the law can protect the vulnerable and hold automated systems accountable, so that fairness is not sacrificed for efficiency.
 
 

Sources Cited

Eubanks, Virginia. 2017. Introduction and Chapter 2 in Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin’s Press.

