Law in the Internet Society

Behind the Algorithm: Safeguarding Fairness in Automated Decisions

-- By AndreaRuedas - 26 Nov 2024

From photographic tracking to fraud alerts to risk assessment in prisons, technologies that use algorithms to make decisions in life-altering situations are widespread. These systems automate decisions within complex social structures like welfare, healthcare, and criminal justice because they promise efficiency and precision, but they also risk exacerbating systemic inequities. The more vulnerable people are, whether because of socioeconomic conditions or race, the more likely they are to be harmed by automated decisions. Virginia Eubanks’ analysis of automated welfare systems shows how these technologies disproportionately affect marginalized communities, automating decisions that profoundly shape access to basic rights and services.

To understand the risks of algorithmic decision-making, it is crucial to examine how these systems operate within existing power structures, how little agency they leave individuals, and the challenges they pose to procedural due process under the law.

Marginalization and Automation

Automated decision-making systems do not operate in a vacuum; they are deeply embedded in social structures that already reflect systemic biases, and they amplify existing inequalities by encoding those biases into datasets and algorithms. For example, automated risk assessment tools in criminal justice are notorious for reinforcing racial and socioeconomic disparities: they rely on historical data that reflects systemic discrimination to guide sentencing, parole decisions, and incarceration practices. Similarly, Eubanks’ analysis of welfare automation illustrates how poor individuals are disproportionately subjected to surveillance and denied critical resources when errors or biases arise.

These systems reduce complex human lives to measurable metrics. While such automation can in theory enhance consistency, it often amplifies the biases present in data created by humans. This disproportionately impacts low-income people, people of color, and women, who encounter far more data collection through welfare offices, border crossings, and health care systems.

Eubanks highlights Indiana’s welfare automation, where recipients were denied benefits because of algorithmic errors and were unable to correct mistakes in a substantially automated system with minimal human interaction. Entire communities were left without medication, housing, or food. When systems designed to provide social assistance lose their human element, they become mechanisms of exclusion. Those affected often lack the resources or knowledge to challenge errors, creating a profound power imbalance in which decision-makers are hidden behind technology while vulnerable individuals struggle to fight back.

The harm caused by automated systems extends beyond welfare. Similar dynamics are evident in risk assessments used in criminal justice, predictive policing, and hiring algorithms. In all these cases, automated decisions disproportionately harm already marginalized groups, making it harder to access justice, employment, or essential services.

Due Process and Legal Accountability

The central question is not merely who is most affected by algorithmic decision-making but how to address the lack of accountability surrounding these technologies. At the heart of these issues is a legal question: How should the law protect individuals against the harms of automated decision-making? The principle of procedural due process in the U.S. Constitution provides a critical lens for evaluating these systems.

Procedural due process guarantees that individuals cannot be deprived of life, liberty, or property without fair procedures. In Goldberg v. Kelly (1970), the Supreme Court held that welfare benefits could not be terminated without a pre-termination hearing, because such action deprives recipients of property and therefore requires due process. The same principle should apply to decisions made by algorithms: whether an adverse decision is the result of a simple rule or a complex algorithm, individuals must have the opportunity to understand, challenge, and correct it.

However, automated systems often obscure these protections. Unlike a human decision-maker who can be cross-examined or held accountable for bias, an algorithm operates as a black box. Recipients are often unaware of how or why decisions were made, and errors are difficult to contest because of the lack of transparency. Algorithms involve complex models and vast datasets, making them hard for non-experts, including judges, lawyers, and defendants, to comprehend, much less combat. Additionally, the data used to build algorithms frequently contains biases that reflect historical inequities. Yet these biases are not visible to users, nor are the assumptions built into the system apparent. As a result, decisions may be discriminatory without clear evidence of intent, in stark contrast to human decision-makers, who can be held accountable for prejudice.
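A minimal sketch, using entirely invented numbers and a hypothetical “denial risk” model rather than any system discussed here, illustrates how a facially neutral algorithm can carry historical bias forward through a proxy feature:

# Toy illustration only: all data is synthetic and the feature names are invented.
# It shows how a model trained on biased historical outcomes reproduces that bias
# through a correlated proxy, even though the protected attribute is never used.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, size=n)            # 1 = marginalized group (never shown to the model)
neighborhood = (rng.random(n) < 0.2 + 0.6 * group).astype(int)  # proxy correlated with group
need = rng.random(n)                          # the factor that should drive the decision

# "Historical" denials: identical need in both groups, but extra denials imposed on group 1.
denied_past = (need < 0.4) | ((group == 1) & (rng.random(n) < 0.25))

# Train only on facially neutral features; the proxy smuggles the bias in.
X = np.column_stack([need, neighborhood])
risk = LogisticRegression().fit(X, denied_past).predict_proba(X)[:, 1]

for g in (0, 1):
    print(f"group {g}: mean predicted denial risk = {risk[group == g].mean():.2f}")

No line of this toy example contains an explicit rule against the marginalized group, yet its members receive systematically higher predicted risk of denial, because the proxy carries the bias of past decisions forward; the discrimination is real, but the intent is nowhere to be found in the code.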

For example, in Indiana’s welfare system, recipients mistakenly denied benefits faced significant obstacles in correcting those mistakes. The system’s lack of clarity, combined with limited oversight, left individuals without basic necessities, highlighting the failure to provide meaningful procedural protections. These factors make algorithms resistant to traditional mechanisms of legal accountability, leaving affected individuals struggling to understand or fix errors.

The Illusion of Consent

One justification for automated systems is that individuals "consent" to data collection and processing and are given an avenue for remedy if the collected data is wrong. However, this idea of consent is flawed. Marginalized individuals often encounter data collection as a condition for accessing essential services like healthcare, education, or welfare. This coerced consent undermines any meaningful agency.

Even if individuals are granted a theoretical "right to correct" errors, the scale and complexity of modern datasets make this right practically meaningless. How does one correct an algorithm without the appropriate tools or expertise, especially while struggling to meet basic needs? These problems are not unique to algorithmic decision-making: human biases, political ideology, and resource constraints have long contributed to injustice. The difference is that algorithms cloak bias in a veneer of objectivity, making discriminatory practices harder to identify and challenge.

Towards a Legal Framework for Protecting the Vulnerable

The failures of automated systems highlight the need for legal reforms to protect vulnerable populations. The law must evolve to ensure due process is upheld, requiring automated systems to provide clear explanations for decisions and truly accessible ways to contest errors. Transparency in algorithmic decision-making is essential, with mandatory disclosures about how systems operate, the data they use, and potential biases.

Additionally, the concept of consent should be redefined to ensure that data collection is freely agreed to and truly informed. By strengthening procedural safeguards, improving transparency, and guaranteeing access to remedies, the law can protect the vulnerable and hold automated systems accountable, ensuring that fairness is not sacrificed for efficiency.

Sources Cited

Eubanks, Virginia. 2017. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin’s Press. Introduction and Chapter 2.
