Law in the Internet Society

LudovicoColettiFirstEssay 1 - 13 Oct 2023 - Main.LudovicoColetti
Social Credit Systems

In the dystopian world of the TV show "Black Mirror," the episode "Nosedive" depicts a society where social media ratings determine one’s socioeconomic status and access to essential services. Using a mobile application, everyone constantly rates everyone else on a five-point scale. Those with higher scores can access better services and exclusive clubs, while those with low scores are penalized in many ways. While this may seem like far-fetched fiction, today’s reality may not be too distant from this portrayal.

The first example that comes to mind is China’s Social Credit System (SCS), developed between 2014 and 2020. The SCS uses artificial intelligence "to develop comprehensive data-driven structures for management around algorithms that can produce real time reward-punishment structures for social-legal-economic and other behaviors" (Larry Cata Backer, "Next Generation Law: Data-Driven Governance and Accountability-Based Regulatory Systems in the West, and Social Credit Regimes in China," 2018). In reality, the SCS does not rely on a single universal score but rather on a series of blacklists and redlists managed at different levels (municipal, local, or national). Each authority can manage its own blacklist (e.g., of those who failed to pay fines or child support), and they all converge into the National Credit Information Sharing Platform. As Kevin Werbach notes in his 2022 article “Orwell That Ends Well? Social Credit as Regulation for the Algorithmic Age,” this makes it possible that "grade A taxpayers receive customs fee waivers and low-interest loans, in addition to the home benefits offered by the tax collection authority". However, Prof. Werbach believes that the Western depiction of the SCS is exaggeratedly negative, especially in a world where governments and corporations already track our behavior extensively. He sees the Nosedive scenario as more closely resembling the rating systems of Uber or eBay, expanded beyond the boundaries of a single service.

He cites Yuval Noah Harari’s idea that free-market capitalism and state-controlled communism can be regarded as distinct data-processing systems: the former decentralized, the latter centralized.

Starting from this assumption, it should come as no surprise that Western versions of social credit experiments are being conducted mainly by private corporations, especially in the financial sector. Since the 2008 financial crisis, many "fintech" online lenders have experimented with new scoring models for establishing creditworthiness. Historically, banks have used scoring models that compute a person's credit score from past financial behavior and additional factors with predictive value. This practice is regulated by statutes such as the Fair Credit Reporting Act and the Equal Credit Opportunity Act (ECOA), the latter of which prohibits credit discrimination on the basis of race, color, religion, national origin, sex, marital status, and age.

But the new models are based on a person's "social footprint," revealed by elements such as their social circle or shopping habits: surprisingly, it appears that buying felt pads has a positive influence on how the algorithms forecast one’s financial behavior. Such information is often collected with the individual’s consent. As outlined in the 2016 article “On Social Credit and the Right to Be Unnetworked” by Nizan Geslevich Packin and Yafit Lev-Aretz, these practices cause privacy harms at two levels: direct, to the loan seeker, and derivative, to the loan seeker's contacts, as “social credit systems inherently implicate the information of third parties, who never agreed their information could be collected, evaluated, or analyzed.” They also favor social segregation, reduce social mobility, and increase the risk of arbitrary decisions based on incorrect data. For example, the use of social credit systems can nullify the above-mentioned limits set forth in the ECOA, as attributes like gender and race are easily detectable by the algorithm because “they are typically explicitly or implicitly encoded in rich data sets”. The authors believe the solution should be the introduction of a right to be unnetworked, i.e., to opt out of being socially scored.

Turning our gaze to Europe, we see that the risk of discrimination highlighted above has already become painfully real. In 2013, the Dutch tax authorities deployed a self-learning algorithm to detect child care benefits fraud. The algorithm trained itself to use risk indicators such as having a low income or belonging to an ethnic minority. As a result, thousands of families were wrongly labeled as fraudsters and suffered severe consequences. This led to the Dutch Government’s resignation and a €3.7 million fine on the Tax Administration from the Autoriteit Persoonsgegevens, the Dutch Data Protection Authority, for breaching several GDPR rules. In particular, the Authority found that the Tax Administration had no legal basis for processing the personal data used as risk indicators (under the GDPR, personal data may be processed only if one of the legal bases listed in Article 6 applies).

In the hyper-regulated European Union, the GDPR has attempted to address the issues raised by social scoring systems (and other systems meant to “profile” individuals) through Article 22, which gives individuals the right not to be subject to automated decision-making, including profiling, and to obtain human intervention whenever their personal data is used to make a decision that produces a legal effect concerning them (e.g., entering into a contract with that individual). Additionally, the proposed EU AI Act aims to place serious limitations on "AI systems providing social scoring of natural persons for general purposes by public authorities." These limitations prohibit social scoring systems that lead to detrimental or unfair treatment in unrelated social contexts or based on unjustified or disproportionate criteria.

The extent and effect of these limitations remain to be tested, but it seems clear that a thorough reflection on the risks of social scoring systems must begin as soon as possible, before reality overtakes fiction.

