META TOPICPARENT | name="FirstEssay" |
-- By CharlotteBerg - 15 Oct 2023
> > | What if Google sent a paycheck to its users?
Introduction
In 2015, the former CEO of Google, Eric Schmidt, predicted that “the internet will disappear. There will be so many IP addresses… so many devices, sensors, things that you are wearing that […] you won’t even sense it. It will be part of your presence all the time” (Zuboff, p. 197). Schmidt, however, was not describing a vanishing of the internet but rather the phenomenon of ubiquitous computing, in which these technologies become indistinguishable from everyday life. What distinguishes the services these technologies offer is that they are “free of charge,” which tempts users to pay with their personal information instead, feeding algorithms and enabling companies to analyze user behavior in order to establish new markets, in most cases exploiting the users’ lack of awareness.
Infringed Rights
First, one must examine the rights that may be infringed, which can be divided into individual and social rights.
The most prominent infringed right appears to be the right to informational self-determination: the authority of individuals to decide for themselves when, and within what limits, information about their private life is communicated to others. Unique to the current situation are the extent and the depth of the new methods of data collection: The devices capture every aspect of the individual’s life, from getting up in the morning and going to work to their preferred room temperature. Even more, data of an unexpected level of intimacy are processed: Sensors in smartwatches notice an elevated heart rate when talking to a love interest, and algorithms measure the time we spend looking at a conversation with another person in a messaging program, analyzing feelings the individuals themselves might not even be aware of (Zuboff, p. 199 f.). Under these circumstances, awareness of the communication of one’s data to third parties is impossible.
But apart from individual rights, these types of data collection also pose a threat to society, as the study of the behavior of whole populations leads to a loss of autonomy. With advancing monitoring and surveillance, every trace of behavior can be translated into information that can be used to steer the actions of billions of people.
The Importance of Choosing
Even though this picture of mankind’s future is painted very darkly, and one might be tempted to counter it with a collective solution, one must not forget that free people must be free to make their own decisions, even if a decision is bad for them and they suffer the consequences. Therefore, the very Kantian answer to the question of how to meet the dangers arising from a ubiquitous society must be sapere aude, the courage to use one’s own reasoning. Hence, the goal should be to enlighten the subjects of ubiquitous computing about its dangers and to enable them to make a self-determined choice about the handling of their personal information.
Possible Solutions
Regulation (Legislative Approach)
One approach to ensuring the consumer’s choice in the handling of their own data is restrictive regulation. The European Union has taken regulatory approaches that shape digital culture worldwide, implementing a kind of regulatory imperialism, as Anu Bradford has rightly pointed out (Europe’s Digital Constitution, p. 55 f.). While the regulations are manifold, I would like to highlight two in this particular context: the GDPR and the Digital Content Directive (EU 2019/770). The first protects the data subject by allowing data processing only on certain legal bases, one of which is the individual’s consent; the latter regards personal data as consideration for the purposes of contract law. While regulation is a suitable mechanism for steering human behavior, these particular regulations can be criticized in many respects: The fines for non-compliance are too low (although they have risen significantly lately), so big corporations would rather pay fines than comply. But the perception of personal data in legal culture has changed: The Digital Content Directive treats personal information as consideration in the case of a digital contract. This gives users the right to conformity of the service as well as the right to a remedy if the service is not supplied (Art. 5, 11). Putting a legal value on personal data sends the right message; however, the impact of remedies for non-compliance is questionable in a field of services whose business model already depends on flawless provision. The companies’ motivation to provide the service perfectly is intrinsic to begin with.
Scholarship (Social Approach)
Another approach could be educating users in the hope that they would recognize their endangered rights and act accordingly. This rather idealistic approach ignores the unwillingness of human beings to step out of their comfort zone: Modern technology has provided the individual with countless ways to simplify their lives, which (almost) no one is willing to give up. Even more, this approach ignores the nature of the human being as a social being: Interaction happens on social networks, and the individual who does not participate feels excluded. In sum, this approach does not promise success, since no one should be forced to learn (and consequently act according to) certain ideologies.
Financial Compensation (Market Approach)
One rather neoliberal approach could be to acknowledge personal information as a good and to compensate its use monetarily. In a capitalist system, money works as a catalyst that might make visible the (literally) high price individuals pay for the provision of their data. And valuable goods are generally handled with more care. Furthermore, especially with regard to the infringed rights, financial compensation could lead to more economic fairness between corporations and consumers. And lastly, a system of compensation combined with an easily accessible enforcement mechanism such as class actions could significantly remodel the current use of data, since such compensation would have a high impact on the companies’ budgets: If every American were entitled to just $1,000 in compensation annually for the processing of their data, this would amount to a yearly remedy of $334 billion. It seems that an amount of this size might be able to reduce right-infringing processing.
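The back-of-the-envelope arithmetic above can be checked in a few lines; the population figure (roughly 334 million Americans) is the essay's own working assumption, not an official statistic:

```python
# Sketch of the aggregate-compensation estimate from the text.
# Both inputs are the essay's assumptions, not authoritative figures.
us_population = 334_000_000            # assumed number of entitled Americans
compensation_per_person = 1_000        # USD per person, per year

total_annual_remedy = us_population * compensation_per_person
print(f"${total_annual_remedy:,} per year")  # → $334,000,000,000 per year
```

The point of the multiplication is only to show the order of magnitude: even a modest per-person payment scales to hundreds of billions of dollars across a whole population.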
< < | Introduction
In a modern, technology-driven society, apparently “free” services seem to spread everywhere. The range is broad: free coupon apps promise the customer a discount on their purchase, free comparison services facilitate the planning of a weekend trip, and free email services such as GMAIL or YAHOO are used by (almost) everyone. And why wouldn’t they be? Technically easy to access and free of cost, these services are the most convenient option to use.
However, if something seems too good to be true, it probably is. None of these services are provided out of sheer philanthropy. On the contrary, they cost the providers a lot of money: In 2013, Google invested $7.3 billion in its data center infrastructure (Rich Miller: Google Spent $7.3 Billion on its Data Centers in 2013, Data Center Knowledge, https://www.datacenterknowledge.com/archives/2014/02/03/google-spent-7-3-billion-data-centers-2013, last accessed 10/14/2023). And the providers want to be repaid.
The Issue
Although most customers are not aware of it, it is no secret that big tech companies such as Alphabet (“Google”), Amazon or Meta (“Facebook”, “Instagram”) use their customers’ data to create algorithms that help them profit, for example through targeted ads. From their users’ behavior they infer individual and (based on gender, social status and origin) assumed preferences, and they propose goods the users are seemingly unable to live without. I wouldn’t go so far as to say that users are unaware that every mouse click is analyzed. Some people probably find it convenient to have a pre-selected variety of goods to choose from. Others might be a little annoyed at being pestered with targeted ads but decide to ignore them in the long run. And I would also assume that most people have in the back of their minds that their data and their user behavior are, in some way, the consideration for the services provided to them, such as a platform for exchange or for sharing holiday pictures. On the other hand, I also strongly believe that most users are not aware of the monetary value their data has for the respective companies, or else they would sell it to the data companies only for real consideration.
Legislative approaches
The balance of power therefore plays out very unevenly: On the one side, there are multi-billion-dollar tech companies that essentially hold a monopoly on certain services; on the other, there are consumers who might not be forced to participate in certain services but who follow the deeply human urge to be part of something. One could conclude that the customer, the consumer in fact, needs to be protected and made aware of the value of their data.
I am, at the moment, not aware of any U.S. legislative efforts to balance out this issue. However, the European Union has been addressing it by passing several legislative acts: the GDPR and Directive 2019/770 on certain aspects concerning contracts for the supply of digital content and digital services (the “digital content directive”).
Informed consent pursuant to the GDPR
The general idea of the GDPR is to give data subjects full control over their personal data by establishing legal bases for data processing. The most relevant basis for social media related activities is the data subject’s consent to the processing. The theory is quite convincing: As data subjects are not forced to participate in activities on social media platforms, they should be able to give informed consent to each instance of data processing. However, privacy notices tend to be incomprehensible to the average user. Users can (and, given the notices’ enormous length, will) check the acknowledgement box and move on without having truly understood which of their (sensitive) personal data is used for which purposes, how long the data is stored, and to which third parties the respective data might be sold. It is also quite evident that platform providers simply ignore the legislation: In May 2023, the Irish Data Protection Commission (DPC) issued a €1.2 billion fine against Meta (https://iapp.org/news/a/meta-fined-record-eu1-2-billion-under-gdpr-by-irelands-dpc/). The social media giant would be perfectly capable of adhering to the GDPR and implementing the necessary technical and organizational measures. Cynics might say Meta simply gains a bigger profit by selling the information to third parties.
Consideration and consumer rights pursuant to EU2019/770
With its 2019 framework, the European Union attempted to protect consumers from legal uncertainty in e-commerce by harmonizing and extending their rights (EU 2019/770, Rec. 1, 5). One of the key features of the directive is to entitle consumers to contractual remedies in the context of digital services for which the consumer provided data instead of money as consideration (Id., Rec. 24). This new approach is a fundamental change in European contract law. As much as I welcome this approach to strengthening consumer rights, we will have to wait and see its practical implications. I, for one, do not believe that any consumer will claim remedies from a social media giant if its service is not working, because they might not be aware of their rights and they shy away from the confrontation. Nor do I believe that social media giants will voluntarily fail to provide their services. How else would they attract and keep users, and their data, in order to make a profit? Although the intention is good, this approach does not seem fitting.
Is education the better approach?
The answer probably lies in the informed and responsible data subject, especially in educating people at a young age. And there is a lot of education about the dangers of social media. However, as long as these kinds of services, apparently free of charge, seem to be the best viable option for participating in one’s peer group, I am afraid that users will not treat their data as a self-determined good.
What is a "self-determined good"? What does it matter what data "is"?
I think Shoshana Zuboff's book deserves more of your attention. Her
account adds precision and clarity to your conceptions; a quotation
and a reference could save you easily 200 words and sharpen your
next draft.
GDPR achieves here what it mostly achieves in the world: it wastes a
ton of words. In the end, as you perceive, it changes the language
in the box you automatically check to add your complicity to the
parasite's infection of the whole human race. The existing draft
seems to think this is primarily a matter of unfair dealing, in
which humanity is not being paid enough to let the parasite in. If
there are actually any more significant rights or interests
involved, you don't mention them.
The concluding 85 words seem to have arrived from nowhere, which
suggests perhaps that the draft is already in
metamorphosis. Education such as you mention is surely not the
answer: I am doing it right now, after all, and you can see how
worthless that is.
I think the real subject of this essay is hovering just out of view,
obscured in ways that a more precise draft, more informed by Zuboff
and more economical with GDPR wrapping paper, would bring into
view. If Google sent everyone a big check, would everything then be
all right?