The Value of Privacy
-- By LindaShen - 29 Oct 2012
As Professor Moglen points out, one critical problem with today's internet is that it compromises our privacy, offering additional "services" such as targeted advertising when it could just as well operate at zero marginal cost without them. While internet users generally understand that their privacy is compromised by using products from the likes of Google and Facebook, it is less clear what this privacy loss costs, both for individual users and for society. Relatedly, it is unclear whether such privacy impositions create benefits beyond the revenue gains for the companies that impose them. All else equal, most users would elect to maintain their privacy, but is there a benefit to giving up some privacy that significantly diminishes the net cost? If not, why do people continue to behave in ways that facilitate the compromise of their privacy?
Two Questions
This paper explores two questions associated with the issue above: (1) the empirical question of how much users (facially) value their privacy, as approximated by their pattern of behavior in a variety of situations, and (2) the normative question of how much users should value their privacy, taking into account factors such as possible user ignorance (either of consequences or of alternatives) and social externalities.
Empirical Question
One way to approximate the value that users assign to privacy is by extrapolating from other real-world scenarios in which people face privacy compromises. A quick survey of non-internet examples suggests that many people are willing to at least partially compromise their privacy regularly. People often sign up for reward cards that allow the tracking of their purchases in exchange for store discounts, use dressing rooms despite the possibility of being monitored by personnel, and do not unsubscribe from physical junk mail despite its inconvenience, possibly because they account for the probability that certain pieces of junk mail are items they actually wish to receive.
Thus, people’s willingness to expose personal information on the internet seems consistent with behavior elsewhere. Moreover, with regard to the internet, it is not simply that users are willing to share information with service-providers like Google and Facebook; it is that even while using such services, they frequently share beyond the minimum required. For instance, although Facebook provides customizable privacy filters, people often choose to share information with all “friends” (often hundreds), or “friends of friends” (often thousands).
While information sharing is a large part of the purpose of both Facebook in particular and the internet more generally, the sharing that occurs on social media sites may foster two related mentalities: (A) that one must "give to get" – i.e., that privacy is a good that one can surrender in exchange for service by the host site or for information shared by others; and (B) that surrendering personal information to distant acquaintances or strangers is no big deal. (A) may arise from the fact that sharing information among friends and networks is a loosely reciprocal process: the more content we share, the more likely we are to receive feedback content (either commentary or re-sharing by others) that makes the sharing process worthwhile. (B) may arise from the fact that the internet has generally made personal facts about us more public, and is perhaps reinforced by a cultural exhibitionism and voyeurism that has developed in the process, or by a groupthink that condones it. A third common mentality is that sharing information with a disembodied entity is less disconcerting than sharing it with identifiable individuals. Together, these mentalities create an inhospitable environment for convincing individuals of the need to zealously guard their privacy.
Normative Question
The cost of compromising privacy is fairly user-dependent. Some people have more to hide than others, and some care more about disclosure. (For instance, Professor Moglen pays for groceries with cash, while I use reward cards, allowing stores to track purchase patterns.) However, is this because I’m less informed about the risks of sharing my personal information? Am I more selfish for not internalizing the costs that my information-sharing imposes on others?
There are certainly potentially serious risks to sharing personal information. While most users understand the nuisance of targeted advertisements, they may not fully appreciate the impact of aggregation or prediction: insurance companies may learn individual facts and demographic behavioral patterns and use them to alter premiums, and the government may do much more. Additionally, information shared about us may be out of our control: through others' sharing, Google and Facebook may possess my conversations and images even if I personally use neither service. Moreover, those who possess this data can make reliable predictions about individuals with my characteristics, so long as enough sufficiently similar people opt not to safeguard their data. This is more than a commons problem: users who place a low subjective value on privacy can easily undermine the privacy of those who value it much more, because they do not internalize the latter's costs, thereby depriving the latter group of choice.
On the flip side, privacy compromises may at least partially function to customize services we like to receive. For example, Amazon can use our purchase history to deliver desirable recommendations and discounts. And while Facebook photos may allow surveillance of our whereabouts, similar tracking enables tools like Google Flu Trends, one of many ways in which personal data aggregation can deliver social good. Ultimately, some privacy compromises are inevitable to enable the provision of wide-scale social services, from screening security threats to controlling disease. The goal is to minimize the intrusion after balancing the gravity of the social interest against the risk that information-gathering poses (including the risk that our information is used nefariously), a balance that may not always align with individuals' incentives. Because it is difficult for individuals to assess the risk that their information will be used nefariously, and because it is not always in their (perceived) self-interest to protect their data, they may engage in behavior that is risk-seeking on a more global scale, at least from a more informed user's perspective.