AndrewBrickfieldFirstPaper 7 - 10 Jun 2018 - Main.EbenMoglen
No, your difficulty wasn't too little space. If you had known
precisely what to say you could have said it. The way to make the
draft better is to reduce the name-dropping and tighten the
description of the problem, then write about what to do using the
space gained.
"Polarization" is a red herring, I think. Whether people want to
argue or agree about politics or policy is not determined by the
efforts of advertisers to affect their brand choices.
Pattern-recognition (grandiosely described as machine learning) may
make advertising targeting efficient: you haven't shown it doesn't,
or that efficient advertising is a problem that inefficient
advertising isn't.
But there doesn't have to be Facebook. Why not explain how to
replace it? Surely modifying how we receive certain basic services
in the Net is easier than modifying the US Constitution?
AndrewBrickfieldFirstPaper 6 - 14 May 2018 - Main.AndrewBrickfield
Facing unprecedented power over news distribution and truth creation, this constitutional interpretation is consistent with history and necessary to preserve the fundamental right to Liberty. Moreover, eliminating the State Action requirement lets Congress and Courts address lingering racial discrimination that has proven immune to constitutional remedy. Privacy and antidiscrimination advocates must deconstruct the State Action requirement.
AndrewBrickfieldFirstPaper 5 - 04 May 2018 - Main.ArgiriosNickas
I agree that the statutory solution is probably the better one; I'm hesitant to create fundamental rights (and all the inevitable, confusing legal development that follows), plus I have some doubts about the likelihood of convincing a panel of judges to create a general right to internet autonomy, especially where we still can't figure out whether the collection of third-party cell-site location data is a seizure...
As for the disclosure proposal we both spoke about regarding censorship: I think something along those lines is the right move. But I wonder if it’s really a treatment at all…in a sense it might just be more ‘diagnosis’: ‘here’s the problem (in case you missed it somehow!), but now that we all definitely know, what are we really going to do about it?’ I wonder for how many people this censorship problem is still news. I listed a bunch of examples of pretty high-profile censorship (with a lot of media coverage)–how many people haven’t seen one of those examples on their own time? We have data on how many people use social media as a news source, but we’re missing perhaps the more important data: how many people are aware that Facebook (and social media) censors the thoughts they display…and, maybe more important than that, how many are aware that it does so without any real set of transparent regulations/conditions governing the censorship. To that extent, getting a set of 100 or so instances of censorship is helpful, but I don’t know if it really solves the problem–it might just make people more aware of it. But that’s the first half of the battle, at least…
But maybe it’s the only half? Proposing an actual set of fairness regulations/legislation seems like it would be a major political challenge for a gridlocked Congress, and any agency-based approach would have, call me a critic, its own political agenda (subject to being switched the opposite way every 4 years). And so it looks like I end where I started…maybe the best treatment is actually a judicial approach, which I know you mentioned. The law can always be shaped as needed, if needed, and so it’s more malleable than the other approaches (but again, it’s got all the negative things I spoke about in the first paragraph). Ultimately, maybe the treatment is something outside legislation, regulation, or the law entirely. Maybe it’s people recognizing the problem and wanting to do something about it. Here, the government is on much more familiar ground (creating new competition via antitrust regulations, for example), and your proposal about citizen-activism, moderating, etc. fits in well with that scheme. And at least psychologically it makes it less a problem of ‘[insert censored group here] v. the machine’ and more of a ‘[insert censored group here] v. the human censor,’ which puts a human face on the problem, to the extent Zuckerberg hasn’t already taken that place…
Regarding the ‘as a distributor’ problem, I think the transparency-based solutions you outline above are really good/important. In contrast with the censorship problem, where I think there’s a lot of public awareness already, the distributor problem is not yet so well known. Before Cambridge Analytica this was, for the most part, a non-issue in the public eye. And unlike the censorship problem (where we at least had the Facebook Files leak + many high-profile instances), the criteria used by the distributors are even less transparent. Bringing those to light–in a way palatable to the average person–is an important first step. And for the second step, whatever that may be, I think you again run into the problems listed above.
-- ArgiriosNickas - 04 May 2018
AndrewBrickfieldFirstPaper 4 - 29 Apr 2018 - Main.AndrewBrickfield
Comments
AndrewBrickfieldFirstPaper 3 - 29 Apr 2018 - Main.AndrewBrickfield
It's interesting that a few of us wrote about the same problem: Facebook as the man in the middle for news. But I think we all (perhaps due to space constraints) ran into some trouble with the solution portion. If we think of the problem as a disease, we addressed (1) the symptoms and (2) the pathology/cause, but failed to prescribe a (3) treatment. You touched on it briefly in your last paragraph, but I thought we all could use this space to develop and expand on some of those treatment options. My initial thoughts:

Perhaps (1) broadening the state action doctrine—some sort of viewpoint discrimination-based solution (this would require FB to be neutral in applying its internal removal guidelines, etc.); or (2) bringing back some version of the fairness doctrine for social media platforms—limited to those who do not ostensibly claim to support a certain viewpoint over another and instead pretend to be neutral; or (3) (and to me, the most likely solution) statutorily implementing forms of transparency reporting requirements. For example, ‘FB must: report how many user-initiated content flags they receive, how many posts/comments they actually remove in response, the criteria used in the removal process, and provide a random sample of 1000 or so removals and their specific justifications’ for the public to examine and assess any bias.
-- ArgiriosNickas - 27 Apr 2018
Hi Argirios - thank you for the comment. You're correct that I, too, ran into space problems when it came to addressing the "solution" to the Facebook as man-in-the-middle for news/opinions problem -- so I'm glad to receive the invitation to propose some solutions here!
As I see it, there are two somewhat distinct issues at play here. One, which you outline, is the problem of Facebook as censor, in which Facebook decides to censor certain content in response to either government mandate (like in some European countries and China) or to the profit motive (in order to keep advertisers and users happy). The other problem, which I attempt to outline above, involves Facebook not as censor of news/opinions but as distributor, in which its distributive decisions are informed by behavior collection and implemented in pursuit either of some government mandate or mere profitability. I think the Facebook as censor problem and Facebook as distributor problem are like flip sides of the same coin - on one hand Facebook determines the baseline requirements to enter its walled garden, and on the other hand Facebook designs the rules that determine how that content moves within the walled garden. Although related, the two problems probably require different solutions.
I think you're correct that the Facebook as censor problem is best addressed through statutory disclosure requirements that provide users and regulators with an accurate picture of censorship activity on Facebook. But, for regulation by disclosure to be effective, users need Facebook substitutes or the willingness/ability to leave Facebook when it crosses the "line". I'm skeptical that those conditions exist because, as you explain in your FirstPaper, there is not a ton of consensus on what should and should not be censored, so it would take a variable series of disclosed censorship screw-ups for Facebook to alienate enough users to give the disclosure regulation any teeth. On the other hand, perhaps disclosure requirements would at least make clear to users that censorship is occurring, that Facebook has substantial power as censor, and that there is other information out there, which you need to get off Facebook to consume -- that might be a good outcome. | | | |
< < | Note: TWiki has strict formatting rules for preference declarations. Make sure you preserve the three spaces, asterisk, and extra space at the beginning of these lines. If you wish to give access to any other users simply add them to the comma separated ALLOWTOPICVIEW list. | > > | Unfortunately, short of government-mandated standards (which of course are problematic in their own right from freedom of thought and 1A perspectives), I'm not sure there is a better regulatory solution to the Facebook as censor problem than disclosure. Transitioning Facebook to something closer to the reddit model, a semi-decentralized model in which users are engaged as community moderators who conduct most every-day censorship and moderation duties, is probably the best approach, but unlikely to occur because Facbeook is not divided into separate communities. YouTube? is currently debating precisely this question and hoping to produce an algorithm that solves their censorship/content problems. Here is the article describing that process if you are interested: https://www.bloomberg.com/news/features/2018-04-26/youtube-may-be-a-horror-show-but-no-one-can-stop-watching
Overall, the censorship problem strikes me as a perfect example of the "zoning the net" framing that Larry Lessig identifies in "Reading the Constitution in Cyberspace." Just like zoning in real life, in which neighbors/developers are in conflict about what kind of use to allow in a given plot of land, on the net we have users in conflict about what kind of content to allow in cyberspace. Unfortunately, zoning IRL doesn't exactly offer a shining example for cyberspace to follow (for most of its existence, zoning has been driven by xenophobia, after all). That said, some scholars see the purpose of zoning as creating spaces for deliberation, in which process is more important than outcome. Perhaps that approach makes sense for the Facebook as censor problem. That is, maybe the best approach is a combination of disclosure requirements alongside government-mandated opportunities for citizen participation and input (via roles as moderators, access to disclosures, voting on censorship standards, voting on variances from those standards, etc.). I'm curious whether that sounds reasonable?
By contrast, I think the Facebook as distributor problem has more straightforward solutions. I see two options, with the choice among them depending on how perniciously you think Facebook is acting or will act in its role as distributor of news/opinion. One solution would be to simply outlaw use of behavior collection among news distributors. This approach would recognize a constitutional right to autonomy that prevents any public/private use of behavior collection technology to direct news and unduly interfere in truthmaking. The challenge to this approach is defining what applications of behavior collection tech are "undue interference" and what are business as usual in a world where we all compete to create the truth. This approach would also include the challenge of deciding what uses of behavior collection tech are sufficiently important to the public interest that the constitutional right should be set aside.
The alternative to that approach is an open source/disclosure model, which would permit most uses of behavior collection technology so long as the actual operating code is available to all to constantly check for abusive practices and to understand how the tech is being used to affect truthmaking. This model would also require disclosure by news distributors of their use of that technology, such as a notice on targeted news articles stating to the user that "you were targeted to receive this content from [client]" or "click here to see why you were sent this content" which would lead to a basic explanation of the algorithm's criteria for sending content to you and a link to view the operating code.
I'm not sure which approach I'm partial to, or whether either approach is realistic, but I'm fairly confident (as you also appear to be) that something must be done to address Facebook's unprecedented power as both news distributor and private censor.
-- AndrewBrickfield - 29 Apr 2018
AndrewBrickfieldFirstPaper 2 - 28 Apr 2018 - Main.ArgiriosNickas
AndrewBrickfieldFirstPaper 1 - 24 Apr 2018 - Main.AndrewBrickfield
Liberty, State Action, and the Facebook News Revolution
-- By AndrewBrickfield - 24 Apr 2018
The Facebook News Revolution: Facebook as Publisher & Gatekeeper
US news organizations are producing a polarized population less open to non-conforming experiences. Many scholars—Cass Sunstein, for example—attribute rising polarization to the internet and social media. A 2017 NBER working paper casts doubt on those claims, finding that polarization increased most among those least likely to use the internet. The paper’s authors—and those reporting the research—attribute the findings to the polarizing effect of traditional media, explaining that most Americans still get their information from cable news. Meanwhile, Professor Bharat N. Anand attributes the polarizing effect of both traditional and social media to a market in which “[c]ompetition in the media . . . fails to internalize the externalities from profitable but sensational coverage. It leads to differentiation and more voices . . . but also to fragmentation, polarization, and less-penetrable filter bubbles.”
This is a bleak picture, but it is poised to get worse. A recent New York Times article profiles Campbell Brown, former journalist, turned TV personality, turned “school choice” activist, turned head of news partnerships at Facebook. Her role is building relationships between Facebook and news publishers. She helps Facebook integrate news content into its products and is working with publishers to create Facebook-exclusive news shows. She also flexes Facebook’s newfound publishing muscle, allegedly telling publishers complaining of lost traffic that she “would give them more traffic if they stopped doing clickbait.”
Days before the Times profile, The Intercept dropped its own Facebook scoop: in 2016 Facebook implemented a tool called “FBLearner Flow,” which rather than “offering advertisers the ability to target people based on demographics and consumer preferences, . . . offers the ability to target them based on how they will behave, what they will buy, and what they will think.” Describing a “loyalty prediction” function, Facebook says “it can comb through its entire user base of over 2 billion individuals and produce millions of people who are ‘at risk’ of jumping ship from one brand to a competitor. . . . who could then be targeted aggressively with advertising that could pre-empt and change their decision entirely.”
Facebook says FBLearner Flow improves “user experience” and “advertising efficiency” while defending its privacy commitment by insisting it does not sell user data (why would it, when hoarding unique data facilitates monopoly rents?). That logic holds no weight in the face of Facebook’s news revolution. With Facebook and other man-in-the-middle services ascending the throne of news gatekeeper and traditional publishers seeing their reign end—one publisher described feeling “humiliated” after attending a dinner at Ms. Brown’s residence, where he was “reminded that the power of traditional publishers is waning”—FBLearner Flow is likely to be at work targeting news distributed through Facebook.
Consequences of the Facebook News Revolution: Truthmaking with a Man-in-the-Middle
That use creates problems. At minimum, shareholder interests dictate Facebook will leverage FBLearner Flow to drive clicks by sending users the news it predicts they are ready to read, increasing polarization. But, after the Facebook–Cambridge Analytica partnership, is it speculative to think Facebook might face pressure to target news in certain ways or offer media organizations the ability to target users? Imagine similar technology in China, where social media companies must cooperate with the government. Can dissent spread when would-be dissenters are precisely targeted with countervailing information? The US might see the opposite effect, with well-capitalized interest groups (concentrated firms, political movements, religious organizations) funding news groups that compete to drive wedges whenever Facebook indicates a mark is primed to flip.
Maybe those risks are overstated. “Objective” journalism was a relatively new trend, and as Nietzsche states, “truths are illusions about which one has forgotten that this is what they are.” Perhaps tools like FBLearner Flow facilitate a Citizens United-style marketplace of ideas, accelerating the process of truthmaking by identifying when a subject is ready to have truth made. Maybe these tools can produce benefits. Data for Black Lives argues that Facebook should commit data to a public trust to research issues facing black communities. Going further, imagine the benefits of an application that identifies white supremacists wavering on their beliefs, then apprises them of a less discriminatory worldview.
Despite those arguments, use of FBLearner Flow—and its progeny—by news gatekeepers requires inspection. First, the most partisan nineteenth-century newspaper could neither surveil reading nor deliver content with the precision of FBLearner Flow. Second, Nietzsche is correct that truth is essentially contestable, but accepting a truthmaking process that includes a man-in-the-middle with a thumb on the scale is undesirable. Within a marketplace of ideas, it is impossible to eliminate outsize influence by the powerful, but society should prize a system that produces decentralized truthmaking as much as possible. Allowing a man-in-the-middle with predictive power to distribute news is the antithesis of such a system.
Preserving Liberty After the Facebook News Revolution
How can the US regulate behavior collection and prediction in a way that preserves positive applications while limiting negative effects? Determining that the First Amendment protects unsurveilled reading is a start, but the State Action requirement frustrates application to private firms. Privacy advocates should attack the State Action requirement itself.
Historian Eric Foner argues the State Action requirement reflects misunderstandings of Reconstruction. He endorses the view expressed in Justice Harlan’s dissent in the Civil Rights Cases: “The men who wrote the Fourteenth Amendment intended to empower Congress to ‘do for human liberty and the fundamental rights of American citizenship, what it did, with the sanction of this court, for the protection of slavery.’” This view of the Fourteenth Amendment, along with understanding the First, Fifth, and Ninth Amendments as protecting Liberty from surveilled reading and undue interference in truthmaking, enables Congress and Courts to regulate behavior collection and preserve freedom of thought.
Facing unprecedented power over news distribution and truth creation, this constitutional interpretation is consistent with history and necessary to preserve the fundamental right to Liberty. Moreover, eliminating the State Action requirement lets Congress and Courts address lingering racial discrimination that has proved immune to constitutional remedy. Privacy and antidiscrimination advocates must deconstruct the State Action requirement.
(999 words, excluding subtitles and this parenthetical)
This site is powered by the TWiki collaboration platform. All material on this collaboration platform is the property of the contributing authors. All material marked as authored by Eben Moglen is available under the license terms CC-BY-SA version 4.