MartinMcSherryFirstPaper 3 - 12 Mar 2020 - Main.MartinMcSherry
Combatting Digital Misinformation to Protect Democracy

-- By MartinMcSherry - 12 Mar 2020

Section I. How Social Media Threatens Democracy

A: The "Post-Truth" Era
The Oxford English Dictionary named “post-truth” the word of the year for 2016, defining it as “relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief.” Of course, 2016 was the year an outsider candidate, armed with modern tools of communication, capitalized on anti-elite fervor to fuel a successful White House run built on a foundation of falsehoods.
The democratization of information on the internet has empowered individuals to seek out content that reaffirms their own views. Social media users, including leaders at the highest levels, can communicate directly with millions, bypassing the legacy media organizations that once served as gatekeepers. Instead of preventing falsehoods from entering the national conversation or presenting newsworthy statements through a critical lens, those gatekeepers now find themselves defending facts from a popular revolt.
According to a BuzzFeed News analysis, in the final months of the 2016 election, hoax election stories -- almost entirely supporting Donald Trump and opposing Hillary Clinton -- outperformed actual news on social media. The 20 top-performing false election stories generated nearly 9 million engagements, far more than the 7.3 million earned by actual news. Such false stories, memes, and ads can be weaponized even further by leveraging the vast amount of personal data social media companies mine from their users.
B: Cambridge Analytica and the Power of Microtargeting
In 2018, it was revealed that Cambridge Analytica, a data firm hired by the Trump campaign, had harvested the personal data of 50 million Americans without their consent to build psychological profiles used to effectively microtarget political propaganda to individuals. Cambridge said it had as many as three to five thousand data points on each individual, including age, income, debt, hobbies, criminal histories, purchase histories, religious leanings, health concerns, gun ownership, homeownership, and more. It used this data to create so-called “dark posts,” or messages seen only by the users predisposed to their content.
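The mechanics of a "dark post" can be illustrated with a minimal sketch: a message is delivered only to users whose stored attributes match a targeting rule, and is invisible to everyone else. All of the user records and field names below are hypothetical, illustrating just a few of the thousands of data points described above.

```python
# Hypothetical user records; fields and values are illustrative only.
users = [
    {"id": 1, "age": 62, "gun_owner": True,  "religious": True},
    {"id": 2, "age": 29, "gun_owner": False, "religious": False},
    {"id": 3, "age": 45, "gun_owner": True,  "religious": False},
]

def dark_post_audience(users, predicate):
    """Return the ids of users predisposed to a message; no one else ever sees it."""
    return [u["id"] for u in users if predicate(u)]

# A targeting rule built from profile attributes (hypothetical example).
audience = dark_post_audience(users, lambda u: u["gun_owner"] and u["age"] > 40)
print(audience)  # [1, 3]
```

The point of the sketch is that targeting happens silently at delivery time: users 2 and 3 see different feeds, and neither knows what the other was shown.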
In addition to using data from companies like Facebook, political operatives can and often do use geofencing, defined as “technology that creates a virtual geographic boundary, enabling software to trigger a response when a cellphone enters or leaves a particular area.” For example, one group, Catholic Vote, used geofencing to identify over 90,000 Catholics not registered to vote in Wisconsin, a key battleground state, based on their Mass attendance. The group intends to tailor messages to this untapped electoral resource.
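The definition quoted above reduces, in its simplest form, to a distance test: a circular fence around a point of interest, with a response triggered when a phone's reported coordinates fall inside it. The coordinates and radius below are hypothetical, and real systems use vendor-specific APIs rather than this hand-rolled check.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))  # 6371 km: mean Earth radius

def inside_geofence(lat, lon, center_lat, center_lon, radius_km):
    """True if a device position falls within the circular fence."""
    return haversine_km(lat, lon, center_lat, center_lon) <= radius_km

# Hypothetical fence (0.25 km radius) around a building of interest.
FENCE = (43.0390, -87.9065, 0.25)
print(inside_geofence(43.0391, -87.9065, *FENCE))  # True  -- phone is inside
print(inside_geofence(43.1000, -87.9000, *FENCE))  # False -- several km away
```

In the Catholic Vote example, each `True` result during a service would add that device's advertising identifier to an audience list for later targeting.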
Digital advertising of this kind allows candidates and organizations to run highly effective, personalized, and cost-efficient shadow campaigns on social media using highly sensitive and private information. In doing so, they avoid fact-checking, standards of decency, and government oversight. In response, several leaders have announced plans of varying strength and effectiveness to hold social media companies accountable.
Section II. Assessing Plans to Fight Digital Misinformation
A. Senator Elizabeth Warren’s Plan
In October 2019, Senator Elizabeth Warren (D-MA) released a plan to combat digital misinformation. The plan urges social media companies to alert users affected by disinformation campaigns, ban accounts that knowingly disseminate false information, open their data to researchers, and share information about their algorithms. Warren also, controversially, called for “criminal penalties for knowingly disseminating false information about when and how to vote in U.S. elections,” noting that suppressing turnout among key voters is a particularly invidious tactic of shadow campaigns.
Conservatives immediately denounced the plan as unconstitutional. National Review editor Charles Cooke opined that the plan amounts to a repeal of the First Amendment. This criticism has no basis in law. The civil and criminal sanctions in Warren’s plan are narrowly tailored to address the spread of one kind of misinformation: falsehoods about voting requirements and procedures. As recently as 2018, a 7-2 majority of the Supreme Court wrote in dicta, “We do not doubt that the State may prohibit messages intended to mislead voters about voting requirements and procedures.” Indeed, states like Virginia, Illinois, and Minnesota have statutes on the books outlawing knowingly deceiving another person about election information.
Though the announcement of criminal penalties made waves, their narrow application would do nothing to combat disinformation beyond the scope of voting requirements and procedures. Indeed, the rise of shadow campaigns and microtargeting suggests such communications would rarely, if ever, come to the attention of the relevant authorities. And while the rest of her plan offers helpful suggestions for companies to adopt, those suggestions lack any real enforcement mechanism and are unlikely to be implemented without incentives and penalties.
B. Senator Josh Hawley’s Plan
In 2019, Senator Josh Hawley (R-MO) introduced legislation that would revoke the protection from liability that platforms enjoy for user-posted content under Section 230 of the Communications Decency Act. Companies could earn that immunity back by submitting to government audits and proving that they are “politically neutral.” The move is motivated by the perception that social media companies are biased against conservatives, despite [[https://www.nytimes.com/2020/01/29/opinion/trump-digital-campaign-2020.html][well-documented evidence]] that right-wing groups and the Trump campaign dominate digital campaigning and are responsible for the majority of widely shared fake news pieces.
Hawley’s bill presents a far greater risk of violating the First Amendment and raises hard questions about who decides what is and is not politically neutral. It may also worsen the problem: companies would fear taking down false messages from either side, since removal could be perceived as targeting posts that reflect a particular ideology.
C. Alternatives
Neither Warren’s plan nor Hawley’s goes far enough to address the scourge of digital misinformation. Leaders should consider funding media literacy programs in schools, requiring platforms to label fake accounts, and offering consumers new rights similar to those in the European Union’s General Data Protection Regulation (GDPR), including consent requirements for data collection, the right to be forgotten, and even a monetary value placed on an individual user’s data.