Law in the Internet Society

LauraBaneFirstEssay 4 - 17 Nov 2024 - Main.LauraBane
-- LauraBane - 20 Oct 2024
To understand this psychologically violent phenomenon and identify solutions, one must first understand the laws that have allowed it to flourish. Because social media posts are undoubtedly forms of expression, they are subject to First Amendment jurisprudence. This means that although social media platforms are private companies, and may therefore restrict constitutionally protected speech and expression such as racial slurs or nudity, the government cannot force them to regulate undesirable, but legal, speech. And even in the case of unprotected speech and expression, such as child sexual abuse material (“CSAM”) or imminent threats of violence, social media companies are largely insulated from liability by §230(c) of the Communications Decency Act of 1996, whose protections apply so long as an Internet platform simply provides a space for others to speak and removes objectionable content in good faith. This has proved disastrous: despite harrowing testimony at a congressional hearing earlier this year that Mark Zuckerberg’s Meta apps “pushe[d] pro-eating disorder content to vulnerable populations” and gave users only a lukewarm warning that certain search terms might reveal CSAM while still allowing them to “[s]ee results anyway,” Zuckerberg has not been prosecuted or otherwise held meaningfully accountable.

Why the Stakes Are Higher Than Ever

Many wave away the dangers of unregulated social media by arguing that they are safe as long as they do not use the platforms themselves. However, as the 2024 election showed, this is far from true. "Low-information" voters swung largely for Donald Trump this year, helping him secure a victory over his opponent, Kamala Harris. This class of voters is characterized by its rejection of traditional information sources, such as news networks; instead, these voters learn about politics almost exclusively through social media, making them susceptible to misinformation campaigns. This is not unique to 2024: both the 2016 and 2020 election seasons were marred by surges of foreign bots that flooded social media platforms with damaging lies about the candidates and the general political climate. This time, however, there was an added element: billionaire Elon Musk had purchased Twitter (now called "X") in 2022 and turned it into a safe haven for right-wing extremism, unbanning hordes of Neo-Nazis and white supremacists and firing the site's previous class of content moderators. When Musk began funneling money into Trump's campaign, he simultaneously ramped up the site's promotion of pro-Trump content and ads, influencing the millions of voting-age Americans who use X. Even after Trump's election, Musk continues to wield enormous influence: he has successfully swayed Trump with respect to cabinet picks, ranging from credibly accused pedophilic sex traffickers to Russian assets, and with respect to the adoption of disastrous economic policies. To believe that Musk is doing all of this purely to combat the "woke mind virus" is woefully naive: Musk and other billionaires experienced a massive wealth increase virtually overnight following Trump's election.
 

Dismantling Misinformation and the Algorithm

In 2020, the Department of Justice recommended that Congress modify §230 to incentivize platforms to tackle illicit content, primarily via public shaming. This is wholly inadequate: millions of Americans loathe Musk and Zuckerberg, yet their voices are drowned out by the metaphorical sound of cash flow. For the immediate future, assuming that social media platforms will continue to be run by billionaires and attract millions of users across the country, the only workable solution is abolishing §230. Next, a hard deadline should be imposed on all Internet platform owners to remove illegal content, and targets of misinformation campaigns should be free to sue any platform owner who allows defamatory statements to remain on its site. At first blush, this seems extreme. It is important to remember, however, that no other Internet users enjoy an expectation of perpetrating harm with impunity. Moreover, the potential harms of leaving §230 in place are significantly greater now than even a decade ago, given the technology industry's markedly increased influence over politics and the rise of frighteningly convincing deepfake technology. Finally, there should be legislation targeting companies that allow their ads to run on platforms which host illegal or defamatory content.
