Introduction
As the use of online bots has increased in recent years, so has national concern about their negative functions.[_Endnote 1_] There has been mounting pressure for legislation to regulate the use of bots on social media platforms, and rightfully so; it is worthwhile to mitigate harm to unaware Internet users. The general motivating principle behind proposed and enacted legislation is to reduce the deceptiveness of social bots.[_Endnote 2_] However, without an established classification scheme for bot use, policy is likely to be underinclusive or overinclusive. I identify obstacles to current approaches, discuss what is needed to progress, and offer a classification system to guide effective regulation.

Current Approaches and Issues

The former runs into issues with §230 of the CDA. The sweeping flexibility for media platforms to control what content may exist on their sites makes it incredibly difficult for the law to intervene effectively. One solution would be to amend §230 and carve out an exception for bot use. While this is not unprecedented, the law should proceed with appropriate caution: an exception must be narrow enough not to demand unreasonable foresight from platforms. Unsophisticated media providers could easily be held liable despite lacking the technical competence to adequately remove malicious bots.
The second route runs into what I call the remote origination problem: the actors behind online bot use may be so distant from any infringing bot use that attempting to regulate the actors themselves would be practically impossible. It is patently unclear how a regulatory agency could impose liability on remote actors or foreign entities. Russian interference in the 2016 U.S. election, and the Department of Justice's futile attempts to prosecute the responsible entities, epitomize this issue.

The Way Forward

Conclusion
There is a present danger of deception and manipulation of public discourse through automated software agents. As we navigate the ever-evolving digital sphere, the law should consider the ramifications of unbridled opportunism in the online setting; false amplification, artificial dilution of public opinion, deceptive commercial inducement, and similar problems run afoul of notions of fairness and transparency. The relatively novel arena of online bots has created difficult problems, demanding effective regulation that does not unduly subdue media providers. The law must provide clarity with respect to the kinds of bots in online discourse, and look to the FTC and social media platforms themselves to best regulate harmful bot use.
Endnotes
[1] For a variety of sources that highlight the problems posed by bot use, see Varol et al., Online Human-Bot Interactions: Detection, Estimation, and Characterization, CCNSR and ISN, March 2017 (https://arxiv.org/pdf/1703.03107.pdf) (finding that up to 15% of Twitter profiles, or roughly 50 of 330 million, are bots); How much to fake a trend on Twitter? In one country about £150, BBC News, March 2018 (https://www.bbc.com/news/blogs-trending-43218939) (showing how Twitter trends can be bought through bot use); Study finds quarter of climate change tweets from bots, BBC News, Feb. 2020 (https://www.bbc.com/news/amp/technology-51595285) (finding 38% of "fake science" Tweets were written by bots, and 28% of Tweets related to ExxonMobil were generated by bots); Tess Owen, Nearly 50% of Twitter Accounts Talking About Coronavirus Might Be Bots, Vice, April 2020 (https://www.vice.com/en_us/article/dygnwz/if-youre-talking-about-coronavirus-on-twitter-youre-probably-a-bot) (finding that 45.5% of Tweets concerning the coronavirus are likely generated by bots); and Defining Russian Election Interference: An Analysis of Select 2014 to 2018 Cyber Enabled Incidents, Atlantic Council, Sept. 2018 (https://www.atlanticcouncil.org/wp-content/uploads/2018/09/Defining_Russian_Election_Interference_web.pdf) (finding bots have been used to sow discord by impersonating extreme opinions, amplifying particular political sentiments, posting fabricated content on media platforms, and circumventing security measures in electronic elections to manipulate votes).
[2] See, e.g., Pair of Hertzberg Technology Bills Signed by Governor, Sept. 2018 (https://sd18.senate.ca.gov/news/9282018-pair-hertzberg-technology-bills-signed-governor); and S.2125: Bot Disclosure and Accountability Act of 2019 (https://www.govtrack.us/congress/bills/116/s2125).