---+ Cyber Sticks and Stones

When the hit internet series Awkward Black Girl won the online Shorty Award, one tweeter responded, "Anthony Cumia lost to a N*ggerette!!!" Another wrote, "ThingsBetterthanAwkwardBlackGirl: The smell coming from Trayvon Martin." Noting the difference between these comments and her real-life experiences, Issa Rae, the show's creator, wrote:

"The crazy thing is, I've actually never been called a n*gger in real life...In the online world, however, which is where most of my social life resides...[users] hide comfortably behind their computer screens and type the most obnoxious, offensive things that they can think of..." (cite).

Speech such as that above occurs in various online forums and can be said to fall under the much broader definition of trolling, or online speech designed to provoke an emotional response. In the US, such expression is generally protected under the First Amendment right to freedom of speech. This paper highlights some of the more problematic aspects of trolling with a view toward exploring solutions that can curtail the negative externalities it presents while also protecting freedom of speech online.

---++ Real Life: Broken Bones

Those who have ever come across troll posts may agree with Rae that both the frequency and intensity of inflammatory comments are heightened online as compared to offline. Although the online cannot perfectly mirror the offline, we might, as one method of inquiry, examine how we have evolved as a society to deal with similar social behaviors in real life. Indeed, in real life, fewer individuals would be willing to make any of the statements discussed in the first paragraph simply for the purpose of evoking a response. Not only is the threat of physical violence a poignant deterrent, but the possibility of other social repercussions also constrains such behavior. An individual might be concerned, for instance, that a comment could reflect poorly on his character or jeopardize his job. Even where this is not the case, the deliverer of such speech in real life would likely have to look at his listeners and personally witness (and perhaps experience) their reactions.

---++ How the Words Can Hurt

On the internet, however, the ability of society to respond to such behavior is substantially limited. For one, trolls can "hit and run" with relative ease. Also, some users consciously decide not to "feed the trolls" and do not reply to troll posts. Assuming that there is some value in the free exchange of ideas, erroneously dismissing an individual as a troll could mean that an opportunity for useful social exchange is missed. Many users may be able to recall a time when they came across a thread in which the original poster asked a question or posted a controversial idea and was readily dismissed, whether accurately or not, as a "troll."

< < | Is it "directed"
towards anybody? These are the disembodied voices of a bullshit
universe, where sensible people do not look for conversation.
Whatever is there, whatever its degree of offensiveness, hatred,
stupidity, insensitivity, or vernacular genius, we are in no danger
whatever of being forced to interact with it, or subjected to any
power it represents. So why is it a problem at all?
| > > | In other instances, users might change their behavior to avoid this kind of speech in the future. Wouldn't this inevitably result in negative externalities by limiting the opportunity for these individuals to engage in what could otherwise be socially beneficial behavior? At present, a user who browses user comments simply to gage public reaction to a funny video or participate in an online political debate, for instance, will almost certainly encounter inflammatory language. | | | |
Even if one were to try simply to avoid troll posts, she will likely find this difficult. That most of us know about the crudity of internet commentary perhaps attests to this. Indeed, the ease of access to the internet also means that inflammatory language can be, and often is, found in the midst of otherwise useful, entertaining, or informative content. Adjusting one's browser or filter settings is a viable option. However, some of the most inflammatory language is highly dependent on interpretation and social context; "The smell coming from Trayvon Martin's grave" does not use offensive language per se.

Also, one of the benefits of the internet is that it is more easily accessible to different types of users. If younger individuals, for example, witness such speech without also witnessing the social response, couldn't this negatively affect their behavior down the road in real life? We might also ask whether we would simply ignore such behavior if it were occurring on the sidewalk in front of us. If not, why? Is the impact of the behavior on society or on the receiver any less damaging online?

---++ Happy Medium?

The concepts of pseudonymity and anonymity also contribute to the discussion of trolling. In posting anonymously, an individual can avoid personal accountability and thus some of the negative consequences of his actions. The ability to use a pseudonym similarly allows an individual to assume an online mask of sorts. To be sure, anonymity and pseudonymity serve important functions both online and off. The Supreme Court, in McIntyre v. Ohio Elections Commission, articulated this view when it wrote, "Anonymity is a shield from the tyranny of the majority...It thus exemplifies the purpose behind the Bill of Rights and the First Amendment in particular: to protect unpopular individuals from retaliation..." Freedom of speech is not absolute, however, and the type of speech referred to by the Court can arguably be distinguished from trolling, which is primarily designed to garner an emotional response.

Companies like YouTube have already begun to consider possible solutions. One optional new policy prompts users to change their username to their real name. Such measures perhaps assume that individuals who elect to post identifying information may be less likely to troll. It might be useful for sites to go a step further and give users the option to see only posts from users who choose to identify themselves. While steps such as these are unlikely to eliminate trolling, they may help users distinguish such behavior from other types of expression, such as art or simply controversial opinions, and may also enhance freedom of speech by reducing the efficacy of the movement by some to ban trolling altogether.

-- ShakimaWells - 15 November 2012