We Can Do Something About The Toxic Internet

As Amanda Hess’ article perfectly exemplifies, harassment and prejudice against women and minority groups hurt them both professionally and personally. Harassment is immediately harmful to its victims. Yet harassment is only the directed expression of widespread prejudice against women and minorities online. As a person who spends a fair amount of time on Reddit, a website where users can post links to other sites and comment on them, I can attest that evidence abounds after just a few days on the site. It’s common for users to post mundane pictures, such as photos of pets, new hairdos, or funny happenings. Yet when a woman includes her face in one of these pictures (“Look at my Halloween costume, Reddit!”), the (mostly male) commenters often rip her apart for so-called “attention whoring”: the perceived use of attractiveness to gain approval for submitted content. As if the implication were not enough, the doubly misogynistic terminology drives the message home: the culture of many websites is one of suspicion and loathing of women. It isn’t surprising that such environments, combined with anonymity, spawn internet harassment. Targeted individuals must make a choice: endure abuse and toxic environments, or miss out on what these communities have to offer. Yet few websites have taken strong action to mitigate these effects.

The availability of options to reduce online harassment and hate speech makes the reluctance of internet communities to stop abuse doubly troubling. The obvious starting point is to ban offending users by their IP addresses, not merely their accounts. As it stands, harassment in online communities is trivially easy; it takes a few seconds of typing to send hate speech to an account of your choice. Even if “report abuse” functionality bans the offender, he or she can simply create another account, something many online communities allow in seconds. Evading an IP ban, by contrast, requires the offender to change IP addresses with special software or a particular type of connection. At the very least, this serves as a deterrent. Controlling the terms of public communication is also within the bounds of practicality for most online communities: sites can appoint or employ moderators who screen comments for hate speech and harassment, allowing them to prevent potentially harmful speech on forums and boards.

Finally, a good deal of harassment and harm occurs within autonomous “communities” on certain sites, making it possible to remove the loci of abuse wholesale. In October 2012, Reddit removed the “/r/jailbait” subreddit (a self-contained Reddit discussion forum), where users posted sexually suggestive pictures of underage girls, but only after a media firestorm surrounding the forum. June 2013 marked the removal of the “/r/n*****s” subreddit, devoted entirely to posting and commenting on race-baiting content and hate speech; users who commented there to express their horror were often messaged and harassed. Closing these two forums alone removed a locus from which harassers could communicate and infiltrate the rest of the site.
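The distinction between account bans and IP bans can be sketched in a few lines of Python. This is purely illustrative; the names and data structures are hypothetical, not any site’s actual implementation:

```python
# Illustrative sketch (hypothetical names): why an IP ban deters
# re-registration while an account ban does not. A freshly created
# account posting from a banned address is still blocked.

banned_accounts = set()
banned_ips = set()

def ban(account: str, ip: str) -> None:
    """Ban both the account and the IP address it was posting from."""
    banned_accounts.add(account)
    banned_ips.add(ip)

def may_post(account: str, ip: str) -> bool:
    """Allow posting only if neither the account nor its IP is banned."""
    return account not in banned_accounts and ip not in banned_ips

ban("troll42", "203.0.113.7")
may_post("troll42", "203.0.113.7")   # False: the account is banned
may_post("troll43", "203.0.113.7")   # False: new account, same banned IP
may_post("troll43", "198.51.100.9")  # True: evasion now requires a new IP
```

The point of the sketch is the third call: under an account-only ban, “troll43” would sail through, whereas the IP check forces the harasser to obtain a new address first.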
Considering that many websites organize themselves into subcommunities in this way, whether via particular blogs, hashtags, Facebook pages, or subreddits, why is it not more common for website administrators to remove the problematic ones? The answer seems to lie in concerns about the perceived threat to civil liberties associated with anti-harassment measures.

Yet though critics attempt to frame such measures as a potential threat to free speech and privacy, these concerns do not necessarily hold. None of the measures above requires the storage of additional personal data; websites often already log the IP addresses of their visitors, if not their names and addresses. Critics harping on the potential threat to free speech are likewise misguided. It is elementary to remind them that First Amendment rights restrict only government suppression of communication; private media, such as internet sites, may restrict communication as they wish. Moreover, the First Amendment does not protect obscene and threatening speech, a category into which internet harassment often falls. Perhaps critics worry about a more abstract freedom upended by the removal of hateful or harassing comments or messages: the freedom to express opinions without fear of monetary or social repercussions. But that freedom hardly seems applicable to the disgusting and harmful harassment most victims face.

Granted, a committed attacker, like the cyber-stalker described in Ms. Hess’ article, can most likely circumvent the protections described here, requiring more sophisticated measures. Holding such individuals accountable most likely requires collecting more information and sacrificing some privacy, which calls for a more nuanced discussion of the trade-offs between liberties and protection for victims. Until then, however, there are actions website administrators can take to minimize harassment and prejudice with little impact on privacy.


This entry was posted in News.

5 Responses to We Can Do Something About The Toxic Internet

  1. Dan Petrovitch says:

    This is a very well composed essay. It is clear and easy to understand, thoughtful, and thesis-driven. But the most impressive aspect is that you primarily address the issue at hand by offering potential solutions. Whereas some essayists would explain the problem and maybe offer a solution at the end, you explain the problem while simultaneously suggesting ways to fix it, which makes this essay all the more sophisticated.

  2. Darby says:

    I can totally see your ideas and I get what you’re trying to say; however, the essay could be organized differently to make your argument clearer. Your title is “We can do something about the toxic internet” but then your essay argues all of the ways we can’t fix it, e.g. anonymous websites, infringement on freedom of speech, and changing usernames. All of these are good support for a different title/thesis. I also think everything you say about Reddit provides good support, so I would put that in a paragraph alone. Lastly, sometimes the writing gets a little verbose. I would look for areas to condense sentences by taking out words that aren’t necessary so the flow is easier for the reader.

  3. Erin says:

    I really liked the ingenuity of your argument. I don’t use Reddit, and I liked hearing about it and how it related to your argument. I also liked how you included “computer lingo” (I guess that’s what it’s called?). At points, the essay got a little wordy and hard to follow, so I would suggest tightening it up a bit in those places (the second paragraph specifically, as others have mentioned). Otherwise I thought the essay was thought-provoking and individual.

  4. Ben says:

    Even as someone who generally opposes internet restrictions, I couldn’t help but be drawn in by your argument. I think this is because you offer lots of compelling evidence, consider and refute an opposing viewpoint very respectfully, and have a courteous tone.
    I do agree with Christina that you could split up the second paragraph to make it more readable. Overall, your article was a solidly written, thought-provoking expansion of Amanda Hess’ article.

  5. Christina says:

    This essay does a good job of providing possible actions that website administrators can take to decrease the harassment that is so prevalent on the internet. I also liked how the opposing viewpoint regarding the freedom of speech was mentioned and refuted. I think adding one more sentence that clearly states the argument/thesis of the essay to the end of the first paragraph would clarify what the essay is about, and splitting the second paragraph into two paragraphs would make it easier to read, since it’s currently really long and introduces multiple points.

Comments are closed.