Curbing Online Abuse Isn't Impossible
Joyce Park stashed this in Tech biz
I actually found myself this morning saying "It's not even that I mind getting the occasional death threat... but I can't afford to be doxxed or swatted right now because of my health problem." That's not a mindset that makes for super great public discourse on this wonderful, empowering Internet of ours. This article gives me a glimmer of hope that users and product makers can learn to police their own communities. It is heartening to know that the vast majority of trolls and bullies are NOT in the doxxing/swatting hard core and can be nudged towards civilized behavior.
From the article: "Creating a simple hurdle to abusive behavior makes it much less prevalent."
That's a good rule of thumb in designing software that enables a community to develop its own immune system and enforce its own social norms.
Ultimately, online abuse isn’t a technological problem; it’s a social problem that just happens to be powered by technology. The best solutions are going to be those that not only defuse the Internet’s power to amplify abuse but also encourage crucial shifts in social norms, placing bad behavior beyond the pale. When people speak up about online harassment, one of the most common responses is “Well, what did you expect from the Internet?” If we truly want to change our online spaces, the answer from all of us has got to be: more.
It’s important to enforce the rules in ways that people understand.
When Riot’s team started its research, it noticed that the recidivism rate was disturbingly high; in fact, based on number of reports per day, some banned players were actually getting worse after their bans than they were before. At the time, players were informed of their suspension via emails that didn’t explain why the punishment had been meted out. So Riot decided to try a new system that specifically cited the offense. This led to a very different result: Now when banned players returned to the game, their bad behavior dropped measurably.
Explain simply and unemotionally what rule they broke. Call out bad behavior when it happens.
It's not about removing freedom of speech. It's about enforcing community norms.
The problem, of course, is that telling a woman you want to rape and kill her—or even that you merely hope she gets raped and killed—tends to silence her and drive her offline, even if you fail to specify that you’ll use the candlestick in the conservatory. “I reported a threat that said, ‘I will rape you when I get the chance,’” says Anita Sarkeesian, a media critic who has been attacked repeatedly by cybermobs. “I got a response from Twitter stating, ‘The reported account is currently not in violation of the Twitter Rules at this time.’ They continued to suggest that this tweet doesn’t ‘meet the criteria of an actionable threat.’ So according to Twitter, rape threats are only a problem if women can prove beyond a shadow of a doubt that an attack will occur? That’s ridiculous.”
Really, freedom of speech is beside the point. Facebook and Twitter want to be the locus of communities, but they seem to blanch at the notion that such communities would want to enforce norms—which, of course, are defined by shared values rather than by the outer limits of the law. Social networks could take a strong and meaningful stand against harassment simply by applying the same sort of standards in their online spaces that we already apply in our public and professional lives. That’s not a radical step; indeed, it’s literally a normal one. Wishing rape or other violence on women or using derogatory slurs, even as “jokes,” would never fly in most workplaces or communities, and those who engaged in such vitriol would be reprimanded or asked to leave. Why shouldn’t that be the response in our online lives?
To truly shift social norms, the community, by definition, has to get involved in enforcing them. This could mean making comments of disapproval, upvoting and downvoting, or simply reporting bad behavior. The best online forums are the ones that take seriously their role as communities, including the famously civil MetaFilter, whose moderation is guided by a “don’t be an asshole” principle. On a much larger scale, Microsoft’s Xbox network implemented a community-powered reputation system for its new Xbox One console. Using feedback from players, as well as a variety of other metrics, the system determines whether a user gets rated green (“Good Player”), yellow (“Needs Improvement”), or red (“Avoid Me”).
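The Xbox reputation system described above boils down to mapping community feedback onto a small set of rating bands. Here is a minimal sketch of that idea in Python; the function name, the metrics (counts of positive and negative reports), and the thresholds are all invented for illustration and are not Microsoft's actual algorithm.

```python
# Hypothetical sketch of a community-feedback reputation rating,
# loosely inspired by the Xbox One system described above.
# Metric names and thresholds are invented, not Microsoft's real ones.

def reputation_color(positive_reports: int, negative_reports: int) -> str:
    """Map community feedback counts to a coarse rating band."""
    total = positive_reports + negative_reports
    if total == 0:
        return "green"  # new players get the benefit of the doubt
    negative_ratio = negative_reports / total
    if negative_ratio < 0.1:
        return "green"   # "Good Player"
    if negative_ratio < 0.3:
        return "yellow"  # "Needs Improvement"
    return "red"         # "Avoid Me"

print(reputation_color(95, 5))    # → green
print(reputation_color(8, 2))     # → yellow
print(reputation_color(60, 40))   # → red
```

The point of the design is less the exact thresholds than the feedback loop: the community's own reports, rather than a central moderator alone, determine how a player is presented to everyone else.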
We must establish a norm in which harassment is simply not tolerated.