There’s a rich tradition of defying censorship in America. Ban books, and you’ll inspire a week dedicated to reading them. Censor photos of period blood and nipples on social media, and they’ll be shared and reposted more fervently than before. Banning is the fastest way to galvanize a movement. Now there’s research to illustrate the point.
In 2012, Instagram cracked down on pro-eating disorder content, banning 17 hashtags such as “thinspiration” and “thighgap.” A study out last week from Georgia Tech shows that after the social media platform got strict about how users label and search for posts, activity around such content actually increased. In some instances, likes and comments on pro-ED photos soared by 30 percent.
The research, the first of its kind to measure the effects of pro-ED censorship online, looked at 2.5 million posts from 2011 to 2014 to examine how users interacted with photos before and after certain hashtags were prohibited. Though there was a momentary dip in pro-ED terms post-ban, it wasn’t long before users had created 672 mutations of the 17 banned tags, subtle variations like “thynspo” or “skinnyyy,” amounting to a nearly 4,000 percent increase in tags. “More tags gave users more opportunities to look at and engage with content that was not moderated,” study author Munmun De Choudhury, assistant professor of interactive computing at Georgia Tech, told VICE. The findings are troubling on their own, but alongside the surge in new tags, researchers also found an increase in the discussion of self-harm, suicide, and feelings of isolation.
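The evasion dynamic is easy to reproduce in code. Below is a minimal sketch in Python, with a hypothetical blocklist and similarity threshold standing in for Instagram’s actual filter: an exact-match ban misses every one-letter mutation, while even a forgiving fuzzy matcher only catches variants that stay close to the original spelling.

```python
from difflib import SequenceMatcher

# Hypothetical blocklist modeled on the kinds of tags Instagram banned.
BANNED_TAGS = {"thinspiration", "thinspo", "thighgap"}

def exact_ban(tag: str) -> bool:
    """The naive policy: block only literal matches against the list."""
    return tag.lower() in BANNED_TAGS

def fuzzy_ban(tag: str, threshold: float = 0.8) -> bool:
    """Flag tags whose spelling is close to any banned term."""
    tag = tag.lower()
    return any(
        SequenceMatcher(None, tag, banned).ratio() >= threshold
        for banned in BANNED_TAGS
    )

for variant in ["thinspo", "thynspo", "thinspooo", "skinnyyy"]:
    print(variant, exact_ban(variant), fuzzy_ban(variant))
# exact_ban misses every mutation; fuzzy_ban catches near-misses like
# "thynspo" but still lets novel coinages such as "skinnyyy" through.
```

Spellings that drift far enough, like “skinnyyy,” defeat both checks, which helps explain how 672 distinct variants could accumulate.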
Not surprisingly, the internet is rife with examples of users shifting their behavior to evade restrictions. Last year, under a new anti-harassment policy, Reddit banned a slew of communities dedicated to ridicule and shaming, including subreddits like r/shitniggerssay, r/transfags, and r/fatpeoplehate. After accusing Reddit of being anti-free speech, users simply found new platforms. The first few results in a Google search for Voat, a site that prides itself on a “zero censorship” policy, include links to the communities “Fatpeoplehate” and “Niggers.”
While the impact of Reddit’s ban hasn’t been measured, there’s speculation that it only widened the reach of hate speech. “It’s possible that as people disperse, they plant their ideas in more outlets, enabling more people to get on board,” Stevie Chancellor, a doctoral student in Human Centered Computing at Georgia Tech and the lead author of the study, told VICE. “As with the pro-ED sites, people find a way around things.”
Aggressive censorship also risks denying people valuable resources. When the UK launched a filtering program for internet pornography, it unintentionally blocked access to important helplines, including those for suicide prevention and domestic violence. Instagram’s pro-ED ban, meanwhile, made it harder to reach the people who need help, Chancellor suggests. “Another consequence of these variations [in tags] is that you are pushing people to the periphery, which makes them harder to find and harder to help.”
Of course, the idea of an unmediated internet is its own special kind of nightmare. Unmonitored trolling can lead users to dark places, where dangerous behaviors are normalized and encouraged. Such was the case with William Melchert-Dinkel, a Minnesota nurse and digital suicide provocateur who would scout at-risk people in pro-suicide forums and advise them on how to kill themselves; he was convicted of “assisting” a suicide in 2014. And visiting suicide forums can indeed have harmful effects: a study published last year in the Australian and New Zealand Journal of Psychology found that people who engage with suicidal content on the internet have, no surprise here, more suicidal thoughts.
Nonetheless, studies and anecdotal evidence also show that vulnerable communities benefit from having spaces to disclose their darkest thoughts, provided those spaces are managed properly. Research from Harvard shows that young people who discuss suicidal ideas in a safe, nonjudgmental environment are often deterred from doing themselves harm. Many forums, like Eating Disorders Anonymous or the subreddit r/SuicideWatch, have moderators trained in crisis management. Moderators can also be former patients who bring a first-person perspective that even the most qualified mental health expert lacks.
Chancellor and De Choudhury think active moderation might be an effective alternative to bans. Instagram has an “Explore” tab that suggests photos based on posts a user has liked or posts that are liked by a large number of people in the community. If a user is engaging with a lot of pro-ED photos, it’s likely those will continue to pop up. “It would be beneficial to manage what comes up for users who are at-risk and direct them toward content that is more body-positive,” said Chancellor. The same goes for when users search certain hashtags. “It could be a pop-up with information on how to find help, or a link to uplifting photos,” she added.
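As a thought experiment, the intervention Chancellor describes might look like the sketch below; the flagged-tag list, helpline text, and suggested tags are hypothetical placeholders, not Instagram’s actual system.

```python
# A minimal sketch of "resources instead of bans": a flagged search
# returns an interstitial and redirected suggestions rather than a
# dead end. All names and text here are hypothetical.
FLAGGED_TAGS = {"thinspo", "thighgap", "thynspo"}

HELP_MESSAGE = (
    "Posts with this tag often relate to content that can be harmful. "
    "If you are struggling, support is available."
)

def search(tag: str) -> dict:
    """Return search results, adding resources for flagged tags."""
    if tag.lower() in FLAGGED_TAGS:
        return {
            "results": [],                  # or unfiltered results, per policy
            "interstitial": HELP_MESSAGE,   # the pop-up with help info
            "suggested_tags": ["bodypositive", "recovery"],
        }
    return {
        "results": [f"photos tagged #{tag}"],
        "interstitial": None,
        "suggested_tags": [],
    }

print(search("thighgap")["interstitial"])
```

The point of the design is that an at-risk user stays on the platform, where moderators and resources can reach them, rather than migrating to a mutated tag or an unmoderated site.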
Search engines like Google and Yahoo are already doing something similar, with positive results so far. When a user searches the terms “suicide” or “how to kill myself,” the first result is a pop-up featuring a phone number and website for the National Suicide Prevention Lifeline. After Google implemented this change in 2010, the Lifeline saw a 9 percent jump in calls.
An Instagram spokesperson told VICE that the platform is aware that making hashtags unsearchable is not the best practice, and it has added to certain tags a message that warns about the content and links to resources. At press time, this appeared to be the case for non-banned terms like “thinspoooo,” while the originally banned “thinspo” and “thighgap” remained unsearchable.
For Instagram and other social media platforms, offering resources instead of bans would require extensive monitoring and creative collaboration with experts on self-harm, extra effort that has already been shown to be more effective than censorship. We need to “think about how we can promote recovery in a way that doesn’t push users away,” De Choudhury said, “and leads to a positive change in behavior.”