Moderating Mental Health: The Dangers of Social Media’s Report Button


Artwork by Veronica Corzo-Duchardt

This article was published in Sanctuary Issue #85 | Winter 2020

“I got my first safety email in January 2018,” says Charlie, a U.K.-based Twitter user who, like many members of the internet’s ever-growing mental-health community, covers an array of serious topics, including suicide. As an increasing number of social-media users have reported accounts like Charlie’s, Twitter, Facebook, and Instagram have responded by sending these so-called safety warnings, threatening suspensions, ordering users to delete content, and sometimes sending police to users’ homes for “welfare checks.” (Twitter once locked Charlie’s account and refused to reenable it until they deleted a specific tweet that the platform deemed offensive.) For social-media users like Charlie, the internet has morphed from a makeshift shelter that helped them share and speak about serious issues to a place where strangers can ruin their day—and put them in real-life danger—with one click of the “report” button.

From the early days of Usenet newsgroups and bulletin boards in the 1980s and ’90s to blogs in the 2000s, disabled people have been carving out public and private spaces to talk: Amputees swap tips on how to prevent chafing; people with stubbornly undiagnosable chronic illnesses support each other as they encounter an often alienating medical system; and suicide survivors are frank about navigating mental illness. Connecting with other survivors online has been a lifeline for people with mental-health conditions, as research on social media as a form of peer support has shown. “The Future of Mental Healthcare,” a 2016 study published in Epidemiology and Psychiatric Sciences, found that online activity “could advance efforts to promote mental and physical well-being.”

There are many examples of social media acting as a lifeline: Jazz, a 25-year-old living in New York, posted a suicide note on Tumblr in 2015. She intended for the note to be published posthumously, but she made a fortuitous mistake while scheduling it: her followers on the West Coast were still awake when it went live. When some of her followers saw the note, they contacted one of her friends, who was able to get her help. Jazz was in a coma for several days, and she was furious at her followers when she woke up. Though she still thinks about suicide, discussing her mental-health struggles on the internet also makes her feel less alone. “Social media played a huge part in my survival,” she says. “If it weren’t for social media, I wouldn’t be alive.”

Helmi Henkin, who has worked with the National Alliance on Mental Illness and the American Foundation for Suicide Prevention, says she’s purposely frank on social media to minimize the isolation and shame people feel about mental-health struggles, especially if they’re in the throes of a crisis. Her approach works: “I got thanked again today for being so open and vulnerable on social media. I will always be honest about my feelings and be here to remind y’all not to compare your behind-the-scenes to someone else’s highlight reel!” she tweeted in April 2019. Seeing members of the mental-health community openly discuss their experiences—and interact with each other—can imbue others with confidence. These messages might encourage them to seek out a therapist, explore different medications, or realize that their current treatment isn’t working and advocate for a different regimen. “When I see people discuss their struggles it normalizes it a little bit,” frequent Facebook user Elizabeth Baker tells Bitch.


People like Charlie and Henkin say that being able to speak freely about mental-health symptoms, even those perceived as frightening or disturbing, is crucial to fostering a supportive online environment. Users may discuss past suicide attempts or extreme depression that’s causing suicidal ideation; explore psychosis, paranoia, dissociation, and other symptoms that can be intense and frightening; or share their experiences with pursuing medication options, exploring electroconvulsive therapy, or seeking inpatient or outpatient treatment. Many are conscientious about sharing, tagging their posts with content warnings to ensure that only people in a good emotional or mental space will continue reading. Doing so is a form of self-regulation designed to accommodate people who appreciate the forewarning; they can either skip to the next post or continue reading with an awareness of the kind of content they may encounter.

“Sharing with the internet is a different audience. Sometimes it’s easier to share things with the void rather than talking to your closest friends,” says Henkin, speaking to the specific value of social media for these conversations. People with treatment-resistant mental illness in particular may hear from real-life friends that they’re “too much,” but the internet is more welcoming—or, at least, it used to be. Social-media sites are in the throes of a content-moderation crisis. Platforms like YouTube and Facebook are being criticized for allowing white-nationalist content and other hate speech to flourish alongside their thriving mental-health communities. Periodic purges of problem users and accounts result in an endless game of whack-a-mole, with users subverting blocks to pop up again elsewhere. The issue has become so serious that Senator Elizabeth Warren and other presidential candidates are discussing the prospect of breaking up companies like Google and Facebook; in April 2019, Congress held hearings to call social-media giants on the carpet for the spread of hate speech and white nationalism on their platforms.

As these online services fight to subdue the climate of hatred they’ve helped to foster, they’ve begun targeting online communities, including the mental-health community, due to concerns about explicit content around suicide and self-harm. Instances where people have used social media to post suicide notes or livestream suicides have made it easier for these platforms to justify intense monitoring. In 2014, transgender teen Leelah Alcorn attracted national attention when her heartbreaking Tumblr post, which documented years of abuse, family alienation, and depression, surfaced after she died by suicide. “Fix society. Please” was the note’s final line. The incident sparked a national conversation—captured through hashtags like #RealLiveTransAdult—about how families and schools can better protect and affirm trans youth. Alcorn’s note also helped renew a resounding call to ban conversion therapy.

In 2017, a 12-year-old girl who had been sexually abused by family members livestreamed her suicide on Facebook, and it took two weeks for Facebook to remove the video. Though Facebook made no official statement to explain the prolonged delay—YouTube started pulling the video down much earlier—it clearly took action behind the scenes. Several months later, Facebook expanded its suicide-prevention tools, including resource pop-ups on the screens of people sharing livestreams and an algorithm that scans posts for keywords and flags possibly suicidal content. While Facebook has boasted that the algorithm prevents suicide, crediting it with triggering more than 3,500 wellness checks in its first year, the tool also carries a risk of false positives. The opacity of the algorithm can have a muffling effect as well: Users may not know what kind of content will trigger a response, which suppresses open conversation. A 2019 Annals of Internal Medicine editorial noted that the use of the algorithm amounted to a nonconsensual experiment, run without peer review or an ethics board to evaluate it.

Developing protocols for content that discusses suicide requires a much more nuanced process than social-media platforms are equipped for. Twitter users may notice that searching for certain keywords, like “suicide,” will pull up a banner notice with information about contacting the National Suicide Prevention Lifeline. Instagram, meanwhile, blocks a rotating series of tags related to mental health. In early 2019, some users reported that they couldn’t access tags like #Bipolar at all, while Instagram shares the following message before allowing users to view posts tagged #Suicide: “Posts with words or tags you’re searching for often encourage behavior that can cause harm and even lead to death.” Users can “get support” or “see posts anyway,” with the site gatekeeping memorials, suicide education, and other information. Instagram also effectively soft-blocks photos of healed self-harm, like survival scars, and certain other content by not showing them on its “explore” page and dropping them from hashtags.

Moderation appears to be taking the form of concealment, with Instagram, like other sites, relying on a combination of artificial intelligence and human moderation to make snap decisions. Those decisions—human or machine—don’t always align with the sites’ stated community standards or benefit users. A false positive for nudity, however, has lower stakes than a false positive for a mental-health advocate, who may feel isolated when no one engages with their photos because no one can find them. Facebook, Instagram, and Twitter have also become more proactive about creating systems that make it easier to report users who appear to be in mental-health distress, with a one-click option on posts as well as forms available on their websites. Twitter suggests that before making a report, users contact “agencies specializing in crisis intervention,” which many people would likely assume means calling 911, a tactic that usually results in a police response. Facebook directly tells users to “call law enforcement” if they sense an immediate threat, though it also mentions suicide hotlines, which will themselves call law enforcement if they believe someone is at risk of suicide. (Facebook, Twitter, and Instagram did not respond to repeated requests for comment.)

Getting police involved, however, is not recommended by mental-health advocates or professionals because interactions between police and people with mental illness can end in death, especially for Black and Latinx people. (One in four people killed by police has a mental-health condition.) Police can also traumatize people in crisis: Guns, uniforms, loud voices, and banging on doors are not sensory experiences conducive to calm and healing. One person who spoke anonymously with Bitch said that loud footsteps and knocking, as in the case of a welfare check, can trigger her PTSD. While some communities use mental-health crisis teams with trained specialists, these units are few and far between. The lack of clarity around reports is also frustrating for users like Charlie. “They don’t tell you what tweet you’ve been reported for,” they say. “Strangely, I still get fairly regular emails even when I know I’ve not tweeted anything actionable for a while, which has led me to wonder if there are people camping on old tweets and repeatedly reporting them.” And sometimes, those reports are malicious, specifically designed to isolate people.

Chloe Sagal, who died by suicide in 2018, endured relentless harassment from members of notorious transphobic hate site Kiwi Farms that included stalking and reporting her on Facebook. This made it “much harder for her to get help,” a friend of Sagal’s told the Oregonian. Some members of the mental-health community consider reports a breach of trust that makes them fear and doubt their followers. Often, users feel like they’re being silenced or stifled, with some fearing they can’t talk openly about what they’re experiencing in the very spaces they’ve curated. Charlie says being punished with a temporary time-out “cut me off from Twitter for 12 hours when I was already feeling vulnerable, which was very difficult. I have heard of people being banned after getting enough of these reports in a short time, which is concerning.” “I did not feel comforted, just intruded upon,” says one user, who prefers to remain anonymous. Another Twitter user had police show up after a false report, while a woman on Facebook remarked that a post containing “kill me now” in a clearly nonsuicidal context was nonetheless reported, causing her to distrust and fear her followers. “I use these platforms as a way to vent, and I’m really really open and honest about the things that are going on in my life,” Jazz says. “Instead of reporting me, it’s a lot better to just message me directly.”

Social-media networks also fail to engage with the racial components of mental illness: A large number of online mental-health communities are white, setting the tone for the “right” way to respond to mental-health crises without considering that this might more accurately be described as the “white” way. Conversations about mental-health stigma and experiences look very different across communities. Black communities, for example, deal with specific racialized experiences that not only affect mental health but also generate legacies of distrust and fear after generations of slavery, racist and unethical experimentation, and limited racial literacy in the mental-health profession. Even for Black people who are not descendants of enslaved people, these legacies still shape society’s perception of Blackness and, in turn, their own experiences. When moderation policies fail to account for these differences in lived experience, they can have a disparate impact, one that cannot be resolved without taking on the whiteness of the tech industry overall.

“We start to lose a lot of nuance when we forget that white people are, in fact, not the largest demographic affected by mental illness,” says Shivani N., an undergraduate student at Brown University studying cognitive science. Communities of color are at a disadvantage when they’re not reflected in online mental-health communities, which creates further isolation and stigma that makes it harder for them to speak up or reach out for help, especially when “help” might turn out to be armed police officers at their doors. “When somebody puts [a comment about suicidal feelings] out there, that they’re having these thoughts, that’s an open invitation for us to reach out just as a basic human to say we care,” says Marlon Rollins, a suicide prevention specialist who sits on the steering committee of the National Suicide Prevention Lifeline. “The absence of response back can reaffirm or validate that the person feels worthless.”

The popular notion that talking to someone about suicide will somehow push them to suicide is a myth—and a harmful one. “We have to get past that anxiety,” says Rollins. If people are too afraid to speak directly with each other, it can further isolate someone in distress. Advanced training isn’t necessary to have that conversation, though resources are available. For instance, a National Institutes of Health publication, “How to Help Someone Thinking of Suicide,” lists a variety of options for opening up a conversation. These could be more proactively provided to social-media users, perhaps in the form of a professionally informed list of conversational prompts and links to support services. Facebook’s reporting mechanism provides some tools and suggestions to help people reach out, including a link to a page with information about how to provide support or contact a hotline for advice, as well as the simple suggestion to “connect with a friend”—but only after the reporting process has been initiated.

These tools should instead be provided upfront when a user expresses concern, with information coaching the would-be reporter on ways to reach out to the person they’re worried about. Users say that a friendly “Hey, do you want to talk?” from a follower is a far more meaningful expression of support than a note indicating that an anonymous person is worried. The internet has long been a double-edged sword: It provides community and it conjures nightmares, offering both the opportunity to form fellowships and a direct route to harassment. Sites that are serious about supporting free expression clearly have a long way to go in how they handle the mental-health community.

 

by s.e. smith

s.e. smith is a writer, agitator, and commentator based in Northern California.