“We have one set of rules for the hundreds of millions of people who use Twitter and the hundreds of millions of Tweets sent every day,” Twitter informs its users, but even a casual user knows that’s not the case. If your profile is high enough, you can ignore the rules with impunity, including breaking the law—and it will take a federal court case to address that. One such user has become particularly infamous for this: The president of the United States, with multiple commentators pleading with Twitter to ban his account for tweets that clearly break the rules. The question of why Twitter allows the president’s account to remain has been addressed by its leadership as one of free speech and noteworthiness: “Blocking a world leader from Twitter or removing their controversial Tweets would hide important information people should be able to see and debate.”
What it actually is, though, is a failure of moderation, a so-called “soft skill” that the tech industry has indicated little interest in respecting, refining, or investing in. Twitter’s Donald Trump problem is directly related to both its Nazi problem (which is much less of a problem for users in nations with strict laws about Nazi content, so we know it is in fact solvable) and its tolerance of targeted harassment that appears in the mentions of many users—especially Black women. Meanwhile, complaints about abusive tweets are often dismissed on the grounds that the content doesn’t violate the rules, while bad actors game the system with floods of “abuse” complaints targeting activist users in the hopes of getting them suspended. It works. Marginalized users have reported being limited or even suspended for pushing back against hate, including hate directed at them; TERFs in particular have embraced the mass-report swarm as a tool for getting transgender people kicked off Twitter.
Twitter is well aware of its reputation for being an extremely unpleasant place; even its most prolific users regularly refer to it as “a hellsite.” The company sporadically rolls out new tools that assist users in shaping their interactions with others, and in filtering abusive content both on the timeline and in individual users’ mentions. Muting and blocking individual users can be paired with muting threads and hiding comments, as well as filtering keywords and hashtags. The quality filter lets users hide interactions from people who don’t follow them, designed to reduce drive-by commenting. The company also recently announced plans for yet another way of controlling replies: Turning them off, or limiting them, for individual tweets. It’s an excellent way for people to avoid well-deserved ratios, and can theoretically limit some forms of abuse. (Its loophole, of course, is that users can still quote-tweet the content, or screencap and tag the author if quote-tweeting is off the table.)
The larger issue in these approaches to content moderation is that Twitter treats violations, harassment, and abuse on the platform as individual problems to be dealt with by individual users, rather than as a systemic obstacle to cultivating and protecting community. The company puts the onus on users to control their environment without acknowledging that users lack the high-level controls needed to meaningfully do so—and without accepting Twitter’s culpability in the matter. If users need to constantly report hateful content in order for Twitter to act on it, the service needs to address that seriously. It does not. For Twitter, engagement via any means is a feature, not a bug. Twitter counts on metrics like number of users and individual engagement to drive ad revenue and venture-capitalist interest, which is what keeps the site alive. More users also means more data available to sell, another powerful tool in the company’s arsenal.
Twitter’s stated goals of collaboration and conversation are in direct conflict with its funding streams, creating an inherent and inescapable tension. Closing abusive accounts, cracking down on bots, and eliminating notorious trolls would also cut down on users and engagement. This should tell readers a great deal about how Twitter and other tech giants view moderation. Many social-media platforms are severely understaffed and rely heavily on outside contractors who often endure grueling working conditions that have been known to cause severe mental-health consequences, including PTSD. Moderators see the worst that human beings can do to each other while being pressured to move at extremely high speed, and many lack the training and cultural competence to understand why reported content is abusive. Their pay is deplorable, their benefits are questionable, and the turnover is high, so moderators can’t apply knowledge from experience to their work.
Moderation is a highly technical skill, as anyone who has done it for any length of time knows. Whether managing comments on a feminist website or dealing with a sprawling social network, it requires serious conversations about values and the kind of community users want to move within, as well as an acknowledgement that a website is not a government and moderation is not censorship. Moderation also requires clearly established rules that can also be applied and interpreted flexibly: A comment inviting the author of an article to “go fuck yourself, you festering cunt” should be removed, while one saying “I really enjoyed this discussion and celebration of the cunt as cultural object, but have concerns about the failure to explore the racial implications thereof” should not, though both include a word that can be used as a slur. More subtly, an inexperienced moderator might not catch a racist dogwhistle or a transphobic reference; even if reported, it may be deemed not in violation, an experience painfully familiar to many Twitter users.
Feminist blog moderation, in fact, offers an instructive example of moderation as curation, aggressively pruning to shape conversations without stifling them. It’s also a tremendously time-consuming one, with some blogs engaging “approval only” settings that required every comment to pass muster with a moderator before going live. This invisible labor was largely performed by women and generally unacknowledged, though readers certainly liked to complain when they thought moderation wasn’t going their way. Being able to quickly assess comments for good and bad faith is an art form. The possibility of false positives is something few tech companies are willing to accept as a risk of meaningful rules enforcement; it’s clear that the tech industry is more afraid of the screeching right wing than it is of the call to common decency and respect.
The tech industry is notorious for devaluing the soft skills that make it function, and, specifically, for feminizing them. Twitter’s VP of Trust and Safety, Del Harvey, is a woman, one of the few ranking female executives at Twitter. (Indeed, sometimes the fastest way to get action on an abusive tweet is to draw it to the attention of @delbius, something those without blue checkmarks may find hard to do). Another is Leslie Berland, heading up the company’s human resources, another “soft skills” department that is in fact vital to the healthy function of any company. But if it doesn’t involve programming, many tech companies don’t think it’s important or worth investing in—the pay gap is significant between technical and nontechnical pipelines, and so is the race gap. It’s good to be Jack, and a lot less good to be Monica the Benefits Coordinator in HR.
This feminization has a more sinister aspect, one that’s illustrated by commonly expressed sentiments such as “If women were in charge of [X], this wouldn’t happen” or “Women will save us.” Though generally meant as a compliment, this benevolent sexism is rife with misogyny that’s often racialized; too often, it’s Black women who are painted as saviors by whites who apparently cannot be bothered to save themselves. It’s sexist to assign women a nurturing, peacemaking role, with the assumption that they would make better moderators than people of other genders because of some innate talent for it. It is racist to ask Black women to carry the load for everyone else, as #NotYourMule notes. It’s also telling that these feminized skills are starved of the resources they need to function; Twitter, for example, allowed Women, Action, & the Media to experiment with harassment reporting in 2014 but did not treat it like a serious, long-term tool to incorporate into moderation practices, because the work was labor-intensive and required considerable nuance. Similarly, there’s a long history of expecting moderation services to be provided free or at low cost across the internet; like all things associated with women, moderation is devalued even as people make absurd demands on those who perform it.
Twitter isn’t a feminist blog, of course, and no one expects it to be. But it does need to start treating moderation seriously in the United States; it doesn’t escape notice that when moderation and the law intersect, as in Europe, Twitter is able to set and enforce aggressive content standards. Putting the burden of the work on users is a cop-out, one that leaves users severely disadvantaged while not actually addressing culture problems. I can block @realdonaldtrump, but that doesn’t stop Twitter from broadcasting his hate and inciting harassment of others; and I can block notorious transphobe Graham Linehan, infamous for promoting hate and leading harassment campaigns as @glinner, to make it harder for him to harass me, but his transphobic propaganda still reaches users.
Moderation is an institutional problem, not a personal one. Hiding content from you doesn’t make it go away; it might feel sufficient when @rando123 replies to you with a gross comment, but what about when it’s @nytimes promoting fake race science, or @nazisrc00l spreading conspiracy theories? The answer to the vexing question of what to do about Twitter in general and Donald Trump in particular is for the site to more clearly identify its own rules and values, abandon both-sidesism, and decide what conversations it wants to have, while noting that as a private entity, its moderation decisions are not subject to free-speech protections. This would require Twitter to invest seriously in moderation, treating it as a technical skill that requires training, good judgment, and fair pay. Twitter could ban rape threats and enforce it, if it chose to. Twitter could ban harassment on the basis of race and enforce it, if it chose to. Twitter could ban antisemitic hate speech and enforce it, if it chose to. All of these are violations of the platform’s clearly stated rules, and Twitter’s executives choose not to enforce them, because doing so would be wildly unprofitable.
Twitter wants to protect revenue projections—and avoid upsetting noisy conservatives—and while it views users as a whole as highly valuable, it does not see intrinsic value in individuals. The site’s new-user growth continues to climb, illustrating that even high-profile departures don’t make a significant impact. For Twitter, the terms of service are a myth and the enforcement of same is a fever dream on the part of users who experience harm; Twitter’s mission and intentions do not include curated content, a climate of equitable conversation, or a userbase that interacts on an assumption of mutual respect. The question is not “why won’t Twitter get rid of Donald Trump?” but “why are we all still here?” An attempted mass exodus to Mastodon shows that when it comes to Twitter, we just can’t seem to quit, because doing so feels like letting the Nazis win and deprives us of the rich connections and community we have built despite Twitter’s disastrous culture. The sad truth is that the Nazis won in July 2006, when Twitter.com went live, but their extended victory involved convincing us that Twitter genuinely cared about its stated values while it siphoned billions of dollars in revenue off our data, and our lives.