If you’re on Twitter, there’s a high probability that you follow at least one bot. Whether it’s the cathartic @infinite_scream, the affirmational @tinycarebot, or the informational @everytrumpdonor, bots are an indelible and beloved part of the Twitter landscape. They rely on a distinctive syntax that often makes them easy to identify: The most basic bot styling combines two or more phrases for a final product that may be unexpected, humorous, or oddly poignant, which is what makes bots so enduringly delightful.
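That combinatorial pattern is simple enough to sketch in a few lines of Python. The word lists and function name here are hypothetical, invented for illustration rather than taken from any real bot:

```python
import random

# Hypothetical word lists; real bot-makers curate these carefully.
ADJECTIVES = ["tiny", "infinite", "cathartic"]
NOUNS = ["scream", "thinkpiece", "landscape"]

def make_tweet(rng=random):
    """Combine two randomly chosen phrases -- the most basic bot pattern."""
    return f"A {rng.choice(ADJECTIVES)} {rng.choice(NOUNS)} appears."
```

Everything beyond this, from scheduling tweets to filtering output, is refinement on that core random combination.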
The art and activist community—sometimes collectively referred to as #botally—doesn’t consist of the spam bots and Russian infiltrators that have attracted so much criticism from people worried about the undue influence (or irritation) of automated accounts. Some bots are art projects, others are tools for social activism and commentary; some are servicey, and some are simply fun. Many interact only consensually with users, and some have huge followings (@thinkpiecebot, created by @norareed, has over 30,000 followers). But behind every bot lies a human who’s invested resources and energy in an automated project.
The potential for bots and those who love them is especially promising given the falling barriers to participation, according to Reed, whose long list of bots also includes entries like @infinite_scream, @AutomataPizza, @toxicitychecker, @wh_payroll, and @TumblrSimulator. “Any art form that doesn’t require very much initial investment, that you can have one person build, is easier to use to express an anti-kyriarchy or a feminist message,” they said. “It’s accessible to people who were alienated out of STEM programs, or couldn’t get in because of sexism or racism.” Tools like @v21’s Cheap Bots Done Quick make it easy for those without programming backgrounds to generate their own bots and get them running. However, with great power comes great responsibility: Are the humans of the #botally community following ethics that lessen harm, especially in activist spheres?
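Cheap Bots Done Quick is built around Tracery, a grammar format in which rules expand #symbol# placeholders into randomly chosen phrases. A minimal expander conveys the idea; this is a sketch of the concept, not the actual tool’s code, and the grammar shown is made up:

```python
import random
import re

def expand(grammar, symbol="origin", rng=random):
    """Recursively expand #name# placeholders, Tracery-style."""
    text = rng.choice(grammar[symbol])
    # Replace each #name# with a random expansion of that rule.
    return re.sub(r"#(\w+)#", lambda m: expand(grammar, m.group(1), rng), text)

# A hypothetical grammar in the spirit of Cheap Bots Done Quick.
grammar = {
    "origin": ["#feeling# #thing# of the day: #thing#"],
    "feeling": ["oddly poignant", "unexpectedly humorous"],
    "thing": ["headline", "scream", "thinkpiece"],
}
```

Writing a grammar like this requires no programming background, which is exactly what lowers the barrier to entry.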
Often, that harm is accidental, the consequence of randomly combining words and phrases that haven’t been carefully vetted. For instance, a bot might repeat slurs or produce a problematic lexical string (“angry Black women take over supermarket” or “Jews dominate banking tournament”). Reed points out, for example, that including “child” in a list of possible nouns can have very dangerous implications when combined with verbs that might end up appearing sexual, leading them to leave it out of most word lists. Some bots might engage other Twitter users without consent, as in the case of those scouring the internet for things to correct, like a short-lived bot that searched for “doxxing” to tell people to spell it “doxing,” exposing people talking about harassment to even more harassment.
Darius Kazemi’s (@tinysubversions) @TwoHeadlines was originally designed to mash two headlines together by switching their subjects. The worker-owner at Feel Train Coop, a creative technology cooperative that builds Twitter bots, set out to create a bot that was entertaining, offbeat, and unexpectedly maudlin. But every now and then, the bot made transphobic jokes, like “[Male celebrity] looks stunning in her red carpet dress.” Kazemi didn’t intend to design a transphobic bot, but the bot wasn’t discerning enough to know when it was making an inappropriate joke.
Kazemi eventually developed a transphobic joke detector that told the bot to discard jokes that were likely to involve a harmful gender swap. It meant some false positives, but it’s better to discard a good joke than to post a transphobic one. Reacting to the harm caused by the bot, adjusting the programming, and learning from the experience helped mitigate the problem. Talking about it publicly helped other creators avoid repeating his mistake. Kazemi also created wordfilter, a tool for striking out words associated with slurs, available to members of the public who want to use it with their own bots. For example, a bot will cross-check the list and discard a word that includes the string “nig,” on the grounds that it’s likely to be a racial slur—and if it isn’t, the joke isn’t worth the cost of accidentally letting a racial slur past the filter. Kazemi argues that upholding an ethical code is central to #botally. “I just don’t want my bots doing things that I wouldn’t do myself,” he said.
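The substring approach described above can be sketched directly. The blocklist here is a stand-in with placeholder entries, not wordfilter’s actual curated list, and the function names are invented for illustration:

```python
# Stand-in blocklist in the spirit of wordfilter; the real library
# ships a curated list of slur-associated substrings.
BLOCKLIST = ["nig", "badword"]

def is_blocked(text):
    """Return True if any blocklisted substring appears in the text."""
    lowered = text.lower()
    return any(bad in lowered for bad in BLOCKLIST)

def post_if_clean(tweet, post):
    """Discard a candidate tweet rather than risk posting a slur."""
    if is_blocked(tweet):
        return None  # false positives are an accepted cost
    return post(tweet)
```

Note that a name like “Nigel” trips the filter too; that is the deliberate trade-off Kazemi describes, sacrificing some harmless tweets to keep slurs out.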
The stakes are higher when a bot is created to interact with activist spaces and engage in social justice work. For instance, @staywokebot (a @feeltraincoop collaboration with @deray), was initially used to save activists time by allowing them to tag it in conversations for quick references about social issues. Reed and Sarah Nyberg (@srhbutts) created “honeypots,” bots that tweet tempting bait about controversial subjects like atheism and the notion that women are people to entice reactionaries who search Twitter for mentions of their pet causes, thereby distracting them from actual humans. When these bots are used for activism and social commentary, upholding ethics can be challenging.
There are many ways to conduct social justice work, the strategies are constantly changing, and a bot isn’t always as adaptable as a human. Humans interacting with the bot may also have very different views on social issues and how to confront them. Reed says they err toward taking criticism in good faith, as harming a few people for the sake of a joke or political comment isn’t worth it. Kazemi and Reed also regularly check in on their bots to keep track of how humans are interacting with them and periodically update the repositories of words, phrases, and “grammar” used by their bots to keep from turning innocent tweets into something much darker.
Sometimes “god”—a bot’s parent—must take over the timeline briefly to offer commentary and context to followers. Reed recently used @infinite_scream to take Twitter’s co-founder @jack to task, noting that “he follows me,” and thus their comments might get through. Through the bot, they highlighted the systemic pattern of abuse on the platform as well as Twitter’s failure to protect marginalized users. Reed’s bots have also participated in strikes and protest actions, occupying a liminal space between automata and human-curated art. This balance of ethics can get complex, Reed says, when it’s not clear who actually crafted a bot. Reed signs their bots, as does Kazemi, and many bots include their creator’s info in their bios or only follow the creator. Creators may also develop Twitter lists of their projects, making it easy to link a human with a project, and to hold that human accountable when something goes wrong.
Members of the #botally Twitter community also mentor new creators, offer cautionary tales, and discuss what’s worked for them and what hasn’t. Some share common goals and ethics with the open source movement, valuing the free exchange of creative information when it improves the quality of their projects and community. This allows projects like @congressedits (@edsu), which lets the world know when someone makes a Wikipedia edit from a Congressional IP address, to be widely cloned. Sharing repositories on sites like GitHub, as Kazemi does, also promotes collaboration.
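The core of a @congressedits-style bot is an IP range check, which Python’s standard ipaddress module handles directly. The ranges below are reserved documentation addresses used purely for illustration; the real bot relied on published Congressional IP ranges, which these are not:

```python
import ipaddress

# Hypothetical watch list: reserved documentation ranges, for illustration only.
CONGRESS_RANGES = [
    ipaddress.ip_network("192.0.2.0/24"),     # TEST-NET-1
    ipaddress.ip_network("198.51.100.0/24"),  # TEST-NET-2
]

def is_congressional(ip_string):
    """Return True if an edit's IP falls inside a watched range."""
    ip = ipaddress.ip_address(ip_string)
    return any(ip in net for net in CONGRESS_RANGES)
```

Because the logic is this small, swapping in a different set of ranges produces a clone watching a different institution, which is why the project spread so widely.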
It can be challenging to balance the desire for accountability with the need to protect creators who may be vulnerable to harassment or abuse. As John Emerson (@backspace), the creator of a variety of politically active bots, observes, the same bots used in the United States (like his @officerbot) could put creators in danger if replicated in other nations. Honeypots and other bots that engage with people on controversial topics could put their creators at risk since their very goal is to incite and distract.
So what comes next for bots that function as activist tools? Passive consumption of media, sometimes paired with sharing, has become a common online approach, but it dodges the issue of what happens after the initial contact. “I’m really worried that the feelgood stuff ends up replacing action,” comments Reed, who notes that good work should “inspire action.”
“One way I like to think about bots for social change is as activist tools that help people create or mobilize power: bots that alert people, surface useful information, or contribute to oppositional research,” says Emerson. The dogged determination of bots sometimes makes the unseen visible, while Reed notes that people treat bots differently from people, sometimes making it easier to bring up fraught issues. “Bots end up with their weird semi-hybrid access to privilege,” they say, noting that marginalized creators can build privileged bots. Sometimes, the result is that people weight a bot’s words more seriously, in much the same way that a man repeating a woman’s ideas is accorded more respect.
Research on botivism in 2016 found that bots could be used not just to generate engagement, but to transform that engagement into action, something of considerable interest to Courtney Stanton (@q0rtz), Kazemi’s fellow worker-owner at Feel Train, who explains: “Activism always, for me, because this is something that I always need to re-up on, my activism needs to have a purpose: a focus, an agenda, and a success state.” After all, notes Emerson, “Twitter was actually inspired by an activist tool, TXTmob.” Stanton agrees that bots can be used as a powerful helper tool; as an assistant to enable change, they’re unflagging and patient, and unlike activists, they don’t get fatigued.
Creators like Reed and Emerson are confident that even if Twitter does crack down on the bad actors—bots doing harm through annoyance or active campaigns—the bots we know and love will continue to exist, though possibly in different forms. After all, even @jack follows a few. Members of the #botally community could ensure that those bots continue to evolve, not just as sources of entertainment, but useful tools for online activist work.