Online abuse is still misunderstood as a natural consequence of being a marginalized person on the internet. Independent game developer Zoë Quinn knows that all too well: After she broke up with her abusive ex-boyfriend, programmer Eron Gjoni, he published a screed that accused her of cheating on him multiple times, including once with a journalist at Kotaku in exchange for a positive review of her game. All of his accusations were unfounded, but she became a target in what would become known as Gamergate. Since that time, Quinn has been abused online by hundreds of anonymous users on sites like 4chan and Reddit, hotbeds for the same kind of white supremacists who marched in Charlottesville and helped elect Donald Trump.
Her personal information, including her phone number and address, was released online through a process called doxing. Quinn’s online abusers made it impossible for her to feel safe: They sent her photos of herself at various games conventions to let her know she was being watched, hacked into her social media accounts, and put anybody who attempted to support her through the same cycle. It cost her peace of mind, security, and any semblance of normalcy.
While the abuse still continues, she’s tried to turn her tragedy into awareness by launching Crash Override, an organization that provides resources to people who are being attacked online. Ahead of her forthcoming memoir, Crash Override: How Gamergate (Nearly) Destroyed My Life, and How We Can Win the Fight Against Online Hate, Quinn spoke with Bitch about the cultural disregard of online abuse, reclaiming her life, and how people can protect themselves from doxing.
Were you, at all, hesitant to write a book about your experience? How did you find the courage to get Crash Override into the world?
I was super hesitant because I’ve wanted to make this Gamergate shit a footnote in my life for a very long time. I knew writing the book would sign me up for another wave of [online abuse]—anytime I fart, Breitbart writes an article about it. I had to make sure all of my facts were 100 percent correct because the book would be under such scrutiny. At the end of the day, I know Crash Override can’t solve [online abuse] online. I needed more eyes on the issue and I needed more smart people to push back against [online abuse] in a major way. That outweighs any fear that I might have or my own selfish desire to not have to talk about this shit.
Throughout the book, you said that you owe your career to the internet. I do too, but often, the safety aspect isn’t taught in colleges. Does that lack of education about online abuse make people from marginalized communities more vulnerable?
Yes, absolutely. I’ve done a lot of work with local Black Lives Matter activists for that reason. I work in tech, and I still got super compromised and fucked over. If you’re outside of tech and you’re from a group that’s traditionally marginalized or excluded, that’s another barrier. There definitely needs to be more effort on the tech side to make safety education available for everybody, especially underserved communities.
Law & Order: SVU aired an episode inspired by the online abuse toward you and Anita Sarkeesian. Were you involved in the process for creating that episode? If not, do you think that type of spotlight further fuels abuse?
I think it can [fuel abuse]. It has to be presented in the right way with the person who’s affected centered [in the story]. It’s not just a Law & Order: SVU episode; I’ve seen so many fictionalized versions of myself raped and killed, and none of [these shows] have ever talked to me about it or even acknowledged me. There was one Kickstarter that reached out to ask me for promotion, but that was it. For a lot of people who believe the abuse isn’t hurting the person as much as the attention, there’s definitely validation. It’s a victim’s story being hijacked and told by someone who’s trying to sensationalize shit and make a quick buck. It’s predatory.
In the book, you’ve said that “don’t feed the trolls” or “just get offline” is awful advice. Why? How should people deal with online abuse?
In 2017, there’s still a lack of basic digital literacy about what online communities mean for us socially. We’re still figuring it out, so I get why people would say “just get offline” even though it’s wrong. It’s very easy for that to be the knee-jerk reaction because it’s like, the problem is coming from this particular place, so just don’t go there anymore. They’re not fully realizing that they’re putting all of the responsibility for dealing with abuse on the person affected. If getting offline isn’t an option, the best thing to do is have as much preventative digital hygiene as possible going into it, like having good policies on passwords, being careful with your information, and taking those steps to reduce what is made available about you, just in case the worst comes to pass. If you’re a bystander, don’t be an unwilling participant in someone else’s abuse. It’s very easy to get roped into misinformation.
What would you say are the first three digital hygiene steps every person should take?
The first thing you should do is download a password manager. These are basically things that will make it so you will have strong unique passwords across every single account that you have. Have that in a secure location with an app that automatically fills the information in, so you can take away all of the annoying stuff about having passwords across each site. It’s good practice, but you won’t have to remember them all. That’s a gigantic step. The next step is to go and find all of the information about yourself on the internet. Basically, try to dox yourself. You have the cheat code because you know your home address and the stuff you wouldn’t want found. When you find sites where that information pops up, try to find their opt-out or removal pages or policies.
Sometimes that’s hard to find, which is super unfortunate, but do that as much as possible. There’s a lot of information on Crash Override with specific links to check. The third [piece of advice] would be checking the privacy settings of all of your social media accounts to make sure they’re in line with what you want to share. Facebook has a really good privacy checkup tool. For Google accounts, you can see who is logged in to your account and from where, and they also have a really helpful privacy checkup tool. You should also have two-factor authentication enabled on all of your accounts, so even if someone has your password, they’d also need to have access to your cell phone [to get into the account].
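[Editor’s note: The “strong unique passwords” step above can be sketched in a few lines of code. This is not from the interview or from any particular password manager; it’s a minimal illustration using Python’s standard `secrets` module, which provides cryptographically strong randomness. A real password manager also handles encrypted storage and autofill, which this sketch does not.]

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a strong random password from letters, digits, and symbols."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    # secrets.choice draws from a cryptographically secure source,
    # unlike random.choice, which is predictable and unsafe for passwords.
    return "".join(secrets.choice(alphabet) for _ in range(length))

# A distinct password per account, so one breached site can't be
# replayed against your other logins.
passwords = {site: generate_password() for site in ("email", "bank", "forum")}
```

The point of the sketch is the “unique per account” part: reusing one strong password everywhere undoes most of its benefit the moment any single site is breached.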
You’ve become a master of understanding terms of service for social media sites. What should ordinary users be looking for in terms of service?
You should be specifically looking to see how they define abuse. Do they define what abuse is? Do they define what actions they might take [against abusers]? Tech companies have their public-facing terms of service, which are intentionally vague and give them more leeway in enforcement. See if they clearly take a tangible stance on abuse, rather than the bullshit “harassment is bad” shit. Do they have a clear course of action for abuse, like taking away someone’s account? Also, check what they’re going to do with your data. What kind of privacy are they going to offer you? They’re almost certainly reselling your information, and you have to see if there’s anything in their terms of service that allows you to opt out. Look for anything privacy-oriented and figure out how you can opt out because once they sell [your information], even if it’s just to an advertiser, it can then be sold to other people. Then, [your information] just circulates, which could become a problem. It sucks because terms of service and user licensing agreements are so densely written; it would be cool to see plain terms, but it’s not in their interest to do that because then people would know how little privacy they have online and freak out.
It was particularly frightening reading about how unhelpful law enforcement and the courts were in curbing the harassment. What protections need to be extended legally to protect victims from abuse and dissuade online abusers from engaging in the act?
We need to set our eyes higher than individual actors in terms of who is participating in the abuse. We need to look at who’s enabling the abuse: Third-party sites that can have your personal information, resell it, and make it damn near impossible to remove it are essentially unregulated. There’s no penalty for doing that. There’s no penalty for companies that don’t enforce their terms of service. A consumer protection bill might help because we’re talking about faulty products and consumer safety, so I wonder if that’s an arena that could reduce a lot of harm and take some of the foundational structure out from under [online abuse]. [That’s a better solution] than trying to put anybody through the criminal justice system that’s super fucking broken.
Do you think legislation is less likely to happen now that Donald Trump is president and 4Chan and internet subcultures have moved from the internet’s fringes to the center?
I’m split on it. I think people are more likely to give a shit about [online abuse] now because it’s harder to pretend that it’s not happening. But with the current administration, it’s hard to see how we would get any progressive reform through. If we can get the Cheeto out of the White House and everybody who enabled the Cheeto out of the way, maybe we can continue increasing public consciousness. Awareness about how the internet is used to radicalize white supremacists is higher than it’s ever been, so I’m hopeful that [legislation will be passed] in the long-term. In the short-term though, I think we just need to survive.
How did you make the decision to turn your torment into advocacy with Crash Override? Was the process for creating the organization cathartic in any way?
I felt like my life was over after everything happened. So I thought, “Alright, if my life is gone, I’ll just be a bitchy ghost who haunts [online abusers] and makes it harder for you to do this to other people.” It has been a little bit of a hopeless place. I became a video game designer because I think in systems, so it’s impossible not to see the structural issues that enable all of this shit. I come from a weird Appalachian background where we were super backwoods, so if we wanted anything, we had to make it. I’m a mechanic’s daughter, so it’s a natural impulse to create the things I wish I had when all of these people were trying to ruin my life. I want that to exist, so even if I fail, someone will come along who does it better and I can pass along all of the information that I learned.
It’s been cathartic, but it’s also been fucked up: When I can’t help someone, it’s fucking crushing. The worst part of it is when I go and talk to people in positions of power that can press a fucking button and make someone’s problems go away, and they tell me they care about a Nazi more than they care about a trans kid who’s being attacked. Having to go back to that kid and deliver that news is super fucked up. I’ve been trying to learn a lot about reducing secondary traumatization. Crash Override’s hotline is temporarily disabled while I find people who’ve been doing work in this arena for a long time to run that element [of the organization], but the resource center is still available.
Why do you think that some tech companies are more willing to work with Crash Override to help online abuse victims than other companies?
There’s very little incentive for tech companies to give a shit about actual fucking Nazis organizing on their platforms because it makes their numbers go up, which makes them more attractive to advertisers. It’s an economy of attention that the internet lives on: The horrible attention is not distinguished from genuine engagement. [Tech companies] don’t know the difference between readers enjoying an article or liking a picture and an actual hate group trying to disseminate misinformation or setting up bots to send around information to Tea Party followers on Twitter. Actively getting rid of that—doing your goddamn job—will lower the numbers they can show to shareholders or potential advertisers. There’s been such shit leadership for such a long period of time that a company that suddenly says, “No, you can’t threaten people and distribute non-consensual sexual imagery” would become the bad guy. They don’t want to be the bad guy to these communities.
Early on in the book, you write, “this is a story about how we become resilient.” In the face of rampant and escalating online abuse, how do we achieve that resilience?
A big part of it is focusing on what we can do to strengthen our community ties. [We have to] coalition build with groups that have the same enemy in common. We have to figure out how to build healthier and stronger ties. It’s not just about saying this is fucked up and this person is doing fucked up shit. We have to check in on each other when shit’s getting rough and figure out how to build community coalitions among different marginalized groups in ways that allow people to still have a voice and ownership over their space.