Digital Vitriol: White Supremacists Are Capitalizing on Our Reliance on Zoom

A desk with a laptop and tablet. Zoom, a video-chat platform, is pulled up on the screen.

Photo credit: Gabriel Benois/Unsplash

On my fourteenth day of quarantine, while sheltering in place at my home in California, I joined a Zoom workshop with 190 other participants, led by New School professor Jamie Keiles, who is also a contributing writer for the New York Times Magazine. All of a sudden, the workshop was interrupted by trolls: young white men. They attacked our workshop, turning on their microphones and cameras so that we could see their faces before using Zoom's screen-share feature to display pornography to a shocked and unsuspecting audience. The workshop host later found that the attackers were potentially members of Atomwaffen, a neo-Nazi white supremacist hate group.

I had never heard of Zoom-bombing until it happened to me. I'm a Chicana and a survivor of sexual- and gender-based violence with a professional background as an emergency response advocate on the Los Angeles Rape and Battery Hotline, so the feeling of violation that washed over me will be a difficult one to shake. The experience left me shaken, angry, and contemplative. Afterward, I posted videos of the attack to my Instagram story. Immediately, I heard from several different women who had experienced Zoom-bombing the same day I had. Two Latinas were on a Zoom call for End Child Poverty California with Presidential Medal of Freedom recipient Dolores Huerta and Congresswoman Karen Bass when they were interrupted and mooned by a white man. Another woman was on a call that included Representative Ilhan Omar when her Zoom call was overtaken by aggressive, racist trolls.

As COVID-19 continues to tear through our communities, our increasing need to live life inside and online will be impacted by the shape-shifting tactics employed by hate groups that are becoming more aggressive and technologically adept every year. Whose quality of life, labor, and education are most at risk at the hands of these groups? To understand the current and future impacts of Zoom-bombing on digital life, I spoke with several artists, educators, and entrepreneurs who experienced racially abusive Zoom-bombs while conducting workshops and classes focused on people of color and religious minorities.

There are several things that seem to typify Zoom-bombs: They're overtly predatory and unambiguously violent in the racist and sexist terror they impose on unwilling and unsuspecting victims. Digital terrorists who use Zoom as a platform to interrupt meetings, lectures, and workshops often use the chat and screen-share options to display pornography and racist epithets that users are forced to see. The bombers themselves are almost shockingly bold: In the Zoom-bomb I experienced, the attackers had their microphones and cameras on, which meant that viewers could clearly see their faces and usernames. They all appeared to be young white men. And Zoom-bombers actively seek out specific targets for this particular flavor of digital terrorism: teachers, professors, journalists, women of color, Jewish folks, and Black folks. They interrupt distance learning, school board meetings, city council meetings, and digital activist spaces. In a time of great uncertainty and shoddy systems for crisis support, public education, and public health in the United States, white supremacists are doing their level best to interrupt, derail, and terrorize historically marginalized communities online. And it's nothing new.

Yvette Montoya, a California-based journalist and Latina, joined a Zoom workshop facilitated by Ramona Ortega, founder and CEO of My Money, My Future, a nonprofit aimed at providing financial literacy and entrepreneurial opportunities to women of color. They were immediately Zoom-bombed by what appeared to be white men who used Zoom's screen-share and chat features to spell out the N-word in bright red letters. “It was immediate—we didn’t even get to start. They were on there waiting for us,” Montoya recounted over the phone. Ortega knew what was happening as soon as the disruptive behavior started. In the face of massive waves of unemployment, especially among workers in the service and entertainment sectors, Ortega had invited members of her approximately 1,000-member listserv to join her for a Zoom workshop in conjunction with Sabio, a Los Angeles group that works to expand opportunities for underrepresented groups in tech. The workshop was slated to focus on retooling and retraining for folks who had been laid off by employers in the midst of the coronavirus pandemic. Ortega says trolls might have targeted her Zoom workshop because of her track record of advocating on behalf of women of color. “I talk a lot about racial justice issues,” she says. “I think there is an association with my name and my company around [those issues].”

“The attack felt personalized. I don’t feel safe [enough] to use it again,” Ortega said. She has since moved her workshop from Zoom to Instagram Live. For victims of Zoom-bombing like Montoya, the motivations and intended impacts of Zoom-bombers seem clear: to disrupt, antagonize, and inflict harm on users of color. “I was taken aback. We are trying to educate people on their options and we are trying to advocate for women of color. Everything we do is for Black and Brown women. They were saying they ‘hate N-words.’ I’m assuming that has something to do with it. My Money, My Future is a Latina-owned company.”

What some might see as a temporary nuisance that comes with using the internet is actually much more than that. Not only is Zoom-bombing a continuation of what is now decades of mostly unchecked and unbridled white-supremacist terror in the digital space, it is also an indicator of what's sure to continue and amplify if tech companies like Zoom, Facebook, and Twitter don't take concrete steps to curtail digital terrorism and prioritize the safety and well-being of marginalized users. This is something that Silicon Valley has failed at over and over again. With public spaces like parks, libraries, museums, schools, and universities closed for gatherings, online platforms like Zoom are where everything now gets done.

“Marginalized communities are no longer able to come together in person, so digital convenings like these have become absolutely necessary. If these spaces are being hijacked, where are we supposed to convene?” asks Alok, a performance artist and recent witness to an anti-Black Zoom-bombing. Alok, who identifies as Indian and trans, was delivering a keynote performance-lecture as part of Asian American Heritage Month programming at the University of Illinois, Chicago. In light of the virus, the event was moved online with about 45 people in attendance via Zoom call. “This event was supposed to be a space that centered people of color and trans and gender nonconforming people. It was an event explicitly about racial justice. These trolls were trying to disrupt that to express their anti-Black racism.”

Though Zoom and an exhausting list of other institutions, including the Massachusetts Institute of Technology and the University of California, Los Angeles, have shared “How to Keep Your Zoom Meeting Safe” write-ups, the so-called safety recommendations that users are expected to implement manually before each Zoom meeting don't always keep attackers at bay. That was true for Ortega, who used her own professional Zoom account, created a scheduled event with a unique meeting ID, shared the call link only with other account holders, deactivated the screen-sharing feature, and required a password for participants to enter the call. For Alok, the published safety guidelines don't go far enough in the right direction. “While I’m appreciative that these resources exist, I think that the responsibility should be on Zoom to adapt its platform to prevent this kind of vitriol.”
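For organizers who set up meetings programmatically rather than clicking through the web dashboard, the kinds of precautions Ortega applied by hand can, in principle, be baked in at creation time through Zoom's public REST API. The sketch below is a minimal illustration under stated assumptions, not an official or complete hardening recipe: it presumes an OAuth access token stored in a hypothetical ZOOM_ACCESS_TOKEN environment variable, and the fields shown (a meeting passcode, waiting_room, join_before_host, mute_upon_entry) reflect the documented "Create a meeting" endpoint as best I can tell and may vary by account type.

```python
# Minimal sketch (assumptions noted above): schedule a "hardened" Zoom meeting
# via the public REST API, mirroring the manual precautions described earlier.
# ZOOM_ACCESS_TOKEN is a hypothetical environment variable holding a valid
# OAuth access token for the account.
import os
import requests

CREATE_MEETING_URL = "https://api.zoom.us/v2/users/me/meetings"

def create_hardened_meeting(topic: str, passcode: str) -> dict:
    """Schedule a meeting with a passcode, a waiting room, and muted entry."""
    payload = {
        "topic": topic,
        "type": 2,                      # 2 = scheduled meeting, not an instant one
        "password": passcode,           # participants must enter this to join
        "settings": {
            "waiting_room": True,       # host admits each participant individually
            "join_before_host": False,  # no one can enter before the host arrives
            "mute_upon_entry": True,    # newcomers start muted
        },
    }
    response = requests.post(
        CREATE_MEETING_URL,
        json=payload,
        headers={"Authorization": f"Bearer {os.environ['ZOOM_ACCESS_TOKEN']}"},
        timeout=10,
    )
    response.raise_for_status()
    # The response includes the unique meeting ID and a join_url to share privately.
    return response.json()

if __name__ == "__main__":
    meeting = create_hardened_meeting("Retooling and retraining workshop", "w0rkshop42")
    print(meeting["id"], meeting["join_url"])
```

Notably, restricting screen sharing to the host, one of the steps Ortega took, does not appear to be a field on this endpoint; as far as I can tell it has to be configured separately in account-level or in-meeting settings, which is part of why so much of this burden still falls on individual hosts.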

Even schools and universities that use their tech and operations budgets to pay for Zoom accounts, rather than relying on the free version, have had students and participants suffer at the hands of Zoom-bombing. Aleksandr Moniz Mirov, a Master of Fine Arts student at Goddard College and religious studies scholar, was teaching their Religion in the United States class via Zoom when a student started exhibiting strange behavior before making a series of angry antisemitic statements in the class's group chat. Young white men behaving badly online is nothing new, but it was especially jarring in this setting. “There’s this tendency [of] disaffected white young men to do this kind of stuff. They don’t see marginalized people and women as humans,” Mirov says. “We’re NPCs (non-player characters) to whale on as they see fit, and our reactions are funny. The whole point is the reaction. They come into a Zoom meeting and they say all these things, things I won’t repeat, and they see your face.”

Mirov, who is Latinx, Jewish, and disabled, has experienced his fair share of online violence. In February 2020, the educator was doxxed, and his personal information was splayed across the internet. “The fact that Zoom is the current educational platform of choice because of its ease of use, but doesn’t have the security features, shows that companies, yet again, are choosing convenience over safety and then blaming instructors. Passing the buck,” Mirov says. “But for colleges and businesses that are using their operations budgets to use this platform, it’s not us. It’s the software not being adequate to prevent that behavior.”

Ortega sees this recent wave of digital terrorism as a predictor of what's to come if major changes aren't made to Zoom's security offerings. “Coronavirus is exposing the inadequacies of our preparedness. What people call our black swan,” she says. “We know data will be another big risk for companies, politically, and I think it underlines that we are not prepared for it. I think this is the next wave of aggression.” There may have been a time when the axiom “don’t feed the trolls” provided a sense of control or empowerment to those targeted by trolling online during our AOL Instant Messenger and Chatroulette days, but we're beyond that now.

The internet is a violent, dangerous place, and it has been for years. Megarich tech giants, with access to years and years of analytics, user data, and complaints, and to the best and brightest coders and developers, must do better. In fact, tech and Silicon Valley are running out of excuses. As we imagine a better, more humane world after COVID-19, we must include our digital landscape in those plans.

by Mala Muñoz

Mala Muñoz is a Xicana writer, podcaster, and educator from Los Angeles, California.