Very Online
Making Facial Recognition Easier Might Make Stalking Easier Too

Illustration by Jessica De Jesus (Photo credit: Lawrson Pinson/Unsplash)

The Guardian ran a story in October 2019 about a Japanese pop star being assaulted by a man in Tokyo who used a selfie she posted to stalk her. The suspect used the train station sign reflected in her eyes in the photo and Google Street View to pinpoint her location, and according to the BBC, the 26-year-old accused stalker waited for his 21-year-old victim at that train station and then proceeded to follow her home. Now, startup tech company Clearview AI might make such dangerous pursuits even easier. Clearview’s goal is to create a database of billions of photos through “the open web” that will make it easier for people, especially law enforcement, to find suspected criminals and others. The company claims its process for amassing these photos is ethical and respects our privacy, but that might not be entirely accurate: According to the New York Times, the company gathered its photos from Facebook, YouTube, Venmo, and other digital sites.

Nearly immediately, women from a wide range of backgrounds and communities reacted strongly to the very idea of Clearview. Kim Hoyos, a queer Latinx filmmaker and digital strategist based in New Jersey, worries about the ways Clearview’s scope will impact people who need to fly under the radar for safety reasons. “As a woman who commutes on a regular route and has faced harassment like my photo being taken on trains, I think [Clearview is] terrifying. It opens the floodgates for individuals to stalk freely,” Hoyos says. “I think about this technology being used to fuel government surveillance and ignite further danger for undocumented immigrants or individuals needing refuge from domestic violence.”

In this new decade, technological strides may seem innovative, fun, and exciting, but these new technologies can also be dangerous: We’ve seen technology used not only to widen the scope of abuse but to make abuse itself easier. Smart homes seem fun and exciting until abusive exes begin using them to harm their former partners. These “digital tools of domestic abuse” seem like unrealistic elements out of a horror film, with partners and exes controlling thermostats and garage doors, but they’re our new reality.

“As technology progresses, especially in conjunction with social media utilization, the possibility of abuse is everywhere,” says Alison Falk, a digital activist who is managing director of Women of Sex Tech and president of Women In Tech PGH. “Clearview AI violates the privacy of all women and femmes, paving the way [for an] increase in stalking, abuse, and personalized but equally unwelcome solicitations. There are so many valid and legitimate reasons for folks to keep their identity private. The risks of abuse associated with Clearview AI heavily outweigh the benefits and, as many have claimed, bring us closer to a police state.”

Given the rapid increase in digital technology (and the fact that laws haven’t yet caught up with these privacy violations), there are now cyber safety guides for survivors of domestic violence. Sexual privacy attorney Carrie Goldberg’s 2019 book, Nobody’s Victim: Fighting Psychos, Stalkers, Pervs, and Trolls, explores the myriad ways trolls leverage the internet to abuse strangers with impunity. “[E]verything bad that happens online is allowed to happen because Section 230 of the [Communications Decency Act of 1996] exists,” writes Goldberg. That law, Bitch cofounder Andi Zeisler explains in her review of Goldberg’s book, “was originally written to differentiate internet companies with online message boards from publishers, thereby protecting the former from defamation and obscenity suits.”

Similarly, Marlisse Silver Sweeney’s 2014 article in The Atlantic notes, “The law is notoriously slow to adapt to technology, but legal scholars say that if done right, the law can be used as a tool to stop this behavior.” But what happens when the law is working with the technology being used to harm? “When the first antistalking law was passed in the United States (as late as 1990!), stalking was limited to in-person following, letters, and phone calls,” Andréa Becker, a queer Latina woman and sociology PhD candidate and researcher based in Brooklyn, tells Bitch. “Now, stalkers can use various forms of social media, GPS location trackers, and the vast amounts of personal information available online to track and nonconsensually contact their victims. Even our current social media outlets like Facebook and Instagram are ill-equipped to protect victims of stalking.”

Since coming to prominence in the 2010s, social media stalking has largely been treated as a joke that doesn’t require police intervention. There are ongoing memes about falling so deep into an Instagram stalking spiral that you find yourself digging into the life of your boyfriend’s cousin’s stepsister’s best friend. But what happens when the motives are more insidious? “Adding another type of technology—and one that makes personal information so readily available—only further expands the avenues for women to be harassed and abused,” Becker says. One in six women has experienced stalking, and there are, of course, racialized aspects to the crime: According to a 2018 report from the U.S. Department of Justice (DOJ), Black women are more likely than white women to be stalked, with people of two or more races nearly twice as likely to be stalked. Stalking victims also tend to be young, with people in the 18-19 and 20-24 age groups most likely to experience stalking.

According to that same DOJ report, not knowing what’s coming next and how far their stalker’s behavior will go is the most common fear among victims of stalking, so knowing that someone can track a victim’s every move only amplifies that fear. “Women are already harassed, assaulted, stalked, and killed for being in public places, through no fault of our own,” Kate Walton, a feminist activist, organizer, and writer based in Jakarta, Indonesia, tells me. “Software like Clearview AI would give men even more power and would enable them to more effectively track women and harass them face-to-face, and also online, once they discover their identity. This has massive implications for women’s privacy, safety and security. It’s particularly dangerous for women who are escaping violent intimate relationships, as it would mean that their former partners could much more easily track them down.”

Clearview says its ultimate goal is to assist law enforcement. Currently, you can only access Clearview’s service by clicking a button below its tagline (“technology to help solve the hardest crimes”) and filing a request; beneath that button, the site notes the tool is “available now for law enforcement.” The concept feels sketchy, and the company’s track record speaks for itself: BuzzFeed News reports that Clearview lied about its own role in identifying a terrorism suspect, and several of the women I spoke with explained that they doubted Clearview the moment they realized it was cofounded by Hoan Ton-That. Owen Thomas, who first wrote about Ton-That in 2009, reported in the San Francisco Chronicle that Ton-That originally gained infamy for his role in creating ViddyHo, a website that amounted to little more than a phishing scheme, followed by spam.

Thomas explains that San Francisco and Oakland have already banned facial recognition: “The city where Ton-That got his start as a software developer won’t let his creation be put to work.” Now there’s a larger call to ban facial recognition, with some advocates (Fast Company notes the Electronic Privacy Information Center and 40 other privacy groups) calling for a federal facial-recognition ban, especially given that, according to the aforementioned New York Times article, Clearview likely won’t remain exclusive to law enforcement for very long. So how did we get here? “Software, tech in general, and products as a whole just aren’t designed for women,” explains Nivi Achanta, a startup founder and technology consultant based in San Francisco. “This is the byproduct of leaving women out of tech and social leadership…. Software is the most powerful thing in the world, and Clearview’s implications for women’s safety, and safety as a whole, is a clear argument for why all technology should be open source.”

While some of the people I spoke with were shocked that Clearview exists and might one day be available for public use, others were unsurprised, given that technology is so male-dominated. Evelyn Kronfeld, a trans woman and journalist based in Maryland, falls into the latter category. “When I first read about what this company was doing, I wasn’t that shocked,” she says. “Facial recognition is very scary, and the way this technology can be used by law enforcement and corporations should concern us all. The Clearview stuff is gross, but this kind of software is already being used as a tool of mass surveillance.” The rise of Clearview and similar technologies can feel terrifyingly dystopian (“This software could lead to women’s harassment, assault and even death,” Walton says), but Falk implores us to push back by becoming more educated about the issue: “It’s critical that as a society we work toward increasing digital literacy as a whole—especially within the pockets of ethical machine learning, data privacy, and security.”

Awareness of Clearview has led to increased privacy concerns—and now a class-action lawsuit. The complaint reads in part, “What defendant Clearview’s technology really offers…is a massive surveillance state with files on almost every citizen, despite the presumption of innocence. Indeed, one of defendant Clearview’s financial backers has conceded that Clearview may be laying the groundwork for a ‘dystopian future.’ Anyone utilizing the technology could determine the identities of people as they walked down the street, attended a political rally or enjoyed time in public with their families.” The suit, which seeks an order requiring Clearview to cease operations, is ongoing. But while the idea of Clearview shutting down may bring comfort to some, laws around digital technology, the privacy we actually have a right to on social media (which, in general, doesn’t amount to much), and the future of facial recognition are still being figured out. “We must constantly question how can we dismantle systems and take actions that will restructure an industry that is majority white males getting to decide who sits at the table and build these products,” Falk says. “Otherwise, these products will continue to develop absent of considerations and the dangers they pose to the most marginalized communities.”

 

by Rachel Charlene Lewis

Rachel Charlene Lewis has written about culture, identity, and the internet for publications including i-D, Teen Vogue, Refinery29, Greatist, Glamour, Autostraddle, Ravishly, SELF, StyleCaster, The Frisky (RIP), and The Mary Sue. Her literary work, reviews, and interviews have been published in Catapult, the Los Angeles Review of Books, The Normal School, Publishers Weekly, The Offing, and several other magazines. She is on Twitter and Instagram, always.