
Tranae Moran in Coded Bias (Photo credit: Courtesy of the 2050 Group)
In August 2020, the New York City Police Department used facial-recognition technology to track down Derrick Ingram, a protester who had taken to the streets after the police murder of George Floyd. After pulling images from Ingram’s private social media accounts, the NYPD circulated his photos—akin to a “wanted” poster—and showed up at his doorstep, attempting to arrest him without a warrant. Though facial-recognition software struggles to accurately detect Black and Brown people’s faces, these technologies are still used to surveil marginalized communities without their consent.
In 2015, Shalini Kantayya directed Catching the Sun, a documentary that speaks with unemployed workers, businessmen, Tea Party activists, and policy advisers in the United States and China about how to transition to solar energy without sacrificing profit. Her latest film, Coded Bias, which premiered at the 2020 Sundance Film Festival, introduces audiences to computer scientist Joy Buolamwini, whose chance discovery of the racism underlying artificial intelligence (AI) facial-recognition software led her to form the Algorithmic Justice League, an organization that challenges racial bias in AI. “I wanted to make a mirror that would inspire me every morning,” Buolamwini said in Coded Bias. “It could put a picture of a lion on my face, or Serena Williams’s.” She mounted a camera running facial-recognition software atop the mirror so it could detect her face. The software refused to recognize her—until she covered her face with a white mask. That experience inspired Buolamwini to educate people about the racist nature of AI algorithms.
The story is, of course, as old as time. Just as Kodak’s Shirley cards set the fair-skin standard for photographs, the algorithms behind AI facial-recognition software replicate the biases of the (largely) white male coders who build them. Now that police departments use such errant software, people of color not only become non-consenting subjects in half-baked experiments but also lose their right to privacy. Coded Bias opens up the conversation about AI algorithms and the many ways their use impacts us all. Bitch spoke with Kantayya about the film, the right to data privacy, and what we can do to ensure our privacy is protected.
Is AI always a bad thing?
It’s not so much about good or bad as it is about the need to be involved in the conversation and interrogation as stakeholders. In the making of this film, I saw a few things that I never thought [were] possible. Three of the biggest tech companies in the world—IBM, Microsoft, and Amazon—announced that they would press pause on the ways they had been developing and selling facial-recognition software. IBM said they were getting out of the facial-recognition game, Microsoft said they would not sell facial-recognition software to police departments, and Amazon said they would pause for a year. That was brought about, at least in part, by Buolamwini’s incredible research proving that their technology was racially biased.
So much of AI is normalized either through our phones or the applications we use. How do we push back?
It’s a great gesture for these tech giants to make these announcements, but it shouldn’t be up to them to decide when they roll out facial recognition to law enforcement. There should be a process, and elected officials in place who will ensure that those processes are followed. Buolamwini, as a graduate student, found that these technologies—[which] were already being sold to police departments, the FBI, and ICE—were racially biased, yet somehow the most powerful tech companies in the world didn’t know this. It’s unbelievable! So we need real legislation. We have to empower ourselves with information and challenge our lawmakers. Activists helped move the dial on these technology companies, and that gives us a moonshot moment where we can push for systemic changes to rein in the power of tech.
There is an extreme case of AI use in China, where people are accustomed to the infringement of their privacy. Is that the dystopia we should fear?
We’re getting comfortable with AI, and I tried to communicate the extremity of that comfort [in the film] through the Chinese government’s use of AI. That part of the film feels like a Black Mirror episode. I know China feels like a faraway place, and it seems like we’re really far from situations where someone scans your face and that tells them your credit score. But Coded Bias tells you that we’re not too far, and that technology is already infringing upon our basic civil rights. AI becomes a danger to our civil rights when it replicates real-life historical biases. Right now, we’re just at the very beginning of that conversation. All of this is very new, very powerful, and developing exponentially. I feel pushback can be effective when a larger group of stakeholders is involved in the conversation about how AI is developed and deployed.
The pandemic has normalized a lot of ways in which we volunteer our private data, be it our body temperature, our medical histories, or even the people with whom we choose to socialize. How does the resistance against AI play out during a pandemic?
A lot of times people frame the conversation around personal responsibility. They ask, “What do we do to protect ourselves?” Sure, BlueJeans is a little better than Zoom, and Signal is a little better than WhatsApp. Protesters in Hong Kong are using lasers, and BLM protesters often use thick makeup. There’s certainly that, but we need systemic change. We need laws that are going to protect us.
Science is always seen as this white male bastion, and your film centers on so many women. Was that an intentional overturning of stereotypes?
As it turns out, women are leading the way in the field of AI ethics! I didn’t set out to make a film with all women, but there was a moment when I felt very happy about centering the voices of so many women. My cast [members] are not only renowned data scientists, but they’re also out on the streets protesting, circulating petitions, and testifying in front of Congress. Of course they have the degrees, but their social standing as women and women of color, as members of LGBTQ communities, or as Jewish people allowed them to see the technology and its impact from a different and empathetic perspective. They have a lens that’s more sensitive and humane in its approach. And that is something all the women who worked on Coded Bias, myself included, brought to the table. The gender ratios in my film are the same as in probably every technology documentary you’ve ever seen. I have just flipped them! It has been incredible to see how unaccustomed we are to seeing women speak with so much authority and rare insight about the technologies that will define our future. It was a pleasure to have Coded Bias give voice to the genius minds that Silicon Valley sometimes misses.

Photo courtesy of the 2050 Group
Do you think Black women, Indigenous women, and other women of color stand to be the most affected by the dissemination of these racist algorithms?
Big technology companies have grown in the middle of the crisis. They have gotten rich on our data without paying us for it, and there are no laws in place that citizens can use to balance that power. Women, especially BIPOC women, have suffered the worst consequences of every bias. As Amy Webb explains in [the 2019 book] The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity, there are nine companies that will lead the world because they have a head start on our data. No competitor can match the amount of information that Google or Facebook have about us. Women obviously stand to suffer more from the misuse of that data. It’s imperative that we’re more cognizant of the invisible hand of power that these technologies have in shaping our lives, our perspectives, and our opportunities.
How hopeful are you that there can be systemic change in how AI is deployed and wielded?
I’m immensely hopeful. I make documentaries because they remind me that a small group of people can make a difference, and I’ve seen that happen repeatedly through Coded Bias. There were people in Brownsville, New York, who didn’t know what biometric data was, but they stopped their landlord from installing facial-recognition technology. Then they went on to inspire the first legislation to stop facial recognition in housing developments. There’s Big Brother Watch in London, run by young activists who are working to stop the rollout of real-time facial recognition by London’s Metropolitan Police. These are just a few examples of people banding together to make change. The biggest obstacle we have in our fight is not Amazon; it’s our own apathy. I am hopeful we’ll rise above that. We are working on a signature campaign in support of a Universal Declaration of Data Rights as Human Rights. There’s a whole list of actions you can be a part of: Visit codedbias.com/takeaction, support organizations doing work in this field, watch and share the film, and talk about it. That’s the first step to changing things.
This interview has been edited and condensed for clarity and length.