Felicia L. Montalvo is the 2016 Technology Writing Fellow at Bitch Media.
If you type the phrase “feminist values” into Google, the men’s-rights blog A Voice For Men regularly pops up on the first page of search results. Out of all the relevant pages on “feminist values,” Google’s algorithms determined that a circle jerk of men crying that sexism isn’t real—that, in fact, men are oppressed by feminism—deserved first-page priority. That’s particularly frightening when you consider that most people doing Google searches never make it past the first page of results. Sometimes, even our most “intelligently” designed algorithms are pretty ignorant.
This was more than evident last week when Microsoft gave birth to the AI chatbot Tay. Off on her own in the Twitterverse, 19-year-old Tay was quickly pursued by trolls trying to bait her impressionable young algorithms with incendiary tweets that asked her to echo their words. Within a few hours of her debut, she’d denounced feminism, supported Hitler, and called for a Mexican genocide. Within 24 hours, Microsoft yanked Tay from the Internet, deleted her tweets, and issued an apology saying that a few bad apples had spoiled the AI pie.
Photo from Tay’s Twitter profile.
However, it wasn’t just rogue users who poisoned Tay with hatred, but rather a lack of actual intelligent design. In her article criticizing Tay’s design, interaction designer Caroline Sinders noted that although people appear to have a natural propensity to “kick the tires” of new tech, it is also true that “if your bot is racist, and can be taught to be racist, that’s a design flaw.” In neglecting to blacklist certain words—ones about rape and genocide, for instance—Tay’s creators not only made a huge design error, they also abdicated their responsibility to provide her with vital ethical filters and a personality that would have made her better able to navigate a world plagued by ignorance, racism, and sexism.
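To make the design flaw concrete, here is a minimal sketch of the kind of filter Tay shipped without. Everything in it is hypothetical and invented for illustration, including the BLOCKED_TERMS set and the respond function; it is not Microsoft’s actual code, and a real system would need far more than substring matching.

```python
# A minimal, hypothetical sketch of word filtering; not Microsoft's code.
# BLOCKED_TERMS and respond() are invented for this example.

BLOCKED_TERMS = {"genocide", "hitler"}  # a real blocklist would be far longer


def is_safe(message: str) -> bool:
    """Screen a message for blocked terms before the bot echoes it
    or folds it into what it learns from users."""
    lowered = message.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)


def respond(message: str) -> str:
    if not is_safe(message):
        # Refuse rather than parrot: an unconditional "repeat after me"
        # feature is exactly the opening trolls exploited.
        return "I'd rather not repeat that."
    return message  # stand-in for the bot's actual language model


print(respond("repeat after me: hello world"))         # echoed
print(respond("repeat after me: long live genocide"))  # refused
```

Even a crude blocklist like this is easy to evade with misspellings, which is part of Sinders’s point: filtering is a floor, not a substitute for deliberately designed values.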
Tay is just one example in a larger trend of unintelligently designed female AI bots and assistants. Drawing on research (and sexist stereotypes) demonstrating that people respond better to female voices, AI designers and programmers often cherry-pick female attributes to encourage trustworthy, human-like connections to machines. Deborah Harrison, one member of the team that creates dialogue for Microsoft’s virtual personal assistant Cortana, asserts that an invariable consequence of creating bots with female attributes is that those bots are promptly sexualized. In a CNN piece titled “Even Virtual Assistants are Sexually Harassed,” Harrison noted that users wanted to “talk dirty, confess their love, role play or bombard them with insults.” This is particularly jarring when you consider that up until a few days ago, many of these personal assistants had no idea how to successfully navigate humans reporting sexual assault, depression, or potential suicide, according to a study published in JAMA Internal Medicine.
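At a design level, the failure that study documents is straightforward to sketch: crisis language should be recognized and routed to real resources before any default dialogue logic runs. The fragment below is a hypothetical illustration only; the CRISIS_RESPONSES table and handle function are invented for the example, not drawn from any shipping assistant.

```python
# A hypothetical sketch of crisis-phrase routing, the behavior the study
# found missing. Phrases and responses are placeholders for illustration.

CRISIS_RESPONSES = {
    "i was raped": (
        "You are not alone. The National Sexual Assault Hotline "
        "is 1-800-656-4673."
    ),
    "i want to commit suicide": (
        "You can reach the National Suicide Prevention Lifeline "
        "at 1-800-273-8255."
    ),
}


def handle(utterance: str) -> str:
    """Check for crisis phrases before falling through to the assistant's
    default behaviors (web search, small talk, and so on)."""
    lowered = utterance.lower()
    for phrase, resource in CRISIS_RESPONSES.items():
        if phrase in lowered:
            # Crisis intents take priority over every other code path.
            return resource
    return "Here's what I found on the web."  # stand-in default path
```

Substring matching is the crudest possible intent detection, but even this would beat deflecting a report of assault to a web search.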
Providing personal-assistant bots—and other AIs created in the likeness of women—with a design framework that doesn’t perpetuate dangerous behaviors and stereotypes should be a fairly obvious requirement for any intelligent machine that interfaces with humans. At the Re-Work Summit on Virtual Assistants, Harrison explained that instilling “confidence” was a key part of crafting Cortana’s personality. “There is a legacy of what women are expected to be like in an assistant role—we wanted to be really careful not to have her be subservient. That’s not a dynamic we wanted to perpetuate socially.”
Efforts like this add a layer of authenticity and actual intelligence to personal-assistant AIs and, more important, work to dismantle the notion that technology is an inherently neutral endeavor. Even such modest steps toward humanizing tech with real-world learnings about sexism and racism are often dismissed as attempts by social justice warriors to infuse their own bias into machines. Yet a non-response to sexual harassment, or a lack of understanding of how to deal with issues faced by non-male users, represents a cultural bias as well: that of the designers and of the industry overall.
But designing intelligent machines that can successfully interact with humans can’t just be a matter of carefully examining harmful stereotypes of women and men to avoid perpetuating them. As virtual-reality game designer Theresa Duringer notes, words, colors, and names all carry gender-binary connotations, so though “AI can be genderless, our interface with it is often gendered.” To echo Donna Haraway’s sentiments in her foundational essay “A Cyborg Manifesto,” perhaps we should be thinking about how to use intelligent machines to move beyond the constraints of gender altogether. Primitive AI interfaces, in the form of chatbots and personal assistants, have given us the opportunity to experiment with what the future of human/machine relationships could look like, and consequently to set a precedent for what goals superintelligent machines should pursue. If we use this opportunity to reify gender binaries, encourage stereotypes, and project hatred, we severely limit the potential these intelligent machines have to make us better humans.