Illustrations by Jacky Sheridan
This article appears in our Summer 2016 issue, Money. Subscribe today!
On February 16, 2016, “with the deepest respect for American democracy and a love of our country,” Apple CEO Tim Cook hurled a giant “fuck off” at the FBI in the form of an open letter opposing an order to create a backdoor to the iPhone. This backdoor software, if created, would give the government (and anyone else who managed to get their hands on it) the ability to bypass iPhone encryption software and access user data.
A few days after the letter was published, the hashtag #StandWithApple began to circulate, with Twitter users applauding Apple’s bravery in standing up for the people’s right to privacy. Soon other tech giants rallied to the cause: Sundar Pichai, Google’s new CEO, echoed Apple in stating that enabling a backdoor would compromise user security, and Facebook CEO Mark Zuckerberg jumped in to provide a supportive soundbite about the importance of encryption technology. Garden-variety iPhone users and tech leaders alike were ready to go to bat for our civil right to privacy. Yet, in our hashtag-and-emoji circus, we missed a critical point: They’re not fighting for us.
Framing the encryption debate as one of civil rights vs. government power allows private companies like Google (now a subsidiary of Alphabet), Facebook, and Apple to paint themselves as brokers operating on behalf of the public. But these companies have made billions on the hardware, software, and web platforms that regulate every aspect of the digital lives of that public. We are not talking altruistic motives here.
The “Frightful 5,” as journalist Farhad Manjoo dubbed the tech giants, have quickly made themselves indispensable to most of us. We depend on Apple, Amazon, Google, Microsoft, and Facebook to tell us what to do, where to go, how to get there, what to eat, what news to care about, who to follow, what to like, and what we should be doing with our time. In a little under two decades, these companies have not only touched every aspect of our lives, but have become, as Manjoo noted, “the basic building blocks on which every other business, even would-be competitors, depend.” Cross-platform measurement company comScore reports that as of September 2015, Google Search controlled almost 64 percent of the search-engine market; Apple ranked as the top smartphone manufacturer, with 42.9 percent of the OEM market share in 2014; and a 2014 Pew Research Center study found that 71 percent of adults online use Facebook. For lay users, it’s nearly impossible to extract any information from the internet without passing through the gates of one of the Frightful 5. Even finding an alternative search engine like DuckDuckGo might involve first passing through Google.
What happens when our primary access points to the world’s information—and each other—are managed by a handful of private companies? And how did we come to think of them as neutral brokers of information?
Science, mathematics, and engineering: They are historically and presently associated with men, and treated as infallible bodies of knowledge that view, manipulate, and build the world from a completely objectivist point of view. The narrative of Silicon Valley’s revolution hailed technology as inherently neutral, and its creators as impartial engineers of the coming utopia. In a 2013 manifesto coauthored with Jared Cohen, The New Digital Age, Alphabet executive chairman Eric Schmidt outlines a vision of the future in which everyone is connected (presumably through Google), stating that “technology is neutral, but people are not,” which suggests that the way we use technology is what determines its values.
If we go by Schmidt’s logic, a hunk of metal filled with combustible material isn’t a bomb until it’s dropped on a city of civilians. But Schmidt is not the only high-ranking technocrat who believes in the blank-slate theory of tech, and it’s easy to see why: The belief in neutral technology is part of a larger cultural perspective that seeks to immunize men and their endeavors from the social constraints that ostensibly affect women: emotions, and, consequently, bias.
Forging an ideological link between technology and the hard sciences gives tech giants the power to perform what the pioneering cyber-feminist theorist Donna Haraway calls a “god trick,” which she defined as a “view of infinite vision” that positions the subject entirely apart from, and above, the object. When scientists, engineers, or platform designers grant themselves a veneer of objectivity, they absolve themselves of the responsibility to acknowledge their own personal biases, their own stake in the code.
The reality that digital platforms and algorithms are not inherently neutral becomes clear with platforms like Facebook and Google that are designed using historical data as inputs. According to data scientist Shahzia Holtom, this means that “any biases, such as underrepresentation of women or ethnic minorities, that may be present in the historical data will also be reflected in the results.”
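To make Holtom’s point concrete, here is a deliberately toy sketch (ours, not any company’s actual system) of a “model” that does nothing more than tally historical hiring records. Every name and number in it is invented for illustration; the mechanism, not the data, is the point.

```python
# A toy "model" that learns only from historical records.
# If past decisions underrepresented women, the predictions will too.
from collections import Counter

# Invented historical data: (group, was_hired), skewed by past bias.
historical_hires = [
    ("man", True), ("man", True), ("man", True), ("man", False),
    ("woman", False), ("woman", False), ("woman", True), ("woman", False),
]

hired = Counter()
total = Counter()
for group, was_hired in historical_hires:
    total[group] += 1
    hired[group] += was_hired  # True counts as 1

def score(group):
    """The 'prediction' for a new candidate is just the historical rate."""
    return hired[group] / total[group]

print(score("man"))    # 0.75
print(score("woman"))  # 0.25: the old bias, now automated and scaled
```

No engineer had to type a biased rule; the bias arrived with the training data.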
In 2004, Jewish Journal reported that when users typed “jew” into Google’s search bar, the first result was jewwatch.com, a fanatically anti-Semitic hate site. There was a resulting outcry from Jewish advocacy groups and journalists questioning how, out of the 1.72 million web pages relevant to the search term, this result landed on top. Google founder Sergey Brin declined to remove the offending result on the grounds that it would compromise Google’s objectivity; his solution was to create an “Offensive Search Results” warning stating that “the beliefs and preferences of those who work at Google, as well as the opinions of the general public, do not determine or impact our search results.”
Another search-query blind spot surfaced in 2007, when users noticed that typing certain phrases that began with “she”—“she invented,” “she discovered,” “she golfed,” “she succeeded”—would automatically trigger a spelling correction: “Did you mean: he invented?” Google’s official explanation was that its spell-check algorithms were “based on sophisticated machine learning methods…completely generated without human input.” Again: It’s not us, it’s the machines. Ta-da! God trick.
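One plausible way a “no human input” system ends up correcting “she invented” is sheer corpus frequency: if the web it learned from says “he invented” vastly more often, the majority phrasing starts to look like the correct spelling. The sketch below is our hypothetical reconstruction of that mechanism, with invented counts; it is not Google’s actual algorithm.

```python
# A hypothetical frequency-driven "Did you mean" suggester.
# The counts are invented to mirror a corpus skewed toward "he."
corpus_counts = {
    "he invented": 98000,
    "she invented": 4000,
    "he discovered": 120000,
    "she discovered": 6000,
}

def did_you_mean(query):
    """Suggest a pronoun-swapped variant when it dominates the corpus."""
    swaps = {"she": "he", "he": "she"}
    words = query.split()
    if words[0] not in swaps:
        return None
    variant = " ".join([swaps[words[0]]] + words[1:])
    # Pure statistics: flag the query if the variant is 10x more common.
    if corpus_counts.get(variant, 0) > 10 * corpus_counts.get(query, 0):
        return "Did you mean: {}?".format(variant)
    return None

print(did_you_mean("she invented"))  # Did you mean: he invented?
print(did_you_mean("he invented"))   # None: the majority needs no nudge
```

Nothing in the code mentions gender; the skew lives entirely in the counts the machine learned from, which is exactly why “completely generated without human input” is not the same as neutral.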
Digital platforms and algorithms don’t just reflect the biases of the coder, they also have a way of replicating the narrow perspective of users as well. This became all too clear this March, when Microsoft debuted its AI chatbot, Tay. The 19-year-old female chatbot was promptly co-opted by a series of internet trolls and within 24 hours became a neo-Nazi mouthpiece for racist and sexist epithets. After Microsoft shut her down, #JusticeForTay rang out on Twitter, with those same trolls clamoring for the revival of the bot they believed was silenced for speaking the truth. Of course, it wasn’t that Tay revealed some inherent truths about society, but rather that her design had failed to provide her with an adequate ethical framework for navigating the bias and violence of the real world. To create that filter would be to deny the impartiality of tech.
The narrative of objective technology differs sharply from the monopolized reality of the Frightful 5. The proliferation of digital platforms depends upon their unmatched addictiveness. “Habit formation” is the magic phrase in Silicon Valley; Nir Eyal, author of Hooked: How to Build Habit-Forming Products, notes that linking digital platform use to a user’s daily routine and emotions is the best way to ensure loyalty—and, by extension, profit. The habit-formation model argues that digital platforms should be designed as a response to particular emotional triggers, especially internalized ones. You’re anxious? Check Facebook. Bored? Hop on Twitter. Depressed? Scroll through Instagram. Not sure about something? Just fucking Google it. Once a platform is recognized as a balm for these triggered internal emotions, we don’t even need the triggers anymore, but simply return on our own. The habit-formation model does not satisfy a need; it creates an incessant craving. And by encouraging frequent platform use, it provides an endless supply of data that companies can use to get better at keeping you there. “Major tech companies have 100 of the smartest statisticians and computer scientists…whose job it is to break your willpower,” notes ethical-design advocate Tristan Harris.
Falling into the addiction trap not only turns us into neat data points, it also robs us of the agency to make our own way through digital spaces. This isn’t exactly what the vanguards of 1990s cyberculture had in mind. Author and cyberpunk progenitor William Gibson famously wrote that “the Internet is a complete waste of time—and that’s what’s so great about it.” The web was for “wandering aimlessly” and discovering new things about your world; the “waste of time,” as Gibson saw it, was a valuable expression of dissent from a society that glorified endless productivity.
Australian cyberfeminist collective VNS Matrix did foresee the dangers of a digital space in which, as founder Virginia Barratt told Motherboard in a 2014 interview, “access by women was limited and usually mediated by a male ‘tech.’” To revitalize the anarchical nature of the Internet, VNS produced video games and web hacks that would “hijack the toys from technocowboys and remap cyberculture with a feminist bent.”
Though the culture of cyberpunks and cyber-feminists has been absorbed by Hollywood and spat out as The Matrix, the power of the network that inspired their values is still at our disposal. And the realization of our collective agency begins with rejecting the fallacy of neutral technology, and recognizing ourselves in the machines.
The god trick applies not just to products, but to the chronic homogeneity of the tech industry as well. If we can see ourselves in the machines, after all, then it follows that we should also recognize the need for more diverse perspectives. But even with a steady increase in STEM education funding over the last decade, many populations—Black and Latinx people and women among them—remain wildly underrepresented in the tech workforce. And the real-life effects are being felt by coders and users alike: Last summer, after uploading a number of pictures to Google Photos, software developer Jacky Alciné noticed that the facial-recognition function identified two of his dark-skinned friends as gorillas. As he stated in a blog post, “It just doesn’t make sense, barring the obvious, why the world’s most popular search engine was incapable of recognizing the face of a dark-skinned Black person.” A stronger, more diverse quality-assurance team would have been able to prevent the error. But, as a January 2016 Bloomberg Businessweek report pointed out, absent a titanic culture shift (“Over the past two decades, African Americans have made up no more than 1 percent of tech employees at Google, Facebook, and other prominent Silicon Valley companies”), such egregious incidents will continue to occur.
Meanwhile, in 2014, data collected from GitHub revealed that only 17 percent of Google’s engineers were women—and that’s higher than the industry average. But tech journalist Rachel Sklar argues that the general dearth of women in technology isn’t as simple as “there aren’t enough suitable women engineers,” because tech companies don’t just hire engineers. “If Silicon Valley’s money people or fancy keynote founders or biz whizzes on the cover of Entrepreneur were all engineers crushing code 24/7, then fine,” noted Sklar in a post on Medium. “But that is not the case.”
When we recognize that the engine of the tech revolution—the assumption of inherent neutrality—is faulty, and that the Frightful 5’s appeal to the public good is just canny PR, then we can begin to imagine how to effectively design, program, and use these platforms to provide humanity with real, measurable benefits. The techie/former Daily Show producer/comedian Baratunde Thurston, who recently received the Interactive Hall of Fame Award at South by Southwest, took time in his acceptance speech to draw attention to what’s at stake in supposedly dehumanized tech:
If innovation is all about making the world a better place, and the algorithms that claim to do so derive from this very imperfect world sick with racism and sexism and crippling poverty, then isn’t it possible that they might make the world a worse place? Could we end up with virtual-reality racism? Could we have machine-learned sexism? Could poverty be policed by drones and an internet of crap? This is all very possible if we don’t engage consciously in the work that we’re doing.
We can start with an understanding that tech revolutions are inherently social revolutions that project their own set of values and expectations onto the world in which they evolve. Relinquishing the god trick and recognizing that emotion and bias in the machines are not weaknesses or failures, but rather strengths, will make it possible for all of us to move toward a future that makes us better than what we are.