Facial recognition technology promises to alert us if our children are skipping out on their college classes, to zip us past all the suckers waiting in line at the airport and to create nationwide databases to catch the “bad guys.” This newest biometric technology is sold as a shortcut to utopia: a tool that delivers responsible kids, quick service and safe streets — all with a scan of the human face. Politicians and companies pushing facial recognition technology say that, like the near-certainty of DNA and the exactness of fingerprint matches, the software is a precise, unbiased alternative to human bigotry in policing. Yet in reality, facial recognition technology is prone to false positives that disproportionately misidentify Black and Brown people, and it is then used to track them when they are on parole. Instead of offering a kind of utopia, this biometric tool locks people into the dystopia of an already unjust criminal legal system.
Increasingly, this criminal legal system relies less on investigative work and more on the devices that are everywhere and record our every move. The ubiquity of security cameras and the dread that has gripped the collective unconscious since 9/11 have normalized a constant gaze and enabled the proliferation of perpetual surveillance. Intrusive technology is a conventional, everyday reality for younger people who have experienced sustained recording of their movements in the public realm since the day they were born. Even those of us who can remember a time before mass surveillance have acclimated to the perpetual presence of devices that record our movements every day we step outside our homes — and, for some of us, even when we stay indoors.
The ubiquity of constant observation is so absolute that we have been conditioned to enable our own surveillance. We click the Facebook ads for “video doorbells” — ads that seem to sense our fear of crime and terror and promise to replace those awful feelings with the buzz of becoming the Watcher, of gazing into our smartphones and communicating with anyone who comes to our doorstep when we’re not home. These commercials promise us the power to interrogate, even denigrate, whoever activates our video doorbell. What few owners of these systems realize is that they are paying for devices that capture, retain and, in some municipalities, share with law enforcement the images of everyone who comes to our homes — even our family and friends.
According to “The Perpetual Line-Up,” a 2016 report published by the Georgetown Law Center on Privacy and Technology, “one in two Americans is in a law enforcement face recognition network.” These Americans are not necessarily adults. In 2019, The New York Times reported that the New York Police Department had loaded a facial recognition database with thousands of juvenile mug shots. These images of children, teenagers aged 13 to 16 as well as some tweens as young as 11 years old, can be used by the NYPD “despite evidence the technology has a higher risk of false matches in younger faces.” People without criminal records are also entered into these systems. According to the Georgetown report, the FBI is no longer limiting its databases to the fingerprints and DNA evidence collected during criminal investigations, but is now using driver’s license photos to build “a biometric network that primarily includes law-abiding Americans” [emphasis theirs]. This is a problem for everyone, but especially for Black, Indigenous and People of Color (BIPOC), given the propensity of this technology to return false positives on Black and Brown faces. The ACLU has reported that face recognition technology “is known to produce biased and inaccurate results, especially when applied to people of color.”
The ACLU traces the roots of face recognition to pre-computational, racist policies like the Chinese Exclusion Act of 1882, which empowered government workers to determine a person’s affiliation with an ethnic group by nothing more than a careless glance at people seeking the universal right to work and live with their families. Black, Indigenous and Latinx people have also experienced the intergenerational trauma of system controllers making life-altering decisions based on outward appearance and biased assumptions about racial groups. White supremacy undergirded pencil erasure, one-drop rules and the history of racial passing.
The ACLU warns us to rethink the normalization of the capture of biometric data by law enforcement and by corporations today. Amazon is one of the corporations whose facial recognition use the ACLU is monitoring. One of the world’s largest companies by market capitalization, Amazon is in partnership with over 400 police departments nationwide. In 2018, the company applied for a patent to add face recognition to its Ring video doorbell camera system. Facial recognition offers Amazon control over a nationwide, technology-driven version of the neighborhood watch — one that can subject innocent people to data-gathering software that could label them suspicious and thus put them in danger of police harassment. This partnership subjects vulnerable populations to more surveillance, and those risks are borne by Black and Brown bodies in our interactions with police. The convenience of remotely addressing visitors with systems like Ring clearly comes at a price, such as the loss of privacy, but society still has not quantified the cost in civil liberties when people walking their dogs or checking their mailboxes, even beyond the Ring owner’s property line, enter flawed database systems controlled by law enforcement.
As new as facial recognition technology is, the police and corporate surveillance for which it is used is rooted in the racism of the past. Charlton McIlwain is a New York University vice provost and professor of media, culture and communication. In his book Black Software: The Internet and Racial Justice from the AfroNet to Black Lives Matter, McIlwain provides a breathtaking summary of the influence a company called Simulmatics had on the 1960 presidential election. His research emphasizes the more sinister ways Black people have been debased by technology over time.
The data-gathering and aggregating thrust of Simulmatics began at the MIT Computation Center. According to McIlwain, the university’s first political science professor, Ithiel de Sola Pool, supported the school’s vision, “to infuse its science and engineering curriculum with the social sciences” and “solidify the nation’s political and economic power.”
Pool worked to model how people make voting decisions, building mathematical equations designed to replicate the voting propensities of specific racial, ethnic, religious and economic groups. This data was used to persuade Democratic presidential candidate John F. Kennedy to articulate a focus on racial and civil rights issues, and to sway the presidential election in his favor.
McIlwain insists Simulmatics’s goal was not the liberation of BIPOC people but rather power-consolidating control. Indeed, with a military subcontract called Project AGILE, Simulmatics improved the effectiveness of a propaganda and psychological warfare campaign in Vietnam called Chiêu Hồi that the U.S. mobilized to “coerce Viet Cong insurgents.”
Back in the United States, in the aftermath of the 1965 Watts Riots, Simulmatics used polling and demographic data to capture public opinion. But Simulmatics did more than report what Watts residents were thinking about race and politics; Simulmatics used technology to influence what people outside of Watts were thinking about race and politics. McIlwain explains that, because Simulmatics “had no prior connection to, and little understanding of those communities, they often misunderstood and mischaracterized how Black folks explained their experiences of racism, marginalization and oppression at the hands of the cops, media and other institutions. The new computerized statistical tools that Simulmatics used aggregated and distributed these racial misrepresentations to the public.”
In addition to the interviews they’d gathered, Simulmatics also utilized traffic reports and data from toll booths, bus traffic and gasoline sales to track the movements of people in and out of the riot area. This data helped the establishment track the movements of revolutionaries and ordinary people alike, and enabled it to rely increasingly on computer hardware and software to monitor and oppress Black people. While McIlwain acknowledges that The Kerner Report identified institutionalized violence against Black people, “it also reinforced long standing stereotypes white America held about Black people that amounted to one conclusion: Black people are, if not racially, certainly culturally inferior.”
“The Simulmatics project was an effort to game the system in a way,” McIlwain explains. “To use data we could produce about human behavior to try to manipulate the outcomes of everything from an election to wars. One of its chief purposes was to strategically manufacture disinformation as a way of thwarting would-be uprisings, or riots, or other threats to the system.”
Indeed, McIlwain identifies the Simulmatics project as a point of origination for today’s massive disinformation campaigns, such as Russian interference in the 2016 election. In his book, McIlwain explains that work done by Simulmatics “legitimized and normalized the principles on which it was based: the idea that the computer could model, and therefore manipulate, human systems and behavior. It was once theory. It soon became policy. Black people would continue to remain its subject for experimentation. Computing power would be used on them.”
Computing development accelerated in the 1960s, peak years in the mid-20th century movement for racial justice. According to McIlwain, the government identified the computer as a tool to silence revolutionary fervor, and the computing industry leaped at the opportunity to profit from this government effort. By 1965, this collusion of government, private industry (namely IBM), and elite science and engineering institutions (like MIT) had produced a powerful new technology that they referred to as a “criminal justice information system.”
Used to collect, store and analyze crime data, these systems gave law enforcement resources “to profile and target Black people and communities across the country,” McIlwain says, adding that “these systems left a long legacy, and a direct line to today’s most destructive technologies, from facial recognition to all types of risk scoring technologies to digital surveillance tools that make Black people hypervisible targets.”
This line is important to trace, because Simulmatics delivered computational data to political party leaders and to the U.S.’s military industrial complex. “The motivation — in an election or in war — was control,” McIlwain says. “The data-informed strategies that Simulmatics pioneered [were] valuable to military strategists. They believed that by amassing data about how individuals and groups of human beings behaved they could manipulate and control the thoughts, movements and behaviors of those people. Doing so, they believed, would help them know how to spread disinformation, or neutralize someone they believed was becoming a powerful leader, or try to make someone who is your enemy believe you are their friend. All for the purpose of controlling and leading people towards a desired outcome.”
In his book, McIlwain identifies system controllers who, instead of trying to eradicate racism, exploited it in order to concretize their power. Today, similar technology, particularly facial recognition technology, is still being used to subjugate BIPOC people. “Like many biometric technologies,” McIlwain says, “facial recognition often seeks to identify and deal with people that law enforcement and governments perceive as ‘threats,’ criminals, and undesirables. Given BIPOC’s longstanding association with all of these, facial recognition technologies are often trained to look for us. The machinery that enables facial recognition [is] where BIPOC live and congregate. It provides law enforcement another tool in its arsenal to turn BIPOC into perpetual suspects.”
While this manipulation crosses racial lines, it is fair to say that, given the facts of our shared history, the guiding principle for all biometric systems like facial recognition is racism. Racism influences facial recognition “inasmuch as the search for human distinction is rooted in biology (be it eyes, a face, skin color, or otherwise),” McIlwain says. “The need to establish the ‘truth’ of your identity has always been driven by the desire to separate ‘us’ from ‘them,’ and that ‘us’ and ‘them’ is frequently a racial distinction.” While people who fear the “Darker Other” might sleep better because they believe in the technology-driven security systems that make them feel safe, McIlwain cautions all of us to suspect any technology that purports to keep us safe.
Parents told that facial recognition will keep the bad guys out of their children’s local school should be especially vigilant. The ACLU makes the case that not only do children’s faces change at a rapid pace, reducing the effectiveness of the biometric technology, but the threat to schools almost always comes from within the school community, so a shooter would not likely be flagged as an outsider anyway.
Consistent with the ACLU, McIlwain argues that we should be concerned by facial recognition in schools for the same reason we should be concerned about its use by Immigration and Customs Enforcement (ICE). “You make suspects out of people when you utilize facial recognition in a given area, when you surveil the people in a given area,” McIlwain says. “While ICE justifies this surveillance under the pretext of law and order, the hypervisibility of BIPOC and profit-driven policing produces police control, and vulnerable people are therefore targeted for arrest, incarceration, deportation.” A similar dynamic could play out in schools. For example, law enforcement could use data about a child to target parents suspected of being undocumented workers.
Facial recognition technology is also being used to target those advocating for liberation from these racist systems. McIlwain thinks it is “probably a very safe assumption” that images of many people who have engaged in direct action to support the Movement for Black Lives are held captive in a database somewhere. “It has been very well reported that law enforcement were/are very present at such BLM direct actions, utilizing multiple forms of facial recognition technology or the tools that enable them — any imaging tool like a camera or video — to identify persons of interest or suspects. Think of it as COINTELPRO on steroids.”
But staying away from public protest offers no protection from the watchful gaze of system controllers. People who have been stopped and frisked, even young, innocent citizens, should assume that the police body camera that recorded their encounter with law enforcement has saved an image of their face to a database system. McIlwain believes the increasingly effective technology that links data and data systems only heightens the threat of facial recognition. On its own, facial recognition is problematic. But this technology is even more sinister “when biometric data is linked with criminal records data, linked to social media or internet tracking data, and employment records data,” McIlwain says. “That is how the surveillance capabilities of governments and corporations really expand and become dangerous. And we know that that danger will be felt by BIPOC first and hardest.”
We must resist the idea that constant surveillance gives us safety — and that technology will somehow liberate us from fear. Though it may feel counterintuitive to think of security as a threat, in our dystopian reality, even technology is racist.