Face Surveillance Is a Uniquely Dangerous Technology

Companies are determining whether someone has a propensity to be a terrorist by analyzing their face.

Passersby walk under a surveillance camera which is part of a facial recognition technology test at Berlin Suedkreuz station on August 3, 2017, in Berlin, Germany.

Janine Jackson: From the War Resisters League to the Government Accountability Project, Data for Black Lives and 18 Million Rising, from Families Belong Together to the Electronic Frontier Foundation, more than 85 groups signed letters to corporate giants Microsoft, Amazon and Google, demanding that the companies commit not to sell face surveillance technology to the government.

The coalition cites, not the mere potential for such technology to be used to target vulnerable communities, particularly communities of color, but the history of technology being turned to that purpose.

Media coverage of technological developments often conveys an air of inevitability: Tools, once created, must be used, and businesses exist to make money from them.

How do we center the public good in decision-making around technology that some call a “perpetual lineup”?

Shankar Narayan is director of the Technology and Liberty Project at the ACLU of Washington, based in Seattle. He joins us now by phone from Hawaii. Welcome to CounterSpin, Shankar Narayan.

Shankar Narayan: Thank you for having me. I appreciate it.

USA Today last summer had a story that read:

Police said they were able to identify the suspected killer in Maryland’s Capital Gazette shooting using facial identification technology, an increasingly popular tool for law enforcement that’s been embroiled in controversy as civil libertarians warn about the risks of misuse.

In the Seattle Times, it was:

When the Washington County Sheriff’s Office in Oregon started using Amazon’s facial recognition software in 2016, deputies welcomed a new tool to quickly identify suspects and solve cases. The American Civil Liberties Union saw something different: a troubling extension of the government’s ability to keep an eye on its citizens.

So the setup is, “There’s a tool. It’s very useful in crime-fighting, but you might worry it could potentially be misused, if you’re one of those people who doesn’t trust law enforcement.”

The coalition letter that the ACLU was part of says, No. 1, we’re not just talking about civil libertarians, and No. 2, we aren’t only talking about potential misuse. Could you outline for listeners the concerns that this coalition is representing, and what’s happening now that drives those concerns?

I think that’s a great lead-in. There is a long history of surveillance tools being adopted under the rubric of exactly the kind of public safety concerns you describe. The promise that a technology is going to help catch someone in one of the worst possible situations is often the lead-in to its widespread adoption.

But, of course, I think it’s important to tie this back to the long history of surveillance technologies being used in ways that particularly impact vulnerable communities.

And those are the communities that make up this coalition: immigrants, communities of color, religious minorities, even domestic violence and sexual assault survivors. All of those are represented here, and I think their point is that it’s not what could go wrong, it’s what has gone wrong when surveillance technologies have been used in the past, because they have had such a disproportionate effect on those very communities.

So the entire history of the 20th century is one of governments using the surveillance tools of the day. One example is the civil rights movement, where virtually every major civil rights leader had a dossier on them in government hands, fed by infiltration, by bugging, by simply following people around. They even had their version of fake news, where the government would fake a letter from one activist to another, with the stated goal of trying to drive them apart and weaken the movement.

You know, that’s far from the only example. Even prior to that, the Japanese incarceration is another example. We have activists in Seattle who have talked to incarceration survivors, who spoke about how the incarceration itself was illegal and unconstitutional, as the Supreme Court later acknowledged. But the existence of those constitutional safeguards didn’t stop the infrastructure from being misused.

There was a technological back end that included a registration system, so that when the order was given to incarcerate Japanese people in our area, officials knew exactly where to find those people and could send them to the incarceration camps.

A powerful quote that I heard from one of our activists was, “I was a loyal American, but I couldn’t do anything about the incarceration, because my crime was my face.” And I think that should be particularly chilling, given that we now see face surveillance, this potentially game-changing technology, making its way into government hands without checks, and without even a prior discussion about how the tool will be used.

And I’d like to say as well that, lest we think that this is all in the past, even since 9/11, for example, the Muslim community in New York City was surveilled using license plate readers and other technologies, without suspicion and without warrant, in an operation that failed to net a single terrorist.

Tools like social media monitoring have been used against Black Lives Matter and Occupy Wall Street. So we see the echoes of the past in how these technologies are being used today. And that’s why we should be particularly concerned, given the power of facial recognition.

One of the concerns that crops up in what coverage we do see has to do with just the quality of the technology. You know, we may hear that it’s not especially good, this facial recognition technology, particularly at recognizing black faces and brown faces.

But I have also heard that perhaps we want to be careful about making this racial bias in the technology the crux of our opposition, because, for example, when the Chinese state facial recognition system had that problem, the response was to buy a driver’s license database from an African state, and then use that data to improve the system’s ability to recognize black and brown faces.

But we’re not really calling, essentially, for the tool to be better. That’s not really the goal, precisely, is it?

No, it is absolutely not. And I think I should make clear that facial recognition, or face surveillance, is a uniquely dangerous technology that we believe should not be sold to the government. Even a perfectly functioning, unbiased facial recognition system is extremely dangerous. That’s because this technology gives the government unprecedented power to track people, to surveil who they are, where they go and who they know, across geographies, across time.

You don’t have to drive your car if you want to avoid license plate readers. You don’t have to bring your cellphone if you don’t want it to be tracked. But you can’t leave your face at home. And your face print is exactly the target for products like Amazon’s Rekognition and Microsoft Face, which make widespread use of facial recognition technology easy and cheap. And so that’s really the game changer we’re talking about.

In addition to that, the government doesn’t have to decide ahead of time who it’s going to follow around, because face surveillance is silent, it’s undetectable, and it can be applied after the fact to any video or still image. And we know, of course, that there’s more and more video and still imaging, such that anything you do in a public space may well get recorded, then be subject to facial recognition if the government has that tool in its toolbox.

And we also know that black and brown communities are already over-policed; they are already subject to a disproportionate amount of video and other surveillance. So one might expect that facial recognition in government hands will layer on top of that, again, even if it’s perfectly functioning. This is a technology that can be used on footage from officer body cameras, from video cameras, from drones, from private surveillance, and it’s easy and cheap to add to that infrastructure.

So all of these concerns arise before you even get to talking about the bias within the technology itself, which has been consistently demonstrated by a number of studies. I think it’s difficult to buy Amazon’s rebuttals to these studies, where they say, “Well, you used the wrong confidence threshold,” or “You didn’t use the tool the right way.”

Right.

Or, “We made improvements in November, and so you should have used the tool after that.”

All of those, I think, start to sound a little bit tone-deaf when you look at the broader trend of study after study showing that this tool is less accurate at identifying people of color, and particularly bad at identifying women of color.

And in addition to that, we want to also add in the layer of what’s called affect recognition. In other words, the use of these tools not just to identify people, but to determine whether they’re happy or sad, maybe whether they’re angry, to determine their level of dangerousness.

Wow.

There are, in fact, companies out there that purport to determine whether someone has a propensity to be a terrorist, simply by analyzing their face print. And imagine an officer making a life-or-death decision based on a facial recognition analysis from their body camera that is biased in a way that shows a black person, for example, to be more angry and dangerous than they actually are. And exactly that kind of bias has been found when using these tools.

So bias is an additional layer on top of all of the concerns about building this infrastructure, concerns we have yet to discuss publicly, simply because agencies are already acquiring these tools, really without public notice and without transparency.

Finally, and it sounds like you’re heading towards it: I have seen stories on, for instance, how San Francisco might become the first US city to forbid its agencies from using facial recognition technology, and I’ve seen opinion columns that warn about some of the dangers and concerns that we’ve talked about here.

But I would say that I do feel that overall, there’s not a sense of urgency from the press corps; media have a kind of, “Well, it’s out of the box now. Let’s see what it does” vibe. There’s a sense that it’s just not plausible somehow to say, “Don’t.” You know, to say “no” to the use of a technology.

And I see that matched to some degree in the public. You know, “Oh, I just assume everything I do is on camera.” And that makes me worried as well.

What, finally, would a press corps vitally engaged in protecting our civil rights be doing? What kind of reporting would you like to see?

A really great question, and I think it connects to a much broader situation. These large technology vendors have assured us that technology is a neutral thing that’s going to come in and solve the problems we have around public safety.

But of course, I think that the job of a responsible media would be to question those assumptions, and ask about a narrative in which we as communities — and particularly the most impacted communities — are able to come together and set a values framework within which technology acts.

I think as a society, our entire country is founded on this idea of civil liberties, right? That’s what our Constitution is. And technologies like face surveillance should not be allowed to make an end run around those values without a real public discussion.

We’ve been speaking with Shankar Narayan, director of the Technology and Liberty Project at the ACLU of Washington state. Shankar Narayan, thank you so much for joining us this week on CounterSpin.

Thank you. I appreciate being on.
