A new documentary looks at the dangers of artificial intelligence and its increasing omnipresence in daily life, as new research shows that it often reflects racist biases. Earlier this month, Cambridge, Massachusetts, became the latest major city to ban facial recognition technology, joining a growing number of cities, including San Francisco, that have banned the technology, citing flawed performance and racial and gender bias. A recent study also found that facial recognition identified African-American and Asian faces incorrectly 10 to 100 times more often than white faces. The film Coded Bias, which just premiered at the 2020 Sundance Film Festival, begins with Joy Buolamwini, a researcher at the MIT Media Lab, discovering that most facial recognition software does not recognize darker-skinned or female faces. She goes on to uncover that artificial intelligence is not in fact a neutral scientific tool; instead, it internalizes and echoes the inequalities of wider society. For more on the film, we speak with Buolamwini, who uses art to raise awareness of the implications of artificial intelligence, and Shalini Kantayya, director of Coded Bias.
TRANSCRIPT
AMY GOODMAN: This is Democracy Now!, democracynow.org, The War and Peace Report. I’m Amy Goodman, with Nermeen Shaikh. And we’re broadcasting from the Sundance Film Festival in Park City, Utah, from Park City TV, where a new film looks at the racial and gender prejudice baked into artificial intelligence technology, like facial recognition. The film is called Coded Bias.
NERMEEN SHAIKH: Earlier this month, Cambridge, Massachusetts, voted to ban facial recognition, joining a growing number of cities in the U.S., including San Francisco, that have outlawed the artificial intelligence software, citing flawed technology.
AMY GOODMAN: A recent study found facial recognition identified African-American and Asian faces incorrectly 10 to 100 times more than white faces. The study by the National Institute of Standards and Technology found a photo database used by law enforcement incorrectly identified Native Americans at the highest rates.
NERMEEN SHAIKH: The danger of flawed artificial intelligence and its increasing omnipresence in daily life is the focus of the new film Coded Bias. The film begins with Joy Buolamwini, a researcher at the MIT Media Lab, who discovers that most facial recognition software does not recognize darker-skinned or female faces when she has to wear a white mask to be recognized by a robot she herself is programming. She goes on to reveal that artificial intelligence is not in fact a neutral scientific tool, but instead reflects the biases and inequalities of wider society.
AMY GOODMAN: This is Joy Buolamwini testifying before Congress in May.
JOY BUOLAMWINI: I’m an algorithmic bias researcher based at MIT, and I’ve conducted studies that show some of the largest recorded racial and skin-type biases in AI systems sold by companies like IBM, Microsoft and Amazon. You’ve already heard facial recognition and related technologies have some flaws. In one test I ran, Amazon Rekognition even failed on the face of Oprah Winfrey, labeling her male. Personally, I’ve had to resort to literally wearing a white mask to have my face detected by some of this technology. Coding in white face is the last thing I expected to be doing at MIT, an American epicenter of innovation.
Now, given the use of this technology for mass surveillance, not having my face detected could be seen as a benefit. But besides being employed for dispensing toilet paper, in China the technology is being used to track Uyghur Muslim minorities. Beyond being abused, there are many ways for this technology to fail. Among the most pressing are misidentifications that can lead to false arrest and accusations. … Mistaken identity is more than an inconvenience and can lead to grave consequences.
AMY GOODMAN: That’s Joy Buolamwini, who now joins us here in Park City at the Sundance Film Festival, along with Shalini Kantayya, the director of the film Coded Bias, which just premiered here at the festival.
We welcome you both to Democracy Now! So, take it from there, Joy. I mean, how did you end up testifying before Congress? And take us on your journey, from MIT, discovering that your face is one that would be recognized so many fewer times when artificial intelligence technology is used than others. I mean, maybe that’s protection. Who knows?
JOY BUOLAMWINI: Absolutely. So, my journey started as a grad student. I was working on an art project that used face detection technology, and I found that it didn’t detect my face that well, until I put on a white mask. And so, it was that white mask experience that led to questioning: Well, how do computers see in the first place? How is artificial intelligence being used? And if my face isn’t being detected in this context, is it just me or other people?
AMY GOODMAN: Can you also step back? What even does artificial intelligence mean? What does AI mean?
JOY BUOLAMWINI: Sure. So, AI is about giving machines capabilities we perceive as somewhat intelligent from a human perspective. So, this can be around perceiving the world, so computer vision, giving computers eyes. It can be voice recognition. It can also be about communication. So, think about chatbots, right? Or think about talking to Siri or Alexa. And then, another component of artificial intelligence is about discernment or making judgments. And this can become really dangerous, if you’re deciding how risky somebody is or if they should be hired or fired, because these decisions can impact people’s lives in a material way.
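To make the “computer vision” piece Buolamwini describes concrete, here is a minimal Python sketch using OpenCV’s bundled Haar-cascade face detector. The image path is hypothetical, and this is not the software discussed in the film; the point is simply that a detector like this only finds faces that resemble the examples it was trained on, which is where the bias she describes can enter.

import cv2  # OpenCV: pip install opencv-python

# Load the Haar-cascade face detector that ships with OpenCV.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

# "portrait.jpg" is a hypothetical input image.
image = cv2.imread("portrait.jpg")
if image is None:
    raise SystemExit("Could not read portrait.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Returns one (x, y, width, height) box per detected face.
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

if len(faces) == 0:
    print("No face detected")  # the failure mode described above
else:
    print(f"Detected {len(faces)} face(s)")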
NERMEEN SHAIKH: Well, can you talk about the origins of artificial intelligence? You go over it a bit in the film Coded Bias.
JOY BUOLAMWINI: Yes. And Shalini does a great job of really taking it all the way back to Dartmouth, where you had a group of who I affectionately call “pale males” coming together to decide what intelligence might look like. And here you’re saying, “If you could play chess well, that’s something that looks like intelligence.” The thing also about artificial intelligence is what it is changes. So, as machines get better at specific kinds of tasks, you might say, “Oh, that’s not truly intelligence.” So, it’s a moving line.
AMY GOODMAN: So, Shalini, why don’t you talk about how you came up with the idea for Coded Bias, with Joy, of course, a central figure in this film, and take the history further?
SHALINI KANTAYYA: Well, basically, I was sort of like a science fiction fanatic. And so I like reading about technology and imagining the future. And I think so much of what we think about artificial intelligence comes from science fiction. It’s sort of the stuff of Blade Runner and The Terminator. And then, when I started sort of reading and listening to TED Talks by Joy and another mathematician named Cathy O’Neil, other women like Meredith Broussard and Zeynep Tufekci, I realized that artificial intelligence was something entirely different in the now. It was becoming a gatekeeper, making automated decisions about who gets hired, who gets healthcare and who gets into college. And when I discovered Joy’s work, I was just captivated by this young woman who was disrupting the disruptors.
AMY GOODMAN: So, let’s go to a clip from your remarkable film, Coded Bias. This shows police in London stopping a young black teen.
SILKIE: Tell me what’s happening.
GRIFF FERRIS: This young black kid in school uniform got stopped as a result of a match. Took him down that street just to one side and like very thoroughly searched him. It was all plainclothes officers, as well. It was four plainclothes officers who stopped him. Fingerprinted him after about like maybe 10, 15 minutes of searching and checking his details and fingerprinting him. And they came back and said it’s not him.
Excuse me. I work for a human rights campaigning organization. We’re campaigning against facial recognition technology. We’re campaigning against facial — we’re called Big Brother Watch. We’re a human rights campaigning organization. We’re campaigning against this technology here today. And then you’ve just been stopped because of that. They misidentified you. And these are our details here.
He was a bit shaken. His friends were there. They couldn’t believe what had happened to him.
Yeah, yeah. You’ve been misidentified by their systems. And they’ve stopped you and used that as justification to stop and search you.
But this is an innocent, young 14-year-old child who’s been stopped by the police as a result of a facial recognition misidentification.
AMY GOODMAN: So, that’s a clip from Coded Bias. Joy Buolamwini, explain further what took place here, the misidentification, the identification. Some might perversely say it’s better for this technology to fail, so that people can’t be identified, but this is the opposite case.
JOY BUOLAMWINI: Absolutely. So you were saying earlier maybe not being identified is a good thing. But then there are the misidentifications that have a real-world impact. So, in the clip and in the film, you actually see the work of Big Brother Watch U.K. And in this particular scenario, Big Brother Watch U.K. was able to track what was going on in London. And one of the things they showed in their study, “Face Off,” was you had false positive match rates of over 90%. So you see this one example here, but they also had reports where more than 2,400 innocent people were mismatched. So it’s not just a case of, “Oh, you’re not detected.” That might be sometimes. But you could be misidentified as somebody you’re not, and the consequences can be grave.
AMY GOODMAN: And we’re playing this clip at a time when The New York Times reports that London’s police department said it would begin using facial recognition to spot criminal suspects with video cameras as they walk the streets, adopting a level of surveillance that is rare outside China. The technology London is deploying goes beyond many of the facial recognition systems used elsewhere, which match a photo against a database. The new technology uses software that can immediately identify people on a police watchlist as soon as they’re filmed on a video camera, Joy.
JOY BUOLAMWINI: And I think you might need to say “attempt to identify,” because oftentimes the claims that are made about these technologies don’t necessarily match up to the reality. Earlier you spoke about the National Institute of Standards and Technology study. They studied 189 algorithms from 99 different companies. And so, this is the majority of the facial recognition technology that’s out there, and they found racial bias, gender bias and age bias, as well. So, if you have a face, you have a place in this conversation, and we all need to be concerned. So I think it’s highly irresponsible to deploy technologies that we already know have significant flaws, that we already know can be abused. It’s common sense to place a moratorium until we’re at a better place.
NERMEEN SHAIKH: Well, Shalini, another place that you profile in the documentary is China. And you speak to this woman at some length. So, a couple of questions. First, how did you get access? And your response to the fact that she actually supported the credit — what is it? The social credit system?
SHALINI KANTAYYA: Absolutely.
NERMEEN SHAIKH: If you could explain what that is, how it works there and what your sense is of the kind of support that this system has in China? And then, Joy, along the same lines as what you were talking about earlier, in places like China, where the artificial intelligence and facial recognition technology is developed, is there a similar bias? And if so, what is it? But first, Shalini.
SHALINI KANTAYYA: Well, I got access through a local production company in China. And I feel that this woman kind of gave us insight into this social credit system that is coming up in China, where they’re using facial recognition in tandem with the social credit system. So, basically, they’re tracking you, they’re watching you, they’re surveilling you, and they’re scoring you. And not only does what you do impact your score, but what your friends do impacts your score. And this young woman, who is featured in the film, says that, you know, in fact, we don’t have to trust our own senses anymore, that we can rely on this sort of social credit score to actually have integrity in who we trust and who we don’t trust. And I think, in the film, you know, we sort of want to think, “Oh, that’s sort of a galaxy far, far away from the U.S.” But in the making of this film, I saw all kinds of parallels of that type of scoring happening here in the U.S. and in other places around the world.
NERMEEN SHAIKH: Explain how you see that it’s comparable or could be.
SHALINI KANTAYYA: Well, as Amy Webb says so poignantly in the film, we’re all being scored all the time, from our Uber scores to our Facebook likes. All of that information is being tracked and analyzed all of the time. And so we’re all being rated all of the time. And so, that kind of tracking can impact how much we pay for insurance, what kind of opportunities are shown to us online. And so, very much it becomes sort of an algorithmic determinism.
AMY GOODMAN: And Joy?
JOY BUOLAMWINI: So, to the question of how are the systems working in China, in our first study, called “Gender Shades,” we looked at IBM, Microsoft, but we also looked at Face++, a billion-dollar tech startup in China. And we found similar racial bias and gender bias. But, overall, when they’ve done studies on AI systems developed in China, they tend to work better on Chinese faces, right? And those developed in Western nations tend to work better on Western faces, as well.
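As a rough sketch of the kind of demographic audit “Gender Shades” popularized (not the study’s actual methodology or data), one can tag each test image with a subgroup, compare each system’s prediction to the ground truth, and report an error rate per subgroup; large gaps between groups are the bias signal. The records below are invented purely for illustration.

from collections import defaultdict

# Invented example records: (subgroup, true_label, predicted_label)
results = [
    ("darker-skinned female",  "female", "male"),
    ("darker-skinned female",  "female", "female"),
    ("darker-skinned male",    "male",   "male"),
    ("lighter-skinned female", "female", "female"),
    ("lighter-skinned male",   "male",   "male"),
]

totals = defaultdict(int)
errors = defaultdict(int)
for group, truth, predicted in results:
    totals[group] += 1
    if predicted != truth:
        errors[group] += 1

# Disparities in error rate across subgroups are what an audit
# like this is designed to surface.
for group, count in totals.items():
    print(f"{group}: {errors[group] / count:.0%} error rate over {count} samples")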
One thing I did want to bring up related to China and data collection is this data colonialism that we’re starting to see. So you have reports of Chinese companies going to African nations, providing facial recognition or surveillance technologies in exchange for something very precious, the biometric data of the citizens. So, now, parallel to what we had with the slave trade — right? — where you’re extracting bodies, now you’re extracting digital bodies in service of a global trade, because even when you talk about what’s going on in London, they’re using technology from a company called NEC that’s based in Japan. And so you have to really think about the global context for how these technologies spread around the world.
SHALINI KANTAYYA: And just to add to that, China has unfettered access to data. It has now been mandated that if you want to access the internet in China, you must submit to facial recognition. So, that is the basis for which they’re building this kind of scoring system.
AMY GOODMAN: I wanted to go to another clip from Coded Bias. This is the author of the book Algorithms of Oppression.
SAFIYA UMOJA NOBLE: The way we know about algorithmic impact is by looking at the outcomes. For example, when Americans are bet against and selected and optimized for failure. So it’s like looking for a particular profile of people who can get a subprime mortgage and kind of betting against their failure and then foreclosing on them and wiping out their wealth. That was an algorithmic game that came out of Wall Street. During the mortgage crisis, you had the largest wipeout of black wealth in the history of the United States. Just like that. This is what I mean by algorithmic oppression. The tyranny of these types of practices of discrimination has just become opaque.
AMY GOODMAN: That’s Safiya Noble. And I want to go to another clip now of Coded Bias which features a woman from Philadelphia who was subjected to a recidivism risk algorithm, which judges and probation officers use to calculate the risk of a person reoffending. The scoring system was investigated and found to be racially biased. This is LaTonya Myers and her lawyer, Mark Houldin.
LATONYA MYERS: I go into my probation office, and she tells me I have to report once a week. I’m like, “Hold up. Did you see everything that I just accomplished? Like I’ve been home for four years. I got gainful employment. I just got two citations, one from the City Council of Philadelphia, one from the mayor of Philadelphia. Like, are you seriously going to like put me on reporting every week? For what? I don’t deserve to be on high-risk probation.”
MARK HOULDIN: I was in a meeting with the Probation Department. They were just like mentioning that they had this algorithm that labeled people high-, medium- or low-risk. And so, I knew that the algorithm decided what risk level you were.
LATONYA MYERS: That educated me enough to go back to my PO and be like, “You mean to tell me you can’t take into account anything positive that I have done to counteract the results of what this algorithm is saying?” And she was like, “No, there’s no way.” This computer overruled the discernment of a judge and a PO together.
MARK HOULDIN: And by labeling you high-risk and requiring you to report in person, you could have lost your job. And then that could have made you high-risk.
LATONYA MYERS: That’s what hurts the most, knowing that everything that I’ve built up to that moment, and I’m still looked at like a risk. I feel like everything I’m doing is for nothing.
AMY GOODMAN: Shalini Kantayya is the director of Coded Bias, the film those clips we just played come from. Shalini, as we wrap up, what about regulation?
SHALINI KANTAYYA: These algorithms are impacting all of us in the most — in our civil rights, and we need legislation. We need meaningful legislation around algorithms.
AMY GOODMAN: And the explanation of algorithms, in just 20 seconds, Joy, for us nonscientists?
JOY BUOLAMWINI: Yes. So, algorithms are essentially processes that are meant to solve a particular task or arrive at a decision. So, when we talk about AI, we’re talking about systems that can perceive the world, that can communicate and, most importantly, make determinations. And these determinations impact our lives.
AMY GOODMAN: Well, we want to thank you so much for being with us. Joy Buolamwini is a researcher at the MIT Media Lab and founder of the Algorithmic Justice League. We’re going to link to her speeches and her congressional testimony at democracynow.org. And Shalini Kantayya is the director of the new film Coded Bias, which just premiered here at the Sundance Film Festival.
And that does it for our broadcast. On Friday at 2 p.m., I’ll be speaking here in Park City, Utah, at the museum right next to Dolly’s Bookstore about impeachment and elections. Next Tuesday, February 4th, I’ll be in Washington, D.C., just before the State of the Union. I’ll be interviewing Lonnie Bunch, founding director of the Smithsonian’s National Museum of African American History and Culture, at 6:00 at Busboys and Poets in Washington, D.C.
And next Friday, February 7th, at noon, Nermeen Shaikh will be moderating a panel with the Squad at Howard University. Alexandria Ocasio-Cortez, Rashida Tlaib, Ilhan Omar and Ayanna Pressley, the four congressmembers, will be with Nermeen at Howard University at noon, February 7th. You can get all the details at democracynow.org.
Also, Democracy Now! will be broadcasting the Senate impeachment trial at democracynow.org. I’m Amy Goodman, with Nermeen Shaikh.