“The truth is, every time community groups have asked questions about policing, the police haven’t had good answers. And when really pushed, they had to fold to recognize that maybe this technology wasn’t worth the money, wasn’t doing what it was said to do,” says Andrew Guthrie Ferguson, author of The Rise of Big Data Policing: Surveillance, Race, and the Future of Law Enforcement. In this episode of “Movement Memos,” Guthrie Ferguson and host Kelly Hayes explore the history and failures of predictive policing, and raise the alarm about the creation of new data empires.
Music by Son Monarcas & David Celeste
TRANSCRIPT
Note: This is a rush transcript and has been lightly edited for clarity. Copy may not be in its final form.
Kelly Hayes: Welcome to “Movement Memos,” a Truthout podcast about organizing, solidarity, and the work of making change. I’m your host, writer and organizer Kelly Hayes. This week, we are talking about high-tech policing and how so-called predictive technologies hurt our communities. This episode is a bit of a primer on predictive policing, which I hope will help set us up for deeper conversations about how activists are resisting mass surveillance and other police tech. We’re going to be hearing from Andrew Guthrie Ferguson, a law professor at American University Washington College of Law, who is also the author of The Rise of Big Data Policing: Surveillance, Race, and the Future of Law Enforcement. Andrew’s book is a great introduction to these technologies, and some of the companies behind them, and it also does an excellent job of explaining how these technologies have already failed us.
These supposed advancements in policing have not only failed to reduce crime, but have actually caused more harm in our communities, and that harm is persistent and cyclical. Because when it comes to police reforms and Big Tech, big promises frequently lead to massive failures, which often result in more investment, not less. The answer, we are always told, is more money and even more ambitious initiatives. Rather than learning from past failures or practicing accountability, Big Tech leaders and police reformers hype up the next innovation, and insist that with more money and bigger technologies, all of their failed thinking will finally add up to success.
In my conversation with Paris Marx last year, we discussed how the automation hype of the 2010s should be considered when parsing the hype around AI. During those years, we were told that automation was going to transform our lives, and that truck drivers and food service workers would all soon be replaced by machines. That, of course, didn’t happen. Now, as AI leaders begin to pivot in their own messaging, after a year of big promises and fear mongering, I think police tech is something that we should all scrutinize in light of the bold promises of Big Tech’s social and economic interventions. Because, the products they let loose upon our communities have real consequences. Safety is not improved and our quality of life is not enhanced, but mass surveillance grows, and the systemic biases that govern our lives become more entrenched. We’re going to talk more about what that looks like, today and in some future episodes, but I want to start by taking these technologies out of the realm of science fiction, which is how I think they exist in many people’s minds. We’re not talking about fictional stories like Minority Report, where future crimes can be predicted in advance. Despite the talking points of tech leaders and some government officials, nothing like that is anywhere on the horizon. So, let’s talk about what does exist, why it doesn’t work, and how communities have rebelled against it.
If you appreciate this episode, and you would like to support “Movement Memos,” you can help sustain our work by subscribing to Truthout’s newsletter or by making a donation at truthout.org. You can also support the show by subscribing to the podcast on Apple or Spotify, or wherever you get your podcasts, or by leaving a positive review on those platforms. Sharing episodes that you find useful with your friends and co-strugglers is also a huge help. As a union shop with the best family and sick leave policies in the industry, we could not do this work without the support of readers and listeners like you, so thanks for believing in us and for all that you do. And with that, I hope you enjoy the show.
(musical interlude)
Andrew Guthrie Ferguson: My name is Andrew Guthrie Ferguson. I’m a law professor at American University Washington College of Law here in Washington, DC. I’m the author of The Rise of Big Data Policing: Surveillance, Race, and the Future of Law Enforcement, and I write about issues of surveillance technology, including big data policing, predictive policing, facial recognition technologies, persistent surveillance systems, the internet of things, and basically all the ways that our government is surveilling us in different ways and some of the ways we’re surveilling ourselves.
I began this career, my law career, as a public defender in Washington, DC, trying cases from juvenile cases to homicides, and I teach criminal procedure, evidence, and criminal law here at the law school.
So, I think the best way to think about predictive policing is to divide it up into three categories. Two are, I would say, almost traditional now, because we’ve been experimenting and largely failing with them for the last decade.
The first is place-based predictive policing, the idea that you can take past crime data, maybe the type of crime, the location, the time of day, and use that information to “predict” where a future crime will be such that you can put a police car in the right place at the right time, and either deter the crime or catch the person who’s committing the alleged act. That’s a theory. There have been many problems which we can talk about. But the theory is that past crime data might be helpful to predict future actions.
And some of the background for where place-based predictive policing came from is that there are certain crimes, burglaries, car thefts, that are almost predictable only in the sense that there’s something about the environment, the environmental risk factors that leads people to commit a series of car thefts there. Maybe it’s a parking lot without lighting or anyone around, or maybe it’s a burglary in a neighborhood that statistically speaking, if there’s a burglary in one neighborhood, there may be more burglaries in this same neighborhood, probably because it’s the same group of people going back to keep trying their hand at the crime.
And so that insight that you could actually take past crime and predict future crime has been adopted by police departments, and has largely proven unhelpful and a failure, and we can talk about that. But that’s place-based predictive policing.
Person-based predictive policing is a second type of predictive policing, which basically says we can use risk factors of individuals, maybe that they’d been arrested or convicted, or even that they were a victim of violent crime, to predict that they might be involved in criminal activity in the future. This has been tried in Chicago, it’s been tried in Los Angeles, and largely has completely failed. But the idea was that we could take this past crime data and use it to focus police resources to target the group of people who are most at risk of committing crimes.
And we can talk about the failings and the problems with it. But the theory was that in a world of finite resources, police officers could move forward and, within a focused deterrence framework, target the people they thought were most at risk of committing crime.
The third just general sense of predictive policing is something we’re seeing now that’s coming to the fore with this rise of video analytics and artificial intelligence, the idea that you might be able to predict using pattern matching certain activities that could be seen as suspicious. So for example, you could train video analytics to recognize something that more or less looks like what could be a robbery. And then as the cameras are rolling, the software is also rolling to be able to see a pattern that reminds the computer of a particular kind of crime and then alert officers to the scene.
We also see it with automated license plate readers. Maybe the way you travel in your car is consistent with how drug traffickers might go about their business. And so the algorithm would pick up a prediction that this car that’s been driving this pattern might be involved in criminal activity. Again, a different form of predictive policing. But the idea of all of them is that you can take past crime data, you can run it through an algorithm. And somehow, through the miracle of data-driven analysis, predict the future.
As I said, and as we can talk about, both place-based and person-based predictive policing have largely failed. And while the jury might be out in terms of some of this AI video analytics, odds are it’s probably going to meet the same ill fate.
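To make the place-based idea concrete, here is a minimal sketch, in Python, of the kind of logic such tools rest on: bin past incident reports into grid cells and rank the cells by their recent counts. It is an illustrative toy, with made-up cell sizes and data, not PredPol’s or any vendor’s actual algorithm.

```python
from collections import Counter
from datetime import datetime, timedelta

# Toy place-based "prediction": bin past incident reports into grid cells
# and rank cells by how many recent incidents they contain. The key point
# is the input: past crime reports, which already reflect where police
# have been looking.

CELL_SIZE = 0.005  # assumed cell size, roughly a few city blocks


def cell_of(lat, lon):
    """Snap a coordinate to a grid cell."""
    return (round(lat / CELL_SIZE), round(lon / CELL_SIZE))


def hotspot_cells(incidents, days=30, top_n=5, now=None):
    """Rank grid cells by reported incidents within the lookback window.

    incidents: iterable of (lat, lon, timestamp) tuples from past reports.
    """
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=days)
    counts = Counter(cell_of(lat, lon) for lat, lon, ts in incidents if ts >= cutoff)
    return counts.most_common(top_n)


# Example: three clustered reports dominate the "forecast," so patrols are
# steered back to wherever reports were already being generated.
now = datetime.utcnow()
reports = [
    (41.8500, -87.7000, now - timedelta(days=2)),
    (41.8501, -87.7002, now - timedelta(days=5)),
    (41.8499, -87.6999, now - timedelta(days=9)),
    (41.9000, -87.6500, now - timedelta(days=1)),
]
print(hotspot_cells(reports))
```

A sketch like this can only ever send patrols back to places that already generated reports, which is why the conversation below describes these systems as reinforcing existing patterns of policing rather than forecasting anything new.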
KH: Now, as we dive into explanations of these varying technologies and their shortcomings, I want us to remember the divergence between our interests and those of tech companies, the police, and the people and forces that are actually served by policing. I think these words from Mariame Kaba and Andrea Ritchie’s book No More Police might help us along:
Police exist to enforce existing relations of power and property. Period. They may claim to preserve public safety and protect the vulnerable, but police consistently perpetrate violence while failing to create safety for the vast majority of the population, no matter how much money we throw at them. Their actions reflect their core purposes: to preserve racial capitalism, and to manage and disappear its fallout.
Police reforms have a long history in the United States. When the violence of policing results in a destabilization of the social order, due to organizing and protests, we get reforms, such as the professionalization of policing in the last century and the technological solutions of today. Those reforms heap more resources upon police, but do not change the core functions of policing. With those core functions intact, the same problems persist, and we are told still more reform, and thus, more resources are needed. I want us to keep these trends in mind as we think about the directions that predictive policing has taken, and will continue to take in our lives. I also want us to understand why companies that produce failed technologies, such as SoundThinking, continue to receive more investment – because the truth is, they are succeeding at something, even if that something isn’t public safety.
AGF: So place-based predictive policing got its start in Los Angeles. There was an idea that the Los Angeles Police Department could take past crime statistics and be able to reallocate police resources to go to the particular places where the crime would occur.
There was a company then called PredPol that had a contract with the Los Angeles Police Department, that basically sold them this idea that data-driven policing through this algorithm could help them do more with less, be at the right place at the right time, and that they could actually put their police car where they thought, let’s say a burglary would be, a car theft would be, or anything else.
Now, what happened over time, and it took almost a decade of movement activists pushing against this kind of technology, was that essentially they could not show that the predictions were accurate, that they were actually putting the police car at the right place at the right time and thus reducing crime. And furthermore, they were essentially targeting certain areas that of course correlate with poverty, that correlate with economic need and deprivation. And they were spending police resources to follow the data to these particular places, rather than necessarily focusing on why there might be criminal activity in those places in the first place.
And so place-based predictive policing began in Los Angeles, but then spread like wildfire throughout much of the last decade, through jurisdictions all across the country that saw the logic of taking past crime data, which they had, and applying it to the future, without really asking the questions of A, does this work? B, will this actually be helpful to officers? And C, is it worth the money? Because of course, there are opportunity costs: the money you spend on a technology is money you’re not investing in a community.
And as we’ve seen over the last decade, after departments adopted this technology, they largely could not show that it worked, that it lowered the crimes they were worried about. What could be shown was that it was actually targeting poor communities and communities of color in many places. And further, that many times, they didn’t even have buy-in from their own officers who were told to follow the algorithm.
And so this whole sort of management structure that was based on predictive algorithms resulted in a system that was costing money, not lowering crime, not helping out the police, not even wanted by the police, and generating a lot of community outrage that there was this algorithm that was determining police resources in their community.
And of course, as one of the great flaws of most policing in America, the community wasn’t even consulted about whether this was something that they wanted, needed, or thought was a good idea.
And so after many communities learned about what was happening, there was a ton of pushback and most of the early experiments of predictive policing have been shut down, at least in name.
KH: This point about them having been shut down “in name” is important, because sometimes, in the face of bad news or bad press, companies that produce police tech simply rebrand. One example of this maneuver is PredPol’s transformation to Geolitica in 2021. Geolitica produces software that processes data and incident reports and generates predictions about where and when crimes are most likely to occur. In a piece co-published in Wired and The Markup in October of 2023, journalists Aaron Sankin and Surya Mattu shared the results of their analysis of how well the technology had worked in Plainfield, New Jersey. They found that the success rate of Geolitica’s predictions was less than one half of one percent.
In its early startup phase, PredPol raised $3.68 million. Their services were subsequently purchased by police departments around the country, despite widespread criticism from activists that the project amounted to the “tech-washing” of racist police practices. In 2023, after years of reporting about how the company’s technology simply doesn’t work, Geolitica was acquired by SoundThinking, which is a rebrand of ShotSpotter. So, why did one troubled, rebranded police tech company acquire another? We’ll talk more about that a bit later.
AGF: Person-based predictive policing was the evolution of place-based predictive policing. The thought was, well, if we can take locations that we think are higher risk, wouldn’t it be better if we could also find individuals who are high risk? Obviously individuals commit crimes, and so if you can focus on the individuals, maybe we could reduce their activity and criminal behavior.
One of the better, early experiments of this idea, and I say better only because it was seen for all of its flaws, was out of Chicago. Chicago police decided to partner with some academics to figure out if they could identify who was at risk for violence in their community.
And in Chicago, many of the young people who were involved in violent actions were involved in reciprocal, quasi gang-based retaliation acts. “If you shoot me, I’m going to go out and shoot you and your friends. If you and your friends get shot, well, we got to go back and shoot.” So they could almost map out that there was going to be this reciprocal violent activity that was back and forth between different groups in Chicago.
And what they did was they took that insight and they said, “What if we can identify the individuals who would belong in this group, if we can find the risk factors?” Now, the risk factors they picked initially were whether people had been arrested by the police, whether they had been arrested for a violent crime or a drug crime, whether they’ve been convicted of a crime, and/or whether they were a victim of a crime. So even victims were identified as individuals with higher risk.
And the initial theory was, well, if we can come up with a list of people who we think are more likely to be involved in shooting each other, we can then maybe go intervene. And this is what happened: they had these things called custom notification letters, where they literally would go knock on a young man’s door with a letter. And picture it: a detective, a person from the community, and maybe a social worker, saying, “Hey, you’re on our,” they called it the heat list, “You’re on our heat list. We think you’re more likely to either shoot someone or get shot, and we would like you to essentially redirect your life path.” Maybe they offered services. Of course, they didn’t really fund the services, but the theory was that they could intervene early.
And so this initial list of about 1,300 names eventually, over time, morphed into a list of 400,000-plus names, where individuals in Chicago were each given a risk score, literally a numeric score from zero to 500-plus, about whether you were at risk or not at risk.
Now, these numbers were largely arbitrary in the sense that there wasn’t any scientifically validated idea of what the difference between a two and a 500 was. And so what it tended to be was basically a most wanted list, but it was actually such a big most wanted list, with so many people, that it eventually became unusable. And over time, after community members in Chicago protested against this kind of scoring of human beings as risks based on very little, they eventually shut it down. And when they reviewed it to ask, “Hey, did this work?” they couldn’t show that it was actually reducing violence.
There were so many people that it really was over-inclusive. And furthermore, the timing was sort of spread out, over 18 months. It didn’t give police any information about what they could possibly do to either help this person or stop the violence. It was just a scoring mechanism. And they recognized that this really wasn’t working and it had brought community uproar, as it should have. And they largely shut it down, recognizing this kind of predictive policing wasn’t helpful.
The final point is they also recognized that the way you got on the list was all through police contact. So if one way you would get a higher score was an arrest by police, and you had a system of policing like Chicago’s, which is rife with racial discrimination, racism, and the policing of poverty in certain communities, then the people who are getting connected to the legal system, and thus getting a higher risk score, are obviously going to be from poor communities and communities of color. And that’s who primarily ended up on the heat list: young men of color, who were given these risk scores without any real benefit, but in ways that were clearly going to shape how police treated them when they saw them on the streets or when they were looking for suspects for a crime. Police already had a ready-made list of people that they could target.
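For illustration only, here is a deliberately simplified sketch of the kind of point-based scoring described above. The factors mirror the ones Ferguson lists (arrests, convictions, victimization), but the weights are invented for this example and do not reflect Chicago’s actual model; the point is that every input is a record of police contact.

```python
# Simplified sketch of a person-based risk score built only from
# police-contact factors. The weights are hypothetical, invented for this
# example; Chicago's model was not this simple, but the structural issue
# is the same: more police contact means a higher score.

FACTOR_WEIGHTS = {  # hypothetical weights
    "arrest": 40,
    "violent_or_drug_arrest": 80,
    "conviction": 60,
    "shooting_victim": 100,
}


def risk_score(history):
    """history: dict mapping factor name to a count of recorded events."""
    score = sum(FACTOR_WEIGHTS.get(factor, 0) * count for factor, count in history.items())
    return min(score, 500)  # scores were described on a zero-to-500-plus scale


# Two people with similar lives but different amounts of police contact
# end up with very different scores.
heavily_policed = {"arrest": 3, "shooting_victim": 1}
rarely_policed = {"arrest": 0}
print(risk_score(heavily_policed))  # 220
print(risk_score(rarely_policed))   # 0
```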
KH: During the years that the “heat list” was in use, the city of Chicago refused to comply with community demands for transparency about how the system worked. This lack of transparency led local cops to refer to the “heat list” as the “crystal ball unit.”
The “heat list” caused considerable harm to its algorithmic targets. Because while the list lacked the magic of a crystal ball, it did give police cover to abuse community members who had been marked with an assumption of guilt. As one commander told journalist Matt Stroud, “If you end up on that list, there’s a reason you’re there.” It didn’t matter that the police could not substantiate those claims. The public was expected to accept that the “crystal ball unit” knew who was guilty, and that police excesses were therefore warranted against those it identified.
As Stroud reported in The Verge in 2021, “The heat list wasn’t particularly predictive … It wasn’t high-tech. Cops would just use the list as a way to target people.” In that piece, Stroud documented how a man named Robert McDaniel, whose police record was limited to marijuana and shooting dice, inexplicably found himself on the list. As a result, he was the target of constant police surveillance, and was ultimately shot twice by community members who McDaniel says believed that his excessive contact with the police indicated that he was an informant.
The list was also used by immigration officials when considering people’s applications for legal status, and by prosecutors considering whether or not to pursue sentencing enhancements. Even though the “heat list” supposedly didn’t differentiate between potential victims and potential perpetrators, it marked everyone it listed for victimization by the state.
But, as we will continue to see, this project did not lead Chicago or other cities to stop pursuing technological, data-driven solutions to crime.
AGF: So there’s a new rise of video analytics and pattern matching, which is the next evolution of predictive policing. It’s not predictive policing in the traditional sense that we have seen before, but it’s this idea that maybe we can use computers, and video analytics, and data-driven devices such as automated license plate readers, which are constantly feeding into a system, to predict future criminal activity.
So the way it would work is, let’s say you are worried about people in a park after dark. It’s actually a pretty easy thing for video analytics to identify. You’re not supposed to be in the park after dark. You can set up a system to say, “Well, if there’s movement in this park after dark, we’ll have an alert, and then we’ll send police to the area.” It is predictive analytics in the sense that you have coded that prediction beforehand. It obviously saves the police officer from having to hang out in the park, waiting for someone to be there. And it’s the kind of thing that’s being built into these real-time crime centers that we’re starting to see show up in major cities and even smaller cities across America.
And the idea is if you’re feeding in camera systems to this major center of policing, the predictive analytics will allow police to identify individuals. Of course, you could do it with individuals with facial recognition. Now to be fair, we haven’t rolled out that system anywhere in America where they’re actually using live facial recognition through their cameras. But again, that would be predictive analytics. We predict that this person will be involved in a crime. If we see him in front of our cameras, we’ll be able to identify where he is and track him across the system. Those kinds of technologies are just emerging, as we are getting video analytics with the capabilities to do this kind of pattern matching.
And again, sometimes it’s just patterns. So let’s say there’s a suspicion that a particular home is being used as a front for drug dealing. Video analytics with a camera can actually identify how many people go in and out of the home. So if there are 400 people that go in and out of a home every day, that’s kind of suspicious, and you can program the computer to say, “Something’s going on in this house. It’s unusual for a normal home.” Maybe this is the kind of thing that will build suspicions, such that police can direct their resources.
And this is the next iteration of predictive policing, because police have it in their heads that these kinds of technologies will enhance what they can do, will give them more power. And so they’ve been investing in it.
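As a rough illustration of how these “predictions” are really hand-coded rules applied to camera detections, here is a minimal sketch. The thresholds and event formats are assumptions made for the example, not any vendor’s actual product.

```python
from datetime import time

# Illustrative rule-based "video analytics" alerts: the prediction is a
# hand-coded threshold applied to detections from a camera feed. The
# curfew time and traffic threshold below are assumed values.

PARK_CLOSED_AFTER = time(22, 0)    # assumed park curfew
DAILY_VISIT_THRESHOLD = 400        # the "too many people in and out" rule


def park_alert(detection):
    """detection: dict like {'zone': 'park', 'time': datetime.time(...)}."""
    return detection["zone"] == "park" and detection["time"] >= PARK_CLOSED_AFTER


def house_traffic_alert(entries_today):
    """Flag an address once daily foot traffic crosses the coded threshold."""
    return entries_today > DAILY_VISIT_THRESHOLD


print(park_alert({"zone": "park", "time": time(23, 15)}))  # True: dispatch police
print(house_traffic_alert(412))                            # True: flag the address
```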
KH: In the UK, authorities are reportedly working to develop a predictive policing system powered by AI, that will supposedly allow cops to predict and prevent crimes before they occur. Using the kind of pattern analysis that Andrew described, a government source told journalist David Parsley that the technology could detect “all kinds of criminal activity including drug gangs, paedophile rings, terrorist activity and modern slave traders.” By pairing existing, failed technologies, such as person-based predictive tech, with machine learning and pattern analysis, UK officials believe they can both “prevent and detect crime.” The UK also hopes to gain access to banking information, so that the system can scan accounts for potential signs of criminal activity.
It’s worth noting that, in Australia, when AI was deployed to detect data and patterns indicative of fraud, the government wound up paying out billions in damages to people who were falsely accused by the system. The false allegations the system generated ruined people’s lives, causing some people to lose their homes, while some died by suicide.
So, what we’re talking about doing is taking technologies that have already proven faulty, inconclusive and harmful, and Frankensteining them together into something new, and granting that new monster tech more power and legitimacy.
Even if a technology does have the potential to sometimes reveal criminal activity, we have to ask ourselves whether the human costs are worth those successes for law enforcement, or whether those successes even represent the actual goal of such technologies.
What I see, in all of this tech, is an added license for police to behave the way they always want to behave: aggressively. Any technology that legitimizes an aggressive or enthusiastic police response ultimately threatens the public. We saw this, here in Chicago, in the case of ShotSpotter, which is a network of microphones, largely concentrated in Black and brown neighborhoods, for the purpose of detecting gunshots. The technology is highly unreliable, and cannot parse the difference between gunshots and fireworks, which is why the city turns it off on the Fourth of July. But when police show up to a potential shooting, they show up ready to fire their own guns, as we saw in the case of 13-year-old Adam Toledo. Adam was gunned down by a police officer in Chicago’s Little Village neighborhood in the spring of 2021, after putting his hands up, as an officer had demanded.
Adam’s case drew widespread attention, but disturbingly, high-profile police killings usually don’t lead to a reduction in the use of problematic technologies. In fact, such tragedies tend to usher in new waves of such technologies. Just as the professionalization of police was regarded as the answer to police violence, emerging technologies now offer the false promise of safer policing.
AGF: So we have seen a repeated pattern in America, where police have misused the power granted to them, have harmed communities, and the community has reacted to that harm in a way that calls into doubt the legitimacy and validity of police power. And the response to that valid criticism, in many times, is to come up with a technological fix that seemingly is removing the human police officer from the calculation.
And so in the first iteration of police protests in the last decade, we had a concern that the problem here was human police officers using their discretion too broadly, and data, because it was seen as objective (and of course it’s not objective, data is just us in binary code), was seen as a way to mollify the concerns of a community saying that police are abusing their power. “Don’t worry, we’re just following the data. Don’t worry. The reason we’re over-policing this neighborhood is because the data tells us to do this. Not us. It’s the data.” And it sort of creates this quasi objective-sounding response to the ordinary problems of policing.
And we see it almost in a cyclical fashion, where sometimes the outrage of a community gets to a place where the chief needs an answer. The police chief needs an answer, needs to respond to this outrage. And one of the ways police have responded is to say, “Don’t worry, we have this new technological fix.”
One of the best examples was after the stop-and-frisk program in New York City. The NYPD was unconstitutionally using the stop-and-frisk practice, stopping young Black and brown men over and over again in violation of the Fourth Amendment; a federal court held that. Now, when Chief Bratton came in the second time, his response to the media, who were asking, “Hey chief, what are you going to do about this problem?” was essentially, “Don’t worry, we’ve got a new solution. It’s called predictive policing. It’ll be fine.”
And it’s again, this response to the idea of problems in policing with a technological fix because it sounds less human, it sounds less fraught with human problems. But of course the algorithms are based on past crime data. The algorithms are being interpreted by human police officers. The humanity of citizens and police hasn’t changed.
And so while it sounds like a promise that it’s improving, in many cases, it’s just either reifying the existing structural inequities or exacerbating them, because everyone is blindly accepting the data as objective when of course it’s not objective. If you’re a police officer and you’re told, “Hey, the algorithm told me to go to this neighborhood,” and you’re like, “I don’t really want to go to this neighborhood, but the algorithm told me,” you might do it because you’re a police officer, you’re not a data scientist. And you might follow what the algorithm says, even if the algorithm itself is based on racially coded, racially biased data, because policing has been based on racially coded, racially biased practices. And so the data is just reflecting that inequity and that bias. But you now don’t even have the capability to be able to question it, because you’re the officer told to follow the algorithm on your computer as you drive around the neighborhood. And that of course only reifies the structural problems of policing.
KH: Another concept in Andrew’s book that struck a chord with me was the idea of police as data collectors. As someone who’s been arrested a few times, I am accustomed to the police acting as data collectors. They take fingerprints and photos, record our addresses, and often ask irrelevant, personal questions. But in 2016, I came across a news story that expanded my understanding of police as data collectors. As Lauren Kirchner reported in ProPublica, cities in Florida, Connecticut, Pennsylvania and North Carolina were establishing their own private DNA databases that enabled fast, cheap testing services. In order to build out their databases, police were collecting DNA from people who were not involved with, or even suspected of a crime. In a practice some had dubbed “stop and spit,” police were asking people who they approached and questioned at random to submit DNA for the database. One young man who complied told Kirchner that he did so because he believed he had to.
But while DNA is a physical form of potential evidence that may require a cheek swab or a blood sample, data about our lives and communities that may be weaponized by police is also being collected in less obvious ways.
AGF: So police have become data collectors in two ways. In some pilot projects, like in Los Angeles, the police were literally tasked to become data collectors in that they were told to go fill out these things called field interview cards, which were literally what you would picture that might mean. They’d go out to a neighborhood and they’d see people. They’d have little paper cards, and they would go get information about what car someone was driving, whether someone had a new tattoo, who’s dating whom, what’s happening on the street.
And those cards would get filled out and then filed back at the station house, where they’d be inputted into a data system that was being run by Palantir, where they could have essentially an investigatory data set of who’s around in this community, who’s engaging with whom, what groups are in what neighborhoods, and such.
And so that’s one of the ways Los Angeles sort of set up its system, and they were doing this through a person-based predictive policing program called Laser. The idea was that they were going to target the problem people in the community like laser surgery. This is offensive, but they were going to remove the tumors from society like laser surgery. That was literally in the promo for the system, as offensive as that is.
But the idea was that they would find these individuals who again would be their high-risk people, and they would send police officers out to go collect data about them. Why? Because if there was a crime, and the police wanted to figure out, well, who’s running that corner? Who’s involved in activity in that area? If they had pretty real-time information that is getting collected about individuals, it made their investigation that much easier. Essentially, the Palantir system is what’s called a social network analysis system. Essentially, it takes pieces of information and connects the dots in different ways. So a phone number could be connected to different people who called the phone number. An address can be connected to all the different people who live at that address. And so the social network analysis was able to sort of visualize groups and patterns of activity, in this case, criminal activity in Los Angeles. But it only works of course if you have fresh data. And so they literally turned human police officers into data collectors so that the system would have fresh data in order to run its analysis.
Nowadays, the data collection’s happening almost behind the scenes, because we’ve sort of developed a digital police officer who’s driving a car with GPS, has a body camera with GPS on it, might even have a smartphone with data running there, and has cameras that are able to detect objects and identify people involved in situations. The data collection is actually happening incidentally to the officer writing it down. And so the systems are collecting where officers are, who they’re in contact with, where the crime was happening, what cars were in the area, what phones were in the area. All of these things are now being collected as a matter of course.
And central to each of these ways of organizing the data collected in a community is a police officer who is either collecting it through video or digital means, or literally just writing it down in their notebook and uploading it later, such that police departments have a better sense of who is living in their area, who they think is involved in criminal activity, who’s friends with whom, who’s angry at whom, who’s had certain problems with other people. And they’re collecting this in a way to sort of augment their ability to control communities, see communities, understand problems in communities, and investigate crimes when they happen in those communities.
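To illustrate the “connect the dots” analysis described above, here is a toy sketch that links people who share a phone number or an address in field interview records. It is not Palantir’s software, only a simplified illustration of why fresh field-card data matters to a system like this.

```python
from collections import defaultdict
from itertools import combinations

# Toy social network analysis: shared phone numbers or addresses from
# field interview cards become links between people. Names and fields
# below are invented for the example.


def build_links(records):
    """records: list of dicts like
    {'person': 'A', 'phone': '555-0101', 'address': '12 Main St'}."""
    by_attribute = defaultdict(set)
    for record in records:
        for key, value in record.items():
            if key != "person" and value:
                by_attribute[(key, value)].add(record["person"])

    links = defaultdict(set)
    for people in by_attribute.values():
        for a, b in combinations(sorted(people), 2):
            links[a].add(b)
            links[b].add(a)
    return links


cards = [
    {"person": "A", "phone": "555-0101", "address": "12 Main St"},
    {"person": "B", "phone": "555-0101", "address": "98 Oak Ave"},
    {"person": "C", "phone": None, "address": "12 Main St"},
]
print(dict(build_links(cards)))  # A links to B (shared phone) and to C (shared address)
```

The graph is only as good as its freshest inputs, which is why, as described above, officers were sent out specifically to keep feeding it new field interview data.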
KH: Another use of data for carceral purposes that some of you may be familiar with comes in the form of risk assessments.
AGF: Risk assessments in court are, I would say, a different problem set than predictive policing and database policing, even though they have a core similarity in that they’re taking past criminal data and using it to predict some future action.
The use of risk assessments is really seen in two places. The first is for pretrial release. For example, there may be an algorithmic assessment about whether someone who’s been arrested for a crime should be released to the community or held in jail before their next hearing or their court case.
And separately and distinctly, there are risk assessments in sentencing. So after someone has been prosecuted, after they’ve been convicted, there’s an algorithm that might determine whether or not they’re a high risk or a low risk for recidivism, for committing another crime upon release.
Both data sets and both algorithms have a core similarity in the fact that they are taking factors about a human being, a community, their relationships, jobs, housing, education, prior connections with the criminal justice system, and using that information to extrapolate whether or not they are a high risk or a lower risk based on what other people have done. Essentially saying, “Well, people who have these risk factors tend to be more at risk, and people who have these risk factors tend to be less at risk.”
Of course, the problem is those risk factors are fraught with all of the economic and social inequalities in America, such that if things like higher education, or housing, or a job are reasons why you might be lower risk for release, then if you have been fortunate enough, through your community or just the circumstances of your life, to have those things stable, the algorithm will see you as less of a risk, even though you might be far more of a risk than someone who just wasn’t born into those circumstances.
Generally speaking, the risk assessments do not use race. But of course, the proxies for racial inequality and structural inequality in America are pretty hard to avoid; they are part and parcel of who gets arrested. And because the risk assessments were normed on the existing legal system, many of those structural inequalities are within the system. Generally speaking, the social and economic connections that are used to judge recidivism can’t be untangled from all of the inequalities in society. So many times, what you are predicting is social and economic poverty or inequality as much as the individual’s actual decision about whether or not they would commit a crime again in the future.
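To illustrate the proxy problem described above, here is a hedged sketch of a recidivism-style score in which stable housing, employment, and education lower the output. The factors, weights, and cutoff are invented for the example; no real instrument uses these numbers. The structural point is that identical criminal histories can yield different labels depending on economic circumstance.

```python
# Hypothetical recidivism-style score. Weights and cutoff are invented for
# illustration; the point is that socioeconomic factors act as proxies, so
# the output tracks circumstance as much as individual behavior.

HYPOTHETICAL_WEIGHTS = {
    "prior_arrests": 0.6,        # per recorded arrest
    "prior_convictions": 0.8,    # per conviction
    "stable_housing": -1.0,
    "employed": -1.0,
    "some_higher_education": -0.5,
}


def recidivism_score(person):
    """person: dict of factor -> count (priors) or 1/0 (circumstances)."""
    return sum(weight * person.get(factor, 0) for factor, weight in HYPOTHETICAL_WEIGHTS.items())


def risk_label(score, cutoff=1.0):
    return "high risk" if score >= cutoff else "low risk"


# Identical arrest histories, different economic circumstances, different labels.
housed_and_employed = {"prior_arrests": 2, "stable_housing": 1, "employed": 1}
neither = {"prior_arrests": 2}
print(risk_label(recidivism_score(housed_and_employed)))  # low risk (score about -0.8)
print(risk_label(recidivism_score(neither)))              # high risk (score about 1.2)
```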
KH: We have been focused on technology that is specifically geared toward policing in this episode, but as Andrew discusses in his book, the private sector is full of data collection that can easily be used against us. As Andrew writes, “A complete big data convergence between private consumer data collection and public law enforcement collection has not yet occurred, but the lines are blurry and growing fainter.”
AGF: So we are currently living in a moment where we’ve evolved from a data-driven policing system that’s in many ways external, with cameras, and predictive algorithms, and police data, to also having a parallel investigatory system built from the kinds of self-surveillance data sets that we are creating about ourselves.
And so in fact, I’m currently writing my next book, which is basically about the idea that everything we’re doing in the digital world is evidence and can be used against us. So if you picture a modern affluent home, they have an Echo device in their kitchen to ask questions of. There’s a Ring doorbell viewing who’s coming in and out. There are Amazon Neighbors connections to figure out who’s doing something suspicious in the neighborhood. Amazon also of course knows what you buy and what you read. There are delivery trucks with video cameras. And we who have subscribed to Amazon and Amazon Prime have essentially given a single company information that we would probably never knowingly give to a government.
I mean, if you can imagine. Imagine somebody says, “Hey, I’d like to put a wiretap in your kitchen so we can listen to what you say. We’d like to put a doorbell with a camera on your door so we can see who you’re associating with. We’d like to develop a network of neighborhood associations who will snitch on each other if anything suspicious should happen. Also, we’d like to know what you’re reading. We’d like to know what you’re buying at the supermarket, and at pharmacies, and everything else. And by the way, we’ve equipped these trucks that are going to drive around society and videotape everything that’s going on.” That’s dystopia, but that’s also what Amazon is selling to us and we are buying.
And my point is that this self-surveillance is at best a warrant away from being revealed. Any police officer who thinks you did something wrong in your home, or outside your door, or in your kitchen, at best, all they have to do is go get a warrant. It’s arguable they even need a warrant. But assuming they need a warrant, if they think you committed a crime and there’s a judge who will sign a warrant, they can get a warrant, which means all that information is available to police.
And you add to that the data police are already collecting with higher functioning video cameras, facial recognition, surveillance planes in Baltimore and other places. And you’re developing this two-tiered system that’s working together, where police are collecting external data. We are creating internal data that’s now available to police with a warrant. And we are essentially creating our own, I call it “Dystopia Prime,” where we are selling ourselves and our desire for surveillance and safety to companies, not really realizing that Amazon doesn’t have a choice when they get the warrant. When they get asked by a law enforcement officer, through a judge, to hand over the information about the Echo in your kitchen, they have to do it.
And we haven’t fully processed how that information… And it’s not just Amazon. We could talk about Google and the fact that your phone knows exactly where you are, what you’re doing, what you search for. If you have an iPhone or a Google phone, that information has revealed, or would reveal, every place you went and, by inference, what you were doing there. It probably would know the things that you’re asking about through keyword searches and other kinds of situations. And it is all available with a warrant. So if there’s a crime, and police assume that people who commit crimes have their cell phones, all they have to do is go to a judge. There’s actually a three-step process with Google and geolocation, but they follow the three-step process, they get the data, and your smartphone becomes a witness against you. I don’t think we have fully processed how that will impact people who want reproductive freedom, people who want to live their lives with children who may be trans or anything else, people who would like to protest the government, including police brutality.
All of those people are creating a digital trail that can be used against them if the government chooses to do so. And I think that we have seen in America today that many governments and many states are willing to criminalize activities based on their own political beliefs of what they think is right or what their laws say in ways that we may not be comfortable with. And I don’t think we’ve fully woken up to the fact that our data trails are going to be used against us in ways that are going to change our privacy, are obviously going to target communities of color, and dissidents, people who want to protest the government, and really will impact issues of abortion freedom, trans youth, and being able to live your life the way you want to do it. And I think that we really haven’t seen the full danger of this digital, self-surveillance world that we’ve created and that is available to law enforcement.
KH: Circling back to SoundThinking’s acquisition of Geolitica, it’s very important that we look at the ways that policing technologies are being consolidated.
AGF: The consolidation of police technologies and providers of police surveillance is evolving in a way that I think people haven’t fully paid attention to. SoundThinking, which used to be called ShotSpotter, is a company that is now billing itself as a platform for policing. Essentially, you would hire SoundThinking to run the gunshot detection system that is ShotSpotter, but they also bought the technology behind HunchLab, which was an early predictive policing technology, and the technology behind PredPol, which was one of the first predictive policing technologies, in order to offer what they call a resource router, a resource allocation system, essentially to put the police at the right place at the right time based on past crime data. They also offer a database searching service, so police can use it for investigative leads. And essentially they’re offering police departments a one-stop shop for all of their data.
And we’re seeing companies like SoundThinking in competition with companies like Axon. Axon, better known not just for its Tasers but also for its body cameras, recognized that if you essentially give away video cameras, body cameras, to police, someone has to keep all of that footage. Someone has to be able to sort through that footage. Someone can then sell add-on services that use video analytics and AI to parse through all that data and prepare it for court.
And so Axon is also battling to become the platform for policing, in that if they have all the data, they can then charge the service fees for the backend services. And you’re starting to see this with big companies and small companies, who recognize that if they can get the contract for policing and offer data services as part of that, they can offer a whole series of data-driven policing enhancements, depending on what the department wants, which of course gives them the full contract.
And the reason why this is a change is that it used to be that police were buying simple add-ons to their system. PredPol, for example, was simply a one-off contract. It was one particular thing. It wasn’t really controlling all the data systems. ShotSpotter, when it was just ShotSpotter, would just sell audio sensor devices, gunshot detection devices, to identify gunshots in a community. These were one-off technologies, as opposed to offering the platform.
The reason why that matters is that if you’re the platform, the Facebook, the Google, the Apple, you have incredible control over what happens to that data. You get to essentially monopolize the data and control it both as a contract, but also as a way of controlling the information.
And we just haven’t seen public safety data and public safety systems privatized in this way that we’re seeing now. And I think people haven’t paid attention to how the platformization of policing is changing power, is changing who controls the power. And oddly, since many of these companies are public companies, we’re also now having to consider the whims of the stock market in terms of what used to be a pure public safety need.
And so I think there are lots of questions that communities need to ask about whether we’re okay with giving any company the control over the policing platform, because in many ways, that company is going to be a more sophisticated actor than the police, right? If you’re a police chief today, you started out in a world of mimeograph papers. You’re definitely not a data scientist, even if you’re smart. And you can’t have the conversations or even the analytical ability to judge what this company is doing with its rather sophisticated data analysts and computer technology that exists.
And so it’s disempowering police, and empowering police technology companies, and completely leaving out the community and the people who are going to be impacted by both in a way where we just really haven’t paid attention to how this is changing power in America today.
KH: Axon, which produces body cameras, regularly lobbies for police reforms that would generate more contracts. So, when police violence creates a social crisis, Axon lobbies for cops to wear its cameras, as a solution, and now wants to build a data empire with all of the surveillance footage the cameras acquire. Meanwhile, organizers are fighting efforts by Fraternal Order of Police organizations to grant police bonuses for using their body cameras. So, what we’re really looking at, with these technological reforms, is the commodification of police killings, and the creation of new hubs of technological and financial power – all at our expense. Meanwhile, a record number of people were killed by police in 2023. U.S. police killed 1,329 people last year, which represents nearly a 19 percent increase over the last 11 years.
Disturbingly, these business models, based on faulty tech and false promises to the public, are working entirely too well. In 2022, SoundThinking also acquired COPLINK X, which is a “law enforcement information platform capable of accessing over 1.3 billion records using natural language and traditional search terms.” That product is now called CrimeTracer. CrimeTracer and ShotSpotter are two of four products that SoundThinking calls its “SafetySmart Platform.” On its website, SoundThinking presents the “suite” of products as the solution for “understaffed” police departments, stating:
Police departments across the country are in great need of force-multiplying technologies that help keep communities safe and improve quality of life. CrimeTracer, and by extension the entire SafetySmart Platform, are geared toward every stage of the law enforcement lifecycle to precisely address this.
Chicago Mayor Brandon Johnson campaigned on a promise to cancel Chicago’s ShotSpotter contract, but in December of 2023, the CEO of ShotSpotter announced on an earnings call that Chicago was not only keeping ShotSpotter, but also running a pilot program with CrimeTracer.
In an upcoming episode, we will be hearing from activists who have been working to end Chicago’s ShotSpotter contract – which expires in mid-February. ShotSpotter is presently being used in 120 US cities, and in some locations internationally, including Palestine. After hearing a bit more about these technologies today, I hope you’re all eager to support the organizers opposing this tech, and to learn more.
If you are interested in learning more about how we can push back against mass surveillance and data-driven technologies, there are a lot of resources out there, and Andrew had a few suggestions.
AGF: I think there are definitely books to read. Ruha Benjamin has written a wonderful book called Race After Technology: Abolitionist Tools for the New Jim Code. Kashmir Hill has a new book on facial recognition technologies, which is obviously a new threat to privacy, and civil rights, and racial justice. Both are wonderful, compelling reads, but they’re also revealing the underlying power structures that are going on here, where in many ways, police are being reactive to the technological innovations and less powerful in some ways behind it, because they’re always reacting. And communities are even farther behind the curve in being able to react to it. There are obviously some community groups in Los Angeles and Chicago that have been wonderfully vocal and wonderfully smart to educate the people. But if you think about the 18,000 different law enforcement entities in America and all the different areas in America, that collective group of people who are pushing back, and who are sophisticated and confident enough in the technology to recognize what’s wrong, is really small. And the fragmentation of policing in America leads to an inability to really challenge how the technology is growing and expanding, in communities that just don’t have that sort of community movement that is pushing back.
And so I think that we need to be paying attention to how the technology is changing policing and also how the technology is essentially disempowering community groups, and figuring out ways to empower those community groups to be able to push back, to be able to ask those hard questions.
Because the truth is, every time community groups have asked questions about policing, the police haven’t had good answers. And when really pushed, they had to fold to recognize that maybe this technology wasn’t worth the money, wasn’t doing what it was said to do. And while sure, it sounded good in a soundbite, it sounded good to the city council when you said you had to do something to stop crime, in reality, it wasn’t doing what it said, and may also have caused real harm in those communities. But figuring out a way to empower communities, to ask those hard questions, to push back is a real challenge because of the fragmented nature of policing in America.
KH: I want to thank Andrew Guthrie Ferguson for joining us today. Don’t forget to check out Andrew’s book, The Rise of Big Data Policing: Surveillance, Race, and the Future of Law Enforcement. It is an excellent resource. I hope folks who are processing everything we’ve discussed today will consider how the trends we have talked about here fit within the traditional model of police reform. As Naomi Murakawa writes, “The more police brutalize and kill, the greater their budgets for training, hiring, and hardware.” That’s how police reform operates. In the case of predictive technologies and AI, police are further legitimized in their violence because algorithms create perceived threats for them to act upon. That legitimization is also part of the scam that is police reform, which is why we will never be served by these approaches. That’s why it’s so important that we engage with campaigns being waged by abolitionist organizers around the country to challenge high tech policing. We will talk more about how to do that soon.
I also want to thank our listeners for joining us today, and remember, our best defense against cynicism is to do good, and to remember that the good we do matters. Until next time, I’ll see you in the streets.
Show Notes
- Don’t forget to check out Andrew’s book The Rise of Big Data Policing: Surveillance, Race, and the Future of Law Enforcement.
- If you would like to hear from Kelly between episodes, you can sign up for her newsletter.
- To learn more about the ShotSpotter campaign, you can follow them on Twitter and Instagram.
Referenced
- No More Police: A Case for Abolition by Mariame Kaba and Andrea J. Ritchie
- Abolition For The People: The Movement For A Future Without Policing & Prisons, edited by Colin Kaepernick
- Race After Technology: Abolitionist Tools for the New Jim Code by Ruha Benjamin
- “Sam Altman’s self-serving vision of the future” by Paris Marx
- “Chicago shouldn’t renew its ShotSpotter contract” by Robert Vargas
- “Heat Listed” by Matt Stroud
- “Police to use ‘Minority Report’ technology to predict crimes before they happen” by David Parsley
- “DNA Dragnet: In Some Cities, Police Go From Stop-and-Frisk to Stop-and-Spit” by Lauren Kirchner
- “Predictive Policing Software Terrible At Predicting Crimes” by Aaron Sankin and Surya Mattu
- “Police departments sued over predictive policing programs” by Dave Collins
- “A pioneer in predictive policing is starting a troubling new project” by Ali Winston and Ingrid Burrington
- “CPD Reported Hundreds of Missed Shootings to ShotSpotter Last Year” by Jim Daley and Max Blaisdell