In the past 15 years, policing has expanded its reach, largely through an array of technologies that record and store our personal details and daily activities. Using algorithms and statistical models, authorities repurpose that data to meet the emerging demands of the criminal legal and immigration systems. From predictive policing to GPS-enabled ankle monitors to gunshot trackers to massive interlinked databases, police are extending their capacity to track and control. But in recent years, communities, researchers and activists have begun to build a critique of these technologies, one that may ultimately take us well beyond liberal notions of privacy to address fundamental questions of political power and freedom.
Predictive Policing
One key target has been predictive policing. Implemented as early as 2008, predictive policing gathers data on incidents of crime and people who commit crime to predict future events and trends. Over the years, various versions of this policing technology, such as LASER or Hot Spot, have proven problematic. The most recent exposé of this widely used technology surfaced in an October 2023 piece by Aaron Sankin and Surya Mattu, published jointly by The Markup and Wired. The authors revealed that the technology of the widely contracted company Geolitica (formerly PredPol) had a success rate of less than 1 percent in its mission of predicting the time and place of a crime. Drawing on more than 23,000 predictions covering 360 locations in Plainfield, New Jersey, they found a success rate of 0.6 percent for burglaries and 0.1 percent for assaults and robberies. Part of the reason for these disastrous results is a statistical model that casts a wide net, generating a large number of predictions in the hope of capturing at least a few crime incidents, a little like buying 1,000 lottery tickets in the hope of holding at least one winner, regardless of how much is lost along the way.
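The arithmetic of the lottery ticket analogy is easy to make concrete. The sketch below, in Python, uses the success rates reported by The Markup and Wired; the prediction count is rounded from the reported "more than 23,000," and the chance baseline is a purely illustrative assumption:

```python
# Back-of-the-envelope sketch of the "lottery ticket" dynamic: a model
# that issues enough predictions registers some "hits" even without
# real predictive skill. Success rates are the reported figures; the
# prediction count is rounded, and the chance baseline is assumed.

predictions = 23_000            # "more than 23,000" predictions reported
burglary_rate = 0.006           # 0.6 percent reported success rate
robbery_rate = 0.001            # 0.1 percent for assaults and robberies

print(f"Burglary hits:        {predictions * burglary_rate:.0f} of {predictions}")
print(f"Robbery/assault hits: {predictions * robbery_rate:.0f} of {predictions}")

# If a crime actually occurs in a given place-and-time window with
# probability p, a model with zero skill still "succeeds" on roughly
# predictions * p of its windows -- buying tickets, not forecasting.
p_chance = 0.005                # assumed illustrative base rate
print(f"Hits expected from chance alone: {predictions * p_chance:.0f}")
```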
Predictive policing algorithms also incorporate racial bias, often directing law enforcement to communities already rife with police, surveillance and high arrest rates. The Electronic Frontier Foundation describes predictive policing as a “self-fulfilling prophecy,” meaning that if authorities direct more police to an area or at a targeted group, police will make more arrests there regardless of the presence of crime.
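The self-fulfilling prophecy is easy to reproduce in a toy simulation. In the sketch below (assumed conditions, not a model of any real department), two neighborhoods have identical underlying incident rates, but incidents only become arrests where officers are present, and tomorrow's patrols follow yesterday's arrest counts; the patrol shares typically drift far from even and stay there:

```python
import random

random.seed(0)

# Two neighborhoods with the SAME true incident rate. Incidents are
# only recorded where officers are present, and tomorrow's patrol
# shares follow yesterday's arrest totals -- a feedback loop.
TRUE_RATE = 0.3                 # assumed equal daily incident probability
patrol_share = [0.5, 0.5]       # start with even patrols
arrests = [1, 1]                # small seed counts

for day in range(1000):
    for n in (0, 1):
        # An incident only becomes an arrest if police are there to see it.
        if random.random() < TRUE_RATE * patrol_share[n] * 2:
            arrests[n] += 1
    total = sum(arrests)
    patrol_share = [arrests[0] / total, arrests[1] / total]

# Despite identical true rates, the shares typically end up lopsided.
print(f"Arrests: {arrests}, patrol shares: "
      f"{patrol_share[0]:.2f} / {patrol_share[1]:.2f}")
```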
The shortcomings of predictive policing led Plainfield authorities to follow in the footsteps of Los Angeles and other former clients of Geolitica and cancel their contract. Los Angeles’s cancellation grew out of a campaign led by the Stop LAPD Spying Coalition, whose activists revealed the racist bias in the technology’s predictions and the false logic of the company’s claim that “criminal offenders are essentially hunter-gatherers; they forage for opportunities to commit crimes.”
GPS Monitoring
Studies of GPS-enabled electronic monitors reveal patterns of inaccuracy. In 2023, a data scrape led by freelance data journalist Matt Chapman uncovered gross inaccuracies in the pretrial GPS monitoring program in Cook County, Illinois — the largest in the nation. Chapman found the devices generated thousands of false alerts, often leading to police raids and baseless arrests. A separate 2021 Cook County study concluded that 80 percent of the alarms for violation of electronic monitoring rules were “false positives.” These false alerts can have serious consequences. One respondent described the trauma of receiving six texts per day over a period of 18 months that delivered false alerts about alleged electronic monitoring violations. One of those false alerts led to a two-day stint in jail. His fate was not unique. Truthout has talked with dozens of people across the country who have been wrongly sent back to prison after their “tracking” device reported that they were located several blocks, even several miles, away from where they actually were. One Chicago woman told us that a false alert led to her arrest. She subsequently fell in her jail cell, fractured her jaw and needed surgery when she was released.
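The scale of false alerts is less surprising once base rates are considered. As a rough sketch, with purely assumed numbers rather than Cook County's, a device that errs on only a tiny fraction of its frequent location readings can still generate alarms that are overwhelmingly false, simply because genuine violations are rare:

```python
# Minimal sketch of the base-rate problem in continuous monitoring:
# with frequent pings and rare true violations, false alarms dominate
# even if each reading is fairly accurate. All numbers here are
# illustrative assumptions, not Cook County figures.

pings_per_day = 1440              # assumed: one location reading per minute
false_alarm_rate = 0.0005         # assumed: 0.05% of readings err badly
true_violations_per_day = 0.01    # assumed: a real violation every 100 days

false_alarms = pings_per_day * false_alarm_rate
share_false = false_alarms / (false_alarms + true_violations_per_day)
print(f"Expected false alarms per day: {false_alarms:.2f}")
print(f"Share of all alarms that are false: {share_false:.0%}")  # ~99%
```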
Gunshot Trackers
SoundThinking (formerly ShotSpotter) is an acoustic detection technology that claims to identify and locate the sound of gunfire in urban areas. But studies in several of the more than 100 cities where SoundThinking has contracts paint an alarming picture of inaccuracy. Despite complaints that false alerts disproportionately target Black and Brown neighborhoods, most decision-makers maintain their infatuation with the product. For its part, SoundThinking remains content with business as usual. In over 20 years of operation, the company has not produced a single scientific study testing how reliably its technology can tell the difference between the sound of gunfire and other loud noises. Instead, the company aggressively defends the secrecy of its product design. When a SoundThinking alert in Chicago led to the arrest of an individual, the company refused to comply with a court order to produce evidence of how it assesses gunshot sounds, choosing instead to accept a contempt of court charge. Chicago Mayor Brandon Johnson has pledged not to renew the city’s contract with SoundThinking in 2024. City leaders in Dayton, Atlanta and Seattle have taken similar steps, recently blocking or ending SoundThinking contracts.
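Nothing exotic would be required to run such a study. A test of the kind the company has never published could be as simple as running the detector over labeled recordings and reporting precision and recall. Here is a hypothetical sketch; the `detector` interface and the clip labels are assumptions for illustration, not the company's actual system:

```python
# Sketch of a basic discrimination test: run a detector over labeled
# audio clips (gunfire vs. fireworks, backfires, etc.) and report
# precision and recall. `detector` is a hypothetical stand-in.

from collections import Counter

def evaluate(detector, labeled_clips):
    """labeled_clips: iterable of (audio_clip, true_label) pairs."""
    counts = Counter()
    for clip, truth in labeled_clips:
        predicted = detector(clip)            # "gunshot" or "other"
        counts[(truth, predicted)] += 1
    tp = counts[("gunshot", "gunshot")]
    fp = counts[("other", "gunshot")]
    fn = counts[("gunshot", "other")]
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

def naive(clip):
    # Deliberately naive detector that flags every loud clip as gunfire.
    return "gunshot"

clips = [("loud_bang_1", "gunshot"), ("fireworks_1", "other"),
         ("car_backfire", "other"), ("loud_bang_2", "gunshot")]
print(evaluate(naive, clips))                 # precision 0.5, recall 1.0
```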
Other Technologies
Racial bias has surfaced in other technologies as well, most notably facial recognition apps, which have led to the misidentification, and in some cases arrest, of at least six Black men in cities including Detroit, New Orleans and Baltimore. Moreover, a 2023 New Orleans study found that the technology fell short of proponents’ claims that it could solve crime.
Risk assessment tools, which build algorithms from data generated by racist criminal legal institutions and social service agencies, have also come under fire from scholars and researchers who argue that they wrongly classify people’s suitability for pretrial release or the appropriateness of a sentence.
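A stripped-down example shows how such a tool can launder bias even when race is never an input. The weights and features below are illustrative assumptions, not those of any real instrument; the point is that a feature like "prior arrests" imports the geography of policing directly into the score:

```python
# Hypothetical risk score: if "prior arrests" is an input, and arrests
# reflect where police patrol rather than only behavior, the score
# reproduces the policing pattern. Weights are purely illustrative.

def risk_score(prior_arrests, age, failed_appearances):
    return 2.0 * prior_arrests - 0.05 * age + 1.5 * failed_appearances

# Two people with identical conduct; one lives in a heavily policed
# neighborhood and has accumulated more arrests for the same behavior.
print(risk_score(prior_arrests=4, age=25, failed_appearances=0))  # 6.75
print(risk_score(prior_arrests=1, age=25, failed_appearances=0))  # 0.75
```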
Less Regulated Than Toasters
Part of the explanation for these inaccuracies lies in the failure to adequately test these technologies before they are marketed. While toaster manufacturers must conform to stringent regulations and subject their products to rigorous testing, in the high-stakes world of policing, producers often get a free pass.
At the national level, the only technical benchmark for electronic ankle monitors is a voluntary set of standards produced in 2016 by the National Institute of Justice, which requires geolocation accuracy within 98 feet. Most residences, especially urban apartments, do not extend anywhere near 98 feet from the monitor’s base unit, so an error of that size could register a person sitting at home as violating household restrictions.
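A quick simulation makes the problem concrete. Assuming a home geofence 40 feet in radius (a hypothetical figure) and readings that can err by up to the NIJ-permitted 98 feet, a person who never leaves the house still produces a steady stream of out-of-bounds readings:

```python
import math, random

random.seed(1)

# Sketch: a monitor whose readings can err by up to 98 feet (the 2016
# NIJ voluntary accuracy standard), checked against a house-arrest
# geofence. The geofence radius is an illustrative assumption.

GEOFENCE_RADIUS_FT = 40       # assumed: a home roughly 80 feet across
MAX_ERROR_FT = 98             # NIJ-permitted geolocation error

violations = 0
trials = 10_000
for _ in range(trials):
    # True position: inside the home, somewhere near its center.
    true_r = random.uniform(0, GEOFENCE_RADIUS_FT / 2)
    # Reported position: true position plus error in a random direction.
    err = random.uniform(0, MAX_ERROR_FT)
    angle = random.uniform(0, 2 * math.pi)
    reported_r = math.hypot(true_r + err * math.cos(angle),
                            err * math.sin(angle))
    if reported_r > GEOFENCE_RADIUS_FT:
        violations += 1

# A person who never left home is flagged as out of bounds this often:
print(f"False 'left home' readings: {violations / trials:.0%}")
```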
Meanwhile, Black computer scientist Joy Buolamwini used research on her own face to expose what she labeled the “coded gaze.” The coded gaze refers to the database of faces used to build facial recognition models. In Buolamwini’s assessment, that database is disproportionately white and male, making the software far better at recognizing white, male faces than anyone else’s. In fact, Buolamwini, a dark-skinned Black woman, found that the technology could not even detect her face, apparently because it fell outside the norm of the training data.
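The mechanism is easy to reproduce in miniature. In this toy sketch, two-dimensional points stand in for faces and every number is an assumption chosen purely for illustration; a nearest-neighbor matcher trained on 95 examples of one group and 5 of another performs visibly worse on the underrepresented group:

```python
import random

random.seed(2)

# Toy illustration of the "coded gaze": a matcher trained mostly on one
# group performs worse on the group it rarely saw. Points stand in for
# faces; this is not a model of any real facial recognition system.

def make_samples(center, n):
    return [(random.gauss(center[0], 1.0), random.gauss(center[1], 1.0))
            for _ in range(n)]

CENTERS = {"A": (0.0, 0.0), "B": (2.0, 2.0)}

# Skewed training set: 95 samples from group A, only 5 from group B.
train = [(p, "A") for p in make_samples(CENTERS["A"], 95)] + \
        [(p, "B") for p in make_samples(CENTERS["B"], 5)]

def predict(q):
    # Label of the nearest training point (1-nearest-neighbor matcher).
    return min(train,
               key=lambda t: (t[0][0] - q[0]) ** 2 + (t[0][1] - q[1]) ** 2)[1]

for group, center in CENTERS.items():
    tests = make_samples(center, 500)
    accuracy = sum(predict(q) == group for q in tests) / len(tests)
    print(f"Accuracy on group {group}: {accuracy:.0%}")  # B typically lower
```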
Rather than being subjected to rigorous pre-market testing protocols, these technologies, as tech writer Dhruv Mehrotra told Truthout, “are tested in the field.” Dillon Reisman, founder of the American Civil Liberties Union of New Jersey’s Automated Injustice Project, told The Markup that all over New Jersey, companies are selling “unproven, untested tools that promise to solve all of law enforcement’s needs, and, in the end, all they do is worsen the inequalities of policing and for no benefit to public safety.”
Instead of providing test results, police technology companies primarily rely on promoting individual success stories or on simplistically attributing reductions in crime and lives saved to the presence of their technologies, without considering other factors. Dayton, Ohio-based human rights activist Julio Mateo told Truthout that SoundThinking tries “to play up the situations in which these technologies help and try to make invisible the times when people are searched and traumatized.”
Companies and decision-makers seem not to consider the opportunity costs or ancillary impacts of using these devices. For example, in voting for the reinstatement of SoundThinking in New Orleans after a two-year ban, Black city councilor Eugene Green proclaimed, “If we have it for 10 years and it only solves one crime, but there’s no abuse, then that’s a victory for the citizens of New Orleans.” Like most supporters of police technology, Green failed to acknowledge that the financial and human resources devoted to SoundThinking could have gone to programs proven to prevent violence by providing direct benefits to impacted populations, such as mental wellness services, after-school activities and job training. Similarly, Green’s comments overlooked the trauma of people subjected to repeated false alerts.
On the surface, this outrageous failure to test police technologies with even the rigor demanded of a toaster appears puzzling. We expect our phones, laptops, tablets, and every other device we use to meet a certain consumer standard. A cellphone that consistently connected us to the wrong number or jumbled the entries in our contact lists would have a very short shelf life. But completely different standards apply to technologies of control and oppression, especially those directed at Black people and other marginalized populations.
Why the Paradox Continues
This apparent paradox exists for several reasons. At a systems level, the decentralized structure of policing and law enforcement facilitates the expansion of these technologies. Local authorities typically make their own decisions on surveillance and policing. For the purveyors of these technologies, local decision-making offers a huge and welcoming marketplace. While cities like Boston and San Francisco have banned facial recognition, most smaller jurisdictions lack the technical expertise and resources to conduct serious investigations into police technology. They rarely have policies or research agendas to address the potential perils of apps like facial recognition or gunshot trackers. As a result, the main sources of information for local governments are frequently the company representatives themselves. In many cases, local police or sheriffs, operating through their own networks, become the major promoters of these technologies across regions, largely because the tools burnish the image of technical efficiency around their operations.
The decentralized structure also makes mounting national opposition campaigns more challenging, especially since federal authorities have chosen not to impose regulations. In fact, in many instances, federal authorities promote such usage, offering free access to surplus military equipment and invasive surveillance technology through the Law Enforcement Support Office’s 1033 Program, as well as grants through the Department of Homeland Security and National Security Agency. As of 2021, more than 10,000 federal, state and local law enforcement agencies were participating in the 1033 Program. Further, COVID-19 relief funds from the American Rescue Plan Act (ARPA) directed new resources to local authorities for police surveillance technologies such as automated license plate readers, facial recognition systems, gunshot detection programs and phone hacking tools. President Joe Biden encouraged such expenditures during an address to a Gun Violence Prevention Task Force meeting in 2022, urging cities to purchase “gun-fighting technologies, like technologies that hears, locates gunshots so there can be immediate response because you know exactly where it came from.” The nonprofit Electronic Privacy Information Center estimated that as of September 2022, at least 70 local governments had allocated ARPA funding to surveillance technology.
In addition to systemic factors, police technology also requires a controlling narrative. What researcher Evgeny Morozov calls “technological solutionism” is essential to that narrative. Technological solutionism encourages decision-makers and thought leaders to ignore options for addressing deep social problems like white supremacy or the need to redistribute income and resources. Instead, it recasts complex social phenomena as “neatly defined problems with definite, computable solutions or as transparent and self-evident processes that can be easily optimized — if only the right algorithms are in place!” In contemporary capitalism such solutions enhance the profits and power of Big Tech while claiming to address inequities, particularly those based on race. This obsession with technological fixes dampens critique and creates space for expanding or tweaking police technology. Moreover, technological solutionism has emerged amid a fundamental restructuring of contemporary capitalism, characterized by the rise of Big Tech and the expansion of policing in all its forms. This transformation has enabled a range of “solutions” unimaginable less than two decades ago, including the technologies discussed here.
We Desperately Need a New Framework for Tech
However, we are only in the early days of what I refer to as “digital colonialism,” a period that began with the launch of the first iPhone in 2007. In the world of digital colonialism, solutions come from tech giants like Google, Microsoft, Apple, Meta and Amazon. In the manner of colonialists of the past, Big Tech is establishing a settler regime within the unconquered territory of the digital world. The companies set the rules, control the technology and dictate the regime of accumulation. Like colonial states, these powers value order and hierarchies based on race, ethnicity and gender. And just as colonial states offered the Bible, Western education and the products of industrialization, Amazon and its ilk offer the digital world of Chrome, cellphones and Uber in exchange for the essential raw material of their empire: data.
As immense as the data on current computer clouds may seem, the colonial oligarchs are just starting to figure out how to deploy artificial intelligence to collect and use people’s data to both maximize their profits and extend the depth of social control. Data from facial recognition, crudely racist as it may be, is only beginning to intersect with other punitive and controlling technologies. While research has unearthed several of the shortcomings of predictive policing and gunshot locators, exposing these flaws represents only a baby step on the path to challenging the immense power of the digital monopolists.
For the moment, to borrow a phrase from Audre Lorde, critics are using the master’s tools to contest the power of Big Tech. Like the first discoverers of gold in South Africa, activists and researchers are grabbing a few nuggets of consumer products while handing over a lot more wealth in terms of biometrics and other data. Transforming these power dynamics won’t come from merely attacking the inaccuracies or racial bias baked into modern surveillance and policing. In fact, enhancing the technical capacity or reducing the racial bias in these technologies may only create more efficient punitive regimes.
Many of these technologies simply have no place in a world that respects life. Databases have many uses, especially in tracking climate change or air quality, but only if they are informed by a social justice framework, one driven neither by profit nor by dogmatic paradigms that either deify or totally reject technology.
We remain a long way from putting such frameworks in place. At a moment when the cutting edge of technology and surveillance, along with so much of the world’s political attention, is trained on Gaza, a tiny strip of land that is perhaps the ultimate laboratory for these technologies, the task of building that framework is all the more urgent.
Thanks to Teresa Barnes, Dhruv Mehrotra, Matt Chapman and Julio Mateo for providing the comments and information used to compile this article.