
Canada’s Predictive Policing Tech Is Poorly Regulated Under AI Policy

As surveillance has become a fact of life, digital privacy must become a human right.


In February 2019, Nijeer Parks was arrested in Woodbridge, New Jersey, based on a facial recognition match that linked him to several crimes. Parks faced charges of aggravated assault, unlawful possession of weapons, using a fake ID, possession of marijuana, shoplifting, leaving the scene of a crime and resisting arrest.

In November 2019, the case against him was dismissed — there was no evidence of him committing a crime besides a faulty facial recognition match.

In the nearly year-long span between being arrested and being cleared, Parks faced a legal and personal nightmare. Before even learning what the evidence against him was, he spent 11 days in jail, and over the following year he paid nearly $5,000 to defend himself against crimes he did not commit. On top of this, Parks has yet to receive an apology for the wrongs committed against him.

“I’ve never heard anything from anybody else…. No, ‘We’re sorry. We could have went about it a different way’,” said Parks in an interview with CNN. “Nothing.”

He is now in the midst of an ongoing lawsuit against New Jersey police and prosecutors.

Parks was the third known person in the United States to be falsely arrested based on a faulty facial recognition match, joining Robert Williams and Michael Oliver — all Black men. While this has yet to happen in Canada, marginalized peoples in this country are in danger of facing similar injustices if the federal government does not act proactively.

Canadian Outlook

Systemic biases are ingrained in Canadian law enforcement agencies, a finding confirmed by the House of Commons in its 2021 Report of the Standing Committee on Public Safety and National Security.

The use of algorithmic policing technologies, such as facial recognition, in police work has steadily increased over the past several years, as reported by The Citizen Lab and the University of Toronto’s International Human Rights Program.

This has given rise to predictive policing in Canada, which relies on historical crime data to forecast crime. Proponents of predictive policing argue that it predicts crime more effectively than traditional policing methods, eliminating bias. However, historical crime data is inherently biased, and algorithms built on that data reproduce those very biases.
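To see how that reproduction works mechanically, consider a minimal simulation. It is illustrative only, not modeled on any real policing system: patrols are sent wherever past records are densest, crime is only recorded where officers are present to observe it, and so an initial recording disparity sustains itself even when the true crime rate is identical everywhere.

```python
# Illustrative feedback-loop simulation (not any real policing system).
import random

random.seed(0)

NEIGHBORHOODS = 5
TRUE_CRIME_RATE = [0.10] * NEIGHBORHOODS  # identical underlying rate everywhere
PATROLS_PER_DAY = 10

# Biased starting point: past over-policing left neighborhood 0 with
# far more *recorded* incidents, even though true rates are equal.
recorded = [50, 10, 10, 10, 10]

for day in range(365):
    total = sum(recorded)
    # "Predictive" allocation: send patrols where past records are densest.
    patrols = [round(PATROLS_PER_DAY * r / total) for r in recorded]
    for hood, n in enumerate(patrols):
        for _ in range(n):
            # Crime is only recorded where an officer is present to see it.
            if random.random() < TRUE_CRIME_RATE[hood]:
                recorded[hood] += 1

shares = [r / sum(recorded) for r in recorded]
print("Share of recorded crime by neighborhood:",
      [f"{s:.0%}" for s in shares])
# Neighborhood 0 keeps the lion's share of recorded crime purely because
# it started with more records and therefore received more patrols.
```

Running the sketch shows neighborhood 0 still accounting for a majority of recorded crime after a year, despite every neighborhood having the same true crime rate: the data such a model learns from is a record of where police looked, not of where crime happened.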

What compounds this issue is that Canada’s current artificial intelligence (AI) policies are severely lacking in regulating how law enforcement use algorithmic policing technologies, and in providing protections for marginalized peoples (Black, Indigenous, Asian, Brown, transgender, etc.) from potential police abuse.

Experts such as Canadian AI governance researcher Ana Brandusescu want the federal government to act proactively and ban these technologies outright. If not an outright ban, she believes that transparency and accountability are important principles that must be incorporated into Canada’s AI policies and the procurement of AI technologies.

These principles would help provide “a really clear idea of how public money is spent, and where it’s going,” said Brandusescu in an interview with Truthout.

Types of Algorithmic Policing Technologies

Location-based algorithmic technology identifies “where and when potential criminal activity might occur” by using historical (often problematic) police data, as defined by a report published by The Citizen Lab.

The Vancouver Police Department’s use of GeoDASH is an example of this; the technology can disproportionately target the marginalized and vulnerable communities that live in Vancouver’s Downtown Eastside.

Person-focused algorithmic technologies rely “on data analysis … to identify people who are more likely to be involved in potential criminal activity,” according to The Citizen Lab.

An example of this is the Calgary Police Service’s use of Gotham, a data analysis platform developed by the defense company Palantir.

Gotham provides the Police Service with “physical characteristics, relationships, interactions with police, religious affiliation, and possible involved activities,” while also “[mapping] out the location of purported crime and calls for services,” wrote The Citizen Lab.

While surveillance technologies do not have a predictive element, they have their own set of issues — as the falsely arrested Parks, Williams and Oliver can attest to.

As prominent AI scholars Joy Buolamwini and Timnit Gebru have tested and proven, facial recognition technology fails to accurately register the faces of racially marginalized and trans people, because the facial data it is trained on consists largely of white, gender-normative faces.
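Their approach was to disaggregate a system’s error rate by demographic group instead of reporting a single aggregate number. The sketch below illustrates that kind of audit on synthetic stand-in predictions; the groups and outcomes are invented for illustration and are not drawn from any real system.

```python
# Disaggregated error audit on synthetic data, in the spirit of the
# "Gender Shades" methodology. All values below are invented.
from collections import defaultdict

# (demographic_group, ground_truth_is_match, system_said_match)
results = [
    ("lighter-skinned men",  True,  True),
    ("lighter-skinned men",  False, False),
    ("lighter-skinned men",  True,  True),
    ("lighter-skinned men",  False, False),
    ("darker-skinned women", True,  False),
    ("darker-skinned women", False, True),
    ("darker-skinned women", True,  True),
    ("darker-skinned women", False, True),
]

tallies = defaultdict(lambda: [0, 0])  # group -> [errors, total]
for group, truth, predicted in results:
    tallies[group][0] += truth != predicted
    tallies[group][1] += 1

for group, (wrong, total) in tallies.items():
    print(f"{group}: {wrong}/{total} misclassified ({wrong / total:.0%})")
# Aggregate accuracy (here 62%) can look tolerable while one group's
# error rate is several times another's -- the disparity the audit exposes.
```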

Current Landscape

At the moment, there are two policies in Canada that concern AI: the Directive on Automated Decision-Making (ADM) and the Algorithmic Impact Assessment (AIA) that accompanies it, both developed by the Treasury Board Secretariat.

According to AI governance researcher Brandusescu, “the Directive on ADM or AIA are not doing enough to support public accountability.”

The AIA is a mandatory risk assessment questionnaire with 81 questions about a system’s business processes, algorithms, data and design. However, because of the lack of independent oversight, there is no measure in place to prevent those completing it from treating it as a rubber-stamp exercise.
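To make the rubber-stamp concern concrete, here is a hypothetical miniature of a questionnaire-to-risk-tier scheme. The questions, weights and tier cutoffs are invented for illustration and are not the AIA’s actual content or scoring; the point is only that a self-reported score, unverified by any independent body, can be minimized by the filer.

```python
# Hypothetical questionnaire-to-risk-tier scheme (NOT the AIA's actual
# questions or scoring); illustrates why unverified self-assessment
# invites rubber-stamping.
QUESTION_WEIGHTS = {
    "decision_affects_liberty": 4,   # illustrative weights only
    "uses_personal_biometrics": 3,
    "no_human_in_the_loop": 3,
    "training_data_undocumented": 2,
}

def impact_level(answers: dict) -> str:
    """Sum the weights of every 'yes' answer and map the total to a tier."""
    score = sum(w for q, w in QUESTION_WEIGHTS.items() if answers.get(q))
    for cap, level in [(3, "Level I"), (6, "Level II"), (9, "Level III")]:
        if score <= cap:
            return level
    return "Level IV"

# Truthful answers put this system in a high-impact tier...
print(impact_level({"decision_affects_liberty": True,
                    "uses_personal_biometrics": True}))  # Level III
# ...but answering "no" across the board yields the lowest tier, and
# without independent oversight nothing verifies the answers.
print(impact_level({}))  # Level I
```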

The Directive on ADM is only applicable to AI technologies developed in-house by the federal government or outsourced to private companies.

However, it has no power over AI technologies developed for provincial use, and no power over private companies that develop their technology of their own accord and then either sell it to different governmental institutions or offer free trials, which is what happened with Clearview AI.

An internal draft review of the Directive on ADM by the Treasury Board Secretariat, which has yet to be published, raised concerns regarding the legal and ethical use of AI in policing, as “algorithms are trained on historical data, [and] their users run the risk of perpetuating past injustices and discriminatory policing practices.”

In an interview with Truthout, Sean Benmor, a senior communications advisor for Innovation, Science and Economic Development Canada (ISED), responding on behalf of the Treasury Board Secretariat, said, “The use of algorithmic technologies in law enforcement could be subject to the Directive [on ADM] if they are in scope.”

However, what is outlined in the scope of the Directive on ADM is vague. According to section 5.2, it is applicable “to any system, tool, or statistical models used to recommend or make an administrative decision about a client.”

It is unclear what an administrative decision about a “client” means. Does it mean the use of an algorithmic technology to make an arrest? Does the Vancouver Police Department’s use of GeoDASH and the Calgary Police Service’s use of Palantir Gotham fall under section 5.2?

In addition, not all algorithmic policing technologies are automated decision-making systems, meaning there is another gaping hole in the policy. The draft review pointed out that the Directive on ADM would not have covered Clearview AI’s facial recognition technology “because the tool itself did not make any decisions.”

If there is no current policy dedicated to regulating facial recognition technology — the tool that produced the faulty matches that led to the arrests of Parks, Williams and Oliver — then that is a serious issue that the federal government must address.

According to Treasury Board Secretariat representatives who spoke at the first public gathering on the Directive on ADM in November 2021, the final version of the internal review was supposed to be published by early 2022, but it has not been finished yet.

Benmor told Truthout, “Work is underway for a regular review of the Directive, which includes consideration of additional measures to strengthen the instrument’s approach to addressing bias.”

The Digital Charter Implementation Act

Bill C-11, the Digital Charter Implementation Act, was a proposed policy that sought to strengthen digital privacy protections for people in Canada. It died on the order paper when the 2021 federal election was called, after receiving only two readings, but its contents represent the state of digital privacy reform at the federal level.

It was created in light of Clearview AI’s privacy violations, which caused the federal government to reexamine its existing frameworks. Forty-eight Canadian law enforcement agencies — including the Royal Canadian Mounted Police — admitted to using the U.S. tech company’s facial recognition technology and its database of images and biometric facial arrays of people in Canada.

This was an inflection point for Canada’s public and political discourse concerning facial recognition.

When asked about Bill C-11’s future, Benmor said, “Minister [François-Philippe Champagne] has indicated that new legislation will consider stakeholders’ comments on the former Bill C-11…. One such comment pertained to the need for greater transparency and accountability on the part of organizations who are developing and using AI systems which may impact Canadians.”

Solutions

On March 21, AI governance researcher Brandusescu provided expert testimony on the use and impact of facial recognition to the House of Commons’ Standing Committee on Access to Information, Privacy and Ethics. She proposed several solutions that can be applied to algorithmic technologies in general.

Clearview AI was able to work around Canada’s existing digital privacy frameworks because it provided law enforcement with its technology on a trial basis: no contract was involved, and no measure was in place to regulate such arrangements.

In her testimony, Brandusescu recommended that the Office of the Privacy Commissioner “create a policy for the proactive disclosure of free software trials used by law enforcement, and all of government, as well as create a public registry for them.”

She also maintains that a public registry is necessary for all AI technologies, especially those used by law enforcement. “A public AI registry will be useful for researchers, academics, and investigative journalists to inform the public,” she said.

She believes that companies linked to human rights abuses, such as Palantir, should be removed from Canada’s pre-qualified AI supplier list.

Regarding the existing AIA, “The Office of the Privacy Commissioner should work with the Treasury Board Secretariat to develop more specific, ongoing monitoring and reporting requirements so the public knows [if] the use or impact of a system has changed since the initial [assessment],” said Brandusescu.

Going Forward

As surveillance has become a fact of life, digital privacy must become a human right.

“While AI has the power to solve immense problems and enable unprecedented innovation, it can also create new challenges when left unchecked,” Benmor said.

This acknowledgment is important, but only if solutions are implemented alongside it as soon as possible. Brandusescu’s recommendations would go a long way toward preventing the injustices that befell Parks, Williams and Oliver in the U.S. from happening to marginalized peoples in Canada.
