Crime-Predicting Tech Is “Supercharging Racism,” Amnesty Finds

British police forces are using AI crime prediction tools that disproportionately target Black and racialized communities, a new report from Amnesty International UK has revealed.
According to Amnesty, nearly three-quarters of UK police forces rely on predictive policing, which uses past crime data to estimate where crimes might happen or who might commit them.
The 120-page report, “Automated Racism – How Police Data and Algorithms Code Discrimination Into Policing,” argues that these predictive systems reinforce racial bias instead of improving public safety.
“These systems have been built with discriminatory data and only serve to supercharge racism,” Sacha Deshmukh, Chief Executive at Amnesty International UK, said in a press release.
AI Policing Reinforces Bias, Not Safety
One key criticism in the report is that these AI-driven tools rely on flawed and biased data, leading to a cycle in which over-policed communities are repeatedly flagged as high-risk.
The report highlights how these systems disproportionately affect Black communities in cities like London, Manchester, and Birmingham. In the London borough of Lambeth, for example, Black people were stopped and searched four times more often than white people. Yet in 80% of those cases no crime was found, indicating that most of the people targeted were innocent.
Another controversial system, used by the Metropolitan Police, assigns people “risk scores” based on intelligence reports—even if they have never committed a crime. Amnesty says this violates basic human rights, including the presumption of innocence.
Calls for a Ban on AI Crime Prediction
Amnesty is now calling for a ban on predictive policing systems across the UK. The organization is also pushing for greater transparency, arguing that people should know when AI-driven policing is being used against them.
The report warns that without accountability, these tools will continue to harm already marginalized communities.
“The use of predictive policing tools violates human rights. The evidence that this technology keeps us safe just isn’t there; the evidence that it violates our fundamental rights is clear as day,” Deshmukh added. “We are all much more than computer-generated risk scores.”
Image credit: Metropolitan Police