A fine balance: the ethics and efficacy of predictive policing
The recent anniversary of Stephen Lawrence’s death served as a reminder of historical tensions between the police and minority communities. These tensions could be exacerbated by the increasing use of predictive technology in policing, which can increase bias and undermine trust. Reconciling this with potential benefits of these new technologies is a challenging task.
The possibility – offered by predictive policing – of directing increasingly scarce police resources towards the individuals most likely to offend, or the areas where crime is most likely to occur, is an attractive prospect. Research suggests that some forces have found predictive mapping to be around 10 times more likely than random patrolling to identify the location of future crime. North Wales Police, who use predictive techniques to tackle burglary, have seen burglary offences fall from 4,617 in 2013 to 3,716 in the last year.
Predictive policing, however, poses serious ethical challenges, especially in an environment where trust is already fragile. The algorithms used rely on data that reflects, and so perpetuates, inequalities in the justice system. In the US, an investigation into the COMPAS sentencing algorithm found that black defendants were twice as likely as white defendants to be deemed at risk of reoffending. More recently, Durham Constabulary had to alter the predictive algorithm it uses to make custody decisions amid concerns it was discriminating against poor people. It is also unclear who is accountable when errors are made. Any model that can reinforce and exacerbate underlying biases in crime data and police practice should be treated with extreme caution.
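The kind of disparity the COMPAS investigation reported can be made concrete with a simple audit of error rates. The sketch below uses entirely hypothetical data – the records, group labels and 0.5 risk threshold are illustrative assumptions, not drawn from COMPAS or any real system – to compare false positive rates between two groups: the share of people who did not reoffend but were nonetheless labelled high risk.

```python
# Minimal bias audit: compare false positive rates across groups.
# All data here is hypothetical; the field layout and the 0.5 risk
# threshold are illustrative assumptions, not from any real system.

records = [
    # (group, predicted_risk_score, actually_reoffended)
    ("A", 0.8, False), ("A", 0.6, False), ("A", 0.7, True),
    ("A", 0.3, False), ("B", 0.6, False), ("B", 0.4, False),
    ("B", 0.2, False), ("B", 0.7, True),
]

THRESHOLD = 0.5  # scores above this are labelled "high risk"

def false_positive_rate(group):
    """Share of non-reoffenders in `group` wrongly labelled high risk."""
    non_reoffenders = [score for g, score, reoffended in records
                       if g == group and not reoffended]
    if not non_reoffenders:
        return float("nan")
    flagged = sum(1 for score in non_reoffenders if score > THRESHOLD)
    return flagged / len(non_reoffenders)

for group in ("A", "B"):
    print(f"Group {group}: false positive rate = {false_positive_rate(group):.2f}")
# A large gap between these two rates is the kind of disparity
# the COMPAS investigation surfaced.
```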
Predictive policing can enable bias, but it could also be used to help confront it. The College of Policing already have guidelines on unconscious bias and fair decision making. Predictive analytics can underpin these with statistical evidence of where bias lies: new tools could show officers the probability of error in decisions involving, say, black or working-class citizens, making it easier to account for bias in their decision-making.
This knowledge can start to encourage a better data infrastructure. The algorithms used to predict the location of crime will only be as good as the information that is fed into them. The Domain Awareness System (DAS), used in the US, analyses data from multiple sources and can automatically cross-reference this data with information held on police databases, helping increase quality. Improving the data infrastructure, by ensuring that information is complete and that data is shared, is of crucial importance if the algorithms are to produce accurate results and help tackle bias.
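As a rough illustration of the cross-referencing idea, the sketch below merges an incident report with a record from a second database and flags the fields that remain incomplete. The schemas, field names and merge rule are all hypothetical – systems like DAS are far more sophisticated – but the principle of using one source to fill gaps and surface incomplete records is the same.

```python
# Hypothetical cross-referencing: fill gaps in an incident record
# from a second database and flag fields that remain incomplete.
# Schemas and field names are illustrative assumptions only.

REQUIRED_FIELDS = ["location", "offence_type", "date", "outcome"]

def cross_reference(incident: dict, database_record: dict) -> dict:
    """Merge two records, preferring the incident report's values
    and falling back to the database where the report is blank."""
    merged = dict(database_record)
    merged.update({k: v for k, v in incident.items() if v is not None})
    return merged

def missing_fields(record: dict) -> list:
    """List required fields still absent after cross-referencing."""
    return [f for f in REQUIRED_FIELDS if record.get(f) is None]

incident = {"location": "High Street", "offence_type": "burglary",
            "date": None, "outcome": None}
db_record = {"date": "2018-03-12", "outcome": None}

merged = cross_reference(incident, db_record)
print(merged)
print("Still incomplete:", missing_fields(merged))  # -> ['outcome']
```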
Underpinning this must be a strong accountability framework. Durham Constabulary emphasise that the HART algorithm they use to assess the risk of individuals reoffending is an advisory tool, meaning the officer retains responsibility for ensuring decisions are fair and unbiased. However, some academics argue accountability should rest solely with the tech companies producing the software. In healthcare, the Medicines and Healthcare products Regulatory Agency (MHRA) regulates the software and devices used in care to ensure their safety. As algorithms enter policing, careful thought about the right accountability framework is crucial.
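One way to make the advisory-tool principle concrete is to require that every algorithmic recommendation be paired with a recorded officer decision, so that responsibility is always auditable. The sketch below is a hypothetical pattern, not Durham's actual HART implementation: the score is advisory only, and no decision is logged without a named officer and a written reason.

```python
# Hypothetical "advisory tool" pattern: the algorithm suggests,
# a named officer decides, and both are kept on an audit trail.
# This is an illustrative design, not the actual HART system.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class CustodyDecision:
    case_id: str
    advisory_risk: str      # algorithm's suggestion, e.g. "moderate"
    officer_id: str         # the accountable decision-maker
    final_decision: str     # may differ from the advisory output
    reason: str             # required justification, kept for review
    timestamp: str

audit_log = []

def record_decision(case_id, advisory_risk, officer_id,
                    final_decision, reason):
    """Log a custody decision; refuse entries with no justification."""
    if not reason.strip():
        raise ValueError("A decision cannot be logged without a reason.")
    entry = CustodyDecision(case_id, advisory_risk, officer_id,
                            final_decision, reason,
                            datetime.now(timezone.utc).isoformat())
    audit_log.append(entry)
    return entry

# The officer overrides the advisory score and must say why.
record_decision("case-041", advisory_risk="high", officer_id="PC-1123",
                final_decision="release on bail",
                reason="Risk factors outdated; stable address confirmed.")
```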
Race and bias are at the heart of some of the biggest issues facing policing. As technology improves, and resources remain scarce, forces may turn to predictive policing as a powerful model for future policing. However, the focus must always be on ethics as well as efficacy.