8 March 2017

AI public services: biases, accountability and transparency (III)

Last Tuesday, the Science and Technology Committee launched the first government-led inquiry into algorithms in decision-making. It is a much-awaited initiative, as algorithms already have a huge impact on people’s lives. At the more benign end, they make movie or music recommendations. More worryingly, they shape the information people see – from how results are ranked in a search engine query, to which news stories appear in social media newsfeeds, to the actual (or fake) news content itself. As highlighted in our previous blog, applying algorithms to decision-making in public services could increase efficiency and potentially lead to better outcomes for citizens, but safeguards need to be put in place to lessen the negative impacts.

Algorithms, simply defined as sets of procedures to accomplish a task, are the backbone of every advance in artificial intelligence (AI), from robotics to machine learning (ML). Given their pervasiveness and economic impact – AI could add up to £654bn to the UK economy over the next 18 years – they have increasingly crept into the political agenda, mostly in the form of increased investment. The Industrial Strategy set out a Challenge Fund, which could be used to support projects in robotics and AI. In the Digital Strategy, £17.3 million was awarded to the Engineering and Physical Sciences Research Council (EPSRC) to support AI research. These investments are essential to harness the potential of AI; however, they will not solve some of the fundamental societal challenges we face when using algorithms for decision-making.

Aside from the fears of algorithms getting out of hand – Wikipedia’s bot wars could be seen as an early sign of this – there are some, perhaps more immediate, negative consequences of their application which need to be addressed. Firstly, algorithms can perpetuate existing societal biases. In its investigation of Northpointe’s offender risk-assessment algorithm used in the United States’ criminal justice system, ProPublica, an investigative journalism organisation, demonstrated that it was biased against black people. “Black defendants with a low risk of reoffending were more likely than white ones to be labeled as high risk”, as reported by the Financial Times. Often these biases arise because the training data needed to develop ML algorithms are themselves biased. Whatever the reason, these biases need to be addressed, as their discriminatory impact is intolerable. Moreover, this raises the question of how algorithms should be held to account for the decisions they make, especially when those decisions lead to negative consequences such as discrimination. Secondly, the inherent ‘black-box’ nature of some algorithms, particularly in deep and reinforcement learning, makes it almost impossible to understand why an algorithm produced the results it did. This further emphasises the importance of developing algorithmic accountability.
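The mechanism by which biased training data produces a biased model can be illustrated with a toy sketch. Everything below is hypothetical – the group names, numbers and the trivial frequency-based “model” are invented for illustration only – but it shows how a system that simply learns from historical decisions inherits any bias baked into those decisions, even when the two groups carry identical underlying risk.

```python
# Hypothetical illustration: a model trained on biased historical
# labels reproduces the bias. All data here is synthetic.
import random

random.seed(0)

def make_history(n=10000):
    """Synthetic 'historical' records: (group, flagged_high_risk)."""
    records = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        true_risk = 0.3  # identical underlying risk in both groups
        # Biased historical labelling: group A was over-flagged.
        flag_rate = true_risk + (0.2 if group == "A" else 0.0)
        label = 1 if random.random() < flag_rate else 0
        records.append((group, label))
    return records

def train(records):
    """A minimal 'learned' rule: the historical flag rate per group."""
    counts = {}
    for group, label in records:
        pos, total = counts.get(group, (0, 0))
        counts[group] = (pos + label, total + 1)
    return {g: pos / total for g, (pos, total) in counts.items()}

model = train(make_history())
# Group A ends up with a higher predicted risk than group B,
# despite both groups having the same true risk of 0.3.
print({g: round(rate, 2) for g, rate in sorted(model.items())})
```

The sketch deliberately makes the true risk equal across groups, so the entire gap in the model’s output comes from the labelling bias in the training data – the model is faithfully learning the wrong thing.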

Several initiatives from academia, the private sector and governments around the world have tried to address these issues. Researchers from the US, Chile and the UK co-authored Principles for Accountable Algorithms, which outlines a viable route towards algorithmic accountability. The British Academy and Royal Society are currently researching issues surrounding data ethics and governance, which will also cover AI. The Leverhulme Centre for the Future of Artificial Intelligence is currently running a series of nine projects on the nature and impact of AI.

The government has a crucial role to play in the debate about the challenges posed by algorithms in decision-making. It will need to think very carefully about these issues, and about how to address them without creating unnecessary regulatory burdens that could stifle innovation.