25 July 2018

EdTech and Algorithmic Bias

Reform’s report, Beyond gadgets: EdTech to help close the opportunity gap, offers an optimistic outlook on how EdTech is creating new opportunities by enhancing the quality of education and reducing teacher workload. The report highlights personalized learning as a particularly revolutionary innovation, providing individual students with tailor-made support to narrow the opportunity gap. The curriculum, however, remains a hotly contested topic in education. What are the potential benefits and challenges of EdTech in light of increased demands to ‘decolonize the curriculum’ and to ensure that classes are free of racial and gender bias?

Bias arises when the characteristics of particular groups are disproportionately weighted for or against them, creating an unfair prejudice towards one group. It can be conscious or unconscious, and despite its stigma, it is embedded in many social and institutional structures. Recently, the growth of algorithms designed for important industries has prompted a debate on how far human bias is carried into these algorithms. A 2016 ProPublica report, for example, described how algorithms used in criminal trials to predict future offending were biased against African-Americans – and this example is only one of many.

This happens because machines are given an input in the form of training data to produce an output. If the training data is incomplete or laden with prejudice, the output created by the algorithm will be correspondingly biased. As New York Times bestselling author Cathy O’Neil argues, “our own values and desires influence our choices, from the data we choose to collect to the questions we ask. Models are opinions embedded in mathematics.” It is thus not machines themselves that create bias. Rather, they mirror human values and apply them in their decision-making process. The same risk is present when algorithms are used for school and university curricula.
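To make the mechanism concrete, here is a minimal, hypothetical sketch in Python: a model is trained on invented “historical” decisions that favoured one group independently of ability, and it learns to reproduce that preference. All variable names and numbers are illustrative assumptions, not data from the report or any real EdTech system.

```python
# Minimal, hypothetical illustration of biased training data producing a
# biased model. Names and numbers are invented for this sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

group = rng.integers(0, 2, size=n)   # sensitive attribute (e.g. 0 or 1)
ability = rng.normal(size=n)         # the quality we actually want rewarded

# 'Historical' labels: past decisions favoured group 0 regardless of ability.
past_label = (ability + 0.8 * (group == 0) + rng.normal(scale=0.5, size=n)) > 0.5

# Train on the prejudiced history, with the group attribute as a feature.
X = np.column_stack([ability, group])
model = LogisticRegression().fit(X, past_label)

# At identical (average) ability, the model now scores group 0 more favourably.
for g in (0, 1):
    p = model.predict_proba([[0.0, g]])[0, 1]
    print(f"group {g}: predicted positive rate at average ability = {p:.2f}")
```

Nothing in this code is malicious: the model simply mirrors the values embedded in the data it was given, which is precisely O’Neil’s point.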

As Reform’s research shows, EdTech can personalize learning to provide “insightful, adaptive and personalized teaching to students”. This has enormous potential, especially in STEM subjects, but within the humanities and social sciences algorithmic bias risks reproducing inequality and racial and gender bias. A critical point here is also algorithms’ perceived “reputation for impartiality”, as O’Neil explains. Education is a process fundamentally based on critical thought. It is therefore all the more important not to embed bias into algorithms and, as a consequence, solidify it into the curriculum.

So how can EdTech be used fairly, to the benefit of all? A much-disputed option has been “fairness through unawareness” of individual pupils’ sensitive attributes. A ‘one-size-fits-all’ approach, however, thwarts the very promise of EdTech and personalized learning. A bias-free AI could provide the educational foundation for creating more equity in society. Using EdTech to address the individual struggles of disadvantaged pupils will provide them with the skills to narrow the opportunity gap. But achieving this requires eliminating potential biases within our educational system before they are integrated into machines. Therein lies the big challenge and opportunity of AI in education.
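As a rough illustration of why “fairness through unawareness” is disputed, the sketch below simply drops the sensitive attribute before training, yet an invented proxy feature correlated with it (here a “postcode”-style variable) still lets the model reproduce the group disparity. Again, every name and number is an assumption made for illustration only.

```python
# Hypothetical sketch of "fairness through unawareness": the sensitive
# attribute is excluded from the features, but a proxy variable correlated
# with it still leaks the group signal.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000

group = rng.integers(0, 2, size=n)              # sensitive attribute
proxy = group + rng.normal(scale=0.3, size=n)   # e.g. postcode, correlated with group
ability = rng.normal(size=n)
label = (ability + 0.8 * (group == 0) + rng.normal(scale=0.5, size=n)) > 0.5

# The "unaware" model never sees the group column...
X_unaware = np.column_stack([ability, proxy])
model = LogisticRegression().fit(X_unaware, label)

# ...yet outcomes still differ between groups, because the proxy stands in
# for the attribute the model was supposed to be unaware of.
for g in (0, 1):
    rate = model.predict(X_unaware[group == g]).mean()
    print(f"group {g}: predicted positive rate = {rate:.2f}")
```

In practice, educational data is full of such proxies, which is why unawareness alone rarely removes disparity and why attention turns back to the quality of the underlying data.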
