Thursday 28 March 2019

Artificial Intelligence, Ethics and Education

What is AI? 

Artificial Intelligence refers to computer systems that are both autonomous and adaptive: they can perform complex tasks without constant guidance from a user, and they can improve their performance by learning from experience. The process of getting computers to learn without being explicitly programmed is called Machine Learning. 
We are all familiar with the increasing role that AI is playing in our lives. Machine Learning is managing our email junk folders; it is suggesting the next word when we are texting; it is labelling and organising our photo albums; and it makes suggestions about what we should buy next from Amazon or watch next on Netflix. Most of these functions rely on ‘Supervised’ Machine Learning algorithms, which are developed from an initial training set of data and then refined as further information becomes available. 
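To make the idea of supervised learning concrete, here is a minimal sketch: a toy spam filter trained on a handful of hand-labelled messages. Every message, label and word count below is invented purely for illustration; real systems use far larger datasets and more sophisticated models.

```python
from collections import Counter

# A tiny labelled training set (all examples invented for illustration).
training_data = [
    ("win a free prize now", "spam"),
    ("claim your free reward", "spam"),
    ("meeting agenda for tomorrow", "ham"),
    ("lunch at noon tomorrow", "ham"),
]

# "Training": count how often each word appears under each label.
word_counts = {"spam": Counter(), "ham": Counter()}
for text, label in training_data:
    word_counts[label].update(text.split())

def classify(message):
    """Label a new message by which class its words appeared under more often."""
    scores = {
        label: sum(counts[word] for word in message.split())
        for label, counts in word_counts.items()
    }
    return max(scores, key=scores.get)

print(classify("free prize inside"))       # prints "spam"
print(classify("agenda for the meeting"))  # prints "ham"
```

The key point is that the classifier’s behaviour is entirely determined by its training data: if the labelled examples are skewed, the predictions will be too, which is exactly the concern explored below.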

AI and Ethics 

Ethical concerns about AI revolve around ‘algorithmic bias’, i.e. around the validity of the way in which an algorithm is constructed, and usually around the nature of its training dataset. These concerns take three forms: 
  1. Concerns about Bias: the training dataset on which the algorithm was originally constructed may not reflect the composition of the wider population. To take an extreme example, a dataset that is based on American billionaires is likely to be white, educated, aged over 45, male and, by definition, rich.
  2. Concerns about Fairness: the training dataset is based on accurate historic data, but those data reflect unfair practices. For example, in 2011 the City of Boston MA launched the ‘Street Bump’ app, which mapped the location of potholes needing repair around the city by collecting data from the accelerometers in users’ smartphones. The app successfully collected data and saved the City time and money in surveying the roads. However, a review of the project after 12 months showed that a disproportionate number of the potholes identified and repaired were in affluent middle-class areas, to the detriment of poorer areas. This was almost certainly because affluent middle-class residents were more likely to own a smartphone and more likely to download the app. In a similar vein, data scientist Cathy O’Neil, author of Weapons of Math Destruction, has voiced her concerns about the way in which algorithms are being used in the US criminal justice system. The police are using historic arrest data as a proxy for crime data to drive preventative policing models. Because of this, the algorithm simply reinforces historic practice by sending the police back to the neighbourhoods which they are already over-policing, while neglecting neighbourhoods where crime occurs but goes unrecorded. The irony is that, in these examples, the intention was to create algorithms which were free from human bias; however, because of the way in which they were constructed, they had the unintended consequence of perpetuating historic inequalities. 
  3. Concerns about Unethical Behaviour: the dataset is deliberately skewed or designed to behave in a dishonourable way. AI is fundamentally an ethically neutral platform. It can be used or misused like any other technology. History teaches us that most technologies are misused at some point. 
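The Street Bump effect described above can be illustrated with a short, deliberately simplified sketch. The area names, pothole counts and app-uptake rates are all invented numbers, not the real Boston figures; the point is only to show how a decision driven by reported data inherits the bias in who does the reporting.

```python
# Hypothetical illustration: potholes are spread evenly across two areas,
# but pothole *reports* depend on smartphone/app uptake, which differs.
actual_potholes = {"affluent_area": 50, "poorer_area": 50}  # equal real need
report_rate = {"affluent_area": 0.8, "poorer_area": 0.2}    # assumed app uptake

# Reports received: each pothole is only reported at the local uptake rate.
reports = {area: int(n * report_rate[area]) for area, n in actual_potholes.items()}

# A repair schedule driven purely by report counts inherits the sampling bias.
total = sum(reports.values())
repair_share = {area: count / total for area, count in reports.items()}

print(reports)       # {'affluent_area': 40, 'poorer_area': 10}
print(repair_share)  # 80% of repairs go where only 50% of the potholes are
```

Nothing in this schedule is dishonest; the inputs are accurate reports. The unfairness enters entirely through who was able to generate the data in the first place.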
In order to avoid historic or intentional bias, it is necessary to develop new protocols. Once designers deviate from historic data and endeavour to build an algorithm based on data which is deemed both unbiased and fair, they are presented with some quite serious ethical challenges. There will be parallels here to the debates about the value of positive discrimination in the workplace. One way to manage AI is to establish protocols which ensure that we avoid algorithmic bias - and here diversity is the key. There needs to be a Diversity of Background and a Diversity of Mindset in the team building the algorithms to avoid “group think”; a Diversity of Data in any training set; and a Diversity of Algorithmic Models used. 
Looking ahead, it is likely that there will need to be formal regulation of algorithm design (rather akin to the way in which financial services are regulated), which will entail the development of a regulatory function of ethical audit. This role will ensure that algorithms are not subject to intentional or unintended bias. 

The Ethics of AI in Education 

The use of AI in education is in its infancy. We are beginning to see adaptive learning platforms, such as CenturyTech, being used in schools, primarily to supplement and support what teachers are doing in the classroom. Whether or not this is the first tentative step towards the ‘Holy Grail’ of fully adaptive and personalised learning that does not require teacher input is a debate for another day. Looking ahead, it is likely that AI in Education will pose some significant ethical issues. 
  1. First, as those who are embroiled in GDPR know only too well, there are a whole range of concerns about the security, ownership and privacy of personal student data that is captured and stored within an AI platform. There will need to be policies and protocols in this area. 
  2. Secondly, there are concerns about the fairness of access to AI technologies and the potential for AI to increase the ‘Digital Divide’ between those who can afford access to the technology and those who cannot. 
  3. Thirdly, there is a danger that a biased training set, on which educational AI technologies are founded, will reinforce social and cultural stereotypes. For example, it is quite possible that the dataset for an AI learning platform might be skewed because the early adopters all come from affluent fee-paying schools which can afford to provide access. 

Assessment Algorithms 

Perhaps the greatest ethical issues will arise when AI is used to make significant summative assessments of students’ abilities in the allocation of places at university or in the jobs market. We have already seen the ‘Big Four’ accountancy firms preferring their own assessment platforms to consideration of A-level and degree results in order to find recruits who have the most potential (e.g. ‘Big Four’ look beyond academics – Financial Times 28/02/2016). It is quite possible to conceive of a time when both universities and employers, motivated by the noble intention of assessing potential and facilitating social mobility, will rely on their own assessment and recruitment algorithms to identify suitable candidates. If this were to happen, it would be vital that any algorithm be subject to rigorous ethical audit to ensure that it meets a standard test of fairness. 

Final Remarks 

We have only begun to realise the potential that Artificial Intelligence has to shape C21 society and, sadly, social and ethical debate is struggling to keep up with the development of the technology. There needs to be an informed debate about the place of AI in society, and particularly of how it is going to be applied in education. In order to do this, we need a much greater understanding in society of how AI and Machine Learning work – and that is a challenge which I hope will be taken up by schools over the coming months and years.

This article was published in Digital Strategy Edition 2, March 2019 by the ISC Digital Strategy Group.
