AI needs an interest in explanations; not interesting explanations
In my last blog, ‘How to speak AI?’, I referred readers to the recent publication of the Explaining Decisions made with AI guides from the Information Commissioner’s Office (ICO) and the Alan Turing Institute.
Specifically, I focussed on the critical importance of these two influential bodies setting out a common language through a clear set of definitions. This, in my view, is a significant step towards a collective understanding of the benefits and risks of AI. I concluded by tempting readers to return to discuss the Guides in more detail and how these might help in developing our collective understanding and awareness of our own individual accountability where AI is concerned.
But let me first go back a step. Another aspect of the guides which I found incredibly useful was that together they tackle the issue of how personal data is managed in the digital world, and how this differs when AI and its algorithms are layered on top. Consider the recent debacle in education, where an algorithm’s reliance on historical results introduced a bias that made it impossible to ‘break the mould’ and be the first person in your school ever to achieve an A in a particular subject. These Guides do a good job of drawing a distinction in the treatment of data between these two interconnected worlds.
What do I mean? Let’s look at what I’ll describe as data ethics first, or how we manage and use personal data. This is an area that is already highly regulated: the consent needed, individual rights, how you treat data – it is all very clearly legislated for (think the Data Protection Act and the GDPR). So, in this respect there is little, if any, ethical choice; it is clear which issues should be considered and it is clear where to look for guidance. As our data use becomes increasingly complex, more and more dilemmas will emerge, and so we will return to the more complex aspects of data ethics later in this series.
The development and implementation of AI, by contrast, is full of ethical dilemmas.
The first of these Guides, ‘Explaining the basics of AI’ tackles this head on – because dilemmas can arise from the very beginning, when explaining your use of AI. How transparent will you be when explaining to people about your use of AI and the decisions made using AI?
It starts by discussing what exactly is meant by an explanation. Helpfully, this first guide provides several. And this is important, as it is not only in their research, but in the findings of many others that the importance of context is stressed when explaining a decision made using AI.
When a decision is made only by a human, a person knows where to turn if they want to understand why that decision was made. It is different with a decision made using AI; the responsibility may be less clear. But an individual should still be able to receive an explanation from those accountable for the AI system. And different approaches may be needed for different audiences: the data subject, the staff whose decisions are supported by AI systems, and external auditors, to name only a few.
This Guide outlines 6 different types of explanation:
- rationale explanation
- responsibility explanation
- data explanation
- fairness explanation
- safety and performance explanation
- impact explanation.
For each, it also provides a process-based and an outcome-based explanation. The former, as you can imagine, can be used to show that ‘you have followed good governance processes and best practices throughout your design and use’. The outcome-based explanation sets out, in plain, easily understandable, everyday language, the reason for a particular algorithmically generated outcome.
The Guide works through each of the different types of explanation in extremely clear language and it is well worth taking some time to explore and understand these in some detail.
I mentioned the role of context above, and this first Guide brings it in by looking at 5 different factors. As with the explanations, each is then dealt with in greater detail, and the Guide even covers how to prioritise them.
The guide closes with some simple principles:
- be transparent;
- be accountable;
- consider the context you are operating in; and,
- reflect on the impact of your AI system on the individuals affected, as well as wider society.
These same principles are true across all aspects of banking. Professionalism is not about one moment in time when making a decision – whether AI is involved or not.
A recent report from the Economist Intelligence Unit states that the current pandemic will bring urgency to the need for AI-based decisions that are ethical, fair and well-documented. ‘Explainability’ will be at the core of getting this right. So, as AI decision-support systems are integrated more and more across banking, we feel it is important to ensure our members stay informed. In my next blog, we’ll look at what the guides say about understanding AI in practice, and what a modern banker might do to augment their understanding of it.