When the computer says no, challenge it
The rise of machines across the banking sector has long been heralded. But, according to Dr Viktor Dörfler, there are limitations as to what they can deliver – and plenty of misguided rhetoric around the use of non-human processes. Here, he demystifies artificial intelligence, machine learning and automation.
It’s hard to read or watch content about the future of financial services and banking without being bombarded by the message that technology and digitalisation are reimagining processes, optimising efficiency, and rapidly evolving the customer experience.
Terminology such as ‘chatbots’, ‘artificial intelligence’, ‘sentiment analysis’ and ‘automation’ has become common parlance in discussions on the future of banks – with the recurring theme that these technologies can make banking better for everyone.
However, as Dr Viktor Dörfler, Senior Lecturer in Information and Knowledge Management, University of Strathclyde Business School – and a Visiting Professor at the University of Zagreb, Croatia, and Széchenyi University in Hungary – explains, there is often a tendency to confuse the roles of different types of technology, and to sometimes overstate the opportunities offered by them.
“There is certainly confusion around some of the different terms – and that confusion is very common,” he says. “In particular, there are three terms that are regularly confused, along with a misunderstanding of what these technologies and processes can deliver: artificial intelligence [AI], machine learning [ML], and automation.”
So, what are the differences between AI, ML and automation – and what are their limitations?
“Artificial intelligence is where a machine performs something that we humans do by thinking, but the definition does not say anything about it doing so in a way that is similar to a human,” Dörfler explains. “That’s a very important detail – that the machine carries out a function that would require us, and therefore presumably it, to make a conscious decision. This is the area where there is undoubtedly the most over-promise in banking. Do we really have machines in banking that can think? The answer is no. And it is not happening anytime soon; I don’t believe that is even possible.
"Of course, there is huge appetite for what is perceived to be AI in banking – because this is a multi-trillion dollar
industry, where a competitive edge is incredibly valuable, and so people are willing to be sold to. But the notion that a machine can make decisions based on attributing human characteristics is simply not right, and hinders our understanding. Confucius said something along the lines of ‘when the words are not correct, thinking becomes muddled’.”
Dörfler says that many of the technologies that people perceive to be AI in banking are, in actual fact, nothing more than automation.
“There is a very important difference to understand between AI and automation,” he says. “When you strip back many of the practices and processes that machines are used for in banking, you see that they are really just automating regular, well-defined and consistent data processes.
“It is an interesting observation because, in my view, it is often the vendors that are over-promising on AI. Many robotic process automation [RPA] companies, by contrast, are very clear about offering simple automated processes, at least at first – you need to get your data processes sorted out before implementing AI. And, in most cases, banks don’t actually need AI. Using automation to process data faster and more accurately is more than sufficient. It’s just that we seem to have this obsession with over-selling automation as AI.”
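The ‘simple automation’ Dörfler describes can be sketched in a few lines: a fixed, rule-based routine applied to well-structured records, with no learning and no intelligence involved – just consistent execution. The routine, record format and figures below are invented for illustration; this is not any bank’s actual process.

```python
# A toy sketch of rule-based automation: process standing orders against a
# balance by applying the same well-defined rule to every record.
# All names and amounts are hypothetical.

def apply_standing_orders(balance, orders):
    """Execute each payment in order; skip any the balance cannot cover."""
    executed, skipped = [], []
    for name, amount in orders:
        if amount <= balance:
            balance -= amount
            executed.append(name)
        else:
            skipped.append(name)
    return balance, executed, skipped

# Hypothetical monthly outgoings, in pounds.
orders = [("rent", 800.0), ("utilities", 120.0), ("gym", 35.0)]
```

The point of the sketch is what is absent: nothing here adapts, predicts or decides – the machine simply applies a human-written rule faster and more consistently than a person would.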
While most banking processes are, in fact, automated rather than leveraging AI, Dörfler says the real power of automation is felt if it can be expanded to incorporate another element – ML.
“Once banks’ data processing is arranged in a well-structured form and automated so that machines can do that processing faster and more accurately – simple automation – it becomes really interesting when you can add ML. ML is one aspect of AI – it is not a different entity.
“ML means we can programme the automation process by providing a large number of examples with favourable and unfavourable outcomes, and the machine will learn to replicate the statistical patterns found in those examples. This may lead to selecting items or identifying anomalies based on pre-set criteria and parameters, in the form of a so-called ‘goal function’ that has been set by a human. It is not a machine with real intelligence – because the machine is not thinking for itself to make those decisions. It cannot make judgments. But it can identify patterns that humans would not be able to identify, based on the information it has been provided with.”
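The learning-from-examples process described above can be illustrated with a toy sketch: a human supplies labelled examples and a goal (here, reducing misclassifications), and the machine adjusts its parameters to fit. Everything below – the two features, the labels and the simple perceptron learner – is invented for illustration; no real banking model works on two hand-picked features.

```python
# Minimal sketch of supervised learning: fit weights so that a human-chosen
# goal function (fewer misclassified examples) improves. The machine replicates
# patterns in the labelled data; it makes no judgments of its own.

def train_perceptron(examples, epochs=100, lr=0.1):
    """Learn weights from (features, label) pairs; label 1 = favourable outcome."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred  # drives the update: the pre-set goal function
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def classify(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Hypothetical labelled history: (amount in thousands, hour of day), 1 = legitimate.
history = [((0.1, 10), 1), ((0.2, 14), 1), ((5.0, 3), 0), ((4.0, 2), 0)]
w, b = train_perceptron(history)
```

Note that the human still sets everything that matters: which features are fed in, how outcomes are labelled, and what the goal function rewards. The machine only finds a statistical boundary consistent with those choices.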
To further explain the role of ML, Dörfler offers a working example. “Let’s take the example of an artificial neural network [ANN] being used to process a very large number of fMRI pictures taken from cancer patients,” he says. “The process will identify patterns that human oncologists could not – for the very simple reason that it can process hundreds of billions of these pictures in a short time, which humans simply cannot do.
“It can process those millions or billions of images, very quickly, and set aside the ones that meet a certain set of criteria far quicker and more consistently than any human.
“You need smart people using smart technology. It is not ‘either or’.” Dr Viktor Dörfler, University of Strathclyde Business School
“Of course, it cannot ‘test’ for cancer – it cannot make a judgment as to whether the patterns identified mean the patient should be sent for treatment. That has to be done by a human. But that human could decide to set the goal function to admit patients that exhibit certain characteristics.
“However, that is the real value of automation and ML,” Dörfler continues. “If you can bring together the human experts with the machine, so that the machine does the heavy lifting and identifies the patterns – but then refers them to the human for judgment – then that is a very powerful combination. You need smart people using smart technology. It is not ‘either or’.”
Dörfler says organisations that do not align human ‘smartness’ with technology, and that rely too heavily on automation and ML alone, run the risk of actually frustrating and annoying customers, rather than enhancing their experience.
To demonstrate, he shares another personal story.
“In 2015 I attended a conference with my mentor in Lima, Peru,” he recalls. “Clearly, the company through which we booked our flights and accommodation had fed our information into a machine and, from that day on and for the next two years, we kept receiving flight offers for Lima. This is pointless – and it shows how the lack of human input creates a bad experience. If you asked a human travel agent how to deal with a person who, at 63 – as my mentor was then – had travelled to Peru for the first time in his life, that travel agent would suggest he is unlikely to want to go to Peru again soon. Instead, you might assume, by looking at our travel history, that these guys are travelling all over the world – and would want flight offers to, for example, previously unvisited destinations. But AI cannot do that, because AI cannot think.
“AI does not make you smarter. AI amplifies what you have, and if you happen to be stupid, it will amplify that as well.” Dr Viktor Dörfler, University of Strathclyde Business School
“The problem is that those who believe in the ‘thinking machine’ think it is only a matter of time. I believe that it cannot be done at all. You see, any such case could be programmed – but only one by one, as there is no thinking involved.”
Despite that view, Dörfler is clear that machines – automation and ML – can and do add value to banks, suggesting that examples are evident in everyday life.
“One obvious area where this technology is working really well is in fraud detection,” he says. “At another conference, when the participants went for lunch, they left their jackets in the conference room. During the afternoon session, the police came, stopped the conference and informed the participants that fraudsters had entered the room during the lunch break, stolen a bunch of credit cards and used them to make a number of transactions nearby.
“What is interesting is that the participants did not know, and neither did the police,” he adds. “The reason it came to light was that the banks figured it out, because their machines had identified – based on pre-set parameters – an unusual spending pattern. There was a deviation from what was expected. This was flagged and identified – and measures were put in place to stop it.
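The fraud-flagging principle in this story – a transaction deviating strongly from a customer’s expected spending pattern – can be sketched as a simple statistical test. Real systems combine many more signals, and the threshold and figures below are invented for illustration.

```python
# Toy anomaly flag: mark a transaction that sits far outside a customer's
# usual spending distribution. Threshold and data are hypothetical.
import statistics

def is_unusual(history, amount, threshold=3.0):
    """Flag an amount more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return amount != mean
    return abs(amount - mean) / stdev > threshold

# Hypothetical typical card spend, in pounds.
usual_spend = [12.0, 9.5, 14.0, 11.2, 10.8, 13.1]
```

As in the article’s example, the machine only flags the deviation; a human still sets the parameters and decides what happens to a flagged transaction.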
“This works fantastically well and it is going to expand in the future. But again, it is going to support the experts, not replace them. AI does not make you smarter. AI amplifies what you have, and if you happen to be stupid, it will amplify that as well.”
As for whether these technologies can help banks enhance customer loyalty, Dörfler says they can if they are used to add value – such as through the fraud prevention activity outlined above – but not if banks over-rely on them, as that results in a lack of common sense and, therefore, greater frustration for customers.
In addition, he says there are certain bank experiences for which customers, at least some of them, will always want human interaction – and where only human interaction can help them achieve their aims.
“People who do banking as I do barely need to see a human banker ever,” he explains. “I have automated all my payments, my salary comes in and it covers all my outgoings, I do not need lots of new products or help. This is all stuff that simple database processes, which should be automated appropriately, can look after.
“So, when do you think I would be OK with AI and when would I prefer to talk to a human? In general, if I can ask a precise question, AI should work; but if I need actual advice – when I don’t even understand what my problem is – I would like a human. So how do you reconcile this? I think the answer may be so simple that those who push out AI chatbots a little too eagerly don’t even think of it: give your customers a choice. Although it is tricky to specify in advance a constellation of circumstances that would always get it right, I can tell you in most cases ‘there and then’. And there is a nice side effect of this approach: if the customers choose AI, they are less likely to be unhappy about it.”
There is one last question to answer, says Dörfler. “Why do I argue that computers will not be able to think? If you look at those AI solutions in which knowledge is somehow encoded – symbolic reasoning systems and knowledge-based expert systems – the expert needs to spell out first what we put into the AI; in other words, knowledge is limited to explicit knowledge. In systems capable of ML, which apart from ANNs also include knowledge-based expert systems, learning is reduced to reinforcement learning. This means that we lose most of human knowledge and learning in either case. And it is not a computer problem: we cannot make tacit knowledge and richer forms of learning happen in computers, as they are non-algorithmic – in fact, we do not know how they work. We do know computers – what we do not know is the human mind…”