
AI will revolutionise customer service in banking — but for that to happen, banks need to start structuring their data for AI today

24 November 2023

Finance

Banks need AI.

Banking models today might be recognisable to bankers from centuries ago. The technology banks use to carry out their business, however, has advanced as quickly as the world around them: new technology has consistently been adopted and adapted by banks to give their operations an edge. Banca Monte dei Paschi di Siena (MPS) is a good example: founded in 1472 in Siena, Italy, it has been through (and helped fund) the Renaissance, embraced the industrial revolution, and now offers the digital banking products and services that today's consumers and businesses take for granted. It is highly likely that a bank like MPS will continue on this trajectory and take advantage of the next wave of technological advancement driven by AI, integrating it to improve the service it brings to its customers.

Automation is an essential part of banking, as banks deal in a world that operates at scale and pace. Our view is that if banks don't embrace the AI revolution, they stand to lose their competitive advantage. AI can increase margins, reduce costs, improve the quality of customer service, reduce credit and interest-rate risk, and improve decision-making. If banks simply tread water while AI changes the world, we'll see something comparable to the fall of Kodak, which slept through the digital camera revolution.

Where are we today with AI?

Artificial intelligence is often traced back to the Turing test of 1950 [1], which aimed to determine whether a machine can demonstrate intelligence. Fast forward to today, and the evolution of computing hardware, infrastructure and power has enabled the creation of ChatGPT, an AI chatbot able to answer almost all of our questions. The role that ChatGPT has played in normalising AI in people's daily lives can't be overstated: it reached 30 million customers within a year and now has 100 million users.

Computing power, alongside data, has historically been the main constraint on the advancement of AI. That constraint, however, has now become an enabler: it is fundamentally the progress in chip design [2,3,4], computing capabilities [5], the ability to train models across many chips [6,7], and theoretical advances (e.g. transformers) [8] that have led to the latest breakthroughs in AI.

It is due to these advancements, and the scale of the internet in generating and making available so much data, that it has now been possible to train extremely large models on gigantic datasets, ultimately leading to the machine intelligence that is used in ChatGPT. To put the size of these models into perspective, if you wanted to train ChatGPT on your new MacBook, it would take you around 10,000 years — large models are trained on supercomputers which often have installation costs in excess of $300m.

And that has brought us to the point we are at today, with software that can answer almost any question we have; that can create images, videos and music; that can write computer programs; and that can demonstrate basic reasoning. We are undoubtedly just at the beginning of what will be possible with AI. There are very exciting times ahead, and lest the pace of progress be dampened by fear-mongering, it's important to point out that the benefits of AI outweigh the otherwise legitimate concerns often discussed in the media [9].

Large Language Models are the foundation of ChatGPT — but how do you integrate them into your bank’s customer service?

One of the most exciting opportunities for banks is in integrating the latest AI advancements in Large Language Models (LLMs) into customer services. The success of AI in banking hinges on its ability to effectively utilise the vast amounts of data available — but how does a bank get started with this? There are fundamentally only two choices. Either you build it yourself, or you use (or customise) an existing 3rd party service, such as ChatGPT.

Choice 1: building it yourself

Banks, just like any other participant in the AI revolution, have access to excellent open-source libraries and models (such as https://github.com/taishi-i/awesome-ChatGPT-repositories, meta-llama/Llama-2-70b-chat-hf and https://github.com/karpathy/nanoGPT, or the pre-trained models on https://huggingface.co, to list a few). Whilst the full ChatGPT model itself is not available, smaller open-source alternatives and replications are, and they can be installed and used in your own applications.
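To illustrate how accessible these building blocks are, below is a minimal sketch of loading a small pre-trained model from Hugging Face and generating a reply to a customer-style prompt. The library usage is standard, but the model choice and prompt are illustrative only; a production assistant would need a far larger model, fine-tuning and guardrails.

```python
# Minimal sketch: text generation with a small open-source model from Hugging Face.
# Assumes `pip install transformers torch`; "distilgpt2" is a demonstration model,
# not a production-quality assistant.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")

prompt = "Customer: How do I reset my online banking password?\nAgent:"
outputs = generator(prompt, max_new_tokens=60, do_sample=True, temperature=0.7)

print(outputs[0]["generated_text"])
```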

But there is a catch. Using these models in a ChatGPT-like way requires a very experienced team of AI engineers, data engineers and skilled programmers, plus access to both significant computing resources and very large datasets. In our professional experience, it takes a team of 15 AI engineers and 15 software engineers working enthusiastically at a high pace for two full years to build a standalone in-house mini-ChatGPT, including fine-tuning with reinforcement learning from human feedback (RLHF) to reach a similar quality level, all the while incurring millions of dollars of cost in computing resources, data storage and training-data creation. And this estimate is quite probably on the optimistic side. Again, from experience, building your own workable AI is very difficult, comparable to building your own computer chip.

Choice 2: integrating 3rd parties

Integrating 3rd parties is straightforward enough. Connect the APIs, do some back-end work, and in no time your application will be connected to a high-quality service such as ChatGPT, which your in-house team can then customise further.
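To make that concrete, here is a hedged sketch of what such an integration might look like using the OpenAI Python SDK. The model name, system prompt and example question are placeholders, and the exact SDK interface may change between versions.

```python
# Minimal sketch of a 3rd party LLM integration for customer service.
# Assumes `pip install openai` (v1 SDK) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def answer_customer_query(question: str) -> str:
    """Send a single customer question to the hosted model and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder; choose the model tier your budget and latency allow
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a polite customer service assistant for a retail bank. "
                    "Never ask for full card numbers or passwords."
                ),
            },
            {"role": "user", "content": question},
        ],
        temperature=0.2,  # keep answers conservative in a banking context
    )
    return response.choices[0].message.content

print(answer_customer_query("How do I order a replacement debit card?"))
```

Note that every message in that request leaves your infrastructure, which is exactly where the questions in the next paragraph begin.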

But what you gain in convenience, your customers lose in privacy. What are the implications for your customers' data of connecting to an AI API? Where does it stand with regard to data protection? What about compliance, your regulators and your customers' privacy? As an example of the potential implications, everything you feed into ChatGPT is recorded, so will ChatGPT learn all of your customers' names, their questions and their money concerns? Any degradation of privacy is unacceptable for banks.

You can request zero data retention from OpenAI, but you need to get in touch with their sales team, and it will take some time. Last summer, OpenAI released ChatGPT Enterprise to address exactly this, preserving the privacy of business data. The promise is SOC 2 compliance, although hosting is still US-based rather than in the EEA. As you can imagine, the queue is long.

Another viable alternative is AWS Bedrock and its foundation models, which give you full control over your own data; on the flip side, they still require a degree of DevOps setup and user-interface build, and because they do not use GPT-4, output quality differs.
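For comparison, here is a hedged sketch of invoking a foundation model through Bedrock's runtime API with boto3. The model ID and request body follow the Anthropic Claude format that Bedrock documented at the time of writing; other Bedrock models expect different payloads, so treat these details as assumptions to verify against the current AWS documentation.

```python
# Minimal sketch of calling a foundation model via AWS Bedrock.
# Assumes `pip install boto3`, configured AWS credentials, and model access granted
# in the Bedrock console. The model ID and payload format are assumptions to verify.
import json

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def answer_customer_query(question: str) -> str:
    """Send a single customer question to a Bedrock-hosted model and return the reply."""
    body = {
        # Claude-on-Bedrock prompt format as documented in 2023; other models differ.
        "prompt": f"\n\nHuman: {question}\n\nAssistant:",
        "max_tokens_to_sample": 300,
        "temperature": 0.2,
    }
    response = bedrock.invoke_model(
        modelId="anthropic.claude-v2",  # placeholder; pick from the models you have enabled
        body=json.dumps(body),
        contentType="application/json",
        accept="application/json",
    )
    result = json.loads(response["body"].read())
    return result["completion"]

print(answer_customer_query("Why was my card payment declined?"))
```

As noted above, this route keeps you in control of your data, but the DevOps and interface work remains yours.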

It is important to mention that ChatGPT is currently best-in-class. But this will likely change a few times over the coming years: many companies are training LLMs, and will compete for customers using their AI. This being the case, when using 3rd parties it is a good idea to stay flexible.

Even when using a 3rd party supplier, there is still a trade-off between cost-efficiency and data ownership. Whichever 3rd party you choose, you will need to pay meticulous attention to their data policies. How will they use your data to train their models? How will they share your data? Where is your data stored? You might well find that 3rd party integration is more complex on the legal side than on the implementation side.

What are the benefits of AI in customer service?

One obvious area where AI and LLMs could revolutionise banking is in customer service. The benefits of using AI to power customer service in banking are hugely significant:

  • Uniform and near-infinite scaling of tone of voice. Banks can define precisely how they want their customers to be treated and apply that tone of voice at near-infinite scale without losing quality. Customer service therefore becomes ultra-scalable and hyper-personalisable, able to match a multitude of customer profiles.

  • Need for human intervention can be reduced massively. Banks will be able to reduce the share of cases that require human intervention from around 40% down to approximately 5%.

  • Customer waiting time will be near zero. This is significant not just for routine enquiries but is also extremely valuable for urgent case resolution, and it increases trust in the bank.

  • Escalations are timely and efficient. Staying with urgent case resolution, GPT-class models are very good at assessing urgency, which helps with escalating genuinely pressing or challenging cases to human agents (a minimal triage sketch follows this list).

  • Customer service costs can be reduced significantly. A call centre of 300 people can be replaced with an AI customer service team of 50. The AI customer service team is a mix of engineers, sales and customer service professionals, whereas call centres are predominantly just customer service professionals.

  • Low cost per query. AI’s very low cost per query leads to further cost savings, in comparison to human-led services.
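As referenced above, here is a hedged sketch of how an LLM could triage urgency before handing a case to a human agent. The prompt wording, the 1-to-5 scale and the escalation threshold are all illustrative assumptions, and the client is the same OpenAI SDK set-up shown earlier.

```python
# Minimal sketch of LLM-based urgency triage for escalation to human agents.
# Assumes the OpenAI v1 SDK as in the earlier example; the scale and threshold
# are illustrative choices, not a recommended policy.
from openai import OpenAI

client = OpenAI()

def urgency_score(message: str) -> int:
    """Ask the model to rate urgency from 1 (routine) to 5 (critical)."""
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": (
                    "Rate the urgency of the customer's message on a scale of 1 to 5, "
                    "where 1 is routine and 5 is critical (e.g. suspected fraud). "
                    "Reply with a single digit only."
                ),
            },
            {"role": "user", "content": message},
        ],
        temperature=0,
    )
    return int(response.choices[0].message.content.strip())

def route(message: str) -> str:
    """Escalate high-urgency messages to a human agent; let the AI handle the rest."""
    return "human_agent" if urgency_score(message) >= 4 else "ai_assistant"

print(route("I think someone has stolen my card and is making payments right now!"))
```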

What are the negatives of using AI in customer service?

The negatives of AI in customer service are primarily associated with its setup costs, and the difficulties of maintenance.

  • Setup costs can be very expensive. Whether crafting the AI-user experience and designing the human handover process with a 3rd party provider, or building from scratch with an in-house AI team, the implementation costs can be very high compared to traditional customer service setups. This is due to the novelty, engineering, calibration and testing requirements, cost of computing power, or 3rd party service fees.

  • Training and maintenance are difficult and have to be very accurately monitored. AI can change its behaviour unfavourably, provide incorrect answers, or even be generally disliked and perceived as not useful, and this could be detrimental to both the bank and its customers.

  • Complex enquiries could be misinterpreted. Private banking, for example, could suffer reputational damage if its AI proves "too stupid" to handle complex enquiries.

How can banks get started today?

To get started in the fast-paced, rapidly evolving AI space, it is important to take a deep breath and aim to understand the basics.

We recommend the following for banks — or indeed any institution — to get ready for the AI revolution, particularly in customer service:

  • Get data ready! AI needs data in order to function, and this data, your data, needs to be centralised, clean and available to your AI model in a suitable format. The objective is to create a flawless dataset that can be used to train your company's in-house AI model once ChatGPT-class open models become widely available, likely in two to three years' time. Seamlessly collecting and storing all of your customer information now (interactions, usage data, chats and so on) will ensure that your business is prepared to adopt the AI revolution when that point arrives. A minimal sketch of what such a structured record might look like follows this list.

  • Build a chatbot prototype with a 3rd party API. If you have not done so yet, start building an MVP of your customer service on the basic ChatGPT API (as sketched earlier). See how far you can take it, and explore how you can address the privacy issues.

  • Hire 1–3 AI engineers with the aim of building your own in-house LLM sandbox. You might end up never using it, but only by doing it yourself will you understand the full depth of the technology and the challenge of building an LLM engine. The investment is partly one of operational readiness and partly one of education: your AI engineers should regularly give company seminars on their findings and progress, to educate decision-makers and team members about the technology behind LLMs.
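As referenced in the first step, here is a hedged sketch of what a centralised, AI-ready customer-interaction record might look like. The field names and the JSON Lines layout are illustrative assumptions; the point is that every interaction is captured as clean, structured text that can later feed fine-tuning or retrieval.

```python
# Minimal sketch of an AI-ready customer-interaction record stored as JSON Lines.
# Field names are illustrative assumptions; the goal is clean, centralised,
# structured data that can later become fine-tuning examples or a retrieval index.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class InteractionRecord:
    customer_id: str        # pseudonymised identifier, never raw personal data
    channel: str            # "chat", "phone_transcript", "email", ...
    timestamp: str          # ISO 8601, UTC
    customer_message: str
    agent_response: str
    resolution: str         # "resolved", "escalated", "pending"
    tags: list[str]         # e.g. ["card", "lost", "travel"]

record = InteractionRecord(
    customer_id="cust_8f3a",
    channel="chat",
    timestamp=datetime.now(timezone.utc).isoformat(),
    customer_message="I have lost my debit card while travelling.",
    agent_response="I have frozen the card and ordered a replacement to your home address.",
    resolution="resolved",
    tags=["card", "lost", "travel"],
)

# Append one JSON object per line so the dataset is easy to stream into training jobs.
with open("interactions.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(asdict(record)) + "\n")
```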

These three steps, taken now, will build you a great foundation from which to harness AI for your customer service in the future. This puts you on a path to massively reduce your customer service costs, potentially by up to 50%.

A final word on data. Privacy and data protection are essential for building trust with bank customers, and how AI systems handle and protect user data is paramount. Striking a balance between the need for data to drive AI systems, and the obligation to protect user privacy, is crucial to the decision-making around the implementation of an AI model. Robust data governance practices, secure data handling protocols, and strong encryption methods can help address privacy concerns.