In this piece, Paulo Kaneta, International Consultant at Altus Consulting, takes a look at AI and how insurance brands can use its power, whilst retaining the human touch.

All industry verticals are currently trying to navigate the buzz, hyperbole, uncertainty and facts around Artificial Intelligence (AI). Some pioneering organisations are daring to dip their toes into the unknown waters of AI, but many more are yet to take the plunge and embrace the technology. Many cite uncertainty around AI applications and how they are used, wanting a concrete view of how the technology will shape their industry.
I recently attended the Insurance Innovators Summit 2023 in London, where, unsurprisingly, most of the discussion gravitated towards AI, highlighting the risks and concerns of using the technology, with limited discussion of the actual solutions being developed and implemented.
This demonstrates that we are currently much more aware of the challenges of using AI, such as data quality, customer interaction, complex decision-making capacity, and job replacement, than of its benefits. As the industry faces the imminent impact of AI, it is crucial for us, as humans, to build a clear view of how we can use this innovation to support our work rather than replace it. AI is not going away, and we need to evolve our thinking towards a future with a symbiotic environment of AI technology working alongside humans, supporting us in our roles and enabling us to be innovative, efficient and bring human-centric values to our work.

DATA QUALITY
As we get deeper into the world of AI, two major topics frequently come to light: data quality and AI readiness.
The first is analogous to the historical relationship between process and automation, where a good process had to be in place before a good automated solution could be guaranteed. In the same way, in today's AI world, data is the key to developing good AI. The maxim "good AI requires good data" captures the fundamentals of a successful solution: access to good, accurate, relevant, and unbiased data.
A well-known example is that of a "big tech" company that used AI to filter candidates during recruitment. It realised that its artificial intelligence, built on 10 years of historical data, was biased towards men because it had "learned" that gender was a relevant selection variable. If a similar issue arose during the underwriting process, an AI solution could exponentially magnify any data issue, bringing unwanted risks into the company's portfolio and potentially causing severe financial and reputational damage to an insurer.
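The mechanism behind this kind of bias can be illustrated with a minimal, hypothetical sketch. The data, names and numbers below are invented for illustration only; the point is simply that a model trained on biased historical decisions reproduces the bias it was shown, even when the underlying skill of candidates is identical:

```python
# Hypothetical illustration: a naive "model" trained on biased historical
# hiring data reproduces the bias it was shown. All data here is synthetic.
import random

random.seed(0)

# Synthetic "10 years" of historical decisions: skill is drawn identically
# for everyone, but women faced a much higher bar to be hired.
history = []
for _ in range(1000):
    gender = random.choice(["M", "F"])
    skill = random.random()
    hired = (skill > 0.5) if gender == "M" else (skill > 0.8)  # biased outcome
    history.append((gender, skill, hired))

def hire_rate(gender):
    """A trivial 'model' that just learns the per-gender hire rate."""
    outcomes = [h for g, s, h in history if g == gender]
    return sum(outcomes) / len(outcomes)

print(f"Learned hire rate, men:   {hire_rate('M'):.2f}")
print(f"Learned hire rate, women: {hire_rate('F'):.2f}")
# The learned rates diverge even though skill was distributed identically,
# so gender alone looks "predictive" to the model.
```

Any real recruitment or underwriting model is far more complex, but the failure mode is the same: the model faithfully encodes whatever pattern, fair or not, sits in the historical data.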
Extending this issue illustrates the concern around an organisation's readiness to implement AI. Any organisation must be sure of its ability to generate and maintain clean, unbiased data, considering not only its processes but also the entire architecture behind its business and technology. Just as digital transformation should never start with a change in technology, AI adoption should never begin with AI alone. Before jumping into any solution, an organisation should assess whether it has a clear strategy, with objectives around its customer-facing components such as product, service, brand, distribution channel and customer experience.
It is also crucial for an organisation to clearly identify its existing capabilities and those needed to deliver the defined strategy, and only then should it start discussing how AI solutions will align with those overall objectives. For example, where an AI solution is not yet capable of delivering a fully humanised customer experience, an organisation needs to consider whether the overall experience offered is aligned with its CX strategy.
There are many other concerns around the wider use of AI in an organisation's operations, such as the scalability of mistakes, the autonomy entrusted to AI in executing decisions, and anxiety around it replacing jobs. However, the popularisation of tools such as ChatGPT, Co-pilot, Bard and others unveils a broad range of opportunities to transform the industry. AI's ability to execute processes with speed, precision and consistency, and its capacity to analyse and process vast volumes of data, paves the road for organisations to start effectively discussing hyper-personalisation of products and experiences, implementing cutting-edge fraud-prevention solutions, and even proposing low-touch interactions for historically high-touch products.

PERSONALISED EXPERIENCE
As the world navigates and adopts these opportunities, the strategic use of AI in insurance should go beyond operational efficiency; it should be the catalyst that shifts the industry toward a more personalised and adaptive landscape.
When we consider the current landscape, where there are as many (if not more) uncertainties than opportunities in using AI, it is essential to recognise the need to discuss the harmonisation of human-AI interaction. We cannot deny the impact AI solutions will have on our lives from now on, yet it remains uncertain what level of symbiotic relationship will exist in the future.
Having said that, it is important to start understanding and discussing the terms of coexistence and to build ecosystems where the business is still run by humans who are empowered by AI, and not the other way around. Although AI is predicted to replace many tasks currently executed by humans, we are still far from an AI capable of empathy and creativity, and those are the key elements in building strategic differentiation. Taking this into consideration, the greater risk is not AI replacing humans but the culture of dependency that these solutions can create in us. AI may never be as creative and empathetic as humans, but it can take creativity and empathy away from us.
In summary, although there is a buzz around AI applications in the insurance industry, the real impacts are more complex and still being discovered. Many pioneering companies are already developing proposals and solutions, which is an excellent start to building knowledge and understanding of the technology. However, the actual impacts of AI should be discussed at a deeper level, not only from a technological point of view but from a holistic perspective, considering how the technology fits into the organisation's overall strategy.
The real challenge is not the technology by itself, but the questions around how these innovations will fit with the human aspect of the business, whether in delivering customer experiences or in supporting business decisions within the organisation.
Through all of this, it is important to remember that robust data governance and controls, aligned to a state-of-the-art understanding of risk, need to govern the technological tools being implemented. All of this has to sit within a well-defined Target Operating Model or "North Star", covering people, process and technology, towards which the organisation works: utilising AI as a tool to improve the competitiveness and quality of the business, not as a brain to make decisions for the organisation.