Richard Archer, Insurwave's Chief Strategy Officer, takes a look at the AI future of insurance.

Dr Melvin Kranzberg, famed professor of the history of technology at the Georgia Institute of Technology and founding editor of Technology and Culture, observed in his 'laws of technology' that 'technology is neither good nor bad; nor is it neutral'. It is moulded by those who use it. In the insurance industry, AI should be used and understood as a tool like any other: one that can enhance and improve roles and processes that already exist.
While AI's market size is projected to grow from USD 11.33 billion in 2024 to USD 49.3 billion by 2032, there is still some scepticism around its adoption. That scepticism is not entirely misplaced: simply dropping AI into an existing process is bound to cause issues, and there will always need to be a human touch. AI's output must be reviewed, which is where human oversight, drawing on your own underwriting teams and subject matter expertise, whether internal or external, comes into play.
To overcome this scepticism, you must first build trust. That effort can be organised around three key pillars:
1) Regulatory frameworks
2) Human oversight
3) Facilitated understanding
Regulatory frameworks
With the EU developing the AI Act, a comprehensive legal framework to address the risks associated with this evolving technology, and global AI security guidelines released jointly by the UK and US cybersecurity agencies, we believe this will help drive further clarity and establish benchmarks for evaluating AI solutions sensibly.
Taking a closer look at the EU framework, some interesting themes can be seen which will help move the dial from scepticism to adoption. For example, providers of high-risk AI systems (AI intended to be used as a product, or as the safety component of a product) must meet strict requirements to ensure that their AI systems are trustworthy, transparent and accountable. Among other obligations, they must conduct risk assessments, use high-quality data, document their technical and ethical choices, and keep records of their systems' performance.
While we can expect guidelines specifying the practical implementation of the classification of AI systems, alongside a comprehensive list of practical examples of high-risk and non-high-risk use cases, these requirements lay the foundation from which new opportunities for AI applications in insurance can be derived.

Human oversight
The insurance industry has been, and continues to be, slow to adopt new technology, but progress is being made, evident from the investments going into automated workflows. However, AI is not a replacement for the people who previously updated data manually. It is an enabler: it helps insurers better understand their data, and gives underwriters and claims teams direct access to the right insights, insights they would otherwise have to dig out by manually interrogating the data or defining rules to find the right report or analysis. AI accelerates the identification of patterns in the data, enabling better decision-making.
One area yet to be fully explored is predictive analytics, especially in a world of geopolitical uncertainty. Specifically, how can insurers use predictive analytics to help their clients better understand their own risks, including risks they have not yet anticipated?
Similarly, the ability to integrate and overlay third-party data sources to give insurers a different view of the potential impact of an event is becoming increasingly vital. Building on that, AI can analyse historical data to identify potential patterns. For example, trend data might be gleaned for a particular city or geography; any emerging trends can then inform future client services and propositions, a key focus for insurers.
Facilitated understanding
Like any tool, AI's effectiveness is measured in part by its users' knowledge. Comprehensive upskilling and adoption programmes are therefore key to maximising the technology's potential.
As companies develop the technology platforms that will enable AI, they should concurrently implement educational programmes to inform employees about the practical, legal and ethical aspects of AI. Upskilling the workforce to excel in an AI-driven environment is essential for fully leveraging the benefits of this technology, for both companies and their employees.
From a data science perspective, data scientists must be trained to think beyond model metrics and in terms of business metrics. On the business side, there must be an understanding of the specific risks of AI and how to respond to them.
Don’t wait
Insurers have a wealth of data that is not being explored. Early use cases are showing promise in terms of value realised, but they have only scratched the surface. What we do know is that initial investments in AI will uncover new insights and lay a foundation for future exploration. Applications such as predictive analytics show that by collating, combining and interrogating data sources, you may uncover a wealth of new opportunities.
