This article is by Selim Cavanagh, Director of Insurance at Mind Foundry. It examines the risks and potential benefits of AI technology across the insurance chain.
It is now common knowledge that artificial intelligence (AI) has the potential to fundamentally transform the insurance industry. This presents an extraordinary opportunity, and some insurers now have hundreds of models in production across their businesses. However, as AI adoption scales, so do the associated risks and costs. Those ahead of the curve are aware of these risks and want to get ahead of the problem, which has put AI regulation at the top of the agenda for governments, regulators, and organisations alike.
AI governance is a collection of frameworks, policies, and practices to ensure that AI technologies are developed and used in ways that minimise risk and maximise their intended benefits. Against an ever-changing regulatory landscape, rigorous AI governance is vital for insurers to build resilience, boost profits, and ensure fair outcomes for customers.
Risks associated with using AI in insurance
AI models are tailored to solve specific problems, but if factors like data availability, security, ease of integration, or business goals are not considered before implementation, operational inefficiencies will likely arise. If these questions aren't addressed at the start, insurers will be exposed to models that don't perform as needed, from underwriting the wrong policies to claims leakage. AI does not guarantee increased profits; in fact, it can have the opposite effect if insurers do not consider all of these factors when designing and building their models.
Equally critical to a well-thought-out implementation strategy is the ongoing upkeep of AI models to prevent further operational inefficiencies. 91% of models experience performance degradation within their first year, which makes monitoring and governing these models critical. The more models an insurer uses, the more difficult this challenge becomes.
It also means data science teams must spend more time and effort retraining and maintaining existing models rather than helping insurers scale their AI with new ones. Repetitive tasks like model retraining are not what data scientists want to be doing, which goes some way to explaining why the average data scientist remains in their current job for only 1.7 years before leaving.

New regulations, like the Consumer Duty, will affect how insurers use AI. Insurers will have to continually assess the inner workings of their AI models and their outcomes to ensure they comply with regulations. Failure to do so can lead to financial penalties, reputational damage, and a breakdown of trust with customers.
Insurers may also face legal ramifications if they fail to address the issue of AI bias. Customers often provide sensitive information to insurers, which means AI models could make recommendations influenced by sensitive attributes such as religion, address, or ethnicity – even if their creators never intended them to analyse those variables. Beyond the legal ramifications that could result from this, there are ethical considerations to contend with too.
At the same time as AI adoption has expanded in insurance, these systems have also become more advanced, able to make more accurate decisions using vast volumes of increasingly granular data. Consequently, determining liability in case of errors or failures has become increasingly difficult. As the regulatory implications of these problems become more defined and compliance becomes more essential, insurance leaders will need to clearly define accountability and ensure safeguards and contingency plans are in place.
AI models that lack transparency and explainability are a key industry issue. An inability to articulate to customers, internal stakeholders, and regulators how and why AI systems have made certain decisions and recommendations can undermine trust – something fundamental to the insurance sector.

What can be done to mitigate these risks?
Create a transparent AI framework
Existing and upcoming regulations mean that transparent and explainable AI models are more important than ever. Insurers need to be able to understand and analyse their models and explain their outputs to a range of stakeholders.
With a transparent framework for building, managing, maintaining, and scaling AI, potential issues can be promptly identified, allowing insurers to stay compliant and adaptable. An effective framework will also help insurers identify potentially biased recommendations, enabling them to take action before a model is deployed.
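As an illustration of the kind of pre-deployment check such a framework might include, the sketch below computes a disparate-impact ratio on a model's approval decisions across a sensitive attribute. The function name, the toy data, and the "four-fifths" threshold are all hypothetical; this is a minimal sketch, not a complete fairness audit.

```python
# Hypothetical pre-deployment fairness check: compare approval rates
# between two groups and flag the model if the ratio falls below the
# common "four-fifths" rule of thumb. Names and data are illustrative.

def disparate_impact_ratio(approvals, groups, group_a, group_b):
    """Ratio of approval rates between two groups (1.0 = parity)."""
    def rate(g):
        members = [a for a, grp in zip(approvals, groups) if grp == g]
        return sum(members) / max(1, len(members))
    return rate(group_a) / rate(group_b)

# Toy decisions: 1 = policy approved, 0 = declined
approvals = [1, 1, 0, 1, 0, 1, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

ratio = disparate_impact_ratio(approvals, groups, "B", "A")
if ratio < 0.8:  # below four-fifths: investigate before deployment
    print(f"Potential bias flagged: ratio {ratio:.2f}")
```

A real governance framework would run checks like this across every sensitive attribute and every decision point, not just a single approval flag.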
Outline accountability and risk ownership
To ensure AI models are reliable and robust, various stakeholders must be involved in their creation, implementation, and management. For example, the Consumer Duty will place the responsibility of any AI outcomes on the insurer and require them to be able to provide clear evidence that they are taking steps to ensure positive outcomes for their customers. From data scientists to system integrators, to departmental directors, there must be clear lines of accountability for risk ownership within an organisation.
With proper governance, hundreds of models spread across different business functions can be overseen together so that no generated information is left siloed. Models can then be combined to solve new problems, and learnings shared so that models do not only maintain performance levels, but also improve over time.
Create performance metrics and conduct rigorous testing
Operational boundaries and key performance indicators (KPIs) must be established to validate the performance of AI models and their impact on the business. These boundaries benchmark reliability and consistency, and feedback loops built around them facilitate continuous improvement.
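In practice, such boundaries can be as simple as an agreed acceptable range per KPI, checked on every reporting cycle. The sketch below assumes hypothetical KPI names and thresholds purely for illustration.

```python
# Illustrative sketch: validate model KPIs against pre-agreed
# operational boundaries. KPI names and ranges are hypothetical.

OPERATIONAL_BOUNDS = {
    "claims_accuracy": (0.90, 1.00),   # acceptable range for each KPI
    "avg_latency_ms":  (0.0, 250.0),
}

def check_kpis(metrics):
    """Return the list of KPIs that fall outside their boundaries."""
    breaches = []
    for name, value in metrics.items():
        low, high = OPERATIONAL_BOUNDS[name]
        if not (low <= value <= high):
            breaches.append(name)
    return breaches

current = {"claims_accuracy": 0.87, "avg_latency_ms": 120.0}
print(check_kpis(current))  # → ['claims_accuracy']
```

Any breach would then feed back into the retraining or review loop described above, rather than silently degrading the bottom line.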
Individual model monitoring protocols – such as drift detection – are also fundamental to AI adoption as they make it possible to identify whether a model’s performance declines before it affects the bottom line. It is important to recognise that all of this does not need to be done in-house. Many different types of software solutions can facilitate proper AI governance and insurance leaders should consider all available tools to ensure successful AI adoption and free up time for in-house data science teams to focus on more important and complex tasks.
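One common way to implement drift detection is the Population Stability Index (PSI), which compares the distribution of a model score (or input feature) at training time against what is observed in production. The sketch below is a minimal illustration; the binned distributions are toy data, and the 0.2 threshold is a widely used rule of thumb rather than a standard.

```python
import math

# Hypothetical drift-detection sketch using the Population Stability
# Index (PSI). Distributions are pre-binned proportions summing to 1.

def psi(expected, actual):
    """PSI between a baseline and a live distribution (0 = identical)."""
    eps = 1e-6  # avoid log(0) on empty bins
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected, actual)
    )

train_dist = [0.25, 0.25, 0.25, 0.25]  # score distribution at training
live_dist  = [0.40, 0.30, 0.20, 0.10]  # distribution seen in production

score = psi(train_dist, live_dist)
if score > 0.2:  # > 0.2 is often treated as significant drift
    print(f"Drift detected: PSI = {score:.3f}")
```

Running a check like this on a schedule lets teams catch degradation early, whether it is built in-house or delivered by one of the governance tools mentioned above.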
The future of AI in insurance
By deploying and maintaining explainable, well-governed AI models, insurers can achieve operational efficiencies, cost savings, regulatory compliance, and ultimately a competitive edge. In an industry as crowded as insurance, staying at the forefront of innovation is crucial.
With the right strategies, insurers can mitigate AI risks and fully capitalise on its benefits to set a new industry standard.
