Building Trust to Confidently Underwrite AI

This Q&A piece is by: Alex Johnson, VP Insurance, Global Industry Lead, Quantexa

Q: Why are insurers embracing AI internally but hesitating to insure it externally?

A: AI promises clear gains: productivity increases through faster intake, claims triage, and quicker interpretation of complex data for risk selection. Externally, however, when it comes to insuring the companies that use AI, hesitation persists because of AI's opaque and fast-moving nature, which has recently led major carriers to seek permission to exclude AI-related liabilities.

There is still considerable unpredictability in model outputs, and confidence in the transparency and consistency of AI use is undermined by hallucinations, leaks, hacks, and cyber threats. Combined with the challenge of determining who is at fault when poor decisions cause harm, it becomes difficult to predict accurately both the likelihood and the impact of AI going wrong.

These dynamics make it hard to underwrite any coverage accurately.

Q: So what does “AI-ready” actually look like, and where are most insurers falling short?

A: “AI-ready” implies that the necessary data groundwork has already been completed, with data accurately and reliably informing large language model execution. I see this generally falling into four main categories:

● Trust: Where provenance, lineage, and quality are known; bias and drift are monitored; and every feature can be traced to its source.

● Control: Data access for LLM use is restricted to trusted information, which is governed, logged, and policy-driven across lines of business and external third parties.

● Connectedness: Where data can be brought together to inform greater insight into parties and entities you do business with, including people, places, companies, risk objects, contact information, and suppliers. This means models can learn from real-world relationships and behaviours, not siloed individual records.

● Context: Generating signals that are enriched with features from connected and trusted data, which includes both internal and external sources. This context helps ground AI and LLMs to reflect on how risk manifests in the real world.

Companies often fall short due to fragmented systems, duplicate data, and siloed processes, creating weak lineage between source data and AI outcomes. These blind spots affect model behaviour and make AI results difficult to validate, and therefore difficult to insure.
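To make the fragmentation problem concrete, here is a minimal sketch of duplicate-record unification with traceable lineage. The record fields, the match key (normalised name plus postcode), and all names are assumptions invented for illustration, not Quantexa's actual matching logic:

```python
from dataclasses import dataclass, field

@dataclass
class SourceRecord:
    # Raw record as received from one of several siloed systems
    system: str
    record_id: str
    name: str
    postcode: str

@dataclass
class UnifiedEntity:
    # Single party view; `lineage` retains every contributing source record
    name: str
    postcode: str
    lineage: list = field(default_factory=list)

def unify(records):
    """Group records by a simple match key (normalised name + postcode)
    and keep full lineage back to each source system."""
    entities = {}
    for r in records:
        key = (r.name.strip().lower(), r.postcode.replace(" ", "").upper())
        ent = entities.setdefault(key, UnifiedEntity(r.name, r.postcode))
        ent.lineage.append((r.system, r.record_id))
    return list(entities.values())

records = [
    SourceRecord("claims", "C-101", "Acme Ltd", "EC1A 1BB"),
    SourceRecord("policy", "P-884", "ACME LTD", "ec1a1bb"),
    SourceRecord("crm",    "X-007", "Globex",   "SW1A 2AA"),
]
unified = unify(records)  # two entities; Acme's lineage spans two systems
```

The point of the `lineage` field is exactly the traceability described above: every feature a model later consumes can be walked back to the system and record it came from.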


Q: How can insurers build greater trust in this space (including their own systems)?

A: Carriers should treat AI like any other emerging external risk. Like cyber and business interruption a decade ago, AI may seem opaque and unquantifiable today, but with standards, better data, and transparent risk models, it can quickly become insurable.

The same playbook can apply here. With advances in technology, I expect this evolution to happen much faster through:

1. Decision governance: Defining policies linking AI use to business risk, and mapping them against outcomes. AI governance should mandate documented data frameworks that cover trust, control, connectedness, and context as previously outlined.

2. Transparent pipelines: Maintaining full lineage from raw data through unification processes (such as single customer / risk view creation), feature engineering, and model building, to the final decision. This enables claims severity, frequency, and outcomes to be better understood.

3. Continuous assurance and visibility: Monitoring input data quality and model drift, with corrective processes that track outcomes in real time and trigger reviews when necessary, much as cyber insurers now track vulnerability management and incident response maturity.

4. Combining internal and external intelligence: Because historical AI loss data is sparse, carriers should supplement internal metrics with external signals like regulatory actions and litigation trends. This broader context improves exposure and accumulation risk assessment, similar to how business interruption models now use supply chain data.
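The continuous-assurance step above can be sketched very simply. This is a crude, illustrative drift check (a z-style distance on the feature mean, with an assumed threshold), not a production monitoring method:

```python
import statistics

def drift_score(baseline, current):
    """Crude drift measure: absolute shift in mean, scaled by the
    baseline standard deviation (a z-style distance)."""
    base_sd = statistics.stdev(baseline) or 1.0
    return abs(statistics.mean(current) - statistics.mean(baseline)) / base_sd

def needs_review(baseline, current, threshold=1.0):
    # Trigger a human review when an input feature has drifted
    # beyond the agreed threshold
    return drift_score(baseline, current) > threshold

baseline = [0.9, 1.1, 1.0, 0.95, 1.05]    # feature values at model build
stable   = [1.0, 1.02, 0.98, 1.01, 0.97]  # similar distribution: no review
shifted  = [2.1, 2.3, 1.9, 2.2, 2.0]      # clearly drifted: triggers review
```

Real deployments would use distributional tests over many features, but the governance pattern is the same: measure, compare against an agreed baseline, and escalate when a threshold is breached.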

Q: How does Decision Intelligence work in real underwriting, and what risks can it surface that traditional analysis can’t?

A: Decision Intelligence connects source data to entities, context, and decisions. In underwriting, it creates a real-time, unified view of risk at submission. This covers rich party-level profiles, corporate structures, historical and active relationships, prior claims, exposures, and emerging signals.

From there, you can:

● Surface hidden correlations: e.g. a vendor dependency that ties dozens of insureds to the same LLM provider – indicating accumulation risk from a single model update.

● Detect behavioural anomalies: e.g. a sudden spike in AI-generated customer interactions that could inflate misrepresentation or E&O exposure.

● Explain signals, not just scores: underwriters see why the risk shifts – what data points, relationships, and events drove the change – and can document acceptance or decline rationales.

● Close the loop: leverage the same approach to quickly assess and segment claims and to review information post-bind. Losses and near-misses are fed back into the data foundation, leveraging tools such as knowledge graphs. These sharpen future feature generation and, in turn, selection and pricing models.

Traditional “row-and-column” analysis often misses this, and a contextual approach can better capture systemic and third-party dependencies to reduce carriers’ concerns about the “network effect” of aggregated AI losses.
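The accumulation-risk example above (many insureds tied to one LLM provider) can be illustrated with a few lines. The portfolio, vendor names, and concentration threshold are all invented for the sketch:

```python
from collections import defaultdict

# Hypothetical portfolio: each insured with its declared AI vendor
# dependencies (names are invented for the example)
portfolio = {
    "Insured-A": ["LLMCo", "CloudOne"],
    "Insured-B": ["LLMCo"],
    "Insured-C": ["LLMCo", "OtherAI"],
    "Insured-D": ["OtherAI"],
}

def accumulation_hotspots(portfolio, min_insureds=3):
    """Invert the dependency map and flag any single vendor that
    ties together at least `min_insureds` policies."""
    by_vendor = defaultdict(set)
    for insured, vendors in portfolio.items():
        for v in vendors:
            by_vendor[v].add(insured)
    return {v: sorted(i) for v, i in by_vendor.items() if len(i) >= min_insureds}

hotspots = accumulation_hotspots(portfolio)  # flags the shared LLM provider
```

A row-and-column view of the same portfolio would show each policy in isolation; it is the inverted, connected view that exposes the single point of failure.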

Q: How can stronger data foundations enable insurers to confidently underwrite AI risk and lead, rather than pull back from this emerging class?

A: Whilst the market is signalling caution and some carriers have filed AI exclusions or carve-outs, I see positive signs. Many carriers, rather than retreating, are looking to invest in solutions that enable them to lead the market in pricing this risk.

This means investing in their own technology and data controls, building consistent standards, and collaborating with brokers, regulators, and vendors to normalise things such as risk definitions and how outcomes are evidenced. The bottom line: while some hesitancy is understandable given AI's recency, there are tools and techniques insurers can both use and encourage that enable more rigorous governance and a clearer understanding of AI-enabled decisions.

Insurers can use their own investment to convert uncertainty into an underwriting edge, strengthen internal AI adoption, and create the transparency that external markets need to price this risk with confidence.

I have no doubt we will see conversation move quickly from blanket exclusions to evidence-based coverage throughout 2026.
