AI Thoughts: Can Claims Settled By AI Still Retain The Human Touch?

One of the key topics at this year’s BIBA was the impact of AI on the insurance sector. From number crunching on admin and pricing, to FNOL (First Notification of Loss), chatbot queries and much more, the benefits of AI are potentially huge. When it comes to claims there are many things to consider, so IE fired a few questions over to Rob Bevington, Head of Data Science at Synectics Solutions, to find out more.

It’s worth running this as a mini feature because the theme of training AI, or using it as a kind of mentor or co-pilot, is something that was being talked about in depth at BIBA 2024. If AI is to become a true partner in the decision-making process on claims, then everyone needs to be sure that the tech is trained to be “human” in some respects.

Here’s the word:

Question: Can AI work predictively, by joining up pre-determined reference points in claims, or renewals?

An insurance claim for a damaged iPhone is logged just prior to Apple’s next release date. Does this date correlation mean the claim is fraudulent? No. But given that claims do spike prior to major tech releases, there is risk. Even more so if the individual in question has made such a claim in the past.

AI is already being used to link data points with outcome patterns to predictively risk score scenarios like this, and far more complex cases – simultaneously enabling automated fast tracking of good customers and more focussed investigations where necessary. One insurer using our predictive analytics solution, Precision, can now detect four times as many fraudulent claims (in a key risk category for them), 88% more efficiently.
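The article doesn’t describe how Precision works under the hood, but the idea of combining claim data points into a predictive risk score can be sketched with a toy logistic model. The feature names and weights below are entirely hypothetical, hand-set for illustration rather than learned from real claim outcomes:

```python
import math

# Hypothetical feature weights -- illustrative only, not the weights of
# any real model. In practice these would be learned from labelled
# historical claim outcomes.
WEIGHTS = {
    "days_before_product_launch": -0.08,  # closer to a launch date -> higher risk
    "prior_similar_claims": 0.9,          # past gadget claims raise risk
    "policy_age_months": -0.02,           # long-standing customers look safer
}
BIAS = -1.5

def fraud_risk_score(claim: dict) -> float:
    """Combine claim data points into a 0-1 risk score (logistic model)."""
    z = BIAS + sum(WEIGHTS[k] * claim[k] for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

# iPhone claim logged 3 days before a launch, by a newer customer
# with one prior similar claim
risky = fraud_risk_score({"days_before_product_launch": 3,
                          "prior_similar_claims": 1,
                          "policy_age_months": 6})

# Same kind of claim logged mid-cycle by a long-standing customer
routine = fraud_risk_score({"days_before_product_launch": 120,
                            "prior_similar_claims": 0,
                            "policy_age_months": 60})

print(risky > routine)  # the launch-adjacent repeat claimant scores higher
```

A score like this doesn’t label the claim fraudulent; it only ranks it for fast-tracking or closer investigation, which is the distinction the interview draws.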

AI’s predictive capability to make the right connections is even more impressive in automotive, where patterns become even more challenging to identify due to the volume of unstructured data that’s typically in play – especially in claims. The use of generative AI large language models to identify phrasing variations will have huge implications here.

Question: In some respects are we still “training” AI to look for patterns within data, timings on quotes or claims, keywords or phrases used etc?

We do still need to train AI, but phrasing it this way can be a little misleading. It carries an undertone of failure, suggesting that “successful” AI should perhaps be autonomous. I disagree.

In our experience, the best outcomes are achieved by combining the right data, the right training (and periodic recalibrations), and human expertise. It’s why our AI models present fraud teams with risk scores – and the key determining data points behind those scores – rather than absolutes.

Also, fraud MOs will evolve across the insurance industry as a whole and specifically within an insurer’s own portfolio. This means two things. Firstly, AI models trained on both syndicated industry data points and outcomes and an insurer’s own client-base data points and outcomes will always produce more accurate predictive results. Secondly, recalibration at specific intervals is needed to account for changing trends and to ensure models don’t get skewed. Generally, we see a 5% improvement in results with each recalibration we carry out.
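The interview doesn’t say how recalibration intervals are chosen, but one simple way to picture the “models get skewed” problem is drift monitoring: compare the fraud rate in a recent window of settled claims against the rate the model was trained on, and flag a retrain when they diverge. The function name and tolerance below are hypothetical, purely to illustrate the idea:

```python
def needs_recalibration(train_fraud_rate, recent_outcomes, tolerance=0.25):
    """Flag when the live fraud rate has drifted beyond `tolerance`
    (relative) of the rate the model was trained on -- a crude
    trigger for scheduling a recalibration before scores skew."""
    recent_rate = sum(recent_outcomes) / len(recent_outcomes)
    drift = abs(recent_rate - train_fraud_rate) / train_fraud_rate
    return drift > tolerance

# Model trained when 2% of claims were fraudulent; a recent window of
# 100 settled claims (1 = fraud, 0 = genuine) shows 3% -- 50% relative
# drift, so a recalibration is flagged.
print(needs_recalibration(0.02, [1] * 3 + [0] * 97))   # True
print(needs_recalibration(0.02, [1] * 2 + [0] * 98))   # False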

About Alastair Walker
20 years experience as a journalist and magazine editor. I'm your contact for press releases, events, news and commercial opportunities at Insurance-Edge.Net
