Some thoughts on AI from Nigel Cannings, CTO, and James Laird, COO, both of Intelligent Voice.


Generative AI has emerged as a groundbreaking innovation reshaping various industries, including insurance. Large Language Models (LLMs) wielded by pioneering entities provide a competitive edge. Intelligent Voice, collaborating with forward-thinking insurers, harnesses the transformative potential of Generative AI for claims management. However, amidst this evolution, notable risks have come to light.
In the prevailing excitement, there’s an idea that deploying Generative AI is easy. Many have engaged with ChatGPT and customised prompts for their distinct workflows. Given this familiarity, should insurers readily embrace this technology? Can they seamlessly integrate an LLM to scrutinise claims for fraudulent activities? The assumption seems straightforward, doesn’t it?
The simple answer is no. In its meticulous testing of cutting-edge LLMs against authentic insurance claims data, Intelligent Voice has uncovered instances where these models misrepresented or simply invented crucial claim details. These misrepresentations, evident in the model’s summary, judgment, and assessment of claims fraud risk, are known in data science terms as “hallucinations”.
For most day-to-day use cases, hallucinations are merely a frustration. When present in processes such as claims management, however, they become reminiscent of a phenomenon known as ‘fitting up’ that plagued law enforcement during the 70s and 80s: police officers manipulating circumstances to suit desired outcomes. The unconstrained behaviour of today’s AI models mirrors that practice, a metaphorical ‘Life on Mars’ with substantial consumer risks.
Where once a rogue officer might have ‘verballed’ a suspect by attributing a fictitious admission of guilt in their formal written notes, we have witnessed unconstrained, out-of-the-box models attempting the same. Unlike the rogue officer, the model has no nefarious intent; the problem is that it is very hard to get an LLM to refuse to answer a question outright, and it would rather “hallucinate” an answer than keep schtum. The model wants to please. Unfortunately, the impact can be significant.
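As a toy illustration of one common guardrail against this behaviour (this is a hypothetical sketch, not Intelligent Voice’s implementation): every detail a model extracts can be checked for literal grounding in the source document, and anything that is not found can be flagged for human review rather than accepted.

```python
def grounding_check(claim_text: str, extracted: dict) -> dict:
    """Split model-extracted fields into 'grounded' (the value appears
    verbatim in the claim text) and 'ungrounded' (a possible hallucination)."""
    report = {"grounded": {}, "ungrounded": {}}
    lowered = claim_text.lower()
    for field, value in extracted.items():
        bucket = "grounded" if str(value).lower() in lowered else "ungrounded"
        report[bucket][field] = value
    return report

claim = "Policyholder reports rear-end collision on 12 March; repair quote 1,850 GBP."
# Suppose a model returned these fields -- one of them is invented:
model_output = {
    "incident": "rear-end collision",
    "quote": "1,850 GBP",
    "admission_of_fault": "driver admitted speeding",  # not in the claim text
}
report = grounding_check(claim, model_output)
print(report["ungrounded"])  # -> {'admission_of_fault': 'driver admitted speeding'}
```

A verbatim-match check like this is deliberately crude; the point is that an auditable, deterministic step sits between the model’s output and any decision.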

In response to the corruption scandals that affected policing, there was a heightened demand for more rigorous evidence in court proceedings. This shift necessitated a change in policing style and nowhere was this more evident than within the Metropolitan Police’s elite team, the Flying Squad. The unit had to adopt a more proactive approach to law enforcement, focusing on capturing offenders in the act to provide incontrovertible evidence of criminal activities. This approach aimed to restore public trust and ensure that convictions were based on solid evidence, rather than potentially tainted by corrupt practices.
The issue of “fitting up” was ultimately dealt with by the courts and the evolution of best practice. Similarly, establishing guardrails becomes imperative when seeking reliable results from Generative models, with Intelligent Voice advocating a similar rules-driven approach as the route to embedding trust and transparency in the ethical deployment of LLMs.
It’s undeniable that a pressing responsibility comes with the remarkable power of Generative AI. While legislation lags or just misses the point, early adopters must implement essential processes promptly to ensure the responsible realisation of its value.
The potential for Generative AI in claims management is immense, promising streamlined processes and more accurate assessments. Nonetheless, these benefits come with caveats that demand immediate attention. Intelligent Voice’s engagement with insurers underscores the critical need for a structured framework governing AI utilisation in insurance. Imagine the uproar if good customers had legitimate claims unfairly declined simply because an LLM felt compelled to provide an answer to a question about risk. We have witnessed the terrible impact of overreliance on, and unchallenged acceptance of, computational reasoning in the Post Office scandal: this can never be repeated.
Intelligent Voice addresses this risk by implementing a fully explainable, rules-based approach at the front end of the claims triage process, using Generative AI only to expand on the value of that analysis, thereby creating a fully auditable decision intelligence pipeline.
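A rules-first pipeline of this kind can be sketched as follows (the rules and thresholds here are invented for illustration and are not Intelligent Voice’s product): deterministic rules decide the risk flags, every firing is logged, and a generative model would only be asked to summarise the findings for a human reviewer, never to originate a judgment.

```python
# Hypothetical triage rules: (rule id, human-readable description, test).
RULES = [
    ("R1", "Claim filed within 14 days of policy start",
     lambda c: c["days_since_policy_start"] < 14),
    ("R2", "Claimed amount exceeds 10,000",
     lambda c: c["amount"] > 10_000),
    ("R3", "No supporting documents attached",
     lambda c: not c["documents"]),
]

def triage(claim: dict) -> dict:
    """Apply every rule and return an auditable record of what fired."""
    fired = [(rid, desc) for rid, desc, test in RULES if test(claim)]
    return {
        "claim_id": claim["id"],
        "rules_fired": fired,          # the explainable audit trail
        "risk_flagged": bool(fired),
        # A downstream LLM would only be asked to elaborate on
        # 'rules_fired' for a reviewer, not to judge the claim itself.
    }

record = triage({"id": "C-1001", "days_since_policy_start": 5,
                 "amount": 2_400, "documents": ["photo.jpg"]})
print(record["rules_fired"])  # -> [('R1', 'Claim filed within 14 days of policy start')]
```

Because each decision traces back to a named rule, the output can be audited and challenged in a way a free-form model judgment cannot.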

It is easy to be taken in by the allure of adopting cutting-edge technology, but a balanced approach is crucial. Unchecked AI can inadvertently harm consumers through erroneous judgments, emphasising the necessity for clear boundaries. As regulators grapple with evolving technology, early adopters must proactively establish safeguards, ensuring that the integration of Generative AI in claims management remains ethically sound and effective.
The road to harnessing the full potential of Generative AI in insurance is rife with challenges. As such, prudent implementation strategies must prioritise ethical considerations, embracing innovation while safeguarding consumer interests. Intelligent Voice’s collaborative efforts illuminate the path toward responsible adoption, emphasising the urgency of balancing progress with accountability.
In this dynamic landscape, the trajectory of Generative AI in insurance hinges on the conscientious efforts of early adopters. By championing responsible integration, the industry can navigate risks while reaping the transformative rewards this burgeoning technology offers.
