The Government has just launched three new research projects to investigate how businesses can make best use of AI in insurance and law, as well as to analyse consumer attitudes to AI.
One of the projects will focus on technology-driven change and next-generation insurance value chains – how AI can be applied to processes such as underwriting and claims handling, speeding up service for customers. Working with businesses, the project will consider how AI technologies can transform the delivery of insurance services and save consumers money.
While this is a great initiative, Tony Tarquini, European insurance director at Pegasystems, argues that the government must be cautious in its approach to the project.
“It’s great news that the government has turned its gaze to technology in the UK insurance sector, as this should help spur on a traditionally conservative industry. By their very nature, insurance companies have always been rather risk-averse and slow to adopt the latest technology, and this extends to the uptake of AI. Given this mindset, during these projects the government must be wary of thrusting change upon the insurance industry, and should instead develop a constructive forum for insurers on how best to apply AI in a safe and practical way that will truly add value.
“AI has been around for a long time, with reams of research, theoretical PhD theses and the like available on the topic, but when it comes to the day-to-day operation of an insurance company, insurers really struggle to apply AI technology at even a basic level. Yes, some insurance companies are already taking their tentative first steps into the world of AI, but many are not, and these are the organisations putting themselves in danger of missing out on massive returns and succumbing to the onslaught of insurtech innovation. At an operational level, the key to successful implementation is determining the best means of applying AI in a real office environment.
“Secondly, it’s important to bear in mind the highly regulated environment in which the insurance industry operates. Any AI technologies developed must have the necessary ethical and transparency parameters in place – an approach often referred to as ‘Responsible AI’. Other industries have seen AI amplify inherent prejudice, and this has to be avoided when writing policies and judging claims. Furthermore, any AI in use has to be transparent enough that a regulator can understand the rules and algorithms that produce a given set of outcomes.”