New Partnership Aims To Understand Risks Associated With AI

A new £2m academic-industry partnership will develop novel methods to understand, measure and ultimately insure against the risks associated with the commercial application of artificial intelligence. For insurance brands, AI deployment will always raise issues around compliance, data handling, privacy, claims settlement decision-making and more. Here are the details.

The UKRI Prosperity Partnership ‘AI2: Assurance and Insurance for Artificial Intelligence’, led by the University of Edinburgh alongside insurance group AXA, WMG at the University of Warwick, and the University of Oxford, will lay the groundwork for trustworthy AI by researching insurance and assurance services that protect organisations from unreliable AI solutions.

This will ultimately allow insurers to accurately price and underwrite AI-related risks in areas such as transport and healthcare – for driverless cars or medical devices, for example – in a way that is currently challenging.

Establishing a robust AI assurance and insurance framework will also enable the wider and safer adoption of AI technologies in industry by transferring risk into the insurance market, the partners believe. It will also provide clear incentives for AI developers to create safer and more reliable products.

Lead academic Professor Lukasz Szpruch of the University of Edinburgh’s School of Maths said:

“As AI systems become more autonomous and embedded in high-stakes environments, traditional forms of insurance are no longer sufficient. AI insurance offers a new paradigm—one that explicitly covers risks like model failure, bias, or unintended behaviour that arise even when systems function within their design parameters. More than just risk transfer, it’s a mechanism to align incentives and reward those who build transparent, robust, and well-governed AI.”

The project is part of a suite of industry challenge-led research projects addressing AI and insurance facilitated by Tobi Schneider, Edinburgh Innovations’ Financial Services and FinTech Sector Lead, based at the Edinburgh Futures Institute. Mr Schneider said:

“AI offers substantial potential benefits for society and the economy, but it also carries imminent risks. If we are to realise the benefits, we must be able to understand and mitigate the risks in tangible, applicable ways.

“This exciting project is one that will ultimately lead to better risk management practices and standards for AI, meaning people and businesses will be better protected from harm.”

The 23 new Prosperity Partnerships, part-funded by the Engineering and Physical Sciences Research Council (EPSRC) alongside industry and universities, will tackle key industry challenges in areas from drug manufacturing and artificial intelligence to cybersecurity.

Another Prosperity Partnership, between the University of Edinburgh’s EPCC and Rolls-Royce, will use supercomputing to model sustainable fuels for future aviation.

Science Minister Lord Vallance said:

“These partnerships show the range of real-world challenges the UK’s world-class research base is helping to tackle – from cutting carbon emissions in heavy transport, to improving access to life-saving medicines.

“By backing scientists to work hand-in-hand with industry, we’re combining cutting-edge research with business expertise to turn science into practical solutions that can make a difference in people’s daily lives.”

