According to mainstream US media, the UK and US have declined to sign a global AI declaration, highlighting growing divisions over international regulation. While AI innovation accelerates, the lack of coordinated safeguards raises concerns about emerging risks – particularly for businesses and the insurance sector that supports them. In theory a worldwide standard on AI safety is laudable, but is it workable?
This particular agreement was also meant to “bake in” inclusive and sustainable features into AI algorithms, according to a statement released yesterday by the French government. But does that mean the data could be skewed in a particular direction to suit a political agenda? It’s interesting to note that the countries that have signed the AI Declaration include Nigeria, Cyprus, China and India – all nations where online fraud, corruption, copyright infringement and many other dubious business practices operate on an industrial scale. Is it worth the paper it’s written on?
For insurers this matters enormously: if one demographic group in a particular postcode has a tendency to submit potentially fraudulent claims, then AI decision models should reflect that risk. If not, a model constrained by diversity requirements effectively passes that premium increase onto those who live in the same area but are statistically far less likely to stage a crash-for-cash scam.
Likewise, storm modelling and localised flood risk should be based on factual events, nearby watercourses, river management data, plus building and contents valuations. Once you add “sustainable” factors into the mix, you invite potential price distortion based on bias against older properties, or perhaps houses that fail to meet the UK’s A-C energy performance ratings. Again, is that fair?
These are big questions, and one-size-fits-all AI isn’t going to suit every scenario. Here’s some analysis from Mark Kirby at Intersys:

AI Regulation Standoff Increases Business and Insurance Sector Risks
Mark Kirby, Professional Services Director at Intersys, comments: “The UK and US’s refusal to sign the global AI declaration is a clear signal that national and financial interests are being prioritised over collective security. While AI innovation continues at an astonishing pace, the absence of robust international safeguards poses serious risks – not just for businesses, but for the insurance industry that underpins them.
AI’s ability to process and generate vast amounts of data creates new exposure points. Bias in training models can lead to unfair or inaccurate decision-making, presenting challenges in underwriting and claims assessments. Meanwhile, the rise of AI-driven fraud – such as deepfake-enabled scams and hyper-personalised phishing attacks – demands urgent attention from insurers assessing cyber risk.
Compounding this is the threat of AI ‘data poisoning,’ where malicious actors manipulate datasets to distort AI outputs. Without proper oversight, we risk an environment where fraudulent claims become harder to detect, identity verification is undermined, and businesses face an evolving cyber threat landscape.
The insurance industry must prepare for these challenges now. The failure to establish international AI standards only increases exposure, making it imperative for insurers to integrate AI risk management into policies, fraud detection, and cyber liability coverage. With AI continuing to shape the business world, insurers cannot afford to wait for governments to catch up.”