EU and UK AI Regulation: What Insurance Brokers Need to Know

As the regulatory gap between the UK and EU widens, brokers face new challenges when advising clients that build, deploy or rely on AI. Claud Bilbao, RVP, Underwriting & Distribution at Cowbell UK, explains what this divergence means in practice and why understanding it is fast becoming part of everyday risk conversations.

August 2025 saw the EU’s AI Act begin to take effect, setting out formal obligations for model providers and strict rules for high-risk applications. At the same time, the UK confirmed it would stay the course with its lighter, “pro-innovation” framework – no standalone legislation, no new regulator and no binding compliance requirements.

The EU AI Act is built around structure and enforcement. It classifies AI systems by risk level, imposes transparency and testing obligations on high-risk use cases, and introduces penalties for prohibited practices. General-purpose or “foundation” model providers must also disclose training data sources and ensure copyright compliance.
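
To make the tiering concrete, here is a minimal Python sketch of how a firm might triage its own use cases against the Act's four tiers. The tier descriptions and example use cases are illustrative assumptions, not the Act's legal definitions.

```python
from enum import Enum

class RiskTier(Enum):
    """The Act's four-tier idea, loosely paraphrased - not legal definitions."""
    PROHIBITED = "banned outright, e.g. social scoring"
    HIGH = "strict transparency and testing obligations"
    LIMITED = "lighter transparency duties, e.g. chatbots"
    MINIMAL = "no new obligations, e.g. spam filters"

# Hypothetical internal register: where a firm believes each use case sits.
USE_CASE_TIERS = {
    "recruitment screening": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "email spam filtering": RiskTier.MINIMAL,
}

def triage(use_case: str) -> RiskTier:
    """Look up a use case; unknown ones default to HIGH pending legal review."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

print(triage("recruitment screening"))  # RiskTier.HIGH
```

Defaulting unknown use cases to the high-risk tier pending review is a conservative design choice, not something the Act mandates.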

By contrast, the UK’s approach places trust in existing regulators to interpret broad principles such as fairness, transparency and accountability. The aim is to support innovation, not constrain it.

That flexibility is helping to attract investment, particularly following the Government’s £150 billion tech prosperity deal earlier this year. But it also creates uncertainty. Without fixed rules, questions about liability and oversight remain open to interpretation.

Importantly, the EU’s AI Act applies beyond its borders. Any UK business selling AI products into the EU or operating within its supply chains could find itself subject to the same obligations as EU-based firms.

Explainable AI

One area where this difference is already visible is the debate around data transparency and ‘explainable AI’ – the idea that AI decisions should be transparent and traceable – which is fast becoming one of the defining issues in this new regulatory landscape.

Under the EU AI Act, explainability sits at the centre of the framework, particularly for high-risk systems. Organisations will need to show how their models reach decisions and prove that data inputs and processes are fair, transparent and compliant. The UK takes a lighter approach, encouraging explainability as best practice rather than making it a legal requirement.
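
As a rough illustration of what "showing how a model reached a decision" can mean in engineering terms, the minimal sketch below keeps an append-only decision log. Every name and field here is hypothetical, and real explainability obligations under the Act go well beyond logging, but a record like this is often the starting point for an audit trail.

```python
import json
import datetime

def log_decision(model_id: str, inputs: dict, output: str, reasons: list[str]) -> dict:
    """Append one automated decision to an audit trail for later review."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_id": model_id,   # which model and version decided
        "inputs": inputs,       # the data the model saw
        "output": output,       # what it decided
        "reasons": reasons,     # human-readable factors behind the decision
    }
    with open("decision_audit.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Hypothetical example: a quote referred for human review.
log_decision(
    model_id="pricing-model-v2",
    inputs={"turnover": 1_200_000, "sector": "retail"},
    output="quote_referred",
    reasons=["sector loading above threshold", "no prior claims history supplied"],
)
```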

There’s still much to iron out, and some major legal cases are moving through the courts. Their outcomes will help define where the boundaries lie, so it’s an area to watch closely.

Whatever the outcome, you can expect to see AI referenced more and more across risk reviews, policy wording and claims. The sensible stance is a watching brief over the next decade: change will be rapid, impacts will compound, and the operating environment could shift dramatically.

Ultimately, this balance between capability and responsibility will shape how regulators, insurers and clients view AI governance. For brokers, recognising these nuances can help them steer more informed conversations about transparency, consent and the broader implications of AI adoption.

AI and cyber: converging risks

As AI adoption accelerates, it isn’t just changing how businesses operate – it’s transforming the threat landscape too. Generative models are being used by cybercriminals to automate attacks, write convincing phishing emails and probe systems at scale.

For that reason, AI and cyber risk can no longer be treated separately. Brokers should encourage clients to integrate AI governance into their cyber resilience strategies, ensuring controls, training and third-party checks keep pace with the technology’s evolution.

What this means for brokers

For UK brokers, the priority is clarity. How are clients using AI? Are they developing models in-house, deploying third-party systems, or integrating AI into existing workflows?

AI creators will face the most direct exposure, particularly those serving EU markets. But even users need to consider how AI outputs influence decisions, data handling or customer outcomes. Misuse or errors in these systems could raise questions of accountability and coverage.

Another factor is ‘shadow AI’ – tools or systems introduced without central oversight. These can create unseen vulnerabilities, whether through data privacy lapses, copyright infringement or unvetted model use. Asking the right questions early helps clients identify these blind spots and ensures risks aren’t missed in policy discussions.
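
Surfacing shadow AI is partly an inventory problem. As one narrow illustration, the sketch below scans a Python project's declared dependencies for well-known AI SDKs; the package list is a partial assumption, and the approach only catches declared usage – unapproved browser-based tools still need policy and training to surface.

```python
# Real package names; the list itself is an assumption and far from exhaustive.
KNOWN_AI_PACKAGES = {"openai", "anthropic", "transformers", "langchain", "litellm"}

def flag_ai_dependencies(requirements_path: str = "requirements.txt") -> set[str]:
    """Return declared dependencies that match known AI SDKs."""
    found = set()
    with open(requirements_path) as f:
        for line in f:
            # Strip version pins ("openai==1.30.0") and trailing comments.
            name = line.split("#")[0].split("==")[0].split(">=")[0].strip().lower()
            if name in KNOWN_AI_PACKAGES:
                found.add(name)
    return found

if __name__ == "__main__":
    print("AI SDKs declared:", flag_ai_dependencies() or "none")
```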

The UK’s principles-based model puts the sector on the front foot for innovation, removing friction and encouraging experimentation. Yet for a risk that’s continually evolving, a stricter, more prescriptive framework can look prudent. The real task for brokers and insurers is balancing speed and flexibility with assurance and accountability.

A few useful conversation starters include:

● Which AI systems are in use, and for what purpose?

● Is there an internal AI policy or risk framework?

These discussions help to surface not only regulatory exposure, but also potential reputational and operational risks – key considerations for insurers as they refine underwriting models.

The insurer’s perspective

Regulatory divergence will influence how insurers assess AI-driven businesses. EU-based carriers may become more prescriptive, demanding greater transparency around data and decision-making to stay aligned with the AI Act. UK-based insurers may take a more flexible stance, but they’ll still be mindful of global standards and cross-border obligations.

For brokers arranging placements across multiple jurisdictions, understanding how these regulatory differences affect appetite and wording will be critical. In practice, it means engaging earlier with underwriters, clarifying which AI uses are declared and ensuring clients understand any relevant exclusions or conditions.

Next steps for brokers

AI regulation is still developing, but there are practical steps brokers can take now.

● Start conversations about AI use and governance early, as part of renewal or risk reviews.

● Encourage clients to create internal AI policies and maintain clear records of use.

● Identify exposures and consider how AI could affect liability, reputational risk, or coverage interpretation.

● Track policy wording and watch for AI-related clauses or exclusions emerging in cyber, PI or D&O policies.

● Stay connected: follow updates from the UK’s AI Regulation Roadmap and the new EU AI Office.

We’re still in the early stages of AI regulation, but its impact on insurance is already visible. Brokers who understand these frameworks and can help clients navigate them will be best placed to manage emerging exposures and maintain trust.
