How Can We Scale Up Cover To Match Rising Risk Data?

This latest piece is by Richard Hartley, CEO of Cytora, and it looks at how commercial insurance brands can scale up to meet rising and ever-changing risks.

We live in an increasingly turbulent world. Whether it’s supply chain chaos, cyber threats or climate change, businesses face growing risk, both in the number of concurrent risks they carry and in the profile of those risks, with severity (e.g. economic loss) going up.

The emergence of new exposures has led recent research to suggest that the commercial insurance market will likely double in size before 2030. In particular, the cumulative impact of climate change means that every business will have both additional risks to consider and potentially bigger losses. Businesses in California, for example, are increasingly exposed to extreme weather events such as wildfires.

Property will be the fastest-growing segment, with global premiums forecast to increase by 5.3% annually to 2040, and climate risks will be the core driver of that growth. Liability premiums are set to triple by 2040, with the growth in exposure coming from artificial intelligence, social inflation and climate change litigation.

Race to unlock scalability 

All of this has upped the stakes for insurers. From a competitive standpoint, the growth in the level of risk has kicked off a race to unlock scalability, enabling insurers to grow to address the new level of risk without adding expense. This is particularly important in the mid-market, which tends to be both large in addressable opportunity and suited to a digital-first operating model.

To scale effectively, risks of different levels of complexity need to be processed and made decision-ready by a single platform, yet routed to different types of decision making. This way an insurer can adjust where risks go, matching the characteristics of each risk to decision-making expertise in a given market or line of business. It delivers scalability and control in the processing of risk, and in delivery back to brokers and customers.

To do this, insurers need to be able to standardise their input from brokers, gauge the complexity of risks and route them to different types of decision making – all without absorbing the capacity of their underwriting teams. This is becoming a crucial means of achieving scalability at industry-leading levels of profitability.

Greater breadth in complexity  

The coexistence of low-complexity and high-complexity risks in the same intake means insurers need to improve how they match risk complexity to expertise. This divergence is happening at both extremes: simpler risks are becoming simpler, requiring more automation to drive margin, while complex risks are becoming more complex.

Insurers need to be able to distinguish between these types of risks upfront without absorbing capacity. They can then benefit from straight-through processing of simple, low-complexity risks while identifying the more complex risks that require human-driven decision making. For example, low-complexity risks in the mid-market can be routed to auto-quoting, while high-complexity risks can be routed to an underwriter for decisions on coverage, technical rating and so on.
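The triage described above can be sketched in a few lines of code. This is a minimal illustration only: the complexity score, the field names and the thresholds are all hypothetical assumptions, not taken from Cytora or any real platform, and a production system would derive the score from standardised submission data.

```python
# Illustrative sketch of complexity-based risk routing.
# ASSUMPTION: a complexity score in [0, 1] has already been derived
# from a standardised broker submission; thresholds are invented.

from dataclasses import dataclass

AUTO_QUOTE_THRESHOLD = 0.3   # below this, straight-through processing
REFERRAL_THRESHOLD = 0.7     # above this, refer to specialist expertise

@dataclass
class Submission:
    broker_id: str
    line_of_business: str
    complexity: float  # 0.0 = simple, 1.0 = highly complex

def route(submission: Submission) -> str:
    """Match a submission's complexity to a decision-making path."""
    if submission.complexity < AUTO_QUOTE_THRESHOLD:
        return "auto-quote"            # straight-through processing
    if submission.complexity < REFERRAL_THRESHOLD:
        return "underwriter-review"    # standard human decision making
    return "specialist-referral"       # complex, higher-severity risk

print(route(Submission("BRK-1", "property", 0.15)))   # auto-quote
print(route(Submission("BRK-2", "liability", 0.85)))  # specialist-referral
```

The point of the sketch is that the routing decision itself absorbs no underwriting capacity: only the submissions that cross a threshold ever reach a human.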

In every product line, insurers will receive some risk submissions that are low complexity and have high degrees of similarity, both to each other and to risks already within the in-force portfolio. At the same time, a proportion of the intake will contain risks of higher complexity, different from the standard and with higher-severity loss potential.

Insurers will need to understand which facets of the exposure to focus on, and where to set the thresholds at which risks require certain levels of expertise. Understanding the risk upfront without absorbing capacity is paramount to choosing the best processing path, enabling insurers to outperform on both productivity and underwriting profitability.

Insurers should route risks into specialist processing paths, allowing them to allocate the right type of decision making and level of capacity to each risk cohort. This operates at different levels of scale: within each line of business, some simple risks may be straight-through processed and others referred, while at the scale of the insurer it makes sense to create digital-first units that are set up to refer risks only by exception.

Insurers should avoid using the same capacity and decision modes to underwrite risks of different complexity. That is where underwriting profitability suffers, and the insurer becomes unscalable.

Risk processing capacity  

It’s now urgent that insurers address the scalability of their risk flows and reduce friction.  

It’s really about processing capacity: the aim is to process more risk submissions into multiple decision-making destinations without increasing the cost of doing so. As it stands, risk processing is still convoluted and inefficient. Risks arrive in a range of inconsistent formats and must be manually assessed before the underwriting process has even begun. Digitising, evaluating and routing risks for decision making with zero capacity absorbed is the key to unlocking scalable unit economics.

The firms that unlock scalability in this way will be the ones to grow in parallel with the world’s risk. 
