Insurers are innovating. But Mark Eastham, CEO at Avantia, the technology-enabled insurer behind HomeProtect, argues that when it comes to risk pricing, only those who simultaneously innovate on multiple fronts will realise the value required to thrive.
It’s a well-trodden truth in our industry that not all risks are created equal. And that, for insurers, some risks are riskier than others.
But progress marches on and new technologies mean new avenues of innovation for insurers to explore. And with that, the opportunity to drastically improve risk pricing around more complex perils ought to be a huge priority for many. The early signs are that current attempts to develop more sophisticated approaches to risk pricing are too often focused on a single area of improvement.
But as the pace of change continues to accelerate, only those insurers who take a more thoughtful, holistic approach to innovation around pricing risks – a three-pronged approach based on quality data, machine learning, and effective deployment – can be confident that they will make the strides required to remain competitive in the long run.
The first thing an aspiring innovator should consider is data. Of course, the more data an organisation holds, the better its chances of generating new insights, more accurately pricing risks and, ultimately, gaining an edge. It’s this thinking that sees so many insurers investing in new and interesting sources of non-traditional data.
However, on average, the biggest gains come in the early stages of scale. When most of the factors influencing a customer’s propensity to claim have already been identified, returns start to diminish and additional data – however freely available – will generally have a lower predictive value. The sudden availability of reams of data has given rise to the misconception that all data is equally valuable, and that it’s simply a matter of getting hold of as much as you can. In reality, the return on investment (in time and attention, if not money) soon diminishes sharply.
So, while looking at large volumes of data can be worthwhile depending on the situation, acquiring more and more data for its own sake should not be considered a strategy. On the contrary, most of the biggest gains will come from applying newer, sophisticated modelling techniques to existing, well-tested predictors of claim propensity.
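The diminishing-returns point can be sketched with stylised numbers. Suppose, purely for illustration, that each new data source captures half of the predictive signal the model has not yet explained – then the marginal gain from each additional source shrinks geometrically:

```python
# Toy illustration (stylised assumption, not real figures): each new data
# source is assumed to explain half of the remaining unexplained signal.
captured = 0.0
for source in range(1, 7):
    gain = (1.0 - captured) * 0.5  # marginal value of this source
    captured += gain
    print(f"source {source}: +{gain:.1%} -> {captured:.1%} of signal captured")
```

The first source contributes 50 points; by the sixth, the marginal gain has fallen below two points – the "brick wall" of return on further acquisition.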
But of course, it’s still imperative to make sure this data is of the highest quality. One way to do this is by going direct to the consumer. This enables insurers to take control over the data they receive – asking the precise questions they need the answers to and receiving the data in the format they require, without interference by an intermediary. Best of all, it means that datasets are continuously updated and risk calculations can change accordingly.
Alongside this, insurers should also consider how they analyse their data. Most risk models are calculated using generalised linear models (GLMs) – and for those with smaller datasets, perhaps analysing only the claims on their own book, this makes some sense.
But as increased volumes of sophisticated data come into play, GLMs start to show their limitations. There is a hard ceiling to the number of interactions a GLM can capture: it cannot discover them on its own, so a human is needed to identify each interaction, specify it in the model, and rate it correctly.
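The interaction problem can be made concrete with a toy example (all figures hypothetical). A main-effects-only model assumes rating factors combine independently, so a segment where two factors compound is systematically under-priced unless someone hand-specifies the interaction:

```python
# Hypothetical annual claim rates for four property profiles.
# The risky combination is far worse than the sum of its parts.
rates = {
    ("modern", "inland"):  0.02,
    ("old",    "inland"):  0.03,
    ("modern", "coastal"): 0.03,
    ("old",    "coastal"): 0.10,  # risk compounds here
}

# A main-effects-only model (additive scale, for simplicity) learns one
# uplift per factor and assumes they combine independently:
base = rates[("modern", "inland")]
old_uplift = rates[("old", "inland")] - base          # +0.01
coastal_uplift = rates[("modern", "coastal")] - base  # +0.01

additive_prediction = base + old_uplift + coastal_uplift  # 0.04
actual = rates[("old", "coastal")]                        # 0.10

# The additive model under-prices the riskiest segment by six points;
# a tree-based learner isolates the (old, coastal) cell with two splits,
# with no hand-crafted interaction term required.
print(f"predicted {additive_prediction:.2%}, actual {actual:.2%}")
```

This is the gap a human analyst must close by hand in a GLM, and which tree-based machine learning methods close automatically as data volumes grow.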
On the other hand, machine learning offers insurers the opportunity to interrogate larger datasets more quickly, and to identify emergent trends or patterns without the need for a hypothesis to test against or much human supervision.
Even better, the benefits brought by machine learning increase dramatically the more data an insurer has. There is little point in applying machine learning to small and simple datasets where a GLM would suffice. But those insurers that apply cutting-edge algorithms to belt-busting datasets unlock a distinct advantage over the competition. Not only do they have more data from which to uncover insights, but they derive greater value from each datapoint too.
Of course, it is one thing to have the best open-source machine learning algorithms and apply them to the richest datasets. But it means nothing if insurers can’t quickly deploy their models into a live trading environment, at the highest level of granularity.
Ultimately, any innovation around risk pricing requires insurers to get their algorithms out to consumers, on a machine learning platform that can make decisions in real-time to quickly price risks and make a call on the level of cover that can be offered.
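As a rough sketch of what such a real-time decision might look like – with the model, thresholds, and premium loading all hypothetical stand-ins, not any insurer's actual rating logic:

```python
def predict_claim_propensity(quote: dict) -> float:
    """Stand-in for a deployed ML model; returns an annual claim probability.
    A real deployment would call the trained model here."""
    score = 0.02  # baseline rate (illustrative)
    if quote.get("roof") == "thatched":
        score += 0.04
    if quote.get("flood_zone"):
        score += 0.05
    return score

def price_quote(quote: dict, base_premium: float = 150.0) -> dict:
    """Score an incoming quote and decide, in real time, what cover to offer."""
    p = predict_claim_propensity(quote)
    if p > 0.10:  # beyond risk appetite: decline immediately
        return {"decision": "decline"}
    premium = base_premium * (1 + p / 0.02)  # load the premium against risk
    cover = "standard" if p < 0.05 else "restricted"
    return {"decision": "offer", "cover": cover, "premium": round(premium, 2)}

print(price_quote({"roof": "thatched", "flood_zone": False}))
```

The essential property is that scoring, premium loading, and the cover decision all happen inside a single request, so the live model – not a stale rate table – determines what the consumer is offered.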
Yet this is easier said than done. Many large insurers are wedded to legacy platforms that are difficult to get away from. And smaller firms can often deploy very effectively in their narrow niche but lack the expertise to do so in more complex product areas.
Start small and scale
Very few insurers today are making full, joined-up use of these three strategies, nor are many in the position to immediately do so. New, agile and innovative insurtechs are making waves with machine learning platforms that enable fantastic analysis that they can take to the consumer at speed. But without the masses of historical claims data that traditional insurers have at their disposal it is very difficult for them to price risks accurately. And it is unlikely that many have the business appetite to incur the level of cost required to learn from their own claims experience over time.
On the other end of the spectrum, larger insurers have the data, but transitioning to a new platform is a huge task, carrying commensurate costs and uncertainties. It cannot be done overnight. Some are starting to take a much more nimble approach through incubator models, with the idea of testing and developing the approaches outside the core business in preparation for wider integration down the line. But of course, this carries the downside of creating another disconnected silo that will need to be brought back in at a later stage – it simply delays part of the problem.
Like so many golden opportunities in business, the best approach is probably one of the trickiest to achieve – namely the creation of agile, empowered cross-departmental working groups, incorporating data scientists, underwriters, compliance, and IT security, all working collectively towards shared objectives.
Nonetheless, this is a journey that all insurers will need to go on – particularly those working in more complex markets like home and contents insurance, where the difficulty in pricing some risks presents an opportunity that is simply too big to ignore. Those with the wherewithal to take a smarter, joined-up approach to the transition have a rare chance to gain a substantial competitive advantage.