Fix The Data, Then Migrate To The Cloud

Robert Brack, Vice President – Global Insurance Solutions, Mphasis, takes a look at data migration, updating legacy systems and more.

In their journey to digital transformation, insurance companies are faced with a data deluge from multiple sources, connected devices and systems. According to a 2017 IDC forecast, global data volumes will grow to 163 ZB by 2025. This surge in data is accompanied by the risk of data breaches, and adding to the challenge is the emergence of complex, ever-changing regulatory requirements and industry standards to protect data security and consumer privacy.

Migrating legacy system data to the cloud affords insurers quicker access to data, better safeguarding, improved compliance with standards and regulations, and superior customer experiences. However, what many insurance companies fail to consider is the all-important step of rationalising and standardising data as part of the process: fixing the data is key to successful cloud migration.

Poor-quality on-premises data simply becomes poor-quality data in the cloud. The questions to ask before migrating are many. How good (or poor) is our data structure? Do we have redundant data with no single source of truth? How compliant and secure is our data?
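
As a starting point, those questions can be answered with a lightweight profiling pass over the legacy data before anything moves. The sketch below is a minimal example in Python, assuming a pandas DataFrame of policy records; the column names (policy_id, customer_id, premium, postcode) and checks are illustrative assumptions, not a prescribed method.

```python
# Minimal pre-migration data-quality profile (illustrative only).
# Column names below are assumptions about a legacy policy extract.
import pandas as pd

def profile_data_quality(df: pd.DataFrame, key: str) -> dict:
    """Return simple indicators for structure, redundancy and completeness."""
    return {
        "rows": len(df),
        "duplicate_keys": int(df[key].duplicated().sum()),            # redundancy
        "null_rate_by_column": df.isna().mean().round(3).to_dict(),   # completeness
        "columns": list(df.columns),                                  # structure snapshot
    }

if __name__ == "__main__":
    policies = pd.DataFrame({
        "policy_id": ["P1", "P2", "P2", "P3"],
        "customer_id": ["C1", "C2", "C2", None],
        "premium": [1200.0, 850.0, 850.0, 990.0],
        "postcode": ["SW1A", None, None, "EC2V"],
    })
    print(profile_data_quality(policies, key="policy_id"))
```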

Adding two steps to the cloud migration process will help achieve a successful data-to-insights-to-decision journey:

Create a centralised and common data model prior to migration

Once you have set your goals and outcomes, identify internal and external data and data entities (relying more on external data will be advantageous), including customer feedback, product and service details, inventory data, and inputs from sales. Such a comprehensive model, with diverse and new data, can be an effective and accurate single source of truth for all enterprise databases.
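
To make the idea concrete, the sketch below shows one possible shape for such a common model, expressed as Python dataclasses. The entity and field names are assumptions chosen to mirror the data types listed above (customers, products, sales and feedback); a real carrier's model would be richer and agreed across functions.

```python
# A sketch of a centralised, common data model; entity and field names
# are illustrative assumptions, not a prescribed standard.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Customer:
    customer_id: str                 # single source of truth for identity
    name: str
    postcode: Optional[str] = None

@dataclass
class Product:
    product_id: str
    line_of_business: str            # e.g. motor, property, liability

@dataclass
class Policy:
    policy_id: str
    customer_id: str                 # links to Customer
    product_id: str                  # links to Product
    inception: date
    annual_premium: float

@dataclass
class Feedback:
    customer_id: str
    source: str                      # internal survey, broker, external review
    score: int
    received: date
```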

When prescriptive and predictive analytics are applied to this data, insurance companies can achieve resilience and agility, while eliminating the risks of data ‘dead-ends’. This model can be incorporated into a master data management and data governance system. For instance, a large global carrier is now using cross-functional data to create a “living” reinsurance aggregation model to effectively manage risk in real time and hedge exposures with reinsurance and insurance-linked securities.

Improve data even as it moves to the cloud

By introducing a few additional steps (even if it involves extra cost or time), significant efficiencies can be achieved that will enhance the value of cloud migration. This could range from changing the data structure or databases for improvement, to aligning applications to the changed data structures.
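
One hedged illustration of such an in-flight improvement is a cleansing step applied as records are extracted for the cloud target. The column names and rules below are assumptions; the point is that standardisation happens during the move rather than after it.

```python
# Illustrative in-flight cleansing step; column names and rules are assumptions.
import pandas as pd

def cleanse_for_migration(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    # Standardise formats so the cloud target receives consistent values.
    out["postcode"] = out["postcode"].str.upper().str.strip()
    # Do not replicate duplicate records into the new environment.
    out = out.drop_duplicates(subset=["policy_id"], keep="first")
    # Normalise numeric precision for downstream aggregation.
    out["annual_premium"] = out["annual_premium"].round(2)
    return out
```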

As aligning data to maximise its utilisation becomes more prevalent in the industry, risk selection and the servicing of acquired risk are becoming more transactional and commoditised. The resulting rate-making accuracy and better-aligned capital management will drive insurer profitability, as resource optimisation shifts towards more complex activities.

Value levers in rationalising data

To enable an end-to-end agile data pipeline with integrated handshakes within the framework, the following actions need to be taken:

Deploy data design principles and parameters

This provides the discipline to limit input/output design variability through the use of baseline assumptions, with variance permitted only where it informs the dependencies required for enablement.
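
One way to make such principles operational, sketched below under assumed field names and limits, is to encode the baseline assumptions as explicit parameters that every inbound record is checked against, so any variance is surfaced rather than silently accepted.

```python
# Baseline design parameters encoded as checkable rules (assumed values).
BASELINE = {
    "required_fields": {"policy_id", "customer_id", "annual_premium"},
    "premium_range": (0.0, 1_000_000.0),
}

def check_against_baseline(record: dict) -> list[str]:
    """Return variances from the baseline so they can be reviewed explicitly."""
    issues = []
    missing = BASELINE["required_fields"] - record.keys()
    if missing:
        issues.append(f"missing fields: {sorted(missing)}")
    low, high = BASELINE["premium_range"]
    premium = record.get("annual_premium")
    if premium is not None and not (low <= premium <= high):
        issues.append(f"premium {premium} outside baseline range")
    return issues
```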

Simplify the data model

This is achieved through reduction or elimination of redundant processes, operational alignment, mapping organisational structure and resources for the right actions at the right time, and identification of the right automation candidates.
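
As a small, hedged example of one such simplification, the sketch below collapses redundant customer records drawn from several legacy sources into a single row per customer; the source frames and the last_updated column are assumptions.

```python
# Consolidate redundant customer records from multiple legacy sources.
# Assumes each source frame has customer_id and last_updated columns.
import pandas as pd

def consolidate_customers(frames: list[pd.DataFrame]) -> pd.DataFrame:
    combined = pd.concat(frames, ignore_index=True)
    # Keep the most recently updated record for each customer.
    combined = combined.sort_values("last_updated")
    return combined.drop_duplicates(subset="customer_id", keep="last")
```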

Enable a continuous improvement paradigm using analytics

The evolution from decisioning to a predictive, and ultimately prescriptive, mode can be achieved by applying regression analysis to historical data. The question to ask is simple: how can we use past results to predictively reduce or eradicate redundant process steps? Such an exercise can drive significant cost and efficiency benefits while enhancing the broker, customer and user experiences.
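
A minimal sketch of that idea, using entirely hypothetical data, is to regress historical outcomes on process features and inspect how much weight a given manual step actually carries; a near-zero weight marks it as a candidate for automation or removal.

```python
# Hypothetical example: does a manual review step actually move outcomes?
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features per historical case: [manual_review_performed, normalised_claim_amount]
X = np.array([[1, 0.2], [0, 0.2], [1, 0.8], [0, 0.7], [1, 0.4], [0, 0.5]])
y = np.array([1, 1, 0, 0, 1, 1])   # 1 = claim approved

model = LogisticRegression().fit(X, y)
weights = dict(zip(["manual_review", "claim_amount"], model.coef_[0].round(3)))
# A near-zero weight on manual_review suggests the step rarely shifts the
# outcome and may be redundant.
print(weights)
```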

Establish framework dimensions

Establish which data sources to automate and which to handle manually. Building an automated discovery process enables better tracking and assessment of the migration, while embedding the right metrics allows organisations to accurately measure its performance. Prioritisation can be based on data qualification, with structured ‘data hooks’ to drive step elimination. This will identify gaps or flaws, minimise process and measurement misalignments and eliminate inhibitors to automation.
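
The sketch below shows one simple way to embed such metrics in the migration pipeline; the metric names and figures are assumptions for illustration.

```python
# Illustrative migration metrics; names and figures are assumptions.
from dataclasses import dataclass

@dataclass
class MigrationMetrics:
    source_rows: int
    migrated_rows: int
    failed_rows: int

    @property
    def completeness(self) -> float:
        return self.migrated_rows / self.source_rows if self.source_rows else 0.0

    @property
    def failure_rate(self) -> float:
        return self.failed_rows / self.source_rows if self.source_rows else 0.0

metrics = MigrationMetrics(source_rows=1_000_000, migrated_rows=998_500, failed_rows=1_500)
print(f"completeness={metrics.completeness:.3%}, failure rate={metrics.failure_rate:.3%}")
```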

Define the use of unstructured data formats

This important step enables organisations to define the degree of variability, segment data types effectively, select tools for high-volume process steps and quantify the cost benefits.
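
As a hedged illustration, segmentation can start as simply as classifying document stores by type so that variability, tooling and cost can be assessed per segment; the file extensions and categories below are assumptions.

```python
# Segment unstructured content by assumed document type.
from collections import Counter
from pathlib import Path

SEGMENTS = {
    ".pdf": "scanned claims documents",
    ".docx": "correspondence",
    ".eml": "email",
    ".jpg": "images",
}

def segment_documents(root: str) -> Counter:
    counts = Counter()
    for path in Path(root).rglob("*"):
        if path.is_file():
            counts[SEGMENTS.get(path.suffix.lower(), "other/unclassified")] += 1
    return counts
```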

Test it

Process and technology regression tests, along with stress tests at real business volumes, will help to accurately establish how costs align with benefits.
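
A minimal sketch of such a regression check, assuming source and target extracts share policy_id and annual_premium columns, is to reconcile counts and key aggregates after a volume run:

```python
# Reconcile source and migrated data after a volume test run.
# Assumes both extracts share policy_id and annual_premium columns.
import pandas as pd

def regression_check(source: pd.DataFrame, target: pd.DataFrame) -> dict:
    return {
        "row_count_match": len(source) == len(target),
        "premium_total_delta": float(
            abs(source["annual_premium"].sum() - target["annual_premium"].sum())
        ),
        "missing_policy_ids": sorted(set(source["policy_id"]) - set(target["policy_id"])),
    }
```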

The above steps facilitate a new digital target operating model whose ease and speed can enable increased revenue at lower cost. Data rationalisation and standardisation need not be a complicated process. By making it agile and continuous, it can be implemented with ease within insurance operations. And when done well, it can prevent future data sprawl, deliver financial gains, and closely align the strategic and financial aspects of IT planning so that they converge on business goals and vision.
