This article is by Prathiba Krishna, Head of AI and Ethics at SAS. It delves into the use of AI across insurance, the ethical challenges facing the industry, and how the new voluntary code of conduct for AI in claims can act as a useful safeguard.

Since the introduction of ChatGPT in late 2022, all eyes have been on how generative AI can propel industries forward. For insurance professionals, AI offers many benefits, from risk assessment and real-time pricing data to speeding up the claims process and enhancing the customer experience.
However, the technology requires careful consideration, and that consideration must not stifle innovation. While AI can certainly enhance operations throughout the sector, insurers must ask themselves two main questions: are they implementing AI for the right purpose? And, most importantly, have unwanted side effects such as harm or bias been accounted for before implementation has even begun?
As the capabilities of artificial intelligence expand rapidly, insurers are responsible for managing the technology effectively. They must ensure that their use of AI is governed and that their data is not biased. At the same time, they’ll need to embrace ethical approaches that adhere to multiple layers of guidelines and regulations, such as those that protect privacy.
Earlier this year, an industry-first voluntary code of conduct was launched for the insurance market's use of AI across its practices. The code does not impose new regulations on firms, but aims to establish a standard of responsibility when AI is used in claims settlements. Nor is it intended to duplicate or replace initiatives that may emerge in future from regulatory bodies such as the Financial Conduct Authority (FCA).

EVEN CODE NEEDS A REGULATORY CODE
At its core, the initiative seeks to establish and uphold the highest standards of behaviour and ethical responsibility when planning, designing or using AI in the management and settlement of insurance claims. Beyond that, the ambition is to understand both the potential and the current reality of AI, laying the foundations claims departments need to ensure AI applications are implemented transparently and securely.
The code sets out multiple principles, ranging from transparency, fairness and data protection to accountability, accuracy and human oversight. Together, these principles aim to promote the responsible and ethical use of AI in the insurance industry, fostering trust among stakeholders and ensuring that AI technologies serve the best interests of both insurers and policyholders.
A great deal of work has gone into making the code of conduct both accessible and applicable to everyone in the insurance market. For example, the code was designed with input from over 120 participants from insurance companies, consultants and partners, and is largely aligned with EU AI law, meaning most of its elements can be applied across the EU and other regions.
As one of the experts involved in designing the code, I have seen it become very clear over the last year that the time for action is now. Insurers have been actively implementing data analysis techniques and software to revolutionise their strategies, with a particular focus on integrating AI. SAS, for example, has delivered open, trusted, scalable and sustainable AI capabilities for over 40 years, helping insurers of all sizes achieve growth, profitability and compliance through its AI for Insurance capabilities.
The new code of conduct now adds an extra layer of benefit for insurers, strengthening processes that were already taking place. Despite being voluntary, the code puts each insurer in the best position to stay on top of the latest technical and legislative developments, so they can make the most of their data and transform their approach to customer experience, fraud and risk management.

A LIVING DOCUMENT
The code itself is a 'living document' that can be adapted to industry developments, meaning it won't become outdated. This also allows industry challenges to be addressed, particularly sensitive issues such as gender, race and diversity practices. Ways of dealing with these challenges include localising the development and adaptation of the code, decentralising decision-making, developing and adopting international and regional frameworks, and enabling transparency and accountability. The code is intended to be a highly adaptable tool, suited to a multitude of circumstances and leading to more equitable and culturally sensitive AI applications.
The success of the new code of conduct depends on widespread adoption and continuous collaboration within the industry. Moreover, the code does not suggest that AI should be used in every practice. AI can be used to settle low-value claims, for example, but this may not always be the best option. Insurers need to ensure the AI model is relevant to the circumstances, that data drift and bias are monitored, that the maturity of the AI infrastructure is assessed, and that the option for human intervention remains available.
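To make the drift-monitoring point concrete, here is a minimal sketch (my own illustration, not part of the code of conduct) of one common check, the Population Stability Index (PSI), which compares the distribution a model was trained on with what it now sees in production. The feature, data and thresholds below are illustrative assumptions:

```python
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """PSI between a training-time sample and a live sample of one feature.

    Common rule of thumb (an assumption; thresholds vary by team):
    < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 significant drift.
    """
    edges = np.histogram_bin_edges(baseline, bins=bins)
    # Clip live values into the baseline range so every value lands in a bin.
    current = np.clip(current, edges[0], edges[-1])
    eps = 1e-6  # avoid zero proportions so the log term stays finite
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline) + eps
    curr_pct = np.histogram(current, bins=edges)[0] / len(current) + eps
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

# Hypothetical claimant-age data: the live distribution has shifted upwards.
rng = np.random.default_rng(0)
training_ages = rng.normal(45, 12, 10_000)
live_ages = rng.normal(52, 12, 10_000)

psi = population_stability_index(training_ages, live_ages)
if psi > 0.25:
    print(f"PSI={psi:.2f}: significant drift, route affected claims to human review")
```

Run on each model input on a schedule, a check like this gives the human-oversight trigger the code's principles call for: when drift crosses a threshold, automated settlement can be paused in favour of manual review.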
Overall, the newly launched code represents a launch pad for the ever-evolving topic of AI and the questions it poses for workforces, workplaces and work processes alike. The movement towards greater collaboration across companies is certainly one to watch.