Identify sensitive attributes
Synthesized is offering insurance businesses a tool for the age of insurtech and data: a new way to discover unhelpful bias within their data which, if mitigated, could make quotes, claims and premiums much fairer. The UK-based AI startup has unveiled FairLens, which it describes as the world’s first data-centric open-source software for identifying and measuring data bias.
Award-winning Synthesized is keen to encourage companies and sectors around the globe to use FairLens to discover in-house whether their data contains bias, so that its effects can be mitigated. FairLens allows data scientists to automatically discover and visualise hidden biases and measure fairness in data. With people who feel they have been victims of bias increasingly taking to social media, organising boycotts or sharing their experiences online, it is important to combat the perception of bias as much as actual discrimination, whether it arises through online quote forms, intrusive questions or misgendering.
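A first step in any such analysis is flagging which columns in a dataset correspond to sensitive attributes. As a conceptual sketch only (the function and keyword list below are illustrative and are not FairLens’s actual API), a name-based scan might look like this:

```python
# Illustrative keyword list; real tooling such as FairLens uses richer
# detection than simple column-name matching.
SENSITIVE_KEYWORDS = {
    "sex", "gender", "age", "ethnicity", "race",
    "religion", "disability", "nationality", "postcode",
}

def find_sensitive_columns(columns):
    """Flag columns whose names suggest a legally protected attribute."""
    return [
        col for col in columns
        if any(keyword in col.lower() for keyword in SENSITIVE_KEYWORDS)
    ]

# Example: a typical insurance quote table.
cols = ["quote_id", "Age", "annual_premium", "Gender", "claim_history"]
print(find_sensitive_columns(cols))  # ['Age', 'Gender']
```

Once the sensitive columns are known, outcomes such as quote acceptance or premium level can be compared across the groups they define.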
Denis Borovikov, co-founder and chief technology officer at Synthesized, said: “Many data science models rely on biased and skewed datasets. What we have created, with FairLens, is a mathematical framework to discover and visualize data bias. We hope FairLens will enable data practitioners to gain a deeper understanding of their data, and to help ensure fair and ethical use of data in analysis and data science tasks.”
Nicolai Baldin, co-founder and chief executive of Synthesized, added: “While data bias is still a taboo subject for many companies and industries, what FairLens enables is a behind-the-scenes discovery of data bias, which can then be mitigated.”
Many insurance applications, for instance in automobile, health or life insurance, make decisions without human involvement, based on a company’s data. With limited, poor-quality or skewed datasets, data-driven applications often fail to achieve their intended purpose because they are inherently biased. In short, past lifestyles are no indicator of future ones.
The insurance sector could benefit immediately from FairLens analysis, which can reveal, in seconds, undiscovered biases in the data. Understanding those hidden biases will help insurers calibrate their data science models to ensure fairer outcomes and better access for previously underserved and underrepresented customers. It could also dramatically reduce the risk of non-compliance with regulations and help protect brand reputation.
FairLens cuts the time it takes data scientists to find bias in their models, a task that can otherwise take months: it can measure the bias contained in hundreds of thousands of columns of data in seconds, letting data scientists discover, measure and visualise bias as part of their everyday workflow.
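To make the idea of a column-wise fairness measurement concrete, here is a minimal sketch, using only the Python standard library, of one common metric: the demographic-parity ratio, the positive-outcome rate of the worst-off group divided by that of the best-off group. The function names and data are illustrative and are not FairLens’s actual API.

```python
from collections import defaultdict

def group_rates(records, sensitive_attr, outcome_attr):
    """Rate of positive outcomes per group of a sensitive attribute."""
    totals, positives = defaultdict(int), defaultdict(int)
    for row in records:
        group = row[sensitive_attr]
        totals[group] += 1
        positives[group] += row[outcome_attr]
    return {g: positives[g] / totals[g] for g in totals}

def disparity(rates):
    """Demographic-parity ratio: min group rate / max group rate (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

# Hypothetical quote-approval records.
quotes = [
    {"sex": "F", "approved": 1}, {"sex": "F", "approved": 0},
    {"sex": "F", "approved": 1}, {"sex": "F", "approved": 1},
    {"sex": "M", "approved": 1}, {"sex": "M", "approved": 1},
    {"sex": "M", "approved": 1}, {"sex": "M", "approved": 1},
]
rates = group_rates(quotes, "sex", "approved")
print(rates)             # {'F': 0.75, 'M': 1.0}
print(disparity(rates))  # 0.75 — approvals for F lag M by 25%
```

Each such measurement is a cheap pass over one column’s values, which is why a scan of this kind can cover a very wide table quickly.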
Baldin concluded: “With the help of the developer and data science communities, and our machine learning technology, we can build and enhance FairLens, and as a result we hope to make data fairer for all and have a valuable impact on society.”