This piece is by Rory Yates, Global Strategic Lead at insurance platform provider EIS. It looks at the idea of regulating AI usage across the insurance sector, one route being a licensing system so that agreed standards can be met on automated decision-making.
With AI now firmly at peak hype, giant leaps being made daily, and rushed releases of new tools hitting the market, the concern is understandable. There is a need to understand the implications of this technology and to ensure it acts in our best interests. One way to do this is to ensure people understand how to act responsibly.

A few years ago, I helped the BSI launch PAS440 for Responsible Innovation, a framework and a set of guidelines. That experience taught me that most people want guidance and clarity on what represents ‘best practice’. Those who used it said they felt better prepared to establish the best course of action with their colleagues and their businesses more widely. I don’t believe individuals typically intend to act in a way that is detrimental to humanity, although there are always exceptions. But I do believe people seek, and benefit from, guidance. Not only would a form of certification help with this, it would also support professional development and accreditation.

We know from the “social media experiment” that trying to retrospectively regulate new technologies, or technology-driven environments, is incredibly complex. So the alternative appears to be proactivity. However, the issue with top-down regulation, in my opinion, is two-fold. First, it doesn’t provide sufficient practitioner-level guidance to ensure that those engineering the models, and those applying them, clearly understand what their actions mean. Second, it is very hard to enforce: AI is already operating at scale, and new AI technologies have proliferated for years. Understanding what, where, when and how these will manifest adverse issues, and what prevents them, is simply an unachievable task.
LICENSE TO USE AI?

It must now be about how we move forward most effectively. A license model typically implies “permission” to do something, which in turn implies that you are somehow qualified to act in a way that is believed to be “right”. This doesn’t feel like a problematic frame to work within: you must achieve licenses in many other areas of life that arguably carry far less risk, or risks far less expansive, than those presented by AI. The infrastructure and capability needed to do this, though, is considerable, and there is a somewhat perverse opportunity to use the machine, and possibly even AI itself, to help with this, creating an opportunity to “show” what good looks like.

Regulation is typically seen as ineffective, often placing unclear and, at times, unnecessary burdens on industries. While this is true in certain circumstances, it is hard to deny that it can also set standards for fairer and more competitive market landscapes. Take GDPR, GIPP in insurance and the Consumer Duty in financial services: these are not without flaws, but all are aimed at acting responsibly. In this sense, regulation provides a clear opportunity and not just an obligation. If we can ensure we are acting in consumers’, or humanity’s, best interests, everyone wins. I believe a sensible approach to licensing or certification would be extremely beneficial for highly regulated industries like insurance, especially as they now require a clear shift in how they take responsibility for their customers.

Lastly, bad actors are ever-present. We already know the potential of AI for driving cybercrime and fraudulent claims. Ensuring that only accredited professionals develop and utilise these technologies would be one way to control who has access to them and how they are being used, whilst also creating a positive labour market with clear, determinable standards for what “good” looks like.