Richard Beaty and Edward Le Gassick, members of the Forum of Insurance Lawyers and both of Kennedys LLP, have put together a piece looking at the issues surrounding the government’s idea to have a “named person” responsible for AI decision-making.
In 2009, the late Edward O. Wilson, formerly Professor Emeritus at Harvard University, observed that ‘…the real problem of humanity is the following: we have palaeolithic emotions; medieval institutions; and God-like technology. And it is terrifically dangerous, and it is now approaching a point of crisis overall.’ It is a sobering thought that, in the intervening thirteen years, advancements in God-like technology, and in particular Artificial Intelligence (‘AI’), have been exponential, while everything else remains much the same.
Although AI is by no means a modern concept, early applications of the technology were limited in scope by the lack of computational processing resources, insufficiently large data sets (upon which AI depends), and limited algorithmic power. Not anymore.
In our digitally saturated society, information, measured in exabytes, zettabytes and yottabytes, exists in structured and unstructured data – big data, comprising both obvious sources of information and often less obvious sources such as social media usage, travel route preferences, smart device data, and click and iris tracking. Supervised and unsupervised AI facilitates synthesis of these data sets to create otherwise inaccessible inferences, which in turn are pivotal to creating clusters and categories. In the space of just over a decade, AI and subsidiary applications such as machine learning have become the revolutionary driving force of modern economic activity.
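To make the clustering point concrete: the sketch below is a minimal, purely illustrative example of one simple unsupervised technique (k-means), grouping unlabelled data into categories without any human-defined labels. The data (hypothetical daily screen-time minutes for a set of users) and the choice of three clusters are invented for illustration; real big-data profiling operates at vastly greater scale and with far richer inputs.

```python
# Illustrative sketch only: k-means clustering of invented, unlabelled data.
# No categories are supplied in advance; the algorithm infers them.
import random


def kmeans(points, k, iterations=20, seed=0):
    """Group 1-D points into k clusters by iteratively refining centroids."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # start from k random points
    clusters = [[] for _ in range(k)]
    for _ in range(iterations):
        # Assign each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda i: (p - centroids[i]) ** 2)
            clusters[idx].append(p)
        # Move each centroid to the mean of its assigned points.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters


# Hypothetical daily screen-time minutes: three behavioural groups emerge
# without anyone telling the algorithm the groups exist.
usage = [12, 15, 14, 240, 255, 250, 90, 95, 88]
centroids, clusters = kmeans(usage, k=3)
```

The regulatory significance is that the resulting categories are inferred, not declared: a data subject never stated ‘I am a heavy user’, yet a decision-making process may now treat them as one.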
WHO KNOWS WHERE THE ALGORITHM WILL GO?
The UK government, eager to ensure the UK’s place at the heart of this economic revolution and by extension to help businesses ‘monetise’ big data, produced a policy paper in July 2022. The policy paper launched an outline regulatory framework designed to help stimulate investment, growth and innovation, and the paper ‘…sets out [the] overall pro-innovation direction of travel on regulating AI’. At this stage, the paper is only a consultation document, but the call for views and evidence closed on 26th September 2022. That equates to a consultation period of just 10 weeks (from its launch on 18th July) to deal with a topic of labyrinthine complexity.
The government describes AI as ‘…a general purpose technology like electricity, the internet and the combustion engine’. On one level that analogy may hold true; all of those things clearly have a general purpose and, of course, following mass adoption they all ultimately transformed how civil society is organised. However, on another level the analogy seems like a remarkable understatement. It is suggested that no other general purpose technology has the potential to undermine public debate, the rule of law and democracy.
Although the policy paper acknowledges these threats, the paper also recognises that UK legislation has not been developed with AI in mind, noting that ‘…there are no UK laws that were explicitly written to regulate AI’, relying instead on a ‘…patchwork of legal and regulatory requirements built for other purposes’, of which the mainstay is the EU-based data protection framework contained in the UK GDPR. It seems odd that the government has announced that the EU regulatory approach (including, it must be presumed, data protection law) is not right for the UK because its lack of granularity could hinder innovation.
So, rather than try to resolve the patchwork of legal and regulatory requirements by promulgating specific legislation, the government’s preferred approach is simply to pass the baton of oversight to class-based regulators under the banner of a ‘principles’-focused regime. The stated aim is to regulate, in a risk-based and proportionate manner, the use of AI rather than the AI itself. That, it is suggested, seems to be a recipe for confusion.
GUIDELINES ARE NOT LAW
First, regulators may provide guidelines and take decisions, but they do not make the law. Making law is the sole preserve of Parliament and the courts. A public regulator, as a decision-maker, is subject to the supervisory jurisdiction of the courts by the operation of judicial review. Moreover, the mere fact that a regulator may seek to prohibit a particular practice by the imposition of rules and edicts does not mean that the practice in question is automatically contrary to law.
Second, it is axiomatic that different regulators regulate different things. According to the National Audit Office’s ‘Short Guide to Regulation’, there are 90 regulators operating in the UK. It would, it is suggested, be a triumph of hope over experience to expect that a co-ordinated and consistent approach to AI could be demonstrated by such a large group of organisations, even if the government introduces overarching cross-sectoral principles based on the OECD guidelines.
Third, as the government acknowledges, a principles-focused regime does not create a new framework of individual rights, nor do the proposals for accountability (in particular for legal liability in respect of AI processes) require oversight by a natural person. Not only will this raise questions for third-party developers concerning their own exposure – potentially stifling development of further emerging uses of the technology – but if a corporation can appoint a subsidiary company to be accountable for AI, surely that must dilute the effectiveness of any sanction for misuse?
Fourth, data protection law (whether UK or EU) does not adequately address AI processes. The rules relating to automated decision-making under Art. 22 are far from clear. The issue, in a nutshell, is whether the rule gives a data subject ‘…the right not to be subject to a decision based solely on automated processing’ (as per the express words of Art. 22.1) or simply a qualified prohibition against automated decision-making, regardless of the data subject’s choices, a view which seems to be preferred by the EDPB.
AI is here to stay, and whilst the proposal arguably provides a timely opportunity for AI-utilising companies to audit and offset legal and financial risk with sound cyber security and data management frameworks, surely delegating the oversight of fundamental freedoms to a coterie of regulators is not consistent with ESG? As E.O. Wilson also said, ‘we are drowning in information, while starving for wisdom.’