Comment Blog, 4 July 2024

Navigating uncertainty: the future of Legal AI

Giulia Gentile
Lecturer in Law, Essex Law School

This blog was published to stimulate conversation ahead of Reform's roundtable with Giulia on the role of AI in a thriving legal sector, kindly supported by the Solicitors Regulation Authority. A companion blog written by James Clough, Chief Technology Officer of Robin AI, can be read here.

The relationship between artificial intelligence and the legal sector is complex and multilayered. Legal AI (LegAI) can rapidly process data, draft contracts and preliminary legal advice, carry out e-discovery, manage files and documents, predict case outcomes, and more. These systems clearly have the potential to increase efficiency, and could benefit both the legal profession and justice systems. Justice systems in particular are currently facing severe backlogs and criticism from the media and politicians. Meanwhile, speeches and declarations by UK judges point to the judiciary's openness towards these technologies.

Yet several cases have highlighted the risks of AI, in the legal sector and beyond. Consider, for instance, the New York lawyer who used ChatGPT to produce a brief; it later transpired that the pleadings included fabricated case citations. Even before fully fledged AI systems, the Post Office Horizon scandal underscored how (undisclosed) bugs in automated systems can have serious legal consequences for individuals and their rights and freedoms. Or take the recent experiment in which a fake CV, padded with several bullet points of nonsense, achieved a 90 per cent call-back rate when screened by AI-powered hiring tools. The full list of risks and harms that could emerge from automated and AI systems is a long one.

Those risks and harms should be addressed by effective regulation. But they also shake the foundations of our understanding of what law is, and of what it means to be a lawyer or a judge. The demand for regulation intersects with, and is underpinned by, these theoretical questions.

Which dimensions of the law should remain fully and exclusively human? Which aspects of the legal profession may be subject to automation, and which should not? What is the difference between being judged by a human and being judged by an AI system? Addressing these questions is crucial from a number of perspectives: to avoid disproportionate harms and risks caused by AI; to ensure that the law remains imbued with human values that are open to politics and contestation; to ensure the proper training of future lawyers; and to preserve public trust in the legal profession and uphold the ethical standards of the sector. We should not fall prey to the fallacy that the market will correct for the absence of AI regulation, as has happened with data governance.

The UK’s current approach to AI regulation is based on cross-sectoral principles to be applied by various regulators, including the Solicitors Regulation Authority, the Bar Standards Board and HM Courts and Tribunals Service. Some of these bodies have already produced guidelines on the implications of AI in an attempt to offer clarity to practitioners. Yet questions remain about whether such guidance is binding and whether it can effectively prevent harms while mitigating risks.

Looking to more complete regulatory approaches, two paths emerge: (a) identify horizontal rules governing the use of AI in the legal profession and justice systems, together with relevant safeguards; (b) embrace a risk-based approach and single out which areas of the legal profession and the judiciary could be subject to AI and which should not. Until more complete regulation is adopted, caution is called for in allowing the expansion of (often poorly understood) AI in legal and judicial practice.