Legal Systems and Artificial Intelligence

Lead Research Organisation: University of Cambridge
Department Name: Centre For Business Research

Abstract

A World Economic Forum meeting at Davos 2019 heralded the dawn of 'Society 5.0' in Japan. Its goal: creating a 'human-centred society that balances economic advancement with the resolution of social problems by a system that highly integrates cyberspace and physical space.' Using Artificial Intelligence (AI), robotics and data, 'Society 5.0' proposes to '...enable the provision of only those products and services that are needed to the people that need them at the time they are needed, thereby optimizing the entire social and organizational system.' The Japanese government accepts that realising this vision 'will not be without its difficulties,' but intends 'to face them head-on with the aim of being the first in the world as a country facing challenging issues to present a model future society.' The UK government is similarly committed to investing in AI, and likewise views AI as central to engineering a more productive economy and prosperous society.

This vision is, however, starting to crystallise in the rhetoric of LegalTech developers who have the data-intensive, and thus target-rich, environment of law in their sights. Buoyed by investment and claims of superior decision-making capabilities over human lawyers and judges, LegalTech is now being deputised to usher in a new era of 'smart' law built on AI and Big Data. While many bold claims are made about the capabilities of these technologies, comparatively little attention has been directed to more fundamental questions: how we might assess the feasibility of using them to replicate core aspects of legal process, and how we might ensure the public has a meaningful say in their development and implementation.

This innovative and timely research project approaches these questions from a number of angles. At a theoretical level, we consider the likely consequences of this step using a Horizon Scanning methodology developed in collaboration with our Japanese partners and an innovative systemic-evolutionary model of law. Many aspects of legal reasoning have algorithmic features which could lend themselves to automation. However, an evolutionary perspective also points to features of legal reasoning which are inconsistent with machine learning (ML), including the reflexivity of legal knowledge and the incompleteness of legal rules at the point where they encounter the 'chaotic' and unstructured data generated by other social sub-systems. We will test our theory by developing a hierarchical model (or ontology), derived from our legal expertise and publicly available datasets, for classifying employment relationships under UK law. This will let us probe the extent to which legal reasoning can be modelled using less computationally intensive methods such as Markov models and Monte Carlo tree search.
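The hierarchical classification idea can be illustrated in miniature. The sketch below encodes the classic common-law indicia of employment status (personal service, mutuality of obligation, and control) as a small decision hierarchy; the feature names, decision order, and output categories are hypothetical simplifications for illustration, not the project's actual ontology.

```python
# Minimal sketch of a hierarchical (ontology-style) classifier for employment
# status under UK law. Features and decision order are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class WorkRelationship:
    personal_service: bool   # must the individual perform the work personally?
    mutual_obligation: bool  # is there mutuality of obligation?
    employer_control: bool   # does the putative employer control how work is done?


def classify(rel: WorkRelationship) -> str:
    """Walk the hierarchy from the broadest category to the narrowest."""
    if not rel.personal_service:
        # A genuine right of substitution points away from employee/worker status.
        return "self-employed"
    if rel.mutual_obligation and rel.employer_control:
        # All three classic indicia present.
        return "employee"
    # Intermediate 'limb (b) worker' category.
    return "worker"


print(classify(WorkRelationship(True, True, True)))     # employee
print(classify(WorkRelationship(True, False, False)))   # worker
print(classify(WorkRelationship(False, False, False)))  # self-employed
```

A real ontology would of course involve many more factors and weighted, case-law-derived tests rather than hard booleans, but the tree structure is what makes such a model amenable to Markov-model or Monte Carlo tree search analysis.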

Building upon these theoretical innovations, we will then turn our attention from modelling a legal domain using historical data to exploring whether the outcome of legal cases can be reliably predicted using various techniques for optimising datasets. For this we will use a dataset comprising 24,179 cases from the High Court of England and Wales. This will allow us to harness Natural Language Processing (NLP) techniques such as named entity recognition (to identify relevant parties) and sentiment analysis (to analyse opinions and determine the disposition of a party), in addition to identifying the main legal and factual points of the dispute, remedies, costs, and trial durations. By trialling various predictive heuristics and ML techniques against this dataset we hope to develop a more granular understanding of the feasibility of predicting dispute outcomes, and insight into which factors are relevant for legal decision-making. This will allow us to undertake a comparative analysis with the results of existing studies and shed light on the legal contexts and questions where AI can and cannot be used to produce accurate and repeatable results.
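As a toy illustration of the entity extraction step described above, a simple rule-based stand-in for full named entity recognition can pull the parties out of a case name. The pattern and case names below are illustrative assumptions, not the project's actual pipeline.

```python
# Minimal sketch: extracting the parties to a dispute from a case name using a
# regular expression, as a rule-based stand-in for statistical NER.

import re

# English case names typically take the form "<claimant> v <defendant>".
CASE_NAME = re.compile(r"^(?P<claimant>.+?)\s+v\.?\s+(?P<defendant>.+)$")


def extract_parties(case_name: str) -> dict:
    """Return the claimant and defendant, or an empty dict if no match."""
    m = CASE_NAME.match(case_name.strip())
    if not m:
        return {}
    return {"claimant": m.group("claimant"), "defendant": m.group("defendant")}


print(extract_parties("Autoclenz Ltd v Belcher"))
# {'claimant': 'Autoclenz Ltd', 'defendant': 'Belcher'}
```

In practice, statistical NER models handle the far messier party descriptions found in full judgment texts, but this kind of structured extraction is the first step towards the dataset of parties, issues, remedies, costs, and durations described above.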

Planned Impact

Artificial Intelligence research encompasses a broad and ever-expanding array of disciplines including, but not limited to: the computer sciences, mathematics, linguistics, electrical engineering, psychology, neuroscience, economics, and operations research. While we do not profess to make discrete contributions to each of these fields, we believe this project cuts to the core of questions whose answers have implications not only for the use of AI in law, but also for technical research into AI and social-scientific examinations of its societal impact. Specifically, our proposed research examines the central question of how we might identify and define limits or 'red lines' for using AI to replicate core aspects of the legal system, specifically legal adjudication.

With this in mind, our research is likely to have an impact beyond those contexts that we can foresee at the outset. Most immediately, however, we believe that our research will have the most proximate impact on legal scholarship, public policy around the use of AI in law, government innovation and investment strategies, and ongoing regulatory compliance debates. By involving stakeholders from the LegalTech community, intergovernmental organisations, and government ministries, we will receive input from those driving the development of AI, though our research will not be limited to their perspectives. Here we will build on our existing contacts with international organisations including the OECD and ILO, government departments (BEIS and the MoJ in the UK, METI and the MoJ in Japan), and civil society groups.

The transformative potential and promise of AI is a matter of great public interest and concern. As such, we will ensure that our research project includes input and evidence from the public and civil society organisations. Here we will build on a series of public engagement forums hosted by CoI Christopher Markou as part of his Leverhulme Trust postdoctoral fellowship, and supported by the Law Society of England and Wales and the Royal Society for the Arts. These events, scheduled for September 2019, will be hosted at law schools across the UK to educate the public on the implementation of AI and Big Data in legal administration and law enforcement. Public sentiment and concerns will then be fed back into a jointly authored report presented to the UK Ministry of Justice, capped off by a public lecture by Dr Markou for the Cambridge Festival of Ideas. This work will be accompanied by a series of op-eds in major newspapers, blogs, and media spots in print, video, and radio that will help raise the public profile of the project and disseminate its findings.

We will also integrate our dissemination plan with the activities of the Cambridge Trust and Technology Initiative, which is run by Co-Is Jat Singh and Jennifer Cobbe. The Trust and Technology Initiative is a 'big tent', bringing people together, facilitating collaboration, and engaging industry, civil society, government, and the public, across: (i) the relationships and interplays between technology and society; the legal, ethical and political frameworks affecting both trust and technology; and innovative governance in areas such as transport, critical infrastructure, identity, manufacturing, healthcare, financial systems and networks, communications systems, and the internet of things; (ii) the nature of trust and distrust; trust in technology and trust through technology; and the many dimensions of trust at individual, organisational and societal levels; and (iii) rigorous technical foundations for resilient, secure and safe computer systems, including data and communications platforms, artificial intelligence, and robotics.
 
Description Most of the first year of the project was devoted to developing the framework we will use to apply machine learning techniques to legal analysis. An early finding of this phase was that models need to take account of the reflexivity of law, that is, its tendency to construct the object it describes, and the two-way interactions between law and its context which this observation entails. In the second year we made substantial progress on the construction of datasets, and in the third year we have made substantial progress on the analysis phase of the project. We expect to have results to report in the coming months.
Exploitation Route As our initial insights are further developed, we expect the research to have implications for the design of machine learning algorithms in the field of law. We will also create important new datasets.
Sectors Creative Economy; Government, Democracy and Justice

 
Description We are in regular contact with users of AI in the legal field on how our work can be operationalised in practice and expect to have a number of impacts to report in due course.
First Year Of Impact 2020
Sector Education; Financial Services, and Management Consultancy; Government, Democracy and Justice
Impact Types Cultural, Societal, Economic, Policy & public services