Rule of Law in the Age of AI: Principles of Distributive Liability for Multi-Agent Societies

Lead Research Organisation: Cardiff University
Department Name: Sch of Psychology

Abstract

The UK and Japan appeal to similar models of subjectivity when categorizing legal liability. Rooted historically and philosophically in the figure of a human actor capable of exercising free will within a given environment, this model ascribes legal liability to human agents imagined as autonomous and independent. However, recent advances in artificial intelligence (AI) that augment the autonomy of artificial agents, such as autonomous driving systems, social robots equipped with artificial emotional intelligence, and intelligent surgery or diagnosis assistance systems, challenge this traditional notion of agency while presenting serious practical problems for determining legal liability within networks of distributed human-machine agency. For example, if an accident arises from cooperation between a human and an intelligent machine, current legal theory offers no clear way to distribute liability. Legal theory assumes that the autonomous human agent should bear responsibility for the accident; yet in human-intelligent machine interaction, human subjectivity itself is influenced by the behavior of intelligent machines, according to findings from cognitive psychology, the critical theory of subjectivity, and the anthropology of science and technology.
This lack of transparent and clear distributive principles of legal liability may hamper the healthy development of a society in which human dignity and technological innovation can advance together, because no one can trust the behavior and quality of a machine that may cause bodily or lethal injury without a workable legal liability regime.
Faced with this challenge, which is caused and will be aggravated by the proliferation of AI in the UK and Japan, the objective of our study is to clarify the distributive principles of legal liability in the multi-agent society and to propose legal policy that establishes the rule of law in the age of AI, enabling us to construct the "Najimi society" in which humans and intelligent machines can cohabit, with sensitivity to the cultural diversity of the formation of subjectivity.
In order to achieve this objective, we have created three interrelated and collaborative research groups:
Group 1: A Law-Economics-Philosophy group that proposes a stylized model for analyzing and evaluating multi-agent situations, based on dynamic game theory connected to the philosophy of the relativity of human subjectivity. Drawing on both the quantitative and qualitative data supplied by the other groups, and with support from experienced legal practitioners and policy makers, this group works out the distributive principles of legal liability and the legal policy needed for the rule of law in the age of AI.
Group 2: A Cognitive Robotics, Human Factors and Cognitive Psychology group that implements computer simulations and psychological experiments to capture data on human interaction and performance with, as well as attitudes towards and experience of, intelligent machines, in this case (simulated) autonomous vehicles. The outputs of this group will test the validity of the first group's model and provide it mainly with quantitative data on subjectivity, helping to construct a more reliable model and workable legal principles and policies.
Group 3: A Cultural Anthropology group that engages in comparative ethnographic fieldwork on human-robot relations in Japan and the UK to better account for the cultural variability of distributed agency within differing social, legal, and scientific contexts. The output of this group will aid the interpretation of the quantitative data and allow the first group to remain sensitive to this diversity.
Through the inherently transdisciplinary and international cooperation described above, our project will contribute to making UK and Japanese society more adaptive to emerging technology by clarifying the legal regime.

Planned Impact

Our novel, cutting-edge, cross-cultural research will have significant social impact domestically, internationally and globally, making our current society more adaptable to the many emerging intelligent technologies. There are multiple pathways to impact:
1. Publish our academic achievements as articles in high-impact, influential international journals. Such publications will increase the likelihood that our proposed behavioural findings, legal system and fundamental theory will be adopted by other researchers and policy makers.
2. Social promotion of our academic project. To gain public acceptance and understanding of our project, we will hold public symposia and other outreach activities. We will frame these activities so that the audience's experience helps us to promote the project. Several research members of our project have long been engaged in outreach that helps the public understand the potential benefits and costs of emerging technology. For example, some have participated as panelists in public forums on potential accidents caused by human-machine interaction. Others have held philosophy cafés and workshops that give citizens a chance to reflect on their perceptions of humans and machines and to develop a critical view of emerging technology and society. Individually, these activities may not seem to have a huge impact on society, but a continuous effort to include the public in scientific discussion should have considerable social impact in the long term. Building on the achievements of our research, we will continue these activities to make a social impact.
3. Interactive education with engineers who build intelligent machines in the private and public sectors. Our cultural anthropology team has rich experience communicating with front-line engineers to sharpen their critical sense when designing interactive machines that may influence the subjectivity of their users. The team also holds workshops to reflect critically on the influence of diverse interactions between humans and machines. These activities can change the mindset of engineers on site and thereby have a potentially large social impact on the actual practice of building and designing intelligent machines. We are confident that the academic achievements of our research project will enlarge and support these activities, helping the whole of society become more adaptive to emerging technologies. For instance, the Cardiff group will act via the various research centres and groups that they lead or co-lead, which have strong industrial connections with over 40 external organisations.
4. Create a continuously operating international platform between the UK and Japan to exchange stimulating ideas among participants. We will build an international platform that may become the future foundation of international academic activity and policy making regarding intelligent machines. Our research team is already inherently international, and further cooperation with international researchers and practitioners will not only enrich our research project but also increase its social impact. We are considering holding an annual international research conference or workshop in order to bring together international AI and related experts with diverse backgrounds.
5. Propose concrete policies regarding the legal regulation and legal framework of intelligent machines, initially amongst the firms we are connected with, such as AXA, Burges Salmon, Nagashima, Nishimura and Asahi partners, Ohno, and Tsunematsu, as well as the Japan Ministry of Economy, Trade and Industry, together representing many thousands of employees and connections. As our project includes experienced legal practitioners and policy makers, we can draw on their knowledge to design new laws and a workable legal system.
 
Description Period 2022-2023
Significant progress has been made this past year across three main research themes:
(1) Anthropomorphising Autonomous Vehicles (AVs) using humanoid robots and manipulations of dialogue style (explainable AI element): 3 experiments (>600 participants). This work has already resulted in 2 IEEE international conference papers, with another to be submitted (IEEE) by 31st March 2023, a journal paper underway, and discussions with collaborators in Japan (JST) on a cross-country experiment.
(2) Human-centric Cyber Security Aspects of AVs: 1 experiment (>100 participants so far) with another 1-2 in the design phase. This has already resulted in an international conference paper (AHFE 2023, July 2023) and discussions with collaborators in Japan (JST) on a possible cross-country experiment.
(3) Perceptions of Risk Avoidance Tolerances of Autonomous Vehicles, i.e. measuring human trust (and other perceptions) in AVs when an extraordinary action can be taken to try to avoid an inevitable accident caused by a third party. The first experiment has been designed and will soon be deployed, with others planned, and we are in discussions with collaborators in Japan about a possible cross-country experiment.
We have submitted a journal article (Transportation Research Part A); although almost 11 months have passed, no reviews have come back yet despite regular follow-up. We have another manuscript ready to submit; both are based on research conducted during Years 1 and 2.
We have attended two international conferences (IEEE Ro-Man 2022; IEEE IROS 2022) and presented some of our cutting-edge work on theme (1) above.
With covid-19 restrictions relaxed, we have been able to visit collaborators in Japan (Kyoto, Osaka, Doshisha; Oct-Nov 2022) for multiple workshops, presentations, brainstorming sessions and discussions regarding 2022-2023 research priorities and future funding. We will return in Mar-Apr 2023 to continue these activities and link up with other research groups and universities, collaborating with our colleagues in Japan (including the University of Tokyo).

Period 2021-2022
The focus of the work is the attribution of blame in autonomous vehicle accidents. Like other researchers, we found that, generally speaking, autonomous vehicles are blamed more than human-driven vehicles, despite the equivalence of the circumstances in which an accident occurred. Our findings depart from the norm by showing that this effect is contingent on the circumstances of the accident. Results from four studies are consistent with the finding that observers engage in capability discounting: blame of and trust in autonomous vehicles are shaped by their perceived capabilities in the driving context in which the accident occurs. The less the prevention of an accident hinges on reaction speed and accuracy, the less likely the autonomous vehicle is to be judged to outperform a human driver. We showed this to be true using two methods, one involving the manipulation of driving style (cautious, normal or sporty) and the other manipulating causal cue strength.

These manipulations have now been used on our Japan sample and the results are to hand, marking another cross-cultural study within this project. We have found differences between the UK and Japan samples, but the interpretation of the results has yet to be agreed across the research teams. We hope that this will be resolved before our first in-person visit to Japan, planned for ~Sep-Oct 2022 (noting that we have held many virtual workshops since the start of the covid-19 pandemic).

We have one new manuscript, based on some of the above findings, submitted to a top-tier journal, and another close to completion, based on cross-cultural studies with our collaborators in Japan.


Period 2020-2021
At Cardiff, work has centred on three studies, each employing the net-based presentation of accident vignettes and measuring their impact on blame and trust. The work compared how observers' blame and trust following an accident (or near accident) vary between autonomous and human-driven vehicles. The key finding from the first study was that participants applied double standards when assigning blame to humans and autonomous systems: an autonomous system was usually blamed more than a human driver for executing the same actions under the same circumstances with the same consequences. These findings not only have important implications for AI-related legislation but also highlight the need to promote the design of robots and other automation systems that can help calibrate public perceptions and expectations of their characteristics and capabilities.

These findings apply to the Cardiff sample; results from the Japan sample, collected using identical methods and materials, are being analysed. Two further studies have been run at Cardiff that follow up several features of the first study. The results of these studies are still being processed.

Now that Covid restrictions are about to be lifted, we will begin studies of simulated accidents produced using our simulator. We plan for these to be both lab-based and net-based. Data collection will begin in March 2021.
Exploitation Route New online paradigms have been developed that can be used and further developed to measure trust and blame attribution in autonomous driving scenarios, involving e.g. humanoid robot informational assistants, cyber security aspects, and AV risk tolerances. Some of these paradigms are being considered by our collaborators at Japanese universities, with some experiments translated into Japanese and plans for data collection. There is clear potential for further, wider cross-cultural work to take place.
Sectors Digital/Communication/Information Technologies (including Software),Education,Environment,Government, Democracy and Justice,Manufacturing, including Industrial Biotechnology,Security and Diplomacy,Transport

 
Description Emerging (early) impact baseline work: a paper based on the first study was presented at the AHFE 2021 conference and won the best paper award in the Human Factors in Transportation stream: Zhang, Qiyuan, Wallbridge, Christopher D., Jones, Dylan M. and Morgan, Phil 2021. The blame game: double standards apply to autonomous vehicle accidents. Presented at: AHFE 2021 Virtual Conference on Human Aspects of Transportation, Virtual, 25-29 July 2021. Advances in Human Aspects of Transportation. Lecture Notes in Networks and Systems. Springer, Cham, pp. 308-314. 10.1007/978-3-030-80012-3_36. Other key papers include: Wallbridge, C. D., Marcinkiewicz, V., Zhang, Q. and Morgan, P. 2022. Towards anthropomorphising autonomous vehicles: speech and embodiment on trust and blame after an accident. Presented at: Robot Trust for Symbiotic Societies (RTSS) at IROS 2022, Kyoto, Japan, 23-27 October 2022. Zhang, Q., Wallbridge, C., Morgan, P. L. and Jones, D. M. 2022. Using simulation-software-generated animations to investigate attitudes towards autonomous vehicles accidents. Procedia Computer Science 207, pp. 3516-3525. (10.1016/j.procs.2022.09.410) Marcinkiewicz, V., Wallbridge, C. D., Zhang, Q. and Morgan, P. 2022. Integrating humanoid robots into simulation software generated animations to explore judgments on self-driving car accidents. Presented at: IEEE Ro-Man 2022 Conference, Naples, Italy, 29 August - 2 September 2022. We are interacting with other organisations (e.g. BAE Systems, Airbus, National Highways, SPIRENT) who have expressed interest in our methods and techniques and have visited our laboratories, with a key focus on the transportation zone (driving simulator etc.).
We are returning to Japan in March-April 2023 and, as well as holding multiple workshops with project collaborators there, we plan to meet two other research groups at universities in Kyoto, two other research groups at universities in Tokyo, and quite possibly other research centres (details being confirmed). We contributed to the Law Commission's review of the Comprehensive Regulatory Framework for Self-Driving Vehicles.
First Year Of Impact 2021
Sector Government, Democracy and Justice,Transport
Impact Types Policy & public services

 
Description AHFE paper (virtual) New York 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Zhang, Q., Wallbridge, C., Jones, D. M., & Morgan, P. (2021). The blame game: Double standards apply to autonomous vehicle accidents. AHFE 2021, New York (virtual).
Year(s) Of Engagement Activity 2021
 
Description Conference presentation and attendance 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact IEEE IROS Conference October 2022 - Kyoto, Japan, >2500 attendees. Our paper was in a special workshop on Trust in Robots - https://www.trustworthyrobots.eu/rtss-workshop/. More than 40 people attended the talk from academia, industry, government and third sector. Lots of discussions e.g. future collaboration.
Year(s) Of Engagement Activity 2022
URL https://www.trustworthyrobots.eu/rtss-workshop/
 
Description IEEE Ro-Man Conference 2022 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Ro-Man Conference October 2022 - Naples, Italy, >1500 attendees. Our paper was in a special workshop on Trust, Acceptance and Social Cues in Human-Robot Interaction - http://scrita.herts.ac.uk/2022/. More than 60 people attended the talk from academia, industry, government and third sector. Lots of discussions e.g. future collaboration.
Year(s) Of Engagement Activity 2022
URL http://scrita.herts.ac.uk/2022/
 
Description Paris Conference paper (virtual) 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Jones, D., Morgan, P., Zhang, Q., & Wallbridge, C. (2020). Autonomy: Legal and psychological perspectives. ICRA2020. Workshop: How will autonomous robots and systems influence society. Paris (virtual).
Year(s) Of Engagement Activity 2020
 
Description Workshops + Presentations + Lab Visits in Kyoto and Osaka 
Form Of Engagement Activity A formal working group, expert panel or dialogue
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact The UK (ESRC) project team held workshops with the Japan (JST) team to discuss progress and plans across all project work packages as well as discussing ideas for future projects based on this one.
Year(s) Of Engagement Activity 2022