Human-machine learning of ambiguities to support safe, effective, and legal decision making
Lead Research Organisation:
University of Surrey
Department Name: Computing Science
Abstract
Mobile autonomous robots offer huge potential to help humans and reduce risk to life in a variety of potentially dangerous defence and security (as well as civilian) applications. However, there is an acute lack of trust in robot autonomy in the real world - in terms of operational performance, adherence to the rules of law and safety, and human values. Furthermore, poor transparency and lack of explainability (particularly with popular deep learning methods) add to the mistrust when autonomous decisions do not align with human "common sense". All of these factors are preventing the adoption of autonomous robots and creating a barrier to the future vision of seamless human-robot cooperation. The crux of the problem is that autonomous robots do not perform well under the many types of ambiguity that arise commonly in the real world. These can be caused by inadequate sensing information or by conflicting objectives of performance, safety, and legality. Humans, on the other hand, are very good at recognising and resolving these ambiguities.
This project aims to imbue autonomous robots with a human-like ability to handle real-world ambiguities. This will be achieved through the logical and probabilistic machine learning approach of Bayesian meta-interpretive learning (BMIL). In simple terms, this approach uses a set of logical statements (e.g., propositions and connectives) that are akin to human language. In contrast, the popular approach of deep learning uses complex multi-layered neural networks with millions of numerical connections. It is through the logical representation and human-like reasoning of BMIL that it will be possible to encode expert human knowledge into the perceptive "world model" and deliberative "planner" of the robot's "artificial brain". The human-like decision-making will be encoded in a variety of ways: A) By design from operational and legal experts in the form of initial logical rules; B) Through passive learning of new logical representations and rules during intervention by human overrides when the robot is not behaving as expected; and C) Through recognising ambiguities before they arise and active learning of rules to resolve them with human assistance.
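As a flavour of what such logical rules look like, the following is a minimal sketch in Prolog (the language of the project's learning system). The predicates and rules are hypothetical illustrations for the underwater-search setting, not the project's actual rule base.

% Hypothetical illustration only: the predicate names and rules are
% assumptions for the underwater-search setting, not the project's rule base.
:- dynamic diver_nearby/1, collision_risk/1.

% A safety rule of the kind supplied by design (route A): only proceed
% when no diver is in the water nearby and there is no collision risk.
safe_to_proceed(State) :-
    \+ diver_nearby(State),
    \+ collision_risk(State).

% An ambiguity: a sonar contact whose classification confidence is low,
% so it could be the search target or seabed clutter.
ambiguous_contact(State, Contact) :-
    sonar_contact(State, Contact),
    classification_confidence(Contact, C),
    C < 0.5.

% A resolution rule of the kind that could be learned from a human
% override (route B): re-survey an ambiguous contact from a second
% aspect before committing to a classification.
next_action(State, resurvey(Contact)) :-
    ambiguous_contact(State, Contact).

% Example situation: one low-confidence contact, no diver nearby.
sonar_contact(s0, c1).
classification_confidence(c1, 0.3).

With the example facts above, the query ?- next_action(s0, A). yields A = resurvey(c1). Because such rules are close to natural-language statements, operational and legal experts can inspect, author, and amend them directly.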
A general autonomy framework will be developed to incorporate the new approach. It is intended that this will be applicable to all forms of autonomous robots in all applications. However, as a credible and feasible case study, we are focusing our real-world experiments on aquatic applications using an uncrewed surface vehicle (USV) or "robot boat" with underwater acoustic sensors (sonar) for searching underwater spaces. This problem is relevant in several areas of defence and security, including water gap crossing, naval mine countermeasures, and anti-submarine warfare. Specifically, our application focus will be on the police underwater search problem, which has challenging operational goals (i.e., finding small and potentially concealed objects underwater and amidst clutter), as well as considerations for the safety of the human divers and other users of the waterway (e.g., akin to the International Regulations for Preventing Collisions at Sea), and legal obligations relating to preservation of the evidence chain and timeliness due to custodial constraints.
Title | Autonomous Agent Framework
Description | A novel framework for the automatic generation of an autonomous planning agent's model from a specification. The generated model includes a representation of world states and primitive operations on the world state, suitable for the manual definition of high-level agent actions. The framework is compatible with the Planning Domain Definition Language (PDDL) used in automated planning and scheduling systems. An illustrative sketch of such a model is given after this record.
Type Of Material | Computer model/algorithm |
Year Produced | 2024 |
Provided To Others? | Yes |
Impact | The Autonomous Agent Framework is included in a submission to the IJCAI 2024 conference, currently under review. It has only been made public very recently.
URL | https://github.com/JamesTrewern/louise-FSA |
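To make the description above concrete, here is a minimal, hypothetical sketch of the kind of generated model the record refers to, written in Prolog (the project's implementation language) in the STRIPS/PDDL style of preconditions, delete-lists and add-lists. All names are illustrative assumptions and are not taken from the louise-FSA repository.

% Hypothetical sketch: a world state is a list of fluents, and primitive
% operations are defined by preconditions, delete-lists and add-lists.
:- use_module(library(lists)).

% Initial world state.
initial_state([at(usv, dock), battery(full)]).

% operation(Name, Preconditions, DelList, AddList): move between waypoints.
operation(move(A, B),
          [at(usv, A)],   % preconditions
          [at(usv, A)],   % fluents removed
          [at(usv, B)]).  % fluents added

% Apply a primitive operation to a state if its preconditions hold.
apply_op(Op, State, NewState) :-
    operation(Op, Pre, Del, Add),
    subset(Pre, State),
    subtract(State, Del, S1),
    append(Add, S1, NewState).

A query such as ?- initial_state(S0), apply_op(move(dock, wp1), S0, S1). then yields S1 = [at(usv, wp1), battery(full)], and high-level agent actions can be defined manually as compositions of such primitive operations.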
Title | Vanilla-Louise algorithm |
Description | A new Meta-Interpretive Learning algorithm using Second-Order SLD-Resolution, accompanied by a theoretical proof of its inductive soundness and completeness. An illustrative sketch of the metarules at the heart of this approach is given after this record.
Type Of Material | Computer model/algorithm |
Year Produced | 2024 |
Provided To Others? | Yes |
Impact | The algorithm is included in a recent submission to the IJCAI 2024 conference, currently under review for publication. It has only been made available online very recently.
URL | https://github.com/JamesTrewern/louise-FSA |
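For readers unfamiliar with Meta-Interpretive Learning (MIL), the following hypothetical sketch shows its central construct, the metarule: a second-order clause with existentially quantified predicate variables that the learner instantiates during SLD-resolution. The encoding and names below follow common MIL conventions and are not claimed to match the Vanilla-Louise implementation.

% Hypothetical MIL-style sketch, not the Vanilla-Louise source. A metarule
% is a second-order clause: P, Q, R are predicate variables and X, Y, Z are
% first-order variables. Encoded here as metarule(Name, Head, Body).
metarule(identity, [P, X, Y], [[Q, X, Y]]).
metarule(chain,    [P, X, Y], [[Q, X, Z], [R, Z, Y]]).

% During learning, second-order resolution finds substitutions for the
% predicate variables. For example, instantiating 'chain' with
% P = grandparent, Q = parent, R = parent yields the first-order clause:
%
%   grandparent(X, Y) :- parent(X, Z), parent(Z, Y).

Roughly speaking, the soundness and completeness results mentioned above concern this construction: the clauses so derived are correct instances of the metarules, and every hypothesis expressible with the given metarules can be constructed.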
Description | Partnership with Thales |
Organisation | Thales Group |
Department | Thales UK Limited |
Country | United Kingdom |
Sector | Private |
PI Contribution | The aim of our project is to improve the trustworthiness of next-generation autonomous systems in the area of underwater search. Thales UK is committed to being at the forefront of what it anticipates to be the next great revolution in naval technology: maritime autonomy. The Underwater Systems (UWS) division of Thales is interested in the outcomes of the research and its insights into improving autonomous decision-making processes.
Collaborator Contribution | Thales UK will:
- provide industry expertise to the research team through attendance of Thales personnel at steering meetings
- provide expertise relating to the COLREGs and to mine countermeasures (MCM) and anti-submarine warfare (ASW) doctrine
- attend appropriate research workshops
- participate in the research advisory board
Impact | This partnership contributed to the 1st workshop with stakeholders/partners, held on 20th November 2023. Multi-disciplinary areas: 1) AI and robotics; 2) maritime autonomy; 3) expertise relating to the COLREGs and MCM/ASW doctrine (legal aspects).
Start Year | 2023 |
Title | Vanilla-Louise learning system |
Description | A new Meta-Interpretive Learning system using Second-Order SLD-Resolution (an implementation of the Vanilla-Louise algorithm in Prolog).
Type Of Technology | Software |
Year Produced | 2024 |
Open Source License? | Yes |
Impact | This learning system is the basis of the experiments in a recent paper submitted to the IJCAI 2024 conference (currently under review).
URL | https://github.com/JamesTrewern/louise-FSA |
Description | Organising the 1st Stakeholders / Partners Workshop on Human-machine learning of ambiguities to support safe, effective, and legal decision making |
Form Of Engagement Activity | Participation in an activity, workshop or similar |
Part Of Official Scheme? | No |
Geographic Reach | National |
Primary Audience | Industry/Business |
Results and Impact | As part of our EPSRC grant on "Human-machine learning of ambiguities to support safe, effective, and legal decision making", we organised our first large meeting with the project's stakeholders and partners from industry, government agencies and other institutions on 20th November 2023. The attendees included around 20 external project partners and representatives from the project stakeholders (including the Metropolitan Police, Dstl, Thales and BMT).
Agenda:
09:30-10:00 Introduction to the project (Dr Alireza Tamaddoni-Nezhad, PI)
10:00-10:30 Applications and case study (Dr Alan Hunter, CoI)
10:30-11:00 Towards trustworthy autonomous underwater search: case study and simulation (Dr Alfie Treloar, project postdoc)
11:00-11:30 Human-machine learning and integration with simulations: work in progress (Dr Stassa Patsantzis, project postdoc)
11:30-12:15 Initial feedback and discussion (project stakeholders/partners)
12:15-12:30 AI safety in the Surrey Institute for People-Centred AI (Dr Andrew Rogoyski, Director of Partnerships and Innovation, Surrey Institute for People-Centred AI)
12:30-13:30 Working lunch + live demo of the simulation
13:30-14:30 Group discussion/feedback 1 - goals, ambitions and challenges of the project
14:30-15:00 Coffee break
15:00-16:00 Group discussion/feedback 2 - case study applications and how we evaluate the approach
16:00-16:30 Next steps / closing
Year(s) Of Engagement Activity | 2023 |
Description | Presenting at the First International Symposium on Trustworthy Autonomous Systems (TAS) |
Form Of Engagement Activity | Participation in an activity, workshop or similar |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Professional Practitioners |
Results and Impact | Presented a poster on "Towards explainable and trustworthy autonomous underwater search using logic-based machine learning" at the First International Symposium on Trustworthy Autonomous Systems (TAS), 11-12 July 2023, Edinburgh.
Year(s) Of Engagement Activity | 2023 |
URL | https://tas.ac.uk/bigeventscpt/first-international-symposium-on-trustworthy-autonomous-systems/ |