Human-machine learning of ambiguities to support safe, effective, and legal decision making
Lead Research Organisation:
University of Surrey
Department Name: Computing Science
Abstract
Mobile autonomous robots offer huge potential to help humans and reduce risk to life in a variety of potentially dangerous defence and security (as well as civilian) applications. However, there is an acute lack of trust in robot autonomy in the real world, in terms of operational performance, adherence to the rules of law and safety, and human values. Furthermore, poor transparency and a lack of explainability (particularly with popular deep learning methods) add to the mistrust when autonomous decisions do not align with human "common sense". Together, these factors are preventing the adoption of autonomous robots and creating a barrier to the future vision of seamless human-robot cooperation. The crux of the problem is that autonomous robots do not perform well under the many types of ambiguity that arise commonly in the real world, caused by inadequate sensing information or by conflicting objectives of performance, safety, and legality. Humans, on the other hand, are very good at recognising and resolving these ambiguities.
This project aims to imbue autonomous robots with a human-like ability to handle real-world ambiguities. This will be achieved through the logical and probabilistic machine learning approach of Bayesian meta-interpretive learning (BMIL). In simple terms, this approach uses a set of logical statements (e.g., propositions and connectives) that are akin to human language. In contrast, the popular approach of deep learning uses complex multi-layered neural networks with millions of numerical connections. It is through the logical representation and human-like reasoning of BMIL that it will be possible to encode expert human knowledge into the perceptive "world model" and deliberative "planner" of the robot's "artificial brain". The human-like decision-making will be encoded in a variety of ways: A) by design, from operational and legal experts, in the form of initial logical rules; B) through passive learning of new logical representations and rules when human operators override a robot that is not behaving as expected; and C) through recognising ambiguities before they arise and actively learning rules to resolve them with human assistance.
A general autonomy framework will be developed to incorporate the new approach. It is intended that this will be applicable to all forms of autonomous robots in all applications. However, as a credible and feasible case study, we are focusing our real-world experiments on aquatic applications using an uncrewed surface vehicle (USV) or "robot boat" with underwater acoustic sensors (sonar) for searching underwater spaces. This problem is relevant in several areas of defence and security, including water gap crossing, naval mine countermeasures, and anti-submarine warfare. Specifically, our application focus will be on the police underwater search problem, which has challenging operational goals (i.e., finding small and potentially concealed objects underwater and amidst clutter), as well as considerations for the safety of the human divers and other users of the waterway (e.g., akin to the International Regulations for Preventing Collisions at Sea), and legal obligations relating to preservation of the evidence chain and timeliness due to custodial constraints.
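To make the rule-learning idea above concrete: meta-interpretive learning instantiates second-order rule templates ("metarules") against examples and background knowledge. The following is a minimal, purely illustrative Python sketch of instantiating a single "chain" metarule, P(x, y) :- Q(x, z), R(z, y); it is not the project's BMIL system, and the predicate and constant names are hypothetical.

```python
# Illustrative sketch of metarule instantiation (not the project's BMIL
# implementation). Background knowledge and all names are hypothetical.
BACKGROUND = {
    "parent": {("ann", "bob"), ("bob", "cat")},
}

def holds(pred, x, y):
    """True if the fact pred(x, y) is in the background knowledge."""
    return (x, y) in BACKGROUND.get(pred, set())

def learn_chain(examples, preds):
    """Search for predicates (Q, R) such that the chain metarule
    P(x, y) :- Q(x, z), R(z, y) covers every positive example."""
    constants = {c for pairs in BACKGROUND.values()
                 for pair in pairs for c in pair}
    for q in preds:
        for r in preds:
            if all(any(holds(q, x, z) and holds(r, z, y) for z in constants)
                   for (x, y) in examples):
                return (q, r)
    return None  # no instantiation of the metarule covers the examples

# With the positive example grandparent(ann, cat), the search finds the
# rule grandparent(x, y) :- parent(x, z), parent(z, y).
rule = learn_chain({("ann", "cat")}, ["parent"])
```

A full MIL system such as Louise performs this search by second-order SLD-resolution over many metarules at once, with invented predicates and (in BMIL) a Bayesian posterior over candidate programs; this sketch only shows the core idea of filling a rule template from examples.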
Publications
Chaghazardi Z
(2023)
Explainable and Trustworthy Traffic Sign Detection for Safe Autonomous Driving: An Inductive Logic Programming Approach
in Electronic Proceedings in Theoretical Computer Science
Chaghazardi Z
(2025)
Leveraging Inductive Logic Programming and Deep Learning for Trustworthy Vision
Nga Y
(2024)
Automated Recognition of Submerged Body-like Objects in Sonar Images Using Convolutional Neural Networks
in Remote Sensing
Trewern J
(2025)
Meta-Interpretive learning as Second Order Resolution
Varghese D
(2025)
Towards enhancing LLMs with logic-based reasoning
| Description | Doctoral Scholarships Network for AI-Enabled Digital Accessibility (ADA) |
| Amount | £2,150,000 (GBP) |
| Organisation | The Leverhulme Trust |
| Sector | Charity/Non Profit |
| Country | United Kingdom |
| Start | 08/2024 |
| End | 08/2032 |
| Title | Autonomous Agent Framework |
| Description | A novel framework for the automatic generation of an autonomous planning agent's model from a specification. The generated model includes a representation of world states and primitive operations on the world state, suitable for the manual definition of high-level agent actions. The framework is compatible with the Planning Domain Definition Language (PDDL) used in automated planning and scheduling systems. |
| Type Of Material | Computer model/algorithm |
| Year Produced | 2024 |
| Provided To Others? | Yes |
| Impact | The Autonomous Agent Framework is included in a submission to the IJCAI 24 conference, currently under review. It has only been made public very recently. |
| URL | https://github.com/JamesTrewern/louise-FSA |
| Title | Sidescan sonar images for training automated recognition of submerged body-like objects |
| Description | This dataset contains sonar image data collected from various locations in Bath and Bristol, UK. It was collected using an autonomous uncrewed surface vessel (USV) equipped with Blueprint Subsea StarFish 450 and StarFish 990 side scanning sonar. The higher resolution images were used to train convolutional neural networks (CNNs) for autonomous detection of a sunken mannequin, used as a proxy for a drowning victim in missing persons scenarios. |
| Type Of Material | Database/Collection of data |
| Year Produced | 2024 |
| Provided To Others? | Yes |
| Impact | This new approach can be used to improve the accuracy of object detection in the underwater robotic system in our EPSRC project. |
| URL | https://researchdata.bath.ac.uk/id/eprint/1467 |
| Title | Vanilla-Louise algorithm |
| Description | A new Meta-Interpretive Learning algorithm using Second-Order SLD-Resolution, accompanied by a theoretical proof of its inductive soundness and completeness. |
| Type Of Material | Computer model/algorithm |
| Year Produced | 2024 |
| Provided To Others? | Yes |
| Impact | The algorithm is included in a recent submission to the IJCAI 24 conference and is still under review for publication. It has only been made available online very recently. |
| URL | https://github.com/JamesTrewern/louise-FSA |
| Description | Partnership with Thales |
| Organisation | Thales Group |
| Department | Thales UK Limited |
| Country | United Kingdom |
| Sector | Private |
| PI Contribution | The aim of our project is to improve the trustworthiness of next-generation autonomous systems in the area of underwater search. Thales UK is committed to being at the forefront of what it anticipates to be the next great revolution in naval technology: Maritime Autonomy. The Underwater Systems (UWS) division of Thales is interested in the outcomes of the research and its insights into improving autonomous decision-making processes. |
| Collaborator Contribution | Thales will: provide industry expertise to the research team through attendance of Thales personnel at steering meetings; provide expertise relating to COLREGs and to MCM and ASW doctrine; attend appropriate research workshops; and participate in the research advisory board. |
| Impact | This partnership contributed to the 1st workshop with stakeholders/partners, held on 20th Nov 2023. Multi-disciplinary areas: 1) AI & Robotics; 2) Maritime Autonomy; 3) expertise relating to COLREGs and to MCM and ASW doctrine (legal aspects). |
| Start Year | 2023 |
| Title | Meta Interpretive Learning system Prolog2 (in Rust) |
| Description | Prolog2 is an implementation of second-order SLD-resolution, the basis of Meta-Interpretive Learning. It gains efficiency from the compiled nature of the Rust programming language and from avoiding the meta-interpretation used in other MIL approaches. |
| Type Of Technology | Software |
| Year Produced | 2024 |
| Open Source License? | Yes |
| Impact | Prolog2 was developed during this EPSRC project and was used in the machine learning experiments. More details available from the following paper: https://hmlr-lab.github.io/pdfs/Second_Order_SLD_ILP2024.pdf |
| URL | https://hmlr-lab.github.io/epsrc-hmla/software.html |
| Title | Numerical-Symbolic Learning System NumLog |
| Description | NumLog is an Inductive Logic Programming (ILP) system designed for feature range discovery. NumLog generates quantitative rules with clear confidence bounds to discover feature-range values from examples. |
| Type Of Technology | Software |
| Year Produced | 2024 |
| Open Source License? | Yes |
| Impact | NumLog was developed during this EPSRC project and was used in the machine learning experiments. More details available from the following paper: https://hmlr-lab.github.io/pdfs/NumLog_ILP2024.pdf |
| URL | https://hmlr-lab.github.io/epsrc-hmla/software.html |
| Title | Vanilla-Louise learning system |
| Description | A new Meta-Interpretive Learning system using Second-Order SLD-Resolution (implementation of Vanilla-Louise algorithm in Prolog) |
| Type Of Technology | Software |
| Year Produced | 2024 |
| Open Source License? | Yes |
| Impact | Vanilla-Louise was developed during this EPSRC project and was used in the machine learning experiments. More details available from the following paper: https://hmlr-lab.github.io/pdfs/Second_Order_SLD_ILP2024.pdf |
| URL | https://hmlr-lab.github.io/epsrc-hmla/software.html |
| Description | Attending and presenting at the 4th International Joint Conference on Learning and Reasoning (IJCLR 2024) |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | Team members from this EPSRC project attended and presented their accepted research papers at the 4th International Joint Conference on Learning and Reasoning (IJCLR 2024). We received useful feedback from the IJCLR community on our research papers, and one of our papers also received the Best Paper Award. Our bid for IJCLR 2025 was also successful, and our research group will be organising the next IJCLR conference, which will be held in the UK. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.lamda.nju.edu.cn/ijclr24/index.html |
| Description | Dstl Water Search Academia Day at Stanborough Lakes |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Industry/Business |
| Results and Impact | The autonomous search robot boat from our EPSRC project (Bath team) was selected for a demo at the Dstl/Police Search Team Workshop at Stanborough Lakes, and the team's robot successfully found underwater mannequins. This demo led to further discussion and engagement with Dstl. |
| Year(s) Of Engagement Activity | 2024 |
| Description | Organising the 1st Stakeholders / Partners Workshop on Human-machine learning of ambiguities to support safe, effective, and legal decision making |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Industry/Business |
| Results and Impact | As part of our EPSRC grant on "Human-machine learning of ambiguities to support safe, effective, and legal decision making", we organised our first large meeting with the project's stakeholders and partners from industry, government agencies and other institutions on 20th November 2023. The attendees included around 20 external project partners and representatives from the project stakeholders (Metropolitan Police, Dstl, Thales, BMT). Agenda: 09:30-10:00 Introduction to the project (Dr Alireza Tamaddoni-Nezhad, PI); 10:00-10:30 Applications and case study (Dr Alan Hunter, CoI); 10:30-11:00 Towards trustworthy autonomous underwater search: case study and simulation (Dr Alfie Treloar, project postdoc); 11:00-11:30 Human-machine learning and integration with simulations: work in progress (Dr Stassa Patsantzis, project postdoc); 11:30-12:15 Initial feedback and discussion (project stakeholders/partners); 12:15-12:30 AI safety in the Surrey Institute for People-Centred AI (Dr Andrew Rogoyski, Director of Partnerships and Innovation, Surrey Institute for People-Centred AI); 12:30-13:30 Working lunch + live demo of the simulation; 13:30-14:30 Group discussion/feedback 1: goals, ambitions and challenges of the project; 14:30-15:00 Coffee break; 15:00-16:00 Group discussion/feedback 2: case study applications and how we evaluate the approach; 16:00-16:30 Next steps/closing |
| Year(s) Of Engagement Activity | 2023 |
| Description | Presenting at the First International Symposium on Trustworthy Autonomous Systems (TAS) |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | Presented a poster on "Towards explainable and trustworthy autonomous underwater search using logic-based machine learning" at the First International Symposium on Trustworthy Autonomous Systems (TAS), 11-12 July 2023, Edinburgh |
| Year(s) Of Engagement Activity | 2023 |
| URL | https://tas.ac.uk/bigeventscpt/first-international-symposium-on-trustworthy-autonomous-systems/ |