COHERENT: COllaborative HiErarchical Robotic ExplaNaTions
Lead Research Organisation: King's College London
Department Name: Informatics
Abstract
For robots to build trustworthy interactions with users, two aspects will be crucial over the next decade: first, the ability to produce explainable decisions that combine reasons from all levels of the robotic architecture, from low to high; and second, the ability to communicate such decisions effectively and to re-plan in real time, during execution, in response to new user input.
COHERENT will develop a novel framework to combine explanations originating at the different robotic levels into a single explanation. This combination is not unique and may depend on several factors, including the current step in the action sequence or the temporal importance of each information source. Robotic tasks are interesting because they entail performing a sequence of actions, so the system must also be able to deliver these explanations during task execution, either because the user requests one or proactively when an unforeseen situation occurs. COHERENT will propose effective evaluation metrics tailored to the special case of explanations in HRI systems. The proposed measures, based on trustworthiness and acceptance, will be defined alongside repeatable benchmark tasks that enable comparison of results across different explainability developments.
We will demonstrate our framework for hierarchical explanation components through a manipulation task of assisting a human to fold clothes. Cloth manipulation is a very rich example that requires considering bi-manual manipulation, environmental constraints, and perception of textiles for state estimation. The robot may even ask the user to help with difficult actions by providing relevant information, so opportunities for interaction are numerous. We will build on previous results on cloth manipulation to develop explainable machine learning techniques for the perception, learned-movement, task-planning and interaction layers, based on a novel generic representation, the Cohesion Graph, that is shared across the layers. The COHERENT framework will be integrated into the standard planning system ROSPlan to increase its visibility and adoption.
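As a purely illustrative sketch of merging per-layer reasons (a hypothetical Python toy, not the Cohesion Graph representation itself; all class, function and field names are invented), each layer's explanation fragment could be weighted by how close it is to the current step of the action sequence and by how recent its supporting evidence is, with the highest-scoring fragments joined into a single explanation:

```python
from dataclasses import dataclass

@dataclass
class Fragment:
    layer: str        # e.g. "perception", "motion", "task_planning" (invented labels)
    step: int         # index of the plan step the reason refers to
    timestamp: float  # seconds since the task started
    text: str         # natural-language reason produced by that layer

def combine(fragments, current_step, now, recency_halflife=10.0, top_k=3):
    """Rank fragments by step relevance and recency, then join the best ones."""
    def score(f):
        step_relevance = 1.0 / (1.0 + abs(f.step - current_step))
        recency = 0.5 ** ((now - f.timestamp) / recency_halflife)
        return step_relevance * recency
    ranked = sorted(fragments, key=score, reverse=True)
    return "; ".join(f"{f.layer}: {f.text}" for f in ranked[:top_k])

if __name__ == "__main__":
    frags = [
        Fragment("task_planning", 2, 4.0, "folding the sleeve next keeps the garment flat"),
        Fragment("perception", 2, 11.5, "the sleeve edge was detected with high confidence"),
        Fragment("motion", 1, 2.0, "the previous grasp succeeded"),
    ]
    print(combine(frags, current_step=2, now=12.0))
```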
Publications
Canal G
(2023)
Generating predicate suggestions based on the space of plans: an example of planning with preferences
in User Modeling and User-Adapted Interaction
Canal G
(2021)
Are Preferences Useful for Better Assistance? A Physically Assistive Robotics User Study
in ACM Transactions on Human-Robot Interaction
Coles A
(2024)
Planning and Acting While the Clock Ticks
in Proceedings of the International Conference on Automated Planning and Scheduling
Izquierdo-Badiola S
(2022)
Improved Task Planning through Failure Anticipation in Human-Robot Collaboration
Izquierdo-Badiola S
(2024)
PlanCollabNL: Leveraging Large Language Models for Adaptive Plan Generation in Human-Robot Collaboration
Krarup B
(2024)
Explaining Plan Quality Differences
in Proceedings of the International Conference on Automated Planning and Scheduling
Mariasin S
(2024)
Evaluating Distributional Predictions of Search Time: Put Up or Shut Up Games (Extended Abstract)
in Proceedings of the International Symposium on Combinatorial Search
Olivares-Alarcos A
(2024)
Ontological modeling and reasoning for comparison and contrastive narration of robot plans
| Description | This project was in the area of AI planning and explanations for human-robot co-operation. In short, robots can use AI planners to decide what to do, and when to do it, in order to achieve their goals; but when robots co-operate with people, plans must account for the user's and the robot's capabilities and preferences, and must be explainable to users so they are satisfied with how the goals are being achieved. The three key thematic areas in which this project has substantially pushed the envelope in terms of scientific capability are: - Work on the fundamentals of AI planning has developed novel approaches to the planning necessary for human-robot co-operation: planning that is sensitive to the passage of time and to the capabilities and operational/user preferences of humans and robots (see the sketch after this entry). - Work on explainability in human-robot co-operation has revealed insights into when users want explanations and what forms of explanation they want. - Work on explainable planning has led to techniques that combine our planning capabilities with these insights into explanations from a human perspective, to generate explanations of plans for users at the right time. |
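As a minimal toy illustration of the first point above (not the project's planner; the names and numbers are invented), when deliberation draws on the same clock as execution, a plan that is quick to find but slightly longer to execute can be the only one that still meets a deadline:

```python
# Hypothetical toy: planning time and execution time share the same clock,
# so a slower-to-find "better" plan can end up missing the deadline.
def meets_deadline(plan_time, exec_time, deadline):
    """A plan only succeeds if planning plus execution fit before the deadline."""
    return plan_time + exec_time <= deadline

DEADLINE = 10.0
candidates = {
    "optimal_but_slow_to_find": {"plan_time": 6.0, "exec_time": 5.0},
    "greedy_found_instantly": {"plan_time": 0.5, "exec_time": 8.0},
}

for name, c in candidates.items():
    ok = meets_deadline(c["plan_time"], c["exec_time"], DEADLINE)
    print(f"{name}: {'succeeds' if ok else 'misses the deadline'}")
```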
| Exploitation Route | All of our developed planning software and evaluation benchmarks have been published and are available for others to use. We have received an Impact Acceleration award from KCL to fund 12 months of the PI's and Co-I's time to work with an industrial partner to apply the findings of this project in a commercial setting, as a pathway towards impact. |
| Sectors | Aerospace, Defence and Marine; Digital/Communication/Information Technologies (including Software); Manufacturing, including Industrial Biotechnology |
| Description | Advancing Impact in AI Planning and Robotics |
| Amount | £49,145 (GBP) |
| Organisation | King's College London |
| Sector | Academic/University |
| Country | United Kingdom |
| Start | 03/2025 |
| End | 03/2026 |
| Description | PhD Studentship through the CDT in Safe and Trusted AI |
| Amount | £77,681 (GBP) |
| Organisation | King's College London |
| Sector | Academic/University |
| Country | United Kingdom |
| Start | 09/2021 |
| End | 09/2025 |
| Description | COHERENT Project Collaboration |
| Organisation | Polytechnic University of Catalonia |
| Country | Spain |
| Sector | Academic/University |
| PI Contribution | My team at KCL is leading developments in explainable AI planning and robotics, complementing the skills of the other project partners. |
| Collaborator Contribution | The project partners are focusing on explainability in other areas of AI; and on user studies for human-robot interaction. |
| Impact | Outcomes are still in development. |
| Start Year | 2021 |
| Description | COHERENT Project Collaboration |
| Organisation | University of Naples |
| Country | Italy |
| Sector | Academic/University |
| PI Contribution | My team at KCL is leading developments in explainable AI planning and robotics, complementing the skills of the other project partners. |
| Collaborator Contribution | The project partners are focusing on explainability in other areas of AI; and on user studies for human-robot interaction. |
| Impact | Outcomes are still in development. |
| Start Year | 2021 |
| Title | Planner for Planning and Acting while the Clock Ticks |
| Description | Planner developments to support the paper 'Planning and Acting while the Clock Ticks' |
| Type Of Technology | Software |
| Year Produced | 2024 |
| Impact | Production of a paper, and feed-in to follow-on grant proposal. |
| Title | Planning system developments |
| Description | Developments to a state-of-the-art temporal planning system, to enable it to work effectively on problems with diverse action costs, as seen in mixed human-robot teams where costs represent the pros and cons of assigning tasks to humans or robots (see the sketch after this entry). |
| Type Of Technology | Software |
| Year Produced | 2022 |
| Impact | In progress, leading to paper output and licensing discussions. |
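As a hypothetical illustration of diverse action costs in a mixed human-robot team (a toy task-allocation sketch, not the temporal planner itself; all task names and cost values are invented), the same task can carry a different cost depending on who performs it, and the cheapest overall assignment should be preferred:

```python
from itertools import product

# Invented example costs: lower means the assignment is preferred
# (e.g. a human folds a sleeve more easily; a robot flattens cloth tirelessly).
COST = {
    ("fold_sleeve", "human"): 1.0, ("fold_sleeve", "robot"): 3.0,
    ("flatten_cloth", "human"): 2.0, ("flatten_cloth", "robot"): 1.5,
}

def best_assignment(tasks, agents):
    """Exhaustively choose an agent for each task to minimise the total cost."""
    best, best_cost = None, float("inf")
    for choice in product(agents, repeat=len(tasks)):
        total = sum(COST[(task, agent)] for task, agent in zip(tasks, choice))
        if total < best_cost:
            best, best_cost = dict(zip(tasks, choice)), total
    return best, best_cost

print(best_assignment(["fold_sleeve", "flatten_cloth"], ["human", "robot"]))
```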
