Trustworthy Robotic Assistants

Lead Research Organisation: University of Liverpool
Department Name: Computer Science


The development of robotic assistants is being held back by the lack of a coherent and credible safety framework. Consequently, robotic assistant applications are confined either to research labs or, in practice, to scenarios where physical interaction with humans is purposely limited, e.g. surveillance, transport or entertainment (e.g. museums).

In the domestic/personal domain, however, interactions take place in an informal, unstructured, and typically highly complex way. Even in more constrained industrial settings, the need for reduced manufacturing costs is motivating the creation of robots capable of much greater flexibility and intelligence. These robots need to work near to, be taught by, and perhaps even interact physically with, human co-workers.

So, how can we enhance robots so that they can participate in sophisticated interactions with humans in a safe and trustworthy manner? This is a fundamental research question that must be addressed before the traditional physical safety barrier between the robot and the human can be removed, which is essential for close-proximity human-robot interactions. How, then, might we establish such safety arguments? Intrinsically safe robots must incorporate safety at all levels (mechanical; control; and human interaction). There has been some work on safety at the lower, mechanical, levels, where movements near humans are severely restricted irrespective of whether those movements are actually "safe". Crucially, no one has yet tackled the high-level behaviours of robotic assistants during interaction with humans, i.e. not only whether the robot makes safe moves, but whether it knowingly or deliberately makes unsafe moves. This is the focus of our project.

Formal verification exhaustively analyses all of the robot's possible choices, but uses a vastly simplified environmental model. Simulation-based testing of robot-human interactions can be carried out in a fast, directed way and involves a much more realistic environmental model, but is essentially selective and does not take into account true human interaction. Formative user evaluation provides exactly this validation, constructing a comprehensive analysis from the human participant's point of view.

It is the aim of our project to bring these three approaches together to tackle the holistic analysis of safety in human-robot interactions. This will require significant research in enhancing each of these very distinct approaches so that they can work together and subsequently be applied in realistic human-robot scenarios. This has not previously been achieved. Developing strong links between the techniques, for example through formal assertions and interaction hypotheses, together with extending the basic techniques to cope with practical robotics, forms the core of our research.

Though non-trivial to achieve, this combined approach will be very powerful. Not only will analysis from one technique stimulate new explorations for the others, but each distinct technique actually remedies some of the deficiencies of another. Thus, this combination provides a new, strong, comprehensive, end-to-end verification and validation method for assessing safety in human-robot interactions.

Planned Impact

Analysis by the International Federation of Robotics (World Robotics 2007) has revealed that over 90% of unit sales are of domestic service robots, e.g., automated vacuum cleaners. However, over 80% of the current and foreseeable future value of the market is in professional service robotics. In short, the first growth in assistive robotics must come from the professional service robotics sector, as opposed to the domestic one, epitomised by the intelligent vacuum cleaner, where profit margin is very low. Consequently, many robotic assistants for everyday situations are actively under development in industry.

Even though an increasing proportion of the UK manufacturing base comprises SMEs, there are few robotics systems available that meet the changing needs of such enterprises. Generic robotic assistants being developed by large industrial entities need to be adapted to an SME's specific requirements. Safety is a fundamental issue here, if the traditional barriers between humans and robots are to be reduced. Our project will have a crucial enabling technological impact in environments in which assistant robots could be exploited, for example flexible manufacturing, occupational therapy, or rehabilitation, and where co-location of humans and robot workers could become commonplace.

Through our wide network of industrial collaborators, we will target more general exploitation. We will organize industrial events to promote and foster the broad area of robot-human interaction. As described in the "Case for Support", plans are in place to engage with broader communities: the BARA AFfR Industrial Days at the TAROS conferences; membership in EURON; links with national knowledge transfer initiatives such as KT-Equal, ESP KTN and Health Technologies KTN; leadership of the UK network for assistive and rehabilitation robotics; and the involvement in the autonomous systems community. In addition to TAROS, we aim to organise events at other academic conferences or industrial symposia, to promote the idea of "safe human-robot interaction" and to exhibit its benefits across both academia and industry. A number of BRL staff are working with the international group engaged in an extensive expansion of international standard ISO/TC 184, to take it beyond standards for industrial robotics to encompass service robotics. Inevitably, and appropriately, safety is the major theme of this new standard. Clearly, those working on this project will be greatly informed by developments in ISO/TC 184, but will also be able to influence its subsequent enhancements over the coming years, with a concomitantly significant international impact. Finally, as the three sites have a broad network of collaborators and contacts, further dissemination will be conducted through extensive engagement within this network.


Amirabdollahian, F (2013) Can You Trust Your Robotic Assistant?

Dennis L (2016) Formal verification of ethical choices in autonomous systems in Robotics and Autonomous Systems

Dennis L (2018) Two-stage agent program verification in Journal of Logic and Computation

Fisher M (2015) Misplaced Trust? in Engineering & Technology Reference

McCabe-Dansted J (2019) Sublogics of a branching time logic of robustness in Information and Computation

Description Reliability and Safety. We conducted formal verification of an autonomous personal care robot, the Care-o-bot, deployed within the University of Hertfordshire's Robot House test facility. We modelled the Care-o-bot's behaviour formally using Brahms, a high-level agent programming language. Using a software tool developed in-house, we translated the Brahms model into Promela, the input language of the model checker Spin. Safety- and mission-critical requirements were translated into LTL and were then formally verified using Spin. This provided a higher level of confidence in the ability of the Care-o-bot to take care of a person in the Robot House. In subsequent work, an expanded set of properties was used to formalise a wider range of requirements, further increasing confidence in the Care-o-bot and further demonstrating the effectiveness of formal verification for human-robot interaction scenarios.
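The Brahms-to-Promela toolchain itself is not reproducible here, but the essence of what Spin does with an LTL safety property can be sketched in a few lines: exhaustively enumerate every reachable state of a (here, deliberately toy) robot model and check that no reachable state violates the invariant. All names and the controller rule below are illustrative, not taken from the actual Care-o-bot model.

```python
from collections import deque

# Illustrative toy model (not the project's actual Care-o-bot model):
# a state is (robot_speed, human_near).
def successors(state):
    """All states reachable in one step: the environment may change
    human_near, and the robot may choose any speed its rule allows."""
    next_states = set()
    for new_near in (True, False):
        for new_speed in ("stop", "slow", "fast"):
            # hypothetical controller rule: never go fast near a human
            if new_near and new_speed == "fast":
                continue
            next_states.add((new_speed, new_near))
    return next_states

def check_invariant(initial, safe):
    """Exhaustively explore all reachable states (as Spin does) and
    return the first state violating the invariant, or None."""
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        if not safe(state):
            return state                    # counterexample found
        for nxt in successors(state) - seen:
            seen.add(nxt)
            queue.append(nxt)
    return None                             # invariant holds everywhere

# LTL safety property [] !(human_near && speed == "fast"),
# checked as a state invariant over the reachable state space.
violation = check_invariant(("stop", False),
                            lambda s: not (s[1] and s[0] == "fast"))
print(violation)  # None: no reachable unsafe state
```

In Spin the same idea is expressed as an `ltl` claim over a Promela model; the point here is only that "formal verification exhaustively analyses all of the robot's possible choices" is literally a reachability search, which is why a simplified environmental model is needed to keep the state space finite.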

Ethics and Trust. We explored the viability of verifying formally whether an autonomous system is capable of behaving ethically. There are many different ethical systems across different societies, and it may be useful for definitions of ethical behaviour to change. A high-level agent programming language, Ethan, was devised to allow autonomous agents to be programmed with additional ethical requirements. Then, an existing agent interpreter and model checker were extended to allow for the formal verification of ethical behaviour. Human-robot interaction studies into trust and trustworthiness gave important insights into how people perceive robots performing different types of tasks with different levels of robot performance.
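Ethan's actual semantics extends an agent programming language and is verified with a model checker, which cannot be reproduced here; but the core idea of selecting among plans annotated with the ethical principles they would violate can be loosely illustrated as follows. The principles, severities, and plan names are all hypothetical.

```python
# Loose illustration (not Ethan's actual semantics): each candidate plan
# is annotated with the ethical principles it would violate, and the
# agent selects the plan whose worst violation is least severe,
# breaking ties by the number of violations.
PRINCIPLE_SEVERITY = {"harm_human": 3, "damage_property": 2, "waste_energy": 1}

def choose_plan(plans):
    """plans: dict mapping plan name -> set of violated principles."""
    def cost(violations):
        worst = max((PRINCIPLE_SEVERITY[v] for v in violations), default=0)
        return (worst, len(violations))
    return min(plans, key=lambda name: cost(plans[name]))

plans = {
    "swerve_into_fence": {"damage_property"},
    "brake_hard":        {"waste_energy"},
    "continue":          {"harm_human"},
}
print(choose_plan(plans))  # brake_hard
```

Because the selection rule is a small, deterministic function of the annotations, a model checker can verify properties such as "a plan violating harm_human is never chosen while a less severe alternative exists" over all reachable agent states, which is the kind of guarantee the Ethan work targeted.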

Verifying Robotic Systems. We have also made progress towards the holistic framework for the V&V of autonomous robotic assistants. As part of the programme of populating the holistic framework, we have carried out testing of algorithms in simulation and formal verification of autonomous decision-making.
Exploitation Route Techniques for the verification of human-robot interaction have been used in subsequent research proposals and are beginning to have an impact on the robotics industry and on the regulatory/standards context.
Sectors Communities and Social Services/Policy, Digital/Communication/Information Technologies (including Software), Other

Description Led on to subsequent EPSRC proposals. Work in this project has helped inform BSI Robot Safety and Robot Ethics standards. Testing techniques developed are being taken up by industry. Findings led to new student projects that will further illuminate issues of safety, reliability, trust, and trustworthiness for robots operating in human-inhabited environments.
First Year Of Impact 2014
Sector Other
Impact Types Policy & public services

Description British Standards Institution: AMT/10 Committee on Robot Safety and Robot Ethics
Geographic Reach National 
Policy Influence Type Membership of a guideline committee
Description Co-Chair of IEEE Technical Committee on the Verification of Autonomous Systems
Geographic Reach Multiple continents/international 
Policy Influence Type Membership of a guideline committee
Description Membership of IEEE P7009 Standards committee on "Failsafe Design of Autonomous Systems"
Geographic Reach Multiple continents/international 
Policy Influence Type Membership of a guideline committee
Description Network on the Verification and Validation of Autonomous Systems
Amount £107,725 (GBP)
Funding ID EP/M027309/1 
Organisation Engineering and Physical Sciences Research Council (EPSRC) 
Sector Public
Country United Kingdom
Start 08/2015 
End 08/2018
Description Verifiable Autonomy
Amount £640,791 (GBP)
Funding ID EP/L024845/1 
Organisation Engineering and Physical Sciences Research Council (EPSRC) 
Sector Public
Country United Kingdom
Start 09/2014 
End 03/2018
Description Formal Methods for Autonomous Systems workshop, Porto 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Formal Methods for Autonomous Systems workshop, Porto
Year(s) Of Engagement Activity 2019,2020