Trusted Autonomous Systems

Lead Research Organisation: Imperial College London
Department Name: Computing

Abstract

Fully autonomous systems are here. In the past 50 years we have quickly moved from controlled systems, where the operator has full control over the actions of the system (e.g., a digger), to supervised systems that follow human instructions (e.g., automated sewing machines), to automatic systems performing a series of sophisticated operations without human control (e.g., today's robotic car assembly lines), to autonomous systems. Autonomous systems (AS) are highly adaptive systems that sense the environment and learn to make decisions about their actions, displaying a high degree of pro-activeness in reaching a certain objective. They are autonomous in the sense that they do not need the presence of a human to operate, although they may communicate, cooperate, and negotiate with humans or fellow autonomous systems to reach their goals.

The objective of this fellowship is to develop the scientific and engineering underpinnings for autonomous systems to become part of our everyday life. There is a clear benefit for society if repetitive or dangerous tasks are performed by machines. Yet there is a perceived resistance in the media and the public at large to increasingly sophisticated technology assisting key aspects of our lives. These concerns are justified. Most people have first-hand experience of software and automatic devices not performing as they should; why should they be willing to delegate crucial aspects of their needs to them?

In an influential report published by the Royal Academy of Engineering in August 2009 and widely discussed in the media, it is argued that there is a real danger that these technologies will not be put into use unless urgent questions about the legal, ethical, social, and regulatory implications are addressed. For instance, the report highlights the issue of liability in case of collisions between autonomous driverless cars. Who should be held responsible? The passenger? The software? The owner? The maker of the vehicle?
Quite clearly, society and the government need to engage in a mature debate on several of these issues. However, the report identifies an even more fundamental point: "Who will be responsible for certification of autonomous systems?" [Royal Academy of Engineering, Autonomous Systems, August 2009, page 4]. While there are complex regulatory aspects to this question, its underlying scientific implication is that we, as computer scientists and engineers, urgently need to offer society techniques to verify and certify that autonomous systems behave as they are intended to. To achieve this, four research objectives are identified:

1) The formulation of logic-based languages for the principled specification of AS, including key properties such as fault-tolerance, diagnosability, and resilience.
2) The development of efficient model checking techniques, including AS-based abstraction, parametric and parallel model checking, for the verification of AS.
3) The construction and open-source release of a state-of-the-art model checker for autonomous systems to be used for use-case certifications.
4) The validation of these techniques in three key areas of immediate and mid-term societal importance: autonomous vehicles, services, and e-health.

This fellowship intends to pursue novel techniques in computational logic to answer the technical challenges above. Success in these areas will open the way for the verification of AS, and thereby for their certification for mainstream use in society.

Planned Impact

Autonomous Systems is a rapidly emerging technology with the potential to revolutionise economies and societies, much as the major innovations of the past century did: cars, planes, personal computers, the Internet. As argued in a recent report by the Royal Academy of Engineering, the technology cannot be deployed unless certification issues are tackled. This proposal intends to contribute significantly to the scientific and technological basis for such certifications to be carried out. The proposed research is therefore intended to have a very significant impact on industry and society as a whole in the mid (3-5 years) to long term.

The immediate non-academic beneficiaries of the project are our certification project partners and the key areas in which they are prominent, i.e., autonomous vehicles, web services, and federated health records. These areas will gain an advantage from early exposure to principled certification via model checking and may profit from early adoption of the technology. Should certification standards or recommendations result in any of these areas following work in the project, this will be a very noteworthy development. Even if no complete certification standards arise during the life of the project, given the high prominence of our partners, it is likely that the results will directly influence the standards that ultimately arise. Experts expect the three areas to grow so significantly in the next few years that the project's economic impact may well be of very high significance in each of them.

In addition to the well-defined applications above, the project is intended to contribute more generally to the long-term growth of the UK economy. Specifically, given current expertise and early investment, the UK has the opportunity to become one of the leading global players in the area of Autonomous Systems.
Applications beyond those tackled in the project range from other forms of transport (flight, underwater, rail) to e-government, smart homes for assisted living, security systems, etc. These are industries with high technological know-how whose early development would enable the UK to lead world markets without serious concerns of being undercut by countries with lower manpower costs.

Perhaps more importantly than economic considerations, there is a clear benefit for society if AS become part of our everyday life. Many repetitive or dangerous tasks will be delegated to machines, and humans will be able to focus on activities of higher added value. In this sense AS will enhance standards of living in this country and abroad. Certification of systems is recognised to be the necessary precondition for this to happen.

Research produced in this project will provide some of the basis for AS certifications to be carried out. Specifically, the formal techniques developed in WP1 and WP2 will be used in training specialised personnel, including students, for the verification of AS. The MCAS model checker released in WP3 will be used in laboratories to verify AS of various kinds. MCAS will be released free of charge to ensure very high user take-up. MCAS is likely to outlive the project: the open-source release will ensure that other parties will be able to contribute to it and adapt it for their own particular uses. The certification use cases in WP4 will provide initial certifications in key AS areas: ground vehicles, web services, and federated health records. Each of them has significant economic and societal impact. Furthermore, the use-case studies will provide a frame of reference for other applications to be treated similarly. Detailed impact of these can be found in the accompanying impact plan.

Publications


Lomuscio A. (2015) Handbook of Epistemic Logic

Belardinelli F. (2013) Decidability of Model Checking Non-Uniform Artifact-Centric Quantified Interpreted Systems in 23rd International Joint Conference on Artificial Intelligence (IJCAI13)

Belardinelli F. (2012) Automated Verification of Quantum Protocols using MCMAS in Electronic Proceedings in Theoretical Computer Science

Belardinelli F. (2012) Automated Verification of Quantum Protocols using MCMAS in 10th Workshop on Quantitative Aspects of Programming Languages (QAPL12)

Belardinelli F. (2011) Model Checking Temporal-Epistemic Logic Using Alternating Tree Automata in Fundamenta Informaticae

Belardinelli F. (2014) Verification of Agent-Based Artifact Systems in Journal of Artificial Intelligence Research

Belardinelli F. (2012) Interactions between Knowledge and Time in a First-Order Logic for Multi-Agent Systems: Completeness Results in Journal of Artificial Intelligence Research

Belardinelli F. (2010) Interactions between time and knowledge in a first-order logic for multi-agent systems in Proceedings of the International Conference on Knowledge Representation and Reasoning

Boureanu I. (2012) Automatic verification of epistemic specifications under convergent equational theories in 11th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2012): Innovative Applications Track

Cermák P. (2014) Computer Aided Verification

Cermák P. (2018) Practical verification of multi-agent systems against Slk specifications in Information and Computation

De Giacomo G. (2012) Synthesizing agent protocols from LTL specifications against multiple partially-observable environments in 13th International Conference on Principles of Knowledge Representation and Reasoning (KR12)

Gonzalez P. (2013) Model checking GSM-based Multi-Agent Systems in 11th International Conference on Service-Oriented Computing

Jones A. (2012) Group Synthesis for Parametric Temporal-Epistemic Logic in 11th International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS12)

Jones A. V. (2010) Distributed BDD-based BMC for the verification of multi-agent systems in Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS)

Kouvaros P. (2016) Parameterised verification for multi-agent systems in Artificial Intelligence

Lomuscio A. (2015) MCMAS: an open-source model checker for the verification of multi-agent systems in International Journal on Software Tools for Technology Transfer

Lomuscio A. (2014) An abstraction technique for the verification of multi-agent systems against ATL specifications in 14th International Conference on Principles of Knowledge Representation and Reasoning (KR14)

Lomuscio A. (2014) Decidability of model checking multi-agent systems against some EHS specifications in 21st European Conference on Artificial Intelligence (ECAI14)

Lomuscio A. (2010) Towards verifying contract regulated service composition in Autonomous Agents and Multi-Agent Systems

Lomuscio A. (2013) Assume-Guarantee Reasoning with Local Specifications in International Journal of Foundations of Computer Science

Lomuscio A. (2014) Model Checking Unbounded Artifact-Centric Systems in 14th International Conference on Principles of Knowledge Representation and Reasoning (KR14)

Lomuscio A. (2011) Runtime Monitoring of Contract Regulated Web Services in Fundamenta Informaticae

Lomuscio A. (2013) An Epistemic Halpern-Shoham Logic in 23rd International Joint Conference on Artificial Intelligence (IJCAI13)

Kouvaros P. (2013) Automatic Verification of Parametrised Interleaved Multi-Agent Systems in 12th International Conference on Autonomous Agents and Multi-Agent Systems

 
Description We developed methodologies to help ensure that autonomous systems, including autonomous robots, behave according to their intended specifications. We devised new mathematical methods to address this problem and built a toolkit that supports this aim.

The project contributed to bringing the issue of AI Safety, i.e., ensuring that AI performs as intended, to the forefront of AI research.
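The underlying idea of the verification methodology can be illustrated in miniature. The toolkit itself (MCMAS and its extensions) relies on symbolic, BDD-based techniques, but the core principle of model checking a safety specification is exhaustive exploration of a system's reachable state space. The sketch below is a hypothetical, minimal illustration in Python, not code from the toolkit; the function and the toy system are invented for the example.

```python
from collections import deque

def check_invariant(initial, successors, invariant):
    """Explicit-state reachability check: return (True, None) if every
    reachable state satisfies the invariant, else (False, bad_state)."""
    seen = {initial}
    frontier = deque([initial])
    while frontier:
        state = frontier.popleft()
        if not invariant(state):
            return False, state          # counterexample state found
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return True, None                     # invariant holds on all reachable states

# Toy system: a controller counter that must always stay below 10;
# its transitions increment by 1 or 2, wrapping modulo 10.
ok, bad = check_invariant(
    initial=0,
    successors=lambda s: [(s + 1) % 10, (s + 2) % 10],
    invariant=lambda s: s < 10,
)
# ok is True: no reachable state violates the bound
```

In practice the state spaces of autonomous systems are far too large for such explicit enumeration, which is why the project pursued symbolic, parametric, and abstraction-based techniques instead; the sketch only conveys what "verifying a system against a specification" means operationally.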
Exploitation Route All theoretical results have been published and are available to the community. The toolkit we developed is released under an open-source licence and can be modified and reused freely by the community.

The findings will be used to verify the correctness of a wide range of systems ranging from autonomous robots to software systems.

While the project was successful on its own merits from an academic point of view, the area still requires considerable investment at both public and private levels. I was encouraged to see that UKRI published a call earlier this year on closely related themes (Trustworthy Autonomous Systems). It is also encouraging that Google, Uber, and many other key companies are now investing heavily in this area.
Sectors Aerospace, Defence and Marine,Communities and Social Services/Policy,Creative Economy,Digital/Communication/Information Technologies (including Software),Manufacturing, including Industrial Biotechnology,Security and Diplomacy,Other

URL http://vas.doc.ic.ac.uk/
 
Description The findings from this award have been used in universities and research laboratories in the UK and abroad. Specifically, the theoretical results obtained have enabled several academics to construct sophisticated verification techniques for autonomous systems. The open-source software we released is used worldwide to verify multi-agent systems. The methods developed during this award are regularly taught at Imperial College London to undergraduate and postgraduate students and now form part of their curriculum. A course on these methods was given at Stanford in the Summer of 2019.
First Year Of Impact 2012
Sector Aerospace, Defence and Marine,Digital/Communication/Information Technologies (including Software),Education,Security and Diplomacy
Impact Types Cultural,Policy & public services

 
Description LV-Pri20: Logic-based verification of privacy-preservation in Europe's 2020 ICT
Amount € 220,000 (EUR)
Funding ID 661362 
Organisation European Commission 
Sector Public
Country European Union (EU)
Start 04/2015 
End 04/2017
 
Description RAEng Chair in Emerging Technologies
Amount £1,300,000 (GBP)
Organisation Royal Academy of Engineering 
Sector Charity/Non Profit
Country United Kingdom
Start 03/2018 
End 02/2027
 
Description UKRI Centre for Doctoral Training in Safe and Trusted Artificial Intelligence
Amount £6,865,984 (GBP)
Funding ID EP/S023356/1 
Organisation Engineering and Physical Sciences Research Council (EPSRC) 
Sector Public
Country United Kingdom
Start 03/2019 
End 09/2027
 
Description Verification of Learning-Enabled Cyber-Physical Systems
Amount $1,400,000 (USD)
Organisation Defense Advanced Research Projects Agency (DARPA) 
Sector Public
Country United States
Start 05/2018 
End 04/2022
 
Title MCMAS-P - A parameterised model checker for multi-agent systems 
Description The software enables the verification of unbounded multi-agent systems modelled in the language provided by the toolkit 
Type Of Technology Software 
Year Produced 2014 
Open Source License? Yes  
Impact Verification of several protocols of interest in networking and security. 
URL http://vas.doc.ic.ac.uk/software/extensions/
 
Title MCMAS_{PA} - A predicate abstraction model checker for multi-agent systems 
Description The toolkit provides an experimental predicate abstraction methodology for multi-agent systems 
Type Of Technology Software 
Year Produced 2015 
Open Source License? Yes  
Impact Preliminary work - research still ongoing 
URL http://vas.doc.ic.ac.uk/software/extensions/
 
Description Participation in World Economic Forum Meetings 
Form Of Engagement Activity Participation in an open day or visit at my research institution
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Policymakers/politicians
Results and Impact I participated in a series of meetings held by the World Economic Forum (Dubai, Geneva) to highlight the need for AI to be assessed and trusted before it is deployed. I presented the research funded by this award and discussed its societal implications.
Year(s) Of Engagement Activity 2014,2015,2016,2017
 
Description Presentation at World Economic Forum (Geneva) 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Industry/Business
Results and Impact Invited talk on risks and opportunities of AI/autonomous systems and how the research we carry out can inform us on this.
Year(s) Of Engagement Activity 2015
 
Description Talk at Foresight 2025 Event at Imperial College London 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Industry/Business
Results and Impact Invited talk to Imperial College business partners on the opportunities offered by AI systems and Autonomous Systems and how the research carried out at the College can help mitigate the risks. A workshop followed in which the ideas from the presentation were discussed among delegates.
Year(s) Of Engagement Activity 2015
URL http://www.imperialtechforesight.com/