Trusted Autonomous Systems
Lead Research Organisation:
Imperial College London
Department Name: Computing
Abstract
Fully autonomous systems are here. In the past 50 years we have moved rapidly from controlled systems, where the operator has full control over the actions of the system (e.g., a digger), to supervised systems that follow human instructions (e.g., automated sewing machines), to automatic systems performing a series of sophisticated operations without human control (e.g., today's robotic car assembly lines), to autonomous systems. Autonomous systems (AS) are highly adaptive systems that sense the environment and learn to make decisions about their actions, displaying a high degree of pro-activeness in reaching a given objective. They are autonomous in the sense that they do not need the presence of a human to operate, although they may communicate, cooperate, and negotiate with humans or with fellow autonomous systems to reach their goals.

The objective of this fellowship is to develop the scientific and engineering underpinnings for autonomous systems to become part of our everyday lives. There is a clear benefit for society if repetitive or dangerous tasks are performed by machines. Yet there is a perceived resistance in the media and the public at large to increasingly sophisticated technology assisting key aspects of our lives. These concerns are justified: most people have first-hand experience of software and automatic devices not performing as they should; why should they be willing to delegate crucial aspects of their needs to them?

An influential report published by the Royal Academy of Engineering in August 2009, and widely discussed in the media, argues that there is a real danger that these technologies will not be put into use unless urgent questions about their legal, ethical, social, and regulatory implications are addressed. For instance, the report highlights the issue of liability in the case of collisions between autonomous driverless cars. Who should be held responsible? The passenger? The software? The owner? The maker of the vehicle?
Quite clearly, society and government need to engage in a mature debate on several of these issues. However, the report identifies an even more fundamental point: "Who will be responsible for certification of autonomous systems?" [Royal Academy of Engineering, Autonomous Systems, August 2009, page 4]. While there are complex regulatory aspects to this question, its underlying scientific implication is that we, as computer scientists and engineers, urgently need to offer society techniques to verify and certify that autonomous systems behave as they are intended to. To achieve this, four research objectives are identified:

1) The formulation of logic-based languages for the principled specification of AS, including key properties such as fault tolerance, diagnosability, and resilience.
2) The development of efficient model checking techniques, including AS-based abstraction and parametric and parallel model checking, for the verification of AS.
3) The construction and open-source release of a state-of-the-art model checker for autonomous systems, to be used for use-case certifications.
4) The validation of these techniques in three key areas of immediate and mid-term societal importance: autonomous vehicles, services, and e-health.

This fellowship intends to pursue novel techniques in computational logic to answer these technical challenges. Success in these areas will open the way to the verification of AS, and thereby to their certification for mainstream use in society.
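Objective 2 centres on model checking, i.e., exhaustively exploring a system's state space to verify that a specification holds. As a minimal illustration of the idea (the toy vehicle model and all names here are invented for exposition; real tools such as MCMAS use symbolic representations and richer temporal-epistemic logics), a safety property can be checked by explicit-state reachability:

```python
from collections import deque

def check_safety(initial, transitions, is_bad):
    """Breadth-first reachability: return a counterexample path to a
    bad state, or None if the safety property holds on all runs."""
    frontier = deque([(initial, [initial])])
    visited = {initial}
    while frontier:
        state, path = frontier.popleft()
        if is_bad(state):
            return path          # property violated: report the trace
        for nxt in transitions(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None                  # all reachable states are safe

# Toy model (illustrative only): a vehicle must never be 'moving'
# while its sensor reports an obstacle. States are (mode, obstacle).
def transitions(state):
    mode, obstacle = state
    succs = [(mode, not obstacle)]          # environment may toggle obstacle
    if mode == 'stopped' and not obstacle:
        succs.append(('moving', obstacle))  # controller starts only when clear
    if mode == 'moving':
        succs.append(('stopped', obstacle)) # controller may always stop
    return succs

bad = lambda s: s == ('moving', True)
# The checker finds a counterexample: an obstacle can appear mid-motion,
# so this naive controller would fail certification.
print(check_safety(('stopped', False), transitions, bad))
```

A model checker for AS works on the same principle at vastly larger scale, which is why the abstraction and parametric techniques of objective 2 are needed to keep the state space tractable.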
Planned Impact
Autonomous Systems are a rapidly emerging technology with the potential to revolutionise economies and societies, much as the major innovations of the past century (cars, planes, personal computers, the Internet) did. As argued in a recent report by the Royal Academy of Engineering, the technology cannot be deployed unless certification issues are tackled. This proposal intends to contribute significantly to the scientific and technological basis on which certifications can be carried out. The proposed research is therefore intended to have a very significant impact on industry and society as a whole in the mid (3-5 years) to long term.

The immediate non-academic beneficiaries of the project are our certification project partners and the key areas in which they are prominent, i.e., autonomous vehicles, web services, and federated health records. These areas will gain from early exposure to principled certification via model checking and may profit from early adoption of the technology. Should certification standards or recommendations result in any of these areas following work in the project, this would be a very noteworthy development. Even if no complete certification standards arise during the life of the project, given the high prominence of our partners it is likely that the results will directly influence the standards that ultimately emerge. Experts expect the three areas to grow so significantly in the next few years that the project's economic impact may well be very high in each of them.

In addition to the well-defined applications above, the project is intended to contribute more generally to the long-term growth of the UK economy. Specifically, given current expertise and early investment, the UK has the opportunity to become one of the leading global players in the area of Autonomous Systems.
Applications beyond those tackled in the project range from other forms of transport (flight, underwater, rail) to e-government, smart homes for assisted living, and security systems. These are industries with high technological know-how whose early development would enable the UK to lead world markets without serious concern of being undercut by countries with lower manpower costs. Perhaps more importantly than economic considerations, there is a clear benefit for society if AS become part of our everyday life. Many repetitive or dangerous tasks will be delegated to machines, and humans will be able to focus on activities of higher added value. In this sense AS will enhance the standard of living in this country and abroad.

Certification of systems is recognised to be the necessary precondition for this to happen. Research produced in this project will provide some of the basis on which AS certifications can be carried out. Specifically, the formal techniques developed in WP1 and WP2 will be used in training specialised personnel, including students, in the verification of AS. The MCAS model checker released in WP3 will be used in laboratories to verify AS of various kinds. MCAS will be released free of charge to ensure very high user take-up, and it is likely to outlive the project: the open-source release will allow other parties to contribute to it and adapt it for their own particular uses. The certification use cases in WP4 will provide initial certifications in key AS areas: ground vehicles, web services, and federated health records. Each of these has significant economic and societal impact, and the use-case studies will provide a frame of reference for other applications to be treated similarly. Detailed impact of these can be found in the accompanying impact plan.
Organisations
- Imperial College London (Lead Research Organisation)
- Sysbrain (Project Partner)
- University College London (Project Partner)
- Sapienza University of Rome (Project Partner)
- Polish Academy of Sciences (Project Partner)
- University of Southampton (Project Partner)
- IBM Watson Research Centre (Project Partner)
People
- Alessio Lomuscio (Principal Investigator)
Publications
- A. Lomuscio (2015) Verifying Multi-Agent Systems by Model Checking Three-valued Abstractions
- A. Lomuscio (2015) Handbook of Epistemic Logic
- Barker S (2012) Logic Programs, Norms and Action
- Belardinelli F (2012) Automated Verification of Quantum Protocols using MCMAS, in 10th Workshop on Quantitative Aspects of Programming Languages (QAPL12)
- Belardinelli F (2013) Decidability of Model Checking Non-Uniform Artifact-Centric Quantified Interpreted Systems, in 23rd International Joint Conference on Artificial Intelligence (IJCAI13)
- Belardinelli F (2020) Verification of multi-agent systems with public actions against strategy logic, in Artificial Intelligence
Description | We have developed methodologies to help ensure that autonomous systems, including autonomous robots, behave according to their intended specifications. We developed new mathematical methods to address this problem and built a toolkit that can be used for this aim. The project contributed to bringing the issue of AI Safety, i.e., ensuring that AI performs as intended, to the forefront of AI research. |
Exploitation Route | All theoretical results have been published and are available to the community. The toolkit we have developed is released under an open-access licence and can be modified and reused openly by the community. The findings will be used to verify the correctness of a wide range of systems, from autonomous robots to software systems. While the project was successful on its own merits from an academic point of view, the area still requires considerable investment at both the public and the private level. I was encouraged to see that UKRI published a call earlier this year on closely related themes (Trustworthy Autonomous Systems). It is also encouraging that Google, Uber, and many other key companies are now investing heavily in this area. |
Sectors | Aerospace, Defence and Marine; Communities and Social Services/Policy; Creative Economy; Digital/Communication/Information Technologies (including Software); Manufacturing, including Industrial Biotechnology; Security and Diplomacy; Other |
URL | http://vas.doc.ic.ac.uk/ |
Description | The findings from this award have been used in universities and research laboratories in the UK and abroad. Specifically, the theoretical results obtained have enabled several academics to construct sophisticated verification techniques for autonomous systems. The open-source software we released is used worldwide to verify multi-agent systems. The methods developed during this award are regularly taught at Imperial College London to undergraduate and postgraduate students and now form part of their curriculum. A course on these methods was given at Stanford in the summer of 2019. |
First Year Of Impact | 2012 |
Sector | Aerospace, Defence and Marine; Digital/Communication/Information Technologies (including Software); Education; Security and Diplomacy |
Impact Types | Cultural; Policy & Public Services |
Description | LV-Pri20: Logic-based verification of privacy-preservation in Europe's 2020 ICT |
Amount | € 220,000 (EUR) |
Funding ID | 661362 |
Organisation | European Commission |
Sector | Public |
Country | European Union (EU) |
Start | 03/2015 |
End | 04/2017 |
Description | RAEng Chair in Emerging Technologies |
Amount | £1,300,000 (GBP) |
Organisation | Royal Academy of Engineering |
Sector | Charity/Non Profit |
Country | United Kingdom |
Start | 03/2018 |
End | 02/2027 |
Description | UKRI Centre for Doctoral Training in Safe and Trusted Artificial Intelligence |
Amount | £6,865,984 (GBP) |
Funding ID | EP/S023356/1 |
Organisation | Engineering and Physical Sciences Research Council (EPSRC) |
Sector | Public |
Country | United Kingdom |
Start | 03/2019 |
End | 09/2027 |
Description | Verification of Learning-Enabled Cyber-Physical Systems |
Amount | $1,400,000 (USD) |
Organisation | Defense Advanced Research Projects Agency (DARPA) |
Sector | Public |
Country | United States |
Start | 04/2018 |
End | 04/2022 |
Title | MCMAS-P - A parameterised model checker for multi-agent systems |
Description | The software enables the verification of unbounded multi-agent systems modelled in the language provided by the toolkit |
Type Of Technology | Software |
Year Produced | 2014 |
Open Source License? | Yes |
Impact | Verification of several protocols of interest in networking and security. |
URL | http://vas.doc.ic.ac.uk/software/extensions/ |
Title | MCMAS_{PA} - A predicate abstraction model checker for multi-agent systems |
Description | The toolkit provides an experimental predicate abstraction methodology for multi-agent systems |
Type Of Technology | Software |
Year Produced | 2015 |
Open Source License? | Yes |
Impact | Preliminary work - research still ongoing |
URL | http://vas.doc.ic.ac.uk/software/extensions/ |
Description | Participation in World Economic Forum Meetings |
Form Of Engagement Activity | Participation in an open day or visit at my research institution |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Policymakers/politicians |
Results and Impact | I participated in a series of meetings held by the World Economic Forum (Dubai, Geneva) to highlight the need for AI to be assessed and trusted before it is deployed. I reported on our research funded by this award and discussed its societal implications. |
Year(s) Of Engagement Activity | 2014,2015,2016,2017 |
Description | Presentation at World Economic Forum (Geneva) |
Form Of Engagement Activity | A talk or presentation |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Industry/Business |
Results and Impact | Invited talk on risks and opportunities of AI/autonomous systems and how the research we carry out can inform us on this. |
Year(s) Of Engagement Activity | 2015 |
Description | Talk at Foresight 2025 Event at Imperial College London |
Form Of Engagement Activity | A talk or presentation |
Part Of Official Scheme? | No |
Geographic Reach | National |
Primary Audience | Industry/Business |
Results and Impact | Invited talk to Imperial College business partners on the opportunities offered by AI and Autonomous Systems, and on how the research carried out at the College can help mitigate the risks. A workshop followed in which the ideas from the presentation were discussed among delegates. |
Year(s) Of Engagement Activity | 2015 |
URL | http://www.imperialtechforesight.com/ |