Learning, Approximating and Minimising Streaming Automata for Large-scale Optimisation

Lead Research Organisation: University of East Anglia
Department Name: Computing Sciences

Abstract

The proposed research lies at the interface of verification and machine learning, two areas whose interactions are currently attracting considerable attention and promise substantial benefits for both sides.

Verification is the domain of computer science concerned with checking and certifying computer systems. Computer systems are increasingly used at all levels of society and in people's lives, and it is paramount to verify that they behave the way they are designed to and the way we expect (examples of crucial importance, among many others, are embedded software for aircraft autopilots or self-driving cars). Unfortunately, the verification of complex systems encounters limits: there is no universal, fully automated way to verify every system, and one needs to find a good trade-off between the constraints of time, memory space and accuracy, which are often difficult to overcome.

Machine learning has been studied since the 1950s and has regained much attention recently with breakthroughs in speech recognition, image processing and game playing. The development of neural networks (studied since the 1960s) earned Hinton, LeCun and Bengio the 2019 Turing Award, and using deep learning, the British firm DeepMind developed AlphaGo and AlphaGo Zero, impressive steps forward that reaffirmed the remarkable potential of machine learning.

This project proposes to apply learning techniques in verification, both to improve the efficiency of algorithms that certify computer systems and to compute fast, accurate models of real-life systems.

Automata are one of the mathematical tools used in verification to model computer or real-life systems. Certifying these systems often boils down to running algorithms on the corresponding automata, and the efficiency of such algorithms usually depends on the size of the automaton considered. Minimising automata is thus a paramount problem in verification, as a way to verify large computer or real-life systems faster.
This proposal aims to study the minimisation of streaming models of quantitative automata using machine learning techniques. These models are streaming in the sense that the input is not stored but received as a stream of data and processed on the fly, making them particularly suitable for handling big data. They are also well suited to optimisation problems such as minimising the resource consumption of a system or computing the worst-case running time of a program.
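To make the streaming aspect concrete, here is a minimal sketch of evaluating a max-plus automaton over a stream of symbols. The interface (class and parameter names) is hypothetical and not taken from the proposal; it only illustrates the standard semantics: the weight of a word is the maximum, over all runs, of the sum of initial, transition and final weights, and it can be computed on the fly with memory proportional to the number of states, independent of the input length.

```python
NEG_INF = float("-inf")  # zero of the max-plus (tropical) semiring

class MaxPlusAutomaton:
    """Illustrative max-plus (tropical) weighted automaton.

    States are 0..n_states-1; `transitions` maps (state, symbol) to a
    list of (target_state, weight); `initial` and `final` map states to
    their initial/final weights (absent states count as -infinity).
    """
    def __init__(self, n_states, transitions, initial, final):
        self.n = n_states
        self.delta = transitions
        self.initial = initial
        self.final = final

    def run(self, stream):
        # Streaming evaluation: a single weight vector of size n is
        # updated per input symbol, so the input is never stored.
        v = [self.initial.get(q, NEG_INF) for q in range(self.n)]
        for a in stream:
            w = [NEG_INF] * self.n
            for q in range(self.n):
                if v[q] == NEG_INF:
                    continue
                for tgt, wt in self.delta.get((q, a), []):
                    w[tgt] = max(w[tgt], v[q] + wt)  # max-plus product then sum
            v = w
        return max((v[q] + self.final.get(q, NEG_INF) for q in range(self.n)),
                   default=NEG_INF)

# Example: a one-state automaton counting the number of 'a's in the stream.
aut = MaxPlusAutomaton(1, {(0, "a"): [(0, 1)], (0, "b"): [(0, 0)]},
                       initial={0: 0}, final={0: 0})
print(aut.run("aaba"))  # -> 3
```

This kind of one-pass evaluation is what makes such models attractive for big data: the memory footprint depends only on the automaton, which is why minimising the automaton directly reduces the cost of every subsequent query.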

Minimising such automata is highly challenging and is linked to the long-standing open problem of the determinisation of max-plus automata. This proposal sets out several research directions for tackling it, including the use of learning methods.

Planned Impact

The proposed research is at the boundary of verification and machine learning. The study of the interactions between these two fields is currently attracting a lot of attention, as they are potentially hugely beneficial to each other:
- the development of machine learning and the impressive results obtained recently in image processing, speech recognition and game playing give reason to think that learning techniques can be used to speed up verification processes, so as to certify large-scale, complex computer or real-life systems quickly. This project will have immediate impact in this direction.
- it is now commonly accepted that there is an urgent need for models able to verify artificial intelligence systems and make them trustworthy. By carrying out this project, the PI will gain expertise in machine learning with the aim of developing a future project on the verification of artificial intelligence systems.

A first, immediate impact of the project will be academic, as developed in the Academic Impact section of the Case for Support. A longer-term impact will come from applying this research in the two directions mentioned above. The proposed research will open new research directions and long-term projects. The collaborations proposed by the PI with experts in machine learning who have links with industrial partners, as well as the environment she works in at City, in particular her established collaborations with members of the Centre for Software Reliability, provide avenues for increasing the impact of the proposed research.

The Pathways to Impact document details how the impact of the project will be enhanced via dissemination of the results, collaborations with other institutions and organisation of a workshop.
