A generic transducer-based approach to modelling and verifying infinite-state systems: techniques, applications, and tools

Lead Research Organisation: University of Oxford
Department Name: Computer Science

Abstract

Computers have become so complex today that the likelihood of subtle errors is greater than ever before. Moreover, since computerised systems are ubiquitous (e.g. in planes, railways, and nuclear power plants), the impact of such errors can be far-reaching. In the past such errors have resulted in losses of time, money, and even human lives. The development of (fully automatic) model checking technologies --- pioneered by the recent ACM Turing Award winners Clarke, Emerson, and Sifakis --- has been so influential in minimising the likelihood of subtle errors that model checking has been adopted by major companies including IBM, Intel, Motorola, and NASA. Despite its success, model checking suffers from the inherent state-explosion problem, which remains difficult today even though substantial progress has been made in the past two decades. The past fifteen years have seen an increasing awareness amongst researchers that modelling computerised systems as infinite-state systems is not only more suitable, but might also help circumvent the notorious state-explosion problem. Such a modelling approach views the parameters that cause the state-explosion problem as potentially unbounded or infinite. These include the sizes of arrays, stacks, and queues, integer- or real-valued variables, discrete-time or real-time clocks, and the number of processes in distributed protocols. Instead of the state-explosion problem, such infinite-state abstractions yield undecidability in general. The field of infinite-state model checking aims to develop tools and techniques to deal with this problem. Broadly speaking, approaches to infinite-state model checking can be classified as follows:

1. Restrictions to decidable formalisms.
2. General (undecidable) formalisms, with the aid of decidable semantic restrictions or semi-algorithms.

Most research in infinite-state model checking thus far adopts only one of these approaches without seriously considering the other.
Moreover, little has been done to explore the connections between these two approaches. This is unfortunate, since each approach has disadvantages that can be considerably mitigated only by pursuing both in parallel. This project proposes a hybrid approach that takes the two aforementioned approaches into account simultaneously. The goal is to systematically develop generic tools and techniques for infinite-state model checking, aiming for both sound theoretical foundations and practical applicability. The project focuses on general formalisms inspired by various notions of finite-state transducers, as these are known to be clean, expressive, and highly amenable to theoretical analysis.

Planned Impact

I expect that the impact of the proposed research will be far-reaching and especially noticeable among the following groups:

1. Research communities, including such areas as computer science (e.g. verification, software engineering, and theoretical computer science), operations research, astronomy, and biology. I make a case for this in Academic Beneficiaries. To ensure that researchers benefit from this project, I plan to publish regularly at premier conferences in verification (e.g. CAV and LICS), present annually at workshops in verification (e.g. GAMES) and in the more general area of theoretical computer science (e.g. the UK's BCTCS), collaborate with researchers, and write accessible surveys summarising the outcomes of the proposed research. I will also make my publications freely accessible by putting them on my website. Finally, the software that I develop will benefit other researchers, who will be able to use and build on it for their own research.

2. Industry. Many large companies (e.g. IBM, Intel, Motorola, NASA, SUN Microsystems, and the UK's own Praxis High Integrity Systems) are major consumers of model checking technologies. The proposed research could allow them to fully automatically verify systems that are far more complex than what current model checking technologies can handle, and therefore significantly reduce the likelihood of subtle errors along with their disastrous impact on world stability and global economic performance. During the course of the fellowship, I plan to initiate collaborations with industry. To this end, I plan to present at major verification workshops (e.g. SPIN), which are attended regularly by employees of companies interested in model checking (e.g. IBM, Intel, and NASA). I also plan to present the proposed research annually at the Oxford Computing Laboratory Industrial Day.
Finally, I will also explore commercialisation avenues through the University of Oxford's commercialisation team (ISIS Innovation Ltd). I estimate that the benefits to industrial sectors will be noticeable in five years, and enormous in 10-15 years.

3. Students. The importance of model checking has prompted many universities in the UK (including Edinburgh and Oxford) and abroad to offer courses on the topic as an integral part of their curricula. The proposed research will benefit such courses by offering new solutions to the state-explosion problem, a major topic in model checking. By the end of this project, I plan to develop an advanced course on model checking and write an accessible monograph incorporating material from the proposed project. Through this effort, I estimate that the benefits of the proposed research to university students will be noticeable in five years.

4. Programmers and software developers. The 80/20 rule is a well-known rule of thumb stating that 80% of software development effort is spent on debugging, while the rest is spent on writing code. Researchers believe that model checking will form an important part of software development tools in the near future. The proposed project could therefore significantly help programmers minimise the amount of precious time spent on debugging, which could instead be spent more fruitfully on improving other aspects of their software. In the future, I plan to improve the usability of the software tools developed in this project so that every programmer can benefit from them. I estimate this impact will be noticeable in 10 years.

5. General users of computer-operated systems. Such systems are ubiquitous today (they can be found in planes, railway systems, and nuclear reactors, to name a few). Virtually everyone living in the UK, the USA, and Europe uses such systems on a daily basis. This project is likely to benefit such people and could potentially save lives.
I estimate that such benefits will be noticeable in 10-15 years.
 
Description The aim of the project was to explore the strengths and weaknesses of generic transducer-based formalisms in the verification of infinite-state systems. The project has in general met most of the proposed objectives. For example, we showed that there is a generic semi-algorithm for checking safety and liveness properties for many classes of infinite-state systems (e.g. pushdown systems, tree-rewriting systems, etc.). The theoretical objective was therefore mostly met.
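The general shape of such a semi-algorithm can be sketched as follows. This is a minimal illustration, not the project's actual construction: the string encoding of configurations, the successor function, and the bad-state predicate below are hypothetical placeholders. The search is guaranteed to find a violation of a safety property if one exists, but may diverge on safe infinite-state systems, which is precisely why it is only a semi-algorithm.

```python
from collections import deque

def reachable(init, step, is_bad, max_iters=10000):
    """Semi-algorithm: breadth-first exploration of the configuration graph.
    Returns True if a bad configuration is found; on safe infinite systems
    it would run forever, so we cut off after max_iters configurations."""
    seen = set()
    frontier = deque(init)
    iters = 0
    while frontier and iters < max_iters:
        c = frontier.popleft()
        if c in seen:
            continue
        seen.add(c)
        if is_bad(c):
            return True
        frontier.extend(step(c))  # successor configurations
        iters += 1
    return False  # no bad configuration found within the bound

# Hypothetical example: a counter encoded as a string, "1"*n for value n.
# Each step either increments the counter or (if positive) decrements it.
step = lambda c: [c + "1"] + ([c[:-1]] if c else [])
print(reachable([""], step, lambda c: len(c) == 5))  # prints True
```

In a transducer-based setting, `step` would be given by a finite-state transducer over configuration words, and the interesting technical work lies in accelerating this naive exploration so that it terminates on more systems.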

We also discovered several unexpected techniques along the way for analysing specific classes of infinite-state systems, namely concurrent programs with numeric data types and recursion. This model captures many features that software developers use when programming. The techniques can be classified as "efficient reductions of the analysis of infinite-state systems to SMT solvers". We have shown (in our CAV'11 and CAV'12 papers) that they help make classical techniques more scalable (e.g. we can now verify benchmarks derived from real-world code, including memory resource management code for Linux device drivers) and that they have generated new research (at the time of writing, 40 papers have built on the result). Finally, this technique has also found an application in the analysis of graph databases (published in our ACM TODS 2012 paper).
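The flavour of such a reduction can be sketched in a bounded-model-checking style (an illustrative assumption on my part, not the construction from the papers above): unroll k steps of a toy counter program into an SMT-LIB query, so that satisfiability of the query witnesses reachability of an error state, and the heavy lifting is delegated to an off-the-shelf SMT solver.

```python
def bmc_query(k):
    """Illustrative sketch of a reduction to an SMT solver: encode k steps
    of the toy program  x := 0; while * : x := x + 2  as an SMT-LIB query
    asking whether x == 7 is reachable. Here the query is unsatisfiable,
    since x stays even, i.e. the error state is unreachable in k steps."""
    lines = ["(set-logic QF_LIA)"]
    for i in range(k + 1):
        lines.append(f"(declare-const x{i} Int)")        # state at step i
    lines.append("(assert (= x0 0))")                    # initial state
    for i in range(k):                                   # transition relation
        lines.append(f"(assert (or (= x{i+1} (+ x{i} 2)) (= x{i+1} x{i})))")
    lines.append(f"(assert (= x{k} 7))")                 # error condition
    lines.append("(check-sat)")
    return "\n".join(lines)

print(bmc_query(3))
```

The resulting text can be fed to any SMT-LIB-compliant solver (e.g. Z3); scalability then comes from the solver's engineering rather than from explicit state enumeration.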
Exploitation Route Better methods for analysing software are extremely useful for increasing software quality in general. I believe commercialisation and industry collaboration would help create a greater impact on society and the economy.
Sectors Digital/Communication/Information Technologies (including Software)

Education

Security and Diplomacy

URL http://homepages.inf.ed.ac.uk/v1awidja/publications.html