Taming Concurrency

Lead Research Organisation: Newcastle University
Department Name: Sch of Computer Science


Computer programs are notoriously difficult to perfect (everyone has experienced some form of inconvenience from "bugs") and, even where this situation is beginning to come under control, it is at enormous cost. One reason for the high development cost of software is that errors made early in the design of a system can lie undetected until that system is tested - or, even worse, used by customers. Correcting errors at such late stages is extremely expensive because so much work has to be repeated.

So-called "formal methods" were first deployed on safety-critical software but are becoming more and more cost-effective because their use throughout design can drastically reduce the "scrap and rework" that comes from late detection of design mistakes. Formal methods make this possible because they use formal notations for specifying what should be built and thus offer a notion of correctness for each design step. Verifying design decisions becomes a proof process that can be supported by appropriate theorem-proving software.

Unfortunately, just as real progress is being made (both with general engineering practices and with the application of formal methods), separate commercial developments are increasing the challenges enormously. The general direction of the new difficulties is "concurrency". Concurrent programs have to run in contexts that interfere with their progress. Concurrency can come from a desire for better performance, from the use of embedded programs in conjunction with physical devices such as cars and planes, or from the use of the latest hardware designs that are built from many processors. Traditional engineering approaches, and even many modern automatic tools for detecting errors in software, will not suffice for a world of massive concurrency because the number of execution paths is astronomically large when processes can interfere with each other.
Fortunately, there is research (in which UK researchers are at the forefront) on reasoning both about where interference is absent and about how it can be constrained. These research avenues, however, need to be brought together, and tool support must be implemented, before they are usable by engineers. These are the expected outputs of "Taming Concurrency".
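The combinatorial blow-up behind the phrase "astronomically large" can be made concrete with a short back-of-the-envelope sketch (an illustration for this summary, not part of the project's tooling): the number of distinct interleavings of independent threads is a multinomial coefficient, which grows far faster than any exhaustive testing strategy could cope with.

```python
from math import factorial

def interleavings(step_counts):
    """Number of distinct instruction interleavings of threads whose
    per-thread atomic step counts are given (a multinomial coefficient)."""
    result = factorial(sum(step_counts))
    for n in step_counts:
        result //= factorial(n)
    return result

# Even tiny programs explode: two threads of 10 atomic steps each
# already admit 184,756 distinct schedules ...
print(interleavings([10, 10]))   # 184756
# ... and ten such threads admit more schedules than could ever
# be enumerated, let alone tested.
print(interleavings([10] * 10))
```

This is why the summary argues that path-by-path error detection cannot scale to massive concurrency, and why compositional reasoning about interference is needed instead.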

Planned Impact

The proposed research will bring immediate benefits to research itself; future benefits to those who develop programs in industry or institutions; and long-term benefits to all users of computers. These are considered and justified in turn.

Success in this research would bring together several strands of thought on the most appropriate ways to develop concurrent programs. This is a field where UK researchers are already at the forefront of research but the synergy that would result from a unification of rely/guarantee thinking with separation logic could be another major advance on which we could all build. The same payoff would result from a link between atomicity refinement and linearisability methods. Furthermore, although a top-down approach is to be used as an initial guide, the concepts should also provide useful targets for bottom-up methods and tool support.

More tractable methods for developing complex concurrent programs will have a strong impact on the work of development programmers, who are already struggling with concurrency and will have to tackle this difficult area ever more often, not least because of the move to massively parallel (many-core) hardware. The methods developed in this project will be made easier to adopt in industry by the development of tool support within a work package of the project.

Once the new methods are more widely used, the quality of concurrent software will be higher (i.e. fewer bugs). Furthermore, the performance of future systems will improve because program designers will be able to tackle the design of concurrent systems with greater confidence. This will be beneficial to all users of computers.
Description See the new publications, particularly the joint paper with Hayes on "possible values".

We are about to submit a further (conference) paper on Expressivity and Compositionality.

We are now well into "Taming Concurrency" (see first year comments below).
1) A paper (Jones/Yatapanage 2015) has been published on the link between Separation Logic and Rely/Guarantee thinking.
2) Together with colleagues from the University of Queensland, we are working on a more algebraic presentation of Rely/Guarantee reasoning.
3) A new "possible values" notation has been formalised (see Jones/Hayes 2016).
4) The Technical Report (1425) is being mechanised in Isabelle.
5) Dr Hongjin Liang from Suzhou University spent two months working with us; we hope to build on this to establish a bridge with Prof Xinyu Feng's group there.
6) Prof Ian Hayes (Queensland) spent a six-month sabbatical with us.
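The "possible values" notation in item 3 addresses the fact that, under interference, successive reads of a shared variable may observe different values, so an assertion about "the" current value of a variable is not well defined. A tiny enumeration (a hypothetical illustration of the underlying phenomenon, not the notation's formal semantics) makes the point:

```python
from itertools import permutations

def observed_values(reader_reads, writer_writes):
    """Enumerate every interleaving of one reader thread (performing
    `reader_reads` reads of shared x) with one writer thread (performing
    `writer_writes` increments of x, starting from 0), and return the
    set of value tuples the reader can observe."""
    steps = ['R'] * reader_reads + ['W'] * writer_writes
    results = set()
    for order in set(permutations(steps)):
        x, seen = 0, []
        for s in order:
            if s == 'W':
                x += 1          # interference from the writer
            else:
                seen.append(x)  # the reader observes the current value
        results.add(tuple(seen))
    return results

# Two reads racing with two increments: the reader may see any
# non-decreasing pair drawn from {0, 1, 2}.
print(sorted(observed_values(2, 2)))
```

A possible-values style specification talks about this whole set of observable values at once, rather than pretending each variable has a single stable value during execution.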

The initial phase of this ongoing project has already exposed a new way of looking at the key issue of separation in the design of concurrent programs.

Furthermore, existing research on documenting and reasoning about another crucial issue, interference, has been completely recast in a new, more algebraic frame.
Exploitation Route We have good contacts in industry and are presenting at venues where the results will be appreciated.
In particular, the FM conferences are well attended by a mix of participants from industry and academia.
Sectors Aerospace, Defence and Marine; Digital/Communication/Information Technologies (including Software); Security and Diplomacy; Transport

URL http://www.ncl.ac.uk/computing/research/project/4519
Description Discovery Grant
Amount $880,286 (AUD)
Funding ID DP190102142 
Organisation Australian Research Council 
Sector Public
Country Australia
Start 07/2019 
End 06/2022
Title Archive of Formal Proofs 
Description BibTeX entry for the formal proof development:

@article{Concurrent_Ref_Alg-AFP,
  Abstract = {The concurrent refinement algebra developed here is designed to
    provide a foundation for rely/guarantee reasoning about concurrent
    programs. The algebra builds on a complete lattice of commands by
    providing sequential composition, parallel composition and a novel weak
    conjunction operator. The weak conjunction operator coincides with the
    lattice supremum providing its arguments are non-aborting, but aborts if
    either of its arguments do. Weak conjunction provides an abstract version
    of a guarantee condition as a guarantee process. We distinguish between
    models that distribute sequential composition over non-deterministic
    choice from the left (referred to as being conjunctive in the refinement
    calculus literature) and those that don't. Least and greatest fixed
    points of monotone functions are provided to allow recursion and
    iteration operators to be added to the language. Additional iteration
    laws are available for conjunctive models. The rely quotient of processes
    c and i is the process that, if executed in parallel with i, implements
    c. It represents an abstract version of a rely condition generalised to a
    process.},
  Author  = {Julian Fell and Ian J. Hayes and Andrius Velykis},
  For     = {080309 (Software Engineering)},
  Issn    = {2150-914x},
  Journal = {Archive of Formal Proofs},
  Month   = dec,
  Note    = {\url{http://isa-afp.org/entries/Concurrent_Ref_Alg.shtml},
    Formal proof development},
  Project = {RGT},
  Seo     = {890299 (Computer Software and Services not elsewhere classified)},
  Title   = {Concurrent Refinement Algebra and Rely Quotients},
  Year    = 2016}
Type Of Material Improvements to research infrastructure 
Year Produced 2016 
Provided To Others? Yes  
Impact AFP is widely accessed - but not tracked 
URL http://isa-afp.org/entries/Concurrent_Ref_Alg.shtml
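The weak conjunction operator described in the AFP abstract above can be sketched in a toy model (our own simplification for exposition, not the Isabelle formalisation): take a command to be a set of permitted behaviours plus an abort flag, so that for non-aborting commands weak conjunction is just the intersection of what both commands permit, while an aborting argument poisons the result.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Cmd:
    behaviours: frozenset   # e.g. (initial, final) state pairs the command permits
    aborts: bool = False

def weak_conj(c, d):
    """Weak conjunction in the toy model: for non-aborting arguments it
    behaves like a supremum (intersection of permitted behaviours); if
    either argument aborts, the result aborts -- mirroring the abstract."""
    if c.aborts or d.aborts:
        return Cmd(frozenset(), aborts=True)
    return Cmd(c.behaviours & d.behaviours)

# A specification conjoined with a guarantee-like restriction keeps
# only the behaviours both permit:
spec = Cmd(frozenset({(0, 2), (3, 2), (1, 5)}))
guar = Cmd(frozenset({(0, 2), (1, 5), (2, 7)}))   # e.g. "x never decreases"
print(weak_conj(spec, guar).behaviours)           # behaviours both permit
print(weak_conj(spec, Cmd(frozenset(), aborts=True)).aborts)
```

This captures the abstract's informal reading of weak conjunction as "an abstract version of a guarantee condition as a guarantee process": conjoining a guarantee prunes the specification's behaviours without itself adding any.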