# Homotopy Type Theory: Programming and Verification

Lead Research Organisation: University of Strathclyde

Department Name: Computer and Information Sciences

### Abstract

The cost of software failure is truly staggering. Well known

individual cases include the Mars Climate Orbiter failure

(£80 million), Ariane Rocket disaster (£350 million), Pentium

Chip Division failure (£300 million), and more recently the Heartbleed

bug (est. £400 million). There are many, many more examples. Even worse,

failures such as one in the Patriot Missile System and another

in the Therac-25 radiation therapy machine have cost lives. More generally, a

2008 study by the US government estimated that faulty

software costs the US economy £100 billion

annually.

There are many successful approaches to software verification

(testing, model checking, etc.). One approach is to find mathematical

proofs that guarantee software correctness. However, the

complexity of modern software means that hand-written mathematical

proofs can be untrustworthy and this has led to a growing desire for

computer-checked proofs of software correctness.

Programming languages and interactive proof systems like Coq, Agda,

NuPRL and Idris have been developed based on a formal system called

Martin-Löf Type Theory. In these systems, we can not only write

programs, but we can also express properties of programs using types,

and write programs to express proofs that our programs are correct.

In this way, both large mathematical theorems such as the Four Colour

Theorem, and large software systems such as the CompCert C compiler

have been formally verified. However, in such large projects, the

issue of scalability arises: how can we use these systems to build large

libraries of verified software in an effective way?
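The propositions-as-types discipline these systems share can be seen in a small example. The following sketch is illustrative only (rendered here in Lean rather than any system named in the proposal): the type of the theorem is itself the correctness property, and the proof is a program checked by the machine.

```lean
-- A verified program: list append, together with a machine-checked
-- proof that the length of an appended list is the sum of the lengths.
def app : List Nat → List Nat → List Nat
  | [],      ys => ys
  | x :: xs, ys => x :: app xs ys

-- The type of this theorem *is* the correctness property.
theorem app_length (xs ys : List Nat) :
    (app xs ys).length = xs.length + ys.length := by
  induction xs with
  | nil => simp only [app, List.length_nil]; omega
  | cons x xs ih => simp only [app, List.length_cons, ih]; omega
```

If the definition of `app` were wrong, the proof would simply fail to type-check; correctness is established once and for all, not sampled by testing.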

This is related to the problem of reusability and modularity: a

component in a software system should be replaceable by another which

behaves the same way even though it may be constructed in a completely

different way. That is, we need an "extensional equality" which is

computationally well behaved (that is, we want to run programs using

this equality). Finding such an equality is a fundamental and

difficult problem which has remained unresolved for over 40 years.
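To see concretely why extensional equality matters for modularity, consider two interchangeable components. In the sketch below (again in Lean, as an illustration; the names `double₁` and `double₂` are ours), two definitions behave identically but are constructed differently, and function extensionality is what licenses swapping one for the other.

```lean
-- Two implementations of the same behaviour, constructed differently:
def double₁ (n : Nat) : Nat := n + n
def double₂ (n : Nat) : Nat := 2 * n

-- Pointwise, they agree on every input...
theorem double_agree (n : Nat) : double₁ n = double₂ n := by
  simp only [double₁, double₂]; omega

-- ...and function extensionality upgrades pointwise agreement to an
-- equality of the functions themselves, so either may replace the other.
theorem double_eq : double₁ = double₂ :=
  funext double_agree
```

In Lean, `funext` happens to be provable, but in many core intensional type theories extensional principles must be added as axioms without computational meaning, which is precisely the long-standing gap described above.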

But now it looks like we might have a solution! Fields medallist

Vladimir Voevodsky has come up with a completely different take on the

problem by thinking of equalities as paths such as those which occur

in one of the most abstract branches of mathematics, namely homotopy

theory, leading to Homotopy Type Theory (HoTT). In HoTT, two objects

are completely interchangeable if they behave the same way. However,

most presentations of HoTT involve axioms which lack computational

justification and, as a result, we do not have programming languages

or verification systems based upon HoTT. The goal of our project is

to fix that, thereby developing the first of a new breed of HoTT-based

programming languages and verification systems, and develop case

studies which demonstrate the power of HoTT to programmers and

those interested in formal verification.
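The principle that makes "interchangeable if they behave the same way" precise is Voevodsky's univalence axiom, which identifies equalities between types with equivalences between them:

$$ (A =_{\mathcal{U}} B) \simeq (A \simeq B) $$

Transporting a program along such an equality should itself compute to a program; it is exactly this computational reading that axiomatic presentations of univalence lack, and that the project aims to supply.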

We are an ideal team to undertake this research because we have i)

unique skills and ideas ranging from the foundations of HoTT to the

implementation and deployment of programming language and verification

tools; and ii) the active collaboration of the most important figures

in the area (including Voevodsky) as well as industrial participation

to ensure that we keep in mind our ultimate goal -- usable programming

language and verification tools.


### Planned Impact

In the short term, we expect impact in the following areas:

1. EPSRC has a goal of growing its research in programming languages

and program verification, and this research contributes directly -

and immediately - to this goal. This research also contributes

immediately to both the overall aim and the three objectives of

the EPSRC-funded Grand Challenge 6 "Dependable Systems Evolution". The

research also contributes indirectly to the EPSRC themes of the

Digital Economy, Global Uncertainty and cyber security by increasing

trust in software by establishing machine-checked mathematical proofs

guaranteeing correct program behaviour.

2. Further impact will be generated by producing the first HoTT-based programming

language which, unlike current languages, allows programming with

quotients and supports reusability and modularity by using an extensional but

computationally well-behaved equality. This will be clearly demonstrated via a

case study concerning Units of Measure, a feature within

Microsoft's commercial language F#.

In the long term, we expect impact in the following areas:

3. Because of the huge cost of software failures as

detailed in the proposal, there is an emerging industry surrounding

formally verified software. This research contributes to this area

by aiming to build the first of a new breed of more powerful

verification environments with better ability to scale to large

systems due to HoTT's increased support for reusability and modularity.

4. The need for trust in results across the sciences

means that there will be a growing interest in formal verification

of scientific results. By developing more powerful systems for

formal verification, our research will contribute to this goal.


### Organisations

- University of Strathclyde, United Kingdom (Lead Research Organisation)
- University of St Andrews, United Kingdom (Project Partner)
- Institute for Advanced Study (Project Partner)
- University of San Diego (Project Partner)
- University of Paris Diderot (Paris 7) (Project Partner)
- Chalmers University of Technology, Sweden (Project Partner)
- Carnegie Mellon University, United States (Project Partner)
- Microsoft Research Ltd, United Kingdom (Project Partner)

### Publications

- Ghani N (2015) *Bifibrational Functorial Semantics of Parametric Polymorphism* in Electronic Notes in Theoretical Computer Science
- Ghani N (2016) *A List of Successes That Can Change the World*
- Altenkirch T (2016) *Type theory in type theory using quotient inductive types*
- Altenkirch T (2016) *Type theory in type theory using quotient inductive types* in ACM SIGPLAN Notices
- McBride C (2016) *A List of Successes That Can Change the World*
- Allais G (2017) *Type-and-scope safe programs and their proofs*
- Lindley S (2017) *Do be do be do*
- McBride C (2018) *Everybody's Got To Be Somewhere* in Electronic Proceedings in Theoretical Computer Science
- Allais G (2018) *A type and scope safe universe of syntaxes with binding: their semantics and proofs* in Proceedings of the ACM on Programming Languages

### Description

2018: This year we found that the traditional model based upon cubical sets is insufficient for a model of higher-dimensional parametricity. We had to expand the traditional model with new maps between faces and edges to increase the amount of symmetry.

2019: Ghani's work on parametricity continues, with a new model linking full-scale higher-dimensional parametricity to HoTT and cubical models; this work is approaching publication. McBride's work on type theory foundations and their formalisation continues, with new techniques for term representation that give strong information about relevance, and a new approach to proving basic safety properties of type theories that interleave type synthesis and type checking. In addition, Observational Type Theory now has a cubical presentation; exploration of adding univalence to this theory continues.

2020: Further work was published on using HoTT in ordinal systems.

Exploitation Route: Via papers

Sectors: Digital/Communication/Information Technologies (including Software)

URL: https://www.youtube.com/watch?v=W5-ulP_JzNc