Improving Learning via Reasoning

Lead Research Organisation: University of Oxford
Department Name: Computer Science

Abstract

This project falls within the EPSRC Artificial Intelligence research area.

In recent years, machine learning (ML), and in particular deep learning (DL), has achieved astonishing results in many fields, such as natural language processing (see, e.g., [7]) and computer vision (see, e.g., [6]). Nevertheless, these great success stories have been overshadowed by the discovery of adversarial examples, which showed how easily ML methods, and in particular neural networks (NNs), can be fooled [3].

This has made the need for effective methods, able to design and later verify correct and well-functioning NNs despite their complexity, all the greater. Some early steps have already been taken in both directions. In particular:

1. from the design point of view, in 2012, Diligenti et al. proposed a framework to incorporate first-order logic clauses, which can be seen as an abstract and partial representation of the environment, into kernel machines [1]. Then, in 2017, the same research group presented semantic-based regularization [2]: a framework that bridges the ability of ML to learn from continuous datapoints with the ability to model and learn from the high-level semantic knowledge typical of statistical relational learning.

2. from the verification point of view, most attempts so far check NNs' properties by encoding them into constraint systems: e.g., Huang et al. [4] proposed a verification framework for feed-forward neural networks based on satisfiability modulo theories (SMT), while Katz et al. [5] extended an SMT solver to handle the ReLU activation function.

In this project, our goal is (i) to create a general framework in which it is possible to learn not only from examples, but also from background knowledge, constraints, and specifications (e.g., preconditions, postconditions, inputs, outputs), especially when these are formulated in logic, and/or (ii) to verify the learned system against the available logical information.
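The learning side of this goal can be illustrated by a loss that mixes ordinary data fit with a fuzzy penalty for violating a logic rule, in the spirit of semantic-based regularization [1, 2]. The rule (dog -> animal), the predicate names, and the weighting below are illustrative assumptions, not the formulation of any cited work:

```python
import numpy as np

def constrained_loss(p_animal, p_dog, y_animal, y_dog, lam=1.0):
    """Binary cross-entropy on two predicates plus a penalty for
    violating the rule dog -> animal.

    The fuzzy translation max(0, p_dog - p_animal) is zero exactly when
    the implication holds (p_dog <= p_animal), so the constraint only
    pushes on predictions that break the background knowledge.
    """
    eps = 1e-9  # numerical guard for log(0)
    ce = -(y_animal * np.log(p_animal + eps)
           + (1 - y_animal) * np.log(1 - p_animal + eps)
           + y_dog * np.log(p_dog + eps)
           + (1 - y_dog) * np.log(1 - p_dog + eps))
    penalty = np.maximum(0.0, p_dog - p_animal)
    return ce + lam * penalty
```

A prediction such as (p_animal=0.2, p_dog=0.9) is penalized even when it fits the labels, because it contradicts the rule; (p_animal=0.9, p_dog=0.2) incurs no constraint penalty at all.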

The novelty of this project lies in the integration of deductive and inductive methods. Indeed, even though some work has already been done, the field is still largely unexplored and presents high potential. The overall objective of the project is to create more explainable, reliable, and robust NNs and, more generally, ML methods, so that they can also be applied to safety-critical systems.

Studentship Projects

Project Reference  Relationship  Related To    Start       End         Student Name
EP/N509711/1                                   01/10/2016  30/09/2021
2052861            Studentship   EP/N509711/1  01/10/2018  30/09/2021  Eleonora Giunchiglia