Neuro-Symbolic AI Systems for Safe Autonomous Driving

Lead Research Organisation: University of Oxford
Department Name: Computer Science

Abstract

This project falls within the EPSRC "Artificial Intelligence and Robotics" research area and aims to improve the situational awareness of autonomous driving systems, ultimately leading to safer and more trustworthy models. Autonomous driving systems rely heavily on neural networks to process visual data and, while these are powerful tools, they remain black boxes that often make unpredictable decisions, which could easily lead to disastrous outcomes in practice. In this research project, I will focus on two limitations of neural networks, data greediness and the lack of reasoning capabilities, and discuss approaches that address them by using logic as a means of incorporating domain knowledge into the networks.

Ideally, we would like our neural network systems to guarantee the satisfaction of safety requirements capturing domain knowledge and, at the same time, to use less annotated data. Ensuring that neural networks satisfy requirements is a long-standing problem in AI, but only recently has it regained interest from the research community. Prior work on neuro-symbolic integration showed that equipping neural networks with reasoning capabilities via logical constraints allows the constraints to guide learning, so that the resulting models comply with the constraints, and also enables efficient learning from smaller annotated datasets. However, existing neuro-symbolic methods were designed for small, synthetic datasets and do not scale to more complex, real-world scenarios such as object detection for autonomous driving.

To address these two challenges in the context of autonomous driving, this project proposes an approach for embedding logical constraints into the loss function that scales to object detection scenarios, together with a way of tackling the data greediness problem using logical constraints. The proposed approaches will integrate logical constraints into neural networks not only to guide the learning process but also to correct the network's predictions.
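As a minimal sketch of how a logical constraint can become a penalty term in the loss (the function names, probabilities, and the 0.1 weight below are illustrative assumptions, not the project's actual method), a rule such as "every detected car has a wheel" can be relaxed with a product t-norm, so that violating the rule raises the loss smoothly:

```python
def implication_penalty(p_ante, p_cons):
    """Penalty for violating the rule "antecedent -> consequent".

    Under a product t-norm relaxation, the truth of a -> b is
    1 - a * (1 - b), so the violation per detection is a * (1 - b);
    we average the violations over all detections.
    """
    violations = [a * (1.0 - b) for a, b in zip(p_ante, p_cons)]
    return sum(violations) / len(violations)

# Hypothetical detector outputs: per-box probability that the box is a
# car, and that a wheel was detected inside it.
p_car = [0.9, 0.2, 0.7]
p_wheel = [0.95, 0.1, 0.3]

detection_loss = 0.5  # stand-in for the usual object detection loss
total_loss = detection_loss + 0.1 * implication_penalty(p_car, p_wheel)
```

The penalty vanishes exactly when every box that is confidently a car also confidently contains a wheel, so gradient descent on the combined loss pushes predictions toward satisfying the constraint.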

Additionally, many constraints concerning the safety of autonomous vehicles and of other traffic participants can be expressed as linear inequalities. The project therefore also aims to develop methods that steadily build up the expressivity of the supported constraints in order to capture more complex requirements.
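For instance (a hedged sketch; the variable names and the containment constraint are invented for illustration), a linear requirement a·x <= b over network outputs x can be enforced softly with a hinge penalty that is zero whenever the inequality already holds:

```python
def linear_inequality_penalty(x, a, b):
    """Hinge penalty max(0, a·x - b) for the linear constraint a·x <= b.

    The penalty is zero when the prediction x satisfies the inequality
    and grows linearly with the size of the violation.
    """
    lhs = sum(ai * xi for ai, xi in zip(a, x))
    return max(0.0, lhs - b)

# Hypothetical containment constraint: the left edge of a predicted
# wheel box must not lie left of its car box's left edge, i.e.
# x_car_left - x_wheel_left <= 0 (normalised image coordinates).
x = [0.30, 0.25]      # [x_car_left, x_wheel_left]
a = [1.0, -1.0]
penalty = linear_inequality_penalty(x, a, 0.0)
```

Because the penalty is piecewise linear in the outputs, it composes cleanly with standard gradient-based training, and conjunctions of such inequalities can be handled by summing their individual penalties.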

Publications


Studentship Projects

Project Reference  Relationship  Related To    Start       End         Student Name
EP/T517811/1                                   01/10/2020  30/09/2025
2595519            Studentship   EP/T517811/1  01/10/2021  31/03/2025