Is there a more interpretable way to train neural networks?

Lead Research Organisation: Aston University
Department Name: Sch of Engineering and Applied Science

Abstract

This proposal describes an alternative algorithm for training neural networks. It outlines research into a technique for identifying class region boundaries in the feature space of a dataset. The proposed algorithm will provide improved interpretability, as it will identify the exact regions of space where decision boundaries exist. By knowing the decision boundaries our network follows, we will be able to reliably predict how the network will react to new inputs, and could even adjust parameters to align the hyperplanes with our goals. This will give us full control over, and insight into, our networks, thus reducing the probability of catastrophic consequences when applying AI to real-world problems.
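As an illustrative sketch (not the proposal's algorithm), the idea of reading off an exact decision boundary can be seen in the linear case: a linear classifier's boundary is the explicit hyperplane w·x + b = 0, so its behaviour on any new input is fully described by the learned parameters (w, b). The synthetic data and perceptron training rule below are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two linearly separable 2-D classes (synthetic data, for illustration).
X0 = rng.normal(loc=[-2.0, 0.0], scale=0.5, size=(50, 2))
X1 = rng.normal(loc=[2.0, 0.0], scale=0.5, size=(50, 2))
X = np.vstack([X0, X1])
y = np.concatenate([-np.ones(50), np.ones(50)])

# Perceptron training: a simple way to obtain the hyperplane (w, b).
w = np.zeros(2)
b = 0.0
for _ in range(100):
    for xi, yi in zip(X, y):
        if yi * (w @ xi + b) <= 0:  # misclassified: nudge the hyperplane
            w += yi * xi
            b += yi

def predict(x):
    # The sign of w . x + b says on which side of the hyperplane x lies,
    # so the decision boundary is exactly the set {x : w . x + b = 0}.
    return np.sign(w @ x + b)
```

For deep networks with piecewise-linear activations the picture is analogous but more intricate: the input space is partitioned into regions, each with its own local hyperplane, which is what makes recovering the boundaries explicitly a research question rather than a lookup.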


Studentship Projects

Project Reference  Relationship  Related To    Start       End         Student Name
EP/R511821/1                                   01/10/2017  21/06/2021
2280862            Studentship   EP/R511821/1  01/04/2019  30/03/2022  Michael Murray