Explaining machine learning by arguing

Lead Research Organisation: Imperial College London
Department Name: Dept of Computing


The lack of transparency of data-centric AI methods, notably machine learning algorithms, is one of the most pressing issues in the field, especially given the ever-increasing integration of AI into everyday systems used by experts and non-experts alike, and the consequent need to explain these systems' outputs. The need for explanations may arise for a number of reasons: an expert may require transparency and interpretability to justify outputs, especially in safety-critical situations, while a non-expert may place more trust in an AI system that provides some form of explanation rather than none. Explainability is also needed to meet legislative requirements: indeed, the General Data Protection Regulation (GDPR) can be interpreted as effectively creating a right to explanation for users of automated decision-making systems.

This project will provide explanations for machine learning methods of various kinds using methods from argumentation in artificial intelligence. It aims to answer the following research questions: Can argumentation frameworks, representing debates, be extracted from the data used to train machine learning models, or synthetically extracted from the trained models themselves? Can these argumentation frameworks serve as a useful medium from which to build a variety of types of explanations that empower humans to use the outputs of the underlying machine learning methods and models with confidence and trust? The project will consider a variety of machine learning methods and settings, starting with settings involving images.
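To make the notion concrete, the following is a minimal sketch of an abstract argumentation framework in the style of Dung (arguments plus an attack relation) and of computing its grounded extension, one standard way of deciding which arguments in a debate are acceptable. The argument names and the attack relation below are purely hypothetical illustrations, not outputs of the project.

```python
def grounded_extension(arguments, attacks):
    """Compute the grounded extension of an abstract argumentation
    framework: the least fixed point of the characteristic function
    F(S) = {a : every attacker of a is attacked by some member of S}."""
    # Map each argument to the set of arguments attacking it.
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}

    def defended(a, s):
        # a is defended by s if each attacker of a is counter-attacked by s.
        return all(any((d, b) in attacks for d in s) for b in attackers[a])

    s = set()
    while True:
        nxt = {a for a in arguments if defended(a, s)}
        if nxt == s:
            return s
        s = nxt

# Hypothetical debate about an image classifier's output:
# "cat" is attacked by "blurry", which is attacked by "high_res".
args = {"cat", "blurry", "high_res"}
atts = {("blurry", "cat"), ("high_res", "blurry")}
print(sorted(grounded_extension(args, atts)))  # ['cat', 'high_res']
```

Here "high_res" is unattacked and so accepted, and it defends "cat" against "blurry"; an explanation built on such a framework could present exactly this chain of attack and defence to a user.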

The project falls within the EPSRC Artificial Intelligence Technologies research area.



Studentship Projects

Project Reference | Relationship | Related To | Start | End | Student Name
EP/R513052/1 | | | 01/10/2018 | 30/09/2023 |
2131679 | Studentship | EP/R513052/1 | 01/10/2018 | 31/03/2022 | Andrea Stylianou