Fulfilling humans' right-to-explanation by integrating machine learning
Lead Research Organisation: University of Aberdeen
Department Name: Computing Science
Abstract
Advances in machine learning (ML) are transforming our society. As more and more machine-learnt models become work colleagues to humans (loan applications, for example, are processed mainly by algorithms, with humans called in only occasionally), humans expect improved access to models, particularly to their inner workings. New regulatory regimes all over the world are introducing humans' 'right-to-explanation'. This means, for example, that a customer whose loan application has been turned down could ask for an explanation. Evidently, new research is required to investigate computational techniques for explaining the inner workings of machine-learnt models to humans. This project aims to bring together techniques from natural language generation (NLG), machine learning and information visualization (InfoVis).
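To make the loan example concrete, the kind of explanation such a right might entitle a customer to could be produced by verbalising a counterfactual with simple template-based NLG. The sketch below, in Python, is purely illustrative; the function name, feature names, values and wording are invented and are not taken from the project.

```python
# Illustrative sketch only: verbalising a counterfactual explanation for a
# loan applicant using template-based NLG. All names and values are invented.
def verbalise_counterfactual(original: dict, counterfactual: dict) -> str:
    """Describe, in plain English, which feature changes would have
    flipped the model's decision."""
    changes = []
    for feature, new_value in counterfactual.items():
        old_value = original.get(feature)
        if new_value != old_value:
            changes.append(f"your {feature} had been {new_value} instead of {old_value}")
    if not changes:
        return "No change to your application would have altered the decision."
    return "Your loan would have been approved if " + " and ".join(changes) + "."

print(verbalise_counterfactual(
    {"annual income": "£22,000", "existing loans": 2},
    {"annual income": "£25,000", "existing loans": 2},
))
# -> Your loan would have been approved if your annual income had been £25,000 instead of £22,000.
```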
Studentship Projects
| Project Reference | Relationship | Related To | Start | End | Student Name |
|---|---|---|---|---|---|
| EP/N509814/1 | | | 01/10/2016 | 30/09/2021 | |
| 1957547 | Studentship | EP/N509814/1 | 01/10/2017 | 31/12/2020 | James Forrest |
Further Funding
Description | Realising Accountable Intelligent Systems (RAInS)
Amount | £1,108,896 (GBP) |
Funding ID | EP/R033846/1 |
Organisation | Engineering and Physical Sciences Research Council (EPSRC) |
Sector | Public |
Country | United Kingdom |
Start | 01/2019 |
End | 12/2021 |
Research Tools & Methods
Title | Immune Inspired Algorithm for Counterfactual Explanation Generation
Description | An immune-inspired algorithm, using a new Python implementation of the opt-aiNet algorithm, that generates counterfactual explanations of machine learning predictions; a minimal illustrative sketch of this style of search follows this entry.
Type Of Material | Computer model/algorithm |
Year Produced | 2019 |
Provided To Others? | No |
Impact | The work using this algorithm is not yet published. |
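The project's implementation is not yet published, so the following is only a minimal sketch of how an opt-aiNet-style immune algorithm can be adapted to search for counterfactuals: candidate instances are cloned, hypermutated in inverse proportion to their fitness, suppressed when too similar, and re-seeded with fresh random cells. The classifier interface (`predict_proba`), the fitness trade-off, and all function names and parameter values are assumptions for illustration, not the project's actual code.

```python
# Illustrative sketch only: an opt-aiNet-style immune search for counterfactual
# explanations, assuming a scikit-learn-like classifier exposing predict_proba.
import numpy as np


def fitness(candidate, x_orig, model, target_class):
    """Higher is better: the candidate should be classified as target_class
    while staying close to the original instance x_orig."""
    proba = model.predict_proba(candidate.reshape(1, -1))[0, target_class]
    distance = np.linalg.norm(candidate - x_orig)
    return proba - 0.1 * distance  # the trade-off weight 0.1 is arbitrary


def clone_and_mutate(cell, fit, n_clones=10, beta=1.0):
    """Clonal expansion with hypermutation scaled inversely to fitness,
    as in opt-aiNet: poorer cells are mutated more aggressively."""
    scale = np.exp(-beta * fit)
    return cell + np.random.normal(0.0, scale, size=(n_clones, cell.shape[0]))


def suppress(cells, fits, threshold=0.5):
    """Network suppression: among near-identical cells, keep only the fittest."""
    keep = []
    for i in np.argsort(fits)[::-1]:
        if all(np.linalg.norm(cells[i] - cells[j]) > threshold for j in keep):
            keep.append(i)
    return cells[keep], fits[keep]


def generate_counterfactual(model, x_orig, target_class,
                            pop_size=20, generations=50, seed=0):
    """Search for an instance close to x_orig that the model assigns to
    target_class; the fittest cell found is returned as the counterfactual."""
    np.random.seed(seed)
    dim = x_orig.shape[0]
    population = x_orig + np.random.normal(0.0, 1.0, size=(pop_size, dim))
    for _ in range(generations):
        fits = np.array([fitness(c, x_orig, model, target_class)
                         for c in population])
        # Clonal expansion and hypermutation of every cell in the network.
        expanded = [population] + [clone_and_mutate(c, f)
                                   for c, f in zip(population, fits)]
        population = np.vstack(expanded)
        fits = np.array([fitness(c, x_orig, model, target_class)
                         for c in population])
        # Suppress similar cells, then re-seed with fresh random cells.
        population, fits = suppress(population, fits)
        n_new = max(pop_size - len(population), 1)
        newcomers = x_orig + np.random.normal(0.0, 1.0, size=(n_new, dim))
        population = np.vstack([population, newcomers])
    fits = np.array([fitness(c, x_orig, model, target_class) for c in population])
    return population[np.argmax(fits)]
```

In practice the distance metric, mutation scales and feature ranges would need to match the data at hand; the sketch only shows the overall structure of the search.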
Software & Technical Products
Title | Framework to explain Machine Learning predictions
Description | For a machine learning model's prediction, the framework creates many candidate explanations using interpretable machine learning tools, selects the most appropriate, and presents it to the user. On receiving feedback from the user, it provides further explanations of the prediction. The framework includes a tool created in this project that uses an immune-inspired algorithm to generate counterfactual explanations; a minimal illustrative sketch of such an explanation loop follows this entry.
Type Of Technology | Software |
Year Produced | 2020 |
Impact | Used in work not yet published |
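Since the framework itself is unpublished, the following is only a minimal sketch of the kind of select-present-feedback loop the description suggests. The `Explanation` structure, the scoring rule, the explainer and `ask_user` interfaces, and all names are assumptions for illustration, not the project's actual design.

```python
# Illustrative sketch only: generate candidate explanations from several
# interpretable-ML tools, show the most appropriate one, and offer further
# explanations while the user asks for more.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Explanation:
    source: str    # which interpretable-ML tool produced it
    text: str      # natural-language rendering shown to the user
    score: float   # how appropriate it is judged to be for this user


def explain_prediction(model, instance,
                       explainers: List[Callable],
                       ask_user: Callable[[str], str],
                       max_rounds: int = 3) -> List[Explanation]:
    """Generate candidate explanations, present the best one, and keep
    offering alternatives while the user asks for more detail."""
    shown: List[Explanation] = []
    # 1. Build a pool of candidate explanations from every available tool.
    candidates = [exp for explainer in explainers
                  for exp in explainer(model, instance)]
    for _ in range(max_rounds):
        remaining = [c for c in candidates if c not in shown]
        if not remaining:
            break
        # 2. Select the most appropriate candidate (here: highest score).
        best = max(remaining, key=lambda c: c.score)
        shown.append(best)
        # 3. Present it and check whether the user wants further explanation.
        reply = ask_user(best.text)
        if reply.strip().lower() not in {"more", "why"}:
            break
    return shown
```

In this sketch, a counterfactual generator such as the one outlined above would simply be one of the `explainers` passed in, alongside other tools such as feature-importance or rule-based explainers.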