Extending the explainability of machine learning models in policy decision making

Lead Research Organisation: University of Strathclyde
Department Name: Management Science

Abstract

Governments and policy makers are increasingly using machine learning (ML) to support decision-making. The performance of ML algorithms generally improves as model complexity increases, which makes it harder for end-users to interrogate model outputs. A lack of transparency and accountability in how a model is structured, and in the key variables underpinning its predictions, can lead to mistrust. Concerns have been voiced about the rapidly expanding use of ML in critical decisions, especially for policies affecting marginalised sectors and communities.
While the field of explainable ML has expanded in recent years, existing methods are often "general-purpose" and fail to capture the specific needs of real-world end-users. As such, the effectiveness of existing explainable ML approaches remains unclear without an understanding of the domain and of users' specific requirements and goals. Understanding these needs is particularly important in policy decision-making, since policy emerges as a compromise between people pursuing different goals.
This PhD project aims to bridge this gap by developing a novel process and framework to ensure that ML models can be better understood, and therefore more readily adopted, by policy makers. We focus on two aspects. First, we treat ML as a decision-support tool rather than a purely predictive one: when developing ML models, we explicitly elicit the views of the decision-maker and ensure they are formally captured in the models in a way the decision-maker can understand, modify and interrogate. Second, we use visual tools commonly deployed for wicked problems, such as causal maps, to capture the overarching ML process so that it is better understood and explained. By combining these two approaches, one quantitative and one qualitative, the project aims to make meaningful progress towards a framework for explainable ML.
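To illustrate the kind of structure a causal map provides, the sketch below represents one as a directed graph and traverses it to list the downstream effects of a variable. This is not code from the project; the variable names and the graph itself are hypothetical, chosen only to show how a decision-maker's assumed cause-effect links could be stored and interrogated programmatically.

```python
# Hypothetical causal map: each key is a variable, each value lists the
# variables it is assumed to influence. All names are illustrative.
causal_map = {
    "funding level": ["service quality"],
    "service quality": ["public satisfaction"],
    "unemployment rate": ["service demand"],
    "service demand": ["service quality"],
}

def downstream_effects(graph, variable, visited=None):
    """Return every variable reachable from `variable` in the causal map."""
    if visited is None:
        visited = set()
    for effect in graph.get(variable, []):
        if effect not in visited:
            visited.add(effect)
            downstream_effects(graph, effect, visited)
    return visited

print(sorted(downstream_effects(causal_map, "unemployment rate")))
# ['public satisfaction', 'service demand', 'service quality']
```

A traversal like this lets a decision-maker ask, for any input to an ML model, which outcomes their own mental model says it should affect, which is one way such qualitative maps could be set alongside a model's quantitative feature importances.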

Publications


Studentship Projects

Project Reference  Relationship  Related To     Start       End         Student Name
ES/P000681/1                                    01/10/2017  30/09/2027
2887425            Studentship   ES/P000681/1   01/10/2023  31/03/2027  Joseph Omae