Peering Inside the Black Box: Exploring Innovations in Interpretable Machine Learning and Causal Inference for the Explanation of Political Violence

Lead Research Organisation: London School of Economics and Political Science
Department Name: Methodology

Abstract

The use of machine learning (ML) within social science research is increasingly common. Yet a gap remains in how we use these models in inference settings to explain, rather than predict or classify, phenomena. Concurrently, there is a growing need in policymaking to make sense of increasingly common, but complex, machine learning strategies. Although ML models can capture the complexity of social phenomena to a greater extent than traditional statistical models, they have been considered less useful for explanation and inference, in part because they are viewed as 'black box' methods that are substantively opaque. Recent methodological advances show that we can make sense of these complex ML models and use them for both explanation and causal inference. These advances are grouped under the subfields of 'interpretable machine learning' (IML) and 'causal machine learning' (causal ML). To date, these innovations have not been sufficiently translated into the social sciences.

This research aims to fill that gap and demonstrate the utility of ML methods not just for prediction, but also for explanation and inference. I will do so through the translation, innovation, and finally application of these methods to the study of political violence. Applying ML methods to political violence is particularly important because understanding the determinants of conflict allows proactive interventions and policy solutions in various forms. Political violence is, however, complex and multi-causal, and conventional parametric methods are therefore unlikely to model its dynamics effectively. For policymaking relevance, it is imperative that we can explain why a conflict event might occur, as well as whether and when.

Methodologically, I aim to demonstrate how researchers can use ML to capture and explain complex social phenomena, and in the process open avenues for greater policymaking relevance and impact. The broad research question guiding this PhD project is: "How can we leverage developments in complex machine learning strategies for explanation and inference in social science contexts?" Substantively, I aim to use these tools to advance our understanding of the mechanisms of political violence.
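
To make the two method families named above concrete, below is a minimal, self-contained Python sketch on synthetic data. Everything in it is hypothetical and illustrative rather than the project's actual pipeline: the covariates, 'intervention', and outcome are simulated; permutation importance stands in for the IML toolkit; and a simple cross-fitted partialling-out estimator stands in for causal ML, in the spirit of double/debiased machine learning.

```python
# Illustrative sketch only: synthetic data and hypothetical variable names.
# (1) Interpretable ML: permutation importance on a fitted black-box model.
# (2) Causal ML: cross-fitted partialling-out (double/debiased ML style)
#     to recover a treatment effect under confounding.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict, train_test_split

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 5))                            # covariates (stand-ins for e.g. GDP, polity scores)
d = (X[:, 0] + rng.normal(size=n) > 0).astype(float)   # binary 'intervention', confounded via X[:, 0]
theta = 0.5                                            # true causal effect of d
y = theta * d + X[:, 0] + np.sin(X[:, 1]) + rng.normal(size=n)  # conflict-intensity outcome

# 1. Interpretable ML: which inputs drive the black-box model's predictions?
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
print("permutation importances:", imp.importances_mean.round(3))

# 2. Causal ML: cross-fitted nuisance predictions, then residual-on-residual
#    regression (Frisch-Waugh-Lovell logic) to recover theta despite confounding.
l_hat = cross_val_predict(RandomForestRegressor(random_state=0), X, y, cv=5)  # E[y | X]
m_hat = cross_val_predict(RandomForestRegressor(random_state=0), X, d, cv=5)  # E[d | X]
theta_hat = LinearRegression().fit((d - m_hat).reshape(-1, 1), y - l_hat).coef_[0]
print(f"naive OLS effect: {LinearRegression().fit(d.reshape(-1, 1), y).coef_[0]:.3f}")
print(f"double-ML effect: {theta_hat:.3f} (true: {theta})")
```

In this toy setup, a naive regression of the outcome on the intervention is biased because both depend on the first covariate; residualising both on the covariates before the final regression removes that confounding, which is the core intuition the causal ML literature formalises with cross-fitting and orthogonal moment conditions.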

Studentship Projects

Project Reference   Relationship   Related To      Start        End          Student Name
ES/P000622/1                                       30/09/2017   29/09/2027
2901789             Studentship    ES/P000622/1    24/09/2023   29/09/2026   Christy Coulson