Structured Environments and Robust AI: A Causal Perspective

Lead Research Organisation: University College London
Department Name: Computer Science


One emerging point of contact between causal inference and other subfields of AI is the notion of invariance under perturbations, or variations, of the environment. This has historically been exploited in causal inference under related notions such as "natural experiments", "instrumental variables" and "imperfect interventions": surrogate actions that do not fully target the control signal of interest, but that nevertheless inform us about the extent to which other features of the world cause outcomes of interest. One key motivation is to unveil causal relationships that are more stable and transferable. For instance, if the goal is an understanding of the causes of lung cancer, the effect of anti-smoking public education campaigns on lung cancer mortality may transfer across populations more poorly than the effect of smoking itself on that health outcome. Nevertheless, under given assumptions, variations in policy can inform the strength of the link between smoking and lung cancer. At the same time, machine learning researchers have been extending the paradigm of transfer learning by explicitly encoding how variability in particular features (such as the colour scheme of the input video signal fed to a game-playing agent) helps a learner focus on the causative features of the environmental state, so that, under assumptions, an action plan can be transferred more reliably to a new scenario. In this work, we will develop methods that combine external environmental changes into the learning of a causal model, in the context of robust causal effect estimation, reinforcement learning and prediction.
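The instrumental-variable idea sketched above (policy variation informing the smoking–cancer link despite unobserved confounding) can be illustrated numerically. The following is a minimal sketch, not part of the project: the data-generating process, coefficients and variable names are invented for illustration, with z playing the role of the policy instrument, x the exposure, u an unobserved confounder and y the outcome.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical setup: z is a surrogate action (e.g. exposure to a
# campaign) that shifts x but does not affect y directly; u is an
# unobserved confounder. The true causal effect of x on y is 2.0.
z = rng.normal(size=n)
u = rng.normal(size=n)
x = 0.8 * z + u + rng.normal(size=n)        # exposure (e.g. smoking)
y = 2.0 * x + 3.0 * u + rng.normal(size=n)  # outcome

# Naive regression of y on x is biased upwards by the confounder u.
ols = (x @ y) / (x @ x)

# IV (Wald) estimator: cov(z, y) / cov(z, x) recovers the causal
# effect, assuming z is independent of u and affects y only via x.
iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]
print(f"OLS: {ols:.2f}, IV: {iv:.2f}")
```

With this simulation the naive estimate lands well above the true effect of 2.0, while the instrumental estimate lands close to it, exactly because the instrument inherits none of the confounding through u.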
We will develop formal families of models to describe commonalities between environments, families that are both structured and continuous. Structured, in the sense of allowing external interventions to take place at several points of a system; continuous, in the sense of covering scenarios in which assumptions may have bounded violations controlled by a continuum of possibilities, as opposed to the hard independence constraints found in the causal inference literature. In particular, we will investigate how feature learning should be adapted to a more focused target, trading off learning not only to do well on a training set of environments, but also to carefully avoid exploiting features that, under our modelling assumptions, are implied not to generalise well to future environments.
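The trade-off described above — fitting the training environments while refusing to exploit features whose relationship to the outcome is unstable — can be sketched with a crude invariance-based feature-selection criterion. This is a hypothetical illustration loosely inspired by invariant causal prediction, not the project's actual method: a feature subset is kept only if its per-environment regression coefficients agree up to a tolerance.

```python
import itertools
import numpy as np

def invariant_subsets(envs, tol=0.2):
    """Keep a feature subset only if least-squares coefficients
    fitted separately in each environment agree up to `tol`.
    Crude illustrative criterion, assumed for this sketch."""
    p = envs[0][0].shape[1]
    accepted = []
    for r in range(1, p + 1):
        for subset in itertools.combinations(range(p), r):
            betas = []
            for X, y in envs:
                beta, *_ = np.linalg.lstsq(X[:, subset], y, rcond=None)
                betas.append(beta)
            spread = max(np.max(np.abs(b1 - b2))
                         for b1, b2 in itertools.combinations(betas, 2))
            if spread < tol:
                accepted.append(subset)
    return accepted

# Toy demonstration: feature 0 causes y with a stable coefficient;
# feature 1 is a spurious, anti-causal feature whose link to y
# flips sign between the two environments.
rng = np.random.default_rng(1)

def make_env(c, n=2000):
    x0 = rng.normal(size=n)
    y = 1.5 * x0 + 0.3 * rng.normal(size=n)
    x1 = c * y + 0.3 * rng.normal(size=n)  # environment-dependent link
    return np.column_stack([x0, x1]), y

envs = [make_env(1.0), make_env(-1.0)]
subs = invariant_subsets(envs)
print(subs)  # the spurious feature 1 should typically be rejected
```

A learner restricted to the accepted subsets does slightly worse on the pooled training environments (feature 1 is highly predictive in each one), but its predictor transfers to environments where the spurious link changes again, which is the trade-off the paragraph above describes.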



Studentship Projects

Project Reference   Relationship   Related To     Start        End          Student Name
EP/S021566/1                                      31/03/2019   29/09/2027
2408309             Studentship    EP/S021566/1   27/09/2020   29/09/2024   Jean Heidar Kaddour