Inference and Uncertainty Quantification for Offline Reinforcement Learning
Lead Research Organisation: Imperial College London
Department Name: Computing
Abstract
Reinforcement learning (RL) agents make sequential decisions in finite-state systems. For real-world deployment it is necessary to quantify uncertainty in their outcomes. We address the quantification of both epistemic and aleatoric uncertainty in finite-state environments with limited data (offline RL), and apply these methods to interpretable gridworlds and to data from clinical decision support systems.
Brain & Behaviour Lab
AI, machine learning
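As an illustration of the kind of approach the abstract describes (not the project's actual method), the sketch below shows one standard way to estimate epistemic uncertainty in a tabular offline-RL setting: place Dirichlet posteriors over the unknown transition probabilities counted from a fixed dataset, sample plausible transition models, evaluate a given policy in each, and report the spread of the resulting value estimates. All function and variable names here are illustrative assumptions, not part of the project.

```python
import numpy as np

def epistemic_value_uncertainty(dataset, n_states, n_actions, policy,
                                gamma=0.95, n_posterior_samples=100, seed=0):
    """Sketch: Dirichlet posterior over the transitions of a tabular MDP.

    dataset: list of (s, a, r, s_next) tuples collected offline.
    policy:  integer array of shape (n_states,), the action taken in each state.
    Returns the mean and standard deviation of the start-state value across
    posterior samples; the standard deviation is a simple proxy for
    epistemic uncertainty under the limited offline data.
    """
    rng = np.random.default_rng(seed)

    # Transition counts and empirical mean rewards from the offline data.
    counts = np.ones((n_states, n_actions, n_states))   # Dirichlet(1) prior
    reward_sum = np.zeros((n_states, n_actions))
    reward_n = np.zeros((n_states, n_actions))
    for s, a, r, s_next in dataset:
        counts[s, a, s_next] += 1
        reward_sum[s, a] += r
        reward_n[s, a] += 1
    mean_reward = reward_sum / np.maximum(reward_n, 1)

    values = []
    for _ in range(n_posterior_samples):
        # Sample one plausible transition model from the posterior.
        P = np.array([[rng.dirichlet(counts[s, a]) for a in range(n_actions)]
                      for s in range(n_states)])
        # Exact policy evaluation in the sampled model:
        # V = (I - gamma * P_pi)^{-1} r_pi.
        P_pi = P[np.arange(n_states), policy]            # (n_states, n_states)
        r_pi = mean_reward[np.arange(n_states), policy]
        V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)
        values.append(V[0])                              # value of start state 0

    return float(np.mean(values)), float(np.std(values))
```

In an interpretable gridworld, `dataset` would hold the logged trajectories of a behaviour policy; a large spread across the sampled values flags regions of the state-action space that the offline data do not cover well, whereas the variability of returns within a single sampled model reflects aleatoric uncertainty.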
Studentship Projects
| Project Reference | Relationship | Related To | Start | End | Student Name |
|---|---|---|---|---|---|
| EP/T51780X/1 | | | 30/09/2020 | 29/09/2025 | |
| 2902181 | Studentship | EP/T51780X/1 | 30/09/2021 | 30/03/2025 | |