Quantitative model comparison for time series predictive models

Lead Research Organisation: University College London
Department Name: Statistical Science

Abstract

With more and more machine learning and artificial intelligence technologies being adopted into decision-making processes, success control and validation are becoming matters of increasing concern for end users of modern data science. Of particular concern are reliable answers to questions such as 'does the new model perform better than the state-of-the-art one?' and 'is the artificial intelligence-based decision-making process really better than the standard one?', for which replicable, quantitative and generalizable automated testing workflows are sought. Closely related is the question of automatically building meta-strategies for data integration, pipelining, and modelling, including prediction making and quantification of uncertainty.
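As a minimal illustration of such a quantitative comparison workflow (a sketch for exposition only, not the project's methodology), the Python snippet below compares two simple one-step-ahead forecasters on a synthetic series using rolling-origin (expanding-window) backtesting, then applies a paired test to their squared errors. The forecasters, the synthetic random-walk data, and the evaluation choices are all hypothetical.

```python
# Illustrative sketch: rolling-origin backtest comparing two simple
# one-step-ahead forecasters, followed by a paired test on squared errors.
# Forecasters, data, and settings are hypothetical examples.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=200))  # synthetic random-walk series

def naive_forecast(history):
    """Predict the last observed value."""
    return history[-1]

def moving_average_forecast(history, window=5):
    """Predict the mean of the last `window` observations."""
    return np.mean(history[-window:])

errors_a, errors_b = [], []
for t in range(50, len(y)):          # expanding window, one-step-ahead
    history = y[:t]
    errors_a.append((y[t] - naive_forecast(history)) ** 2)
    errors_b.append((y[t] - moving_average_forecast(history)) ** 2)

# Paired t-test on the per-step squared errors of the two forecasters
t_stat, p_value = stats.ttest_rel(errors_a, errors_b)
print(f"MSE, naive forecaster:          {np.mean(errors_a):.3f}")
print(f"MSE, moving-average forecaster: {np.mean(errors_b):.3f}")
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
```

Because the same test folds are used for both models, the comparison is paired; replicability follows from fixing the random seed and the backtesting scheme.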

This project will address the major contemporary challenges around model validation and success
control in decision making, in three areas:

I. Theory of quantitative model comparison and automated model building, with a focus on
heterogeneous, structured or hierarchical modelling tasks such as those arising with panel or spatio-temporal data.

II. Software engineering of automated modelling and model validation workflows, implemented
as modular toolbox environments in R and/or Python.

III. Deployment in real-world applications in the energy, engineering, geography and health
domains, and development of best practices and validation principles.

Publications


Studentship Projects

Project Reference | Relationship | Related To   | Start      | End        | Student Name
EP/S513726/1      |              |              | 01/10/2018 | 22/12/2023 |
2207821           | Studentship  | EP/S513726/1 | 03/06/2019 | 02/06/2023 | Jeremy Sellier