Was that change real? Quantifying uncertainty for change points
Lead Research Organisation: London School of Economics and Political Science
Department Name: Statistics
Abstract
Detecting changes in data is currently one of the most active areas of statistics. In many applications the interest is in segmenting the data into regions with the same statistical properties, whether as a way to model data flexibly, to help with downstream analysis, or to ensure predictions are made based only on relevant data. In others, the main interest lies in detecting when changes have occurred, as changes indicate features of interest, from potential failures of machinery to security breaches or the presence of genomic features such as copy number variations.
To date, most research in this area has focused on developing methods for detecting changes: algorithms that take data as input and output a best guess as to whether there have been relevant changes and, if so, how many there have been and when they occurred. A comparatively neglected problem is assessing how confident we are that a specific change has occurred in a given part of the data.
In many applications, quantifying the uncertainty around whether a change has occurred is of paramount importance. For example, if we are monitoring a large communication network in which changes indicate potential faults, it is helpful to know how confident we are that there is a fault at any given point in the network, so that we can prioritise the limited resources available for investigating and repairing faults. When analysing calcium imaging data on neuronal activity, where changes correspond to times at which a neuron fires, it is helpful to know how certain we are that a neuron fired at each time point, so as to improve downstream analysis of the data.
A naive approach to this problem is to first detect changes and then apply standard statistical tests for their presence. But this approach is flawed, as it uses the data twice: first to decide where to test, and then to perform the test. We can overcome this using sample-splitting ideas, where we use half the data to detect a change and the other half to perform the test, as sketched below. But such methods lose power, for example because only part of the data is used to detect changes.
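As an illustration, here is a minimal sketch of the sample-splitting idea for a single change in mean with independent Gaussian noise. The interleaved odd/even split, the CUSUM locator and the t-test below are illustrative choices, not methods proposed in this project.

```r
# A minimal sketch of sample splitting: odd-indexed observations locate
# the change, even-indexed ones test for it. Assumes a single change in
# mean with independent Gaussian noise.
set.seed(1)
n <- 200
x <- c(rnorm(n / 2, mean = 0), rnorm(n / 2, mean = 1))  # change at t = 100

detect_half <- x[seq(1, n, by = 2)]  # used only to locate the change
test_half   <- x[seq(2, n, by = 2)]  # held out for the test

# Locate the change on the detection half with the standard CUSUM statistic
m <- length(detect_half)
cusum <- sapply(1:(m - 1), function(k) {
  sqrt(k * (m - k) / m) *
    abs(mean(detect_half[1:k]) - mean(detect_half[(k + 1):m]))
})
k_hat <- which.max(cusum)

# Two-sample t-test at the estimated location, on the held-out half
t.test(test_half[1:k_hat], test_half[(k_hat + 1):m])$p.value
```

Because k_hat was chosen without looking at the held-out half, the resulting p-value is valid; the cost is that the change is both located and tested with only half of the observations.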
This proposal will develop statistically valid approaches to quantifying uncertainty that are more powerful than sample-splitting approaches. These approaches are based on two complementary ideas: (i) performing inference prior to detection, as sketched below; and (ii) developing tests for a change that account for earlier detection steps. The output will be a new general toolbox for change points, encompassing both new general statistical methods and their implementation within software packages.
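To make idea (i) concrete, here is a stylised sketch in the spirit of, but much cruder than, the Narrowest Significance Pursuit method listed under Publications below: short intervals that must each contain a change are reported at a global significance level, before any change-point location is estimated. The fixed length-30 grid, the 200 null simulations and the 95% level are all illustrative assumptions.

```r
# A stylised sketch of inference prior to detection (not the actual NSP
# algorithm): report short intervals that must each contain a change,
# at a global significance level, before estimating any locations.
set.seed(3)
n <- 300
x <- c(rnorm(165), rnorm(135, mean = 1.5))  # one change, after t = 165

interval_stat <- function(y) {  # max CUSUM within an interval
  m <- length(y)
  max(sapply(1:(m - 1), function(k)
    sqrt(k * (m - k) / m) * abs(mean(y[1:k]) - mean(y[(k + 1):m]))))
}

starts <- seq(1, n - 29, by = 30)  # a fixed grid of length-30 intervals

# Calibrate a global threshold by simulation under the no-change null
null_max <- replicate(200, {
  z <- rnorm(n)
  max(sapply(starts, function(s) interval_stat(z[s:(s + 29)])))
})
thresh <- quantile(null_max, 0.95)

# Intervals exceeding the threshold each contain a change, with the
# chance of any false report held at roughly 5% globally
sig <- starts[sapply(starts, function(s) interval_stat(x[s:(s + 29)])) > thresh]
cbind(start = sig, end = sig + 29)
```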
People

Piotr Fryzlewicz (Principal Investigator)
Publications
Anastasiou A (2022) Cross-covariance isolate detect: A new change-point method for estimating dynamic functional connectivity. Medical Image Analysis.
Anastasiou A (2022) Detecting multiple generalized change-points by isolating single ones. Metrika.
Fryzlewicz P (2023) Narrowest Significance Pursuit: Inference for Multiple Change-Points in Linear Models. Journal of the American Statistical Association.
Fryzlewicz P (2024) Robust Narrowest Significance Pursuit: Inference for Multiple Change-Points in the Median. Journal of Business & Economic Statistics.
Gilliot P (2024) Pierre-Aurelien Gilliot, Christophe Andrieu, Anthony Lee, Song Liu, and Michael Whitehouse's contribution to the Discussion of 'the Discussion Meeting on Probabilistic and statistical aspects of machine learning'. Journal of the Royal Statistical Society Series B: Statistical Methodology.
Hong Y (2024) Yongmiao Hong, Oliver Linton, Jiajing Sun, and Meiting Zhu's contribution to the Discussion of 'the Discussion Meeting on Probabilistic and statistical aspects of machine learning'. Journal of the Royal Statistical Society Series B: Statistical Methodology.
Li J (2024) Automatic change-point detection in time series via deep learning. Journal of the Royal Statistical Society Series B: Statistical Methodology.
Li J (2024) Authors' reply to the Discussion of 'Automatic change-point detection in time series via deep learning' at the Discussion Meeting on 'Probabilistic and statistical aspects of machine learning'. Journal of the Royal Statistical Society Series B: Statistical Methodology.
| Description | Detecting change-points in data is challenging because of the range of possible types of change and of possible behaviours of the data when there is no change. Statistically efficient methods for detecting a change depend on both of these features, and it can be difficult for a practitioner to develop an appropriate detection method for their application of interest. In a major piece of work associated with this award, we show how to automatically generate new offline detection methods by training a neural network. Our approach is motivated by the observation that many existing tests for the presence of a change-point can be represented by a simple neural network, so a network trained with sufficient data should perform at least as well as these methods. We present theory that quantifies the error rate of such an approach and how it depends on the amount of training data. Empirical results show that, even with limited training data, its performance is competitive with that of the standard CUSUM-based classifier (sketched after this table) for detecting a change in mean when the noise is independent and Gaussian, and that it can substantially outperform CUSUM in the presence of autocorrelated or heavy-tailed noise. Our method also shows strong results in detecting and localising changes in activity based on accelerometer data. This work was published as a discussion paper in the Journal of the Royal Statistical Society Series B. |
| Exploitation Route | We showed how neural networks could be used to design new statistical estimators, and we hope this useful principle is taken up by applied statisticians in a variety of fields. |
| Sectors | Aerospace Defence and Marine; Agriculture Food and Drink; Chemicals; Digital/Communication/Information Technologies (including Software); Education; Energy; Environment; Financial Services and Management Consultancy; Healthcare; Government Democracy and Justice; Manufacturing including Industrial Biotechnology; Pharmaceuticals and Medical Biotechnology; Transport |
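The CUSUM-based classifier mentioned in the description above can itself be written as a tiny neural network with fixed weights, which is the intuition behind training a network to match or beat it. Below is a minimal sketch; the weight matrix is the standard CUSUM contrast and the threshold of 3 is purely illustrative.

```r
# CUSUM for a change in mean as a one-hidden-layer network: max_k |w_k' x|.
# Each row of W is a fixed CUSUM contrast vector; the trained-network
# approach described above learns the weights from data instead.
n <- 200
W <- t(sapply(1:(n - 1), function(k) {
  sqrt(k * (n - k) / n) * c(rep(1 / k, k), rep(-1 / (n - k), n - k))
}))

set.seed(2)
x <- c(rnorm(n / 2), rnorm(n / 2, mean = 1))
cusum_stat <- max(abs(W %*% x))  # forward pass: linear layer, |.|, max-pool
cusum_stat > 3                   # declare a change; 3 is illustrative only
```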
| Title | nsp: Inference for Multiple Change-Points in Linear Models |
| Description | Implementation of Narrowest Significance Pursuit, a general and flexible methodology for automatically detecting localised regions in data sequences, each of which must contain a change-point (understood as an abrupt change in the parameters of an underlying linear model), at a prescribed global significance level. Narrowest Significance Pursuit works with a wide range of distributional assumptions on the errors, and yields exact desired finite-sample coverage probabilities, regardless of the form or number of the covariates. |
| Type Of Technology | Software |
| Year Produced | 2021 |
| Open Source License? | Yes |
| Impact | n/a |
| URL | https://CRAN.R-project.org/package=nsp |