Properties of statistical methods for indirect and mixed treatment comparison - a computer simulation evaluation

Lead Research Organisation: University of East Anglia
Department Name: Norwich Medical School

Abstract

People (for example: NICE, NHS managers, doctors and patients) may have to select the most cost-effective treatment from several different treatments for an illness. The most believable evidence for the treatment selection is from the direct comparison of different treatments within randomised controlled trials. However, many treatments have not been directly compared in trials.
Even without evidence from direct comparison trials, people still have to decide which treatment is best. Statistical methods have been developed that allow different treatments to be compared indirectly, using data from separate trials. Several methods, ranging from simple to complex, are available for making indirect comparisons of different treatments for an illness. All of these methods rest on certain assumptions, and the results they produce may sometimes be incorrect. Misleading or wrong results of indirect comparisons may lead to the wrong selection of treatments, with important health and cost consequences.
The proposed research is a computer simulation study to comprehensively evaluate the statistical methods for indirect comparison. One difficulty in evaluating statistical methods using data from real clinical trials is that we usually don't know the 'true' value that is being estimated. This difficulty can be easily overcome in a computer simulation study, which is like a laboratory study in which the experimental conditions are controlled by the investigators. We will use a computer to generate simulated clinical trials under various assumed situations (that is, situations in which we know what the 'true' treatment effect is). The available statistical methods for indirect comparison will then be used to analyse the simulated trials, and the treatment effects estimated by the different methods will be compared with the known 'true' treatment effects. This will allow us to check the usefulness and limitations of the different methods under various circumstances. Findings from the proposed simulation study will help provide recommendations on the appropriate use of indirect comparison methods, and prevent wrong decisions in the selection of treatments.

Technical Summary

BACKGROUND: Indirect comparisons of competing interventions have been conducted in health technology assessment (HTA) reviews, using adjusted indirect comparison (AIC), network meta-analysis (NMA) or mixed treatment comparison (MTC). The appropriateness and properties of the existing methods for AIC, NMA and MTC have not been comprehensively evaluated, and inappropriate use of these statistical methods may yield misleading or invalid results regarding the relative effects of competing healthcare interventions.
OBJECTIVES: (1) By computer simulations, to provide empirical evidence about how bias and heterogeneity are transmitted within a network of trials. (2) Using simulated data, to investigate and compare the performance and validity of AIC, NMA and MTC, and their assessment of inconsistency in networks of trials. (3) Based on the findings of the computer simulations, to provide recommendations on the use of AIC, NMA and MTC in HTA reviews.
RESEARCH PLAN: The proposed simulation study is a stochastic process with the following main steps: (1) A network of trials will be randomly generated according to assumed parameters (including relative treatment effects of the competing treatments, heterogeneity across trials, extent and direction of bias in trials, sample sizes, number of trials, and shape of the network). (2) AIC (Bucher method), NMA (Lumley method) and MTC (Lu-Ades method) will be conducted using data from the simulated trials, to provide individual estimates of the parameters. (3) The AIC, NMA and MTC estimates will be compared with the known parameters to estimate bias and the mean square error (MSE). Bias is the difference between the average estimate and the true parameter; the MSE is the expected squared deviation of the estimator from the parameter, and so captures both bias and variance. (4) Steps (1)-(3) will be repeated many times (1,000 or more) to examine the distribution and average of the estimated bias and MSE. The simulation will start from simple assumptions, with complexity increased as appropriate. We will investigate factors (including information size, treatments compared, assumed bias in trials and heterogeneity across trials) that may be associated with the observed bias and MSEs.
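As a minimal sketch of steps (1)-(4), the fragment below simulates pairs of two-arm trials (B vs A and C vs A), forms the Bucher adjusted indirect estimate of C vs B as the difference of the two direct log odds ratios, and summarises bias and MSE against the known 'true' effect. The effect sizes, event risks and sample sizes are illustrative assumptions, not values from the proposal, and Python is used here purely for illustration (the project itself specifies R and WinBUGS).

```python
import math
import random
import statistics

random.seed(1)

# Assumed "true" log odds ratios (illustrative values only):
# B vs A and C vs A are estimated directly; C vs B is the indirect target.
TRUE_AB, TRUE_AC = 0.3, 0.5
TRUE_BC = TRUE_AC - TRUE_AB  # consistency assumption

def simulate_arm(p, n):
    """Simulate one trial arm; return events and non-events with a
    0.5 continuity correction to guard against zero cells."""
    events = sum(random.random() < p for _ in range(n))
    return events + 0.5, n - events + 0.5

def simulate_trial(true_lor, n_per_arm=200, p_control=0.3):
    """Simulate a two-arm trial; return the estimated log odds ratio."""
    odds_t = (p_control / (1 - p_control)) * math.exp(true_lor)
    p_treat = odds_t / (1 + odds_t)
    a, b = simulate_arm(p_treat, n_per_arm)    # treatment arm
    c, d = simulate_arm(p_control, n_per_arm)  # control arm
    return math.log((a * d) / (b * c))

# Steps (1)-(3), repeated 1,000 times (step 4)
estimates = []
for _ in range(1000):
    d_ab = simulate_trial(TRUE_AB)  # direct A vs B trial
    d_ac = simulate_trial(TRUE_AC)  # direct A vs C trial
    # Bucher adjusted indirect comparison: the common comparator A cancels
    estimates.append(d_ac - d_ab)

bias = statistics.mean(estimates) - TRUE_BC
mse = statistics.mean((e - TRUE_BC) ** 2 for e in estimates)
print(f"bias = {bias:.3f}, MSE = {mse:.3f}")
```

With consistent 'true' effects and no added bias, the indirect estimate should be approximately unbiased, while its MSE reflects the variance contributed by both direct comparisons; introducing bias or heterogeneity into the generated trials would shift these summaries, which is the behaviour the proposed study sets out to map.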
PROGRAMMING: The simulation scenarios will be programmed in R using standard data generation techniques. The simulated data will be analysed using WinBUGS, called from R via the R2WinBUGS package.
APPLICATION AND EXPLOITATION OF THE RESULTS: Findings from the proposed simulation study will enable us to provide useful recommendations for valid and appropriate use of statistical methods for indirect and mixed treatment comparison in HTA reviews.
