Quantitative methods for the assessment of systematic error in observational studies: improving causal research

Lead Research Organisation: London School of Hygiene & Tropical Medicine
Department Name: Epidemiology and Population Health

Abstract

Results from epidemiological studies often appear contradictory, leading to some cynicism regarding medical research among the general public. Many of these contradictions might be avoided if the true extent of the uncertainties that affect such research was assessed and explained. Presently, uncertainty arising from studying finite samples is typically the only uncertainty presented (as confidence intervals). However, other sources of uncertainty may dwarf this uncertainty: (i) important unmeasured factors in the study population (unmeasured confounding); (ii) non-random inclusion of subjects (selection bias); (iii) errors in measurements of exposure and outcome (measurement error); (iv) absence of outcome/exposure data for some subjects (missing data). Recently, new approaches which take account of these sources of uncertainty have been proposed, but they face both operational (e.g. how to specify the likely magnitude of selection bias) and methodological (e.g. how to combine this information with the observed data) difficulties. We shall investigate three alternative approaches to improved quantification of uncertainty: classical sensitivity analysis, Monte Carlo sensitivity analysis and Bayesian bias analysis. We will explore the feasibility of these approaches in different contexts and provide practical guidance (including user-friendly software) on methods which are robust, transparent and accessible to a wide range of practitioners.

Technical Summary

Results from epidemiological studies often appear contradictory, leading to some cynicism regarding medical research among the general public. Many of these contradictions are attributable to the inadequate reporting of the uncertainties that affect this type of research.

Uncertainty arising from random sampling variation is typically the only uncertainty presented (in the form of confidence intervals). There are, however, other sources of uncertainty which may dwarf the uncertainty due to sampling variation. The extent of such uncertainties will vary from study to study, but in general they arise from: (i) unmeasured confounders; (ii) selection bias; (iii) measurement error; (iv) missing data.
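As a minimal illustration of the kind of bias adjustment at issue, the classical external-adjustment formula for a binary unmeasured confounder can be embedded in a Monte Carlo sensitivity analysis: bias parameters are drawn from assumed ranges and the distribution of the bias-adjusted risk ratio is summarised. This is a hedged sketch only; the parameter ranges below are illustrative placeholders, not values from the proposal.

```python
import random
import statistics

def bias_factor(rr_ud, p1, p0):
    """Classical external-adjustment bias factor for a binary unmeasured
    confounder: rr_ud is the confounder-outcome risk ratio, p1 and p0 are
    the confounder prevalences among the exposed and unexposed."""
    return (p1 * (rr_ud - 1) + 1) / (p0 * (rr_ud - 1) + 1)

def mc_sensitivity(rr_obs, n=10_000, seed=1):
    """Monte Carlo sensitivity analysis: sample bias parameters from
    illustrative (assumed) ranges, divide the observed risk ratio by the
    resulting bias factor, and return the 2.5th, 50th and 97.5th
    percentiles of the bias-adjusted risk ratio."""
    rng = random.Random(seed)
    adjusted = sorted(
        rr_obs / bias_factor(rng.uniform(1.5, 3.0),   # confounder-outcome RR
                             rng.uniform(0.4, 0.6),   # prevalence in exposed
                             rng.uniform(0.1, 0.3))   # prevalence in unexposed
        for _ in range(n)
    )
    return adjusted[int(0.025 * n)], statistics.median(adjusted), adjusted[int(0.975 * n)]

lo, med, hi = mc_sensitivity(2.0)   # observed risk ratio of 2.0
print(f"adjusted RR: {med:.2f} (95% simulation interval {lo:.2f} to {hi:.2f})")
```

Because the confounder is assumed more prevalent among the exposed, every sampled bias factor exceeds one, so the adjusted risk ratio is pulled below the observed 2.0; the spread of the interval reflects the uncertainty in the bias parameters rather than sampling variation.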

Recently, attempts have been made to incorporate multiple sources of bias within a single modelling framework. However, considerable work remains to be done: these approaches are presently complex to implement and do not generalize immediately to continuous exposures and outcomes. The objectives of this proposal are as follows:

(1) To develop an inventory and classification of biases and existing bias-adjustment methods to quantify the impact of potential systematic errors.
(2) To identify data sources which provide information on the potential magnitude of sources of bias whose adjustment requires external information.
(3) To establish appropriate conceptual, methodological and computational bases for multi-bias adjustment and sensitivity analyses.
(4) To compare the performance and practicability of different combinations of methods.
(5) On the basis of the above, to make recommendations concerning bias-adjustment approaches which are statistically robust, transparent and accessible to the research community.

The work will be performed in a series of overlapping steps as follows:

(1) a literature search for relevant methods and data;
(2) a methodological exploration of existing work to establish the core principles behind, and links among, existing approaches;
(3) a computational assessment of the alternatives;
(4) further methodological development if necessary;
(5) comparison of the performance and practicability of different combinations of methods;
(6) development of guidelines and recommendations.

We expect our research to be of great interest to the epidemiological research community. We shall disseminate our findings through peer-reviewed journal articles, conference presentations and through teaching at LSHTM and elsewhere. Any software routines developed will be placed in the public domain. We do not expect our research to result in commercially exploitable outputs.
