HNA: Development and evaluation of methods to assess the quality of audit data used to calculate risk-adjusted performance indicators

Lead Research Organisation: London School of Hygiene & Tropical Medicine
Department Name: Public Health and Policy

Abstract

A common aim of national clinical audits (NCAs) is to examine the performance of an organisation using a risk-adjusted outcome indicator, such as 90-day postoperative mortality for a surgical procedure, and to assess whether the organisation is meeting expected standards of care. Such information is important both to the organisation in question, as it supports quality improvement, and to national regulators such as the Care Quality Commission. It is therefore essential that clinical audits neither falsely label an organisation as performing poorly (a false positive) nor fail to detect when an organisation is not meeting expected standards (a false negative).

The efforts of clinical audits to describe organisational performance accurately can be undermined by poor data quality. Imperfect data can bias outcome indicator values and so lead to false-positive and false-negative assessments. Such consequences can undermine the confidence of professionals and the public in both the audit process and the quality of the health care services being evaluated. This risk was highlighted recently when a cardiac unit was threatened with closure because of poor outcomes reported by an audit, a decision that was retracted when it emerged that the hospital had not submitted all of its cases.

Various strategies are available to monitor and improve data quality, such as checking that variable values fall within plausible ranges and sharing the results of preliminary analyses with organisations for checking prior to publication. Nonetheless, there is an urgent need to improve the procedures used by NCAs to check data quality. A recent survey of audits found that, in 24 of 28 audits, the scope of these processes was limited to flagging exceptional values, and only 14 of the 28 tested the reliability of the data.
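To make the first of these strategies concrete, a minimal range check might look like the following sketch in Python, assuming the audit records sit in a pandas DataFrame; the column names and plausible ranges are hypothetical, not those of any particular audit.

    # Flag values outside plausible clinical ranges (illustrative only).
    import pandas as pd

    PLAUSIBLE_RANGES = {
        "age": (18, 110),          # years
        "systolic_bp": (50, 260),  # mmHg
    }

    def flag_out_of_range(df: pd.DataFrame) -> pd.DataFrame:
        """Return a boolean DataFrame marking values outside plausible ranges."""
        flags = pd.DataFrame(False, index=df.index, columns=df.columns)
        for col, (lo, hi) in PLAUSIBLE_RANGES.items():
            if col in df.columns:
                flags[col] = df[col].notna() & ~df[col].between(lo, hi)
        return flags

    records = pd.DataFrame({"age": [64, 191, 73], "systolic_bp": [120, 85, 300]})
    print(flag_out_of_range(records))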

The aim of this research is to develop and evaluate methods to flag data points that are likely to be affected by measurement error within datasets of patient records. There will be two types of data quality (DQ) flag. The primary flag will relate to a specific value of a variable, in other words, an individual data point. The secondary DQ flag will be derived from the primary (data item) flags to describe (a) the likely number of errors within a patient record, and (b) a rating of data quality at higher levels of aggregation, such as the organisational level.
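The sketch below illustrates one way the two levels of flag might relate; the thresholds, rating labels, and variable names are purely illustrative and are not the methods the project will develop.

    # Primary flags: one boolean per data point (rows are patient records).
    import pandas as pd

    point_flags = pd.DataFrame(
        {"age": [False, True, False], "systolic_bp": [False, True, True]},
        index=pd.Index(["p1", "p2", "p3"], name="patient"),
    )
    orgs = pd.Series(["orgA", "orgA", "orgB"], index=point_flags.index)

    # Secondary flag (a): likely number of errors within each patient record.
    errors_per_record = point_flags.sum(axis=1)

    # Secondary flag (b): a data quality rating at the organisational level,
    # here simply the share of flagged data points per organisation.
    org_error_rate = point_flags.groupby(orgs).mean().mean(axis=1)
    org_rating = pd.cut(org_error_rate, bins=[-0.01, 0.05, 0.2, 1.0],
                        labels=["good", "moderate", "poor"])
    print(errors_per_record, org_rating, sep="\n")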

These DQ flags will be designed to fit within the common statistical procedures used by national clinical audits. NCAs typically use regression models to produce risk-adjusted outcome measures, and the methods will be designed for use during this process. In addition, audit datasets often contain missing values, and NCAs will impute missing values during the risk-adjustment process. Consequently, the DQ flags will be developed for use within a multiple imputation framework, notably the Multiple Imputation by Chained Equations (MICE) approach. MICE is widely used by clinical audits and, more generally, in observational research.
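For orientation, the sketch below shows MICE as implemented in the statsmodels Python library, fitting a logistic risk model across imputed datasets and pooling the results; the outcome and covariates are invented for the example.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.imputation import mice

    rng = np.random.default_rng(0)
    n = 500
    df = pd.DataFrame({"age": rng.normal(65, 10, n),
                       "creatinine": rng.normal(90, 20, n)})
    p = 1 / (1 + np.exp(-(-8 + 0.08 * df["age"] + 0.01 * df["creatinine"])))
    df["died"] = rng.binomial(1, p)
    df.loc[rng.random(n) < 0.2, "creatinine"] = np.nan  # inject missingness

    imp = mice.MICEData(df)  # chained-equations imputer
    model = mice.MICE("died ~ age + creatinine", sm.GLM, imp,
                      init_kwds={"family": sm.families.Binomial()})
    results = model.fit(n_burnin=10, n_imputations=10)  # pooled via Rubin's rules
    print(results.summary())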

There is an extensive literature on the theory and principles of multiple imputation. However, this work has tended to focus on establishing the theoretical foundations of the approach and the validity of its assumptions when used to handle patterns of missing data caused by different mechanisms (whether at random or not). A few statistical tools have been developed to examine the quality of imputed datasets, but these focus on checking the distribution of imputed values. There is no standard approach to checking for data errors during the imputation process, nor a clear understanding of how such errors might affect the results of the imputation. The proposed research will address this issue. Consequently, we expect the results to be of value to statisticians and health care researchers in general, as well as to national clinical audits.
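The distribution check mentioned above typically contrasts imputed values with observed values of the same variable, for example as in this small sketch (the numbers are synthetic):

    import numpy as np
    import pandas as pd
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(1)
    observed = pd.Series(rng.normal(90, 20, 400))  # values recorded in the audit
    imputed = pd.Series(rng.normal(95, 25, 100))   # values filled in by the imputer

    stat, pval = ks_2samp(observed, imputed)
    print(f"observed mean {observed.mean():.1f}, imputed mean {imputed.mean():.1f}")
    print(f"Kolmogorov-Smirnov statistic {stat:.3f} (p = {pval:.3f})")

Note that, when data are missing at random, the imputed and observed distributions may legitimately differ, which is one reason such checks do not by themselves detect data errors.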

Technical Summary

Many national clinical audits (NCAs) can be viewed as multi-centre prospective cohort studies of the quality of care. Their primary aim is to produce comparative information about organisational performance using a number of outcome indicators, and NCAs will generally develop regression models that capture the relationship between the outcome and patient factors in order to remove the effect of patient casemix (ie, regression models are used to produce risk-adjusted outcome statistics).
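A minimal sketch of this risk-adjustment procedure follows: fit a logistic model of the outcome on patient factors, then compare each organisation's observed deaths with its casemix-expected number. All names and coefficients are invented for illustration.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    n = 2000
    df = pd.DataFrame({
        "org": rng.choice([f"org{i}" for i in range(10)], n),
        "age": rng.normal(70, 10, n),
        "diabetes": rng.binomial(1, 0.3, n),
    })
    p = 1 / (1 + np.exp(-(-6 + 0.06 * df["age"] + 0.5 * df["diabetes"])))
    df["died_90d"] = rng.binomial(1, p)

    # Risk model: 90-day mortality as a function of patient casemix only.
    risk_model = smf.logit("died_90d ~ age + diabetes", data=df).fit(disp=0)
    df["expected"] = risk_model.predict(df)

    # Risk-adjusted indicator: observed / expected deaths per organisation.
    agg = df.groupby("org")[["died_90d", "expected"]].sum()
    print((agg["died_90d"] / agg["expected"]).round(2))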

Audit data are prone to numerous types of measurement error. Errors can arise randomly or systematically, and can affect the outcome variable or the explanatory (patient) variables. There can also be structural patterns in the distribution of errors (eg, at the organisational level), and the records of some patients might contain more errors than others, reflecting relationships between variables. These data errors can affect the estimation of organisational performance in two ways:
1. By influencing the accuracy of the coefficients in the risk-adjustment model.
2. By influencing the adjustment process when an incorrect value is multiplied by the risk-model coefficients (illustrated in the sketch after this list).
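A toy calculation makes the second mechanism concrete: the same patient is scored with a correct and with a miskeyed covariate value (the coefficients are invented for illustration).

    import numpy as np

    beta = {"intercept": -6.0, "age": 0.06}  # hypothetical risk-model coefficients

    def predicted_risk(age: float) -> float:
        """Logistic risk: expit(intercept + beta_age * age)."""
        lp = beta["intercept"] + beta["age"] * age
        return 1 / (1 + np.exp(-lp))

    print(f"true age 19:    expected risk {predicted_risk(19):.4f}")  # ~0.008
    print(f"miskeyed as 91: expected risk {predicted_risk(91):.4f}")  # ~0.368

The transposed age inflates this patient's expected risk, which in turn deflates the organisation's observed/expected ratio even though the risk-model coefficients themselves are correct.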

A range of statistical techniques is available to detect potential data errors, including various post-estimation diagnostic statistics that can be computed after fitting a regression (risk-adjustment) model. Epidemiologists have also described techniques to assess the degree to which a specified bias mechanism might affect the estimated relationship between an exposure and an outcome. However, these techniques address the first problem: incorrect regression coefficients due to data errors. For NCAs, the second problem, incorrect values being multiplied by the coefficients, is of equal or greater concern. Consequently, there is a need to develop a coherent set of practical tools for detecting measurement errors. We address this issue in the proposed research project.
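As an example of such a post-estimation diagnostic, the sketch below flags influential records via Cook's distance after fitting an ordinary least squares model in statsmodels; the data and the cut-off are illustrative.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    x = rng.normal(size=200)
    y = 2 * x + rng.normal(size=200)
    y[5] = 15  # plant a gross data error

    fit = sm.OLS(y, sm.add_constant(x)).fit()
    cooks_d = fit.get_influence().cooks_distance[0]  # first element: the distances

    threshold = 4 / len(y)  # a common rule-of-thumb cut-off
    print("records flagged for review:", np.flatnonzero(cooks_d > threshold))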

Planned Impact

Our proposed research tackles methodological issues related to the detection of data errors that affect the ability of national clinical audits to produce robust assessments of organisational performance. Our findings will be applicable across the range of national clinical audits. The findings will also be applicable more broadly, to quality assurance studies that use observational datasets and regression models to investigate the performance of health care organisations and/or policy interventions.

The results of this research will be of interest to, and benefit, a broad range of stakeholders in this area, including national clinical audit teams, commissioners of studies designed to assess organisational performance and the quality of health care, academic researchers in performance assessment and health services research, policy makers, and bodies responsible for regulating health care organisations. The results will also be relevant to biostatisticians with an interest in the application of multiple imputation. In particular, we expect the findings to provide insights into how random and systematic data errors affect the MICE imputation stage, and to lead to recommendations on how to handle datasets containing both missing data and measurement errors.

In addition, by helping national clinical audits and quality assurance studies to produce more accurate information on organisational performance, we expect this research to help health care organisations improve the quality of care delivered to patients and so, ultimately, improve patient experience and the outcomes of care.
