Assessing the effects of diagnostic tests on patient outcomes: how reliable, informative and practical are RCTs?

Lead Research Organisation: University of Birmingham
Department Name: School of Psychology

Abstract

Whether a new diagnostic test does more good than harm needs to be assessed before it enters routine use. Tests affect the health of patients if they cause harm directly, or if the process of testing reassures patients or makes them anxious. However, a test has its greatest impact on patient outcomes when its results determine the treatment that patients receive. Using a better test may mean that more patients with the disease receive effective treatment, and that fewer patients without the disease are inappropriately treated and unnecessarily put at risk of side effects.

The purpose of this project is to judge what type of research should be used to measure and compare the consequences of using tests on patient outcomes. Some recommend randomised controlled trials (RCTs) in which patients are randomised to different tests and their outcomes are assessed once all subsequent treatments have been carried out. Up to 100 test-plus-treatment RCTs of this form have been performed for various tests and diseases. An alternative approach is to use computer simulation models that link evidence of how accurate the test is with evidence of how well the treatments work. Such models may not work if the process by which clinicians interpret test results and decide on treatment is not well defined, or if tests have direct effects on patient outcomes.

In this project we will identify published test-plus-treatment RCTs and similar RCTs commissioned by the Department of Health (including ongoing studies), and evaluate how well they are designed, executed and reported. We will consider whether they recruited enough patients; whether they are potentially biased through, for example, losing too many patients; whether they describe the way in which test results are used to decide on treatments; whether the test is still in use when the study finishes; and whether all necessary outcomes were measured. This will be done both by critically reading the trial reports and protocols, and by asking the trialists.

We will also compare the results of RCTs with those of computer models for all the interventions for which we can find both RCTs and models. In addition, we will look in detail at one particular topic for which we have existing computer models and data from several RCTs.

From this work we will develop guidelines for choosing and doing research that evaluates the impact of tests on patient outcomes.

Technical Summary

Background:
Randomised controlled trials (RCTs) have been used to assess the impact of using different tests on patient outcomes, randomising participants between testing strategies and evaluating their outcomes after subsequent treatments. The complex interventions evaluated involve stages of (a) undertaking the test, (b) translating a test result into a diagnosis, (c) translating a diagnosis into a management plan, and (d) undertaking appropriate treatments. Trials of this nature are difficult to implement: they require large sample sizes, are prone to difficulties with dropout and with masking treatment allocations, and are rarely able to standardise and document all stages of the interventions used. It is possible that in some circumstances studies of other designs, such as simulation models combining evidence of accuracy with evidence of treatment effects, may provide more useful, valid and timely information.
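To make the idea of such a simulation model concrete, the minimal sketch below (in Python) links assumed test accuracy with an assumed treatment effect to compare the expected net benefit of two testing strategies. The function, parameter names and numerical values are illustrative assumptions only, not part of the proposed methods; a real model would also need to represent stages (b) and (c) above and any direct effects of testing.

    # Minimal linked-evidence sketch (illustrative assumptions throughout).
    def expected_net_benefit(prevalence, sensitivity, specificity,
                             benefit_if_diseased_treated, harm_if_healthy_treated):
        """Expected net benefit per patient when treatment follows a positive test."""
        true_positives = prevalence * sensitivity
        false_positives = (1 - prevalence) * (1 - specificity)
        return (true_positives * benefit_if_diseased_treated
                - false_positives * harm_if_healthy_treated)

    # Hypothetical comparison: a more sensitive but less specific new test.
    old_test = expected_net_benefit(0.20, 0.80, 0.90, 1.0, 0.3)
    new_test = expected_net_benefit(0.20, 0.90, 0.85, 1.0, 0.3)
    print(f"old test: {old_test:.3f}, new test: {new_test:.3f}")

A calculation of this kind is only valid when the pathway from test result to treatment decision is well defined, which is precisely the condition this project will examine.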

Objectives:
To assess the scientific validity and applicability of completed and planned RCTs of test-plus-treatment interventions; to document key issues that render such trials infeasible or unlikely to succeed; to explore situations in which alternative study designs would be more feasible and efficient; to explore trialists' understanding and experience; and to undertake empirical comparisons between the results of RCTs and simulation models.

Methods:
We will identify two cohorts of RCTs from the published literature (up to 100 trials) and the NHS Health Technology Assessment Programme (up to 29 trials, including both published and ongoing trials, for which protocols will be made available to the study team). RCTs will be included if they randomise participants between two or more diagnostic strategies and assess patient and/or process outcomes at some point after subsequent interventions have been carried out.

Trials will be evaluated for key aspects of design, execution and reporting. Trialists will be surveyed concerning key decisions in the design and execution of the trial, and for post-trial reflections on approaches they would consider in the future. For each test-plus-treatment intervention trialled we will consider (a) whether the trial effectively answered the question asked; (b) whether alternative study designs could more effectively and efficiently have addressed the same question; (c) whether simulation models have addressed the same question.

Two empirical studies will compare the results of trials and simulation models. The first study will compare results of trials with published models for all clinical questions where both are available. The second study will recreate simulation models to exactly match available trial data for a single clinical question.
