
Mindstretcher: "Gold Standards"

Once upon a time it was simple to distinguish effective from ineffective interventions. The "gold standard" for evaluation was the randomised controlled trial. To judge whether or not a result was statistically (and, by implication, clinically) significant, one looked at the p value - the probability that a result at least as extreme could have arisen by chance alone.
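The arithmetic behind such a p value can be sketched for a simple two-group comparison. The trial counts below are invented for illustration, and the normal approximation used here is only one of several ways such a p value may be computed:

```python
import math

def two_proportion_p(events1, n1, events2, n2):
    """Two-sided p value comparing event rates in two groups,
    using the normal approximation to the two-proportion z-test."""
    p1, p2 = events1 / n1, events2 / n2
    pooled = (events1 + events2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = abs(p1 - p2) / se
    # Two-sided tail probability of a standard normal deviate z
    return math.erfc(z / math.sqrt(2))

# Invented trial: 15/100 events on treatment vs 30/100 on control
p = two_proportion_p(15, 100, 30, 100)
print(f"p = {p:.3f}")  # a small p: unlikely to have arisen by chance alone
```

A p value below the conventional 0.05 threshold would be read as statistically significant; whether the difference matters clinically is a separate judgement.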

In more recent years the p value has been complemented by confidence intervals, which indicate a range of values within which the true value is thought to lie. Strictly speaking, a 90% confidence interval is calculated so that, over many repetitions of the study, it would contain the true value nine times out of ten; a 95% confidence interval, nineteen times out of twenty.
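A worked example may help. The 2x2 trial counts below are invented; the interval uses the standard log-odds-ratio approximation, one common way such intervals are calculated:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% confidence interval from a 2x2 table.

    a, b: events / non-events in the treatment group
    c, d: events / non-events in the control group
    Uses SE(ln OR) = sqrt(1/a + 1/b + 1/c + 1/d).
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Invented trial: 15/85 events on treatment, 30/70 on control
or_, lo, hi = odds_ratio_ci(15, 85, 30, 70)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
```

Because the whole interval lies below 1, such a result would be read as a statistically significant benefit; an interval straddling 1 would not.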

Even more recent studies of trial methodology indicate just how cautious one has to be in accepting results of randomised controlled trials, which, on the face of it, have an adequate trial design.

Quality counts in RCTs

Ken Schulz and his colleagues in the UK have reviewed the methodological quality of 250 controlled trials and related the quality of a trial, and in particular the process of randomisation, to the results [1]. Did inadequate design exaggerate the effect measured in the trial?

They compared trials in which the authors reported adequately concealed treatment allocations with those in which treatment allocation was either inadequate or unclearly described, as well as examining the effects of exclusions and double blinding.

The results were striking and sobering. As the table shows, odds ratios were exaggerated by 41% in trials in which concealment of treatment allocation was inadequate, and by 30% in trials in which the process of concealing allocation was unclearly described. Inadequate double blinding also contributed to exaggerated odds ratios.
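The scale of these exaggerations can be made concrete with some hypothetical arithmetic. Only the 41% and 30% figures come from the paper; the "true" odds ratio of 0.80 below is invented. On the odds-ratio scale for a beneficial treatment, an exaggeration of x% roughly corresponds to multiplying the odds ratio by (1 - x/100):

```python
def exaggerated(true_or, pct_exaggeration):
    """Odds ratio after a pct% exaggeration of a beneficial effect.

    For a beneficial treatment (odds ratio below 1), exaggerating the
    effect by x% pulls the odds ratio further from 1 by a factor
    of (1 - x/100).
    """
    return true_or * (1 - pct_exaggeration / 100)

true_or = 0.80  # hypothetical 'true' odds ratio from well-designed trials
print(exaggerated(true_or, 41))  # with inadequate concealment of allocation
print(exaggerated(true_or, 30))  # with unclearly described concealment
```

A modest real benefit can thus appear, in a poorly concealed trial, to be a dramatic one - which is precisely why the quality of concealment matters.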

This is one of a series of superb analyses of trial design, and it leads inexorably to one conclusion: unless a trial is designed and reported to the highest standard, its results must be treated with a degree of caution. Librarians might consider keeping key references from this article immediately to hand.

Structured Reporting of RCTs

What is the highest standard? A proposal for structured reporting of randomised controlled trials was set out some months ago [2]. It is detailed and sensible. It may stretch the mind to ensure that all of its many recommendations are adhered to, but those who write reports of trials and edit medical journals would do well to consider it in detail.

Perhaps in future looking for RCTs will be replaced by looking for ACTACTs - adequately concealed treatment allocation controlled trials.

References:

  1. KF Schulz, I Chalmers, RJ Hayes, DG Altman. Empirical evidence of bias: Dimensions of methodological quality associated with estimates of treatment effects in controlled trials. Journal of the American Medical Association 1995; 273: 408-12.
  2. Standards of Reporting Trials Group. A proposal for structured reporting of randomized controlled trials. Journal of the American Medical Association 1994; 272: 1926-31.


