Coffee causes cancer?

Statisticians have long been wary of scientific claims based on observational data – there is too much room to “torture the data long enough until Nature confesses,” in the words of Ronald Coase, the Nobel Prize-winning economist. Fiddle enough with the variables and model parameters, run enough comparisons, look at enough subgroups, and you can find statistical significance in almost any observed data. Statisticians prefer controlled experiments, but those are expensive and hard to do, and most published epidemiological research is based on observational data.

Nonetheless, Stanley Young and Alan Karr at the National Institute of Statistical Sciences located 12 observational studies that yielded claims that were subsequently tested in randomized controlled experiments. The results?

There were 52 “statistically significant” claims arising from the original observational studies. None replicated in the randomized controlled studies. Five actually achieved statistical significance in the opposite direction.

Young and Karr cite several factors:

– Multiple comparisons and tests. Reviewing data for numerous possible correlations, differences, etc. is bound to yield something of significance unless (1) the questions are stated in advance and (2) the significance threshold is made more stringent as more questions are asked (a point illustrated by the simulation sketch after this list).

– Bias. Systematic error results from missing factors, confounding variables, and subject attrition.

– Multiple modeling. As with multiple testing, the more variables, variable combinations and interactions you try in your model, the greater the probability that you will derive a “significant” model just by chance.
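
To see how quickly this happens, here is a minimal simulation sketch in Python (using numpy and scipy). It is not taken from the Young and Karr article; the 52 pure-noise “exposures” and 1,000 subjects are illustrative assumptions, chosen only to echo the 52 claims above. Every variable is random noise, yet testing all of them against the outcome typically turns up two or three “significant” correlations at the conventional p < 0.05 level, while the stricter Bonferroni-style threshold of 0.05/52 usually turns up none.

```python
# Toy simulation of multiple comparisons: 52 pure-noise "exposures" tested
# against a pure-noise outcome. All numbers are illustrative assumptions,
# not figures from the Young and Karr article.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_subjects, n_exposures = 1_000, 52

outcome = rng.normal(size=n_subjects)                   # no real effect anywhere
exposures = rng.normal(size=(n_subjects, n_exposures))  # 52 unrelated variables

p_values = []
for j in range(n_exposures):
    r, p = pearsonr(exposures[:, j], outcome)  # one correlation test per "claim"
    p_values.append(p)
p_values = np.array(p_values)

naive_hits = int(np.sum(p_values < 0.05))                     # expect ~2-3 by chance
bonferroni_hits = int(np.sum(p_values < 0.05 / n_exposures))  # usually 0

print(f"'Significant' at p < 0.05:                {naive_hits}")
print(f"Significant at Bonferroni 0.05/{n_exposures}:        {bonferroni_hits}")
```

Since roughly 5% of pure-noise tests clear p < 0.05 on average, a study that quietly runs 52 comparisons can be expected to “discover” two or three effects that are not there – and the same logic applies to trying many model specifications.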

The problem is serious – much medical advice (whether from your family physician, periodicals like the Harvard Health Letter, or the popular press) draws upon observational studies like the ones that Young and Karr found had a 0-for-52 track record. Much energy is later spent retracting that advice, and confidence drains from the system.

The solution? Read the article, but you can guess from the title that Young and Karr recommend a Deming-like “process control” solution. “Reproducible research” is the byword.

http://www.significancemagazine.org/details/magazine/1324539/Deming-data-and-observational-studies.html