The Atlantic summarizes a recently published attempt to assess the evidence base.
No one is entirely clear on how Brian Nosek pulled it off, including Nosek himself. Over the last three years, the psychologist from the University of Virginia persuaded some 270 of his peers to channel their free time into repeating 100 published psychological experiments to see if they could get the same results a second time around. There would be no glory, no empirical eurekas, no breaking of fresh ground. Instead, this initiative—the Reproducibility Project—would be the first big systematic attempt to answer questions that have been vexing psychologists for years, if not decades. What proportion of results in their field are reliable?
A few signs hinted that the reliable proportion might be unnervingly small. Psychology has recently been rocked by several high-profile controversies, including the publication of studies that documented impossible effects like precognition, failures to replicate the results of classic textbook experiments, and some prominent cases of outright fraud.
The findings were not pretty.
As such, the results of the Reproducibility Project, published today in Science, have been hotly anticipated.
They make for grim reading. Although 97 percent of the 100 studies originally reported statistically significant results, just 36 percent of the replications did.
And this doesn’t even consider whether the study and the coverage of it speak to the outcomes that patients and their families want.
Does this mean we should ignore research? No. But it does mean we should be very careful consumers of it. And we should probably be skeptical of those who express excessive certitude on the basis of their evidence base, especially when they discount experiential knowledge.
There is some good news and some bad news coming out of all of this.
A 1997 US law mandated the creation of the ClinicalTrials.gov registry, requiring researchers from 2000 onward to record their trial methods and outcome measures before collecting data. The study found that in a sample of 55 large trials testing heart-disease treatments, 57% of those published before 2000 reported positive effects from the treatments. But that figure plunged to just 8% in studies that were conducted after 2000. Study author Veronica Irvin, a health scientist at Oregon State University in Corvallis, says this suggests that registering clinical studies is leading to more rigorous research.
The downside? From education advocate Parker Palmer:
. . . when measurable, short-term outcomes become the only or primary standard for assessing our efforts, the upshot is as pathetic as it is predictable: we take on smaller and smaller tasks—the only kind that yield instantly visible results—and abandon the large, impossible but vital jobs we are here to do.
We must judge ourselves by a higher standard than effectiveness, the standard called faithfulness. Are we faithful to the community on which we depend, to doing what we can in response to its pressing needs?
Palmer’s concerns point to the potential for increasingly narrow definitions of effectiveness that may not speak to the real-world needs of patients, particularly in the case of complex diseases with social, emotional, and environmental factors.
“And this doesn’t even consider whether the study and the coverage of it speak to the outcomes that patients and their families want”
One of my gripes with so much research on ORT. Its perceived benefits in terms of public health and crime may well be at odds with what clients and families expect and hope for when they come to treatment. Wouldn’t it be nice for researchers to ask those seeking treatment and their families what they really want, and then map the various treatments on offer against those outcomes in terms of efficacy? As you so often point out, there is one group of patients who get a very high standard of treatment associated with excellent outcomes (doctors), something that their patients may not always get.