Anti-treatment zombie stats

This 90% statistic has been frequently cited to discredit specialty addiction treatment.

Ninety percent of those who enter addiction-treatment programs in the U.S. don’t receive evidence-based treatment

I asked David Sheff about it several weeks back. He said it was from RAND and referred me to his book. I looked in his book and couldn’t find the reference. I asked him about it and he said he’d get back to me. Now, Alcoholism & Drug Abuse Weekly explains why I couldn’t find it.

“Ninety percent of those who enter addiction-treatment programs in the U.S. don’t receive evidence-based treatment” — an assertion David Sheff made in his blog on Time.com last fall, and then restated in another Time.com blog February 2 — is based on an 11-year-old report by RAND, Sheff told ADAW. What the report actually says is that 90 percent of people with alcohol dependence did not receive the treatment that was recommended. But it doesn’t say that they entered treatment at all. In other words, the statement is inaccurate.

To be sure, there’s a lot of bad treatment out there, and it should be covered by the media. However, much of the coverage is biased against specialty treatment and in favor of physician-directed treatment. (As though lousy treatment in medical settings were a rarity.)

There are many fair criticisms to be made of a lot of the treatment in the US, but vague blanket claims like this 90% figure don’t help addicts or their families find good treatment. The implication that any treatment with medication is good, while any treatment without it is primitive voodoo, is false and damaging. And the blanket nature of these attacks slanders and alienates ethical professionals who provide good care.

After Alcoholism & Drug Abuse Weekly set the record straight, another journalist defended the error, arguing that it appeared in an opinion section and that the false statistic was used to support a conclusion she believed was true.

What can you do when the journalists, who have the soapbox, argue that they’re right, even when their facts are wrong?

. . . most men have bound their eyes with one or another handkerchief, and attached themselves to some one of these communities of opinion.  This conformity makes them not false in a few particulars, authors of a few lies, but false in all particulars.  Their every truth is not quite true.  Their two is not the real two, their four not the real four; so that every word they say chagrins us, and we know not where to begin to set them right.– Ralph Waldo Emerson

via McLellan on the state of evidence-based treatment, and an old RAND report.

8 thoughts on “Anti-treatment zombie stats”

  1. I do not like the term “evidence-based.” An honest appraisal of the science would find that the majority of studies do not meet the criteria of being upheld with any certainty as fact. As Dr. Eric Topol (Cleveland Clinic) wrote, “Just because we can prove it’s ‘true’ doesn’t mean it’s true.” We should, instead, be discussing “levels of evidence” and “quality of evidence.” Those terms are conversation starters.

    1. It’s supposed to level the field (if you can provide the evidence, we’ll consider it evidence-based) and provide clarity, but it does the exact opposite. It gets political, and the outcomes associated with the “evidence-based” designation for any particular treatment so often do not match the patient’s desired outcomes. On top of that, patients are not informed of these outcome mismatches.

      They are not told that these treatments are evidence-based for reduced drug use (not abstinence), reduced incarceration, and reduced disease transmission. They are not evidence-based for getting your life back.

      Of course, 12 Step Facilitation is evidence-based, but they always find a reason to argue that TSF programs are not. Maddening.

        1. Excellent point. Their outcomes (assuming the research can even be considered meaningful, which is unlikely in terms of how scientific study works) often measure something entirely different from recovery. This, I believe, is the impetus behind redefining recovery to mean an unspecified reduction of use for an unspecified period of time and counting this as success. Even twisting patient-centered treatment to mean the addict chooses whatever recovery means for him or her…the Wild West theory of recovery. Where else does that work? Can I effectively choose whatever I like to be my recovery for a brain tumor, for a blocked artery? What are my chances of getting it right? Will I be satisfied with half-measures that reduce harm (however lasting that might be) if I know there is a path to full recovery?

        There are many ways to fail, but success is a narrow path, to paraphrase Aristotle.

        Much discussion of the research today seems to end with the term “harm reduction,” which is a product of fallacious reasoning. It puts the brain to sleep, shutting down critical thinking. The phrase creates an undeserved positive emotional response. There is no such thing as harm reduction because once a substance enters the addict’s body, all bets are off.

        Alcoholics and addicts all attempt, in their own way, to practice harm reduction in active addiction, with varying degrees of short-term success. Therefore studies can also show successful harm reduction in the short run. Lasting success is what cannot be achieved, even when success is defined as “harm reduction.” Let’s go into homes and prisons and morgues and ask about harm reduction. Let’s ask Audrey Kishline, mother of Moderation Management, about harm reduction. It’s a myth, and a dangerous one.

        Recovery is our word. It means something very specific. These other folks need to go get their own word.

  2. It’s frustrating to see this kind of thing. Frustrating and unhelpful. Actually, I’ll go further: frustrating, unhelpful and somewhat dangerous. You see the same myths being trotted out again and again. My bugbear is ‘Mutual aid (usually AA) only helps 1% (or choose another low figure that pops into your mind) of those who go.’

      1. The myth of AA not working is a matter of (willfully?) confusing compliance with efficacy. Since 50% of Americans, for instance, cannot comply with completing a course of medication, which requires nothing more than swallowing a pill, we understand how tough it can be for human beings to comply in the simplest matters. Therefore, I question the motives behind this confusion when speaking of AA and its rate of success. Is there a breakdown of critical thinking or an intentional degrading of 12 step programs? Or is bias coloring reasoning? Whichever the answer, it begs another question: how reliable are these minds?
