Beware of misleading headlines

Caution Tape (Photo credit: Picture Perfect Pose)

A new article discussing the expanding use of medications in addiction treatment carries the following subheading:

Experts are pushing for a truly medical approach to treating addiction as a disease rather than relying solely on longtime unproven therapies like 12-step programs.


I’m certain a day will come when we have effective pharmacological tools to help addicts initiate and maintain recovery, but beyond detox I find the current meds pretty underwhelming as a group and troubling in some cases.

When you hear the push for these “scientific”, “medical” and “evidence-based” treatments, keep these exhibits in mind:

A Doctor’s Dilemma: When Crucial New-Drug Data Is Hidden

The positive spin surrounding industry-funded studies — which are, after all, the studies that the government uses to approve drugs — isn’t the only ongoing problem. Goldacre further describes how drug companies hide data about medication risks that affect children, how they attempt to intimidate the employers of researchers who produce results they don’t like, and how they routinely withhold safety data in various other ways that do harm to patients.

A Call for Caution on Antipsychotic Drugs

You will never guess what the fifth and sixth best-selling prescription drugs are in the United States, so I’ll just tell you: Abilify and Seroquel, two powerful antipsychotics. In 2011 alone, they and other antipsychotic drugs were prescribed to 3.1 million Americans at a cost of $18.2 billion, a 13 percent increase over the previous year, according to the market research firm IMS Health. … Several recent large randomized studies, like the landmark Catie trial, failed to show that the new antipsychotics were any more effective or better tolerated than the older drugs.

It was also soon discovered that the second-generation antipsychotic drugs had serious side effects of their own, namely a risk of increased blood sugar, elevated lipids and cholesterol, and weight gain. They can also cause a potentially irreversible movement disorder called tardive dyskinesia, though the risk is thought to be significantly lower than with the older antipsychotic drugs.

Nonetheless, there has been a vast expansion in the use of these second-generation antipsychotic drugs in patients of all ages, particularly young people. Until recently, these drugs were used to treat a few serious psychiatric disorders. But now, unbelievably, these powerful medications are prescribed for conditions as varied as very mild mood disorders, everyday anxiety, insomnia and even mild emotional discomfort.

Top 10 Drug Company Settlements

Record-breaking multibillion-dollar settlements against big drug companies have become routine in the U.S. In recent years, pharmaceutical companies seem to have been playing a game of one-upmanship, each surpassing yet a new milestone of wrongdoing — fraudulently marketing their drugs or making misleading claims about their safety — and the threat of massive payouts appears to have offered little deterrent.

6 thoughts on “Beware of misleading headlines”

  1. We all know if they had a drug to cure addiction, most addicts like me would take twice the recommended dosage and wash it down with a glass of scotch!

  2. I bet the opinion that 12 step is not ‘proven’ is based on narrowly defined Cochrane-style reviews limited to studies using highly artificial and inappropriate methods such as randomised controlled trials (RCTs). These hard-core, hairy-chested methods are great for testing the causal efficacy of biomedical technofixes such as pills, but poor choices for evaluating the complexities of biopsychosocial interventions in the field, especially when they are not controlled by the researchers or their professional colleagues, as is the case with TCs and mutual-aid.
    Yet so often the inappropriate application of this method, or even the lack of studies using it because of ethical or pragmatic constraints, is used to make the grand statement “there is no evidence”.
    Too often you see programs or phenomena dismissed with the lines ‘there is no evidence for’ or ‘there is little evidence for this and that’ without any explanation of what is actually meant by these statements.
    “There is no evidence” can mean four different things.
    “There is no evidence” can mean the intervention has been exhaustively studied and shown to be ineffective. In this case, you are probably justified in saying “there is no evidence”, unless of course you are insisting on limiting the studies you look at to methods that are inappropriate for the task at hand, i.e. RCTs of mutual-aid or TCs. This is Big Tobacco’s old tactic of insisting that only experimental methods can establish causation. And of course, there are no experimental studies that link tobacco to cancer. The reason is that such studies would be completely unethical and impractical (except during the Third Reich). The vast majority of scientists and policy makers accept the causal link between tobacco and cancer despite the fact that the evidence is entirely observational.
    Yet I’ve seen commentators (such as Stanton Peele) insist that the only convincing evidence of the effectiveness of AA would be RCTs, and they summarily dismiss all the observational evidence as fatally undermined by self-selection bias.

    Few credible researchers insist that experimental evidence alone is sufficient when assessing health interventions. It is well known that experimental evidence of efficacy often disappears in the real-world noise of clinical practice. Furthermore, RCTs are a particularly poor choice of design for evaluating a community-based intervention which is self-chosen and self-delivered by the clients themselves.
    How can random assignment to voluntarily choosing to attend and engage in a freely available, non-professionally controlled resource such as AA be achieved when the researcher or clinician has no control over such things? How can RCTs, which aim to avoid ‘biases’ such as self-selection, be used to evaluate interventions such as attendance and involvement at AA, which rely primarily on self-selection?
    “There is no evidence” can mean the intervention has not been well studied, or not studied at all, so we don’t know much about it and therefore “there is no evidence”. You could make the claim that “there is no evidence that SMART Recovery works”, and the reason behind such a claim is that it simply has never been studied for effectiveness in either experimental or observational designs.

    So is it really fair to dismiss SMART Recovery because it has never been studied? On the contrary, you often hear treatment professionals, including Peele, dismiss 12 step as ‘unproven’ yet enthusiastically claim SMART as evidence-based because it is founded on ‘proven’ techniques such as CBT. Thus, it seems expert opinion trumps observational evidence when it comes to 12 step. It is simply disingenuous to insist on impossibly high or inappropriate levels of evidence for programs you personally oppose yet accept lax standards of evidence for programs you favour.

    “There is no evidence” can mean the intervention cannot be measured. This one is often used by those who adhere to the self-annihilating positivist dogma that if something can’t be verified then it can’t be scientifically assessed (which in its naïve form says if it can’t be measured it doesn’t exist). This tactic is used to dismiss qualitative evidence and methods as merely ‘anecdotal’, so if someone raises a seemingly nebulous issue such as ‘sending the wrong message’, they can be easily dismissed with “there is no evidence” because such an issue can never be measured.
    The most blatant way to say “there is no evidence” is to simply misrepresent the evidence or deny it altogether. This is clearly the easiest to spot; however, unless someone has the time and inclination to check out these claims and is prepared to speak out, they go uncontested.
    There is even a Cochrane review (Ferri, Amato & Davoli 2006) that seems to misrepresent the evidence for Twelve-Step Facilitation (TSF), a professional intervention that encourages and facilitates someone to attend and affiliate with 12 step groups.
    This review looks at studies showing, basically, that TSF is as good as but no better than CBT or MI in terms of reduction in drinking and other measures. The review uses these equivalence findings to make the claim that “No experimental studies unequivocally demonstrated the effectiveness of AA or TSF approaches for reducing alcohol dependence or problems.”
    Leaving aside what ‘unequivocal’ means, this statement implies that AA is not effective even though it was not AA that was studied but TSF.
    In the Plain Language summary the authors state “The available experimental studies did not demonstrate the effectiveness of AA or other 12-step approaches in reducing alcohol use and achieving abstinence compared with other treatments, but there were some limitations with these studies.”
    This statement is couched in negative language that almost seems to invite misinterpretation, but it is actually saying that the finding of the constituent studies of this review, as described in the discussion section, is that TSF is no more and no less effective than other treatments; in other words, TSF works as well as, but no better than, standard treatments such as CBT and MI.
    Buried in the discussion section is the admission that “there is no conclusive evidence to show that AA can help patients to achieve abstinence, nor is there any conclusive evidence to show that it cannot.”
    Note: ‘conclusive’ = ‘experimental’ in Cochrane speak; observational studies are generally considered flawed by the potential for bias and thus can be safely ignored.
    In the next paragraph this fact becomes “Yet experimental studies have on the whole failed to demonstrate their effectiveness in reducing alcohol dependence or drinking problems when compared to other interventions”, i.e. as good as but no better than.
    By the time we get to the Authors’ conclusions – Implications for practice, we get the stern warning “People considering attending AA or TSF programmes should be made aware that there is a lack of experimental evidence on the effectiveness of such programmes.”
    This rhetorical sleight of hand could just as easily have been employed, using studies that show no difference between CBT and MI, to claim “No experimental studies unequivocally demonstrated the effectiveness of CBT or SMART approaches for reducing alcohol dependence or problems”.
    This widely cited review pops up all over the place.
    For example, in the article “Alcohol intervention – What works?” in the Australian Family Physician Vol 37, No 1/2 2008, the author quotes the Cochrane study and states that it “does not find any conclusive evidence of improved outcomes compared to other standard treatments for alcoholism such as CBT”. The author even asserts practitioners are obliged to inform patients about the “lack of evidence for the effectiveness when discussing AA.”
    This essentially reads as: there is no evidence that AA works, and practitioners should advise their patients of this lack of evidence. I wonder if the author would go so far as to suggest that clinicians advise their clients not to go, or to stop going, to AA because of this lack of evidence.
    The author then goes on to suggest SMART Recovery as a (presumably more effective) alternative.
    It must be confusing for GPs when they go on to read other publications referred to in the Australian Family Physician article, such as the “Guidelines for the Treatment of Alcohol Problems” (DoHA, 2009), which paint a somewhat different picture. It seems DoHA has a more inclusive view of what comprises evidence.
    These guidelines state on page 109 that there is ‘sufficient evidence to suggest’ that:
    • involvement in AA plays a major role in the improvement of a range of outcomes;
    • the more involvement, the better the outcomes (a dose/response effect);
    • assertive referral by clinicians to AA improves outcomes and treatment retention; and
    • AA in conjunction with treatment works better than AA or treatment alone.
    They grade the strength of recommendation for patients engaging in AA as “B” based on observational evidence graded as II and assertive referral is rated at the highest level “A” based on observational evidence graded as Category I.
    In contrast, clinical recommendations for SMART Recovery are given the lowly ranking of D because SMART Recovery has never been formally studied in either experimental or naturalistic studies.
    So why would the author, who presumably has a commitment to evidence-based practice, seem to dismiss the B grade evidence that exists for AA effectiveness and the A grade evidence for assertive referral to AA, yet go on to recommend an alternative for which there is no evidence beyond expert opinion?
    Perhaps mutual-aid groups that were designed and founded by professionals don’t require the same level of evidence as those that weren’t. I am not suggesting that SMART Recovery is not effective; I am sure it is, for much the same reasons AA is. I am questioning the way the author presents the evidence for these two self-help groups in the article.
    As the DoHA Guide authors state, their clinical recommendations are explicitly linked to the grading of evidence, “allowing for greater emphasis to be placed upon consensus recommendations and standards of care that reflect good clinical practice and avoiding therapeutic nihilism when there is insufficient evidence available.”
    It seems we have ‘therapeutic nihilism’ here even in the face of substantial observational evidence.
    Can I ask if you agree with such an approach to evidence? I take it that ‘consensus recommendation’ is the basis for your recommendation for SMART Recovery? And I would be interested in why you reject the evidence for AA described in the Guide.
    A blanket rejection of observational evidence on the basis that it is prone to selection bias smacks of the ‘methodological hubris’ that the Cochrane collaboration itself warns against (Kunz, Vist & Oxman 2008):

    … there are limitations to the use of randomised trials, both real and imagined, and scepticism about the importance of randomisation (US Office HTA 1994; Black 1996; Weiss 1998; Pocock 2000). We believe this scepticism is healthy. It is important to question assumptions about research methods, and to test these assumptions empirically, just as it is important to test assumptions about the effects of health care. Methodological hubris can be just as dangerous as medical hubris.

    1. Wow! Thanks for the thoughtful and thorough response.

      A while back, I had a post that discussed truth-finding as a social process.

      It brings to mind a comment from a recent episode of On Being [emphasis mine]:

      “…individuals are hopelessly biased, they cannot perceive the truth by themselves.”

      “Science is not just an individual activity. We expect our scientists, we exhort them, to be as objective as they can and a good scientist tries to do so very earnestly, but still fails. So therefore, there must be a social process that causes science to work to be a truth-discovering process.”

      This thing about scientific truth-discovery being a social process puts its finger on something very important. It’s one of the things that’s so frustrating about hearing people tout evidence-based policies.

      The problem here is: what do you get when the social process takes place in a closed system where participants are blind to their own bias?

      I don’t know how else to explain the willful disregard of all of the observational evidence.

      Thanks again!

  3. Until 4-5 years ago, scientists said there was no evidence/explanation to prove bumble bees could fly. But flew they did.
    Scientists follow great practitioners, taking notes.
