#NotSoFunny #16 – Ridiculing RCTs & EBM

1 02 2010

I remember it well. As a young researcher, I presented my findings in one of my first talks, at the end of which the chair killed my work with a remark that made the whole room of scientists laugh but was really beside the point. My supervisor, a truly original and very wise scientist, suppressed his anger. Afterwards, he said: “It is very easy to ridicule something that isn’t a mainstream thought. It’s the argument that counts. We will prove that we are right.” …And we did.

This was not my only encounter with scientists who try to win a debate by making fun of a theory, a finding or… people. But it is not only the witty scientist who is to *blame*; it is also the uncritical audience that just swallows it.

I have similar feelings about some journal articles and blog posts that try to ridicule EBM – or any other theory or approach. Funny, perhaps, but often misunderstood and misused by “the audience”.

Take for instance the well known spoof article in the BMJ:

“Parachute use to prevent death and major trauma related to gravitational challenge: systematic review of randomised controlled trials”

It is one of those Christmas spoof articles in the BMJ, meant to inject some medical humour into the normally serious scientific literature. The parachute article pretends to be a systematic review of RCTs investigating whether parachutes can prevent death and major trauma. Of course, no such trial has been done or ever will be: you cannot drop people out of a plane at random, with and without a parachute, to prove that you are better off jumping with one.

I found the article only mildly amusing. It is so unrealistic that it becomes absurd. Not that I don’t enjoy absurdities at times, but absurdities should not assume a life of their own. Presented this way, the article doesn’t evoke a true discussion; it only reinforces the prejudice some people already have.

People keep referring to this 2003 article. Last Friday, Dr. Val (with whom I mostly agree) devoted a Friday Funny post to it at Get Better Health: “The Friday Funny: Why Evidence-Based Medicine Is Not The Whole Story” [2].* In 2008 the paper was also discussed at Not Totally Rad [3]. That EBM is not the whole story seems pretty obvious to me. It was never meant to be…

But let’s get specific. Which assumptions about RCTs and SRs are wrong, twisted or taken out of context? Please also read the excellent comments below the original article; they often hit the nail on the head.

1. EBM is cookbook medicine.
Many define EBM as “making clinical decisions based on a synthesis of the best available evidence about a treatment” (e.g. [3]). However, EBM is not cookbook medicine.

The accepted definition of EBM is “the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients” [4]. Sackett already emphasized back in 1996:

Good doctors use both individual clinical expertise and the best available external evidence, and neither alone is enough. Without clinical expertise, practice risks becoming tyrannised by evidence, for even excellent external evidence may be inapplicable to or inappropriate for an individual patient. Without current best evidence, practice risks becoming rapidly out of date, to the detriment of patients.


2. RCTs are required for evidence.

Although a well-performed RCT provides the “best” evidence, RCTs are often not appropriate or indicated. That is especially true for domains other than therapy. For prognostic questions, the most appropriate study design is usually an inception cohort. An RCT, for instance, can’t tell whether female age is a prognostic factor for clinical pregnancy rates following IVF: there is no way to randomize for “age” or for “BMI”. ;)

The same is true for etiologic or harm questions. In theory, the “best” answer is obtained by an RCT, but RCTs are often unethical or unnecessary. RCTs are out of the question for addressing whether substance X causes cancer; observational studies will do. Sometimes case reports provide sufficient evidence: if a woman gets hepatic veno-occlusive disease after drinking large amounts of a herbal tea, finding similar cases in the literature may be enough to conclude that the herbal tea probably caused the disease.

Diagnostic accuracy studies also require a different study design (a cross-sectional or cohort study).

But even in the case of interventions, we can settle for less than an RCT. Evidence is not simply present or absent; it exists on a hierarchy. RCTs (if well performed) are the most robust, but if none are available we have to rely on “lower” evidence.
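This fallback logic is easy to make concrete. Below is a toy Python sketch; the levels and their order are a simplified illustration of my own, not an official grading scheme such as GRADE or the Oxford CEBM levels:

```python
# Toy illustration of the evidence hierarchy: pick the strongest
# study design available for a question. Levels are simplified,
# for illustration only; real grading schemes are much richer.
HIERARCHY = [
    "systematic review of RCTs",   # strongest
    "randomised controlled trial",
    "cohort study",
    "case-control study",
    "case series",
    "expert opinion",              # weakest
]

def best_available(designs):
    """Return the highest-ranking design present in `designs`, or None."""
    for level in HIERARCHY:
        if level in designs:
            return level
    return None

# For a prognostic question an RCT may simply not exist, so we fall back:
print(best_available({"cohort study", "case series"}))  # cohort study
```

The point is not the code but the fallback itself: the absence of an RCT does not mean the absence of evidence.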

BMJ Clinical Evidence has even made a list of clinical questions unlikely to be answered by RCTs. For such questions, Clinical Evidence searches for and includes the best appropriate form of evidence:

  1. where there are good reasons to think the intervention is not likely to be beneficial or is likely to be harmful;
  2. where the outcome is very rare (e.g. a 1/10000 fatal adverse reaction);
  3. where the condition is very rare;
  4. where very long follow up is required (e.g. does drinking milk in adolescence prevent fractures in old age?);
  5. where the evidence of benefit from observational studies is overwhelming (e.g. oxygen for acute asthma attacks);
  6. when applying the evidence to real clinical situations (external validity);
  7. where current practice is very resistant to change and/or patients would not be willing to take the control or active treatment;
  8. where the unit of randomisation would have to be too large (e.g. a nationwide public health campaign); and
  9. where the condition is acute and requires immediate treatment.
Of these, only the first case is categorical. For the rest, the cut-off point at which an RCT is no longer appropriate is not precisely defined.
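Point 2 in the list (a 1/10,000 fatal adverse reaction) can be checked with simple arithmetic. A quick back-of-the-envelope sketch in Python shows why a trial of realistic size will usually miss such an event entirely:

```python
# Chance of observing at least one adverse event among n treated
# patients, if the true event rate is 1 in 10,000.
p = 1e-4
for n in (1_000, 10_000, 100_000):
    at_least_one = 1 - (1 - p) ** n
    print(f"n={n:>7}: P(>=1 event) = {at_least_one:.3f}")
# Even with 10,000 patients per arm there is a ~37% chance of seeing
# no event at all; reliably comparing rates between arms needs far more.
```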

3. Informed health decisions should be based on good science rather than EBM (alone).

Dr Val [2]: “EBM has been an over-reliance on ‘methodolatry’ – resulting in conclusions made without consideration of prior probability, laws of physics, or plain common sense. (….) Which is why Steve Novella and the Science Based Medicine team have proposed that our quest for reliable information (upon which to make informed health decisions) should be based on good science rather than EBM alone.”

Methodolatry is the profane worship of the randomized clinical trial as the only valid method of investigation. As the previous sections show, however, EBM itself does not equate evidence with RCTs alone.

The name “Science Based Medicine” suggests that it is opposed to “Evidence Based Medicine”. At their blog David Gorski explains: “We at SBM believe that medicine based on science is the best medicine and tirelessly promote science-based medicine through discussion of the role of science and medicine.”

While this may apply to a certain extent to quackery or homeopathy (the focus of SBM), there are many examples of the opposite: science or common sense has led to interventions that turned out to be ineffective or even damaging.

As a matter of fact, many side effects are not foreseen, and few in vitro or animal experiments have led to successful new treatments.

In the end, what matters most to the patient is that “it works” (and that the benefits outweigh the harms).
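“It works and the benefits outweigh the harms” can be quantified with the standard EBM summary measures: the absolute risk reduction (ARR) and the number needed to treat (NNT = 1/ARR). A minimal sketch, with made-up event rates:

```python
# Absolute risk reduction and number needed to treat.
# Event rates below are hypothetical, chosen only to illustrate.
def arr(control_rate, treated_rate):
    """Absolute risk reduction: how much treatment lowers the event rate."""
    return control_rate - treated_rate

def nnt(control_rate, treated_rate):
    """Number needed to treat to prevent one event (1 / ARR)."""
    return 1 / arr(control_rate, treated_rate)

# Hypothetical: bad outcomes drop from 10% to 8% under treatment.
print(round(nnt(0.10, 0.08)))  # 50 patients treated per event prevented
```

An NNT of 50 may still be worthwhile, but only if the harms per 50 treated patients are smaller; that trade-off, not methodology for its own sake, is what the patient cares about.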

Furthermore, EBM is not – or should not be – without consideration of prior probability, laws of physics, or plain common sense. To me, SBM and EBM are not mutually exclusive.

Why the parachute example is unfair and unrealistic

I’ll leave that to the following comments on the original article (and yes, the selection is biased) [1]:

Nibu A George, Scientist:

First of all generalizing such reports of some selected cases and making it a universal truth is unhealthy and challenging the entire scientific community. Secondly, the comparing the parachute scenario with a pure medical situation is unacceptable since the parachute jump is rather a physical situation and it become a medical situation only if the jump caused any physical harm to the person involved.

Richard A. Davidson, MD, MPH:

This weak attempt at humor unfortunately reinforces one of the major negative stereotypes about EBM….that RCT’s are required for evidence, and that observational studies are worthless. If only 10% of the therapies that are paraded in front of us by journals were as effective as parachutes, we would have much less need for EBM. The efficacy of most of our current therapies are only mildly successful. In fact, many therapies can provide only a 25% or less therapeutic improvement. If parachutes were that effective, nobody would use them.
While it’s easy enough to just chalk this one up to the cliche of the cantankerous British clinician, it shows a tremendous lack of insight about what EBM is and does. Even worse, it’s just not funny.

Aviel Roy-Shapira, Senior Staff Surgeon:

Smith and Pell succeeded in amusing me, but I think their spoof reflects a common misconception about evidence based medicine. All too many practitioners equate EBM with randomized controlled trials, and metaanalyses.
EBM is about what is accepted as evidence, not about how the evidence is obtained. For example, an RCT which shows that a given drug lowers blood pressure in patients with mild hypertension, however well designed and executed, is not acceptable as a basis for treatment decisions. One has to show that the drug actually lowers the incidence of strokes and heart attacks.
RCT’s are needed only when the outcome is not obvious. If most people who fall from airplanes without a parachute die, this is good enough. There is plenty of evidence for that.

EBM is about using outcome data for making therapeutic decisions. That data can come from RCTs but also from observation.

Lee A. Green, Associate Professor:

EBM is not RCTs. That’s probably worth repeating several times, because so often both EBM’s detractors and some of its advocates just don’t get it. Evidence is not binary, present or not, but exists on a heirarchy (Guyatt & Rennie, 2001). (….)
The methods and rigor of EBM are nothing more or less than ways of correcting for our
imperfect perceptions of our experiences. We prefer, cognitively, to perceive causal connections. We even perceive such connections where they do not exist, and we do so reliably and reproducibly under well-known sets of circumstances. RCTs aren’t holy writ, they’re simply a tool for filtering out our natural human biases in judgment and causal attribution. Whether it’s necessary to use that tool depends upon the likelihood of such bias occurring.

Scott D Ramsey, Associate Professor:

Parachutes may be a no-brainer, but this article is brainless.

Unfortunately, there are few if any parallels to parachutes in health care. The danger with this type of article is that it can lead to labeling certain medical technologies as “parachutes” when in fact they are not. I’ve already seen this exact analogy used for a recent medical technology (lung volume reduction surgery for severe emphysema). In uncontrolled studies, it quite literally looked like everyone who didn’t die got better. When a high quality randomized controlled trial was done, the treatment turned out to have significant morbidity and mortality and a much more modest benefit than was originally hypothesized.

Timothy R. Church, Professor:

On one level, this is a funny article. I chuckled when I first read it. On reflection, however, I thought “Well, maybe not,” because a lot of people have died based on physicians’ arrogance about their ability to judge the efficacy of a treatment based on theory and uncontrolled observation.

Several high profile medical procedures that were “obviously” effective have been shown by randomized trials to be (oops) killing people when compared to placebo. For starters to a long list of such failed therapies, look at antiarrhythmics for post-MI arrhythmias, prophylaxis for T. gondii in HIV infection, and endarterectomy for carotid stenosis; all were proven to be harmful rather than helpful in randomized trials, and in the face of widespread opposition to even testing them against no treatment. In theory they “had to work.” But didn’t.

But what the heck, let’s play along. Suppose we had never seen a parachute before. Someone proposes one and we agree it’s a good idea, but how to test it out? Human trials sound good. But what’s the question? It is not, as the author would have you believe, whether to jump out of the plane without a parachute or with one, but rather stay in the plane or jump with a parachute. No one was voluntarily jumping out of planes prior to the invention of the parachute, so it wasn’t to prevent a health threat, but rather to facilitate a rapid exit from a nonviable plane.

Another weakness in this straw-man argument is that the physics of the parachute are clear and experimentally verifiable without involving humans, but I don’t think the authors would ever suggest that human physiology and pathology in the face of medication, radiation, or surgical intervention is ever quite as clear and predictable, or that non-human experience (whether observational or experimental) would ever suffice.

The author offers as an alternative to evidence-based methods the “common sense” method, which is really the “trust me, I’m a doctor” method. That’s not worked out so well in many high profile cases (see above, plus note the recent finding that expensive, profitable angioplasty and coronary artery by-pass grafts are no better than simple medical treatment of arteriosclerosis). And these are just the ones for which careful scientists have been able to do randomized trials. Most of our accepted therapies never have been subjected to such scrutiny, but it is breathtaking how frequently such scrutiny reveals problems.

Thanks, but I’ll stick with scientifically proven remedies.

[Embedded media: parachute experiments without humans]

* on the same day as I posted Friday Foolery #15: The Man who pioneered the RCT. What a coincidence.

** Don’t forget to read the comments to the article. They are often excellent.


References

  1. Smith, G.C.S., & Pell, J.P. (2003). Parachute use to prevent death and major trauma related to gravitational challenge: systematic review of randomised controlled trials. BMJ, 327(7429), 1459–1461. DOI: 10.1136/bmj.327.7429.1459
  2. The Friday Funny: Why Evidence-Based Medicine Is Not The Whole Story (getbetterhealth.com) [2010.01.29]
  3. Call for randomized clinical trials of Parachutes (nottotallyrad.blogspot.com) [08-2008]
  4. Sackett, D.L., Rosenberg, W.M., Gray, J.A., Haynes, R.B., & Richardson, W.S. (1996). Evidence based medicine: what it is and what it isn’t. BMJ, 312(7023), 71–72. PMID: 8555924


27 responses

1 02 2010
precordialthump

G’day Jacqueline and MedLiblog readers,

I always viewed the parachute article as an expression of what Bradford Hill, one of the key architects of the RCT, said:

‘Any belief that the controlled trial is the only way would mean not that the pendulum had swung too far but that it had come right off the hook’

I like the parachute article as an antidote to those methodolatrous clinicians who say – ‘but there is no RCT on that’.

However, the point – that you share – is that EBM and RCTs are not one and the same. One of the best discussions of this is Professor Sir Michael Rawlins’ 2008 Harveian Oration titled “De testimonio: On the evidence for decisions about the use of therapeutic interventions”. I blogged about it, and linked to it, here: http://lifeinthefastlane.com/2009/05/de-testimonio/

Rawlins skillfully explains that evidence from ‘lower down’ the hierarchy (such as observational studies), when ‘fit for purpose’, can negate the need for an RCT.

Nice post!

Chris


1 02 2010
dundeechest

I attended a lecture by Prof Allan Maurice the other day – ostensibly to try to encourage me to use more tiotropium. He actually spent the majority of the lecture telling us that RCTs and the “best” of EBM is “of little use in the real world patients I see”. Inclusion criteria, exclusion criteria, controlled environments, bias towards patients willing to enter the trial in the first place: he sees this as negating the value of most research, it seems.

He chaired the BTS committee on cough; the first line of the guideline states, somewhat proudly “This guideline is based on expert opinion”.

Personally, I cannot be so dogmatic. The triangle of improved patient outcomes needs a base, clearly, but the triangle appears to be equilateral, not a small hump.

Good post, thanks.

1 02 2010
Martin Fenner

Thanks Jacqueline. I personally believe that we need more evidence-based medicine in science blogging and science journalism.

2 02 2010
laikaspoetnik

Hi Martin. What do you mean exactly? That EBM should be dealt with more often at science blogs or articles? Or that social media should be based more on evidence?

1 02 2010
Eronarn

Great post!

I am not a medical practitioner, but for a college course I did a literature review on the cost-effectiveness of a certain CAM treatment. As part of this literature review I did a good amount of reading on EBM. I did not have any prior experience with the topic, but in my readings I was most struck by how many practitioners *weren’t* making use of EBM in their practices. It is not a particularly inflexible system in an absolute sense, and certainly not in a relative sense when compared to medicine’s prescribing, privacy, or ethical guidelines.

That being said, I am a big fan of SBM’s blog and I think they’re right about the SBM/EBM gap – in a very particular way, that is.

Science can study its own practice by testing its organization, methods, and even its tests for potential improvements. Rather than a “Yes/No” to “scientific”, science is a sliding scale of best practices. While we may call something like a polygraph “unscientific”, this isn’t because the concept is unsound. In fact, polygraph machines are better at detecting lies than random chance (though still worse than many humans). What makes the use of polygraph machines non-scientific is the refusal of polygraph practitioners to be appropriately critical of their methods and of the findings those methods produce, and to have the will to improve if they find themselves lacking.

EBM is not as scientific as it could be… but that hardly means it is unscientific. I believe that EBM simply exhibits a lag in absorbing and acting upon relevant information (both in the structure of EBM itself, and in the actual use of EBM). And I think that’s very unsurprising given that EBM can be thought of as between the institutions of science and medicine: they have to deal with institutional inertia on both sides. Unfortunately, it often takes a very long time to convince a large group of hierarchically organized humans to shift their behavior even if you can come up with a damn good reason why they should.

2 02 2010
Grand Rounds Vol 6, No. 19 | A Groundhog's Perspective on Med Blogs | More iPad

[...] say “hah!”  Not so for Jacqueline at Laika’s Medlib Blog.  She thinks that rejecting EBM is equivalent to rejecting science as a basis to medicine.  She’s right about that; I just hope she doesn’t bash the article on Ruppy.  She [...]

3 02 2010
laikaspoetnik

Thanks all for these very well thought out comments. Wonderful.

@precordialthump, dear Chris. Indeed my main point is [1] that EBM and RCTs are not the same, but also [2] that controlled trials are NOT always “the best”, nor always necessary (as explained more profoundly in the Harveian Oration). Yet another point [3] is that the apparent opposite of EBM, SBM, has its own weaknesses (i.e. the theory can be wrong – like in the breast cancer example in the Oration). Both EBM and SBM can be very dogmatic.
If this parachute article was meant as an antidote to those methodolatrous clinicians who say ‘but there is no RCT on that’, then that would be fine with me. I get the impression, however, that it works the other way around: that the paper is grist to the mill of those who are ‘against’ RCTs & EBM: “See, this article perfectly shows that an RCT is nonsense” (we don’t need RCTs to show people need to breathe. Ha, ha, ha.)

There are certainly limitations to RCT’s and these are highlighted in the Harveian Oration. Thanks for the post and the link. Brilliant indeed.

@dundeechest I fear that generalisability is indeed one of the shortcomings of an RCT, especially if your patient is a child, an aged person, a pregnant woman or someone with comorbidities. But to conclude that we should forget all RCTs and confine ourselves to opinion-based guidelines that we write ourselves, well…. Dogmatic is always bad. Thanks for sharing your great story.

@eronarn Thanks for your contribution. There are many hurdles to practicing EBM, even if you believe in it.
What I’ve seen from the SBM blog is that it more or less confines itself to the (non-)science of CAM. I wonder whether their approach works with other questions as well (but that may be my ignorance).

Intuitively I would say SBM is more the theoretical approach and EBM the practical one (although still pretty complicated). EBM doesn’t create theories; it tests them (if the a priori hypotheses look sound).
About your polygraph example, let’s make it a PSA test, because EBM is about medicine. You can ask: “do we find more cancers by doing a PSA test versus doing nothing?” and the answer is yes (just like the polygraph). But a more important question is whether you lower mortality at not too great a cost (biopsies, overtreatment, complications), and the answer seems to be no.


5 02 2010
kclauson

Disclaimer: I have a soft spot for the BMJ spoof editions. My first formal journal club was “Shaken, not stirred: bioanalytical study of the antioxidant activities of martinis” (http://bit.ly/cRh5Kg) and one of the first EBM lectures I gave featured “Seven alternatives to evidence based medicine” (http://bit.ly/dm1NBs).

Ok Jacqueline, with that out of the way I would like to pay you one of the highest compliments I can. After reading your take on EBM and RCTs, I went back and modified the lecture on study design that I prepared! I had even gotten the lecture to the point where the content and timing were just the way I wanted it…but your post made me rethink some aspects of the topic and prompted me to go back and integrate points you raised. Kudos.

Kevin

5 02 2010
laikaspoetnik

Hi Kevin,

Thanks for your comment. That is about the highest compliment I can think of *blush*.
I hope that the required alterations didn’t prevent you from finishing it in time.

Of course I am very curious about your presentation. Are you planning to share it (i.e. on Slideshare)?

5 02 2010
Science-Based Medicine » Yes, Jacqueline: EBM ought to be Synonymous with SBM

[...] Laika Spoetnik took issue both with Val’s short post and with the parachute article itself, in a post titled “NotSoFunny – Ridiculing RCTs [...]

7 02 2010
kclauson

No worries – I still had a week to spare. I was not planning on posting it as it is simply a straightforward course lecture (versus a podium presentation), but I may go ahead and do so next week.

Kevin

10 02 2010
Health 2.0 News: Cloud Computing in Cancer, Craig Venter and More « ScienceRoll

[...] #NotSoFunny – Ridiculing RCTs and EBM (Laikas MedliBlog): “Parachute use to prevent death and major trauma related to gravitational challenge: systematic review of randomised controlled trials” [...]

17 02 2010
Superiority, Equivalence, and Non-inferiority Trial Design Lecture – Web 2.0 Style « Unnatural Language Processing

[...] couple of weeks ago I read a very thoughtful post on one of my favorite blogs (authored by @laikas).  One reason I enjoy Jacqueline’s blog is that [...]


18 02 2010
kclauson

I decided to go ahead and post the deck on Slideshare from the lecture I modified after reading your post (http://bit.ly/d8BMlY). I also wrote a short accompanying post to give it some context (bit.ly/bQKcGz). Thanks again.

9 04 2010
Christian

I do find the parachute synthesis funny, but likewise, think it misses the mark in some pretty significant ways. I’ll just add to the excellent points made above that with traumatic falls and parachutes, we can directly observe the whole mechanism of cause and effect in real time.

13 08 2010
di

I understood it as critiquing the misconception that RCTs/systematic reviews are the be-all and end-all of EBM…?

6 10 2010
How will we ever keep up with 75 Trials and 11 Systematic Reviews a Day? « Laika's MedLibLog

[...] There is another point I do not agree with. I do not think that SR’s of interventions should only include RCT’s . We should include those study types that are relevant. If RCT’s furnish a clear proof, than RCT’s are all we need. But sometimes – or in some topics/specialties- RCT’s are not available. Inclusion of other study designs and rating them with GRADE (proposed by Guyatt) gives a better overall picture. (also see the post: #notsofunny: ridiculing RCT’s and EBM. [...]

12 11 2010
Science-Based Medicine » Of SBM and EBM Redux. Part I: Does EBM Undervalue Basic Science and Overvalue RCTs?

[...] a disclaimer: I don’t mean to gang up on Mr. Simon personally; others hold opinions similar to his, but his essay just happens to be a convenient starting point for this discussion. FWIW, prior to [...]

19 04 2011
#TEDxMaastricht « Laika's MedLibLog

[...] Notsofunny: ridiculing RCTs and EBM (http://laikaspoetnik.wordpress.com) [...]

15 08 2011
RIP Statistician Paul Meier. Foster-Parent not Father of the RCT « Laika's MedLibLog

[...] (for a more detailed explanation see my previous posts The best study designs…. for dummies and #NotSoFunny #16 – Ridiculing RCTs & EBM) [...]

2 05 2012
Can Guidelines Harm Patients? « Laika's MedLibLog

[...] meta-analyses. It involves tracking down the best external evidence there is. As I explained in #NotSoFunny #16 – Ridiculing RCTs & EBM, evidence is not an all-or-nothing thing: RCT’s (if well performed) are the most robust, but if [...]

2 06 2012
Friday Foolery #51 Statistically Funny « Laika's MedLibLog

[...] My post #NotSoFunny #16: ridiculing RCTs and EBM even led David Rind to sigh that “EBM folks are not necessarily known for their great senses [...]

