Experience versus Evidence [1]. Opioid Therapy for Rheumatoid Arthritis Pain.

5 12 2011

Rheumatoid arthritis (RA) is a chronic auto-immune disease that causes inflammation of the joints, eventually leading to progressive joint destruction and deformity. Patients have swollen, stiff and painful joints. The main aims of treatment are to reduce swelling and inflammation, to alleviate pain and stiffness and to maintain normal joint function. While there is no cure, it is important to manage pain properly.

The mainstays of therapy in RA are disease-modifying anti-rheumatic drugs (DMARDs) and non-steroidal anti-inflammatory drugs (NSAIDs). These drugs primarily target inflammation. However, since inflammation is not the only factor that causes pain in RA, patients may not be (fully) responsive to treatment with these medications.
Opioids are another class of pain-relieving substances (analgesics). They are frequently used in RA, but their role in chronic non-cancer pain, including RA, is not firmly established.

A recent Cochrane Systematic Review [1] assessed the beneficial and harmful effects of opioids in RA.

Eleven studies (672 participants) were included in the review.

Four studies assessed only the efficacy of single doses of different analgesics, often given on consecutive days. In each study opioids reduced pain (a bit) more than placebo. There were no differences in effectiveness between the opioids.

Seven studies of 1 to 6 weeks' duration assessed 6 different oral opioids, either alone or combined with non-opioid analgesics.
The only strong opioid investigated was controlled-release morphine sulphate, in a single study with 20 participants.
Six studies compared an opioid (often combined with a non-opioid analgesic) to placebo. Opioids were slightly better than placebo in improving patient-reported global impression of clinical change (PGIC) (3 studies, 324 participants: relative risk (RR) 1.44, 95% CI 1.03 to 2.03), but did not lower the number of withdrawals due to inadequate analgesia in 4 studies.
Notably, none of the 11 studies reported the primary and probably more clinically relevant outcome “proportion of participants reporting ≥ 30% pain relief”.

On the other hand, adverse events (most commonly nausea, vomiting, dizziness and constipation) were more frequent in patients receiving opioids than in those receiving placebo (4 studies, 371 participants: odds ratio (OR) 3.90, 95% CI 2.31 to 6.56). Withdrawal due to adverse events was non-significantly higher in the opioid-treated group.
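For readers less familiar with these effect measures, here is a quick reminder of how a relative risk and an odds ratio are defined (general formulas only; the underlying 2×2 counts are not repeated in this post):

$$\mathrm{RR} = \frac{a/(a+b)}{c/(c+d)}, \qquad \mathrm{OR} = \frac{a/b}{c/d} = \frac{ad}{bc}$$

where $a$ and $b$ are the numbers of opioid-treated patients with and without the outcome, and $c$ and $d$ the corresponding numbers in the placebo group. An RR of 1.44 for PGIC improvement thus means the outcome was roughly 1.4 times as likely with opioids as with placebo; since the 95% CI (1.03 to 2.03) only just excludes 1, the effect is statistically significant but imprecisely estimated.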

Comparing opioids to other analgesics, rather than to placebo, seems more relevant. Among the 11 studies, only 1 compared an opioid (codeine with paracetamol) to an NSAID (diclofenac). This study found no difference in efficacy or safety between the two treatments.

The 11 included studies were very heterogeneous (i.e. different opioids studied, with or without concurrent use of non-opioid analgesics, different outcomes measured) and the risk of bias was generally high. Furthermore, most studies were published before 2000, when the overall treatment of RA was less optimal.

The authors therefore conclude:

In light of this, the quantitative findings of this review must be interpreted with great caution. At best, there is weak evidence in favour of the efficacy of opioids for the treatment of pain in patients with RA but, as no study was longer than six weeks in duration, no reliable conclusions can be drawn regarding the efficacy or safety of opioids in the longer term.

This was the evidence, now the opinion.

I found this Cochrane Review via an EvidenceUpdates email alert from the BMJ Group and McMaster PLUS.

EvidenceUpdate alerts are meant to “provide you with access to current best evidence from research, tailored to your own health care interests, to support evidence-based clinical decisions. (…) All citations are pre-rated for quality by research staff, then rated for clinical relevance and interest by at least 3 members of a worldwide panel of practicing physicians”

I usually don’t care about the rating, because it is mostly 5-6 on a scale of 7. This was also true for the current systematic review (SR).

There is a more detailed rating available (when clicking the link; free registration required). Usually, the newsworthiness of SRs scores relatively low (because they summarize ‘old’ studies?). Personally, I would think that the relevance and newsworthiness would be higher for the special interest group, pain.

But the comment of the first of the 3 clinical raters was most revealing:

He/she comments:

As a Palliative care physician and general internist, I have had excellent results using low potency opiates for RA and OA pain. The palliative care literature is significantly more supportive of this approach vs. the Cochrane review.

Thus personal experience trumps evidence?* How did this palliative care physician assess effectiveness? By just giving a single dose of an opiate? How did he/she rate the effectiveness of the opioids? Did he/she compare them to placebo or to an NSAID (did he/she compare them at all?), and did he/she measure adverse effects?

And what is “the palliative care literature” the commenter is referring to? Apparently not this Cochrane Review. Apparently not the 11 controlled trials included in the Cochrane review. Apparently not the several other Cochrane reviews on the use of opioids for chronic non-cancer pain, nor the guidelines, syntheses and synopses I found via the TRIP database. All conclude that using opioids to treat chronic non-cancer pain is supported by very limited evidence, that adverse effects are common and that long-term use may lead to opioid addiction.

I’m sorry to note that although the alerting service is great as an alert, such personal ratings are not very helpful for interpreting and *truly* rating the evidence.

I would prefer a truly objective, structured critical appraisal, like this one on a similar topic by DARE (“Opioids for chronic noncancer pain: a meta-analysis of effectiveness and side effects”), and/or an objective piece that puts the new data into clinical perspective.

*Just to be clear, clinicians’ own expertise and the opinions of experts are also important in decision making. Rightly, Sackett [2] emphasized that good doctors use both individual clinical expertise and the best available external evidence. However, that doesn’t mean that one personal opinion and/or preference replaces all the existing evidence.

References 

  1. Whittle SL, Richards BL, Husni E, & Buchbinder R (2011). Opioid therapy for treating rheumatoid arthritis pain. Cochrane Database of Systematic Reviews (Online), 11. PMID: 22071805
  2. Sackett DL, Rosenberg WM, Gray JA, Haynes RB, & Richardson WS (1996). Evidence based medicine: what it is and what it isn’t. BMJ (Clinical research ed.), 312 (7023), 71-2. PMID: 8555924




Information is Beautiful. Visualizing the Evidence for Health Supplements.

21 03 2010

In a world driven by data, we need a simple means of digesting it all. Visualization of data may help us cope with the information overload. Good visualizations enable people to look at vast quantities of data quickly.

Bram Hengeveld at Geriatric Care (geriatricare.wordpress.com) told me about Snake Oil, a fantastic visualization of the scientific evidence for popular health supplements. A well-chosen name too, because snake oil is both a traditional Chinese medicine and a term for “medicines” that are fake, fraudulent, quackish, or ineffective. The expression is also applied metaphorically to any product with exaggerated marketing but questionable or unverifiable quality or benefit (Wikipedia).

Snake Oil is just one visualization at Information is Beautiful (link), the site created by David McCandless, a London-based author, writer and designer who has written for The Guardian, Wired and others, and who is nowadays an independent data journalist and information designer. His passion: visualizing information – facts, data, ideas, subjects, issues, statistics, questions – all with the minimum of words (see about).

When you see snake oil you intuitively understand it all.

The image is a “balloon race”. The larger the bubble, the higher its popularity in terms of number of Google hits. Orange bubbles look promising but have (as yet) little evidence behind them.

The higher a bubble, the greater the evidence for its effectiveness. But a supplement only counts as effective for the condition(s) listed inside its bubble. Evidence is shown only for supplements taken orally by an adult with a healthy diet.

Some supplements may be represented by multiple bubbles, one for each condition: after all, the evidence may vary across conditions. For example, there’s strong evidence that green tea is good for cholesterol levels, but the evidence for its anti-cancer effects is conflicting.

Another nice thing about Snake Oil is that it is interactive. You can show (filter) the results for specific conditions or supplement types. Below I selected cardio; most bubbles disappear. The evidence seems strong for green tea, fish oil and red yeast rice and low for vitamin E and omega-3. When you move your mouse over a bubble it pops up and you can read the supplement’s name and the condition to which the evidence applies.

Truly amazing.

One might ask: how GOOD are the data on which these bubbles are based?

Well, I haven’t checked, but the visualization is generated directly from this Google Doc. The Google spreadsheet shows all the data on which the visualization is based. These can be PubMed records, Cochrane systematic reviews, MedlinePlus entries or full-text papers. The image is automatically regenerated when the Google Doc is updated with new research that has come out.
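For those curious how such a self-updating chart can work in practice, here is a minimal sketch (my own illustration, not McCandless’s actual code) of regenerating a bubble chart from a published Google spreadsheet. The sheet URL and the column names (supplement, condition, evidence, popularity) are assumptions for the sake of the example:

```python
# Minimal sketch: rebuild a "balloon race"-style bubble chart from a Google spreadsheet.
# NOTE: the sheet id and the column names are hypothetical placeholders, not the real Snake Oil data.
import pandas as pd
import matplotlib.pyplot as plt

SHEET_CSV = "https://docs.google.com/spreadsheets/d/<sheet-id>/export?format=csv"

df = pd.read_csv(SHEET_CSV)  # expected columns: supplement, condition, evidence, popularity

fig, ax = plt.subplots(figsize=(8, 10))
ax.scatter(
    range(len(df)),            # spread the bubbles horizontally
    df["evidence"],            # vertical position = strength of evidence
    s=df["popularity"] * 10,   # bubble area = popularity (e.g. number of Google hits)
    alpha=0.5,
)
for i, row in df.iterrows():
    ax.annotate(f'{row["supplement"]}\n({row["condition"]})',
                (i, row["evidence"]), ha="center", fontsize=7)

ax.set_ylabel("Strength of evidence")
ax.set_xticks([])              # the horizontal axis carries no meaning
plt.tight_layout()
plt.show()
```

Because the chart is drawn straight from the spreadsheet, editing a row (say, after a new trial is published) is all that is needed to refresh the picture – which is exactly the appeal of the approach.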

The only thing that strikes me as an information specialist is that the way the evidence is retrieved is not stated. Probably this isn’t done in an evidence-based way, because each piece of evidence is based on ONE article only. The choice of the paper seems rather random. And some supplement categories are rather vague: what is meant by “anti-oxidants”? Many of the supplements have anti-oxidant activity, for instance.

But the idea in itself is great. Suppose we could gather the evidence in a more evidence based way, share it in Google Docs, appraise it and visualize it. Wouldn’t that be wonderful?





How Evidence Based is UpToDate really?

5 04 2009

KevinMD, or Kevin Pho, is one of the top physician bloggers. He writes many posts per day, often provocatively commenting on breaking medical news or other blog posts.

A few weeks ago Kevin wrote a post on comparative effectiveness research [5] (tweet below), which is “(funded) research to evaluate and compare clinical outcomes, effectiveness, risk, and benefits of two or more medical treatments or services that address a particular medical condition” (definition from DB Medical Rants).

[Screenshot: KevinMD's tweet]

Kevin stated that doctors certainly need an authoritative, unbiased, source to base their decisions on and that that kind of information is already there in the form of UpToDate®. According to Kevin:

For those who don’t know, UpToDate is a peer-reviewed, evidence-based, medical encyclopedia [1] available via DVD or online that’s revised every 3 months. It does not carry advertisements, and is funded entirely via paid subscriptions [3]. I am a big proponent, and like many other doctors, could not practice medicine effectively without it by my side.[2]

Kevin Pho also refers to a recent study showing that hospitals that used UpToDate scored better on patient safety and complication measures, as well as on length of stay, when compared to institutions that did not use the resource.[4]

Kevin’s post actually summarized a post by yet another well-known blogger, Val Jones MD (Dr Val) of Better Health. In her blog post Dr Val wonders whether we should incentivize hospitals and providers to use UpToDate more regularly. Incentives could range from pay-for-performance bonuses to malpractice immunity for physicians who adhere to UpToDate’s evidence-based, unbiased clinical recommendations. According to Dr Val, this might be an effective and easy way to target the problem of inconsistent practice styles on a national level, since many physicians know and respect UpToDate.[5]

KevinMD’s tweet elicited many responses on Twitter. To read most of the discussion, follow this link.

Below I shall discuss the points addressed in the blog posts of KevinMD and Dr Val. When relevant I will show/discuss the tweets as well.***

[1] Is UpToDate Evidence based?
The main discussion point on Twitter was to what extent UpToDate is evidence based. As you can see below (the oldest tweet is at the bottom, the newest at the top), opinions differ as to UpToDate’s “evidence-basedness”. They vary from the one extreme of UpToDate doing systematic reviews and being entirely evidence based (@drval) to ‘a slant of EBM*’ (@kevinmd) and UpToDate being an online book with narrative reviews.

[Screenshot of the Twitter discussion]

UpToDate used to be entirely an online book with (excellent) narrative reviews written by experts in the field. From 2006 onwards UpToDate began grading recommendations for treatment and screening using a modification of the GRADE system. Nowadays UpToDate calls its database an evidence-based, peer-reviewed information resource. According to UpToDate, the evidence is compiled from:

  • Hand-searching of over 400 peer-reviewed journals
  • Electronic searching of databases including MEDLINE, The Cochrane Database, Clinical Evidence, and ACP Journal Club
  • Guidelines that adhere to principles of evidence evaluation
  • Published information regarding clinical trials such as reports from the FDA and NIH
  • Proceedings of major national meetings
  • The clinical experience and observations of our authors, editors, and peer reviewers

Although this is an impressive list of EBM sources, it does not mean that UpToDate itself is evidence based. A selection of journals to be ‘hand-searched’ will undoubtedly lead to positive publication bias (most positive results will reach the major journals). The electronic searches – if done – are not displayed, and therefore the quality of any search performed cannot be checked. It is also unclear on what basis articles are included or excluded. And although UpToDate may summarize evidence from systematic reviews, including Cochrane systematic reviews, it does not perform systematic reviews itself. At most it gives a synthesis of the evidence, which is (still) gathered in a rather nontransparent way. Thus the definition of @kevinmd comes closest: “it gives an evidence based slant”. After all, “evidence-based medicine is a set of procedures, pre-appraised resources and information tools to assist practitioners to apply evidence from research in the care of individual patients” (McKibbon, K.A.; see the definitions at the ScHARR webpage). Merely summarizing and/or referring to evidence is not enough to be evidence based.
It is also not clear what peer reviewed implies, i.e. can articles (chapters) be rejected by peer reviewers?

As a consequence the chapters differ in quality. Regularly I don’t find the available evidence in UpToDate. That is also true for students and doctors preparing a Critically Appraised Topic (CAT). In my experience, UpToDate is hardly ever useful for finding recent evidence on a not-too-common question. @Allergynotes tweeted a specific example on chronic urticaria and H. pylori, where the available evidence could not be found in UpToDate.
In an older post (2007)*** @Allergynotes (Ves Dimov) commented on an interesting post by Dr. RW, “Are you UpToDate dependent?”, by citing an old proverb: “beware the man of a single book (homo unius libri), which describes people with limited knowledge. The current version of the Internet has billions of scientific journal pages and the answer to your questions must be somewhere out there.” Ves:

“I don’t think anybody should be dependent on a single source. If one cannot practice medicine without UpToDate, may be one should not practice at all.”

Likewise, an anonymous commenter on Kevin’s post stated:

“Don’t overlook the fact that there is a lot of good research outside of UpToDate. This is a great source, but if it’s your only source you’re closing off a tremendous amount of the literature. The articles are also written by people, and are subject to the biases of individuals.”

In another comment, Dr. Matthew Mintz of the excellent blog with the same name puts forward that many of the authors have substantial ties to the pharmaceutical industry, meaning that UpToDate (although not industry-funded) is not completely unbiased.

[Screenshot of tweets by @allergynotes and @laikas: evidence not always found in UpToDate]

[2] Usefulness
@Allergynotes rightly states that usability/perceived usefulness may be more important to physicians (than real usefulness) and that we should look at what makes UpToDate so useful rather than just say “it’s not EBM”. In one of his posts Ves Dimov (@allergynotes) refers to a (Dutch) paper showing that answers to questions posed during daily patient care are more likely to be answered by UpToDate than by PubMed.** At my hospital some doctors (especially internal medicine doctors) consider UpToDate their Bible. Without doubt UpToDate is a very useful source for clinicians, patients and even librarians. It is ideal for background questions (How can disease X be treated? What is the differential diagnosis? What might be the cause of this disease?), for looking things up, and as a starting point. And it has broad coverage. However, the point here was not whether UpToDate is a useful source for clinicians, but whether it is a sufficiently unbiased, evidence-based source to incentivize doctors to follow its recommendations, and its recommendations alone. Or as Shamsha puts it: “I don’t like putting all my eggs in UpToDate’s basket.”

[Screenshot of Shamsha's "eggs in one basket" tweet]

[3] Disadvantages/Alternatives
As highlighted by the Twitter discussions (read from the bottom up), the major disadvantages of UpToDate are its high pricing, its rigidity, its monopolistic tendencies and its strict denial of remote access. I don’t know if you have seen the recent post of David Rothman on a very impolite, aggressive vendor trying to push a trial. Most of David’s readers guessed the vendor was from UpToDate (second guess: MD Consult). Is it reasonable to positively discriminate in favor of UpToDate, while not everyone may be able to afford this costly database or may prefer another source? Incentives will only enhance UpToDate’s monopolistic position.
The ideal situation would be an open-source UpToDate, as suggested by @nursedan. Allergynotes thinks that this should be possible. A role for Web 2.0 in EBM?

It should be noted that (besides the databases mentioned in the tweets) there are also other freely available evidence based sources, like

[Screenshot of tweets on UpToDate's price and other disadvantages]

[4] Hospitals using UpToDate score better?
Kevin and Dr Val also refer to a study in the International Journal of Medical Informatics showing that hospitals that used UpToDate scored better than hospitals that didn’t (even in a dose-response way). This study is shown prominently on UpToDate’s site.

Now let’s just “score” the Evidence.

First, one can wonder how representative this article is. A quick-and-dirty Google search gives many hits on the very same subject that do not (directly) link to UpToDate. For instance, a paper published in the January issue of Ann Intern Med reports the results of a large-scale study of more than 40 hospitals and 160,000 patients, showing that when health information technologies replace paper forms and handwritten notes, both hospitals and patients benefit strongly (fewer complications, lower mortality rates, and lower costs). Etcetera. One would like to know how the evidence in the “UpToDate paper” relates to other studies, or, even better, to see a head-to-head comparison of UpToDate with another (specific) evidence-based source.

The impact factor of Int J Med Inform is 1.579. This says nothing about the paper’s value, but such a journal would not likely appear on UpToDate’s hand-search list.

More important, 2 of the 4 authors are from UpToDate. This is an important source of bias.

Furthermore, the study is a retrospective, observational study comparing hospitals with online access to UpToDate to other acute care hospitals. According to the GRADE system this would automatically yield a Grade C: low-quality evidence from observational studies. Most important, as admitted by the authors, the study could not fully account for additional features of the included hospitals that may also have been associated with better health outcomes. It is easy to imagine, for instance, that a hospital able to subscribe to UpToDate has a medical staff that was already predisposed to delivering higher-quality care, or has a greater budget ;). And although the average, severity-adjusted length of stay was significantly shorter in UpToDate® hospitals than in other hospitals, with a P value of less than 0.0001, the mean difference was only 0.167 days, with a not very impressive 95% confidence interval of 0.081–0.252 days.
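To put that length-of-stay difference in perspective, a quick back-of-the-envelope calculation (mine, not the paper’s):

$$0.167 \ \text{days} \times 24 \ \text{h/day} \approx 4 \ \text{hours}, \qquad \mathrm{SE} \approx \frac{0.252 - 0.081}{2 \times 1.96} \approx 0.044 \ \text{days}$$

In other words, the estimate is precise enough to produce a tiny P value, but the effect itself – roughly four hours of hospital stay – is modest at best.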

[5] Incentives?
Based on the above arguments I don’t think it would be reasonable, effective or fair to incentivize those hospitals or doctors that consult (and can afford) UpToDate, or to indirectly punish those that don’t (because they don’t have the money or because they have a good alternative).

Furthermore, such positive discrimination would not solve the problem of the lack of head-to-head comparisons, which is what it was all about. Dr Mintz explains this very clearly in his comment to Kevin.

“… the authors of UptoDate are providing their own summary of already published data, which most is funded by industry. This is similarly true of other so-called unbiased sources.(..)
The problem goes even deeper than the potential bias of industry funded research, which has been consistently shown to be favorable to the sponsor. The fact that most research, and virtually all therapeutic research is funded by the industry allows the industry to dictate what scientific knowledge is available, and by default clinical practice.(…)

There are hundreds of important studies that are never done because the industry only takes a “safe” bet.
We need comparative effectiveness not just to see whether the more expensive treatment is worth the cost, but we also need it to answer scientifically important questions that the industry will unlikely fund.”

*EBM = evidence based medicine
** It should be noted, though, that another interface to PubMed is used in this hospital, to allow recording of the queries. The study participants were doctors in internal medicine.
*** I’ve added this sentence because people thought I merely summarized the tweets. In addition I added some new references.
