Experience versus Evidence [1]. Opioid Therapy for Rheumatoid Arthritis Pain.

5 12 2011

Rheumatoid arthritis (RA) is a chronic autoimmune disease that causes inflammation of the joints, eventually leading to progressive joint destruction and deformity. Patients have swollen, stiff and painful joints. The main aims of treatment are to reduce swelling and inflammation, to alleviate pain and stiffness, and to maintain normal joint function. While there is no cure, it is important to properly manage pain.

The mainstays of therapy in RA are disease-modifying anti-rheumatic drugs (DMARDs) and non-steroidal anti-inflammatory drugs (NSAIDs). These drugs primarily target inflammation. However, since inflammation is not the only factor that causes pain in RA, patients may not be (fully) responsive to treatment with these medications.
Opioids are another class of pain-relieving substances (analgesics). They are frequently used in RA, but their role in chronic non-cancer pain, including RA, is not firmly established.

A recent Cochrane Systematic Review [1] assessed the beneficial and harmful effects of opioids in RA.

Eleven studies (672 participants) were included in the review.

Four studies assessed only the efficacy of single doses of different analgesics, often given on consecutive days. In each study opioids reduced pain (a bit) more than placebo. There were no differences in effectiveness between the opioids.

Seven studies, between 1 and 6 weeks in duration, assessed 6 different oral opioids, either alone or combined with non-opioid analgesics.
The only strong opioid investigated was controlled-release morphine sulphate, in a single study with 20 participants.
Six studies compared an opioid (often combined with a non-opioid analgesic) to placebo. Opioids were slightly better than placebo in improving the patient-reported global impression of change (PGIC) (3 studies, 324 participants: relative risk (RR) 1.44, 95% CI 1.03 to 2.03), but did not lower the number of withdrawals due to inadequate analgesia in 4 studies.
Notably, none of the 11 studies reported the primary and probably more clinically relevant outcome “proportion of participants reporting ≥ 30% pain relief”.

On the other hand, adverse events (most commonly nausea, vomiting, dizziness and constipation) were more frequent in patients receiving opioids than in those receiving placebo (4 studies, 371 participants: odds ratio 3.90, 95% CI 2.31 to 6.56). Withdrawal due to adverse events was non-significantly higher in the opioid-treated group.

Comparing opioids to other analgesics, rather than to placebo, seems more relevant. Among the 11 studies, only 1 compared an opioid (codeine with paracetamol) to an NSAID (diclofenac). This study found no difference in efficacy or safety between the two treatments.

The 11 included studies were very heterogeneous (i.e. different opioids studied, with or without concurrent use of non-opioid analgesics, different outcomes measured) and the risk of bias was generally high. Furthermore, most studies were published before 2000, when RA treatment was less optimal.

The authors therefore conclude:

In light of this, the quantitative findings of this review must be interpreted with great caution. At best, there is weak evidence in favour of the efficacy of opioids for the treatment of pain in patients with RA but, as no study was longer than six weeks in duration, no reliable conclusions can be drawn regarding the efficacy or safety of opioids in the longer term.

This was the evidence, now the opinion.

I found this Cochrane Review via an EvidenceUpdates email alert from the BMJ Group and McMaster PLUS.

EvidenceUpdate alerts are meant to “provide you with access to current best evidence from research, tailored to your own health care interests, to support evidence-based clinical decisions. (…) All citations are pre-rated for quality by research staff, then rated for clinical relevance and interest by at least 3 members of a worldwide panel of practicing physicians”

I usually don’t care about the rating, because it is mostly 5-6 on a scale of 7. This was also true for the current SR.

There is a more detailed rating available (when clicking the link; free registration required). Usually the newsworthiness of SRs scores relatively low (because they summarize ‘old’ studies?). Personally I would expect the relevance and newsworthiness to be higher for the special interest group, pain.

But the comment of the first of the 3 clinical raters was most revealing. He/she writes:

As a Palliative care physician and general internist, I have had excellent results using low potency opiates for RA and OA pain. The palliative care literature is significantly more supportive of this approach vs. the Cochrane review.

Thus personal experience wins out over evidence?* How did this palliative care physician assess effectiveness? Did he/she just give a single dose of an opiate? How did he/she rate the effectiveness of the opioids? Did he/she compare it to placebo or an NSAID (did he/she compare it at all?), and did he/she measure adverse effects?

And what is “the palliative care literature” the commenter is referring to? Apparently not this Cochrane review. Apparently not the 11 controlled trials included in the Cochrane review. Apparently not the several other Cochrane reviews on the use of opioids for chronic non-cancer pain, and not the guidelines, syntheses and synopses I found via the TRIP database. All conclude that using opioids to treat chronic non-cancer pain is supported by very limited evidence, that adverse effects are common, and that long-term use may lead to opioid addiction.

I’m sorry to note that although the alerting service is great as an alert, such personal ratings are not very helpful for interpreting and *truly* rating the evidence.

I would prefer a truly objective, structured critical appraisal, like this one on a similar topic by DARE (“Opioids for chronic noncancer pain: a meta-analysis of effectiveness and side effects”), and/or an objective piece that puts the new data into clinical perspective.

*Just to be clear, the expertise and opinions of experts are also important in decision making. Sackett [2] rightly emphasized that good doctors use both individual clinical expertise and the best available external evidence. However, that doesn’t mean that one personal opinion and/or preference replaces all the existing evidence.


  1. Whittle SL, Richards BL, Husni E, & Buchbinder R (2011). Opioid therapy for treating rheumatoid arthritis pain. Cochrane database of systematic reviews (Online), 11 PMID: 22071805
  2. Sackett DL, Rosenberg WM, Gray JA, Haynes RB, & Richardson WS (1996). Evidence based medicine: what it is and what it isn’t. BMJ (Clinical research ed.), 312 (7023), 71-2 PMID: 8555924

RIP Statistician Paul Meier. Proponent not Father of the RCT.

14 08 2011

This headline in Boing Boing caught my eye today:  RIP Paul Meier, father of the randomized trial

Not surprisingly, I knew that Paul Meier (with Kaplan) introduced the Kaplan-Meier estimator (1958), a very important tool for measuring how many patients survive a medical treatment. But I didn’t know he was “father of the randomized trial”….

But is he really the “father of the randomized trial” and “probably best known for the introduction of randomized trials into the evaluation of medical treatments”, as Boing Boing states?

Boing Boing’s very short article is based on the New York Times article: Paul Meier, Statistician Who Revolutionized Medical Trials, Dies at 87. According to the NY Times “Dr. Meier was one of the first and most vocal proponents of what is called “randomization.” 

Randomization, the NY-Times explains, is:

Under the protocol, researchers randomly assign one group of patients to receive an experimental treatment and another to receive the standard treatment. In that way, the researchers try to avoid unintentionally skewing the results by choosing, for example, the healthier or younger patients to receive the new treatment.

(for a more detailed explanation see my previous posts The best study designs…. for dummies and #NotSoFunny #16 – Ridiculing RCTs & EBM)

Meier was a very successful proponent, that is for sure. According to Sir Richard Peto, Dr. Meier “perhaps more than any other U.S. statistician, was the one who influenced U.S. drug regulatory agencies, and hence clinical researchers throughout the U.S. and other countries, to insist on the central importance of randomized evidence.”

But an advocate need not be a father, for advocates are seldom the inventors/creators. A proponent is more of a nurse, a mentor or a … foster-parent.

Is Meier the true father/inventor of the RCT? And if not, who is?

Googling “Father of the randomized trial” won’t help, because all 1,610 hits point to Dr. Meier…. thanks to Boing Boing’s careless copying.

What I have read so far doesn’t point to one single creator. And the RCT wasn’t just suddenly there. It started with the comparison of treatments under controlled conditions. Back in 1753, the British naval surgeon James Lind published his famous account of 12 scurvy patients, “their cases as similar as I could get them”, noting that “the most sudden and visible good effects were perceived from the uses of the oranges and lemons” and that citrus fruit cured scurvy [3]. The French physician Pierre Louis and the Harvard anatomist Oliver Wendell Holmes (19th century) were also fierce proponents of supporting conclusions about the effectiveness of treatments with statistics, not subjective impressions [4].

But what was the first real RCT?

Perhaps the first real RCT was the Nuremberg salt test (1835) [6]. This was possibly not only the first RCT, but also the first scientific demonstration of the lack of effect of a homeopathic dilution. More than 50 visitors to a local tavern participated in the experiment. Half of them received a vial filled with distilled snow water, the other half a vial with ordinary salt in a homeopathic C30 dilution of distilled snow water. None of the participants knew whether he had received the actual “medicine” or not (blinding). The numbered vials were coded, and the code was broken only after the experiment (allocation concealment).
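Purely as an illustration (the function and numbers below are hypothetical, not a reconstruction of the actual 1835 procedure), the three design elements just described (randomization, blinding via numbered vials, and allocation concealment) can be sketched in a few lines of Python:

```python
import random

def allocate_blinded(n_participants, treatments, seed=0):
    """Randomly assign coded vials to participants.

    Participants only ever see a vial code (blinding); the code book
    linking codes to treatments stays sealed until after the experiment
    (allocation concealment).
    """
    rng = random.Random(seed)
    # Balanced allocation: alternate the arms, then shuffle the order.
    arms = [treatments[i % len(treatments)] for i in range(n_participants)]
    rng.shuffle(arms)
    codes = rng.sample(range(100, 1000), n_participants)  # unique vial numbers
    code_book = dict(zip(codes, arms))  # kept sealed during the trial
    return codes, code_book

vials, code_book = allocate_blinded(54, ["distilled snow water", "C30 salt dilution"])
# After the trial, the code is broken and the arms are revealed:
n_placebo = sum(1 for arm in code_book.values() if arm == "distilled snow water")
print(n_placebo)  # 27: the arms are balanced by construction
```

With an even number of participants this scheme gives equal-sized arms; modern trials use block randomization for much the same reason.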

The first publications of RCTs were in the fields of psychology and agriculture. As a matter of fact, another famous statistician, Ronald A. Fisher (of Fisher’s exact test), seems to have played a more important role in the genesis and popularization of RCTs than Meier, albeit in agricultural research [5,7]. The book “The Lady Tasting Tea: How Statistics Revolutionized Science in the Twentieth Century” describes how Fisher devised a randomized trial on the spot to test the contention of a lady that she could taste the difference between tea into which milk had been poured and tea that had been poured into milk (almost according to homeopathic principles) [7].

According to Wikipedia [5], the first published (medical) RCT appeared in the 1948 paper entitled “Streptomycin treatment of pulmonary tuberculosis”. One of the authors, Austin Bradford Hill, is (also) credited with having conceived the modern RCT.

Thus the road to the modern RCT was a long one, starting with the notions that experiments should be done under controlled conditions and that it doesn’t make sense to base treatment on intuition. Later, experiments were designed in which treatments were compared to placebo (or other treatments) in a randomized and blinded fashion, with concealment of allocation.

Paul Meier was not the inventor of the RCT, but a successful vocal proponent of the RCT. That in itself is commendable enough.

And although the Boing Boing article was incorrect, and many people googling for “father of the RCT” will find the wrong answer from now on, it did raise my interest in the history of the RCT and the role of statisticians in the development of science and clinical trials.
I plan to read a few of the articles and books mentioned below. Like the relatively lighthearted “The Lady Tasting Tea” [7]. You can envision a book review once I have finished reading it.

Note added 15-05 13.45 pm:

Today a more accurate article appeared in the Boston Globe (“Paul Meier; revolutionized medical studies using math”), which does justice to Dr Meier’s important role in the espousal of randomization as an essential element of clinical trials. For that is what he did.


Dr. Meier published a scathing paper in the journal Science, “Safety Testing of Poliomyelitis Vaccine,” in which he described deficiencies in the production of vaccines by several companies. His paper was seen as a forthright indictment of federal authorities, pharmaceutical manufacturers, and the National Foundation for Infantile Paralysis, which funded the research for a polio vaccine.

  1. RIP Paul Meier, father of the randomized trial (boingboing.net)
  2. Paul Meier, Statistician Who Revolutionized Medical Trials, Dies at 87 (nytimes.com)
  3. M L Meldrum A brief history of the randomized controlled trial. From oranges and lemons to the gold standard. Hematology/ Oncology Clinics of North America (2000) Volume: 14, Issue: 4, Pages: 745-760, vii PubMed: 10949771  or see http://www.mendeley.com
  4. Fye WB. The power of clinical trials and guidelines,and the challenge of conflicts of interest. J Am Coll Cardiol. 2003 Apr 16;41(8):1237-42. PubMed PMID: 12706915. Full text
  5. http://en.wikipedia.org/wiki/Randomized_controlled_trial
  6. Stolberg M (2006). Inventing the randomized double-blind trial: The Nuremberg salt test of 1835. JLL Bulletin: Commentaries on the history of treatment evaluation (www.jameslindlibrary.org).
  7. The Lady Tasting Tea: How Statistics Revolutionized Science in the Twentieth Century Peter Cummings, MD, MPH, Jama 2001;286(10):1238-1239. doi:10.1001/jama.286.10.1238  Book Review.
    Book by David Salsburg, 340 pp, with illus, $23.95, ISBN 0-7167-41006-7, New York, NY, WH Freeman, 2001.
  8. Kaptchuk TJ. Intentional ignorance: a history of blind assessment and placebo controls in medicine. Bull Hist Med. 1998 Fall;72(3):389-433. PubMed PMID: 9780448. abstract
  9. The best study design for dummies/ (https://laikaspoetnik.wordpress.com: 2008/08/25/)
  10. #Notsofunny: Ridiculing RCT’s and EBM (https://laikaspoetnik.wordpress.com: 2010/02/01/)
  11. RIP Paul Meier : Research Randomization Advocate (mystrongmedicine.com)
  12. If randomized clinical trials don’t show that your woo works, try anthropology! (scienceblogs.com)
  13. The revenge of “microfascism”: PoMo strikes medicine again (scienceblogs.com)

HOT TOPIC: Does Soy Relieve Hot Flashes?

20 06 2011

The theme of the upcoming Grand Rounds, held on June 21st (the first day of summer) at Shrink Rap, is “hot”. A bit far-fetched, but aah, you know… shrinks. Of course they assume that we will express Weiner-like exhibitionism on our blogs, or go into spicy details of hot sexpectations or other Penis Friday NCBI-ROFL posts. But no, not me, scientist and librarian to my bone marrow. I will stick to boring, solid science and do a thorough search to find the evidence. Here I will discuss whether soy really helps to relieve hot flashes (also called hot flushes).

…..As illustrated by this HOT picture, I should post as well…..

(CC from Katy Tresedder, Flickr):

Yes, many menopausal women plagued by hot flashes seek relief in soy or other phytoestrogens (estrogen-like chemicals derived from plants). I know, because I happen to have many menopausal women in my circle of friends who prefer taking soy over estrogen. They would rather not take regular hormone replacement therapy, because it can have adverse effects when taken for a longer time. Soy, on the other hand, is considered a “natural” and harmless remedy. Physiological doses of soy (as food) probably are harmless, and therefore a better choice than the similarly “natural” black cohosh, which is suspected of causing liver injury and other adverse effects.

But is soy effective?

I did a quick search in PubMed and found a Cochrane Systematic Review from 2007 that was recently edited with no change to the conclusions.

This review looked at several phytoestrogens, offered in several forms: dietary soy (9 trials; powder, cereals, drinks, muffins), soy extracts (9), red clover extracts (7, including Promensil (5)), genistein extract, flaxseed, hop extract and a Chinese medicinal herb.

Thirty randomized controlled trials with a total of 2730 participants met the inclusion criteria: the participants were women in or just before their menopause complaining of vasomotor symptoms (thus having hot flashes) for at least 12 weeks. The intervention was a food or supplement with high levels of phytoestrogens (not any other herbal treatment) and this was compared with placebo, no treatment or hormone replacement therapy.

Only 5 trials, all using the red clover extract Promensil, were homogeneous enough to combine in a meta-analysis. Promensil had no significant effect on the incidence of hot flashes, whether given in a low (40 mg/day) or a higher (80 mg/day) dose. This was also true for the other outcomes.

The other phytoestrogen interventions were very heterogeneous with respect to dose, composition and type. This was especially true for the dietary soy treatments. Although some trials showed a positive effect of phytoestrogens on hot flashes and night sweats, overall phytoestrogens were no better than the comparators.

Most trials were small,  of short duration and/or of poor quality. Fewer than half of the studies (n=12) indicated that allocation had been concealed from the trial investigators.

One striking finding was the strong placebo effect in most trials, with reductions in the frequency of hot flashes ranging from 1% to 59%.

I also found another systematic review in PubMed, by Bolaños R et al., that limited itself to soy. Other differences from the Cochrane review (besides the much simpler search 😉) were: inclusion of more recently published clinical trials, no inclusion of unpublished studies, and less strict exclusion on the basis of low methodological quality. Furthermore, genistein was (rightly) considered a soy product.

The group of studies that used a soy dietary supplement showed the highest heterogeneity. Overall, the results “showed a significant tendency(?) in favor of soy”. Nevertheless, the authors conclude (like the Cochrane authors) that it is still difficult to establish conclusive results, given the high heterogeneity found in the studies (but apparently the data could still be pooled?).


  • Lethaby A, Marjoribanks J, Kronenberg F, Roberts H, Eden J, & Brown J. (2007). Phytoestrogens for vasomotor menopausal symptoms Cochrane Database of Systematic Reviews (4) : 10.1002/14651858.CD001395.pub3.
  • Bolaños R, Del Castillo A, & Francia J (2010). Soy isoflavones versus placebo in the treatment of climacteric vasomotor symptoms: systematic review and meta-analysis. Menopause (New York, N.Y.), 17 (3), 660-6 PMID: 20464785

PubMed versus Google Scholar for Retrieving Evidence

8 06 2010

A while ago a resident in dermatology told me she got many hits in PubMed, but zero results in TRIP. It appeared she had used the same search for both databases: alopecia areata and diphencyprone (a drug with a lot of synonyms). Searching TRIP for alopecia (in the title) only, we found a Cochrane review and a relevant NICE guideline.

Usually, each search engine has its own search and index features. When comparing databases one should compare “optimal” searches and keep in mind what purpose each search engine was designed for. TRIP is best suited to searching aggregate evidence, whereas PubMed is best suited to searching individual biomedical articles.

Michael Anders and Dennis Evans ignore this rule of thumb in their recent paper “Comparison of PubMed and Google Scholar Literature Searches”. And this is not the only shortcoming of the paper.

The authors performed searches on 3 different topics to compare PubMed and Google Scholar search results. Their main aim was to see which database was the most useful to find clinical evidence in respiratory care.

Well quick guess: PubMed wins…

The 3 respiratory care topics were selected from a list of systematic reviews on the Website of the Cochrane Collaboration and represented in-patient care, out-patient care, and pediatrics.

The references in the three chosen Cochrane systematic reviews served as a “reference” (or “gold”) standard. However, abstracts, conference proceedings, and responses to letters were excluded.

So far so good. But note that the outcome of the study only allows us to draw conclusions about interventional questions, which seek controlled clinical trials. Other principles may apply to other domains (diagnosis, etiology/harm, prognosis) or to other types of studies. And it certainly doesn’t apply to non-EBM topics.

The authors designed ONE search for each topic, taking 2 common clinical terms from the title of each Cochrane review, connected by the Boolean operator “AND” (see the table below; the quotation marks were not used). No synonyms were used, and the translation of the searches in PubMed wasn’t checked (luckily the mapping was rather good).



Search terms used for each Cochrane review topic:

  • Noninvasive positive-pressure ventilation for cardiogenic pulmonary edema: “noninvasive positive-pressure ventilation” AND “pulmonary edema”
  • Self-management education and regular practitioner review for adults with asthma: “asthma” AND “education”
  • Ribavirin for respiratory syncytial virus: “ribavirin” AND “respiratory syncytial virus”

In PubMed they applied the narrow methodological filter, or Clinical Query, for the therapy domain.
This prefab search strategy (randomized controlled trial[Publication Type] OR (randomized[Title/Abstract] AND controlled[Title/Abstract] AND trial[Title/Abstract])), developed by Haynes, is suitable for quickly detecting the available evidence, provided one is looking for RCTs and not doing an exhaustive search (see previous posts 2, 3, 4).
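To make this concrete, here is a minimal sketch of how a topic search is combined with the narrow therapy filter. Only the filter string itself comes from Clinical Queries; the helper function name is made up for illustration:

```python
# The Haynes narrow therapy filter, as applied by PubMed's Clinical Queries.
NARROW_THERAPY_FILTER = (
    "(randomized controlled trial[Publication Type] OR "
    "(randomized[Title/Abstract] AND controlled[Title/Abstract] AND "
    "trial[Title/Abstract]))"
)

def combine_with_filter(topic_query: str) -> str:
    """AND a topic search together with the methodological filter
    (hypothetical helper, shown for illustration only)."""
    return f"({topic_query}) AND {NARROW_THERAPY_FILTER}"

query = combine_with_filter('ribavirin AND "respiratory syncytial virus"')
print(query)
```

The resulting string can be pasted straight into the PubMed search box.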

Google Scholar, as we all probably know, does not have such methodological filters. The authors “limited” their search by using the Advanced option, entering the 2 search terms in the “Find articles… with all of the words” box (a Boolean “AND”), and limiting the search to the subject area “Medicine, Pharmacology, and Veterinary Science”.

They did a separate search for publications that were available at their library, which has limited value for others, subscriptions being different for each library.

Next they determined the sensitivity (the number of relevant records retrieved as a proportion of the total number of records in the gold standard) and the precision, or positive predictive value: the fraction of retrieved records that are relevant (explained in 3).
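In code, these two definitions (using the ribavirin trial counts reported later in this post purely as worked numbers) look like this:

```python
def sensitivity(relevant_retrieved: int, gold_standard_total: int) -> float:
    """Relevant records retrieved as a proportion of the gold standard."""
    return relevant_retrieved / gold_standard_total

def precision(relevant_retrieved: int, total_retrieved: int) -> float:
    """Relevant records retrieved as a proportion of all records retrieved."""
    return relevant_retrieved / total_retrieved

# Worked numbers: PubMed found all 12 gold-standard ribavirin trials,
# Google Scholar found 7 of them.
print(sensitivity(12, 12))           # 1.0
print(round(sensitivity(7, 12), 2))  # 0.58
```

Note the trade-off: searching more text (as Google Scholar does) can only raise sensitivity, but usually at a heavy cost in precision.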

Let me guess: sensitivity might be equal or somewhat higher, and precision undoubtedly much lower, in Google Scholar. This is because in Google Scholar:

  • you often search the full text instead of just the abstract, title and (added) keywords/MeSH
  • the results are inflated because one and the same reference is found cited in many different papers (which may not deal directly with the subject)
  • you can’t limit on methodology, study type or “evidence”
  • there is no automatic mapping and explosion (which in PubMed provides a way to find more synonyms and thus more relevant studies)
  • the coverage is broader (grey literature, books, more topics)
  • it lags behind PubMed in receiving updates from MEDLINE

Results: PubMed and Google Scholar had pretty much the same recall, except for ribavirin and RSV, where recall was higher in PubMed: PubMed found 100% (12/12) of the included trials, Google Scholar 58% (7/12).

There is no discussion as to why. Since Google Scholar should find the words in the titles and abstracts of PubMed records, I repeated the search in PubMed restricted to the title and abstract fields: I searched ribavirin[tiab] AND respiratory syncytial virus[tiab]* and limited it with the narrow therapy filter. I found 26 papers instead of 32. The following titles were missing when I searched title and abstract only (between brackets: the relevant MeSH term through which the paper was found in the full search, plus absence of an abstract or letter status where applicable):

  1. Evaluation by survival analysis on effect of traditional Chinese medicine in treating children with respiratory syncytial viral pneumonia of phlegm-heat blocking Fei syndrome.
    [MeSH: Respiratory Syncytial Virus Infections]
  2. Ribavarin in ventilated respiratory syncytial virus bronchiolitis: a randomized, placebo-controlled trial.
    [MeSH: Respiratory Syncytial Virus Infections; NO ABSTRACT, LETTER]
  3. Study of interobserver reliability in clinical assessment of RSV lower respiratory illness.
    [MeSH: Respiratory Syncytial Virus Infections*]
  4. Ribavirin for severe RSV infection. N Engl J Med.
    [MeSH: Respiratory Syncytial Viruses]
  5. Stutman HR, Rub B, Janaim HK. New data on clinical efficacy of ribavirin.
    [MeSH: Respiratory Syncytial Viruses]
  6. Clinical studies with ribavirin.
    [MeSH: Respiratory Syncytial Viruses]

Three of the papers had the additional MeSH term respiratory syncytial virus infections and the other three respiratory syncytial viruses. Although not all of these papers may be relevant (2 are comments/letters), this illustrates why PubMed may yield results that are not retrieved by Google Scholar (if one doesn’t use synonyms).

In contrast to Google Scholar, PubMed translates the search ribavirin AND respiratory syncytial virus so that the MeSH terms “ribavirin”, “respiratory syncytial viruses” and (indirectly) “respiratory syncytial virus infections” are also searched.

Thus, with the above-mentioned search, Google Scholar could have missed articles that use terms like RSV or respiratory syncytial viral pneumonia (or that lack specifics, like “clinical efficacy”).

The other result of the study (the results section comprises 3 sentences) is that “For each individual search, PubMed had better precision”.

The precision was 59/467 (13%) in PubMed and 57/80,730 (0.07%) in Google Scholar (p<0.001)!!
(Note: they had to add author names to the Google Scholar search to find the papers in the haystack 😉)

Héhéhé, how surprising. Well, why would no clinician or librarian ever think of using Google Scholar as the primary, let alone the only, source to search for medical evidence?
It should also ring a bell that [QUOTE**]:
“In the Cochrane reviews the researchers retrieved information from multiple databases, including MEDLINE, the Cochrane Airways Group trial register (derived from MEDLINE)***, CENTRAL, EMBASE, CINAHL, DARE, NHSEED, the Acute Respiratory Infections Group’s specialized register, and LILACS…”
Google Scholar isn’t mentioned as a source! Google Scholar is only recommended for finding work that cites (already found) relevant articles (this is called forward searching), and then only if one has no access to Web of Science or Scopus. Thus only to catch the last fish.

Perhaps the paper would have been more interesting if the authors had looked at any ADDED VALUE of Google Scholar when searching exhaustively for evidence. Then it would have been crucial to look for grey literature too (instead of excluding it), because this could be a strong point of Google Scholar. Furthermore, one could have examined whether forward searching yielded extra papers.

The high precision of PubMed is attributable to the narrow therapy filter used, but the vastly lower precision of Google Scholar is also due to its searching of the full text, including the reference lists.

For instance, searching for ribavirin AND respiratory syncytial virus in PubMed yields 523 hits. This is reduced to 32 hits when the narrow therapy filter is applied: a reduction by a factor of 16.
A similar search in Google Scholar yields 4,080 hits. Thus, even without the filter, Google Scholar has an almost 8 times higher yield than PubMed.
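The factors quoted here are simple ratios and easy to verify:

```python
# Hit counts quoted above.
pubmed_unfiltered = 523  # ribavirin AND respiratory syncytial virus in PubMed
pubmed_filtered = 32     # the same search plus the narrow therapy filter
scholar_hits = 4080      # the comparable Google Scholar search

reduction = pubmed_unfiltered / pubmed_filtered  # effect of the filter
yield_ratio = scholar_hits / pubmed_unfiltered   # Scholar vs unfiltered PubMed
print(round(reduction, 1), round(yield_ratio, 1))  # 16.3 7.8
```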

That evokes another research idea: what would have happened if randomized (OR randomised) had been added to the Google Scholar search? Would this have increased the precision? In the case of the above search it lowers the yield by a factor of 2, and the first hits look very relevant.

It is really funny, but the authors undermine their own conclusion that “These results are important because efficient retrieval of the best available scientific evidence can inform respiratory care protocols, recommendations for clinical decisions in individual patients, and education, while minimizing information overload” by saying elsewhere that “It is unlikely that users consider more than the first few hundred search results, so RTs who conduct literature searches with Google Scholar on these topics will be much less likely to find references cited in Cochrane reviews.”

Indeed, no one would dream of trying to find the relevant papers among those 4,080 hits. So what is this study worth from a practical point of view?

Well, anyway: just as you can ask for the sake of asking, you can research for the sake of researching. Despite being an EBM addict, I prefer a good subjective overview of this topic to a weak, quasi-evidence-based research paper.

Does this mean Google Scholar is useless? Does it mean that all those PhDs hooked on Google Scholar are wrong?

No, Google Scholar serves certain purposes.

Just like the example of PubMed and TRIP, you need to know what is in it for you and how to use it.

I used Google Scholar when I was a researcher:

  • to quickly find a known reference
  • to find citing papers
  • to get an idea of how often articles have been cited / to find the most relevant papers in a quick and dirty way (i.e. by browsing)
  • for quick and dirty searches, putting word strings between quotation marks
  • to search the full text. I used quite extensive searches to find out which methods were used (for instance methods AND (synonym1 OR syn2 OR syn3)). An interesting trick is to do a second search for only the last few words of a retrieved string: this will often reveal the next words in the sentence. Often you can repeat this trick, reading a piece of the paper without needing access.

If you want to know more about the pros and cons of Google Scholar, I recommend the recent overview by the expert librarian Dean Giustini: “Sure Google Scholar is ideal for some things” [7]. He also compiled a “Google Scholar bibliography” with ~115 articles as of May 2010.

Speaking of librarians, why was the study performed by PhD RRTs, and why wasn’t the university librarian involved?****

* this is a search string, and more strict than respiratory AND syncytial AND virus
** abbreviations are used instead of full (database) names
*** this is wrong: a register contains references to controlled clinical trials from EMBASE, CINAHL and all kinds of databases in addition to MEDLINE
**** other than to read the manuscript afterwards.


  1. Anders ME, & Evans DP (2010). Comparison of PubMed and Google Scholar Literature Searches. Respiratory care, 55 (5), 578-83 PMID: 20420728
  2. This Blog: https://laikaspoetnik.wordpress.com/2009/11/26/adding-methodological-filters-to-myncbi/
  3. This Blog: https://laikaspoetnik.wordpress.com/2009/01/22/search-filters-1-an-introduction/
  4. This Blog: https://laikaspoetnik.wordpress.com/2009/06/30/10-1-pubmed-tips-for-residents-and-their-instructors/
  5. NeuroDojo (2010/05) Pubmed vs Google Scholar? [also gives a nice overview of pros and cons]
  6. GenomeWeb (2010/05/10) Content versus interface at the heart of Pubmed versus Scholar?/ [response to 5]
  7. The Search principle Blog (2010/05) Sure Google Scholar is ideal for some things.

Hot News: Curry, Curcumin, Cancer & Cure

3 11 2009

[Image: curcuma, a curry spice]

*Hot* news via Twitter and various news media a few days ago. Big headlines told the following in, respectively, The Sun, the Herald (Ireland), BBC News / NHS Health and Reuters:

Curry is a ‘cure for cancer’

Spices in curry may help cure cancer

Curry spice kills cancer cells

Scientists say curry compound kills cancer cells

The messages of these headlines differ considerably, and so do the articles themselves (covered more in depth by @jdc325 at the Bad Science blog “Stuff and Nonsense” [4]). They vary from “curry being a cure for cancer” to “a possible effect of one of its compounds on cancer cells”.

So what was (not) done?

  1. Cancer was not cured.
  2. It was not a human trial.
  3. The study didn’t test effects on living laboratory animals, like mice, either.
  4. The study was done in the test tube, using individual cancer cell lines.
  5. The cells tested were (only) esophageal cancer cell lines.
  6. Testing the drug’s efficacy was not the main aim of the study.
  7. Curry (a complex spicy mixture) wasn’t used.
  8. Curcumin was tested, which makes up about 3% of turmeric, one of the spices in curry.
  9. That curcumin has some anti-carcinogenic effects is not new (see my tweet linking to 1120 hits in PubMed with a simple search for curcumin and cancer: http://bit.ly/3Qydc6).
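A hit count like the one in point 9 can also be retrieved programmatically. Here is a small sketch using NCBI’s public E-utilities esearch endpoint; the snippet only constructs the request URL, and the actual fetch is left commented out so the example stays offline.

```python
from urllib.parse import urlencode

# NCBI E-utilities endpoint for PubMed searches; its XML output
# contains the total number of matching records in a <Count> element.
EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_search_url(term: str) -> str:
    """Build an esearch request URL for a PubMed query string."""
    return EUTILS + "?" + urlencode({"db": "pubmed", "term": term})

url = pubmed_search_url("curcumin AND cancer")
print(url)
# import urllib.request
# xml = urllib.request.urlopen(url).read()  # parse <Count>...</Count> from this
```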

So why the fuss? This doesn’t seem to be a terribly shocking study, and why the media picked this one up is unclear. It must be because they were sleeping (and missed all the previous studies on curcumin) and/or because they are fond of this kind of study: experimental details aside, these studies translate so well to the general public: food – cure – cancer.

And the headlines do it much better than the actual title of the article:

Curcumin induces apoptosis-independent death in oesophageal cancer cells

I experienced the same when my own study was picked up at a cancer conference by BBC Health, whereas other, far more pioneering studies were not: those were harder to grasp and to explain ‘to the public’, and lacked any possible direct health benefit.

What was already known about curcumin and cancer? What was done in the present study? What is new? And is curcumin really a promising agent?

Already known.

Curcumin (diferuloylmethane) is a polyphenol derived from the plant Curcuma longa, commonly called turmeric. It gives curry its bright yellow color. Curcumin has a plethora of beneficial effects in vitro (in the test tube) and in animal studies, including anti-microbial, anti-arthritic and anti-inflammatory effects, but most interesting is its anti-carcinogenic effect. It has potential for both prevention and therapy of cancer, though the evidence for preventive effects is the most convincing. The mechanisms playing a role in the anti-carcinogenic effect are likewise manifold and complex. Possible mechanisms include: inhibition of/protection from DNA damage/alterations; inhibition of angiogenesis; inhibition of invasion/metastasis; induction of apoptosis; antioxidant activity; induction of GST; inhibition of cytochromes P450, NF-κB, AP-1, MMPs, COX-2, TNF-α, IL-6, iNOS, IL-1β, the oncogenes ras/fos/jun/myc, MAPK and ornithine decarboxylase; activation of Nrf2; induction of HO-1; activation of PPAR-γ; and an immunostimulant/immunorestorative effect… [2]

New Findings

This puts into perspective that the researchers found yet another possible mechanism (although others have found evidence before; see the introduction of [1]). Using a small panel of esophageal cancer cell lines, they first showed that the cells were selectively killed by curcumin. Next they showed that the major mechanism wasn’t apoptosis (cell death by ‘suicide’) but a mechanism called “mitotic catastrophe”, a type of cell death that occurs during mitosis (cell division) (see the free review in Oncogene [3]). As with apoptosis, many steps have to go wrong before the cell undergoes mitotic catastrophe. The researchers showed that curcumin-responsive cells accumulate poly-ubiquitinated proteins and cyclin B, consistent with a disturbance of the ubiquitin–proteasome system: ubiquitin labels proteins for degradation by proteasomes, thereby controlling the stability, function and intracellular localization of a wide variety of proteins.

In other words, this study is mainly about the mechanisms behind the anti-cancer effects of curcumin.


Of course this paper itself has no direct relevance to the management of human esophageal cancer. The sentence that may have triggered the media is:

“Curcumin can induce cell death by a mechanism that is not reliant on apoptosis induction, and thus represents a promising anticancer agent for prevention and treatment of esophageal cancer.”

This is, of course, too far-fetched. The authors refer to the fact that esophageal cancers are often resistant to cell-death induction by chemotherapeutic drugs, but this only indirectly points at a possible role for curcumin.

It has to be stressed that no human study has convincingly shown an anti-tumor effect of curcumin. The studies that have been done are observational, i.e. they show that people taking higher concentrations of curcumin in their diet have a lower incidence of several common cancer types. However, such studies are prone to bias: several other factors (alone or together) can be responsible for an anti-cancer effect (see a previous post [5] explaining this for other nutrients).

The current grade of evidence for a preventive or therapeutic effect is C, which means “unclear scientific evidence” (see MedlinePlus).

Although several trials are under way, there is reason to be skeptical about the potential of curcumin as a cancer-therapeutic agent.

  • The limited bioavailability and extensive metabolism of curcumin suggest that many of its anticancer effects observed in vitro may not be attainable in vivo. On the other hand, the gastro-intestinal tract is the most likely place for an effect of curcumin taken by the oral route. [2]
  • Although relatively high concentrations of curcumin have not shown significant toxicity in short-term studies, these concentrations may lead to toxic and carcinogenic effects in the long term.[2]
  • The therapeutic effects are dose-dependent. As often seen with bioactive compounds, toxic effects can occur at supra-optimal amounts. Indeed, curcumin has been shown to be toxic and carcinogenic under specific conditions: at low doses it behaves as an anti-oxidant, at high doses as a (toxic) pro-oxidant. [2, 6]
  • Often several ingredients, or even several foods/habits together, contribute to a therapeutic effect [5].
  • The FDA has a shortlist of “187 Fake Cancer ‘Cures’ Consumers Should Avoid”; compounds containing curcumin are on that list [7].


So, concluding: a study that unraveled one of the mechanisms by which curcumin can kill cancer cells led to exaggerated and sometimes completely wrong coverage in the media. Why is unclear, but such a misplaced drumroll will ultimately only lead to disbelief or carelessness.

Shame on you, media!!

  1. O’Sullivan-Coyne, G., O’Sullivan, G., O’Donovan, T., Piwocka, K., & McKenna, S. (2009). Curcumin induces apoptosis-independent death in oesophageal cancer cells. British Journal of Cancer, 101 (9), 1585-1595. DOI: 10.1038/sj.bjc.6605308
  2. López-Lázaro, M. (2008). Anticancer and carcinogenic properties of curcumin: considerations for its clinical development as a cancer chemopreventive and chemotherapeutic agent. Molecular Nutrition & Food Research. DOI: 10.1002/mnfr.200700238
  3. Castedo, M., Perfettini, J., Roumier, T., Andreau, K., Medema, R., & Kroemer, G. (2004). Cell death by mitotic catastrophe: a molecular definition. Oncogene, 23 (16), 2825-2837. DOI: 10.1038/sj.onc.1207528
  4. Stuff and Nonsense – Curry can cure cancer, say scientists (2009/10/28)
  5. The best study design for dummies (2008/08/25)
  6. Huge disappointment: Selenium and Vitamin E fail to Prevent Prostate Cancer.(post on this blog about the SELECT trial – 2008/11/16)
  7. http://www.fda.gov/Drugs/GuidanceComplianceRegulatoryInformation/EnforcementActivitiesbyFDA/ucm171057.htm


The Best Study Design… For Dummies.

25 08 2008

When I had those tired looks again, my mother-in-law recommended coenzyme Q, which research had supposedly proven to have wondrous effects on tiredness. Indeed, many sites and magazines advocate this natural, energy-producing nutrient that “mobilizes your mitochondria for cellular energy!” Another time she asked me whether I thought komkommerslank (cucumber pills for slimming) would work to lose some extra weight. She took my NO for granted.

It is often difficult to explain to people that not all research is equally good, and that outcomes are not always equally “significant” (both statistically and clinically). It is even more difficult to explain “levels of evidence” and why we should even care. Pharmaceutical companies (especially the supplement-selling ones) take advantage of this ignorance and are very successful in selling their stories and pills.

If properly conducted, the Randomized Controlled Trial (RCT) is the best study design to examine the clinical efficacy of health interventions. An RCT is an experimental study in which individuals who are similar at the beginning are randomly allocated to two or more treatment groups, and the outcomes of the groups are compared after sufficient follow-up time. However, an RCT may not always be feasible, because it may not be ethical or desirable to randomize people or to expose them to certain interventions.

Observational studies provide weaker empirical evidence, because the allocation of factors is not under the control of the investigator but “just happens” or is chosen (e.g. smoking). Of the observational studies, cohort studies provide stronger evidence than case-control studies, because in cohort studies the factors are measured before the outcome, whereas in case-control studies the factors are measured after the outcome.
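The difference between the two observational designs can be made concrete with a toy calculation. In a cohort study incidence is measured, so a risk ratio can be computed directly; in a case-control study incidence is unknown, so only an odds ratio can be estimated. A minimal Python sketch, with invented numbers purely for illustration:

```python
def risk_ratio(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Cohort study: incidence in each group is known, so the
    risk (incidence) ratio can be computed directly."""
    return (exposed_cases / exposed_total) / (unexposed_cases / unexposed_total)

def odds_ratio(a, b, c, d):
    """Case-control study: incidence is unknown; only the odds ratio
    can be estimated (a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls)."""
    return (a * d) / (b * c)

# Invented, illustrative numbers only:
print(round(risk_ratio(30, 1000, 15, 1000), 2))  # → 2.0
print(round(odds_ratio(40, 20, 60, 80), 2))      # → 2.67
```

For rare outcomes the odds ratio approximates the risk ratio, which is why case-control studies are still informative, just lower on the evidence ladder.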

Most people find such a description of study types and levels of evidence too theoretical and not appealing.

Last year I was challenged to talk about how doctors search for medical information (central theme: Google) for… here it comes… “the Society of History and ICT”.

To explain to the audience why it is important for clinicians to find ‘the best evidence’, and how methodological filters can be used to sift through the overwhelming amount of information in, for instance, PubMed, I had to introduce RCTs and the levels of evidence. I did so using an example that struck me when I first read about it.

I showed them the following slide:

And clarified: beta-carotene is a vitamin found in carrots and many other vegetables, but you can also buy it in pure form as pills. There is reason to believe that beta-carotene might help prevent lung cancer in cigarette smokers. How do you think you can find out whether beta-carotene has this effect?

  • Suppose you have two neighbors, both heavy smokers, of the same age, both male. The neighbor who doesn’t eat many vegetables gets lung cancer, but the neighbor who eats a lot of vegetables and is fond of carrots doesn’t. Do you think this provides good evidence that beta-carotene prevents lung cancer?
    There is laughter in the room, so they don’t believe in n=1 experiments/case reports. (Still, how many people think smoking does no harm because “their chain-smoking father reached his nineties in good health”?)
    I show them the following slide, with the lowest box only.
  • O.k. What about this study? I have a group of lung cancer patients who smoke(d) heavily. I ask them to fill in a questionnaire about their past eating habits and take a blood sample, and I do the same with a similar group of smokers without cancer (controls). Analysis shows that the smokers who developed lung cancer ate far fewer beta-carotene-containing vegetables and had lower blood levels of beta-carotene than the smokers who did not develop cancer. Does this mean that beta-carotene prevents lung cancer?
    Humming in the audience, till one man says: “perhaps some people don’t remember exactly what they eat”, and then several people object “that it is just an association” and “you do not yet know whether beta-carotene really causes this”. Right! I show the box case-control studies.
  • Then consider this study design. I follow a large cohort of ‘healthy’ heavy smokers, record their eating habits (including use of supplements) and take regular blood samples. After a long follow-up some heavy smokers develop lung cancer whereas others don’t. Now it turns out that the group that did not develop lung cancer had significantly more beta-carotene in their blood and ate larger amounts of beta-carotene-containing food. What do you think about that?
    Now the room is a bit quiet; there is some hesitation. Then someone says: “well, it is more convincing”, and finally the chair says: “but it may still not be the carrots; it could be something else in their food, or they may just have other healthy living habits (including eating carrots)”. Cohort study appears on the slide. (What a perfect audience!)
  • O.k., you’re not convinced that these study designs give conclusive evidence. How could we then establish that beta-carotene lowers the risk of lung cancer in heavy smokers? Suppose you really wanted to know: how would you set up such a study?
    Grinning. Someone says “by giving half of the smokers beta-carotene and the other half nothing”. “Or a placebo”, someone else says. Right! Randomized Controlled Trial goes on top of the slide, and there is not much room left for another box, so we are there. I only add that the best way to do it is double-blinded.

Then I reveal that all this research has really been done. There have been numerous observational studies (case-control as well as cohort studies) showing a consistent negative correlation between the intake of beta-carotene and the development of lung cancer in heavy smokers. The same has been shown for vitamin E.

“Knowing that”, I asked the public: “Would you as a heavy smoker participate in a trial where you are randomly assigned to one of the following groups: 1. beta-carotene, 2. vitamin E, 3. both or 4. neither vitamin (placebo)?”

Recruitment fails. Some people say they don’t believe in supplements; others say that it would be far more effective if smokers quit smoking (laughter). Just two individuals said they would at least consider it. But they suspected there was a snag in it, and they were right. Such studies have been done, and did not give the expected positive results.
In the first large RCT (approx. 30,000 male smokers!), the ATBC Cancer Prevention Study, beta-carotene actually increased the incidence of lung cancer by 18 percent and overall mortality by 8 percent (although the harmful effects faded after the men stopped taking the pills). Similar results were obtained in the CARET study, but not in a third RCT, the Physicians’ Health Study, the only difference being that the latter trial included both smokers and non-smokers.
It is now generally thought that cigarette smoke causes beta-carotene to break down into detrimental products, a process that can be halted by other anti-oxidants (normally present in food). Whether vitamins act positively (as anti-oxidants) or negatively (as pro-oxidants) depends very much on the dose and the situation, and on whether there is a shortage of such nutrients or not.
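To get a feel for what “an 18 percent increase” means in absolute terms, one can read it as a relative risk of 1.18 and, assuming a baseline risk (the figure below is invented, purely for illustration, not the actual ATBC placebo-arm risk), express the excess per 10,000 smokers:

```python
# "Increased incidence by 18%" read as a relative risk RR = 1.18.
baseline_risk = 0.02            # ASSUMED risk in the placebo arm (illustrative only)
rr = 1.18                       # relative risk in the beta-carotene arm

extra_per_10000 = (rr - 1) * baseline_risk * 10_000
print(round(extra_per_10000))   # → 36 extra cases per 10,000 smokers
```

The same arithmetic shows why relative risks in headlines can sound alarming while the absolute excess depends entirely on the baseline risk.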

I found this way of explaining study designs to well-educated laymen very effective, and fun!
The take-home message is that no matter how reproducibly observational studies seem to indicate a certain effect, better evidence is obtained from randomized controlled trials. It also shows that scientists should be very prudent about translating observational findings directly into lifestyle advice.

On the other hand, I wonder whether every hypothesis has to be tested in a costly RCT (the ATBC trial cost $46 million). Shouldn’t there be very solid grounds before starting a prevention study with dietary supplements in healthy individuals? Aren’t there any dangers? Personally, I think we should be very restrictive about these chemopreventive studies. So far, most chemopreventive studies have not met the high expectations anyway.
And what about coenzyme Q and komkommerslank? Apart from the fact that I do not expect the evidence to be convincing, tiredness is obviously best combated by rest, and I already eat enough cucumbers… 😉
To be continued…

Clinical studies and designs:
several paper books; online e.g. GlossClinStudy on a veterinary site
The ATBC study: The Alpha-Tocopherol, Beta Carotene Cancer Prevention Study Group. The effect of vitamin E and beta carotene on the incidence of lung cancer and other cancers in male smokers. N Engl J Med 1994;330:1029-35. See free full text here.
Overview of the ATBC and CARET studies: two overviews at www.cancer.gov, one about the ATBC follow-up and one about the CARET trial.
Overview of other RCTs with surprising outcomes: on Wikipedia.

