The Scatter of Medical Research and What to Do About It.

18 05 2012

Paul Glasziou, GP and professor in Evidence Based Medicine, co-authored a new article in the BMJ [1]. Like another paper [2] I discussed before [3], this paper deals with the difficulty clinicians have in staying up to date with the literature. But where the previous paper [2,3] highlighted the mere increase in the number of research articles over time, the current paper looks at the scatter of randomized controlled trials (RCTs) and systematic reviews (SRs) across the journals cited in one year (2009) in PubMed.

Hoffmann et al analyzed 7 specialties and 9 subspecialties that are considered the leading contributors to the burden of disease in high income countries.

They followed a relatively straightforward method for identifying the publications. Each search string consisted of a MeSH term (controlled vocabulary term) to identify the selected disease or disorders, a publication type [pt] to identify the type of study, and the year of publication. For example, the search strategy for randomized trials in cardiology was: “heart diseases”[MeSH] AND randomized controlled trial[pt] AND 2009[dp]. (When “heart diseases” is searched as a MeSH term, narrower terms are also searched.) Meta-analysis[pt] was used to identify systematic reviews.
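For readers who want to reproduce such counts, a strategy like this can also be sent to PubMed programmatically via the NCBI E-utilities. A minimal sketch: the query is the one from the paper, but the helper function name is mine.

```python
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_query(mesh_term, pub_type, year):
    """Compose a search string in the style used by Hoffmann et al:
    a MeSH term (narrower terms are included automatically),
    a publication type, and a publication year."""
    return f'"{mesh_term}"[MeSH] AND {pub_type}[pt] AND {year}[dp]'

# The cardiology example from the paper:
query = build_query("heart diseases", "randomized controlled trial", 2009)

# The same query as an E-utilities esearch URL (retmax=0 returns only the hit count):
url = EUTILS + "?" + urlencode({"db": "pubmed", "term": query, "retmax": 0})
```

Fetching the URL returns the number of records, which is all that is needed for a scatter analysis per journal and year.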

Using this approach Hoffmann et al found 14 343 RCTs and 3214 SRs published in 2009 in the selected (sub)specialties. There was a clear scatter across journals, but this scatter varied considerably among specialties:

“Otolaryngology had the least scatter (363 trials across 167 journals) and neurology the most (2770 trials across 896 journals). In only three subspecialties (lung cancer, chronic obstructive pulmonary disease, hearing loss) were 10 or fewer journals needed to locate 50% of trials. The scatter was less for systematic reviews: hearing loss had the least scatter (10 reviews across nine journals) and cancer the most (670 reviews across 279 journals). For some specialties and subspecialties the papers were concentrated in specialty journals; whereas for others, few of the top 10 journals were a specialty journal for that area.
Generally, little overlap occurred between the top 10 journals publishing trials and those publishing systematic reviews. The number of journals required to find all trials or reviews was highly correlated (r=0.97) with the number of papers for each specialty/ subspecialty.”

Previous work already suggested that this scatter of research has a long tail: half of the publications appear in a minority of journals, whereas the remaining articles are scattered across many journals (see figure below).
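The key scatter statistic, how many journals are needed to locate 50% of the trials, is easy to compute once you have per-journal trial counts. A sketch with invented counts that illustrate the long tail:

```python
def journals_for_coverage(counts, fraction=0.5):
    """Number of journals, taken from most to least prolific,
    needed to cover the given fraction of all papers."""
    total = sum(counts)
    covered = 0
    for n, c in enumerate(sorted(counts, reverse=True), start=1):
        covered += c
        if covered >= fraction * total:
            return n
    return len(counts)

# Invented long-tailed distribution: a few core journals plus
# 90 journals publishing a single trial each.
counts = [40, 25, 15, 10] + [1] * 90
print(journals_for_coverage(counts))       # 4 journals cover half the trials
print(journals_for_coverage(counts, 1.0))  # but all 94 are needed for 100%
```

This is exactly the asymmetry the paper describes: covering the first half of the trials is cheap, covering the tail is not.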

Click to enlarge and see the legend at BMJ 2012;344:e3223 [CC]

The good news is that SRs are less scattered and that general journals appear more often in the top 10 journals publishing SRs. Indeed for 6 of the 7 specialties and 4 of the 9 subspecialties, the Cochrane Database of Systematic Reviews had published the highest number of systematic reviews, publishing between 6% and 18% of all the systematic reviews published in each area in 2009. The bad news is that even keeping up to date with SRs seems a huge, if not impossible, challenge.

In other words, it is not sufficient for clinicians to rely on personal subscriptions to a few journals in their specialty (which is common practice). Hoffmann et al suggest several solutions to help clinicians cope with the increasing volume and scatter of research publications.

  • a central library of systematic reviews (but according to the authors the Cochrane Library fails to fulfill such a role, because many reviews are out of date and are perceived as less clinically relevant)
  • a registry of planned and completed systematic reviews, such as PROSPERO (this makes it easier to locate SRs and reduces bias)
  • syntheses of evidence and synopses, like ACP Journal Club, which summarizes the best evidence in internal medicine
  • specialised databases that collate and critically appraise randomized trials and systematic reviews, like the one for physical therapy. In my personal experience, however, this database is often out of date and not comprehensive
  • journal scanning services like EvidenceUpdates, which scans over 120 journals, filters articles on the basis of quality, has practising clinicians rate them for relevance and newsworthiness, and makes them available as email alerts and in a searchable database. I use this service too, but besides the fact that not all specialties are covered, the rating of evidence may not always be objective (see previous post [4])
  • the use of social media tools to alert clinicians to important new research.

Most of these are long-existing solutions that only partly help to solve the information overload, if at all.

I was surprised that the authors didn’t propose the use of personalized alerts. PubMed’s My NCBI feature allows you to create automatic email alerts on a topic and to subscribe to electronic tables of contents (which could include ACP Journal Club). Suppose a physician browses 10 journals that roughly cover 25% of the trials. He or she does not need to read all the other journals from cover to cover to avoid missing a potentially relevant trial. Instead it is far more efficient to perform a topic search to filter relevant studies from the journals that seldom publish trials on the topic of interest. One could even use the search strategy of Hoffmann et al to achieve this.* In reality, though, most clinicians will have narrower fields of interest than all studies about endocrinology or neurology.

At our library we are working on creating deduplicated, easy-to-read alerts that collate the tables of contents of certain journals with topic (and author) searches in PubMed, EMBASE and other databases. There are existing tools that do the same.
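The deduplication step is conceptually simple: merge the hits from the core-journal tables of contents with the hits from a topic search, keyed on PMID. A sketch with invented records:

```python
def merge_alerts(*result_sets):
    """Merge search results from several sources into one alert,
    deduplicating by PMID (each source maps PMID -> title)."""
    merged = {}
    for results in result_sets:
        for pmid, title in results.items():
            merged.setdefault(pmid, title)  # first source wins on duplicates
    return merged

toc_hits = {"10000001": "Trial in a core journal"}
topic_hits = {"10000001": "Trial in a core journal",
              "10000002": "Trial in a journal you never read"}

alert = merge_alerts(toc_hits, topic_hits)
print(sorted(alert))  # ['10000001', '10000002']
```

In practice cross-database deduplication (PubMed vs EMBASE) needs fuzzier matching than a shared identifier, but the principle is the same.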

Another way to reduce the individual reading load is to organize journal clubs or, even better, regular CATs (critically appraised topics). In the Netherlands, CATs are a compulsory item for residents. A few doctors do the work for many. Usually they choose topics that are clinically relevant (or for which the evidence is unclear).

The authors briefly mention that their search strategy might have missed some eligible papers and included some that are not truly RCTs or SRs, because they relied on PubMed’s publication type to retrieve RCTs and SRs. For systematic reviews this may be a greater problem than recognized, for the authors used meta-analysis[pt] to identify systematic reviews. Unfortunately PubMed has no publication type for systematic reviews, but it should be clear that there are many more systematic reviews than meta-analyses. Systematic reviews might even have a different scatter pattern than meta-analyses (i.e. the latter might be preferentially included in core journals).

Furthermore, not all meta-analyses and systematic reviews are reviews of RCTs (thus it is not completely fair to compare MAs with RCTs only). On the other hand, it is an omission of this study (not discussed) that only interventions are considered. Nowadays physicians have many questions other than those related to therapy, such as questions about prognosis, harm and diagnosis.

I did a little, admittedly imperfect, search just to see whether using other search terms than meta-analysis[pt] would have any influence on the outcome. I searched for (1) meta-analysis[pt] and (2) systematic review[tiab] (title and abstract) for papers about endocrine diseases. Then I subtracted (1) from (2), to analyse the systematic reviews not indexed as meta-analysis[pt].
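In PubMed this subtraction is a single Boolean NOT (search (2) NOT (1)); the same bookkeeping can be done locally with sets of PMIDs. All identifiers below are invented for illustration:

```python
# PMIDs returned by each search (invented)
meta_analysis_pt = {"21000001", "21000002", "21000003"}          # (1) meta-analysis[pt]
systematic_review_tiab = {"21000002", "21000003",
                          "21000004", "21000005"}                # (2) systematic review[tiab]

# (2) NOT (1): systematic reviews not indexed as meta-analysis[pt]
extra_reviews = systematic_review_tiab - meta_analysis_pt
print(sorted(extra_reviews))  # ['21000004', '21000005']
```

The size of `extra_reviews` relative to the meta-analysis[pt] set is exactly the “additional systematic reviews” figure discussed below.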



I analyzed the top 10/11 journals publishing these study types.

This little experiment suggests that:

  1. The precise scatter might differ per search: the systematic review[tiab] search yielded different top 10/11 journals (for this sample) than the meta-analysis[pt] search (partially because Cochrane systematic reviews apparently don’t mention “systematic review” in title and abstract?).
  2. The authors underestimate the number of systematic reviews: simply searching for systematic review[tiab] already found approx. 50% additional systematic reviews compared to meta-analysis[pt] alone.
  3. As expected (by me at least), many of the SRs and MAs did NOT deal with interventions; see for instance the first 5 hits (out of 108 and 236 respectively).
  4. Together these findings indicate that the true information overload is far greater than shown by Hoffmann et al (not all systematic reviews are found, and of all available study designs only RCTs are searched).
  5. On the other hand this indirectly shows that SRs are a better way to keep up to date than suggested: SRs also summarize non-interventional research (the ratio of SRs of RCTs to individual RCTs is much lower than suggested).
  6. It also means that the role of the Cochrane systematic reviews in aggregating RCTs is underestimated by the published graphs (the MA[pt] set is diluted with non-RCT systematic reviews, so the proportion of Cochrane SRs among the interventional MAs becomes larger).

Anyway, these imperfections do not contradict the main point of the paper: trials are scattered across hundreds of general and specialty journals, and “systematic reviews” (or really meta-analyses) do reduce the extent of scatter, but are still widely scattered, and mostly in different journals from those publishing the randomized trials.

Indeed, personal subscriptions to journals seem insufficient for keeping up to date.
Besides supplementing subscriptions with methods such as journal scanning services, I would recommend the use of personalized alerts from PubMed and several prefiltered sources, including an EBM search engine like TRIP.

*but I would broaden it to find all aggregate evidence, including ACP, Clinical Evidence, syntheses and synopses, not only meta-analyses.

**I do appreciate that one of the co-authors is a medical librarian: Sarah Thorning.


  1. Hoffmann, T., Erueti, C., Thorning, S., & Glasziou, P. (2012). The scatter of research: cross sectional comparison of randomised trials and systematic reviews across specialties. BMJ, 344. DOI: 10.1136/bmj.e3223
  2. Bastian, H., Glasziou, P., & Chalmers, I. (2010). Seventy-Five Trials and Eleven Systematic Reviews a Day: How Will We Ever Keep Up? PLoS Medicine, 7 (9). DOI: 10.1371/journal.pmed.1000326
  3. How will we ever keep up with 75 trials and 11 systematic reviews a day
  4. Experience versus Evidence [1]. Opioid Therapy for Rheumatoid Arthritis Pain.

Evidence Based Point of Care Summaries [2] More Uptodate with Dynamed.

18 10 2011

This post is part of a short series about Evidence Based Point of Care Summaries, or POCs. In this series I will review 3 recent papers that objectively compare a selection of POCs.

In the previous post I reviewed a paper from Rita Banzi and colleagues from the Italian Cochrane Centre [1]. They analyzed 18 POCs with respect to their “volume”, content development and editorial policy. There were large differences among POCs, especially with regard to evidence-based methodology scores, but no product appeared the best according to the criteria used.

In this post I will review another paper by Banzi et al, published in the BMJ a few weeks ago [2].

This article examined the speed with which EBP-point of care summaries were updated using a prospective cohort design.

First the authors selected all the systematic reviews signaled by the American College of Physicians (ACP) Journal Club and Evidence-Based Medicine Primary Care and Internal Medicine from April to December 2009. In the same period the authors selected all the Cochrane systematic reviews labelled as “conclusion changed” in the Cochrane Library. In total 128 systematic reviews were retrieved, 68 from the literature surveillance journals (53%) and 60 (47%) from the Cochrane Library. Two months after the collection started (June 2009) the authors did a monthly screen for a year to look for potential citation of the identified 128 systematic reviews in the POCs.

Only the 5 POCs that were ranked in the top quarter for at least 2 (out of 3) desirable dimensions were studied, namely Clinical Evidence, Dynamed, EBM Guidelines, UpToDate and eMedicine. Surprisingly, eMedicine was among the selected POCs, having a rating of “1” on a scale of 1 to 15 for EBM methodology. One would think that evidence-based-ness is a fundamental prerequisite for EBM POCs…?!

Results were represented as a (rather odd, but clear) “survival analysis” (“death” = a citation in a summary).

Fig.1 : Updating curves for relevant evidence by POCs (from [2])

I will be brief about the results.

Dynamed clearly beat all the other products in updating speed.

Expressed in figures, the updating speed of Dynamed was 78% and 97% greater than those of EBM Guidelines and Clinical Evidence, respectively. Dynamed had a median citation time of around two months and EBM Guidelines of around 10 months, quite close to the limit of the follow-up, while the citation rates of the other three point of care summaries (UpToDate, eMedicine, Clinical Evidence) were so slow that they exceeded the follow-up period and the authors could not compute a median.

Dynamed outperformed the other POCs in the updating of systematic reviews independent of the route. EBM Guidelines and UpToDate had similar overall updating rates, but Cochrane systematic reviews were more likely to be cited by EBM Guidelines than by UpToDate (odds ratio 0.02, P<0.001). Perhaps not surprisingly, as EBM Guidelines has a formal agreement with the Cochrane Collaboration to use Cochrane contents and label its summaries as “Cochrane inside.” On the other hand, UpToDate was faster than EBM Guidelines in updating systematic reviews signaled by the literature surveillance journals.
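For reference, an odds ratio like the one reported comes from a 2×2 table of cited/not-cited counts. The formula below is the standard one; the example counts are invented and are not the study’s data:

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table:
                 cited   not cited
    product A      a         b
    product B      c         d
    """
    return (a * d) / (b * c)

# Invented counts: product A cited 1 of 51 reviews, product B 20 of 60.
print(odds_ratio(1, 50, 20, 40))  # 0.04 -> product A far less likely to cite
```

An odds ratio far below 1, like the reported 0.02, simply means one product cited the reviews much less often than the other within the follow-up.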

Dynamed‘s higher updating ability was not due to a difference in identifying important new evidence, but to the speed with which this new information was incorporated in their summaries. Possibly the central updating of Dynamed by the editorial team might account for the more prompt inclusion of evidence.

As the authors rightly point out, “slowness in updating could mean that new relevant information is ignored and could thus affect the validity of point of care information services”.

A slow updating rate may be considered a more serious shortcoming for POCs that “promise” to “continuously update their evidence summaries” (EBM Guidelines) or to “perform a continuous comprehensive review and to revise chapters whenever important new information is published, not according to any specific time schedule” (UpToDate). (See the table with descriptions of updating mechanisms.)

In contrast, eMedicine doesn’t provide any detailed information on its updating policy, another reason why it doesn’t belong in this list of best POCs.
Clinical Evidence, however, clearly states: “We aim to update Clinical Evidence reviews annually. In addition to this cycle, details of clinically important studies are added to the relevant reviews throughout the year using the BMJ Updates service.” But BMJ Updates is not considered in the current analysis. Furthermore, patience is rewarded with excellent and complete summaries of evidence (in my opinion).

Indeed a major limitation of the current (and the previous) study by Banzi et al [1,2] is that they have looked at quantitative aspects and items that are relatively “easy to score”, like “volume” and “editorial quality”, not at the real quality of the evidence (previous post).

Although the findings were new to me, others have recently published similar results (studies were performed in the same time-span):

Shurtz and Foster [3] of the Texas A&M University Medical Sciences Library (MSL) also sought to establish a rubric for evaluating evidence-based medicine (EBM) point-of-care tools in a health sciences library.

They, too, looked at editorial quality and speed of updating plus reviewing content, search options, quality control, and grading.

Their main conclusion is that “differences between EBM tools’ options, content coverage, and usability were minimal, but that the products’ methods for locating and grading evidence varied widely in transparency and process”.

This is in line with what Banzi et al reported in their first paper. They also share Banzi’s conclusion about differences in speed of updating:

“DynaMed had the most up-to-date summaries (updated on average within 19 days), while First Consult had the least up to date (updated on average within 449 days). Six tools claimed to update summaries within 6 months or less. For the 10 topics searched, however, only DynaMed met this claim.”

Table 3 from Shurtz and Foster [3] 

Ketchum et al [4] also conclude that DynaMed had the largest proportion of current (2007-2009) references (170/1131, 15%). In addition they found that Dynamed had the largest total number of references (1131/2330, 48.5%).

Yes, you might have guessed it: the paper by Andrea Ketchum is the 3rd paper I’m going to review.

I also recommend reading the paper by the librarians Shurtz and Foster [3], which I found along the way. It has too much overlap with the Banzi papers to devote a separate post to it. Still, it provides better background information than the Banzi papers, focuses on POCs that claim to be EBM, and doesn’t try to weigh one element over another.


  1. Banzi, R., Liberati, A., Moschetti, I., Tagliabue, L., & Moja, L. (2010). A Review of Online Evidence-based Practice Point-of-Care Information Summary Providers Journal of Medical Internet Research, 12 (3) DOI: 10.2196/jmir.1288
  2. Banzi, R., Cinquini, M., Liberati, A., Moschetti, I., Pecoraro, V., Tagliabue, L., & Moja, L. (2011). Speed of updating online evidence based point of care summaries: prospective cohort analysis BMJ, 343 (sep22 2) DOI: 10.1136/bmj.d5856
  3. Shurtz, S., & Foster, M. (2011). Developing and using a rubric for evaluating evidence-based medicine point-of-care tools Journal of the Medical Library Association : JMLA, 99 (3), 247-254 DOI: 10.3163/1536-5050.99.3.012
  4. Ketchum, A., Saleh, A., & Jeong, K. (2011). Type of Evidence Behind Point-of-Care Clinical Information Products: A Bibliometric Analysis Journal of Medical Internet Research, 13 (1) DOI: 10.2196/jmir.1539
  5. Evidence Based Point of Care Summaries [1] No “Best” Among the Bests?
  6. How will we ever keep up with 75 Trials and 11 Systematic Reviews a Day?
  7. UpToDate or Dynamed? (Shamsha Damani)
  8. How Evidence Based is UpToDate really?


Evidence Based Point of Care Summaries [1] No “Best” Among the Bests?

13 10 2011

For many of today’s busy practicing clinicians, keeping up with the enormous and ever growing amount of medical information poses substantial challenges [6]. It is impractical to do a PubMed search to answer each clinical question and then synthesize and appraise the evidence, simply because busy health care providers have limited time and many questions per day.

As repeatedly mentioned on this blog ([6-7]), it is far more efficient to try to find aggregate (or pre-filtered or pre-appraised) evidence first.

Haynes ‘‘5S’’ levels of evidence (adapted by [1])

There are several forms of aggregate evidence, often represented as the higher layers of an evidence pyramid (because they aggregate individual studies, which form the lowest layer). There are confusingly many pyramids, however [8], with different kinds of hierarchies and based on different principles.

According to the “5S” paradigm [9] (now evolved into a 6S model [10]), the peak of the pyramid consists of the ideal, but not yet realized, computer decision support systems that link individual patient characteristics to the current best evidence. According to the 5S model the next best sources are evidence based textbooks.
(Note: EBM and textbooks almost seem a contradiction in terms to me, personally I would not put many of the POCs somewhere at the top. Also see my post: How Evidence Based is UpToDate really?)

Whatever their exact place in the EBM-pyramid, these POCs are helpful to many clinicians. There are many different POCs (see The HLWIKI Canada for a comprehensive overview [11]) with a wide range of costs, varying from free with ads (e-Medicine) to very expensive site licenses (UpToDate). Because of the costs, hospital libraries have to choose among them.

Choices are often based on user preferences and satisfaction, balanced against costs, scope of coverage, etc. Such choices are often subjective, and people tend to stick to the databases they know.

Initial literature about POCs concentrated on user preferences and satisfaction. A New Zealand study [3] among 84 GPs showed no significant difference in preference for, or usage levels of, DynaMed, MD Consult (including FirstConsult) and UpToDate. The proportion of questions adequately answered by POCs differed per study (see the introduction of [4] for an overview), varying from 20% to 70%.
McKibbon and Fridsma ([5], cited in [4]) found that the information resources chosen by primary care physicians were seldom helpful in providing the correct answers, leading them to conclude that:

“…the evidence base of the resources must be strong and current…We need to evaluate them well to determine how best to harness the resources to support good clinical decision making.”

Recent studies have tried to objectively compare online point-of-care summaries with respect to their breadth, content development, editorial policy, the speed of updating and the type of evidence cited. I will discuss 3 of these recent papers, but will review each paper separately. (My posts tend to be pretty long and in-depth. So in an effort to keep them readable I try to cut down where possible.)

Two of the three papers are published by Rita Banzi and colleagues from the Italian Cochrane Centre.

In the first paper, reviewed here, Banzi et al [1] first identified English Web-based POCs using Medline, Google, librarian association websites, and information conference proceedings from January to December 2008. In order to be eligible, a product had to be an online-delivered summary that is regularly updated, claims to provide evidence-based information and is to be used at the bedside.

They found 30 eligible POCs, of which the following 18 databases met the criteria: 5-Minute Clinical Consult, ACP-Pier, BestBETs, CKS (NHS), Clinical Evidence, DynaMed, eMedicine,  eTG complete, EBM Guidelines, First Consult, GP Notebook, Harrison’s Practice, Health Gate, Map Of Medicine, Micromedex, Pepid, UpToDate, ZynxEvidence.

They assessed and ranked these 18 point-of-care products according to: (1) coverage (volume) of medical conditions, (2) editorial quality, and (3) evidence-based methodology. (For operational definitions see appendix 1)

From a quantitative perspective DynaMed, eMedicine, and First Consult were the most comprehensive (88%) and eTG complete the least (45%).

The best editorial quality of EBP was delivered by Clinical Evidence (15), UpToDate (15), eMedicine (13), Dynamed (11) and eTG complete (10). (Scores are shown in brackets)

Finally, BestBETs, Clinical Evidence, EBM Guidelines and UpToDate obtained the maximal score (15 points each) for evidence-based methodology, followed by DynaMed and Map Of Medicine (12 points each).
As expected, eMedicine, eTG complete, First Consult, GP Notebook and Harrison’s Practice had a very low EBM score (1 point each). Personally I would not even have considered these online sources “evidence based”.

The calculations seem very “exact”, but the assumptions upon which these figures are based are open to question, in my view. Furthermore, all items have the same weight. Isn’t the evidence-based methodology far more important than “comprehensiveness” and editorial quality?

Especially because “volume” is “just” estimated by analyzing the extent to which 4 random chapters of the ICD-10 classification are covered by the POCs. Some sources, like Clinical Evidence and BestBETs (scoring low on this item), don’t aim to be comprehensive but only “answer” a limited number of questions: they are not textbooks.

Editorial quality is determined by scoring of the specific indicators of transparency: authorship, peer reviewing procedure, updating, disclosure of authors’ conflicts of interest, and commercial support of content development.

For the EB methodology, Banzi et al scored the following indicators:

  1. Is a systematic literature search or surveillance the basis of content development?
  2. Is the critical appraisal method fully described?
  3. Are systematic reviews preferred over other types of publication?
  4. Is there a system for grading the quality of evidence?
  5. When expert opinion is included, is it easily recognizable over studies’ data and results?

The score for each of these indicators is 3 for “yes”, 1 for “unclear”, and 0 for “no” (if judged “not adequate” or “not reported”).
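That scoring rule is simple enough to write down directly. A sketch: the 3/1/0 weights and the five indicators come from the paper, the function itself is mine.

```python
POINTS = {"yes": 3, "unclear": 1, "no": 0}

def ebm_methodology_score(answers):
    """Sum the 3/1/0 points over the five EBM-methodology indicators."""
    assert len(answers) == 5, "one answer per indicator"
    return sum(POINTS[a] for a in answers)

print(ebm_methodology_score(["yes"] * 5))                             # maximum: 15
print(ebm_methodology_score(["yes", "yes", "yes", "unclear", "no"]))  # 10
```

Writing it out makes the flatness of the scale obvious: a thorough tailored search and a cursory one both earn the same 3 points.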

This leaves little room for qualitative differences and mainly relies upon adequate reporting. As discussed earlier in a post where I questioned the evidence-based-ness of UpToDate, there is a difference between tailored searches and checking a limited list of sources (indicator 1). It also matters whether the search is mentioned or not (transparency), whether it is of good quality, and whether it is extensive or not. For lists, it matters how many sources are “surveyed”. It also matters whether one or both methods are used. These important differences are not reflected by the scores.

Furthermore, some points may be more important than others. Personally I find indicator 1 the most important: what good is appraising and grading if it isn’t applied to the most relevant evidence? It is “easy” to do a grading, or to copy it from other sources (yes, I wouldn’t be surprised if some POCs do this).

On the other hand, a zero for a single indicator can weigh too heavily on the score.

Dynamed got 12 instead of the maximum of 15 points because its editorial policy page didn’t explicitly describe its absolute prioritization of systematic reviews, although it really adheres to that in practice (see the comment by editor-in-chief Brian Alper [2]). Had Dynamed received the deserved 15 points for this indicator, it would have had the highest score overall.

The authors further conclude that none of the dimensions turned out to be significantly associated with the others. For example, BestBETs scored among the worst on volume (comprehensiveness), with an intermediate score for editorial quality and the highest score for evidence-based methodology. Overall, DynaMed, EBM Guidelines, and UpToDate scored in the top quartile for 2 out of 3 variables and in the 2nd quartile for the 3rd variable (but, as explained above, Dynamed really scored in the top quartile for all 3 variables).

On the basis of their findings Banzi et al conclude that only a few POCs satisfied the criteria, with none excelling in all.

The finding that Pepid, eMedicine, eTG complete, First Consult, GP Notebook, Harrison’s Practice and 5-Minute Clinical Consult obtained only 1 or 2 of the maximum 15 points for EBM methodology confirms my “intuitive grasp” that these sources really don’t deserve the label “evidence based”. Perhaps we should make a stricter distinction between “point of care” databases as a point where patients and practitioners interact, “particularly referring to the context of the provider-patient dyad” (definition by Banzi et al), and truly evidence based summaries. Only a few of the tested databases would fit the latter definition.

In summary, Banzi et al reviewed 18 online evidence-based practice point-of-care information summary providers. They comprehensively evaluated and summarized these resources with respect to coverage (volume) of medical conditions, editorial quality, and evidence-based methodology.

Limitations of the study, also according to the authors, were the lack of a clear definition of these products, the arbitrariness of the scoring system, and the emphasis on the quality of reporting. Furthermore, the study didn’t really assess the products qualitatively (i.e. with respect to performance), nor did it take into account that products may have different aims. Clinical Evidence, for instance, only summarizes evidence on the effectiveness of treatments for a limited number of diseases. Therefore it scores badly on volume while excelling on the other items.

Nevertheless it is helpful that POCs are objectively compared, and it may serve as a starting point for decisions about acquisition.

References (not in chronological order)

  1. Banzi, R., Liberati, A., Moschetti, I., Tagliabue, L., & Moja, L. (2010). A Review of Online Evidence-based Practice Point-of-Care Information Summary Providers Journal of Medical Internet Research, 12 (3) DOI: 10.2196/jmir.1288
  2. Alper, B. (2010). Review of Online Evidence-based Practice Point-of-Care Information Summary Providers: Response by the Publisher of DynaMed Journal of Medical Internet Research, 12 (3) DOI: 10.2196/jmir.1622
  3. Goodyear-Smith F, Kerse N, Warren J, & Arroll B (2008). Evaluation of e-textbooks. DynaMed, MD Consult and UpToDate. Australian family physician, 37 (10), 878-82 PMID: 19002313
  4. Ketchum, A., Saleh, A., & Jeong, K. (2011). Type of Evidence Behind Point-of-Care Clinical Information Products: A Bibliometric Analysis Journal of Medical Internet Research, 13 (1) DOI: 10.2196/jmir.1539
  5. McKibbon, K., & Fridsma, D. (2006). Effectiveness of Clinician-selected Electronic Information Resources for Answering Primary Care Physicians’ Information Needs Journal of the American Medical Informatics Association, 13 (6), 653-659 DOI: 10.1197/jamia.M2087
  6. How will we ever keep up with 75 Trials and 11 Systematic Reviews a Day?
  7. 10 + 1 PubMed Tips for Residents (and their Instructors)
  8. Time to weed the (EBM-)pyramids?!
  9. Haynes RB. Of studies, syntheses, synopses, summaries, and systems: the “5S” evolution of information services for evidence-based healthcare decisions. Evid Based Med 2006 Dec;11(6):162-164. [PubMed]
  10. DiCenso A, Bayley L, Haynes RB. ACP Journal Club. Editorial: Accessing preappraised evidence: fine-tuning the 5S model into a 6S model. Ann Intern Med. 2009 Sep 15;151(6):JC3-2, JC3-3. PubMed PMID: 19755349 [free full text].
  11. How Evidence Based is UpToDate really?
  12. Point of care decision-making tools - Overview
  13. UpToDate or Dynamed? (Shamsha Damani)


The #TwitJC Twitter Journal Club, a New Initiative on Twitter. Some Initial Thoughts.

10 06 2011

There is a new initiative on Twitter: the Twitter Journal Club. It was initiated by Fi Douglas (@fidouglas), a medical student at Cambridge, and Natalie Silvey (@silv24), a junior doctor in the West Midlands.

Fi and Natalie have set up a blog for this event.

A Twitter Journal Club operates in the same way as any other journal club, except that the forum is Twitter.

The organizers choose a paper, which they announce at their website (you can make suggestions here or via a tweet). Ideally, people should read the entire paper before the Twitter session. A short summary with key points (i.e. see here) is posted on the website.

The first topic was:  Early Goal-Directed Therapy in the Treatment of Severe Sepsis and Septic Shock [PDF]

It started last Sunday at 8 pm (Dutch time) and took almost 2 hours to complete.

@twitjournalclub (the organizers’ Twitter account) started with a short introduction. People introduced themselves as they entered the discussion. Each tweet in the discussion was tagged with #TwitJC (a so-called hashtag); otherwise it wouldn’t get picked up by people following the hashtag. (Tweetchat automatically adds the hashtag as you type.)

Although it was the first session, many people (perhaps almost 100?!) joined the journal club, both actively and more passively. That is a terrific achievement, and afterwards it got a very positive Twitter “press”. If you manage to engage people like @nothern_doctor, @doctorblogs, @amcunningham and @drgrumble, and people like @bengoldacre, @cebmblog and @david_colquhoun find it a terrific concept, then you know it is a great idea that meets a need. As such, enough reason to continue.

There were also voices that were not purely positive. @DrVes sees it as a great effort, but added that “we need to go beyond this 1950s model rather than adapt it to social media.” Apparently this tweet was not well received, but I think he made a very sensible statement.

We can (and should) ask ourselves whether Twitter is the right medium for such an event.

@DrVes has experience with Twitter Journal Clubs. He participated in the first medical journal club on Twitter at the Allergy and Immunology program of Creighton University back in 2008 and presented a poster at an allergy meeting in 2009.

BUT, as far as I can tell, that Twitter journal club was both much smaller in scale (7 fellows?) and different in design. It seems that the tweets summarized what was being said at a real journal club teaching session. Ves Dimov:

“The updates were followed in real time by the Allergy and Immunology fellows at the Louisiana State University (Shreveport) and some interested residents at Cleveland Clinic, along with the 309 subscribers of my Twitter account named AllergyNotes“.

So that is the same as tweeting during a conference or a lecture to inform others about the most interesting facts and statements. It is one-way tweeting (overall there were just 24 updates with links).

I think the present Twitter Journal Club was more like a medical Twitter chat (also Ves’s words).

Is chatting on Twitter effective?

Well that depends on what one wants to achieve.

Apparently, for all people participating it was fun to do and educational.

I joined too late to tell, so I awaited the transcript. But boy, who wants to read 31 pages of “chaotic tweets”? Because that is what a Twitter chat becomes when many people join. All tweets are ordered chronologically. Good for the archive, but if the intention is to make the transcribed chat available to people who couldn’t attend, it needs deleting, cutting, pasting and sorting. That is a lot of work if done manually.

I tried it for part of the transcript. Compare the original transcript here with this Google Doc.
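Part of that mechanical cleanup could be scripted. Below is a minimal sketch in Python; the mini-transcript and the `clean_transcript` helper are my own invention for illustration, not part of any actual TwitJC tooling. It keeps only the tweets carrying the session hashtag and puts them in chronological order:

```python
from datetime import datetime

# Hypothetical mini-transcript: (timestamp, author, text) tuples.
tweets = [
    ("2011-06-05 20:04", "@doctorblogs", "Clearly focused question #TwitJC"),
    ("2011-06-05 20:01", "@twitjournalclub", "Welcome to the first session! #TwitJC"),
    ("2011-06-05 20:07", "@amcunningham", "off-topic: lunch anyone?"),
    ("2011-06-05 20:05", "@drgrumble", "Agree, focused question #TwitJC"),
]

def clean_transcript(tweets, hashtag="#TwitJC"):
    """Keep only tweets carrying the hashtag, in chronological order."""
    tagged = [t for t in tweets if hashtag.lower() in t[2].lower()]
    return sorted(tagged, key=lambda t: datetime.strptime(t[0], "%Y-%m-%d %H:%M"))

for ts, author, text in clean_transcript(tweets):
    print(ts, author, text)
```

A script can of course only do the mechanical part; separating the “mini-chats” and off-topic questions from the main thread still takes human judgment.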

The “remix of tweets” also illustrates that people have their own “mini-chats” and “off-topic” (but often very relevant) questions.

In addition, the audience is very mixed. Some people seem to have little experience with critical appraisal or concepts like “intention to treat” (ITT) and would perhaps benefit from supplementary information beforehand (e.g. documents on the TwitJC website). Others are experienced doctors with a lot of clinical expertise, who always put theoretical things in perspective. Very valuable, but they are often far ahead of the discussion.

The name of the event is Twitter Journal Club. “Journal club” is a somewhat ambiguous term. According to Wikipedia, “a journal club is a group of individuals who meet regularly to critically evaluate recent articles in scientific literature”. It can deal with any piece that looks interesting to share, including hypotheses and preclinical papers about mechanisms of action.

Thus, to me, a journal club is not by definition EBM (Evidence Based Medicine).

Other formats are the critical appraisal of a single study and the CAT, a critically appraised topic (sometimes wrongly called a PICO; the PICO is only part of it).

The structure of the present journal club was more that of a critical appraisal. It followed the normal checklist for an RCT: What is being studied? Is the paper valid (appropriately allocated, blinded, etc.)? What are the results (NNT, etc.)? And are the results valid outside the context of the paper?

In my opinion, formal critical appraisal of the paper costs a lot of time and is not the most interesting part. Looking at my edited transcript, you see that half of the people are answering the question posed and they all say the same: “clearly focused question” is the answer to the first question (even in the edited transcript this takes 3 pages), and “clear interventions (helpful flowcharts)” is the answer to the second.

The other half of the people have their own questions. Very legitimate and good questions, but not in line with the questions of @twitjournalclub. Talking about the NNT and about whether the results are really revolutionary is VERY relevant, but should be left till the end.
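For readers less familiar with the NNT mentioned above: the number needed to treat is simply the reciprocal of the absolute risk reduction. A back-of-the-envelope sketch in Python, using approximate mortality figures often quoted for the Rivers trial (treat them as illustrative round numbers, not as the paper’s exact data):

```python
# NNT = 1 / ARR, where ARR is the absolute risk reduction.
control_risk = 0.465    # approximate mortality in the standard-therapy arm
treatment_risk = 0.305  # approximate mortality in the EGDT arm

arr = control_risk - treatment_risk  # absolute risk reduction
nnt = 1 / arr
# Roughly 6: treat about six patients with EGDT to prevent one death.
print(f"ARR = {arr:.2f}, NNT = {nnt:.1f}")
```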

A Twitter chat with approximately 100 people needs a tight structure.

However, I wonder whether this approach of critical appraisal is the most interesting, all the more so because this part didn’t evoke much discussion.

Plus it has already been done!!

I searched the TRIP database with the title of the paper to find critical appraisals or synopses of it. I found 3 synopses, 2 of which more or less follow the structure of this journal club (here and here; there is also this older one). They answer all the questions about validity.

Wouldn’t it have been better, with this older key paper (2001), to just use the existing critical appraisals as background information and discuss the implications? Or to discuss new supporting or contradictory findings?

The very limited search in TRIP (title of the paper only) already showed some interesting new papers on the topic (external validation, cost effectiveness, implementation, antibiotics), and I am sure there are many more.

A CAT may also be more interesting than a synopsis, because “other pieces of evidence” are also taken into consideration and one discusses a topic, not one single paper. But perhaps this is too difficult to do, because one also has to do a thorough search and there is too much to discuss. Alternatively, one could choose a recent systematic review, which summarizes the existing RCTs.

Anyway, I think the journal club could improve by not following the entire checklist (boring! done!), but using it as background instead. Furthermore, I think there should be 3-5 questions that are very relevant to discuss. As in the #HSCMEU discussions, people could pose those questions beforehand. In this way it is easier to adhere to the structure.

As to Twitter as the medium for this journal club: I am not fond of long Twitter chats, because they tend to be chaotic, there is a lot of reiteration, people tend to tweet rather than “listen”, and there is the constraint of 140 characters. Personally I would prefer a webinar, where people discuss the topic and you can pose questions via Twitter or otherwise.
Other alternatives wouldn’t work for me either. A Facebook journal club (described by Neil Mehta) looks more static (commenting on a short summary of a paper), and Skyping is difficult with more than 10 people and not easy to transcribe.

But as said, there is a lot of enthusiasm for this Twitter Journal Club, even outside the medical world. This “convincing effort” inspired others to start an Astronomy Twitter Journal Club.

Perhaps a little modification of goals and structure could make it even more interesting. I will try to attend the next event, which is about Geoffrey Rose’s ‘Prevention Paradox’ paper, officially titled “Strategy of prevention: lessons from cardiovascular disease”, available here.

Notes added:

[1] A summary of the first Twitter journal club has just been posted. This is really valuable and takes away the disadvantages of reading an entire transcript (though one misses a lot of interesting aspects too)!

[2] This is the immediate response of one of the organizers on Twitter. I’m very pleased to notice that they will put more emphasis on the implications of the paper. That would take away much of my criticism.

(Read tweets from bottom to top).


  1. Welcome
  2. An important topic for the first Twitter Journal Club
  3. Rivers E, Nguyen B, Havstad S, Ressler J, Muzzin A, Knoblich B, Peterson E, Tomlanovich M; Early Goal-Directed Therapy Collaborative Group. Early goal-directed therapy in the treatment of severe sepsis and septic shock. N Engl J Med. 2001 Nov 8;345(19):1368-77. PubMed PMID: 11794169. (PDF)
  4. The First Journal Club on Twitter – Then and Now
  5. Allergy and Immunology club on Twitter
  6. The Utility of a Real-time Microblogging Service for Journal Club in Allergy and Immunology. Dimov V, Randhawa S, Auron M, Casale T. American College of Allergy, Asthma & Immunology (ACAAI) 2009 Annual Meeting. Ann Allergy Asthma Immunol. 2009 Nov;103(5) Suppl 3:A126.
  7. Short remix of the transcript
  8. Model for a Journal Club using Google Reader and Facebook OR if the prophet does not go to the Mountain… bring the journal club to FB!
  9. Astronomy Twitter Journal Club
  10. A summary of week one: Rivers et al

#CECEM Bridging the Gap between Evidence Based Practice and Practice Based Evidence

15 06 2009

A very interesting presentation at the CECEM was given by the organizer of this continental Cochrane meeting, Rob de Bie. De Bie is Professor of Physiotherapy Research and Director of Education of the Faculty of Health within the Dept. of Epidemiology of Maastricht University. He is both a certified physiotherapist and an epidemiologist. Luckily he kept the epidemiologic theory to a minimum; in fact he is a very engaging speaker who keeps your attention to the end.


While guidelines were already present in the Middle Ages, in the form of formalized treatment of daily practice, clinical guidelines have emerged more recently. These are systematically developed statements that assist clinicians and patients in making decisions about appropriate treatment for specific conditions.

Currently, there are 3 kinds of guidelines, each with its own shortcomings.

  • Consensus based. Consensus may be largely influenced by group dynamics:
    consensus = non-sensus, and consensus guidelines are “guide-lies”.
  • Expert based. Might be even worse than consensus based. It can suffer from all kinds of biases, like expert and opinion bias or external financing.
  • Evidence based. Recommendations are based on the best available evidence, deal with specific interventions for specific populations, and follow a systematic approach.

The quality of evidence based guidelines depends on whether the evidence is good enough, transparent, credible, available, applied and not ‘muddled’ by health care insurers.
It is good to realize that some trials are never done, for instance because of ethical considerations. It is also true that only part of what you read (in the conclusions) has actually been done, and that some trials are republished several times, each time with a better outcome…

Systematic reviews and qualitatively good trials that don’t give answers.

Next, Rob showed us the results of a study (Jadad and McQuay, J Clin Epidemiol, 1996) with efficacy as stated in the review plotted on the x-axis and the quality score on the y-axis. Surprisingly, meta-analyses of high quality were less likely to produce positive results. Similar results were obtained by Suttorp et al in 2006 (see the figure below).


Photo made by Chris Mavergames

There may be several reasons why good trials do not always give good answers. Well-known reasons are the lack of randomization or blinding. However, Rob focused on a less obvious reason. Despite its high level of evidence, a randomized controlled trial (RCT) may not always be suitable to provide good answers applicable to all patients, because RCTs often fail to reflect true clinical practice. Often, the inclusion of patients in RCTs is selective: middle-aged men, with exclusion of co-morbidity, whereas co-morbidity occurs in more than 20% of people aged 60 and over and in more than 40% of people aged 80 and over (André Knottnerus in his speech).

Usefulness of a Nested Trial Cohort Study coupled to an EHR to study interventions.

Next, Rob showed that a nested trial cohort study can be useful to study the effectiveness of interventions. He used this in conjunction with an EHR (electronic health record), which could be accessed by both practitioner and patient.

One of the diseases studied in this way was intermittent claudication. Most commonly, intermittent claudication is a manifestation of peripheral arterial disease in the legs, causing pain and cramps in the legs while walking (hence the name). The mortality is high: the 5-year mortality rates lie between those of colorectal cancer and non-Hodgkin lymphoma. This is related to the underlying atherosclerosis.

There are several risk factors, some of which cannot be modified, like hereditary factors, age and gender. Other factors, like smoking, diet, physical inactivity and obesity can be tackled. These factors are interrelated.

Rob showed that, whereas there may be an overall null effect of exercise in the whole population, the effect may differ per subgroup.


  • Patients with mild disease and no co-morbidity may directly benefit from exercise therapy (blue area).
  • Exercise has no effect on smokers, probably because smoking is the main causative factor.
  • People with unstable diabetes first show an improvement, which stabilizes after a few weeks due to exercise-induced hypo- or hyperglycaemia.
  • A similar effect is seen in COPD patients: the exercise becomes less effective because the patients become short of breath.

It is important to first regulate the diabetes or COPD before continuing the exercise therapy. By individually optimizing the intervention(s), a far greater overall effect is achieved: a 191% improvement in the maximal (pain-free) walking distance, compared with, for instance, <35% according to a 2007 Cochrane systematic review.

Another striking effect: exercise therapy affects some of the prognostic factors. Whereas there is no effect on BMI (this remains an important risk factor), age and diabetes become less important risk factors.

(Figure: shift in prognostic factors)

Because guidelines are quickly outdated, the findings are directly implemented in the existing guidelines.

Another astonishing fact: the physiotherapists pay for the system, neither the patient nor the government.

More information can be found online. Although the presentation is not (yet?) available on the net, I found a comparable presentation here.

** (2009-06-15) Good news: the program and all presentations can now be viewed online.

How to make EBM easy to swallow: BMJ PICO

8 02 2009

Guest author: Shamsha Damani (@shamsha)

As a medical librarian, I try to instill the importance of Evidence Based Medicine (EBM) in all my users. They agree that EBM is important and yet still resort to shortcuts (like using Google, asking colleagues, etc.). And you know what? I don’t blame them. Given the amount of medical literature published today, it is very difficult to keep up with it all. There are some very bad and poorly designed studies published, which makes it difficult to identify the good ones. And once you’ve identified a good article to read, evaluating and critiquing it is another daunting task. I keep wondering whether this has to be so difficult. Shouldn’t there be stricter standards for publications? Shouldn’t publishers care about the quality of the research that is associated with their name? I know that some journals, like ACP Journal Club, critique articles, but they don’t cover nearly enough topics.

As I pondered these thoughts, something very interesting happened that gives me hope. The BMJ recently announced that it will publish two summaries of each research article. One, called BMJ PICO, is prepared by the authors and breaks the article down into the different EBM elements. The other, called Short Cuts, is written by the BMJ itself. This is where I hope the BMJ will shine, provide an unbiased view of the article, and set itself apart from other journals by doing some extra work. Imagine reading a brief synopsis of a research article, not written by the author, which tells you whether the study was any good and whether the results were valid. What a time saver! I hope that the BMJ continues this practice and that other journals follow suit. Right now the BMJ is still testing the waters and trying to figure out which format would be most appealing to readers. Personally, I think it would have been better to have the BMJ reviewers write the PICO format and do a bit more thorough critiquing. The reviewers already critique the article before it gets accepted; it only makes sense that the results of such a thorough critique be published as well. An unbiased view would make it easier for readers to trust (or not!) the results and proceed accordingly.

I still believe that EBM skills are very important and should be learned.
However, busy health care providers will find value in such pre-packaged articles and will use the evidence more if it has been critiqued already. And isn’t that the point of EBM: to make more use of the evidence?

Shamsha Damani, Clinical Librarian

The Web 2.0-EBM Medicine split. [1] Introduction to a short series.

4 01 2009

In the three years that I’ve been working as a medical information specialist, I’ve embraced the concept of evidence based medicine, or EBM. As a searcher I spend hours, if not days, finding as much relevant evidence as possible on a particular subject, which others then select, appraise and synthesize into a systematic review or an evidence based guideline. I’m convinced that it is important to find the best evidence for any given intervention, diagnosis, or prognostic or causal factor.

Why? Because history has shown that despite their expertise and best intentions, doctors don’t always know or feel what’s best for their patients.

An example. For many years corticosteroids had been used to lower intracranial pressure after serious head injury, because steroids reduce the inflammation that causes the brain to swell. However, in the 1990s, meta-analyses and evidence-based guidelines called the effectiveness of steroids into question. Because sufficiently large trials were lacking, a large RCT (CRASH) was started. Contrary to all expectations, there was actually an excess of 159 deaths in the steroid group: the overall absolute risk of death in the corticosteroid group was increased by 2%. This means that the administration of corticosteroids had caused more than 10,000 deaths before the 1990s. [1,2,3]
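The step from a 2% absolute risk increase to “more than 10,000 deaths” is a simple multiplication. A sketch in Python: the 2% figure comes from the post above, but the number of patients treated over the preceding decades does not, so `n_treated` below is a purely hypothetical round figure chosen only to show how the arithmetic works:

```python
# Excess deaths = absolute risk increase x number of patients treated.
arr_increase = 0.02   # absolute increase in risk of death (the CRASH figure)
n_treated = 600_000   # hypothetical number of head-injury patients given steroids

excess_deaths = arr_increase * n_treated
print(f"Estimated excess deaths: {excess_deaths:,.0f}")  # 12,000 with these inputs
```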

Another example. The first Cochrane systematic review shows the results of RCTs of a short, inexpensive course of a corticosteroid given to women about to give birth too early. The diagram below, nowadays well known as the logo of the Cochrane Collaboration, clearly shows that antenatal corticosteroids reduce the odds of the babies dying from the complications of immaturity by 30 to 50 per cent (the diamond at the lower left). Strikingly, the first of these RCTs showing a positive effect of corticosteroids was reported as early as 1972. By 1991, seven more trials had been reported, and the picture had become even stronger. Because no systematic review of these trials was published until 1989, most obstetricians had not realized that the treatment was so effective. As a result, tens of thousands of premature babies have probably suffered and died unnecessarily. This is just one of many examples of the human costs resulting from the failure to perform systematic, up-to-date reviews of RCTs of health care. [4,5]

The Cochrane logo explained
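“Reducing the odds by 30 to 50 per cent” corresponds to an odds ratio between roughly 0.5 and 0.7. A small Python sketch of how an odds ratio is computed from a 2x2 table; the counts are invented for illustration and do not come from any of the trials in the review:

```python
def odds_ratio(events_t, n_t, events_c, n_c):
    """Odds ratio of an event in the treatment vs the control group."""
    odds_t = events_t / (n_t - events_t)  # odds of the event under treatment
    odds_c = events_c / (n_c - events_c)  # odds of the event under control
    return odds_t / odds_c

# Invented counts: 30 deaths among 200 treated vs 50 among 200 controls.
or_value = odds_ratio(30, 200, 50, 200)
print(round(or_value, 2))  # 0.53, i.e. a ~47% reduction in the odds
```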

Less than a year ago I entered the web 2.0 and (indirectly) medicine 2.0 world, via a library 2.0 course. I loved the tools and I appreciated the approach. Web 2.0 is ‘all about sharing’, or as Dean Giustini puts it, ‘all about people’. It is very fast and simple. It is easy to keep abreast of new information and to meet new, interesting people with good ideas and a lot of knowledge.

An example. Bertalan Mesko in a comment on his blog ScienceRoll:

I know exactly that most of these web 2.0 tools have been around for quite a long time. Most of these things are not new and regarding the software, there aren’t any differences in most of the cases. But!
These tools and services will help us how to change medicine. In my opinion, the most essential problem of medicine nowadays is the sharing of information. Some months ago, I wrote about a blogger who fights Pompe disease, a rare genetic disorder and he told me about the diagnostic delay. I try to help physicians how they can find information easier and faster. For example: I gave tips how to search for genetic diseases.

Other examples are well-functioning, dedicated patient web 2.0 sites, like PatientsLikeMe.

In the medical literature, on blogs and on slideshare, the differences between medicine 2.0 and 1.0 have already been described in detail (for instance, see the excellent review by Dean Giustini in the BMJ), as have the differences between medicine 1.0 and EBM (e.g. see the review by David Sackett et al in the BMJ).

However, the longer I’m involved in web 2.0, the more I feel it conflicts with my job as an EBM librarian. The approach is very different, other tools are used and other views are shared. More and more I find ideas and opinions expressed on blogs that do EBM no justice and that seem to arise out of ignorance and/or prejudice. On the other hand, EBM and traditional medicine are often not aware of web 2.0 sources or mistrust them. In science, blogs and wikis seldom count, because they express personal views, echo pre-existing data and are superficial.


I feel like I’m doing a split, with one leg in EBM and the other in web 2.0. In my view each has its merits, and the two approaches should not oppose each other but should mingle: EBM getting a lower threshold and becoming more digestible and practical, and medicine 2.0 becoming less superficial and better underpinned.

It is my goal to take an upright position, standing on both legs, integrating EBM and medicine 2.0 (as well as medicine 1.0).

As a first step I will discuss some discrepancies between the two views as I encounter them in blogs, in the form of a mini-series: “The Web 2.0-EBM Medicine split”.

Before I do so, I will give a short list of what I consider characteristic of each type of medicine: EBM, web 1.0 (usual) and web 2.0 medicine. It is not based on any evidence, only on experience and intuition; I’ve just written down what came to mind. I would be very interested in your thoughts on this.

EBM – medicine

  • centered round the best evidence
  • methodology-dependent
  • objective, transparent
  • thorough
  • difficult (to make, but for many also to find and to understand)
  • time-consuming
  • published in peer reviewed papers (except for guidelines)
  • searching: PubMed and other bibliographic databases (to produce) and guideline databases, TRIP, and PubMed (Clinical Queries) or specific sources, i.e. specialist guidelines (to find).
  • Mostly Web 1.0 (with some web 2.0 tools, like podcasts, RSS and e-learning)

Web 1.0 – traditional medicine*

  • centered round clinical knowledge, expertise and intuition
  • opinion-based
  • authority based, i.e. strong beliefs in opinion leaders, expert opinion or ‘authority opinion’ (i.e. heads of departments, professors) and one’s own authority versus the patient.
  • subjective
  • fast
  • act! (motto)
  • searching: browsing (a specific list, site or journal), quick searches, mostly via Google**, in pharmacopoeias, or in protocols and UpToDate; seldom in PubMed (dependent on discipline)
  • Web 1.0: mail, patient records, quick searches via Google and PubMed

Web 2.0 medicine

  • people-centered and patient-centered (although mostly not in individual blogs of doctors)
  • heavily based on technology (easy to use and free internet software)
  • social-based: based on sharing knowledge and expertise
  • (in theory) personalized
  • subjective, non-directed
  • often superficial
  • fast
  • generally not peer reviewed, i.e. published on blogs and wiki’s
  • searching: mostly via free internet sources and search engines, e.g. Wikipedia and eMedicine, and via Google**, health metasearch engines like Mednar and Health Sciences Online; PubMed mainly via third-party tools like GoPubMed, HubMed and PubReMiner (e.g. see the recent listings of top bedside health search engines on Sandnsurf’s blog ‘Life in the Fast Lane’)
  • heavily dependent on web 2.0 tools for ‘publishing’, ‘finding information’ and ‘communication’

* Very general; of course dependent on discipline.
** This is not merely my impression; e.g. see this blogpost on the “Clinical Cases and Images” blog of Ves Dimov, referring to four separate interviews by Dean Giustini with physician bloggers.

Other references

[1] Final results of MRC CRASH, a randomised placebo-controlled trial of intravenous corticosteroid in adults with head injury-outcomes at 6 months. Edwards P et al. Lancet. 2005 Jun 4-10;365(9475):1957-9.
[2] A CRASH landing in severe head injury. Sauerland S, Maegele M. Lancet. 2004 Oct 9-15;364(9442):1291-2. Comment on: Lancet. 2004 Oct 9-15;364(9442):1321-8.
[3] Corticosteroids for acute traumatic brain injury. Alderson P, Roberts IG. Cochrane Database of Systematic Reviews 2005, Issue 1. Art. No.: CD000196.
[5] Antenatal corticosteroids for accelerating fetal lung maturation for women at risk of preterm birth. Roberts D, Dalziel SR. Cochrane Database of Systematic Reviews 2006, Issue 3. Art. No.: CD004454.
[6] How Web 2.0 is changing medicine. Giustini D. BMJ. 2006 Dec 23;333(7582):1283-4.
[7] Evidence based medicine: what it is and what it isn’t. Sackett DL et al. BMJ. 1996 Jan 13;312(7023):71-2.

The importance of early intervention in Addisonian crises

27 10 2008

In a previous post, entitled “Changing care for Addison patients” (see here), I mentioned that Addison’s disease is often misdiagnosed and that Addison crises are not always adequately dealt with.

“I’m by no means an exception. Addison’s disease is often missed or diagnosed late. That early diagnosis can be a challenge is frequently addressed in the medical literature and many poignant examples can be read on patient forums. In fact I know very few prompt and swift diagnoses.”

“…But there are far more upsetting stories of other Addison crises. Even in this era there are unnecessary deaths due to inadequate intervention.”

While preparing this post I came across a recent paper in “Het Nederlands Tijdschrift voor Geneeskunde” (something like the Dutch Lancet) with a relevant clinical lesson on this very subject. It is entitled:

“Addisonian crisis in patients with known adrenal insufficiency: the importance of early intervention”, written by Mulder of the group of Professor Hermus from the Universitair Medisch Centrum St Radboud, Nijmegen.

The paper describes 3 fatal cases of Addisonian crisis in patients with adrenal insufficiency, which formed the basis for the development of a regional protocol to prevent further unnecessary deaths from Addisonian crisis (see the PubMed abstract here).

The cases

Patient A was a 47-year-old male with congenital adrenal hyperplasia due to 21-hydroxylase deficiency. Since this leads to deficient glucocorticoid and mineralocorticoid hormone production, replacement therapy consisted of daily glucocorticoid (hydrocortisone, HC) and mineralocorticoid (fludrocortisone).
Patient A suddenly developed gastroenteritis (acute abdominal pain, watery diarrhea, no fever), for which he doubled his HC dose. The next day he became weak and dizzy. The consulted physician didn’t deem parenteral cortisol (proposed by the patient’s partner) necessary, but prescribed loperamide instead. The diarrhea indeed improved, but the patient’s condition worsened overnight: his temperature dropped to 34.4 °C, he became confused and finally comatose. Upon arrival at the ED the hypotensive patient developed ventricular fibrillation. The neurological sequelae after CPR were so severe that active medical treatment was withheld, after which the patient died.


The other two patients had panhypopituitarism and adrenal insufficiency secondary to ACTH deficiency. With respect to replacement of adrenal hormones, these patients only require replacement of glucocorticoids (whose production is ACTH-driven), not of mineralocorticoids. (On the other hand, they need replacement of other pituitary-regulated hormones, like levothyroxine, gonadotropins and growth hormone.)

Patient B, a 28-year-old male, got a sore throat and fever (41 °C), for which he didn’t increase his HC dose. His mother called a physician in vain: patient B didn’t respond and was found dead two hours later. Autopsy showed tonsillitis, bronchopneumonia and an enlarged spleen, indicative of sepsis. This all took place within one and a half days.

Patient C had been vomiting and had fever for a couple of days. Soon after her doctor visited her, she suffered a cardiac arrest and died. Her family physician was familiar with neither her medical history nor her prescribed medication. In retrospect, patient C had shown poor treatment compliance (she had never come for consultations and had not taken her replacement medication, including HC, for a year).


Even patients known to have adrenal insufficiency can develop a life-threatening Addisonian crisis if the glucocorticoid dosage is inadequately adjusted during intercurrent illness. Treatment consists of a high parenteral dose of glucocorticoids, preferably HC (because this also has a mineralocorticoid action).

The chance of hypovolemic shock accompanying a crisis is greater in patients with primary Addison’s disease, who lack mineralocorticoids (case A).

Preventive measures

These casualties led to a new protocol. According to the authors:

“Patients with known adrenal insufficiency, as well as their relatives and general practitioners, should repeatedly receive verbal and written instructions on how to deal with physical and severe psychic stress. We teach the patients and their relatives how to use an emergency injection of hydrocortisone, and the patients can consult the on-call endocrinologist by telephone 24 hours a day.”

I. Points to be addressed in the yearly instruction of patients with primary or secondary adrenal insufficiency, preferably in the presence of their partner or a close relative:

  • explain importance of glucocorticoid use.
  • describe the symptoms of an Addisonian crisis
  • give instruction on increasing glucocorticoid dose in case of illness or severe stress
  • stress the importance of an alert bracelet
  • verify whether the patient has an emergency ampule of hydrocortisone (e.g. Solu-Cortef) at home
  • give instruction on the use of an emergency intramuscular injection (usually given by a nurse)
  • inquire about traveling abroad; provide a letter with advice in case of emergency (written in English) if required*
  • provide written information, including the telephone number of the on-call endocrinologist (24-hours-a-day service)!!
  • In addition, the family physician receives a yearly letter with standard treatment advice in case of an imminent Addisonian crisis. He is advised to inform his colleagues at the central GP post.

II. Advice to patients with primary or secondary adrenal insufficiency on the glucocorticoid dosage in case of stress. The normal dose is 15 to 30 mg HC daily (or an equivalent dose of another glucocorticoid):

  • outpatient or dental interventions (i.e. under local anesthesia): double the HC dose before the intervention
  • fever (>38 °C) or severe psychological stress** (a difficult exam, the death of a family member): at least triple the HC dose, i.e. 60 mg in the morning and 30 mg in the evening; taper to the normal dose after the symptoms are relieved. Contact a doctor if there is no improvement.
  • vomiting or diarrhea, unconsciousness: parenteral administration of 100 mg hydrocortisone by the patient or partner (im) or a physician (im, iv); direct consultation of the on-call endocrinologist, and always a check afterwards at the ED
  • surgery or hospitalization: the treating physician should contact the patient’s endocrinologist for advice on dose adjustments.

What is special about this protocol are the 24-hour endocrinologist-on-call service, the earlier (and consistent) referral to endocrinologists and the ED in case of a possible emergency, and the structural approach: all patients with adrenal insufficiency, as well as their relatives and physicians, are well informed about the preventive measures that should be taken (including the HC emergency ampule and alert bracelet).

That is a great improvement! Hopefully other regions and countries will follow this example.

Notes and Sources:

Source: Mulder AH, Nauta S, Pieters GF, Hermus AR. Addisonian crisis in patients with known adrenal insufficiency: the importance of early intervention. Ned Tijdschr Geneeskd. 2008 Jul 5;152(27):1497-500. [Article in Dutch] (see the PubMed abstract here).
* The Dutch Addison and Cushing Society NVACP has long had a small booklet, the “SOS stressboekje”, specially designed to inform physicians abroad when the patient is on vacation. Its short guidelines for dosing (hydro)cortisone under stress and its medical information for physicians are translated into six languages.
** Advice based on what is usually recommended in the literature. There is little evidence for a particular dose in case of physical or psychological stress.
Photo acknowledgements: the burning and burnt matches photos are from Flickr.


The Dutch-language summary of the article (translated):
Addisonian crisis in patients with known adrenal insufficiency: the importance of early intervention
A.H. Mulder, S. Nauta, G.F. Pieters and A.R.M.M. Hermus, Ned Tijdschr Geneeskd. 2008 Jul 5;152(27)

Ladies and Gentlemen,
Patients with adrenal insufficiency can generally function well when treated with glucocorticoids and, in the case of primary adrenal insufficiency, mineralocorticoids. During illness, fever, and severe psychological stress the body’s natural need for cortisol is increased. Patients with adrenal insufficiency must therefore raise their glucocorticoid replacement dose in these situations. Although they receive instructions about this during outpatient visits, these instructions are not always followed adequately. The seriousness of the situation is sometimes insufficiently recognized, whether by the patient or by the consulted family physician or specialist.
With the following three case histories we want to draw attention to the fact that an Addisonian crisis in patients with known hypocortisolism is life-threatening and that early, adequate intervention is essential. We also describe the measures we took to inform patients even better about glucocorticoid use during physical and psychological stress and to raise awareness among co-treating physicians.

Changing care (for Addison patients)

19 10 2008

This post is inspired by the theme of this week’s Grand Rounds at Pallimed, a hospice and palliative medicine blog: “Changing Goals of Care”. According to Christian Sinclair, M.D. of Pallimed:

It can be changing the goals in any direction, not just the curative towards palliative route, although I expect that is a common touchstone for many in the medical field.

‘Goals of Care’ is a subject that is outside of my area of professional expertise, being a medical biologist and an information specialist.

But as a consumer and patient I can easily see how I would like health care to change.

  • affordable healthcare for everyone who needs it
  • more personal and personalized care
  • and, indeed, more attention for palliative healthcare (my mother-in-law has a bearable life since low doses of morphine were prescribed)

But those issues can be better addressed by people in the field. I will simply restrict myself to changing care in one very specific area, adrenal diseases, because there I am a hands-on expert, having secondary Addison’s disease (Sheehan’s syndrome).

Main conclusions:
Healthcare providers: look (and act) beyond your specialty! Try to be a good generalist as well. Please adapt protocols if it suits the patients. Take the patient seriously.

Primary Addison’s disease (damage to or destruction of the adrenal cortex) as well as secondary Addison’s (absent pituitary signal(s)) often has a slow onset and is difficult to diagnose.
In theory this may be different for Sheehan’s syndrome. According to Google Knol:

Sheehan’s syndrome (…) is a condition in which the pituitary gland is injured as a result of heavy blood loss during complicated childbirth. This heavy loss of blood deprives the pituitary gland of oxygen and other nutrients and leads to necrosis (death) of pituitary tissue and therefore pituitary failure (hypopituitarism). Failure to produce breast milk after delivery (due to lack of the pituitary hormone prolactin) may be a presenting sign of Sheehan’s syndrome. Fortunately, Sheehan’s syndrome is now a rare cause of pituitary failure, particularly in developed countries, as a result of improved obstetric care.

Looking back, I’m stunned that Sheehan’s was not directly diagnosed by the gynecologists themselves.
And I am perhaps even more surprised that it happened to me in the first place, hospitalized in Europe and with a previous cesarean. (For good reason the saying is: “Once a cesarean, always a cesarean”.) According to present protocols I had many negative predictors of success (no prior vaginal birth, short stature, age >40, induction of labor, gestational age of almost 43 weeks, failed second stage), but worst of all, they didn’t take me seriously when I said I didn’t feel well and got a sudden neck pain. When I stood up, I fainted. So I have every reason to believe all this could have been prevented.

I lost more than 3 litres of blood (and had puerperal fever as well), developing all the signs of Sheehan’s (and an Addisonian crisis) in the days that followed: breast milk “disappearing”, loss of appetite, severe muscle pain, fatigue, headache, lethargy, extreme nausea, diarrhea and vomiting, and finally slurred speech, a falling sensation when lying down, sensitivity to cold, etc. But the nurses pressed me to keep trying to breastfeed (until bleeding), reprimanded me in the presence of other patients (“you have to break the circle, please do your best (!) and eat something; you have to take care of your child, come on!”), and a psychiatrist was called in. Finally (after 10 days), when I pleaded with them to check whether I was dehydrated, they ran some tests and found that my blood sodium was dangerously low (106; normal 140) and apparently could not be corrected by saline infusion. I “missed” this part, but when I woke up the internist proudly told me he had found out I had Sheehan’s (practically no cortisol or any other hormone regulated by the anterior pituitary). Normal sodium levels were achieved after cortisol replacement was given.

I’m by no means an exception. Addison’s disease is often missed or diagnosed late. That early diagnosis can be a challenge is frequently addressed in the medical literature, and many poignant examples can be read on patient forums. In fact, I know of very few prompt and swift diagnoses.

For instance (from the newsletter of The Canadian Addison Society, issue 27, 2002):

After being admitted and discharged what seemed to feel like every weekend, I was finally admitted for bronchitis that affected my asthma. I went on Prednisone* to treat the infection. I felt much better to my surprise. After being “cured” of bronchitis, back in the hospital I went. The pain was unbearable; doctors were questioning if I was anorexic, I saw a psychiatrist who put me on Paxil because I “appeared” to be depressed. Demerol became my new best friend and was the only thing that put me at ease.
My mother continued to stay by my side the entire time. Whether it be stroking my hand, brushing my hair, or encouraging me to walk just a few steps a day. This felt like a marathon to me; in reality it was only a few steps.
After every “possible” test was completed my internist had suggested performing one more test. The results had come back positive! Addison’s Disease….**

(*Prednisone is a glucocorticoid that can replace cortisol. **This patient also had pigmented palms, characteristic of primary Addison’s disease.)

The same is true for other adrenal diseases. Cushing’s disease (an excess of cortisol) is often mistaken for (manic) depression. See for instance here (Dutch).

After years of unrecognized Cushing’s, one of my fellow patients was treated by many specialists. One expert (an orthopedist, I believe) totally missed the Cushing’s, because she was fixated on other causes of the severe osteoporosis and didn’t notice the patient’s bruises, mania, belly fat, and striae, to name just a few other symptoms typical of Cushing’s. The missed diagnosis means she is mostly in a wheelchair now and not able to do the things she liked to do. (For those interested and able to read Dutch: she has written a book about it, “Aftakelen en Ophijsen”.)

Action (in case of a crisis)
With hormone replacement therapy, most Addison patients are able to lead normal lives. However, extreme stress can precipitate an Addisonian crisis, which is a medical emergency. Patients therefore often wear alert bracelets or necklaces, so that emergency personnel can identify them as having adrenal insufficiency and provide stress doses of steroids in the event of trauma, surgery, or hospitalization.

Luckily I don’t seem very vulnerable to crises (I still produce aldosterone), but the one time I had something like it (presumably due to faulty capsules, hence a more insidious onset), family physicians reacted inadequately. One gave me a lab form emphasizing twice that lab tests should ONLY be done when I was really, really ill. Very stupid, because determining sodium costs nothing compared to hospitalization, and my pride prevented me from taking the test, afraid of making a fool of myself. My own physician said a few weeks later that I should consult an endocrinologist, because he found Addison’s “much too difficult”. I didn’t think that was so bad, but my endocrinologist disagreed, because “he would have been too late in case of a real emergency”. (I had a sodium of 123, but was hospitalized because my endo (a wonderful female doctor) found that I behaved differently and wasn’t OK; I had also lost more than 18 pounds in 2 months.)

But there are far more upsetting stories of other Addisonian crises. Even in this era there are unnecessary deaths due to inadequate intervention. What is also worrying is that paramedics often miss the alert bracelets. A Dutch paramedic wrote on the bulletin board of our patients’ association that paramedics don’t even look at them, because they aren’t allowed to do anything beyond first aid and stabilization. However, if my husband may give me an intramuscular injection of corticosteroids, why can’t a paramedic? It is the most essential emergency measure that can and should be taken. He advised that we join forces with other patient groups to change the protocols for ambulance personnel. Paramedics won’t do anything they are not legally permitted to do.

I also hear from many Addison patients that it takes ages before adequate action is taken. Apparently, routine tests have to be performed first. A nurse even told me that glucose is tested first, because it is such an easy and fast test. OK, an Addisonian crisis is often accompanied by low blood glucose. So what? Get those corticosteroids in!!! Intravenous injection is often difficult because of the low blood pressure; it often takes too long and often fails, at least that is what I hear from other patients.

Iatrogenic Cushing and Addison

Apart from natural causes, Cushing’s and Addison’s disease can have an iatrogenic cause (unintended harmful effects of a physician’s activity, manner, or therapy). It is well known that long-lasting treatment and/or high doses of corticosteroids can produce Cushing-like symptoms, as well as Addisonian crises upon sudden withdrawal (because of feedback mechanisms, the body can no longer make cortisol itself).
Laurens Mijnders developed long-lasting Addison’s disease as a result of his asthma treatment. His letter in Contrastma, a paper of the Dutch Asthma Foundation (Astma Fonds), evoked many responses from patients who had used high doses of corticosteroids (up to 50 mg of prednisone per day). The reactions showed that doctors had given little or no information about the adverse effects of corticosteroids and had never warned against a possible Addisonian crisis (see here).
An endocrinologist revealed at a meeting that they still regularly see Addisonian crises in patients who received high-dose steroids for their asthma, rheumatic, dermatologic, or other inflammatory condition.
Of course some of these diseases can only be controlled with corticosteroids, but the treating physician should try to sail safely between Scylla and Charybdis and prepare the patient for any (anticipated) danger.

Wasn’t it: “Primum non nocere” (Latin for “First, do no harm”)?!

Thus physicians, look beyond the border of your specialty and always take patients seriously, please?

Addison's disease info (NVACP)

Dutch Grand Round 1.3

23 09 2008

The 3rd Dutch Grand Round is up at the blog of Jan Martens. This time there are 4 posts, 3 of them in English. Please read his summary of de Grote Visite here.

The next Dutch Grand Round will be hosted at Health Management RX of Jen McCabe Gorman on October 7. The deadline is October 5. You can submit your articles by mailing Jen or through the Blog Carnival. There is no theme for submissions, but posts should relate to medicine or health in some way.



Previous posts on this subject:
(2008/09/09) Dutch-grand-round-nr-2
(2008/08/26) The first Dutch grand round
(2008/08/16) 1st Dutch grand round expected soon + continuation MedblogNL-top 25 cancelled.
(2008/08/10) a Dutch grand round. Announcement
+ references to English-language grand rounds.

Appropriate bedside manners

14 05 2008

Do you prefer a doctor who cries at the bedside, or rather one who stays calm and keeps a professional distance?

My previous post on Etiquette-Based Medicine also dealt with ‘correct’ attitudes of doctors towards their patients. There I quoted Dr. Kahn, who believes that “patients may care less about whether their doctors are reflective and empathic than whether they are respectful and attentive”. His opinion is shared by many, but certainly not by all. I already cited a British Journal of General Practice issue on doctor-patient communication in which different views were presented. Well, the debate is still ongoing. In the NY Times of April 22nd there was an interesting piece about physicians crying at the bedside: At Bedside, Stay Stoic or Display Emotions? [*requires registration].

Some excerpts:

“A young doctor sat down with a terminal lung cancer patient and her husband to discuss the woman’s gloomy prognosis. The patient began to cry. Then the doctor did, too.

At a recent meeting of the Society of General Internal Medicine, Dr. Anthony D. Sung of Harvard Medical School and colleagues reported that 69 percent of medical students and 74 percent of interns said they had cried at least once.

In the 1988 PBS documentary “Can We Make a Better Doctor?” a Harvard medical student, Jane Liebschutz, sees her patient unexpectedly die during a cardiac bypass operation. She suddenly bursts into tears and wanders away from her colleagues until the chief surgeon, who has witnessed what happened, assures her that her response was natural.

Dr. Benita Burke skipped lunch to spend extra time with her cancer patients. They dubbed this time “mental health rounds,” during which they could address issues that were not strictly medical. Many times, Dr. Burke would wind up in tears or giving an embrace.

The comments in the NY Times and at two blogs (DB’s Medical Rants and Clinical Cases and Images) are also worth reading. These responses illustrate that there is no single truth. Whether strong emotions like crying are appropriate depends on the doctor, the patient, the situation, and where and how the emotions are expressed. Most patients do not seem to appreciate outright crying at their bedside, as it makes them insecure. A crying doctor might also feel like a final verdict: no hope is left. But nobody would blame an intern for crying with his or her mates. And a doctor who cries in front of the patient’s family when sharing information about a serious medical error might help them accept what happened.

So, what kind of doctor would you prefer?

I agree with Dr Hiram Cody, cited in the NY Times, who cautions against excess emotion. Although Dr. Cody emphasizes the need for doctors “to understand, to sympathize, to empathize and to reassure,” he says his job is not to be emotional or to cry with his patients, for two reasons: it is not therapeutic for the patient, and it will cause “emotional burnout” (although I’m not sure about the latter).

Personally I prefer a doctor with great knowledge but open-minded to other ideas; attentive and empathic, but without losing a certain distance; a good listener who explains the disease and the treatment options… but no crying, please, never! Never when I’m around. Not when I’m the patient.



Etiquette-Based Medicine

11 05 2008

Every now and then my colleague Heleen provides me with an interesting paper (a nice web 1.0 way of sharing things). Last Friday I found this paper on my desk: “Etiquette-Based Medicine” by Michael W. Kahn. The paper, in this week’s New England Journal of Medicine, is not about replacing “evidence-based medicine” or “eminence-based medicine” with “etiquette-based medicine”. It is about the importance of a good attitude of doctors towards their patients.

When psychiatrist Dr. Kahn hears patients complain about doctors, their criticism often has nothing to do with not feeling understood or empathized with. Instead, they object that “he just stared at his computer screen,” “she never smiles,” or “I had no idea who I was talking to”, he writes.
By contrast, during his own hospitalization he noticed the professional attitude of his European-born surgeon, with his Old World manners (dress, body language, eye contact, etc.).

“The impression this surgeon made was remarkably calming, and it helped to confirm my suspicion that patients may care less about whether their doctors are reflective and empathic than whether they are respectful and attentive”, wrote Dr. Kahn.

Therefore, Kahn suggests that medical education and postgraduate training should place more emphasis on “etiquette-based medicine”, as it forms the basis of the patient-doctor relationship. One approach would be to introduce a checklist to enforce an etiquette-based approach. A checklist for the first meeting with a patient would, for instance, cover items like ‘ask permission to enter the room and wait for an answer’, ‘introduce yourself and show your ID badge’, and ‘explain your role in the team’.

This approach resembles programs introduced at several academic medical centres in the Netherlands. For instance, Maas Jan Heineman, now Professor of Gynaecology at the Academic Medical Centre (AMC) in Amsterdam, helped to introduce such an “etiquette program” in Groningen and in Amsterdam. The competences of doctors and the integration of knowledge, skills, and attitude are now central to the new curricula. As Heineman puts it: “What good is a doctor who has great knowledge but behaves badly? Or vice versa?”

These thoughts are (of course) not specifically Dutch (nor European). The entire January 2005 issue of the British Journal of General Practice focuses on this subject.

The journal issue ends with a book review of a UK-US guide to communicating with patients, consisting of the book ‘Skills for Communicating with Patients’ and a companion volume, ‘Teaching and Learning Communication Skills in Medicine’, which translates the first book into a framework that can be used in designing and delivering curricula for communication skills teaching in both the academic and clinical setting.

The reviewer, Iain Lawrie, is very positive about the content:

“The layout and language are clear and unambiguous throughout. Important points are emphasised where necessary, and at no time does reading become laborious. Far more importantly, however, the authors have employed an evidence-based approach that moves these titles from the realm of personal opinion and musings to an authoritative work. The frequent use of examples further serves to promote this series as a ‘useable’ guide. (….)
In Skills for Communicating with Patients, the authors use a logical approach to analyse the various aspects of communication relevant throughout the consultation process, which are then explored in greater depth over six chapters. They move from the initiation of a consultation (!), through information gathering, structuring, and relationship building, to the often neglected areas of explaining and planning and, finally, closing the encounter.”

Thus it seems that the awareness within the medical community about the necessity of good communication skills is growing. The tools are there, some curricula have already embraced “etiquette based medicine” (although not called by that name) and it seems just a matter of time before “etiquette” becomes an integral part of medical education.

Let’s conclude with a quote from the above-mentioned book, which also applies to professions other than medicine:

‘If you can’t communicate, it doesn’t matter what you know’



Searching for a needle in a haystack?

24 03 2008

(image: a needle in a haystack)

This week the AMC homepage used the image above to advertise the upcoming April courses ‘Evidence-Based Searching for (para)medics and librarians’. Our Medical Library organizes these together with the Dutch Cochrane Centre (DCC).

In the past, I understand, pairs of physicians and information specialists attended this course together, in order to gain enough knowledge of EBS (evidence-based searching) to implement it at their own location. For some years now, physicians, paramedics, and information specialists have attended individually.

Although the image of ‘searching for a needle in a haystack’ is very vivid, I don’t think it really applies to our EBS courses. What we want to teach the participants (and our customers) is precisely that they should not start by hunting for the needles in the haystack, but should first look in ‘shops’ where those ‘needles’ have already been neatly collected, sorted, and appraised. There are plenty of sources where you can quickly inspect those collected needles. In other words, one should first search for aggregated evidence. For an extensive question (for a systematic review) we do look in several places, but even then we use all kinds of tricks to target the needles that can serve us best.

EBS is precisely a way to avoid diving into that haystack yourself for every single search.

Cochrane Library underused in the US??

10 02 2008

While browsing around I have come across some interesting blogs. One of them is that of the Krafty Librarian. She reports a lot worth knowing in my field of interest. I even already managed to get an RSS feed of her blog, so I am a day ahead of the course (I accidentally pressed the RSS feed button and was automatically offered a choice of umpteen RSS readers, from which I randomly downloaded Google’s, and it worked, at least for a single feed).

What I have selected can be read under “Laika’s selections” in the right-hand column, an option that Google offers.

The Krafty Librarian pointed, among other things, to a recent article in Obstetrics & Gynecology. I want to highlight it because it ties in so nicely with the request for financial support to make the Cochrane Library accessible to everyone in Europe. Methodologically the article is not very strong, but it does put a finger on the sore spot: 1. not all so-called evidence-based reviews are evidence based; 2. how do you ensure that the evidence (neatly lined up in a systematic review) is actually found and used?

Grimes DA, Hou MY, Lopez LM, Nanda K. Do clinical experts rely on the Cochrane Library? Obstet Gynecol. 2008 Feb;111(2):420-2.

In part because of limited public access, Cochrane reviews are underused in the United States compared with other developed nations. To assess use of these reviews by opinion leaders, we examined citation of Cochrane reviews in the Clinical Expert Series of Obstetrics & Gynecology from inception through June of 2007. We reviewed all 54 articles for mention of Cochrane reviews, then searched for potentially relevant Cochrane reviews that the authors could have cited. Thirty-six of 54 Clinical Expert Series articles had one or more relevant Cochrane reviews published at least two calendar quarters before the Clinical Expert Series article. Of these 36 articles, 19 (53%) cited one or more Cochrane reviews. We identified 187 instances of relevant Cochrane reviews, of which 40 (21%) were cited in the Clinical Expert Series articles. No temporal trends were evident in citation of Cochrane reviews. Although about one half of Clinical Expert Series articles cited relevant Cochrane reviews, most eligible reviews were not referenced. Wider use of Cochrane reviews could strengthen the scientific basis of this popular series.
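As a quick sanity check, the percentages in the abstract follow directly from the reported counts; a trivial sketch:

```python
# Recompute the citation rates reported by Grimes et al. (2008).
articles_with_relevant_reviews = 36  # articles with >= 1 relevant Cochrane review
articles_citing_at_least_one = 19    # of those, articles citing >= 1 review
relevant_review_instances = 187      # instances of relevant Cochrane reviews
cited_instances = 40                 # instances actually cited

article_rate = round(100 * articles_citing_at_least_one / articles_with_relevant_reviews)
instance_rate = round(100 * cited_instances / relevant_review_instances)

print(f"{article_rate}% of eligible articles cited a Cochrane review")  # 53%
print(f"{instance_rate}% of relevant review instances were cited")      # 21%
```

Both figures match the 53% and 21% stated in the abstract.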

In short: the Clinical Expert Series in Obstetrics & Gynecology is a popular series devoted to practical evidence-based overviews in this field, weighed against the clinical expertise of the author (an opinion leader). Grimes et al examined, for a given period, in how many articles a relevant Cochrane review was cited and in how many articles Cochrane reviews were wrongly not cited. Only 21% of the relevant reviews were cited, which is remarkable, since Cochrane reviews are considered evidence of very high quality.
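As a quick sanity check on the abstract's arithmetic, the figures reported by Grimes et al (19 of 36 eligible articles citing a review, 40 of 187 relevant reviews cited) do work out to the stated percentages:

```python
# Counts taken directly from the abstract of Grimes et al. (2008)
cited_articles, eligible_articles = 19, 36    # articles citing >= 1 relevant Cochrane review
cited_reviews, relevant_reviews = 40, 187     # reviews cited vs. reviews that could have been cited

pct_articles = round(100 * cited_articles / eligible_articles)  # 53
pct_reviews = round(100 * cited_reviews / relevant_reviews)     # 21
print(pct_articles, pct_reviews)
```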

How could this be explained? According to Grimes et al:

  • Some authors are unaware of the Cochrane Library. This seems unlikely, because Cochrane abstracts have been included in PubMed since 2000.
  • Although authors are asked to base their manuscripts on evidence, they are not explicitly asked to search the Cochrane Library for relevant reviews.
  • Limited access to the full text of Cochrane reviews. However, the authors work at medical schools, which generally do have access to the Cochrane Library.
  • A preference for citing primary sources (specific RCTs, randomized controlled trials) over the systematic reviews that summarize those RCTs.
  • Cochrane reviews were found, but were not considered relevant.
  • Cochrane reviews are not user friendly (their format, the emphasis on methodology, and the inaccessible wording). [9]
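Regarding that first point: because Cochrane abstracts are indexed in PubMed, they can be retrieved there directly. As a minimal sketch, assuming the standard NCBI E-utilities `esearch` endpoint and PubMed's `[Journal]` field tag (the topic term `tocolysis` is just an illustrative choice), such a query could be built like this:

```python
from urllib.parse import urlencode

# Restrict a PubMed search to the Cochrane Database of Systematic Reviews.
# The endpoint and field tag are standard PubMed/E-utilities conventions;
# the topic term is a hypothetical example.
base = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
term = '"Cochrane Database Syst Rev"[Journal] AND tocolysis'
url = base + "?" + urlencode({"db": "pubmed", "term": term, "retmode": "json"})
print(url)  # fetching this URL (network required) returns matching PMIDs as JSON
```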

… the following excerpt illustrates what happens when important evidence is not found:

Despite their clinical usefulness, Cochrane systematic reviews of randomized controlled trials are underused in the United States. For example, a Cochrane review documenting that magnesium sulfate is ineffective as a tocolytic agent received little attention in the United States and Canada, where this treatment has dominated practice for several decades.[10] This therapy had been abandoned in other industrialized nations, where access to the Cochrane Library is easier.[11] Citizens of many countries have free online access to the Cochrane Library through governmental or other funding. In the United States, only Wyoming residents have public access through libraries, thanks to funding by its State Legislature.

9. Rowe BH, Wyer PC, Cordell WH. Evidence-based emergency medicine. Improving the dissemination of systematic reviews in emergency medicine. Ann Emerg Med 2002;39:293-5.
10. Crowther CA, Hiller JE, Doyle LW. Magnesium sulphate for preventing preterm birth in threatened preterm labour. Cochrane Database Syst Rev 2002;(4):CD001060.
11. Grimshaw J. So what has the Cochrane Collaboration ever done for us? A report card on the first 10 years. CMAJ 2004;171:747-9.

