Social Media in Clinical Practice by Bertalan Meskó [Book Review]

13 09 2013

How do you review a book on medical social media written by an author who has taught you many social media skills himself?

Thanks to people like Bertalan Meskó, the author of the book concerned, I am not a novice in the field of medical social media.

But wouldn’t it be great if all newcomers to the medical social media field could benefit from Bertalan’s knowledge and expertise? Bertalan Meskó, an MD with a summa cum laude PhD in clinical genomics, has already shared his insights in posts on his award-winning blog ScienceRoll, via Twitter, via an online service that curates health-related social media resources, and by giving presentations and social media classes to medical students and physicians.

But many of his students would rather read (or reread) the topics in a book than in e-learning materials. Therefore Bertalan decided to write a handbook entitled “Social Media in Clinical Practice”.

This is the table of contents (for more complete overview see Amazon):

  1. Social media is transforming medicine and healthcare
  2. Using medical search engines with a special focus on Google
  3. Being up-to-date in medicine
  4. Community sites Facebook, Google+ and medical social networks
  5. The world of e-patients
  6. Establishing a medical blog
  7. The role of Twitter and microblogging in medicine
  8. Collaboration online
  9. Wikipedia and Medical Wikis
  10. Organizing medical events in virtual environments
  11. Medical smartphone and tablet applications
  12. Use of social media by hospitals and medical practices
  13. Medical video and podcast
  14. Creating presentations and slideshows
  15. E-mails and privacy concerns
  16. Social bookmarking
  17. Conclusions

As you can see, many social media tools are covered and in this respect the book is useful for everyone, including patients and consumers.

But what makes “Social Media in Clinical Practice” especially valuable for medical students and clinicians?

First, specific medical search engines, social media sites and tools are discussed, such as PubMed (medical database and search engine), Sermo (community site for US physicians), Medworm (aggregator of RSS feeds), medical smartphone apps and sources where to find them, and medical wikis like Radiopaedia.
Scientific social media sites with possible relevance to physicians are also discussed, like Google Scholar and Wolfram Alpha.

Second, numerous medical examples are given (with links and descriptions). Often, examples are summarized in tables in the individual chapters (see Fig 1 for a random example 😉 ). Links can also be found at the end of the book, organized per chapter.


Fig 1. Examples represented in a Table

Third, community sites and non-medical social media tools are discussed from the medical perspective. With regard to community sites and tools like Facebook, Twitter, blogs and e-mail, special emphasis is placed on quality, privacy and legal concerns (very important for clinicians), for instance the compliance of websites and blogs with the HONcode (HON = the Health On the Net Foundation) and HIPAA (Health Insurance Portability and Accountability Act), the privacy settings in Facebook, and social media etiquette (see Fig 2).


Fig. 2 Table from “Social Media in Clinical Practice” p 42

The chapters are succinctly written, well organized and replete with examples. I especially like the practical examples (see for instance Example #4).


Fig 3 Example of Smartphone App for consumers

Some tools are explained in more detail, e.g. the anatomy of a tweet or a stepwise description of how to launch a WordPress blog.
Most chapters end with a self-test (questions), next steps (encouraging you to put the theory into practice) and key points.

Thus, in many ways, a very useful book for clinical practice (also see the positive reviews on Amazon and Dean Giustini’s review at his blog).

Are there any shortcomings, apart from the minor language issues mentioned by Dean?

Personally, I find that the discussions of website quality concentrate a bit too much on formal quality (contact info, title, subtitle etc.). True, this is of utmost importance, but quality is also determined by content and clinical usefulness. Not all websites that are formally OK deliver good content, and vice versa.

As a medical librarian I pay particular attention to the search part, discussed in chapters 3 and 4.
Emphasis is put on how to create alerts in PubMed and Google Scholar, thus on the social media aspects. However, some of the searches shown wouldn’t make physicians very happy, even if used as an alert: who wants a PubMed alert for cardiovascular disease retrieving 1,870,195 hits? This is even more true for the PubMed search “genetics” (a rather meaningless yet non-comprehensive term).
More importantly, it is not explained when to use which search engine. I understand that a search course is beyond the scope of this book, but a subtitle like “How to Get Better at Searching Online?” suggests otherwise. At the very least there should be hints that searching may be more complicated in practice, preferably with links to sources and online courses. Getting too many hits, or the wrong ones, will only frustrate physicians (and discourage them from using the social media tools that are otherwise helpful).

But overall I find it a useful, clearly written and well-structured practical handbook. “Social Media in Clinical Practice” is unique of its kind; I know of no other book like it. Therefore I recommend it to all medical students and health care professionals who are interested in digital medicine and social media.

This book will also be very useful to clinicians who are not very fond of social media. Their reluctance may change, and their understanding of social media may develop or be enhanced.

Let’s face it: a good clinician can’t do without digital knowledge. At the very least, their patients use the internet, and clinicians must be able to act as gatekeepers, identifying and filtering trustworthy, credible and understandable information. Indeed, as Berci writes in his conclusion:

“it obviously is not a goal to transform all physicians into bloggers and Twitter users, but (..) each physician should find the platforms, tools and solutions that can assist them in their workflow.”

If you are not convinced, I would recommend reading the blog post at the Fauquier ENT blog (referred to by Bertalan in chapter 6, story #5), entitled: As A Busy Physician, Why Do I Even Bother Blogging?


Book information: (also see Amazon):

  • Title: Social Media in Clinical Practice
  • Author: Bertalan Meskó
  • Publisher: Springer London Heidelberg New York Dordrecht
  • 155 pages
  • ISBN 978-1-4471-4305-5
  • ISBN 978-1-4471-4306-2 (eBook)
  • ISBN-10: 1447143051
  • DOI 10.1007/978-1-4471-4306-2
  • $37.99 (Sept 2013) (paperback at Amazon)

Between the Lines. Finding the Truth in Medical Literature [Book Review]

19 07 2013

In the 1970s a study was conducted among 60 physicians and physicians-in-training. They had to solve a simple problem:

“If a test to detect a disease whose prevalence is 1/1000 has a false positive rate of 5 %, what is the chance that a person found to have a positive result actually has the disease, assuming that you know nothing about the person’s symptoms or signs?” 

Half of the “medical experts” thought the answer was 95%.
Only a small proportion, 18%, of the doctors arrived at the right answer of 2%.
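The right answer follows directly from Bayes’ theorem. As a minimal Python sketch (assuming, as the classic version of this problem does, a perfectly sensitive test):

```python
def ppv(prevalence, false_positive_rate, sensitivity=1.0):
    """P(disease | positive test) via Bayes' theorem."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * false_positive_rate
    return true_pos / (true_pos + false_pos)

result = ppv(prevalence=1 / 1000, false_positive_rate=0.05)
print(f"{result:.1%}")  # roughly 2%, not 95%
```

With 1000 people, about 1 truly has the disease, while roughly 50 of the 999 healthy people test falsely positive; hence only about 1 in 51 positives is real.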

If you are a medical expert who comes to the same faulty conclusion (or needs a refresher on how to arrive at the right answer), you might benefit from the book written by Marya Zilberberg: “Between the Lines. Finding the Truth in Medical Literature”.

The same is true for a patient whose doctor thinks he/she is among the 95% to benefit from such a test…
Or for journalists who translate medical news to the public…
Or for peer reviewers or editors who have to assess biomedical papers…

In other words, this book is useful for everyone who wants to be able to read “between the lines”. For everyone who needs to examine medical literature critically from time to time and doesn’t want to rely solely on the interpretation of others.

I hope that I didn’t scare you off with the above example. Between the Lines surely is NOT a complicated epidemiology textbook, nor a dull study book where you have to struggle through a lot of definitions, difficult tables and statistical formulas, and where each chapter is followed by a set of review questions that test what you learned.

This example is presented halfway through the book, at the end of Part I. By then you have enough tools to solve the question yourself. But even if you don’t feel like doing the exact calculation at that moment, you have a solid basis for understanding the bottom line: the (enormous) 93% gap (95% vs 2% of the people with a positive test considered truly positive) serves as the pool for overdiagnosis and overtreatment.

In the previous chapters of Part I (“Context”), you have learned about the scientific methods in clinical research, uncertainty as the only certain feature of science, the importance of denominators, outcomes that matter and outcomes that don’t, Bayesian probability, evidence hierarchies, heterogeneous treatment effects (does the evidence apply to this particular patient?) and all kinds of biases.

Most reviewers prefer Part I of the book. Personally I find Part II (“Evaluation”) just as interesting.

Part II deals with the study question and study design, the pros and cons of observational and interventional studies, validity, hypothesis testing and statistics.

Perhaps Part II is somewhat less narrative. Furthermore, it deals with tougher topics like statistics. But I find it very valuable for being able to critically appraise a study. I have never seen a better description of odds: somehow odds are easier to grasp if you substitute “horse A” and “horse B” for “treatment A” and “treatment B”, and “loss of a race” for “death”.
I knew the basic differences between cohort studies, case-control studies and so on, but I had never quite realized before that the odds ratio is the only measure of association available in a case-control study, and that case-control studies cannot estimate incidence or prevalence (as shown in a nice overview in Table 4).
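For readers who want to see the horse-race analogy in numbers, here is a small Python sketch; the 2×2 counts below are invented purely for illustration and are not from the book:

```python
# Hypothetical case-control counts: (exposed, unexposed).
cases = (30, 70)     # 30 exposed cases, 70 unexposed cases
controls = (10, 90)  # 10 exposed controls, 90 unexposed controls

def odds(events, non_events):
    """Odds = events against non-events, like a horse's wins against losses."""
    return events / non_events

# A case-control study samples on outcome, so it can only compare the odds
# of exposure among cases vs controls -- it cannot estimate incidence.
odds_ratio = odds(*cases) / odds(*controls)
print(round(odds_ratio, 2))  # (30/70) / (10/90) ≈ 3.86
```

In this invented example, the exposure is associated with roughly fourfold odds of being a case.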

Unlike many other books about reading medical articles, study designs or evidence-based medicine, Marya’s book is easy to read. It is written in a conversational tone, and statements are illustrated by current, appealing examples, like the overestimation of the risk of death from the H1N1 virus, breast cancer screening, and hormone replacement therapy.

Although my printed copy had its pages in the wrong order (page 136 next to page 13, etc.), I was able to read (and understand) a third of the book (the more difficult Part II) during a two-hour car trip…

Because this book is comprehensive, yet accessible, I recommend it highly to everyone, including fellow librarians.

Marya even mentions medical librarians as a separate target audience:

Medical librarians may find this book particularly helpful: Being at the forefront of evidence dissemination, they can lead the charge of separating credible science from rubbish.

(thanks Marya!)

In addition, this book may be indirectly useful to librarians as it may help to choose appropriate methodological filters and search terms for certain EBM-questions. In case of etiology questions words like “cohort”, “case-control”, “odds”, “risk” and “regression” might help to find the “right” studies.

By the way, Marya Zilberberg is @murzee on Twitter, and she writes at her blog Healthcare, etc.

p.s. 1 I want to apologize to Marya for writing this review more than a year after the book was published. For personal reasons I found little time to read and blog. Luckily the book has lost none of its topicality.

p.s. 2 Patients who are not very familiar with critical reading of medical papers might benefit from reading “Your Medical Mind” first [1].



The Scatter of Medical Research and What to do About it.

18 05 2012

Paul Glasziou, GP and professor in Evidence Based Medicine, co-authored a new article in the BMJ [1]. Similar to another paper [2] I discussed before [3], this paper deals with the difficulty for clinicians of staying up to date with the literature. But where the previous paper [2,3] highlighted the mere increase in the number of research articles over time, the current paper looks at the scatter of randomized controlled trials (RCTs) and systematic reviews (SRs) across different journals cited in one year (2009) in PubMed.

Hoffmann et al analyzed 7 specialties and 9 subspecialties that are considered the leading contributors to the burden of disease in high-income countries.

They followed a relatively straightforward method for identifying the publications. Each search string consisted of a MeSH term (controlled term) to identify the selected disease or disorders, a publication type [pt] to identify the type of study, and the year of publication. For example, the search strategy for randomized trials in cardiology was: “heart diseases”[MeSH] AND randomized controlled trial[pt] AND 2009[dp]. (When searching “heart diseases” as a MeSH term, narrower terms are also searched.) Meta-analysis[pt] was used to identify systematic reviews.
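As a sketch, search strings of this form can be composed programmatically; the little helper below simply reproduces the cardiology example from the text:

```python
def pubmed_query(mesh_term, publication_type, year):
    """Compose a PubMed search string in the form used by Hoffmann et al."""
    return f'"{mesh_term}"[MeSH] AND {publication_type}[pt] AND {year}[dp]'

q = pubmed_query("heart diseases", "randomized controlled trial", 2009)
print(q)
# "heart diseases"[MeSH] AND randomized controlled trial[pt] AND 2009[dp]
```

Swapping in a different MeSH term, publication type or year yields the query for any of the other (sub)specialties.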

Using this approach Hoffmann et al found 14,343 RCTs and 3,214 SRs published in 2009 in the selected (sub)specialties. There was a clear scatter across journals, but this scatter varied considerably among specialties:

“Otolaryngology had the least scatter (363 trials across 167 journals) and neurology the most (2770 trials across 896 journals). In only three subspecialties (lung cancer, chronic obstructive pulmonary disease, hearing loss) were 10 or fewer journals needed to locate 50% of trials. The scatter was less for systematic reviews: hearing loss had the least scatter (10 reviews across nine journals) and cancer the most (670 reviews across 279 journals). For some specialties and subspecialties the papers were concentrated in specialty journals; whereas for others, few of the top 10 journals were a specialty journal for that area.
Generally, little overlap occurred between the top 10 journals publishing trials and those publishing systematic reviews. The number of journals required to find all trials or reviews was highly correlated (r=0.97) with the number of papers for each specialty/ subspecialty.”

Previous work already suggested that this scatter of research has a long tail. Half of the publications appear in a minority of journals, whereas the remaining articles are scattered across many journals (see the figure below).
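The long-tail computation behind statements like “10 or fewer journals needed to locate 50% of trials” can be sketched as follows; the per-journal trial counts here are invented for illustration:

```python
def journals_for_coverage(counts, fraction=0.5):
    """Smallest number of journals whose trials cover the given fraction."""
    counts = sorted(counts, reverse=True)  # most prolific journals first
    target = fraction * sum(counts)
    covered = 0
    for n, c in enumerate(counts, start=1):
        covered += c
        if covered >= target:
            return n
    return len(counts)

# Hypothetical trials-per-journal distribution with a long tail.
counts = [40, 25, 10, 5, 5, 3, 3, 3, 2, 2, 1, 1]
print(journals_for_coverage(counts))  # 2: the top two journals cover half
```

Note how quickly the required number of journals grows once you want more than half: covering the tail means following many journals that each publish only a trial or two per year.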

Click to enlarge and see the legends at BMJ 2012;344:e3223 [CC]

The good news is that SRs are less scattered and that general journals appear more often in the top 10 journals publishing SRs. Indeed for 6 of the 7 specialties and 4 of the 9 subspecialties, the Cochrane Database of Systematic Reviews had published the highest number of systematic reviews, publishing between 6% and 18% of all the systematic reviews published in each area in 2009. The bad news is that even keeping up to date with SRs seems a huge, if not impossible, challenge.

In other words, it is not sufficient for clinicians to rely on personal subscriptions to a few journals in their specialty (which is common practice). Hoffmann et al suggest several solutions to help clinicians cope with the increasing volume and scatter of research publications.

  • a central library of systematic reviews (but according to the authors the Cochrane Library fails to fulfill such a role, because many reviews are out of date and are perceived as less clinically relevant)
  • a registry of planned and completed systematic reviews, such as PROSPERO (this makes it easier to locate SRs and reduces bias)
  • syntheses of evidence and synopses, like ACP Journal Club, which summarizes the best evidence in internal medicine
  • specialised databases that collate and critically appraise randomized trials and systematic reviews, such as one for physical therapy. In my personal experience, however, this database is often out of date and not comprehensive
  • journal scanning services like EvidenceUpdates, which scans over 120 journals, filters articles on the basis of quality, has practising clinicians rate them for relevance and newsworthiness, and makes them available as email alerts and in a searchable database. I use this service too, but besides the fact that not all specialties are covered, the rating of evidence may not always be objective (see a previous post [4])
  • The use of social media tools to alert clinicians to important new research.

Most of these are long-existing solutions that only partly, if at all, solve the information overload.

I was surprised that the authors didn’t propose the use of personalized alerts. PubMed’s My NCBI feature allows you to create automatic email alerts on a topic and to subscribe to electronic tables of contents (which could include ACP Journal Club). Suppose a physician browses 10 journals roughly covering 25% of the trials. He/she does not need to read all the other journals from cover to cover to avoid missing one potentially relevant trial. Instead, it is far more efficient to perform a topic search to filter relevant studies from the journals that seldom publish trials on the topic of interest. One could even use the search of Hoffmann et al to achieve this.* Although in reality, most clinicians will have narrower fields of interest than all studies about endocrinology or neurology.

At our library we are working on creating deduplicated, easy-to-read alerts that collate tables of contents of certain journals with topic (and author) searches in PubMed, EMBASE and other databases. There are existing tools that do the same.

Another way to reduce the individual reading load is to organize journal clubs or, even better, regular CATs (critically appraised topics). In the Netherlands, CATs are a compulsory item for residents. A few doctors do the work for many. Usually they choose topics that are clinically relevant (or for which the evidence is unclear).

The authors briefly mention that their search strategy might have missed some eligible papers and included some that are not truly RCTs or SRs, because they relied on PubMed’s publication type to retrieve RCTs and SRs. For systematic reviews this may be a greater problem than recognized, as the authors used meta-analysis[pt] to identify systematic reviews. Unfortunately PubMed has no publication type for systematic reviews, but it should be clear that there are many more systematic reviews than meta-analyses. Systematic reviews might even have a different scatter pattern than meta-analyses (i.e. the latter might be preferentially published in core journals).

Furthermore, not all meta-analyses and systematic reviews are reviews of RCTs (thus it is not completely fair to compare MAs with RCTs only). On the other hand, it is an omission of this study (not discussed) that only interventions are considered. Nowadays physicians have many other questions than those related to therapy, such as questions about prognosis, harm and diagnosis.

I did a little, imperfect search just to see whether the use of search terms other than meta-analysis[pt] would have any influence on the outcome. I searched for (1) meta-analysis[pt] and (2) systematic review[tiab] (title and abstract) in papers about endocrine diseases. Then I subtracted 1 from 2 (to analyse the systematic reviews not indexed as meta-analysis[pt]).
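That subtraction can be written as PubMed syntax; in this sketch I assume the MeSH term “endocrine system diseases” for the endocrine example (the exact term used is not reproduced in the text):

```python
# Base query for the topic and year (the MeSH term is an assumption).
base = '"endocrine system diseases"[MeSH] AND 2009[dp]'

# (1) Papers indexed as meta-analyses.
meta_analyses = f'{base} AND meta-analysis[pt]'

# (2) minus (1): self-described systematic reviews NOT indexed as meta-analyses.
extra_srs = f'{base} AND systematic review[tiab] NOT meta-analysis[pt]'
print(extra_srs)
```

Pasting the second query into PubMed retrieves exactly the reviews that a meta-analysis[pt]-only strategy misses.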



I analyzed the top 10/11 journals publishing these study types.

This little experiment suggests that:

  1. the precise scatter might differ per search: apparently the systematic review[tiab] search yielded different top 10/11 journals (for this sample) than the meta-analysis[pt] search (partially because Cochrane systematic reviews apparently don’t mention “systematic review” in their titles and abstracts?).
  2. the authors underestimate the number of systematic reviews: simply searching for systematic review[tiab] already found approximately 50% additional systematic reviews compared to meta-analysis[pt] alone.
  3. as expected (by me at least), many of the SRs and MAs were NOT dealing with interventions; see for instance the first 5 hits (out of 108 and 236, respectively).
  4. together, these findings indicate that the true information overload is far greater than shown by Hoffmann et al (not all systematic reviews are found, and of all available study designs only RCTs are searched).
  5. on the other hand, this indirectly shows that SRs are a better way to keep up to date than suggested: SRs also summarize non-interventional research (the ratio of SRs to individual RCTs is much lower than suggested).
  6. it also means that the role of Cochrane systematic reviews in aggregating RCTs is underestimated by the published graphs (the MA[pt] section is diluted with non-RCT systematic reviews; thus the proportion of Cochrane SRs among the interventional MAs becomes larger).

Well anyway, these imperfections do not contradict the main point of this paper: trials are scattered across hundreds of general and specialty journals, and “systematic reviews” (really meta-analyses) do reduce the extent of scatter, but are still widely scattered, mostly in different journals from those publishing the randomized trials.

Indeed, personal subscriptions to journals seem insufficient for keeping up to date.
Besides supplementing subscriptions with methods such as journal scanning services, I would recommend the use of personalized alerts from PubMed and several prefiltered sources, including an EBM search engine like TRIP.

*But I would broaden it to find all aggregate evidence, including ACP Journal Club, Clinical Evidence, syntheses and synopses, not only meta-analyses.

**I do appreciate that one of the co-authors is a medical librarian: Sarah Thorning.


  1. Hoffmann, T., Erueti, C., Thorning, S., & Glasziou, P. (2012). The scatter of research: cross sectional comparison of randomised trials and systematic reviews across specialties. BMJ, 344. DOI: 10.1136/bmj.e3223
  2. Bastian, H., Glasziou, P., & Chalmers, I. (2010). Seventy-Five Trials and Eleven Systematic Reviews a Day: How Will We Ever Keep Up? PLoS Medicine, 7(9). DOI: 10.1371/journal.pmed.1000326
  3. How will we ever keep up with 75 trials and 11 systematic reviews a day
  4. Experience versus Evidence [1]. Opioid Therapy for Rheumatoid Arthritis Pain.

Can Guidelines Harm Patients?

2 05 2012

Recently I saw an intriguing “personal view” in the BMJ written by Grant Hutchison, entitled “Can Guidelines Harm Patients Too?”. Hutchison is a consultant anesthetist with, as he calls it, chronic guideline fatigue syndrome. He underwent an acute exacerbation of his “condition” with the arrival of another set of guidelines in his email inbox. Hutchison:

On reviewing the level of evidence provided for the various recommendations being offered, I was struck by the fact that no relevant clinical trials had been carried out in the population of interest. Eleven out of 25 of the recommendations made were supported only by the lowest levels of published evidence (case reports and case series, or inference from studies not directly applicable to the relevant population). A further seven out of 25 were derived only from the expert opinion of members of the guidelines committee, in the absence of any guidance to be gleaned from the published literature.

Hutchison’s personal experience is supported by evidence from two articles [2,3].

One paper, published in JAMA in 2009 [2], concludes that ACC/AHA (American College of Cardiology / American Heart Association) clinical practice guidelines are largely developed from lower levels of evidence or expert opinion, and that the proportion of recommendations for which there is no conclusive evidence is growing. Only 314 of 2,711 recommendations (median, 11%) are classified as level of evidence A, i.e. based on evidence from multiple randomized trials or meta-analyses. The majority of recommendations (1,246/2,711; median, 48%) are level of evidence C, thus based on expert opinion, case studies, or standards of care. Strikingly, only 245 of 1,305 class I recommendations are based on the highest, level A, evidence (median, 19%).

Another paper, published in Archives of Internal Medicine in 2011 [3], reaches similar conclusions after analyzing the Infectious Diseases Society of America (IDSA) practice guidelines. Of the 4,218 individual recommendations found, only 14% were supported by the strongest (level I) quality of evidence; more than half were based on level III evidence only. As with the ACC/AHA guidelines, only a small part (23%) of the strongest IDSA recommendations were based on level I evidence (in this case ≥1 randomized controlled trial, see below). And here, too, the new recommendations were mostly based on level II and III evidence.

Although there is little to argue about Hutchison’s observations, I do not agree with his conclusions.

In his view, guidelines are equivalent to a bullet-pointed list or flow diagram, allowing busy practitioners to move on from practice based on mere anecdote and opinion. It therefore seems contradictory that half of the EBM guidelines are based on little more than anecdote (case series, extrapolation from other populations) and opinion. He then argues that guidelines, like other therapeutic interventions, should be considered in terms of the balance between benefit and risk, and that the risk associated with the dissemination of poorly founded guidelines must also be considered. One of those risks is that doctors will simply adhere to the guidelines, and may even change their own (adequate) practice in the absence of any scientific evidence against it. If a patient is harmed despite punctilious adherence to the guideline rules, “it is easy to be seduced into assuming that the bad outcome was therefore unavoidable”. But perhaps harm was done by following the guideline…

First of all, the overall evidence shows that adherence to guidelines can improve patient outcomes and provide more cost-effective care (Naveed Mustfa, in a comment, refers to [4]).

Hutchison’s piece is opinion-based and rather driven by (understandable) gut feelings and implicit assumptions that also surround EBM in general.

  1. First there is the assumption that guidelines are a fixed set of rules, like a protocol, and that there is no room for preferences (both of the doctor and the patient), interpretations and experience. In the same way as EBM is often degraded to “cookbook medicine”, EBM guidelines are turned into mere bullet pointed lists made by a bunch of experts that just want to impose their opinions as truth.
  2. The second assumption (shared by many) is that evidence-based medicine is synonymous with “randomized controlled trials”. In analogy, only those EBM guideline recommendations “count” that are based on RCTs or meta-analyses.

Before I continue, I would strongly advise all readers (and certainly all EBM and guideline skeptics) to read the excellent and clearly written BMJ editorial by David Sackett et al. that deals with the misconceptions, myths and prejudices surrounding EBM: Evidence based medicine: what it is and what it isn’t [5].

Sackett et al define EBM as “the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients” [5]. Sackett emphasizes that “Good doctors use both individual clinical expertise and the best available external evidence, and neither alone is enough. Without clinical expertise, practice risks becoming tyrannised by evidence, for even excellent external evidence may be inapplicable to or inappropriate for an individual patient. Without current best evidence, practice risks becoming rapidly out of date, to the detriment of patients.”

Guidelines are meant to give recommendations based on the best available evidence. Guidelines should not be a set of rules, set in stone. Ideally, guidelines have gathered evidence in a transparent way and make it easier for the clinicians to grasp the evidence for a certain procedure in a certain situation … and to see the gaps.

Contrary to what many people think, EBM is not restricted to randomized trials and meta-analyses. It involves tracking down the best external evidence there is. As I explained in #NotSoFunny #16 – Ridiculing RCTs & EBM, evidence is not an all-or-nothing thing: RCTs (if well performed) are the most robust, but if they are not available we have to rely on “lower” evidence (from cohort to case-control to case series, or even expert opinion).
On the other hand, RCTs are often not even suitable for answering questions in domains other than therapy (etiology/harm, prognosis, diagnosis): by definition, the level of evidence for these kinds of questions will inevitably be low*. Also, for some interventions RCTs are not appropriate or feasible, or too costly to perform (cesarean vs vaginal birth, experimental therapies, rare diseases; see also [3]).

It is also good to realize that guidance based on numerous randomized controlled trials is probably not, or only to a limited extent, applicable to groups of patients who are seldom included in RCTs: the cognitively impaired, patients with multiple comorbidities [6], old patients [6], children and (often) women.

Finally, not all RCTs are created equal (various forms of bias, surrogate outcomes, small sample sizes, short follow-up), and thus they should not all represent the same high level of evidence.*

Thus, in my opinion, low levels of evidence are not by definition problematic, even if they are the basis for strong recommendations, as long as it is clear how the recommendations were reached and as long as they are well underpinned (by whatever evidence or motivation). One could even see the exposed gaps in evidence as a positive thing, as they may highlight the need for clinical research in certain fields.

There is one BIG BUT: my assumption is that guidelines are “just” recommendations based on exhaustive and objective reviews of the existing evidence. No more, no less. This means that the clinician must have the freedom to deviate from the recommendations, based on his or her own expertise and/or the situation and/or the patient’s preferences. All the more so when the evidence on which strong recommendations are based is scant. Sackett already warned of the possible hijacking of EBM by purchasers and managers (and, may I add, health insurers and governmental agencies) to cut the costs of health care and to impose “rules”.

I therefore think it is odd that the ACC/AHA guidelines prescribe that class I recommendations SHOULD be performed/administered even if they are based on level C evidence (see Figure).

I also find it odd that different guidelines use different nomenclature. The ACC/AHA have class I, IIa, IIb and III recommendations and level A, B and C evidence, where level A represents sufficient evidence from multiple randomized trials and meta-analyses; whereas in the IDSA guidelines, the strength of recommendations ranges from A through C (or D/E for recommendations against use) and the quality of evidence ranges from level I through III, where I indicates evidence from (just) one properly randomized controlled trial. As explained in [3], this system was introduced to evaluate the effectiveness of preventive health care interventions in Canada (for which RCTs are apt).

Finally, guidelines and guideline makers should probably be more open to input and feedback from the people who apply these guidelines.


*The new GRADE (Grading of Recommendations Assessment, Development, and Evaluation) scoring system, which also takes good-quality observational studies into account, may offer a potential solution.

Another possibly relevant post at this blog: The Best Study Design for … Dummies

Taken from a summary of an ACC/AHA guideline. Click to enlarge.


  1. Hutchison, G. (2012). Guidelines can harm patients too BMJ, 344 (apr18 1) DOI: 10.1136/bmj.e2685
  2. Tricoci P, Allen JM, Kramer JM, Califf RM, & Smith SC Jr (2009). Scientific evidence underlying the ACC/AHA clinical practice guidelines. JAMA : the journal of the American Medical Association, 301 (8), 831-41 PMID: 19244190
  3. Lee, D., & Vielemeyer, O. (2011). Analysis of Overall Level of Evidence Behind Infectious Diseases Society of America Practice Guidelines Archives of Internal Medicine, 171 (1), 18-22 DOI: 10.1001/archinternmed.2010.482
  4. Menéndez R, Reyes S, Martínez R, de la Cuadra P, Manuel Vallés J, & Vallterra J (2007). Economic evaluation of adherence to treatment guidelines in nonintensive care pneumonia. The European respiratory journal : official journal of the European Society for Clinical Respiratory Physiology, 29 (4), 751-6 PMID: 17005580
  5. Sackett, D., Rosenberg, W., Gray, J., Haynes, R., & Richardson, W. (1996). Evidence based medicine: what it is and what it isn’t BMJ, 312 (7023), 71-72 DOI: 10.1136/bmj.312.7023.71
  6. Aylett, V. (2010). Do geriatricians need guidelines? BMJ, 341 (sep29 3) DOI: 10.1136/bmj.c5340

Silly Sunday #50: Molecular Designs & Synthetic DNA

23 04 2012

As a teenager I found it hard to picture the 3D structure of DNA, proteins and other molecules. Remember, we didn’t have computers then: no videos, no 3D pictures, no 3D models.

I tried to fill the gap by making DNA molecules out of (used) matches and colored clay, based on descriptions in dry (and dull 2D) textbooks, but you can imagine that these creative 3D clay figures bore little resemblance to the real molecular structures.

But luckily things have changed over the last 40 years. Not only do we have computers and videos, there are also ready-made molecular models, specially designed for education.

Oh, how I wish my chemistry teachers had had those DNA starter kits.

Hat tip: Joanne Manaster (@sciencegoddess) on Twitter.

Curious? Here is the Products Catalog of

Of course, such “synthesis” (copying) of existing molecules, though very useful for educational purposes, is overshadowed by the recent “CREATION of molecules other than DNA and RNA [xeno-nucleic acids (XNAs)] that can be used to store and propagate information and have the capacity for Darwinian evolution”.

But that is quite a different story.


Jeffrey Beall’s List of Predatory, Open-Access Publishers, 2012 Edition

19 12 2011

Perhaps you remember that I previously wrote [1] about non-existing and/or low-quality scammy open access journals. I specifically wrote about the Medical Science Journals of the series, which comprises 45 titles, none of which had published any article yet.

Another blogger, David M [2] also had negative experiences with fake peer review invitations from sciencejournals. He even noticed plagiarism.

Later I occasionally found other posts about open access spam, like those of Per Ola Kristensson [3] (specifically about the Bentham, Hindawi and InTech OA publishers), of Peter Murray-Rust [4], a chemist interested in OA (about spam journals and conferences, specifically Scientific Research Publishing), and of Alan Dove PhD [5] (specifically about The Journal of Computational Biology and Bioinformatics Research (JCBBR), published by Academic Journals).

But now it appears that there is an entire list of “Predatory, Open-Access Publishers”. This list was created by Jeffrey Beall, academic librarian at the University of Colorado Denver. He just updated the list for 2012 here (PDF-format).

According to Jeffrey, predatory open-access publishers

are those that unprofessionally exploit the author-pays model of open-access publishing (Gold OA) for their own profit. Typically, these publishers spam professional email lists, broadly soliciting article submissions for the clear purpose of gaining additional income. Operating essentially as vanity presses, these publishers typically have a low article acceptance threshold, with a false-front or non-existent peer review process. Unlike professional publishing operations, whether subscription-based or ethically-sound open access, these predatory publishers add little value to scholarship, pay little attention to digital preservation, and operate using fly-by-night, unsustainable business models.

Jeffrey recommends not doing business with the following (illegitimate) publishers, including submitting article manuscripts, serving on editorial boards, and buying advertising. According to Jeffrey, “there are numerous traditional, legitimate journals that will publish your quality work for free, including many legitimate, open-access publishers”.

(For sake of conciseness, I only describe the main characteristics, not always using the same wording; please see the entire list for the full descriptions.)

Watchlist: publishers that may show some characteristics of a predatory open-access publisher
  • Hindawi: way more journals than can be properly handled by one publisher
  • MedKnow Publications: vague business model; it charges for the PDF version
  • PAGEPress: many dead links, a prominent link to PayPal
  • Versita Open: paid subscription for the print form; unclear business model


How complete and reliable is this list?

Clearly, this list is quite exhaustive. Jeffrey did a great job listing many dodgy OA journals, and we should treat (many of) these OA publishers with caution. Another good thing is that the list is updated annually.

(The publisher described in my previous post is not (yet) on the list 😉 but I will inform Jeffrey.)

Personally, I would have preferred a distinction between really bogus or spammy journals and journals that merely have “too many journals to properly handle” or that ask (too much) money for subscriptions or from the author. The scientific content may still be good (enough).

Furthermore, I would rather see a neutral description of what exactly is wrong with a journal, especially because “Beall’s list” is a list and not a blog post (or is it?). Sometimes the description doesn’t convince me that the journal is really bogus or predatory.

Examples of subjective portrayals:

  • Dove Press:  This New Zealand-based medical publisher boasts high-quality appearing journals and articles, yet it demands a very high author fee for publishing articles. Its fleet of journals is large, bringing into question how it can properly fulfill its promise to quickly deliver an acceptance decision on submitted articles.
  • Libertas Academica: “The tag line under the name on this publisher’s page is “Freedom to research.” It might better say “Freedom to be ripped off.”
  • Hindawi  .. This publisher has way too many journals than can be properly handled by one publisher, I think (…)

I do like funny posts, but only if it is clear that the post is intended to be funny, like the one by Alan Dove PhD about JCBBR.

JCBBR is dedicated to increasing the depth of research across all areas of this subject.

Translation: we’re launching a new journal for research that can’t get published anyplace else.

The journal welcomes the submission of manuscripts that meet the general criteria of significance and scientific excellence in this subject area.

We’ll take pretty much any crap you excrete.

Hat tip: Catherine Arnott Smith, PhD at the MedLib-L list.

  1. I Got the Wrong Request from the Wrong Journal to Review the Wrong Piece. The Wrong kind of Open Access Apparently, Something Wrong with this Inherently…
  2. A peer-review phishing scam
  3. Academic Spam and Open Access Publishing
  4. What’s wrong with Scholarly Publishing? New Journal Spam and “Open Access”
  5. From the Inbox: Journal Spam
  6. Beall’s List of Predatory, Open-Access Publishers. 2012 Edition
  7. Silly Sunday #42 Open Access Week around the Globe

Happy Anniversary Highlight HEALTH, ScienceRoll & Sterile Eye!

13 12 2011

Starting a blog is easy, but maintaining one costs time and effort, especially when you have a job or are studying (and have a private life as well).

This blog will soon celebrate its 4th anniversary (February 2012).

I’m happy to notice that many established (bio)medical & library blogs, that inspired me to start blogging, are still around.

Like one of the greatest medical blogs, CasesBlog by Dr Ves Dimov. And the medlib blogs The Search Principle blog by Dean Giustini and the Krafty Librarian by Michelle Kraft.

All these blogs are still going strong.

The same is true for the blog ScienceRoll by Bertalan Mesko (emphasis on health 2.0), that celebrated its 5th anniversary last month. That same month Sterile Eye (Life, death and surgery through a lens) celebrated its 4th year of existence.

This month Highlight Health (main author Walter Jessen) celebrates its 5th anniversary.

And the nice thing is that Highlight Health celebrates this with prize pack giveaways.

There are 4 drawings, and each prize pack consists of the following:

All you have to do is subscribe to the blog by email alert. People like me who are already subscribers are also eligible to participate in the drawings (see this post for all the info).

With so many ‘golden oldies’ around, I wonder about you, my audience. Do you blog? And if you do, for how long? Please tell me in the poll below.

If you are a (bio)medical, library or science blogger (blogging in English), I would appreciate it if you could fill in this spreadsheet as well. Feel free to edit the spreadsheet and add the names of other bloggers too.

Grand Rounds Vol 8 nr 5: Data, Information & Communication

26 10 2011

Welcome to the Grand Rounds, the weekly summary of the best health blog posts on the Internet. I am pleased to host the Grand Rounds for the second time. The first time, 2 years ago, was theme-less, but during the round we took a trip around the library. Because, for those who don’t know me, after years of biomedical research I became a medical librarian. This also explains my choice for the current theme:


The theme is meant to be broad. According to Wikipedia:

Information in its most restricted technical sense is a message (utterance or expression) or collection of messages that consists of an ordered sequence of symbols, or it is the meaning that can be interpreted from such a message or collection of messages. Information can be recorded or transmitted (…) as signs, or conveyed as signals by waves. Information is any kind of event that affects the state of a dynamic system. (…) Moreover, the concept of information is closely related to notions of … communication, … data, knowledge, meaning, … perception … and especially entropy.

I am pleased that there were plenty of submissions on the topic. I love the creative way the bloggers used the theme “information”. In line with the theme, the information will be brought to you according to the Rule of Entropy: seemingly chaotic. Still, all the information is meaningful and often a pleasure to read. Please enjoy!


From: IBN-live (India): Book News: “Kama Sutra is about sexual & social relations”

IMAGES are a great way to convey information, especially if you don’t understand the language. The picture above is from the Kama Sutra, an ancient Indian Hindu work on human sexual behavior in Sanskrit literature. Did you know the original Kama Sutra is not all about sex and does not have any pictures? Only words, no graphics. And sadly, as a text, it isn’t widely read.

Yes, we start our trip where it ended last week, in INDIA

Our host of last week, Sumer Sethi of Sumer’s Radiology Site, shows very clear (MRI) images of partially recanalized internal jugular vein thrombosis in a patient with MS, possibly supporting the theory that MS is a result of chronic venous insufficiency. As readers of this blog know, Laika is not impressed by n=1 data, although it may be a good starting point. However, Sumer underpins this link with a paper in J Neurol Neurosurg Psychiatry 2009. Still, a quick look at the citing papers shows that many new studies don’t confirm the association of MS with cerebrospinal venous insufficiency…

Another great radiologist, also from India, is Vijay Sadasivam (@scanman). No recent posts, but at Scanman’s Casebook you will find an archive of interesting radiological cases, in the form of case reports.

The quite tech savvy surgeon Dr. Dheeraj (aka Techknowdoc) explores the alternatives to the invasive and uncomfortable colonoscopy procedure at Techknowdoc’s Surgical Adventures! This post is a short illustrated guide, visualizing the differences between regular colonoscopy, capsule endoscopy and Virtual Colonoscopy. It is not hard to imagine which approach people would prefer.

Pranab (aka Skepticdoctor) makes an urgent appeal to fellow Indians to help Amit Gupta and other Indian people to get a bone marrow transplant when they need one. Amit has Acute Leukemia, but South Asians are very poorly represented in bone marrow registries, so his odds of getting a match off the registries in the US are slim. The chances are even worse for the less well-off Indians. Read at Scepticemia how you can help. For Amit, for India, for you, or worse, someone you love more than yourself….

Dr. Jen Gunter ridicules Cosmo’s to-go version of the Kama Sutra in a short series, for its “sex positions of the day” are just offensive alliteration and woeful ignorance of female anatomy… Looking up medical information is the 3rd most common online activity. While there are good sites with great information that can help people be empowered about their health, there are also tons of terrible sites marred by bias and rife with the stench of snake oil. In another post at Dr. Jen Gunter (wielding the lasso of truth), Jen reveals 10 red flags that will help you separate the wisdom from the woo.


Yes, a picture is worth a thousand words. And this is also true for other audiovisual arts. 

Yet some medical bloggers master the art of storytelling: they convey events in words, images and sounds, and here words have the same powerful strength. Often the posts of these storytellers are about communication, and they know how to communicate it.

One of the master storytellers is Bongi, a general surgeon from South Africa. He submitted the post die taal (that language), which is clearly about communication, but in a language (“Afrikaans”) that I can understand but many of you can’t. Therefore I chose another post at Other Things Amanzi, which is also about communication: “It’s all in the detail”.

Another great storyteller, and the winner of the best literary medical blog category of the Medgadget contest in 2009 and 2010, is StorytellERdoc. In the beautiful post The Reminder – EKG #6, he tells us how the 6th abnormal EKG in a resident’s presentation brought back memories to the technician who made that EKG: “There is something more important about this EKG than its tracing, I began” ….

Robbo (Andrew Roberts) is a pharmacist from one of the most remote parts of Australia working full time in Aboriginal Health. His blog BitingTheDust often covers topics like aboriginal art and pharmacy. There is also a category “information-resources”. His latest post in this category explains how condoms are made and how they work. A video goes with it.

Øystein of  The Sterile Eye (Life, death and surgery through a lens) uses photos throughout his blog. His latest post is about a brochure “LEICA – Fotografie in der Medizin” (Photography in Medicine) that was published by Leitz in 1961.

Another blogger, unique in his kind, “raps” his stories. Yes, I’m talking about Zubin, better known as ZDoggMD. Watch how he and his colleagues rap “Doctors Today!”, in which he “informs” folks of what it’s like to actually practice primary care medicine on the front lines. Want to know more about this medical rapper? Then listen to this interview with a med-student-run radio show (RadioRounds). It’s about using video to “inform” patients and healthcare providers about health-related issues in a humorous way.

Movies are also a good way to “tell a story” and pass information. Ramona Bates reviews the Lifetime’s Movie “Five” at her blog Suture for a Living. Five is an anthology of five short very emotional (but not sentimental) films exploring the impact of breast cancer on people’s lives.

We have had pictures, music, videos and movies as data carriers. But here is a post that is based on the good old book. Dr. Deborah Serani (who has a blog of her own: Dr. Deb: Psychological Perspectives) submits a review from PsychCentral about her new book “Living with Depression.” My first intuitive response: how can a psychologist or psychoanalyst write about “living with“. But it seems that Deborah Serani has faced a lifelong struggle with depression herself. This memoir/self help book seems a great resource for anyone in the health field looking for information about mood disorders, treatments and recommendations. The review makes me want to read this book.


What about social media as a tool for medical communication and a source of information?

At Diabetes Mine, Allison B. and Amy Tenderich review numerous new mobile apps for managing diabetes. Their reviews “Diabetes? There’s An App For That” and “Glooko: iPhone Diabetes Logging Made Super-Easy” may help diabetes patients choose among the bevy of diabetes apps.

Twitter is often seen as offering more noise than signal, but there is valid medical data to be uncovered. Ryan DuBosar at the ACP Internist blog highlights how a researcher uses Twitter to track attitudes about vaccination and how they correlate with vaccination rates. The study adds to a growing body of evidence that social networking can be used to track diseases and natural disasters that affect public health.

Hot from the press: I can’t resist including a post from the web 2.0 pioneer Dr. Ves at CasesBlog. Ves Dimov usually writes many short posts, but today he explains social media in medicine in depth and guides you through “How to be a Twitter superstar and help your patients and your practice”. According to his interesting concept, two cycles, the Cycle of Patient Education and the Cycle of Online Information and Physician Education, work together like interlocking cogwheels.

Mayo Clinic started using social media for communication with patients well before all the recent hype and it organized tweetcamps back in 2009. David Harlow made the pilgrimage to Rochester, MN and spoke at the Mayo Clinic Center for Social Media’s Health Care Social Media Summit last week. According to David “A ton of information was presented, through traditional channels and through some multimedia demos as well”. He shares conference highlights in this post at HealthBlawg, like “It is impossible to transplant a successful program from one location to another without taking into account myriad local conditions”. And “health care providers will have to do more with less”. Therefore e-Patient Dave suggests in his closing keynote to “Let Patients Help”.

Nicholas Fogelson of Academic OB/GYN notes that an operating room without incentives is very expensive. He proposes to install a cheap digital toteboard in every operating room in the USA that would display how many dollars have been spent on that case at that moment. The idea is that surgeons who know exactly what they are spending would compete to spend less wherever they could.

According to Bryan Vartabedian, social and technological innovations are slowly changing doctors from analog physicians into digital physicians. He mentions 6 differences between these doctors. The first is that the digital physician’s information consumption is web-based, while the analog doctor consumes information through paper books and journals, often saying curious things like, “I like the smell of paper” or “I’ve gotta be able to hold it.” By the way, Bryan’s blog 33 Charts is all about social media and medicine.

Blogging doctors are digital doctors per definition, but that doesn’t mean they don’t want to discuss things and see each other in real life. Dr. Val of Better Health and cofounder of this Grand Rounds announces a blog conference in Los Angeles, the Blog World Expo, on November 4th, 2011. Her talk is about “physicians engaging online in social health”, but she is actually hoping that many members of the medical blogging community will be out there IRL! At her blog you can get discount tickets.

The online presence of doctors in social media can have serious drawbacks. The post by Anne Marie Cunningham about derogatory and cynical humour displayed by medical personnel on Twitter and Facebook has made it to the Daily Telegraph, other UK newspapers, and to my blog…. This post at Wishful thinking in medical education is a must-read for healthcare providers embracing social media.

Many physicians have an online presence, but do they really use social media for decision making, wonders Chris Nickson. From his post and the ensuing reactions at Life in the Fast Lane, it appears that tools like Twitter and the comments sections on blogs enable a constant, ongoing dialogue with emergency physicians and critical care experts around the world regarding puzzling clinical issues. Rarely, however, is there a direct ‘tweet’ for clinical help. Rather, Twitter contributes to the serendipitous discovery of relevant and significant information.

Perhaps direct clinical questions are not asked because Twitter (and Facebook to some extent) are open social media. Bertalan Mesko of ScienceRoll mentions that some French doctors actually perform case presentations on Google+, taking advantage of the very simple privacy settings of Google+. They upload information about the case, discuss it with other peers and get to a final diagnosis.

E-Patient Dave announced a seven-hour event about information transfer during transitions of care. This event was webcast, tweeted and discussed on Google+ (also see Brian Ahier’s post about it on Government Health IT). Dave gives some examples highlighting that without reliable information transfer, the care transition can become dangerous. Yes, good IT can help.


We now arrive at a clinical librarian topic, medical information via databases, journals and the role of EBM.

The first post bridges this and the previous topic. Jon Brassey is co-founder of the TRIP database, a clinical search tool designed to rapidly identify the highest-quality evidence for clinical practice. At his blog Liberating the Literature he expresses his view that search is, at best, a partial solution. He is passionate about answering clinicians’ questions and would rather see an answer machine than a search engine. Jon is very tempted to allow users to upload their own Q&As, thereby creating an open repository of clinical Q&As. I am more skeptical, because this kind of EBM sharing might come at the expense of the quality of the evidence.

What do you think? Can social media and EBM reinforce each other or not? Please tweet your ideas to Anabel Bentley (@doctorblogs at Twitter) who is giving a talk at Evidence 2011 (#ev2011) tomorrow on social media & EBM and asks for your input. You might also want to read my older post about The Web 2.0-EBM Medicine split.

Dean Giustini reviews PubMed Health at The Search Principle Blog. Dean describes PubMed Health as a consumer version of PubMed: a metasearch tool that gathers evidence from the Cochrane Collaboration, NICE and other EBM sources to see clinical studies and “what works” in human health. One major benefit of PubMed Health is that any search performed on PubMed Health also runs in PubMed. Sounds worth trying.

The invitation to join the editorial board of a relatively new online, open access journal, without receiving any compensation, triggered Skeptical Scalpel to ponder the tangible benefits for open access publishers (coined “predatory open access” by a commenter) and how many journals are really needed. Who has the time or interest to read 25 journals on a relatively specialized topic? And what about the quality of the articles in all these journals?

Indeed, as The Krafty Librarian explains, the “good guys” (open access) make just as much profit as the “bad guys”: both are for-profit. Open access is not the panacea that many think it is.

Tasha Stanton of Body in Mind asks the intriguing question of what to do when systematic reviews on the same topic don’t all reach the same conclusions, whereas you would expect them to collate the same evidence. Tasha finds this disconcerting, as for some conditions it could take ages before we can ‘trust’ the evidence. In the example discussed here, an umbrella review was helpful in assessing the evidence. Also, the quality of systematic reviews is improving.


From: as seen at Science Based Medicine

Many people think screening is always a good thing and will prevent or cure a disease. But not every test is a good test and often there are both harms and benefits. It is difficult for patients to understand the true value of tests. 

Margaret Polaneczky, MD was touched by a beautiful essay in the NY Times written by a mother of a child born with Tay Sachs disease. While the mother in her loved the essay, the doctor in her cringed, because a single paragraph about the mother’s experience with prenatal screening had the potential to misinform and even frighten readers. Margaret writes a bit of a primer on Tay Sachs screening at the Blog That Ate Manhattan, mainly to set realistic expectations about what prenatal testing can and cannot accomplish.

David Williams at the Health Business Blog reasons that the US Preventive Services Task Force (USPSTF) recommendations against routine use of the PSA blood test in healthy men should not have been delayed because of the firestorm of controversy created by the 2009 screening mammography guidelines… Because, uh-oh, well, PSA testing is different (and David is right)… It’s all about what kind of info we can expect from screening and where it leads us.

This month is breast cancer awareness month, meant to highlight issues of breast cancer and call attention to new discoveries about breast cancer. Personally, I have mixed feelings about the “pink ribbon exploitation” of this month, but David Gorski at Science-Based Medicine points at a worse misuse: quacks seize the opportunity to spread their message against science-based modalities for the detection and treatment of breast cancer and to promote their “alternative” methods (see Fig. above).


Dr Shock MD PhD reviews a Dutch trial that shows that availability bias contributes to diagnostic errors made by physicians. Availability bias means that a disease comes more easily to the mind of a doctor who diagnoses this disease more often. This study also suggests that analytical or reflective reasoning may help to counteract this bias.

In an intriguing post counseling psychologist Will Meek, PhD covers some of the recent research on two information processing systems as identified by Daniel Kahneman: Intuition and Reasoning. A simple experiment confirms (in my case) that we use intuition for most of the day, and occasionally use reasoning to answer more complex problems. Some people may also frame this as “head vs heart”. Both systems have their pros and cons and both are needed to make good decisions. Otherwise common problems can arise.

David Bradley of ScienceBase discusses recent research by Gallant and colleagues who were able to reconstruct a video image presented to a subject in a functional MRI machine. David dreams of uploading our dreams to Youtube and of developing a mind-machine interface to allow people with severe disabilities to communicate their thoughts and control a computer or equipment. But David is more of a scientist than a dreamer and he interviews Gallant to find out more about the validity of the technique.

Computational biologist Walter Jessen highlights “National Biomedical Research Day” at Highlight HEALTH. “National Biomedical Research Day” was proclaimed by Bill Clinton in 1993, on the 160th anniversary of Nobel’s birth. This day celebrates the central role of biomedical research in improving human health and longevity.

This image was paired with the story: Insurers Shun Those Taking Certain Meds

Philip Hickey at Behaviorism and Mental Health discusses homosexuality. Philip: “homosexuality is a complex phenomenon which defies simplistic explanations. Unfortunately in this field valid information and communication often take a back seat to bigotry and prejudice.”

In his post “Want go Dutch…or German…or French?” at HUB’s LIST of medical fun facts, Herbert Mathewson, MD argues that before trying to copy other nations’ health care systems, we should probably actually learn about them. The outcomes of the Dutch switch from a system of mandatory social insurance administered by nonprofit sick funds to mandatory basic insurance that citizens must buy from private insurance companies (“managed competition”) are appalling! I can imagine that the idea that the Dutch reforms provide a successful model for U.S. Medicare seems bizarre. (Herbert’s post is based on the NEJM article “Sobering Lessons from the Netherlands”.)

Henry Stern of InsureBlog notes that as far as RomneyCare© (Massachusetts health care reform) is concerned it’s not so much lack of information per se that’s the problem. It’s information that’s wrong that gets you in trouble.

Robert Centor of Medrants simply submitted one sentence:
“I am a physician, not a provider, and Groopman agrees.”
This distinction between physicians and providers is similar to the distinction between consumers and patients, and I agree.

Rich Fogoros (DrRich) of The Covert Rationing Blog discusses a recent article in the New York Times about whether nurses with a doctorate degree ought to be addressed as “doctor.” Most doctors think calling a nurse “doctor” is inappropriate and confusing for patients.
A medical student running the blog The Reflex Hammer agrees: medical students with a doctoral degree don’t introduce themselves as “Doctor” to a patient either, do they?
Dr. Rich, an old hand, thinks otherwise. While it is indeed comforting that doctors should be so concerned about patients knowing everything they’re supposed to know, the fact (according to Dr. Rich) is that the doctor-nurse controversy is a distraction.

Note: this is a librarian!!

And of course you always hope that you find the information you need or that you can inform people the right way.

Medaholic wonders whether you would still become a medical doctor if you knew that it didn’t pay as much. What sort of information would help you determine whether this is a career worth pursuing?

The post, by Chris Langston, at the John A. Hartford Foundation blog, Health AGEnda details how interested health professionals can get information about how to apply for a new fellowship with the Center for Medicare & Medicaid Innovations office, and urges health professionals interested in improving health care for older adults to apply.

Hospital antimicrobial stewardship programs are prompting more appropriate prescribing of antibiotics, leading to improved patient care, less microbial resistance and lower costs, three studies show. The trick is how to convey this information so hospitals will implement these programs, as only one-third of U.S. facilities currently do. Read more at ACP Hospitalist, in the second contribution of Ryan DuBosar to this round.

We all know that adherence to prescriptions is a problem. But will the Star Ratings system increase adherence? The big question, according to George van Antwerp, author of Enabling Healthy Decisions, is whether consumers care about Star Ratings or just focus on the lowest price point and access to pharmacies or specific medications.

Louise of the Colorado Health Insurer Insider summarizes her submission quite aptly: “Our submission is about the new Health Insurance Exchanges that will be starting here in the US soon. This post discusses how consumers will get INFORMATION about the health plans through the exchanges. Currently, consumers get their information through health insurance brokers or directly through the insurance carrier. If there are people to answer questions for consumers with the exchanges, how will the plans be more or less expensive?”

The post that Reflex Hammer submitted (the one above was just picked by me) concerns informing young children about vegetables. A few weeks ago he and a classmate were invited to give a presentation to 1st graders at an inner-city school. Wishing to combat obesity, they developed a lesson plan about vegetables. They were heartened by how much the adorable kids already knew about vegetables and how enthusiastic they became about eating their greens. An adorable initiative and a great post to end this Grand Rounds, since it illustrates the importance of doctors who enjoy taking the time to inform people.

I just want to mention one other post, by Mike Cadogan at Life in the Fast Lane. Mike doesn’t blog a lot lately, because he is preparing presentations for an important Emergency Medicine meeting. But Mike does share some of this journey with us in The 11 Phases Of Grief Presentation Preparation. Reading these 11 stages, the similarities between writing a lecture and writing for Grand Rounds struck me. Except that beer had to be replaced by wine…

Mike is in stages 7-9; I am in stages 10-11. Stage 11 is Evaluation: what will I do differently next time? First, I won’t go for two blog carnivals at the same time, I won’t plan Grand Rounds when I’m away for the weekend* (I just need a lot of time), and I should refrain from adding posts that weren’t even submitted…

Will you remind me next time?

I hope that you enjoyed this Grand Rounds and that it wasn’t too much information. I enjoyed reading and compiling all our posts!


Evidence Based Point of Care Summaries [2] More Uptodate with Dynamed.

18 10 2011

This post is part of a short series about Evidence Based Point of Care Summaries, or POCs. In this series I will review 3 recent papers that objectively compare a selection of POCs.

In the previous post I reviewed a paper from Rita Banzi and colleagues from the Italian Cochrane Centre [1]. They analyzed 18 POCs with respect to their “volume”, content development and editorial policy. There were large differences among POCs, especially with regard to evidence-based methodology scores, but no product appeared the best according to the criteria used.

In this post I will review another paper by Banzi et al, published in the BMJ a few weeks ago [2].

This article examined the speed with which EBP-point of care summaries were updated using a prospective cohort design.

First the authors selected all the systematic reviews signaled by the American College of Physicians (ACP) Journal Club and by Evidence-Based Medicine Primary Care and Internal Medicine from April to December 2009. In the same period the authors selected all the Cochrane systematic reviews labelled as “conclusion changed” in the Cochrane Library. In total 128 systematic reviews were retrieved: 68 (53%) from the literature surveillance journals and 60 (47%) from the Cochrane Library. Starting two months after the collection began (June 2009), the authors screened the POCs monthly for a year, looking for citations of the 128 identified systematic reviews.

Only the 5 POCs that ranked in the top quarter for at least 2 (out of 3) desirable dimensions were studied, namely: Clinical Evidence, Dynamed, EBM Guidelines, UpToDate and eMedicine. Surprisingly, eMedicine was among the selected POCs, despite scoring only 1 out of a maximum of 15 points for EBM methodology. One would think that evidence-based-ness is a fundamental prerequisite for EBM-POCs…?!

Results were represented as a (rather odd, but clear) “survival analysis” (“death” = a citation in a summary).

Fig.1 : Updating curves for relevant evidence by POCs (from [2])

I will be brief about the results.

Dynamed clearly beat all the other products in updating speed.

Expressed in figures, the updating speed of Dynamed was 78% and 97% greater than that of EBM Guidelines and Clinical Evidence, respectively. Dynamed had a median citation time of around two months and EBM Guidelines of around 10 months, quite close to the limit of the follow-up; the citation rates of the other three point of care summaries (UpToDate, eMedicine, Clinical Evidence) were so slow that they exceeded the follow-up period and the authors could not compute a median.
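To make the notion of a “median citation time” concrete, here is a small sketch of my own (with invented numbers, not data from the paper): each review is either cited after some number of months or is still uncited (censored) when the follow-up ends, and the median is only defined when at least half of the reviews were cited within the follow-up.

```python
def median_citation_time(months_to_citation):
    """Median months until a systematic review is cited in a summary.

    months_to_citation: months until citation, or None if the review was
    still uncited at the end of follow-up (a censored observation).
    Returns None when fewer than half of the reviews were cited, because
    the median then lies beyond the follow-up period (as happened for
    UpToDate, eMedicine and Clinical Evidence in the study).
    """
    n = len(months_to_citation)
    events = sorted(m for m in months_to_citation if m is not None)
    k = (n - 1) // 2  # first index at which >= 50% of reviews are cited
    if len(events) <= k:
        return None
    return events[k]

fast = [1, 2, 2, 3, 4, None]   # 5 of 6 reviews cited: median computable
slow = [10, None, None, None]  # 3 of 4 still uncited at end of follow-up
print(median_citation_time(fast))  # 2
print(median_citation_time(slow))  # None
```

The `None` case is exactly why the authors could not report a median for the three slowest products.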

Dynamed outperformed the other POCs in updating systematic reviews regardless of the route. EBM Guidelines and UpToDate had similar overall updating rates, but Cochrane systematic reviews were more likely to be cited by EBM Guidelines than by UpToDate (odds ratio 0.02, P&lt;0.001). Perhaps not surprising, as EBM Guidelines has a formal agreement with the Cochrane Collaboration to use Cochrane content and label its summaries as “Cochrane inside.” On the other hand, UpToDate was faster than EBM Guidelines in updating systematic reviews signaled by the literature surveillance journals.

Dynamed‘s higher updating ability was not due to a difference in identifying important new evidence, but to the speed with which this new information was incorporated in their summaries. Possibly the central updating of Dynamed by the editorial team might account for the more prompt inclusion of evidence.

As the authors rightly point out, slowness in updating could mean that new relevant information is ignored and could thus “affect the validity of point of care information services”.

A slow updating rate is a more serious problem for POCs that “promise” to “continuously update their evidence summaries” (EBM Guidelines) or to “perform a continuous comprehensive review and to revise chapters whenever important new information is published, not according to any specific time schedule” (UpToDate). (See the table with descriptions of updating mechanisms.)

In contrast, eMedicine doesn’t provide any detailed information on its updating policy, another reason it doesn’t belong in this list of best POCs.
Clinical Evidence, however, clearly states: “We aim to update Clinical Evidence reviews annually. In addition to this cycle, details of clinically important studies are added to the relevant reviews throughout the year using the BMJ Updates service.” But BMJ Updates is not considered in the current analysis. Furthermore, patience is rewarded with excellent and complete summaries of evidence (in my opinion).

Indeed, a major limitation of the current (and the previous) study by Banzi et al [1,2] is that they looked at quantitative aspects and items that are relatively “easy to score”, like “volume” and “editorial quality”, not at the real quality of the evidence (see my previous post).

Although the findings were new to me, others have recently published similar results (studies were performed in the same time-span):

Shurtz and Foster [3] of the Texas A&M University Medical Sciences Library (MSL) also sought to establish a rubric for evaluating evidence-based medicine (EBM) point-of-care tools in a health sciences library.

They, too, looked at editorial quality and speed of updating, as well as content coverage, search options, quality control, and grading.

Their main conclusion is that “differences between EBM tools’ options, content coverage, and usability were minimal, but that the products’ methods for locating and grading evidence varied widely in transparency and process”.

This is in line with what Banzi et al reported in their first paper. They also share Banzi’s conclusion about differences in speed of updating:

“DynaMed had the most up-to-date summaries (updated on average within 19 days), while First Consult had the least up to date (updated on average within 449 days). Six tools claimed to update summaries within 6 months or less. For the 10 topics searched, however, only DynaMed met this claim.”

Table 3 from Shurtz and Foster [3] 

Ketchum et al [4] also conclude that DynaMed had the largest proportion of current (2007-2009) references (170/1131, 15%). In addition they found that Dynamed had the largest total number of references (1131/2330, 48.5%).

Yes, you might have guessed it: the paper by Andrea Ketchum is the 3rd paper I’m going to review.

I also recommend reading the paper by the librarians Shurtz and Foster [3], which I found along the way. It has too much overlap with the Banzi papers to devote a separate post to it. Still, it provides better background information than the Banzi papers, focuses on POCs that claim to be EBM, and doesn’t try to weigh one element over another.


  1. Banzi, R., Liberati, A., Moschetti, I., Tagliabue, L., & Moja, L. (2010). A Review of Online Evidence-based Practice Point-of-Care Information Summary Providers Journal of Medical Internet Research, 12 (3) DOI: 10.2196/jmir.1288
  2. Banzi, R., Cinquini, M., Liberati, A., Moschetti, I., Pecoraro, V., Tagliabue, L., & Moja, L. (2011). Speed of updating online evidence based point of care summaries: prospective cohort analysis BMJ, 343 (sep22 2) DOI: 10.1136/bmj.d5856
  3. Shurtz, S., & Foster, M. (2011). Developing and using a rubric for evaluating evidence-based medicine point-of-care tools Journal of the Medical Library Association : JMLA, 99 (3), 247-254 DOI: 10.3163/1536-5050.99.3.012
  4. Ketchum, A., Saleh, A., & Jeong, K. (2011). Type of Evidence Behind Point-of-Care Clinical Information Products: A Bibliometric Analysis Journal of Medical Internet Research, 13 (1) DOI: 10.2196/jmir.1539
  5. Evidence Based Point of Care Summaries [1] No “Best” Among the Bests?
  6. How will we ever keep up with 75 Trials and 11 Systematic Reviews a Day?
  7. UpToDate or Dynamed? (Shamsha Damani)
  8. How Evidence Based is UpToDate really?


Call for Submissions: Medical Grand Rounds at Laika’s MedLibLog

18 10 2011

Grand Rounds is a weekly round up of the best health blog posts on the Internet. Each week a different blogger takes turns hosting and summarizing the best submissions of the week.

October 25th I will be your host. Again… for I have hosted Grand Rounds once before; then we made a trip around the library.

This time the theme will be “INFORMATION”.

Difficult? Not at all. Almost anything may fit this theme. Examples: searching for information, information overload, lack of information, misinformation, the hardest information you ever had to share, the way the doctor (mis)informed you about a disease, how pharma deals with information, the way information is interpreted (you can also choose psychiatric topics here). Nice or noteworthy articles or books you read. Or you may review an app, Web 2.0 tools, social media, data carriers. Ah well, if you sell it the right way and your post is of good quality, I will accept almost anything…

I have one slight problem though. Grand Rounds is traveling all the way from India to the Netherlands this week and I am away for the weekend. You would help me tremendously if you submit your post this Tuesday or Wednesday!

Official Deadline: Sunday October 23rd, 20:00 Central European Time. This is 14:00 EDT (NY).

Please Email your submissions to:

And include:

  •  “Submission for Grand Rounds” in the subject line of your e-mail.
  • Your name (blog author), the name of your blog, and the URL of your specific blog-post submission.
  • A short summary (1 to 3 sentences) of your blog post.

I look forward to receiving your submissions and featuring them here next week. Thank you!

Jacqueline aka Laika.

Photo Credits (CC):  Picture by mag3737 (Flickr)

Evidence Based Point of Care Summaries [1] No “Best” Among the Bests?

13 10 2011

For many of today’s busy practicing clinicians, keeping up with the enormous and ever-growing amount of medical information poses substantial challenges [6]. It’s impractical to do a PubMed search to answer each clinical question and then synthesize and appraise the evidence, simply because busy health care providers have limited time and many questions per day.

As repeatedly mentioned on this blog ([6,7]), it is far more efficient to try to find aggregate (or pre-filtered or pre-appraised) evidence first.

Haynes ‘‘5S’’ levels of evidence (adapted by [1])

There are several forms of aggregate evidence, often represented as the higher layers of an evidence pyramid (because they aggregate individual studies, represented by the lowest layer). There are confusingly many pyramids, however [8], with different kinds of hierarchies based on different principles.

According to the “5S” paradigm [9] (now evolved into a 6S model [10]), the peak of the pyramid consists of the ideal, but not yet realized, computer decision support systems that link individual patient characteristics to the current best evidence. According to the 5S model the next best sources are evidence-based textbooks.
(Note: EBM and textbooks almost seem a contradiction in terms to me; personally I would not put many of the POCs somewhere at the top. Also see my post: How Evidence Based is UpToDate really?)

Whatever their exact place in the EBM-pyramid, these POCs are helpful to many clinicians. There are many different POCs (see The HLWIKI Canada for a comprehensive overview [11]) with a wide range of costs, varying from free with ads (e-Medicine) to very expensive site licenses (UpToDate). Because of the costs, hospital libraries have to choose among them.

Choices are often based on user preferences and satisfaction, balanced against costs, scope of coverage and so on. Such choices tend to be subjective, and people tend to stick to the databases they know.

Initial literature about POCs concentrated on user preferences and satisfaction. A New Zealand study [3] among 84 GPs showed no significant difference in preference for, or usage levels of DynaMed, MD Consult (including FirstConsult) and UpToDate. The proportion of questions adequately answered by POCs differed per study (see introduction of [4] for an overview) varying from 20% to 70%.
McKibbon and Fridsma ([5] cited in [4]) found that the information resources chosen by primary care physicians were seldom helpful in providing the correct answers, leading them to conclude that:

“…the evidence base of the resources must be strong and current…We need to evaluate them well to determine how best to harness the resources to support good clinical decision making.”

Recent studies have tried to objectively compare online point-of-care summaries with respect to their breadth, content development, editorial policy, speed of updating and the type of evidence cited. I will discuss 3 of these recent papers, reviewing each paper separately. (My posts tend to be pretty long and in-depth, so in an effort to keep them readable I try to cut down where possible.)

Two of the three papers are published by Rita Banzi and colleagues from the Italian Cochrane Centre.

In the first paper, reviewed here, Banzi et al [1] first identified English Web-based POCs using Medline, Google, librarian association websites, and information conference proceedings from January to December 2008. In order to be eligible, a product had to be an online-delivered summary that is regularly updated, claims to provide evidence-based information and is to be used at the bedside.

They found 30 eligible POCs, of which the following 18 databases met the criteria: 5-Minute Clinical Consult, ACP-Pier, BestBETs, CKS (NHS), Clinical Evidence, DynaMed, eMedicine,  eTG complete, EBM Guidelines, First Consult, GP Notebook, Harrison’s Practice, Health Gate, Map Of Medicine, Micromedex, Pepid, UpToDate, ZynxEvidence.

They assessed and ranked these 18 point-of-care products according to: (1) coverage (volume) of medical conditions, (2) editorial quality, and (3) evidence-based methodology. (For operational definitions see appendix 1)

From a quantitative perspective DynaMed, eMedicine, and First Consult were the most comprehensive (88%) and eTG complete the least (45%).

The best editorial quality was delivered by Clinical Evidence (15), UpToDate (15), eMedicine (13), Dynamed (11) and eTG complete (10). (Scores are shown in brackets.)

Finally, BestBETs, Clinical Evidence, EBM Guidelines and UpToDate obtained the maximal score (15 points each) for best evidence-based methodology, followed by DynaMed and Map Of Medicine (12 points each).
As expected eMedicine, eTG complete, First Consult, GP Notebook and Harrison’s Practice had a very low EBM score (1 point each). Personally I would not have even considered these online sources as “evidence based”.

The calculations seem very “exact”, but the assumptions upon which these figures are based are, in my view, open to question. Furthermore, all items have the same weight. Isn’t the evidence-based methodology far more important than “comprehensiveness” and editorial quality?

Especially because “volume” is “just” estimated by analyzing the extent to which 4 random chapters of the ICD-10 classification are covered by the POCs. Some sources, like Clinical Evidence and BestBETs (scoring low on this item), don’t aim to be comprehensive but only “answer” a limited number of questions: they are not textbooks.

Editorial quality is determined by scoring of the specific indicators of transparency: authorship, peer reviewing procedure, updating, disclosure of authors’ conflicts of interest, and commercial support of content development.

For the EB methodology, Banzi et al scored the following indicators:

  1. Is a systematic literature search or surveillance the basis of content development?
  2. Is the critical appraisal method fully described?
  3. Are systematic reviews preferred over other types of publication?
  4. Is there a system for grading the quality of evidence?
  5. When expert opinion is included, is it easily recognizable over studies’ data and results?

The score for each of these indicators is 3 for “yes”, 1 for “unclear”, and 0 for “no” (if judged “not adequate” or “not reported”).
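As a toy illustration of how such a checklist score adds up (my own sketch with invented judgements, not the authors’ actual scoring sheet):

```python
# The five EBM-methodology indicators listed above, each judged
# "yes" (3 points), "unclear" (1 point) or "no" (0 points),
# for a maximum of 5 * 3 = 15 points.
SCORE = {"yes": 3, "unclear": 1, "no": 0}

def ebm_methodology_score(judgements):
    """Sum the per-indicator points; every indicator has equal weight."""
    return sum(SCORE[j] for j in judgements)

# A product judged adequate on all five indicators gets the maximum:
print(ebm_methodology_score(["yes"] * 5))  # 15
# A single "unclear" judgement (say, a policy that is simply not
# described on the editorial pages) already costs 2 points:
print(ebm_methodology_score(["yes", "yes", "unclear", "yes", "yes"]))  # 13
```

Note that every indicator carries the same weight in this sum; a weighted version would be needed to prioritize, say, indicator 1 over the others.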

This leaves little room for qualitative differences and mainly relies on adequate reporting. As discussed earlier in a post where I questioned the evidence-based-ness of UpToDate, there is a difference between tailored searches and checking a limited list of sources (indicator 1). It also matters whether the search is mentioned at all (transparency), whether it is of good quality, and whether it is extensive or not. For lists, it matters how many sources are “surveyed”. It also matters whether one or both methods are used. These important differences are not reflected in the scores.

Furthermore, some points may be more important than others. Personally I find step 1 the most important: what good is appraising and grading if it isn’t applied to the most relevant evidence? It is “easy” to do a grading or to copy it from other sources (and I wouldn’t be surprised if some POCs are doing this).

On the other hand, a zero for a single indicator can weigh too heavily on the score.

Dynamed got 12 instead of the maximum 15 points because its editorial policy page didn’t explicitly describe the absolute prioritization of systematic reviews, although they really adhere to it in practice (see the comment by editor-in-chief Brian Alper [2]). Had Dynamed received the deserved 15 points for this indicator, it would have had the highest overall score.

The authors further conclude that none of the dimensions turned out to be significantly associated with the others. For example, BestBETs scored among the worst on volume (comprehensiveness), with an intermediate score for editorial quality and the highest score for evidence-based methodology. Overall, DynaMed, EBM Guidelines, and UpToDate scored in the top quartile for 2 out of 3 variables and in the 2nd quartile for the 3rd (but, as explained above, Dynamed really scored in the top quartile for all 3 variables).

On the basis of their findings Banzi et al conclude that only a few POCs satisfied the criteria, and none excelled in all.

The finding that Pepid, eMedicine, eTG complete, First Consult, GP Notebook, Harrison’s Practice and 5-Minute Clinical Consult obtained only 1 or 2 of the maximum 15 points for EBM methodology confirms my “intuitive grasp” that these sources really don’t deserve the label “evidence based”. Perhaps we should make a stricter distinction between “point of care” databases as a point where patients and practitioners interact, “particularly referring to the context of the provider-patient dyad” (definition by Banzi et al), and truly evidence-based summaries. Only a few of the tested databases would fit the latter definition.

In summary, Banzi et al reviewed 18 online evidence-based practice point-of-care information summary providers. They comprehensively evaluated and summarized these resources with respect to coverage (volume) of medical conditions, editorial quality, and evidence-based methodology.

Limitations of the study, also according to the authors, were the lack of a clear definition of these products, the arbitrariness of the scoring system and the emphasis on the quality of reporting. Furthermore, the study didn’t really assess the products qualitatively (i.e. with respect to performance), nor did it take into account that products might have different aims. Clinical Evidence, for instance, only summarizes evidence on the effectiveness of treatments for a limited number of diseases. Therefore it scores badly on volume while excelling on the other items.

Nevertheless it is helpful that POCs are objectively compared, and the comparison may serve as a starting point for acquisition decisions.

References (not in chronological order)

  1. Banzi, R., Liberati, A., Moschetti, I., Tagliabue, L., & Moja, L. (2010). A Review of Online Evidence-based Practice Point-of-Care Information Summary Providers Journal of Medical Internet Research, 12 (3) DOI: 10.2196/jmir.1288
  2. Alper, B. (2010). Review of Online Evidence-based Practice Point-of-Care Information Summary Providers: Response by the Publisher of DynaMed Journal of Medical Internet Research, 12 (3) DOI: 10.2196/jmir.1622
  3. Goodyear-Smith F, Kerse N, Warren J, & Arroll B (2008). Evaluation of e-textbooks. DynaMed, MD Consult and UpToDate. Australian family physician, 37 (10), 878-82 PMID: 19002313
  4. Ketchum, A., Saleh, A., & Jeong, K. (2011). Type of Evidence Behind Point-of-Care Clinical Information Products: A Bibliometric Analysis Journal of Medical Internet Research, 13 (1) DOI: 10.2196/jmir.1539
  5. McKibbon, K., & Fridsma, D. (2006). Effectiveness of Clinician-selected Electronic Information Resources for Answering Primary Care Physicians’ Information Needs Journal of the American Medical Informatics Association, 13 (6), 653-659 DOI: 10.1197/jamia.M2087
  6. How will we ever keep up with 75 Trials and 11 Systematic Reviews a Day?
  7. 10 + 1 PubMed Tips for Residents (and their Instructors)
  8. Time to weed the (EBM-)pyramids?!
  9. Haynes RB. Of studies, syntheses, synopses, summaries, and systems: the “5S” evolution of information services for evidence-based healthcare decisions. Evid Based Med 2006 Dec;11(6):162-164. [PubMed]
  10. DiCenso A, Bayley L, Haynes RB. ACP Journal Club. Editorial: Accessing preappraised evidence: fine-tuning the 5S model into a 6S model. Ann Intern Med. 2009 Sep 15;151(6):JC3-2, JC3-3. PubMed PMID: 19755349 [free full text].
  11. How Evidence Based is UpToDate really?
  12. Point of care decision-making tools - Overview
  13. UpToDate or Dynamed? (Shamsha Damani)


Your Medical Mind. How to Decide What is Right for You [Book Review]

3 10 2011

I enjoyed reading “Your Medical Mind” from start to end. The style of this book was light, but the content was not. Jerome Groopman, oncologist, and Pamela Hartzband, endocrinologist, are to be congratulated on their ability to write clearly about a difficult topic. They explain all aspects about making the right medical choices, in a way that is comprehensible to all.

What makes their book so enlightening is that Groopman and Hartzband illustrate each aspect of medical decision-making with real patient stories. In fact the entire book is largely based on interviews with scores of patients of different ages, of different economic status and with different medical conditions.

The authors also drew on research and insights from doctors, psychologists, economists and other experts to shed more light on forces that can aid or impede our thinking when we have to make those decisions.
For those who want to explore things further, there are 213 notes (appr. 80 pages!) and a bibliography of 20 pages at the end of the book.

The first chapter, “Where am I in the numbers”, deals… right… with numbers, or basic statistics. A topic that patients (and quite some doctors!) often find difficult to understand. This chapter explains Relative Risk Reduction (RRR), Control Event Rate (baseline risk), Absolute Risk Reduction (ARR) and Numbers Needed to Treat (NNT) while hardly mentioning these terms.

The authors illustrate these and other principles with the story of Susan. Susan is a bit overweight and has a high cholesterol “of the bad kind” (LDL). Her GP concludes: “Since you’re active and already follow a healthy diet, I think it is time for medication. Fortunately, we have a good treatment for this [statins]. Here is a prescription. I’ll see you again in a month”.

But Susan doesn’t take the prescription. Why? First, Susan is a doubter and a minimalist: she wants the minimum necessary, certain that “less is more”, for this is how she was raised. Second, Susan is very much like her father, who had a similarly high cholesterol, never took a pill, yet lived a long, full and healthy life. Therefore she believes that for people like her these high levels of LDL cholesterol are not necessarily dangerous. Third, she meets an acquaintance who suffers from debilitating muscle pain as a side effect of statins.

When Susan’s GP hears that she decided not to take her medicine, her face tightens in concern: “It’s very important to take this medication. You really need it.” She explains that statin pills will lower her risk of a heart attack over the next 10 years by as much as 30% [RRR]. She adds that the risk of side effects is very small and often reversible.

Since 30% less risk of myocardial infarction (heart attack, MI) sounds impressive, Susan promised her doctor to reconsider her decision. Like many other people, she searches the Web for medical information. After months (!) she finds a government-sponsored site with objective patient information and a 10-year heart attack risk calculator. By entering all the requested information, she finds out that her baseline MI risk is 1%. This means that 1 of 100 (or 3 of 300) people with this level of risk will have a heart attack in the next 10 years (the background risk without treatment).

Let’s apply that benefit to a group of 300 women like Susan, 3 of whom would have an MI without taking statins. If we treat them all, we would prevent one MI, because we prevent 1 of those 3 MIs (30% RRR). The other 2 women would still have an MI despite taking the medicine. The remaining 297 would not have had a heart attack even without the medication, so they wouldn’t have benefited from taking it. Thus 300 persons with this background risk need to be treated to prevent one heart attack. This is the number needed to treat (NNT).
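The arithmetic of Susan’s example can be written out explicitly. This is just the standard ARR/NNT calculation applied to the numbers retold above (1% baseline 10-year risk, 30% relative risk reduction); the book arrives at the round figure of 300 by reasoning with 3 MIs per 300 women, while the exact formula gives about 333.

```python
def nnt(baseline_risk, rrr):
    """Number needed to treat = 1 / absolute risk reduction (ARR)."""
    arr = baseline_risk * rrr  # ARR: the absolute drop in risk
    return 1 / arr

baseline_risk = 0.01  # 1 in 100 women like Susan has an MI within 10 years
rrr = 0.30            # statins lower that risk by 30% (relative)
print(round(nnt(baseline_risk, rrr)))  # 333
```

The same numbers also expose the framing problem the chapter is about: “30% lower risk” and “1 chance in roughly 300 of benefiting” describe the identical effect.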

Research has shown that people respond most profoundly to “stories”. Statistics can help to merge science with stories and fit single anecdotes into the larger context of all people who are treated. Statistics (and “evidence” in general) allows people to make an informed choice. 

Susan’s story also illustrates that framing is very important. When you hear that a statin lowers your risk by 30% (RRR), it sounds as if you are at 100% risk and thus have a great benefit. But reframing the effect as a chance of 1 in 300 persons to benefit may shift the balance for you. Susan concluded that the benefits didn’t outweigh the risks. Others may look at it another way: if there is a chance I could be the one person out of 300 who avoids a heart attack, then the statin is 100% effective for me.

Pharmaceutical companies understand a great deal about how people decide whether to take a medicine. They frame information about benefits in the most favorable way and exploit the power of availability bias* with carefully crafted images and anecdotes, giving implicit messages while marginalizing side effects.

Various studies and patient stories discussed in the book clearly show that patients choose differently when they are given clearer information about benefits and risks. Surprisingly, their choice often differs from the treatment options the experts see as “best”.

As we have seen, the attitude of the doctor and the way he or she frames the medical information also matter. Susan’s GP framed the information in a way that overemphasized the benefits of treatment with statins, the option she saw as “best”. This GP later refused Susan as a patient because she didn’t follow her instructions. Her next doctor: “It is the old paternalistic way of dealing with patients. Ultimately, you know, patients have final control of what goes on. (….) It is not like you just go: ‘Boom, boom, boom, here is the prescription’.”

The irony is that most people will accept the default option: they assume that what is routinely recommended is best. If it turns out differently, however, they may feel strong regret. In contrast, if the risk is taken into account beforehand, people may experience side effects less severely. Furthermore, people have a tremendous ability to adapt.

The book teaches us the differences between believers and doubters, maximalists and minimalists, naturalism and technology orientation, and the importance of availability bias*, omission bias*, decisional conflict, loss aversion*, expected utility*, autonomy and control.

Our preferences about treatment may depend on our personality, the way we were raised as kids and our previous experiences. This applies to both patients and doctors. As an example, the authors explain why one of them became a believer and a maximalist and the other a doubter and a minimalist. Until a bad experience with an aggressive and unsuccessful surgery made the maximalist a bit more risk-averse.

The book offers several examples of doctors advocating treatments on the basis of their beliefs or expertise. A surgeon wants to cure prostate cancer by surgery while focusing on the unacceptable side effects of radiation, while radiation therapists emphasize the unacceptable side effects of surgery. Yet others make a case for “watchful waiting”.
More than before, I realize that choices are highly personal and that I, too, have my own preferences. For instance, I tend to favor watchful waiting in case of low risk prostate cancer, possibly because I am a doubter in most respects, and have worked with Prof Schröder who supports watchful waiting. However, for some men this watchful waiting may become watchful worrying and they might just prefer to get the cancer out. Even at the cost of sexual and urinary function.

Also interesting is the notion that “the best” doctors or the “most renowned hospitals” may not always be the best for you. An expert who looks totally bored and calls you a “typical case” may leave you feeling anonymous. A nurse’s silent shrug when you express dismay about losing a lot of weight may reinforce this sense. This can be a reason to cling to your own community hospital instead of choosing a large, bustling cancer center.

Another aha moment for me concerned end-of-life decisions, described in the touching chapter 8. The authors describe that nearly half of the patients were inconsistent in their wishes about what therapies they wanted, whether or not they had completed a living will or advance directive. This is because they often can’t imagine what they will want and how much they can endure when their condition shifts from healthy to sick and then to even sicker. On the other hand, rigidly sticking to directives may pose a dilemma for the carer. Are resuscitation and intubation allowed as temporary interventions if they are not meant to artificially sustain life?

In short, “Your Medical Mind” is an interesting and instructive book that is of value not only for patients and carers, but also for doctors and future patients (and remember, everyone is a patient sometimes).

Does this mean that “Your Medical Mind” is an “essential companion that will show us how to chart a clear path through this sea of confusion” as the book flap and introduction promise?

And is it true that the answer to the question “How do you know what is right for you?” lies not with the experts, but within you?

These claims seem too ambitious.

For a good decision process, knowing your preferences and the forces that can influence your choice is not enough. Good health literacy is important too. Apart from a chapter that deals with statistics, this book offers little information on that topic.

What about minimalistic naturalists who choose a homeopathic treatment for cancer? This choice might fit the medical mind of those patients, and of course they have every right to make their own decisions, but is it truly “right for them”?

I get the impression that the authors underestimate the value of “evidence”. They are very skeptical, not only about pharmaceutical companies, but also about recommendations in guidelines, whether these are evidence-based or not.

In the examples, all treatments are almost equally effective, leaving a grey zone where there is no black-and-white answer about when and how to treat. Often, however, some treatments are superior to others, at least for certain patient groups.

Thus, the authors give little attention to the importance of objective medical information itself, as a basis for decision making. They also pay no attention to shared decision making, as e-patient Dave emphasizes in his review.

Still, I loved the book. It complements my knowledge of EBM and information sources.

It also made me curious about another book by Groopman, “How Doctors Think”, which rapidly rose to the top of the New York Times bestseller list after its release in March 2007. Dr Shock just reviewed it. Perhaps we should exchange our books….

Title: Your Medical Mind
Authors: Jerome Groopman, M.D., and Pamela Hartzband, M.D.
Publisher: The Penguin Press
Book: Hardcover, 320 pages

  • availability bias: overweighting evidence that comes easily to mind.
  • loss aversion: the reluctance to risk side effects for what is perceived to be a small benefit.
  • expected utility: (probability of outcome) × (utility of outcome), summed over all possible outcomes.
  • omission bias: avoiding treatment out of anticipated regret.
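To make the expected-utility footnote concrete, here is a toy calculation in Python. All probabilities and utility values below are invented purely for illustration and are not taken from the book:

```python
def expected_utility(outcomes):
    """Expected utility: sum of probability * utility over all possible outcomes."""
    return sum(probability * utility for probability, utility in outcomes)

# Hypothetical numbers: surgery has a 90% chance of a good outcome (utility 80)
# and a 10% chance of a bad one (utility 0); watchful waiting is certain but
# carries a somewhat lower utility of 70.
surgery = expected_utility([(0.9, 80), (0.1, 0)])   # 72.0
watchful_waiting = expected_utility([(1.0, 70)])    # 70.0
```

On these invented numbers surgery has the higher expected utility, yet a loss-averse patient may still prefer the certain option, which is exactly the kind of tension the book explores.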

FUTON Bias. Or Why Limiting to Free Full Text Might not Always be a Good Idea.

8 09 2011

A few weeks ago I was discussing possible relevant papers for the Twitter Journal Club (hashtag #TwitJC), a successful initiative on Twitter that I have discussed previously here and here [7,8].

I proposed an article that appeared behind a paywall. Annemarie Cunningham (@amcunningham) immediately shot the idea down, stressing that open access (OA) is a prerequisite for the TwitJC journal club.

One of the TwitJC organizers, Fi Douglas (@fidouglas on Twitter), argued that using paid-for journals would defeat the objective that #TwitJC is open to everyone. I can imagine that fee-based articles could set too high a threshold for many doctors. In addition, I sympathize with promoting OA.

However, I disagree with Annemarie that an OA (or rather free) paper is a prerequisite if you really want to discuss what might impact practice. On the contrary, limiting to free full text (FFT) papers in PubMed might lead to bias: picking the “low-hanging fruit of convenience” might mean that the paper isn’t representative and/or doesn’t reflect the current best evidence.

But is there evidence for my theory that selecting FFT papers might lead to bias?

Let’s first look at the extent of the problem. What percentage of papers do we miss by limiting ourselves to free-access papers?

A survey in PLoS ONE by Björk et al [1] found that one in five peer-reviewed research papers published in 2008 were freely available on the internet. Overall, 8.5% of the articles published in 2008 (13.9% in medicine) were freely available at the publishers’ sites (gold OA). For an additional 11.9%, free manuscript versions could be found via the green route, i.e. copies in repositories and on websites (7.8% in medicine).
As a commenter rightly stated, the lag time is also important, as we would like immediate access to recently published research, yet some publishers (37%) impose an access embargo of 6–12 months or more. (These papers were largely missed, as the 2008 OA status was assessed in late 2009.)


The strength of the paper is that it measures OA prevalence on a per-article basis rather than by calculating the share of journals that are OA: an OA journal generally contains fewer articles.
The authors randomly sampled from 1.2 million articles using the advanced search facility of Scopus. They measured what share of OA copies the average researcher would find using Google.

Another paper, published in J Med Libr Assoc (2009) [2] and using methods similar to those of the PLoS survey, examined the state of open access specifically in the biomedical field. Because of its broad coverage and popularity in the biomedical field, PubMed was chosen to collect the target sample of 4,667 articles. Matsubayashi et al used four different databases and search engines to identify full-text copies. The authors reported an OA percentage of 26.3 for peer-reviewed articles (70% of all articles), which is comparable to the results of Björk et al. More than 70% of the OA articles were provided through journal websites. The percentages of green OA articles on the websites of authors or in institutional repositories were quite low (5.9% and 4.8%, respectively).

In their discussion of the findings of Matsubayashi et al, Björk et al [1] quickly assessed the OA status in PubMed by using the new “link to Free Full Text” search facility. First they searched for all “journal articles” published in 2005 and then repeated this with the further restriction of “link to FFT”. The PubMed OA percentages obtained this way were 23.1 for 2005 and 23.3 for 2008.
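Readers who want to try such a quick assessment themselves can do it programmatically. The sketch below builds NCBI E-utilities `esearch` URLs for the same PubMed query with and without the “free full text[sb]” subset filter; I am assuming this filter corresponds to the “link to Free Full Text” limit used above. Only the URL construction is shown; actually fetching and comparing the two counts is left to the reader, and the example query is illustrative:

```python
from urllib.parse import urlencode

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_count_url(query, free_full_text_only=False):
    """Build an esearch URL that returns only the number of matching PubMed records."""
    term = query
    if free_full_text_only:
        # "free full text[sb]" restricts results to PubMed's free-full-text subset
        term = f"({query}) AND free full text[sb]"
    return ESEARCH + "?" + urlencode({"db": "pubmed", "term": term, "rettype": "count"})

# The same query, with and without the free-full-text restriction:
q = "journal article[pt] AND 2008[dp]"
unrestricted_url = pubmed_count_url(q)
restricted_url = pubmed_count_url(q, free_full_text_only=True)
```

Requesting both URLs and dividing the two counts gives an FFT percentage along the lines of the figures quoted above.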

This proportion of biomedical OA papers is gradually increasing. A chart in Nature’s News Blog [9] shows that the proportion of freely available papers indexed in PubMed each year increased from 23% in 2005 to above 28% in 2009.
(Methods are not shown, though. The 2008 data are higher than those of Björk et al, who noticed little difference from 2005. The data for this chart, however, come from David Lipman, NCBI director and driving force behind the digital OA archive PubMed Central.)
Again, because of the embargo periods, not all literature is immediately available at the time that it is published.

In summary, we would miss about 70% of biomedical papers by limiting ourselves to FFT papers. Because of the access embargoes, the proportion missed is even larger among recently published papers.

Of course, the key question is whether ignoring relevant studies not available in full text really matters.

Reinhard Wentz of the Imperial College Library and Information Service already argued in a visionary 2002 Lancet letter[3] that the availability of full-text articles on the internet might have created a new form of bias: FUTON bias (Full Text On the Net bias).

Wentz reasoned that FUTON bias will not affect researchers who are used to comprehensive searches of published medical studies, but that it will affect staff and students with limited experience in doing searches and that it might have the same effect in daily clinical practice as publication bias or language bias when doing systematic reviews of published studies.

Wentz also hypothesized that FUTON bias (together with “no abstract available” (NAA) bias) will affect the visibility and the impact factor of OA journals. He makes a reasonable case that the NAA bias will affect publications on new, peripheral, and under-discussion subjects more than established topics covered in substantive reports.

The study by Murali et al [4], published in Mayo Clinic Proceedings in 2004, confirms that the availability of journals on MEDLINE as FUTON or NAA affects their impact factor.

Of the 324 journals screened by Murali et al, 38.3% were FUTON, 19.1% NAA, and 42.6% had abstracts only. The mean impact factors were 3.24 (±0.32), 1.64 (±0.30), and 0.14 (±0.45), respectively! The authors confirmed this finding by showing a difference in impact factors for journals available in both the pre- and post-internet era (n=159).

Murali et al informally questioned many physicians and residents at multiple national and international meetings in 2003. These doctors uniformly admitted relying on FUTON articles on the web to answer a sizable proportion of their questions. A study by Carney et al (2004) [5] showed that 98% of US primary care physicians used the internet as a resource for clinical information at least once a week, and mostly used FUTON articles to aid decisions about patient care, patient education, and medical student or resident instruction.

Murali et al therefore conclude that failure to consider FUTON bias may not only affect a journal’s impact factor, but could also limit consideration of the medical literature by ignoring relevant for-fee articles, thereby influencing medical education akin to publication or language bias.

This proposed effect of the FFT limit on citation retrieval for clinical questions was examined in a more recent study (2008), published in J Med Libr Assoc [6].

Across all four questions, based on a research agenda for physical therapy, the FFT limit reduced the number of citations retrieved to 11.1% of the total retrieved without the FFT limit in PubMed.

Even more important, high-quality evidence such as systematic reviews and randomized controlled trials were missed when the FFT limit was used.

For example, when searching without the FFT limit, 10 systematic reviews of RCTs were retrieved, versus only one when the FFT limit was used. Likewise, 28 RCTs were retrieved without the FFT limit, versus only one with it.

The proportion of missed studies (approximately 90%) is higher than in the studies mentioned above, possibly because real searches were tested and only relevant clinical studies were considered.

The authors rightly point out that consistently missing high-quality evidence when searching clinical questions is problematic because it undermines the process of Evidence-Based Practice. Krieger et al conclude:

“Librarians can educate health care consumers, scientists, and clinicians about the effects that the FFT limit may have on their information retrieval and the ways it ultimately may affect their health care and clinical decision making.”

It is the hope of this librarian that she did a little educating in this respect and clarified the point that limiting to free full text might not always be a good idea, especially if the aim is to critically appraise a topic, to educate, or to discuss current best medical practice.


  1. Björk B, Welling P, Laakso M, Majlender P, Hedlund T, Guðnason G (2010). Open Access to the Scientific Journal Literature: Situation 2009. PLoS ONE 5(6). DOI: 10.1371/journal.pone.0011273
  2. Matsubayashi M, Kurata K, Sakai Y, Morioka T, Kato S, Mine S, Ueda S (2009). Status of open access in the biomedical field in 2005. Journal of the Medical Library Association 97(1): 4-11. DOI: 10.3163/1536-5050.97.1.002
  3. Wentz R (2002). Visibility of research: FUTON bias. The Lancet 360(9341): 1256. DOI: 10.1016/S0140-6736(02)11264-5
  4. Murali NS, Murali HR, Auethavekiat P, Erwin PJ, Mandrekar JN, Manek NJ, Ghosh AK (2004). Impact of FUTON and NAA bias on visibility of research. Mayo Clinic Proceedings 79(8): 1001-6. PMID: 15301326
  5. Carney PA, Poor DA, Schifferdecker KE, Gephart DS, Brooks WB, Nierenberg DW (2004). Computer use among community-based primary care physician preceptors. Academic Medicine 79(6): 580-90. PMID: 15165980
  6. Krieger M, Richter R, Austin T (2008). An exploratory analysis of PubMed’s free full-text limit on citation retrieval for clinical questions. Journal of the Medical Library Association 96(4): 351-355. DOI: 10.3163/1536-5050.96.4.010
  7. The #TwitJC Twitter Journal Club, a new Initiative on Twitter. Some Initial Thoughts.
  8. The Second #TwitJC Twitter Journal Club
  9. How many research papers are freely available?

Grand Rounds 7-50: Dr. Rich Did a Great Job… Jobs, Jobs, Jobs…

6 09 2011

In the old days, bloggers whose posts were included in Grand Rounds would link to that post from their own blog. Grand Rounds, for those who are not familiar with it, is a weekly compilation of the best of the medical blogosphere.

I used to refer to Grand Rounds once in a while, but quit this habit to prevent my own posts from getting lost amidst the summarizing and/or referring posts.

But I will make an exception for a Grand Rounds edition written by a man who combines modern practice with classic craftsmanship (or, as the author concerned calls it, “old fartness”).

Anyway, DrRich did a great job with the latest edition, Grand Rounds 7-50: The Jobs! Jobs! Jobs! Edition.

First of all, I was surprised to find a very good summary of my own post, a post about a search topic that I was rather surprised to find included in the first place. Please let me share this excellent and quite funny plain-language summary of my post.

Jaqueline writes Laika’s MedLiblog, a blog dedicated to medical information science. She submits a post entitled, “PubMed’s Higher Sensitivity than OVID MEDLINE… & other Published Clichés,” in which she shows how medical researchers doing literature searches for, among other things, meta-analyses, will stumble upon various “anomalies” in their searches of the PubMed and OVID databases, and then write additional, CV-padding papers about those anomalies. Jaqueline points out that these so-called “anomalies” are actually well-documented “clichés,” which are well-known to information specialists and anyone else who is competent in doing comprehensive literature searches. In other words, Jaqueline has documented that these meta-analysis researchers are rank amateurs at doing the most critical step in conducting meta-analyses – searching the literature for all the appropriate published studies. DrRich has always mistrusted meta-analyses, and Jaqueline has helpfully identified yet another reason to justify such mistrust. He thanks Jaqueline, and whoever planted those database anomalies which allow us to identify potentially incompetent meta-analysis researchers. 

Second, I am always happy if a Grand Rounds edition not only quotes the posts of the great medical bloggers I already know, but also includes posts by bloggers who are new to me. Today I’ve found two new blogs I’ve subscribed to.

The first is Sharp Incisions (…random cuts in the life of a fledgling medical student), a blog started in 2010 by a second-year medical student. He/she wrote an affecting post in five parts about the harvesting of six vital organs for transplantation from a patient who had been declared brain dead. (The first part starts here.)

Here is a quote from the last (5th) part:

Now, all that was left was to close his incision.

I stood beside the surgeon, watching, but through the sterile drape, I reached for the patient’s hand, squeezed it, and silently said,

‘Thank you. Your legacy lives on in these lives you’ve saved.’

One heart.
Two lungs.
A liver.
Two kidneys.

Six futures.

Another blog I subscribed to is In My Humble Opinion (A primary care physician’s thoughts on medicine and life), written by Jordan Grumet (@jordangrumet on Twitter), an internal medicine physician. His blog started back in 2008 (just like this one).

I really enjoyed the beautiful post Sometimes We Are Doctors, or as he says at the end of his post:

“We are all patients sometimes… and sometimes we are doctors.”

For more summaries please read the entire Grand Rounds at the Covert Rationing Blog. You might just discover your own new favorite blog.