Social Media in Clinical Practice by Bertalan Meskó [Book Review]

13 09 2013

How do you review a book on Medical Social Media written by an author who has taught you many social media skills himself?

Thanks to people like Bertalan Meskó, the author of the book concerned, I am not a novice in the field of Medical Social Media.

But wouldn’t it be great if all newcomers to the medical social media field could benefit from Bertalan’s knowledge and expertise? Bertalan Meskó, an MD with a summa cum laude PhD in clinical genomics, has already shared his insights in posts on his award-winning blog ScienceRoll, via Twitter and an online service that curates health-related social media resources, and by giving presentations and social media classes to medical students and physicians.

But many of his students would rather read (or reread) the topics in a book than in e-learning materials. Therefore Bertalan decided to write a handbook entitled “Social Media in Clinical Practice”.

This is the table of contents (for more complete overview see Amazon):

  1. Social media is transforming medicine and healthcare
  2. Using medical search engines with a special focus on Google
  3. Being up-to-date in medicine
  4. Community sites Facebook, Google+ and medical social networks
  5. The world of e-patients
  6. Establishing a medical blog
  7. The role of Twitter and microblogging in medicine
  8. Collaboration online
  9. Wikipedia and Medical Wikis
  10. Organizing medical events in virtual environments
  11. Medical smartphone and tablet applications
  12. Use of social media by hospitals and medical practices
  13. Medical video and podcast
  14. Creating presentations and slideshows
  15. E-mails and privacy concerns
  16. Social bookmarking
  17. Conclusions

As you can see, many social media tools are covered and in this respect the book is useful for everyone, including patients and consumers.

But what makes “Social Media in Clinical Practice” especially valuable for medical students and clinicians?

First, specific medical search engines, social media sites and tools are discussed, such as PubMed (medical database and search engine), Sermo (community site for US physicians), MedWorm (aggregator of RSS feeds), medical smartphone apps and sources where to find them, and medical wikis like Radiopaedia.
Scientific social media sites with possible relevance to physicians, like Google Scholar and Wolfram Alpha, are also discussed.

Second, numerous medical examples are given (with links and descriptions). Often, examples are summarized in tables in the individual chapters (see Fig 1 for a random example 😉 ). Links can also be found at the end of the book, organized per chapter.


Fig 1. Examples represented in a Table

Third, community sites and non-medical social media tools are discussed from the medical perspective. With regard to community sites and tools like Facebook, Twitter, blogs and e-mail, special emphasis is placed on quality, privacy and legal concerns, which are very important for clinicians: for instance the compliance of websites and blogs with the HONcode (HON = the Health On the Net Foundation) and HIPAA (Health Insurance Portability and Accountability Act), the privacy settings in Facebook, and social media etiquette (see Fig 2).


Fig. 2 Table from “Social Media in Clinical Practice” p 42

The chapters are succinctly written, well organized and replete with numerous examples. I specifically like the practical examples (see for instance Example #4).


Fig 3 Example of Smartphone App for consumers

Some tools are explained in more detail, e.g., the anatomy of a tweet or a stepwise description of how to launch a WordPress blog.
Most chapters end with a self-test (questions), next steps (encouraging the reader to put the theory into practice) and key points.

Thus, in many ways, it is a very useful book for clinical practice (also see the positive reviews on Amazon and the review by Dean Giustini on his blog).

Are there any shortcomings, apart from the minor language shortcomings mentioned by Dean?

Personally, I find that the discussions of website quality concentrate a bit too much on formal quality (contact info, title, subtitle, etc.). True, this is of utmost importance, but quality is also determined by content and clinical usefulness. Not all websites that are formally OK deliver good content, and vice versa.

As a medical librarian I pay particular attention to the search part, discussed in chapters 3 and 4.
Emphasis is put on how to create alerts in PubMed and Google Scholar, thus on the social media aspects. However, some of the searches shown wouldn’t make physicians very happy, even if used as an alert: who wants a PubMed alert for cardiovascular disease retrieving 1,870,195 hits? This is even more true for the PubMed search “genetics” (a rather meaningless yet non-comprehensive term).
More importantly, it is not explained when to use which search engine. I understand that a search course is beyond the scope of this book, but a subtitle like “How to Get Better at Searching Online?” suggests otherwise. At the very least there should be hints that searching might be more complicated in practice, preferably with links to sources and online courses. Getting too many hits, or the wrong ones, will only frustrate physicians (and discourage them from using the social media tools that are otherwise helpful).

But overall I find it a useful, clearly written and well-structured practical handbook. “Social Media in Clinical Practice” is unique of its kind; I know of no other book like it. Therefore I recommend it to all medical students and health care experts who are interested in digital medicine and social media.

This book will also be very useful to clinicians who are not very fond of social media. Their reluctance may change, and their understanding of social media may develop or be enhanced.

Let’s face it: a good clinician can’t do without digital knowledge. At the very least their patients use the internet, and clinicians must be able to act as gatekeepers, identifying and filtering trustworthy, credible and understandable information. Indeed, as Berci writes in his conclusion:

“it obviously is not a goal to transform all physicians into bloggers and Twitter users, but (..) each physician should find the platforms, tools and solutions that can assist them in their workflow.”

If they are not yet convinced, I would recommend that clinicians read the post at the Fauquier ENT blog (referred to by Bertalan in chapter 6, story #5) entitled: As A Busy Physician, Why Do I Even Bother Blogging?


Book information: (also see Amazon):

  • Title: Social Media in Clinical Practice
  • Author: Bertalan Meskó
  • Publisher: Springer London Heidelberg New York Dordrecht
  • 155 pages
  • ISBN 978-1-4471-4305-5
  • ISBN 978-1-4471-4306-2 (eBook)
  • ISBN-10: 1447143051
  • DOI 10.1007/978-1-4471-4306-2
  • $37.99 (Sept 2013) (paperback at Amazon)

Between the Lines. Finding the Truth in Medical Literature [Book Review]

19 07 2013

In the 1970s a study was conducted among 60 physicians and physicians-in-training. They had to solve a simple problem:

“If a test to detect a disease whose prevalence is 1/1000 has a false positive rate of 5 %, what is the chance that a person found to have a positive result actually has the disease, assuming that you know nothing about the person’s symptoms or signs?” 

Half of the “medical experts” thought the answer was 95%.
Only a small proportion, 18%, of the doctors arrived at the right answer of 2%.
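For readers who want to check the arithmetic, here is a minimal back-of-the-envelope sketch in Python. It assumes, as the puzzle implies, that the test catches every true case (100% sensitivity):

```python
# Bayes' rule for the test question: of everyone who tests positive,
# what fraction actually has the disease?
def positive_predictive_value(prevalence, false_positive_rate, sensitivity=1.0):
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * false_positive_rate
    return true_positives / (true_positives + false_positives)

ppv = positive_predictive_value(prevalence=1 / 1000, false_positive_rate=0.05)
print(f"{ppv:.0%}")  # prints "2%", not 95%
```

In a population of 1000 people, the single true positive is drowned out by roughly 50 false positives, which is why the answer sits so far from the intuitive 95%.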

If you are a medical expert who comes to the same faulty conclusion, or you need a refresher on how to arrive at the right answer, you might benefit from the book written by Marya Zilberberg: “Between the Lines. Finding the Truth in Medical Literature”.

The same is true for a patient whose doctor thinks he/she is among the 95% who would benefit from such a test…
Or for journalists who translate medical news to the public…
Or for peer reviewers or editors who have to assess biomedical papers…

In other words, this book is useful for everyone who wants to be able to read “between the lines”. For everyone who needs to examine medical literature critically from time to time and doesn’t want to rely solely on the interpretation of others.

I hope that I didn’t scare you off with the above example. Between the Lines surely is NOT a complicated epidemiology textbook, nor a dull study book where you have to struggle through a lot of definitions, difficult tables and statistical formulas, and where each chapter is followed by a set of review questions that test what you learned.

This example is presented halfway through the book, at the end of Part I. By then you have enough tools to solve the question yourself. But even if you don’t feel like doing the exact calculation at that moment, you have a solid basis for understanding the bottom line: the (enormous) 93% gap (95% vs 2% of the people with a positive test are considered truly positive) serves as the pool for overdiagnosis and overtreatment.

In the previous chapters of Part I (“Context”), you have learned about the scientific methods in clinical research, uncertainty as the only certain feature of science, the importance of denominators, outcomes that matter and outcomes that don’t, Bayesian probability, evidence hierarchies, heterogeneous treatment effects (does the evidence apply to this particular patient?) and all kinds of biases.

Most reviewers prefer Part I of the book. Personally, I find Part II (“Evaluation”) just as interesting.

Part II deals with the study question and study design, the pros and cons of observational and interventional studies, validity, hypothesis testing and statistics.

Perhaps Part II is somewhat less narrative. Furthermore, it deals with tougher topics like statistics. But I find it very valuable for being able to critically appraise a study. I have never seen a better description of odds: somehow odds are easier to grasp if you substitute “horse A” and “horse B” for “treatment A” and “treatment B”, and “loss of a race” for “death”.
I knew the basic differences between cohort studies, case-control studies and so on, but I never quite realized before that the odds ratio is the only measure of association available in a case-control study, and that case-control studies cannot estimate incidence or prevalence (as shown in a nice overview in table 4).
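The horse-race intuition can be made concrete in a few lines of Python; the race counts below are invented for illustration and do not come from the book:

```python
# Odds = events : non-events, e.g. races lost vs races won.
def odds(events, non_events):
    return events / non_events

# Horse A loses 20 of 100 races; horse B loses 10 of 100.
odds_a = odds(20, 80)   # 0.25, i.e. one loss for every four wins
odds_b = odds(10, 90)   # about 0.11

# The odds ratio compares the two, just as a case-control study
# compares the odds of exposure among cases vs controls.
print(round(odds_a / odds_b, 2))  # prints 2.25
```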

Unlike many other books about “the art of reading medical articles”, “study designs” or “evidence-based medicine”, Marya’s book is easy to read. It is written in a conversational tone, and statements are illustrated with current, appealing examples, like the overestimation of the risk of death from the H1N1 virus, breast cancer screening and hormone replacement therapy.

Although my copy of the book was printed in the wrong order (page 136 next to page 13, etc.), I was able to read (and understand) one third of the book (the more difficult Part II) during a 2-hour car trip…

Because this book is comprehensive, yet accessible, I recommend it highly to everyone, including fellow librarians.

Marya even mentions medical librarians as a separate target audience:

Medical librarians may find this book particularly helpful: Being at the forefront of evidence dissemination, they can lead the charge of separating credible science from rubbish.

(thanks Marya!)

In addition, this book may be indirectly useful to librarians, as it may help them choose appropriate methodological filters and search terms for certain EBM questions. For etiology questions, words like “cohort”, “case-control”, “odds”, “risk” and “regression” might help to find the “right” studies.
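As a sketch of how such a methodological filter could be combined with a clinical topic in a PubMed query (the term list is illustrative, not a validated filter):

```python
# Hypothetical etiology filter: OR the method words together and AND
# them with the clinical topic. Terms are examples, not a tested filter.
ETIOLOGY_TERMS = ["cohort", "case-control", "odds", "risk", "regression"]

def etiology_query(topic):
    methods = " OR ".join(f"{term}[tiab]" for term in ETIOLOGY_TERMS)
    return f"({topic}) AND ({methods})"

print(etiology_query("smoking AND lung neoplasms"))
```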

By the way, Marya Zilberberg is @murzee on Twitter, and she writes at her blog Healthcare, etc.

p.s. 1 I want to apologize to Marya for writing this review more than a year after the book was published. For personal reasons I found little time to read and blog. Luckily the book has lost none of its topicality.

p.s. 2 Patients who are not very familiar with critical reading of medical papers might benefit from reading “Your Medical Mind” first [1].


The Scatter of Medical Research and What to do About it.

18 05 2012

Paul Glasziou, GP and professor in Evidence Based Medicine, co-authored a new article in the BMJ [1]. Similar to another paper [2] I discussed before [3], this paper deals with the difficulty for clinicians of staying up to date with the literature. But where the previous paper [2,3] highlighted the mere increase in the number of research articles over time, the current paper looks at the scatter of randomized controlled trials (RCTs) and systematic reviews (SRs) across the different journals cited in one year (2009) in PubMed.

Hoffmann et al. analyzed 7 specialties and 9 subspecialties that are considered the leading contributors to the burden of disease in high-income countries.

They followed a relatively straightforward method for identifying the publications. Each search string consisted of a MeSH term (controlled term) to identify the selected disease or disorders, a publication type [pt] to identify the type of study, and the year of publication. For example, the search strategy for randomized trials in cardiology was: “heart diseases”[MeSH] AND randomized controlled trial[pt] AND 2009[dp]. (When searching “heart diseases” as a MeSH term, narrower terms are also searched.) Meta-analysis[pt] was used to identify systematic reviews.
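This recipe is easy to reproduce programmatically. Here is a minimal sketch in Python that mirrors the cardiology example above; any other MeSH terms you feed it are your own choices, not the authors’ list:

```python
# Build a Hoffmann-style PubMed query: MeSH topic + publication type + year.
def pubmed_query(mesh_term, publication_type, year):
    return f'"{mesh_term}"[MeSH] AND {publication_type}[pt] AND {year}[dp]'

rct_query = pubmed_query("heart diseases", "randomized controlled trial", 2009)
sr_query = pubmed_query("heart diseases", "meta-analysis", 2009)
print(rct_query)
# "heart diseases"[MeSH] AND randomized controlled trial[pt] AND 2009[dp]
```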

Using this approach, Hoffmann et al. found 14,343 RCTs and 3,214 SRs published in 2009 in the selected (sub)specialties. There was a clear scatter across journals, but this scatter varied considerably among specialties:

“Otolaryngology had the least scatter (363 trials across 167 journals) and neurology the most (2770 trials across 896 journals). In only three subspecialties (lung cancer, chronic obstructive pulmonary disease, hearing loss) were 10 or fewer journals needed to locate 50% of trials. The scatter was less for systematic reviews: hearing loss had the least scatter (10 reviews across nine journals) and cancer the most (670 reviews across 279 journals). For some specialties and subspecialties the papers were concentrated in specialty journals; whereas for others, few of the top 10 journals were a specialty journal for that area.
Generally, little overlap occurred between the top 10 journals publishing trials and those publishing systematic reviews. The number of journals required to find all trials or reviews was highly correlated (r=0.97) with the number of papers for each specialty/ subspecialty.”

Previous work already suggested that this scatter of research has a long tail. Half of the publications appear in a minority of journals, whereas the remaining articles are scattered among many journals (see the figure below).

Click to enlarge and see the legends at BMJ 2012;344:e3223 [CC]

The good news is that SRs are less scattered and that general journals appear more often in the top 10 journals publishing SRs. Indeed for 6 of the 7 specialties and 4 of the 9 subspecialties, the Cochrane Database of Systematic Reviews had published the highest number of systematic reviews, publishing between 6% and 18% of all the systematic reviews published in each area in 2009. The bad news is that even keeping up to date with SRs seems a huge, if not impossible, challenge.

In other words, it is not sufficient for clinicians to rely on personal subscriptions to a few journals in their specialty (which is common practice). Hoffmann et al suggest several solutions to help clinicians cope with the increasing volume and scatter of research publications.

  • a central library of systematic reviews (but apparently the Cochrane Library fails to fulfill such a role according to the authors, because many reviews are out of date and are perceived as less clinically relevant)
  • a registry of planned and completed systematic reviews, such as PROSPERO (this makes it easier to locate SRs and reduces bias)
  • syntheses of evidence and synopses, like ACP Journal Club, which summarizes the best evidence in internal medicine
  • specialized databases that collate and critically appraise randomized trials and systematic reviews, such as one for physical therapy. In my personal experience, however, this database is often out of date and not comprehensive
  • journal scanning services like EvidenceUpdates, which scans over 120 journals, filters articles on the basis of quality, has practising clinicians rate them for relevance and newsworthiness, and makes them available as email alerts and in a searchable database. I use this service too, but besides the fact that not all specialties are covered, the rating of evidence may not always be objective (see a previous post [4])
  • the use of social media tools to alert clinicians to important new research.

Most of these solutions are (long) existing solutions that do not or only partly help to solve the information overload.

I was surprised that the authors didn’t propose the use of personalized alerts. PubMed’s My NCBI feature allows you to create automatic email alerts on a topic and to subscribe to electronic tables of contents (which could include ACP Journal Club). Suppose that a physician browses 10 journals, roughly covering 25% of the trials. He or she does not need to read all the other journals from cover to cover to avoid missing one potentially relevant trial. Instead, it is far more efficient to perform a topic search to filter relevant studies from the journals that seldom publish trials on the topic of interest. One could even use the search of Hoffmann et al. to achieve this.* Although in reality, most clinicians will have narrower fields of interest than all studies about endocrinology or neurology.
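One way to set up such an alert is to take a topic search and exclude the journals the physician already reads cover to cover. A sketch (the journal names are invented examples, not the paper’s actual top-10 lists):

```python
# Alert idea: topic query NOT the journals already browsed.
CORE_JOURNALS = ["Lancet", "BMJ", "Circulation"]

def alert_query(topic_query, core_journals):
    excluded = " OR ".join(f'"{journal}"[ta]' for journal in core_journals)
    return f"({topic_query}) NOT ({excluded})"

query = alert_query('"heart diseases"[MeSH] AND randomized controlled trial[pt]',
                    CORE_JOURNALS)
print(query)
```

Saved as a My NCBI search with email updates enabled, such a query would surface only the trials appearing outside one’s regular reading list.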

At our library we are working on creating deduplicated, easy-to-read alerts that collate tables of contents of certain journals with topic (and author) searches in PubMed, EMBASE and other databases. There are existing tools that do the same.

Another way to reduce the individual workload (reading) is to organize journal clubs, or even better, regular CATs (critically appraised topics). In the Netherlands, CATs are a compulsory item for residents. A few doctors do the work for many. Usually they choose topics that are clinically relevant (or for which the evidence is unclear).

The authors briefly mention that their search strategy might have missed some eligible papers and included some that are not truly RCTs or SRs, because they relied on PubMed’s publication types to retrieve RCTs and SRs. For systematic reviews this may be a greater problem than recognized, for the authors used meta-analysis[pt] to identify systematic reviews. Unfortunately PubMed has no publication type for systematic reviews, but it should be clear that there are many more systematic reviews than meta-analyses. Systematic reviews might even have a different scatter pattern than meta-analyses (i.e., the latter might be preferentially included in core journals).

Furthermore, not all meta-analyses and systematic reviews are reviews of RCTs (thus it is not completely fair to compare MAs with RCTs only). On the other hand, it is a (not discussed) limitation of this study that only interventions are considered. Nowadays physicians have many other questions than those related to therapy, such as questions about prognosis, harm and diagnosis.

I did a little, admittedly imperfect, search just to see whether the use of other search terms than meta-analysis[pt] would have any influence on the outcome. I searched for (1) meta-analysis[pt] and (2) systematic review[tiab] (title and abstract) in papers about endocrine diseases. Then I subtracted 1 from 2 (to analyze the systematic reviews not indexed as meta-analysis[pt]).
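Expressed as PubMed query strings, the subtraction looks roughly like this; the exact MeSH term is my guess at the search actually used:

```python
# Search 1: meta-analyses by publication type; search 2: "systematic
# review" in title/abstract; then 2 NOT 1 finds the reviews PubMed
# did not index as meta-analyses.
base = '"endocrine system diseases"[MeSH]'
search_1 = f"{base} AND meta-analysis[pt]"
search_2 = f"{base} AND systematic review[tiab]"
search_2_minus_1 = f"({search_2}) NOT meta-analysis[pt]"
print(search_2_minus_1)
```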



I analyzed the top 10/11 journals publishing these study types.

This little experiment suggests that:

  1. The precise scatter might differ per search: apparently the systematic review[tiab] search yielded different top 10/11 journals (for this sample) than the meta-analysis[pt] search (partially because Cochrane systematic reviews apparently don’t mention “systematic review” in the title and abstract?).
  2. The authors underestimate the number of systematic reviews: simply searching for systematic review[tiab] already found approx. 50% additional systematic reviews compared to meta-analysis[pt] alone.
  3. As expected (by me at least), many of the SRs and MAs were NOT dealing with interventions; see, for example, the first 5 hits (out of 108 and 236, respectively).
  4. Together these findings indicate that the true information overload is far greater than shown by Hoffmann et al. (not all systematic reviews are found, and of all available study designs only RCTs are searched).
  5. On the other hand, this indirectly shows that SRs are a better way to keep up to date than suggested: SRs also summarize non-interventional research (the ratio of SRs of RCTs to individual RCTs is much lower than suggested).
  6. It also means that the role of the Cochrane systematic reviews in aggregating RCTs is underestimated by the published graphs (the MA[pt] section is diluted with non-RCT systematic reviews, so the proportion of Cochrane SRs among the interventional MAs becomes larger).

Anyway, these imperfections do not contradict the main point of this paper: trials are scattered across hundreds of general and specialty journals, and “systematic reviews” (or really meta-analyses) do reduce the extent of scatter, but are still widely scattered, and mostly in different journals to those of the randomized trials.

Indeed, personal subscriptions to journals seem insufficient for keeping up to date.
Besides supplementing subscriptions with methods such as journal scanning services, I would recommend the use of personalized alerts from PubMed and several prefiltered sources, including an EBM search engine like TRIP.

*but I would broaden it to find all aggregate evidence, including ACP, Clinical Evidence, syntheses and synopses, not only meta-analyses.

**I do appreciate that one of the co-authors is a medical librarian: Sarah Thorning.


  1. Hoffmann, T., Erueti, C., Thorning, S., & Glasziou, P. (2012). The scatter of research: cross sectional comparison of randomised trials and systematic reviews across specialties. BMJ, 344, e3223. DOI: 10.1136/bmj.e3223
  2. Bastian, H., Glasziou, P., & Chalmers, I. (2010). Seventy-five trials and eleven systematic reviews a day: how will we ever keep up? PLoS Medicine, 7(9). DOI: 10.1371/journal.pmed.1000326
  3. How will we ever keep up with 75 trials and 11 systematic reviews a day
  4. Experience versus Evidence [1]. Opioid Therapy for Rheumatoid Arthritis Pain.

Can Guidelines Harm Patients?

2 05 2012

Recently I saw an intriguing “personal view” in the BMJ written by Grant Hutchison entitled “Can Guidelines Harm Patients Too?”. Hutchison is a consultant anesthetist with, as he calls it, chronic guideline fatigue syndrome. He underwent an acute exacerbation of his “condition” with the arrival of yet another set of guidelines in his email inbox. Hutchison:

On reviewing the level of evidence provided for the various recommendations being offered, I was struck by the fact that no relevant clinical trials had been carried out in the population of interest. Eleven out of 25 of the recommendations made were supported only by the lowest levels of published evidence (case reports and case series, or inference from studies not directly applicable to the relevant population). A further seven out of 25 were derived only from the expert opinion of members of the guidelines committee, in the absence of any guidance to be gleaned from the published literature.

Hutchison’s personal experience is supported by evidence from two articles [2,3].

One paper, published in JAMA in 2009 [2], concludes that ACC/AHA (American College of Cardiology and American Heart Association) clinical practice guidelines are largely developed from lower levels of evidence or expert opinion, and that the proportion of recommendations for which there is no conclusive evidence is growing. Only 314 of 2711 recommendations (median, 11%) are classified as level of evidence A, i.e., based on evidence from multiple randomized trials or meta-analyses. The majority of recommendations (1246/2711; median, 48%) are level of evidence C, i.e., based on expert opinion, case studies, or standards of care. Strikingly, only 245 of 1305 class I recommendations are based on the highest, level A, evidence (median, 19%).

Another paper, published in Archives of Internal Medicine in 2011 [3], reaches similar conclusions after analyzing the Infectious Diseases Society of America (IDSA) practice guidelines. Of the 4218 individual recommendations found, only 14% were supported by the strongest (level I) quality of evidence; more than half were based on level III evidence only. As with the ACC/AHA guidelines, only a small part (23%) of the strongest IDSA recommendations were based on level I evidence (in this case ≥1 randomized controlled trial, see below). And here too, the new recommendations were mostly based on level II and III evidence.

Although there is little to argue about Hutchison’s observations, I do not agree with his conclusions.

In his view, guidelines are equivalent to a bullet-pointed list or flow diagram, allowing busy practitioners to move on from practice based on mere anecdote and opinion. It therefore seems contradictory that half of the EBM guidelines are based on little more than anecdote (case series, extrapolation from other populations) and opinion. He then argues that guidelines, like other therapeutic interventions, should be considered in terms of the balance between benefit and risk, and that the risk associated with the dissemination of poorly founded guidelines must also be considered. One of those risks is that doctors will simply tend to adhere to the guidelines, and may even change their own (adequate) practice in the absence of any scientific evidence against it. If a patient is harmed despite punctilious adherence to the guideline rules, “it is easy to be seduced into assuming that the bad outcome was therefore unavoidable”. But perhaps harm was done by following the guideline…

First of all, the overall evidence shows that adherence to guidelines can improve patient outcomes and provide more cost-effective care (Naveed Mustfa, in a comment, refers to [4]).

Hutchison’s piece is opinion-based and rather driven by (understandable) gut feelings and implicit assumptions that also surround EBM in general.

  1. First, there is the assumption that guidelines are a fixed set of rules, like a protocol, and that there is no room for preferences (of both the doctor and the patient), interpretation and experience. In the same way as EBM is often degraded to “cookbook medicine”, EBM guidelines are turned into mere bullet-pointed lists made by a bunch of experts who just want to impose their opinions as truth.
  2. The second assumption (shared by many) is that evidence-based medicine is synonymous with “randomized controlled trials”. By analogy, only those EBM guideline recommendations “count” that are based on RCTs or meta-analyses.

Before I continue, I would strongly advise all readers (and certainly all EBM and guideline skeptics) to read the excellent and clearly written BMJ editorial by David Sackett et al. that deals with the misconceptions, myths and prejudices surrounding EBM: Evidence based medicine: what it is and what it isn’t [5].

Sackett et al define EBM as “the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients” [5]. Sackett emphasizes that “Good doctors use both individual clinical expertise and the best available external evidence, and neither alone is enough. Without clinical expertise, practice risks becoming tyrannised by evidence, for even excellent external evidence may be inapplicable to or inappropriate for an individual patient. Without current best evidence, practice risks becoming rapidly out of date, to the detriment of patients.”

Guidelines are meant to give recommendations based on the best available evidence. Guidelines should not be a set of rules, set in stone. Ideally, guidelines have gathered evidence in a transparent way and make it easier for the clinicians to grasp the evidence for a certain procedure in a certain situation … and to see the gaps.

Contrary to what many people think, EBM is not restricted to randomized trials and meta-analyses. It involves tracking down the best external evidence there is. As I explained in #NotSoFunny #16 – Ridiculing RCTs & EBM, evidence is not an all-or-nothing thing: RCTs (if well performed) are the most robust, but if they are not available we have to rely on “lower” evidence (from cohort studies to case-control studies to case series, or even expert opinion).
On the other hand, RCTs are often not even suitable for answering questions in domains other than therapy (etiology/harm, prognosis, diagnosis): by definition, the level of evidence for these kinds of questions will inevitably be low*. Also, for some interventions RCTs are not appropriate or feasible, or are too costly to perform (cesarean vs vaginal birth; experimental therapies; rare diseases; see also [3]).

It is also good to realize that guidance based on numerous randomized controlled trials is probably not, or only to a limited extent, applicable to groups of patients who are seldom included in an RCT: the cognitively impaired, patients with multiple comorbidities [6], old patients [6], children and (often) women.

Finally, not all RCTs are created equal (various forms of bias, surrogate outcomes, small sample sizes, short follow-up), and thus they should not all represent the same high level of evidence.*

Thus, in my opinion, low levels of evidence are not by definition problematic, even if they are the basis for strong recommendations, as long as it is clear how the recommendations were reached and as long as they are well underpinned (by whatever evidence or motivation). One could even see the exposed gaps in evidence as a positive thing, as they may highlight the need for clinical research in certain fields.

There is one BIG BUT: my assumption is that guidelines are “just” recommendations based on exhaustive and objective reviews of the existing evidence. No more, no less. This means that clinicians must have the freedom to deviate from the recommendations, based on their own expertise and/or the situation and/or the patient’s preferences. All the more so when the evidence on which the strong recommendations are based is scant. Sackett already warned of the possible hijacking of EBM by purchasers and managers (and, may I add, health insurers and governmental agencies) to cut the costs of health care and to impose “rules”.

I therefore think it is odd that the ACC/AHA guidelines prescribe that class I recommendations SHOULD be performed/administered even if they are based only on level C evidence (see figure).

I also find it odd that different guidelines use different nomenclatures. The ACC/AHA have class I, IIa, IIb and III recommendations and level A, B and C evidence, where level A represents sufficient evidence from multiple randomized trials and meta-analyses, whereas the strength of recommendations in the IDSA guidelines ranges from A through C (or D/E for recommendations against use), and the quality of evidence ranges from level I through III, where level I indicates evidence from just one properly randomized controlled trial. As explained in [3], this system was introduced to evaluate the effectiveness of preventive health care interventions in Canada (for which RCTs are apt).

Finally, guidelines and guideline makers should probably be more open to input and feedback from the people who apply these guidelines.


*The new GRADE (Grading of Recommendations Assessment, Development, and Evaluation) scoring system, which also takes good-quality observational studies into account, may offer a potential solution.

Another possibly relevant post at this blog: The Best Study Design for … Dummies

Taken from a summary of an ACC/AHA guideline at


  1. Hutchison, G. (2012). Guidelines can harm patients too. BMJ, 344: e2685. DOI: 10.1136/bmj.e2685
  2. Tricoci, P., Allen, J.M., Kramer, J.M., Califf, R.M., & Smith, S.C. Jr (2009). Scientific evidence underlying the ACC/AHA clinical practice guidelines. JAMA, 301(8), 831-841. PMID: 19244190
  3. Lee, D., & Vielemeyer, O. (2011). Analysis of overall level of evidence behind Infectious Diseases Society of America practice guidelines. Archives of Internal Medicine, 171(1), 18-22. DOI: 10.1001/archinternmed.2010.482
  4. Menéndez, R., Reyes, S., Martínez, R., de la Cuadra, P., Vallés, J.M., & Vallterra, J. (2007). Economic evaluation of adherence to treatment guidelines in nonintensive care pneumonia. European Respiratory Journal, 29(4), 751-756. PMID: 17005580
  5. Sackett, D., Rosenberg, W., Gray, J., Haynes, R., & Richardson, W. (1996). Evidence based medicine: what it is and what it isn’t. BMJ, 312(7023), 71-72. DOI: 10.1136/bmj.312.7023.71
  6. Aylett, V. (2010). Do geriatricians need guidelines? BMJ, 341: c5340. DOI: 10.1136/bmj.c5340

Silly Sunday #50: Molecular Designs & Synthetic DNA

23 04 2012

As a teenager I found it hard to picture the 3D structure of DNA, proteins and other molecules. Remember, we had no computers back then, no videos, no 3D pictures and no 3D models.

I tried to fill the gap by making DNA molecules out of (used) matches and colored clay, based on descriptions in dry (and dull, 2D) textbooks, but you can imagine that these creative 3D clay figures bore little resemblance to the real molecular structures.

But luckily things have changed over the last 40 years. Not only do we have computers and videos, there are also ready-made molecular models, specially designed for education.

Oh, how I wish my chemistry teachers had had those DNA starter kits.

Hattip: Joanne Manaster (@sciencegoddess) on Twitter.

Curious? Here is the Products Catalog of

Of course, such “synthesis” (copying) of existing molecules, though very useful for educational purposes, is overshadowed by the recent creation of molecules other than DNA and RNA [xeno-nucleic acids (XNAs)] “that can be used to store and propagate information and have the capacity for Darwinian evolution”.

But that is quite a different story.


Jeffrey Beall’s List of Predatory, Open-Access Publishers, 2012 Edition

19 12 2011

Perhaps you remember that I previously wrote [1] about non-existing and/or low-quality, scammy open access journals. I specifically wrote about the Medical Science Journals of the series, which comprises 45 titles, none of which has published any article yet.

Another blogger, David M [2], also had negative experiences with fake peer-review invitations from sciencejournals. He even noticed plagiarism.

Later I occasionally came across other posts about open access spam, like those of Per Ola Kristensson [3] (specifically about the Bentham, Hindawi and InTech OA publishers), of Peter Murray-Rust [4], a chemist interested in OA (about spam journals and conferences, specifically Scientific Research Publishing), and of Alan Dove PhD [5] (specifically about The Journal of Computational Biology and Bioinformatics Research (JCBBR), published by Academic Journals).

But now it appears that there is an entire list of “Predatory, Open-Access Publishers”. This list was created by Jeffrey Beall, academic librarian at the University of Colorado Denver. He just updated the list for 2012 here (PDF-format).

According to Jeffrey predatory, open-access publishers

are those that unprofessionally exploit the author-pays model of open-access publishing (Gold OA) for their own profit. Typically, these publishers spam professional email lists, broadly soliciting article submissions for the clear purpose of gaining additional income. Operating essentially as vanity presses, these publishers typically have a low article acceptance threshold, with a false-front or non-existent peer review process. Unlike professional publishing operations, whether subscription-based or ethically-sound open access, these predatory publishers add little value to scholarship, pay little attention to digital preservation, and operate using fly-by-night, unsustainable business models.

Jeffrey recommends not doing business with the following (illegitimate) publishers, whether by submitting article manuscripts, serving on editorial boards, buying advertising, etc. According to Jeffrey, “there are numerous traditional, legitimate journals that will publish your quality work for free, including many legitimate, open-access publishers”.

(For sake of conciseness, I only describe the main characteristics, not always using the same wording; please see the entire list for the full descriptions.)

Watchlist: publishers that show some characteristics of a predatory, open-access publisher
  • Hindawi: way too many journals to be properly handled by one publisher
  • MedKnow Publications: vague business model; charges for the PDF version
  • PAGEPress: many dead links, a prominent link to PayPal
  • Versita Open: paid subscription for the print form; unclear business model


How complete and reliable is this list?

Clearly, this list is quite exhaustive. Jeffrey did a great job listing many dodgy OA journals, and we should treat (many of) these OA publishers with caution. Another good thing is that the list is updated annually.

( described in my previous post is not (yet) on the list 😉  but I will inform Jeffrey).

Personally, I would have preferred a distinction between truly bogus or spammy journals and journals that merely seem to have “too many journals to properly handle” or that ask (too much) money for subscriptions or from the author. The scientific content of the latter may still be good (enough).

Furthermore, I would rather see a neutral description of what exactly is wrong with each journal, especially because “Beall’s list” is a list and not a blog post (or is it?). Sometimes the description doesn’t convince me that the journal is really bogus or predatory.

Examples of subjective portrayals:

  • Dove Press:  This New Zealand-based medical publisher boasts high-quality appearing journals and articles, yet it demands a very high author fee for publishing articles. Its fleet of journals is large, bringing into question how it can properly fulfill its promise to quickly deliver an acceptance decision on submitted articles.
  • Libertas Academia “The tag line under the name on this publisher’s page is “Freedom to research.” It might better say “Freedom to be ripped off.” 
  • Hindawi  .. This publisher has way too many journals than can be properly handled by one publisher, I think (…)

I do like funny posts, but only if it is clear that the post is intended to be funny, like the one by Alan Dove PhD about JCBBR:

JCBBR is dedicated to increasing the depth of research across all areas of this subject.

Translation: we’re launching a new journal for research that can’t get published anyplace else.

The journal welcomes the submission of manuscripts that meet the general criteria of significance and scientific excellence in this subject area.

We’ll take pretty much any crap you excrete.

Hattip: Catherine Arnott Smith, PhD at the MedLib-L list.

  1. I Got the Wrong Request from the Wrong Journal to Review the Wrong Piece. The Wrong kind of Open Access Apparently, Something Wrong with this Inherently…
  2. A peer-review phishing scam
  3. Academic Spam and Open Access Publishing
  4. What’s wrong with Scholarly Publishing? New Journal Spam and “Open Access”
  5. From the Inbox: Journal Spam
  6. Beall’s List of Predatory, Open-Access Publishers. 2012 Edition
  7. Silly Sunday #42 Open Access Week around the Globe

Happy Anniversary Highlight HEALTH, ScienceRoll & Sterile Eye!

13 12 2011

Starting a blog is easy, but maintaining one costs time and effort, especially when you have a job or are studying (and a private life as well).

This blog will soon celebrate its 4th anniversary (February 2012).

I’m happy to notice that many established (bio)medical & library blogs, that inspired me to start blogging, are still around.

Like one of the greatest medical blogs, CasesBlog by Dr Ves Dimov. And the medlib blogs The Search Principle blog by Dean Giustini and the Krafty Librarian by Michelle Kraft.

All these blogs are still going strong.

The same is true for the blog ScienceRoll by Bertalan Mesko (emphasis on health 2.0), that celebrated its 5th anniversary last month. That same month Sterile Eye (Life, death and surgery through a lens) celebrated its 4th year of existence.

This month Highlight Health (main author Walter Jessen) celebrates its 5th anniversary.

And the nice thing is that Highlight Health celebrates this with prize pack giveaways.

There are 4 drawings; each prize pack consists of the following:

All you have to do is subscribe to the blog via email alert. People who are already subscribers, like me, are also eligible to participate in the drawings (see this post for all the info).

With so many ‘golden oldies’ around, I wonder about you, my audience. Do you blog? And if you do, for how long? Please tell me in the poll below.

If you are a (bio)medical, library or science blogger (blogging in English), I would appreciate it if you could fill in this spreadsheet as well. You are free to edit the spreadsheet and add the names of other bloggers too.