#EAHIL2012 CEC 2: Visibility & Impact – Library’s New Role to Enhance Visibility of Researchers

4 07 2012

This week I’m blogging at (and mostly about) the 13th EAHIL conference in Brussels. EAHIL stands for European Association for Health Information and Libraries.

The second Continuing Education Course (CEC) I followed was given by Tiina Heino and Katri Larmo of the Terkko Meilahti Campus Library at the University of Helsinki in Finland.

The full title of the course was Visibility and impact – library’s new role: How the library can support the researcher to get visibility and generate impact to researcher’s work. You can read the abstract here.

The hands-on workshop mainly concentrated on the social bookmarking sites Connotea and Mendeley, and on Altmetric.

Furthermore we got information on CiteULike, ORCID, Faculty of 1000 Posters and Pinterest. Services developed at Terkko, such as ScholarChart and TopCited Articles, were also briefly demonstrated.

What I especially liked about the hands-on session was that the tutors had prepared a wikispace with all the information and links on the main page (https://visibility2012.wikispaces.com) and a separate page for each participant to edit (here is my page). You could add links to the accounts you created and embed widgets, for instance for Mendeley.

There was sufficient time to practice and try the tools. And despite the great number of participants there was ample room for questions (& even for making a blog draft ;)).

The main message of the tutors is that the process of publishing scientific research doesn’t end with the publication of the article: what happens after the research has been published is equally important. Visibility and impact in the scientific community and in society are crucial for moving the research forward, as well as for obtaining research funding and advancing the researcher’s career. The figure below (taken from the presentation) visualizes this process.

The tutors discussed ORCID (Open Researcher and Contributor ID), which will be introduced later this year. It is meant to solve the author-name ambiguity problem in scholarly communication through a central registry of unique identifiers for each author (because author names alone cannot reliably identify every scholarly author). Authors will be able to create, manage and share their ORCID record without a membership fee. For further information see the various publications and presentations by Martin Fenner. I found this one during the course while browsing Mendeley.

Once published, the author’s work can be promoted using bookmarking tools like CiteULike, Connotea and Mendeley. You can easily register for Connotea and Mendeley using your Facebook account. These social bookmarking tools are also useful for networking, i.e. for discovering individuals and groups in the same field of interest. It is easy to synchronize your Mendeley library with your CiteULike account.

Mendeley is available in a desktop and a web version. The web version offers a public profile for researchers, a catalog of documents, and collaborative groups (the “cloud” side of Mendeley). The desktop version of Mendeley is especially suited for reference management and organizing your PDFs. That said, Mendeley seems most suitable for serendipitous use (clicking and importing a reference you happen to see and like) and less useful for managing and deduplicating large numbers of records, e.g. for a systematic review.
Also, during the course it was not possible to import several PubMed records at once into either CiteULike or Mendeley.

What struck me when I tried Mendeley is that there were many small or dead groups. A search for “cochrane”, for instance, yielded one large group, Cochrane QES Register, owned by Andrew Booth, and 3 groups with one member (thus not really groups), with 0 (!) to 6 papers each! It looks like people try Mendeley and other tools for just a short while. Indeed, most papers I looked up in PubMed were not bookmarked at all. It makes you wonder how widespread the use of these bookmarking tools really is. It probably doesn’t help that there are so many tools with different purposes and possibilities.

Another tool we tried was Altmetric. This is a free bookmarklet that lets you track the online conversation around a scholarly article. It shows the tweets, blog posts, Google+ and Facebook mentions, and the number of bookmarks on Mendeley, CiteULike and Connotea.

I tried the tool on a paper I blogged about, i.e. Seventy-Five Trials and Eleven Systematic Reviews a Day: How Will We Ever Keep Up?

The bookmarklet showed the tweets and the blogposts mentioning the paper.

Indeed, Altmetric correctly referred to my blog (even to 2 posts).
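As an aside for readers who prefer a programmatic check over the bookmarklet: Altmetric also offers a simple public API. The sketch below (Python, using the requests library) queries it for the DOI of the Bastian paper; the endpoint and JSON field names reflect my reading of Altmetric’s public documentation rather than anything shown in the course, so treat them as assumptions.

```python
# Minimal sketch: look up the online attention for one paper via Altmetric's
# public API. Endpoint and field names are assumptions; adjust if the API differs.
import requests

DOI = "10.1371/journal.pmed.1000326"  # Bastian et al. 2010 (see references below)

resp = requests.get(f"https://api.altmetric.com/v1/doi/{DOI}", timeout=10)
if resp.status_code == 404:
    print("No Altmetric record for this DOI yet.")
else:
    resp.raise_for_status()
    data = resp.json()
    print("Title:          ", data.get("title"))
    print("Tweets:         ", data.get("cited_by_tweeters_count", 0))
    print("Blog posts:     ", data.get("cited_by_feeds_count", 0))
    print("Facebook walls: ", data.get("cited_by_fbwalls_count", 0))
    print("Altmetric score:", data.get("score"))
```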

I liked Altmetric*, but saying that it is suitable for scientific metrics is a step too far. For people interested in this topic I would like to refer, again, to a post by Martin Fenner on altmetrics (in general). He stresses that usage metrics have their limitations because of their proneness to “gaming” (cheating).

But the current workshop didn’t address the shortcomings of the tools, for it was meant as a first practical acquaintance with the web 2.0 tools.

For the other tools (Faculty of 1000 Posters, Pinterest) and the services developed in Terkko, such as ScholarChart and TopCited Articles,  see the wikipage and the presentation:

*Coincidentally I’m preparing a post on handy Chrome extensions to look for tweets about a webpage. Altmetric is another tool that seems very suitable for this purpose.






Friday Foolery #51 Statistically Funny

1 06 2012

Epidemiologists, people working in the EBM field and, above all, statisticians are said to have no sense of humor.*

Hilda Bastian is a clear exception to this rule.

I met Hilda a few years ago at a Cochrane Colloquium. At that time she was working as a consumer advocate in Australia. Nowadays she is editor and curator of PubMed Health. According to her Twitter bio (she tweets as @hildabast) she is (still) “Interested in effective communication as well as effective health care”. She also writes important articles, like “Seventy-Five Trials and Eleven Systematic Reviews a Day: How Will We Ever Keep Up?” (PLoS Medicine 2010), reviewed on this blog.

Today I learned she also has a great creative talent in cartoon drawing, in the field of …  yeah… EBM, epidemiology & statistics.

Below is one of her cartoons, which fits in well with a recent BMJ article by Ray Moynihan, retweeted by Hilda: Preventing overdiagnosis: how to stop harming the healthy. In her post she refers to another article, Overdiagnosis in cancer (JNCI 2010), saying:

“Finding and aggressively treating non-symptomatic disease that would never have made people sick, inventing new conditions and re-defining the thresholds for old ones: will there be anyone healthy left at all?”

I invite you to visit Hilda’s blog Statistically Funny (“Commenting on the science of unbiased health research with cartoons”) and to enjoy her cartoons, which are often inspired by recent publications in the field.

* My post #NotSoFunny #16: Ridiculing RCTs and EBM even led David Rind to sigh that “EBM folks are not necessarily known for their great senses of humor” (so I’m no exception to the rule ;)).





Even the Scientific American Blog Links to Spammy Online Education Affiliate Sites…

28 05 2012

On numerous occasions [1,2,3] I have warned against top Twitter and Blog lists spread by education affiliate sites.
Sites like accreditedonlinecolleges.com, onlinecolleges.com, onlinecollegesusa.org, onlinedegrees.com and mbaonline.com.

While some of the published Twitter Top 50 lists and Blog top 100 lists may be interesting as such (or may flatter you if you’re on it), the only intention of the makers is to lure you to their site and earn money through click-throughs.

Or as David Bradley from Sciencebase put it much more eloquently than I could (in a previous comment):

“I get endless emails from people with these kinds of sites telling me I am on such and such a list… I even get different messages claiming to be from different people, but actually the same email address. They’re splogs and link bait scams almost always and unfortunately some people get suckered into linking to them, giving them credence and publicity. They’re a pain in the ‘arris.”

These education sites do not only produce these “fantabulous” top 50 and 100 lists.
I also receive many requests for guest-authorships, and undoubtedly I’m not the only one.

Recently I also received a request from mbaonlinedegrees to post an infographic:

“While searching for resources about the internet, I came across your site and noticed that you had posted the ‘State of the Internet’ video. I wanted to reach out as I have an infographic about the topic that I think would be a great fit for your site.”

But this mba.onlinedegrees infographic was a simple, yes even simplistic, summary of “a day at the internet”:

How many emails are sent, how many blog posts are written, how many people visit Facebook and how many status updates are posted, and so forth and so on. Plus: internet users spend 14.6 minutes viewing porn online; the average fap session is 12 minutes…
(How would they know?)

Anyway, not the kind of information my readers are looking for. So I didn’t write a post embedding the infographic.

Thus these online education affiliate sites produce top 50 and 100 lists, blogposts, guestposts and infographics and promote their use by actively approaching bloggers and people on Twitter.

I was surprised to find¹, however, that even the high-quality Scientific American science blog Observations (Opinion, arguments & analyses from the editors of Scientific American) blindly linked to such a spammy infographic (just adding a short, meaningless introduction) [4].

That is an easy way to increase the number of blog posts…

And according to an insider commenting on the article, the actual information in the infographic is even simply wrong:

“These MBAs have a smaller brain than accountants. They don’t know the difference between asset, revenue and income”.

If such a high-authority science blog does not know how to separate the wheat from the chaff, does not recognize splogs as such, and does not even (at the very least) filter and check the information offered… then who can… who will…?³

Sometimes I feel like a miniature version of Don Quixote…

————-

NOTES

1. HATTIP: Again, @Nutsci brought this to my attention.

2. In response to my post, @AdamMerberg tweeted a link to a very interesting article in the Atlantic by Megan McArdle, issuing a plea to bloggers to help stop this plague in its tracks (i.e. saying: “The reservoir of this disease of erroneous infographics is internet marketers who don’t care whether the information in their graphics is right … just so long as you link it.”). She even uses an infographic herself to deliver her message. Highly recommended!

3. This doesn’t mean that Scientific American doesn’t produce good blog posts or good scientific papers. Just the other day, I tweeted:

The referenced Scientific American article puts a new meta-analysis of statins, and an accompanying editorial in the Lancet, into broader perspective. The meta-analysis suggests that healthy people over 50 should take cholesterol-lowering drugs as a preventive measure. Scientific American questions this by also addressing the background risks (low for most people over 50), the possible risks of statin use, cost-effectiveness, and the issue of funding by pharmaceutical companies and other types of bias.

References

  1. Health and Science Twitter & Blog Top 50 and 100 Lists. How to Separate the Wheat from the Chaff. (laikaspoetnik.wordpress.com)
  2. Beware of Top 50 “Great Tools to Double Check your Doctor” or whatever Lists. (laikaspoetnik.wordpress.com)
  3. Vanity is the Quicksand of Reasoning: Beware of Top 100 and 50 lists! (laikaspoetnik.wordpress.com)
  4. What’s Smaller than Mark Zuckerberg? (blogs.scientificamerican.com/observations/)




The Scatter of Medical Research and What to do About it.

18 05 2012

Paul Glasziou, GP and professor in Evidence Based Medicine, co-authored a new article in the BMJ [1]. Similar to another paper [2] I discussed before [3], this paper deals with the difficulty for clinicians of staying up to date with the literature. But where the previous paper [2,3] highlighted the mere increase in the number of research articles over time, the current paper looks at the scatter of randomized controlled trials (RCTs) and systematic reviews (SRs) across different journals cited in one year (2009) in PubMed.

Hoffmann et al. analyzed 7 specialties and 9 subspecialties that are considered to contribute most to the burden of disease in high-income countries.

They followed a relatively straightforward method for identifying the publications. Each search string consisted of a MeSH term (controlled term) to identify the selected disease or disorders, a publication type [pt] to identify the type of study, and the year of publication. For example, the search strategy for randomized trials in cardiology was: “heart diseases”[MeSH] AND randomized controlled trial[pt] AND 2009[dp]. (When searching “heart diseases” as a MeSH term, narrower terms are also searched.) Meta-analysis[pt] was used to identify systematic reviews.
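As an illustration (not the authors’ actual pipeline): the same search string can be run programmatically against PubMed via NCBI’s E-utilities. Below is a minimal Python sketch, assuming the requests library; a live query today will of course return different numbers than the 2009 snapshot analysed in the paper.

```python
# Minimal sketch: get the PubMed hit count for the cardiology example search
# via NCBI E-utilities (esearch). Illustration only; live counts will differ
# from the figures reported in the paper.
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
query = '"heart diseases"[MeSH] AND randomized controlled trial[pt] AND 2009[dp]'

resp = requests.get(ESEARCH, params={"db": "pubmed", "term": query,
                                     "retmode": "json"}, timeout=10)
resp.raise_for_status()
count = int(resp.json()["esearchresult"]["count"])
print(f"Randomized trials in cardiology (2009): {count}")
```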

Using this approach, Hoffmann et al. found 14,343 RCTs and 3,214 SRs published in 2009 in the selected (sub)specialties. There was a clear scatter across journals, but this scatter varied considerably among specialties:

“Otolaryngology had the least scatter (363 trials across 167 journals) and neurology the most (2770 trials across 896 journals). In only three subspecialties (lung cancer, chronic obstructive pulmonary disease, hearing loss) were 10 or fewer journals needed to locate 50% of trials. The scatter was less for systematic reviews: hearing loss had the least scatter (10 reviews across nine journals) and cancer the most (670 reviews across 279 journals). For some specialties and subspecialties the papers were concentrated in specialty journals; whereas for others, few of the top 10 journals were a specialty journal for that area.
Generally, little overlap occurred between the top 10 journals publishing trials and those publishing systematic reviews. The number of journals required to find all trials or reviews was highly correlated (r=0.97) with the number of papers for each specialty/ subspecialty.”

Previous work had already suggested that this scatter of research has a long tail: half of the publications appear in a minority of journals, whereas the remaining articles are scattered across many other journals (see Fig. below).

Click to enlarge and see legends at BMJ 2012;344:e3223 [CC]

The good news is that SRs are less scattered and that general journals appear more often among the top 10 journals publishing SRs. Indeed, for 6 of the 7 specialties and 4 of the 9 subspecialties, the Cochrane Database of Systematic Reviews had published the highest number of systematic reviews, publishing between 6% and 18% of all the systematic reviews published in each area in 2009. The bad news is that even keeping up to date with SRs seems a huge, if not impossible, challenge.

In other words, it is not sufficient for clinicians to rely on personal subscriptions to a few journals in their specialty (which is common practice). Hoffmann et al suggest several solutions to help clinicians cope with the increasing volume and scatter of research publications.

  • a central library of systematic reviews (but apparently the Cochrane Library fails to fulfill such a role according to the authors, because many reviews are out of date and are perceived as less clinically relevant)
  • a registry of planned and completed systematic reviews, such as PROSPERO (this makes it easier to locate SRs and reduces bias)
  • syntheses of evidence and synopses, like ACP Journal Club, which summarizes the best evidence in internal medicine
  • Specialised databases that collate and critically appraise randomized trials and systematic reviews, like www.pedro.org.au for physical therapy. In my personal experience, however, this database is often out of date and not comprehensive
  • journal scanning services like EvidenceUpdates (from mcmaster.ca), which scans over 120 journals, filters articles on the basis of quality, has practising clinicians rate them for relevance and newsworthiness, and makes them available as email alerts and in a searchable database. I use this service too, but besides the fact that not all specialties are covered, the rating of evidence may not always be objective (see previous post [4])
  • The use of social media tools to alert clinicians to important new research.

Most of these solutions have long existed, yet they solve the information overload only partly, if at all.

I was surprised that the authors didn’t propose the use of personalized alerts. PubMed’s My NCBI feature allows you to create automatic email alerts on a topic and to subscribe to electronic tables of contents (which could include ACP Journal Club). Suppose that a physician browses 10 journals that together cover roughly 25% of the trials in his or her field. He or she does not need to read all the other journals from cover to cover to avoid missing a potentially relevant trial. Instead, it is far more efficient to perform a topic search that filters relevant studies from the journals that seldom publish trials on the topic of interest (a hypothetical example is sketched below). One could even use the search of Hoffmann et al. to achieve this.* Although in reality, most clinical researchers will have narrower fields of interest than all studies about endocrinology or neurology.
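To make that concrete, here is a small, purely hypothetical sketch of how such a saved search (for instance a My NCBI e-mail alert) could be composed: a topic filter for new trials, minus the core journals the physician already reads. The journal names and topic are made-up examples, not a recommendation.

```python
# Hypothetical sketch: compose a PubMed query that could be saved as an alert.
# It catches new trials on a topic while excluding journals the physician
# already browses from cover to cover. Journals and topic are made-up examples.
core_journals = ["Circulation", "Eur Heart J", "J Am Coll Cardiol"]
topic = '"heart failure"[MeSH]'
study_filter = "randomized controlled trial[pt]"

already_read = " OR ".join(f'"{journal}"[ta]' for journal in core_journals)
alert_query = f"({topic}) AND {study_filter} NOT ({already_read})"
print(alert_query)
# ("heart failure"[MeSH]) AND randomized controlled trial[pt]
#   NOT ("Circulation"[ta] OR "Eur Heart J"[ta] OR "J Am Coll Cardiol"[ta])
```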

At our library we are working on creating deduplicated, easy-to-read alerts that combine the tables of contents of certain journals with topic (and author) searches in PubMed, EMBASE and other databases. There are existing tools that do the same.

Another way to reduce the individual work (reading) load is to organize journal clubs or, even better, regular CATs (critically appraised topics). In the Netherlands, CATs are a compulsory item for residents. A few doctors do the work for many. Usually they choose topics that are clinically relevant (or for which the evidence is unclear).

The authors briefly mention that their search strategy might have missed some eligible papers and included some that are not truly RCTs or SRs, because they relied on PubMed’s publication type to retrieve RCTs and SRs. For systematic reviews this may be a greater problem than recognized, for the authors used meta-analysis[pt] to identify systematic reviews. Unfortunately PubMed has no publication type for systematic reviews, but it should be clear that there are many more systematic reviews than meta-analyses. Systematic reviews might even have a different scatter pattern than meta-analyses (i.e. the latter might be preferentially included in core journals).

Furthermore, not all meta-analyses and systematic reviews are reviews of RCTs (thus it is not entirely fair to compare MAs with RCTs only). On the other hand, it is a (not discussed) omission of this study that only interventions are considered. Nowadays physicians have many questions other than those related to therapy, such as questions about prognosis, harm and diagnosis.

I did a little, imperfect search just to see whether using other search terms than meta-analysis[pt] would have any influence on the outcome. I searched for (1) meta-analysis[pt] and (2) systematic review[tiab] (title and abstract) for papers about endocrine diseases. Then I subtracted 1 from 2 (to analyse the systematic reviews not indexed as meta-analysis[pt]).

Thus:

(ENDOCRINE DISEASES[MESH] AND SYSTEMATIC REVIEW[TIAB] AND 2009[DP]) NOT META-ANALYSIS[PT]

I analyzed the top 10/11 journals publishing these study types.
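For anyone who wants to repeat (or update) this little experiment, here is a rough Python sketch using NCBI E-utilities: it counts both searches and tallies the journals of the first few hundred records via esummary. The “source” field name for the journal is my assumption, and the exercise remains as imperfect as the original.

```python
# Rough sketch: repeat the meta-analysis[pt] vs systematic review[tiab] comparison
# for endocrine diseases (2009) and tally the journals of the retrieved records.
# Illustration only; live counts differ and only the first 300 records are tallied.
from collections import Counter
import requests

BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"
searches = {
    "meta-analysis[pt]":
        "endocrine diseases[MeSH] AND meta-analysis[pt] AND 2009[dp]",
    "systematic review[tiab] NOT meta-analysis[pt]":
        "(endocrine diseases[MeSH] AND systematic review[tiab] AND 2009[dp])"
        " NOT meta-analysis[pt]",
}

for label, term in searches.items():
    r = requests.get(f"{BASE}/esearch.fcgi",
                     params={"db": "pubmed", "term": term,
                             "retmode": "json", "retmax": 300}, timeout=20)
    r.raise_for_status()
    result = r.json()["esearchresult"]
    ids = result["idlist"]
    print(f"\n{label}: {result['count']} records")
    if not ids:
        continue

    # Fetch record summaries (POST, since the id list can be long) and count
    # the journals; the 'source' field name is assumed.
    s = requests.post(f"{BASE}/esummary.fcgi",
                      data={"db": "pubmed", "id": ",".join(ids),
                            "retmode": "json"}, timeout=30)
    s.raise_for_status()
    docs = s.json()["result"]
    journals = Counter(docs[uid].get("source", "?") for uid in ids)
    for journal, n in journals.most_common(10):
        print(f"  {n:3d}  {journal}")
```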

This little experiment suggests that:

  1. The precise scatter might differ per search: apparently the systematic review[tiab] search yielded different top 10/11 journals (for this sample) than the meta-analysis[pt] search (partially because Cochrane systematic reviews apparently don’t mention “systematic review” in the title or abstract?).
  2. The authors underestimate the number of systematic reviews: simply searching for systematic review[tiab] already found approximately 50% additional systematic reviews compared with meta-analysis[pt] alone.
  3. As expected (by me at least), many of the SRs and MAs were NOT dealing with interventions, i.e. see the first 5 hits (out of 108 and 236 respectively).
  4. Together these findings indicate that the true information overload is far greater than shown by Hoffmann et al. (not all systematic reviews are found, and of all available study designs only RCTs are searched).
  5. On the other hand, this indirectly shows that SRs are a better way to keep up to date than suggested: SRs also summarize non-interventional research (the ratio of SRs of RCTs to individual RCTs is much lower than suggested).
  6. It also means that the role of the Cochrane systematic reviews in aggregating RCTs is underestimated by the published graphs (the MA[pt] set is diluted with non-RCT systematic reviews, so the proportion of Cochrane SRs among the interventional MAs becomes larger).

Well anyway, these imperfections do not contradict the main point of this paper: that trials are scattered across hundreds of general and specialty journals, and that “systematic reviews” (or really meta-analyses) do reduce the extent of scatter but are still widely scattered themselves, and mostly in different journals from those publishing the randomized trials.

Indeed, personal subscriptions to journals seem insufficient for keeping up to date.
Besides supplementing subscriptions with methods such as journal scanning services, I would recommend the use of personalized alerts from PubMed and several pre-filtered sources, including an EBM search engine like TRIP (www.tripdatabase.com).

*but I would broaden it to find all aggregate evidence, including ACP, Clinical Evidence, syntheses and synopses, not only meta-analyses.

**I do appreciate that one of the co-authors is a medical librarian: Sarah Thorning.

References

  1. Hoffmann, T., Erueti, C., Thorning, S., & Glasziou, P. (2012). The scatter of research: cross sectional comparison of randomised trials and systematic reviews across specialties. BMJ, 344, e3223. DOI: 10.1136/bmj.e3223
  2. Bastian, H., Glasziou, P., & Chalmers, I. (2010). Seventy-Five Trials and Eleven Systematic Reviews a Day: How Will We Ever Keep Up? PLoS Medicine, 7 (9) DOI: 10.1371/journal.pmed.1000326
  3. How will we ever keep up with 75 trials and 11 systematic reviews a day (laikaspoetnik.wordpress.com)
  4. Experience versus Evidence [1]. Opioid Therapy for Rheumatoid Arthritis Pain. (laikaspoetnik.wordpress.com)




Can Guidelines Harm Patients?

2 05 2012

Recently I saw an intriguing “personal view” in the BMJ written by Grant Hutchison, entitled “Can Guidelines Harm Patients Too?” [1]. Hutchison is a consultant anesthetist with - as he calls it - chronic guideline fatigue syndrome. He underwent an acute exacerbation of his “condition” with the arrival of another set of guidelines in his email inbox. Hutchison:

On reviewing the level of evidence provided for the various recommendations being offered, I was struck by the fact that no relevant clinical trials had been carried out in the population of interest. Eleven out of 25 of the recommendations made were supported only by the lowest levels of published evidence (case reports and case series, or inference from studies not directly applicable to the relevant population). A further seven out of 25 were derived only from the expert opinion of members of the guidelines committee, in the absence of any guidance to be gleaned from the published literature.

Hutchison’s personal experience is supported by evidence from two articles [2,3].

One paper, published in JAMA in 2009 [2], concludes that ACC/AHA (American College of Cardiology / American Heart Association) clinical practice guidelines are largely developed from lower levels of evidence or expert opinion, and that the proportion of recommendations for which there is no conclusive evidence is growing. Only 314 of 2711 recommendations (median, 11%) are classified as level of evidence A, i.e. recommendations based on evidence from multiple randomized trials or meta-analyses. The majority of recommendations (1246/2711; median, 48%) are level of evidence C, i.e. based on expert opinion, case studies, or standards of care. Strikingly, only 245 of 1305 class I recommendations are based on the highest level A evidence (median, 19%).

Another paper, published in Archives of Internal Medicine in 2011 [3], reaches similar conclusions for the Infectious Diseases Society of America (IDSA) practice guidelines. Of the 4218 individual recommendations found, only 14% were supported by the strongest (level I) quality of evidence; more than half were based on level III evidence only. As with the ACC/AHA guidelines, only a small part (23%) of the strongest IDSA recommendations were based on level I evidence (in this case ≥1 randomized controlled trial, see below). And here too, the new recommendations were mostly based on level II and III evidence.

Although there is little to argue about Hutchison’s observations, I do not agree with his conclusions.

In his view, guidelines are equivalent to a bullet-pointed list or flow diagram that allows busy practitioners to move on from practice based on mere anecdote and opinion. It therefore seems contradictory that half of the EBM guidelines are based on little more than anecdote (case series, extrapolation from other populations) and opinion. He then argues that guidelines, like other therapeutic interventions, should be considered in terms of the balance between benefit and risk, and that the risk associated with the dissemination of poorly founded guidelines must also be considered. One of those risks is that doctors will simply tend to adhere to the guidelines, and may even change their own (adequate) practice in the absence of any scientific evidence against it. If a patient is harmed despite punctilious adherence to the guideline rules, “it is easy to be seduced into assuming that the bad outcome was therefore unavoidable”. But perhaps harm was done by following the guideline…

First of all, overall evidence shows that adherence to guidelines can improve patient outcomes and provide more cost-effective care (Naveed Mustfa in a comment refers to [4]).

Hutchison’s piece is opinion-based and rather driven by (understandable) gut feelings and implicit assumptions that also surround EBM in general.

  1. First there is the assumption that guidelines are a fixed set of rules, like a protocol, and that there is no room for preferences (of either the doctor or the patient), interpretations and experience. In the same way as EBM is often reduced to “cookbook medicine”, EBM guidelines are turned into mere bullet-pointed lists made by a bunch of experts who just want to impose their opinions as truth.
  2. The second assumption (shared by many) is that evidence based medicine is synonymous with “randomized controlled trials”. By analogy, only those EBM guideline recommendations “count” that are based on RCTs or meta-analyses.

Before I continue, I would strongly advise all readers (and certainly all EBM and guideline skeptics) to read this excellent and clearly written BMJ editorial by David Sackett et al. that deals with the misconceptions, myths and prejudices surrounding EBM: Evidence based medicine: what it is and what it isn’t [5].

Sackett et al define EBM as “the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients” [5]. Sackett emphasizes that “Good doctors use both individual clinical expertise and the best available external evidence, and neither alone is enough. Without clinical expertise, practice risks becoming tyrannised by evidence, for even excellent external evidence may be inapplicable to or inappropriate for an individual patient. Without current best evidence, practice risks becoming rapidly out of date, to the detriment of patients.”

Guidelines are meant to give recommendations based on the best available evidence. Guidelines should not be a set of rules, set in stone. Ideally, guidelines have gathered evidence in a transparent way and make it easier for the clinicians to grasp the evidence for a certain procedure in a certain situation … and to see the gaps.

Contrary to what many people think, EBM is not restricted to randomized trials and meta-analyses. It involves tracking down the best external evidence there is. As I explained in #NotSoFunny #16 – Ridiculing RCTs & EBM, evidence is not an all-or-nothing thing: RCTs (if well performed) are the most robust, but if they are not available we have to rely on “lower” evidence (from cohort studies to case-control studies to case series or even expert opinion).
On the other hand, RCTs are often not even suitable for answering questions in domains other than therapy (etiology/harm, prognosis, diagnosis): by definition, the level of evidence for these kinds of questions will inevitably be low*. Also, for some interventions RCTs are not appropriate or feasible, or are too costly to perform (cesarean vs vaginal birth, experimental therapies, rare diseases; see also [3]).

It is also good to realize that guidance based on numerous randomized controlled trials is probably not, or only partly, applicable to groups of patients who are seldom included in RCTs: the cognitively impaired, patients with multiple comorbidities [6], old patients [6], children and (often) women.

Finally, not all RCTs are created equal (various forms of bias, surrogate outcomes, small sample sizes, short follow-up), and thus they should not all represent the same high level of evidence.*

Thus, in my opinion, low levels of evidence are not by definition problematic, even when they are the basis for strong recommendations, as long as it is clear how the recommendations were reached and as long as they are well underpinned (by whatever evidence or motivation). One could even see the exposed gaps in evidence as a positive thing, as they may highlight the need for clinical research in certain fields.

There is one BIG BUT: my assumption is that guidelines are “just” recommendations based on exhaustive and objective reviews of existing evidence. No more, no less. This means that the clinician must have the freedom to deviate from the recommendations, based on his own expertise and/or the situation and/or the patient’s preferences. All the more so when the evidence on which these strong recommendations are based is scant. Sackett already warned of the possible hijacking of EBM by purchasers and managers (and, may I add, health insurers and governmental agencies) to cut the costs of health care and to impose “rules”.

I therefore think it is odd that the ACC/AHA guidelines prescribe that class I recommendations SHOULD be performed/administered even when they rest only on level C evidence (see Figure).

I also find it odd that different guidelines use different nomenclature. The ACC/AHA have class I, IIa, IIb and III recommendations and level A, B and C evidence, where level A represents sufficient evidence from multiple randomized trials and meta-analyses. The strength of recommendations in the IDSA guidelines, on the other hand, ranges from A through C (or D/E for recommendations against use), and the quality of evidence ranges from level I through III, where I indicates evidence from (just) one properly randomized controlled trial. As explained in [3], this system was originally introduced to evaluate the effectiveness of preventive health care interventions in Canada (for which RCTs are apt).

Finally, guidelines and guideline makers should probably be more open for input/feedback from people who apply these guidelines.

————————————————

*The new GRADE (Grading of Recommendations Assessment, Development, and Evaluation) scoring system, which also takes good-quality observational studies into account, may offer a solution.

Another possibly relevant post at this blog: The Best Study Design for … Dummies

Taken from a summary of an ACC/AHA guideline at http://guideline.gov/
Click to enlarge.

References

  1. Hutchison, G. (2012). Guidelines can harm patients too BMJ, 344 (apr18 1) DOI: 10.1136/bmj.e2685
  2. Tricoci P, Allen JM, Kramer JM, Califf RM, & Smith SC Jr (2009). Scientific evidence underlying the ACC/AHA clinical practice guidelines. JAMA : the journal of the American Medical Association, 301 (8), 831-41 PMID: 19244190
  3. Lee, D., & Vielemeyer, O. (2011). Analysis of Overall Level of Evidence Behind Infectious Diseases Society of America Practice Guidelines Archives of Internal Medicine, 171 (1), 18-22 DOI: 10.1001/archinternmed.2010.482
  4. Menéndez R, Reyes S, Martínez R, de la Cuadra P, Manuel Vallés J, & Vallterra J (2007). Economic evaluation of adherence to treatment guidelines in nonintensive care pneumonia. The European respiratory journal : official journal of the European Society for Clinical Respiratory Physiology, 29 (4), 751-6 PMID: 17005580
  5. Sackett, D., Rosenberg, W., Gray, J., Haynes, R., & Richardson, W. (1996). Evidence based medicine: what it is and what it isn’t BMJ, 312 (7023), 71-72 DOI: 10.1136/bmj.312.7023.71
  6. Aylett, V. (2010). Do geriatricians need guidelines? BMJ, 341 (sep29 3) DOI: 10.1136/bmj.c5340




What Did Deep DNA Sequencing of Traditional Chinese Medicines (TCMs) Really Reveal?

30 04 2012

A recent study published in PLoS Genetics [1], a genetic audit of Traditional Chinese Medicines (TCMs), was widely covered in the news. The headlines are a bit confusing, as they say different things. Some headlines say “Dangers of Chinese Medicine Brought to Light by DNA Studies”, others that “Bear and Antelope DNA are Found in Traditional Chinese Medicine”, and still others, more neutrally, “Breaking down traditional Chinese medicine”.

What have Bunce and his group really done and what is the newsworthiness of this article?

Photos from 4 TCM samples used in this study (doi:10.1371/journal.pgen.1002657.g001)

The researchers, from Murdoch University, Australia, applied second-generation, high-throughput sequencing to identify the plant and animal composition of 28 TCM samples (see Fig.). These TCM samples had been seized by the Australian Customs and Border Protection Service at airports and seaports across Australia because they contravened Australia’s international wildlife trade laws (Part 13A EPBC Act 1999).

Using primers specific for the plastid trnL gene (plants) and the mitochondrial 16S ribosomal RNA gene (animals), DNA of sufficient quality was obtained from 15 of the 28 (54%) TCM samples. The resulting 49,000 amplicons (amplified sequences) were analyzed by high-throughput sequencing and compared to reference databases.
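Purely by way of illustration (this is not the pipeline used in the study, which matched thousands of reads against curated trnL and 16S reference sets): a single amplicon can be compared against GenBank with Biopython’s remote BLAST interface, roughly as sketched below. The sequence shown is a made-up placeholder.

```python
# Illustrative sketch only: compare one (placeholder) amplicon against GenBank
# using Biopython's remote BLAST. The study itself used a high-throughput
# pipeline against curated trnL/16S reference databases, not one-off web BLAST.
from Bio.Blast import NCBIWWW, NCBIXML

amplicon = "TTGAGCCTTGGTATGGAAACCTACTAAGTGATAACTTTCAAATTCAGAGAAACC"  # placeholder, not a real read

handle = NCBIWWW.qblast("blastn", "nt", amplicon)   # remote call, can be slow
record = NCBIXML.read(handle)

for alignment in record.alignments[:5]:             # five best-matching entries
    best_hsp = alignment.hsps[0]
    print(f"{alignment.title[:80]}  (E = {best_hsp.expect:.2g})")
```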

Due to better GenBank coverage, the analysis of vertebrate DNA was simpler and less ambiguous than the analysis of the plant origins.

Four TCM samples - Saiga Antelope Horn powder, Bear Bile powder, a powder in a box with a bear outline, and Chu Pak Hou Tsao San powder - were found to contain DNA from known CITES-listed species (CITES: Convention on International Trade in Endangered Species). This is no real surprise, as the packages were labeled as such.

On the other hand, some TCM samples, like the “100% pure” Saiga Antelope powder, were “diluted” with DNA from bovids (i.e. goats and sheep), deer and/or toads. In 78% of the samples, animal DNA was identified that had not been clearly labeled as such on the packaging.

In total, 68 different plant families were detected in the medicines. Some of the TCMs contained plants of potentially toxic genera like Ephedra and Asarum. Ephedra contains the sympathomimetic ephedrine, which has led to many, sometimes fatal, intoxications, also in Western countries. It should be noted, however, that pharmacological activity cannot be demonstrated by DNA analysis. Similarly, certain species of Asarum (wild ginger) contain the nephrotoxic and carcinogenic aristolochic acid, but it would require further testing to establish the presence of aristolochic acid in the samples positive for Asarum. Plant DNA assigned to other genera that are potentially toxic or allergenic (nuts, soy) and/or subject to CITES regulation was also recovered. Again, other gene regions would need to be targeted to identify the exact species involved.

Most newspapers emphasized that the study has brought to light “the dangers of TCM”.

For this reason The Telegraph interviewed an expert in the field, Edzard Ernst, Professor of Complementary Medicine at the University of Exeter. Ernst:

“The risks of Chinese herbal medicine are numerous: firstly, the herbs themselves can be toxic; secondly, they might interact with prescription drugs; thirdly, they are often contaminated with heavy metals; fourthly, they are frequently adulterated with prescription drugs; fifthly, the practitioners are often not well trained, make unsubstantiated claims and give irresponsible, dangerous advice to their patients.”

Ernst is right about the risks. However, these adverse effects of TCM have long been known. Fifteen years ago I happened to write a bibliography about “adverse effects of herbal medicines”* (in Dutch; a good book on this topic is [2]). I excluded interactions with prescription drugs, contamination with heavy metals and adulteration with prescription drugs, because the events (publications in PubMed and EMBASE) were too numerous(!). Toxic Chinese herbs mostly caused acute toxicity through aconitine, anticholinergic (Datura, Atropa) and podophyllotoxin intoxications. In Belgium, 80 young women who attended a “slimming” clinic got nephropathy (kidney problems) because of a mix-up of Stephania (Chinese: fangji) with Aristolochia fangchi (which contains the toxic aristolochic acid). Some of the women later developed urinary tract cancer.

In other words, toxic side effects of herbs, including Chinese herbs, have long been known. And the same is true for the presence of (traces of) endangered species in TCM.

In a media release, the Complementary Healthcare Council (CHC) of Australia emphasized that the 15 TCM products featured in this study were rogue products seized by Customs, as they were found to contain prohibited and undeclared ingredients. The CHC emphasizes the proficiency of its rigorous regulatory regime around complementary medicines, i.e. all ingredients used in listed products must be on the permitted list of ingredients. However, Australian regulations do not apply to products purchased online from overseas.

Thus if the findings are not new and (perhaps) not applicable to most legal TCM, then what is the value of this paper?

The new aspect is the high throughput DNA sequencing approach, which allows determination of a larger number of animal and plant taxa than would have been possible through morphological and/or biochemical means. Various TCM-samples are suitable: powders, tablets, capsules, flakes and herbal teas.

There are also some limitations:

  1. DNA of sufficient quality could be obtained from only approximately half of the samples.
  2. Plant sequences could often not be resolved beyond the family level. Therefore it could often not be established whether an endangered or toxic species was really present (or just an innocent family member).
  3. Only DNA sequences can be determined, not pharmacological activity.
  4. The method is at best semi-quantitative.
  5. Only plant and animal ingredients are determined, not contaminating heavy metals or prescription drugs.

In the future, species assignment (limitation 2) can be improved by the development of better reference databases involving multiple genes, and limitation 3 can be addressed by combining genetic (sequencing) and metabolomic (compound detection) approaches. According to the authors, this may be a cost-effective way to audit TCM products.

Non-technical approaches may be equally important: convincing consumers not to use medicines containing animal traces (let alone endangered species), not to order TCM online, and to avoid the use of complex, uncontrolled TCM mixes.

Furthermore, there should be more info on what works and what doesn’t.

*including but not limited to TCM

References

  1. Coghlan ML, Haile J, Houston J, Murray DC, White NE, Moolhuijzen P, Bellgard MI, & Bunce M (2012). Deep Sequencing of Plant and Animal DNA Contained within Traditional Chinese Medicines Reveals Legality Issues and Health Safety Concerns. PLoS genetics, 8 (4) PMID: 22511890 (Free Full Text)
  2. De Smet, P.A.G.M., Keller, K., Hansel, R., & Chandler, R.F. (Eds.) (1993). Adverse Effects of Herbal Drugs 2. Springer. ISBN 0-387-55800-4
  3. DNA may weed out toxic Chinese medicine (abc.net.au)
  4. Bedreigde beren in potje Lucas Brouwers, NRC Wetenschap 14 april 2012, bl 3 [Dutch]
  5. Dangers in herbal medicine (continued) – DNA sequencing to hunt illegal ingredients (somethingaboutscience.wordpress.com)
  6. Breaking down traditional Chinese medicine. (green.blogs.nytimes.com)
  7. Dangers of Chinese Medicine Brought to Light by DNA Studies (news.sciencemag.org)
  8. Chinese herbal medicines contained toxic mix (cbc.ca)
  9. Screen uncovers hidden ingredients of Chinese medicine (Nature News)
  10. Media release: CHC emphasises proficiency of rigorous regulatory regime around complementary medicines (http://www.chc.org.au/)




Health and Science Twitter & Blog Top 50 and 100 Lists. How to Separate the Wheat from the Chaff.

24 04 2012

Recently a Top 100 scientists-on-Twitter list went viral on Twitter. It was published at accreditedonlinecolleges.com/blog [a].*

Most people just tweeted “Top 100 Scientists on Twitter”, others were excited to be on the list, a few mentioned the lack of scientist X or discipline Y  in the top 100.

Two scientists noticed something peculiar about the list: @seanmcarroll noticed two fake (!) accounts under “physics” (as later explained, these were @NIMAARKANIHAMED and @Prof_S_Hawking). And @nutsci (having read two posts of mine about spammy top 50 or 100 lists [1,2]) recognized this Twitter list as spam:

It is surprising how easy it (still) is for such spammy Top 50 or 100 Lists to get viral, whereas they only have been published to generate more traffic to the website and/or to earn revenue through click-throughs.

It makes me wonder why well-educated people like scientists and doctors swallow the bait. Don’t they recognize the spam? Do they feel flattered to be on the list, or do they take offence when they (or another person who “deserves” it) aren’t chosen? Or perhaps they just find the list useful and want to share it, without taking a close look?

To help you to recognize and avoid such spammy lists, here are some tips to separate the wheat from the chaff:

  1. Check WHO made the list. Is it from an expert in the field, someone you trust? (and/or someone you like to follow?)
  2. If you don’t know the author in person, check the site which publishes the list (often a “blog”):
    1. Beware if there is no (or little info in the) ABOUT-section.
    2. Beware if the site mainly (or only) has these kinds of lists or short, very general blog posts (like “10 ways to…”), except when the author is somebody like Darren Rowse aka @ProBlogger [3].
    3. Beware if it is a very general site producing a diversity of very specialised lists (who can be expert in all fields?)
    4. Beware if the website has any of the following (not mutually exclusive) characteristics:
      1. Web addresses like accreditedonlinecolleges.com, onlinecolleges.com, onlinecollegesusa.org, onlinedegrees.com (watch out for .com sites anyway)
      2. Websites with a Quick-degree, nursing degree, technician school etc finder
      3. Prominent links at the homepage to Kaplan University, University of Phoenix, Grand Canyon University etc
    5. Reputable sites are less likely to produce nonsense lists. See for instance this “Women in science blogging” list published in the Guardian [4].
  3. When the site itself seems OK, check whether the names on the list seem trustworthy and worth a follow. Clearly, lists with fake accounts (other than lists of “top 50 fake accounts” ;)) aren’t worth the bother: apparently the creator didn’t make the effort to verify the accounts and/or doesn’t have the capacity to understand the tweets/topic.
  4. Ideally the list should have added value. Meaning that it should be more than a summary of names and copy pasting of the bio or “about” section.
    For instance I have recently been put on a list of onlinecollegesusa.org [b], but the author had just copied the subtitle of my blog: …. a medical librarian and her blog explores the web 2.0 world as it relates to library science and beyond.
    However, sometimes, the added value may just be that the author is a highly recognized expert or opinion leader. For instance this Top Health & Medical Bloggers (& Their Twitter Names) List [5] by the well known health blogger Dean Giustini.
  5. In what way do these lists represent *top* blogs or Twitter accounts? Are the blogs worth reading and/or the Twitter accounts worth following? A Nobel Prize winner may be a top scientist, but is not necessarily a good blogger and/or may not have interesting tweets. (Personally I know various examples of uninteresting accounts of *celebrities* in health, science and politics.)
  6. Beware if you are actively approached and kindly requested to spread the list to your audience (for this is what they want). It goes like this (note the impersonal tone):

    Your Blog is being featured!

    Hi There,

    I recently compiled a list of the best librarian blogs, and I wanted to let you know that you made the list! You can find your site linked here: [...]

    If you have any feedback please let me know, or if you think your audience would find any of this information useful, please feel free to share the link. We always appreciate a Facebook Like, a Google +1, a Stumble Upon or even a regular old link back, as we’re trying to increase our readership.

    Thanks again, and have a great day!

While some of the lists may be worthwhile in themselves, it is best NOT TO LINK TO DOUBTFUL LISTS: do not mention them on Twitter, do not retweet the lists and do not blog about them. For this is all they want to achieve.

But what if you really find this list interesting?

Here are some tips for finding alternatives to these spammy lists (often the opposite of the above-mentioned words of caution):

  1. Find posts/lists produced by experts in the field and/or people you trust or like to follow. Their choice of blogs or twitter-accounts (albeit subjective and incomplete) will probably suit you the best. For isn’t this what it is all about?
  2. Especially useful are posts that give you more information about the people on the list. Like this top-10 librarian list by Phil Bradley [6] and the excellent “100+ women healthcare academics” compiled by @amcunningham and @trishgreenhalgh [7].
    Strikingly, the reason for creating the latter list was that a spammy list that hadn’t been recognized as such (“50 Medical School Professors You Should Be Following On Twitter” [c]) seemed short on women…
  3. In case of Twitter-accounts:
    1. Check existing Twitter lists of people you find interesting to follow. You can follow the entire lists or just those people you find most interesting.
      Examples: I created a list of EBM-Cochrane people & sceptics [9]. Nutritional science grad student @Nutsci has a nutrition-health-science list [10]. The more followers, the more popular the list.
    2. Check interesting conversation partners of people you follow.
    3. Check accounts of people who are often retweeted in the field.
    4. Keep an eye on #FF (#FollowFriday) mentions, where people worth following are highlighted
    5. Check a topic on Listorious. For instance @hrana made a list of Twitter doctors [8]. There are also scientists lists (then again, check who made the list and who is on it; some health/nutrition lists are really bad if you’re interested in science and not junk).
    6. Worth mentioning are shared lists that are open for edit (so there are many contributors besides the curator). Lists [4] and [7] are examples of crowd sourced lists. Other examples are truly open-to-edit lists using public spreadsheets, like the Top Twitter Doctors[11], created by Dr Ves and  lists for science and bio(medical) journals [12], created by me.
  4. Finally, if you find the spam top 100 list truly helpful, and don’t know too many people in the field, just check out some of the names without linking to the list or spreading the word.

*For obvious reasons I will not hyperlink to these sites, but if you would like to check them, these are the links:

[a] accreditedonlinecolleges.com/blog/2012/top-100-scientists-on-twitter

[b] onlinecollegesusa.org/librarian-resources-online

[c] thedegree360.onlinedegrees.com/50-must-follow-medical-school-professors-on-twitter

References

  1. Beware of Top 50 “Great Tools to Double Check your Doctor” or whatever Lists. (laikaspoetnik.wordpress.com)
  2. Vanity is the Quicksand of Reasoning: Beware of Top 100 and 50 lists! (laikaspoetnik.wordpress.com)
  3. Google+ Tactics of the Blogging Pros (problogger.net)
  4. “Women in science blogging” list published in the Guardian (www.guardian.co.uk/science)
  5. Top Health & Medical Bloggers (& Their Twitter Names) List (blog.openmedicine.ca)
  6. Top-10 librarian list by Phil Bradley (www.blogs.com/topten)
  7. 100+ women healthcare academics by Annemarie Cunningham/ Trisha Greenhalgh (wishfulthinkinginmedicaleducation.blogspot.com)
  8. Twitter-doctors by @hrana (listorious.com)
  9. EBM-cochrane people & sceptics (Twitter list by @laikas)
  10. Nutrition-health-science (Twitter list by @nutsci)
  11. Open for edit: Top Twitter Doctors arranged by specialty in alphabetical order (Google Spreadsheet by @drves)
  12. TWITTER BIOMEDICAL AND OTHER SCIENTIFIC JOURNALS & MAGAZINES (Google Spreadsheet by @laikas)








