Between the Lines. Finding the Truth in Medical Literature [Book Review]

19 07 2013

In the 1970s a study was conducted among 60 physicians and physicians-in-training. They had to solve a simple problem:

“If a test to detect a disease whose prevalence is 1/1000 has a false positive rate of 5 %, what is the chance that a person found to have a positive result actually has the disease, assuming that you know nothing about the person’s symptoms or signs?” 

Half of the “medical experts” thought the answer was 95%.
Only a small proportion, 18%, of the doctors arrived at the right answer of 2%.

If you are a medical expert who comes to the same faulty conclusion (or needs a refresher on how to arrive at the right answer) you might benefit from the book written by Marya Zilberberg: "Between the Lines. Finding the Truth in Medical Literature".

The same is true for a patient whose doctor thinks he/she is among the 95% who benefit from such a test…
Or for journalists who translate medical news to the public…
Or for peer reviewers or editors who have to assess biomedical papers…

In other words, this book is useful for everyone who wants to be able to read “between the lines”. For everyone who needs to examine medical literature critically from time to time and doesn’t want to rely solely on the interpretation of others.

I hope that I didn't scare you off with the abovementioned example. Between the Lines is certainly NOT a complicated epidemiology textbook, nor a dull study book where you have to struggle through a lot of definitions, difficult tables and statistical formulas, and where each chapter is followed by a set of review questions that test what you learned.

This example is presented halfway through the book, at the end of Part I. By then you have enough tools to solve the question yourself. But even if you don't feel like doing the exact calculation at that moment, you have a solid basis to understand the bottom line: the (enormous) 93% gap (95% vs 2% of the people with a positive test being truly positive) serves as the pool for overdiagnosis and overtreatment.
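For readers who do want to see the arithmetic, here is a minimal sketch in Python of how the 2% is arrived at. Note that the question does not state the test's sensitivity; like most discussions of this classic problem, the sketch assumes it is 100%.

```python
# Prevalence 1/1000, false positive rate 5%, sensitivity assumed to be 100%
# (the question does not state the sensitivity; 100% is the usual assumption).
population = 100_000
diseased = population * (1 / 1000)      # 100 people truly have the disease
healthy = population - diseased         # 99,900 people do not

true_positives = diseased * 1.0         # all diseased test positive (assumption)
false_positives = healthy * 0.05        # 5% of healthy people test positive anyway

ppv = true_positives / (true_positives + false_positives)
print(f"Chance that a positive result means disease: {ppv:.1%}")  # ~2.0%
```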

In the previous chapters of Part I (“Context”), you have learned about the scientific methods in clinical research, uncertainty as the only certain feature of science, the importance of denominators, outcomes that matter and outcomes that don’t, Bayesian probability, evidence hierarchies, heterogeneous treatment effects (does the evidence apply to this particular patient?) and all kinds of biases.

Most reviewers prefer Part I of the book. Personally I find Part II ("Evaluation") just as interesting.

Part II deals with the study question and study design, the pros and cons of observational and interventional studies, validity, hypothesis testing and statistics.

Perhaps Part II is somewhat less narrative. Furthermore, it deals with tougher topics like statistics. But I find it very valuable for being able to critically appraise a study. I have never seen a better description of "odds": somehow odds are easier to grasp if you substitute "horse A" and "horse B" for "treatment A" and "treatment B", and "losing the race" for "death".
I knew the basic differences between cohort studies, case-control studies and so on, but I never quite realized before that the odds ratio is the only measure of association available in a case-control study and that case-control studies cannot estimate incidence or prevalence (as shown in a nice overview in table 4).
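To make the horse-racing analogy concrete, here is a small sketch with invented numbers of how odds and an odds ratio are calculated from a hypothetical case-control table:

```python
# Hypothetical case-control data: exposure among cases and controls (made up).
exposed_cases, unexposed_cases = 40, 60        # cases: people with the disease
exposed_controls, unexposed_controls = 20, 80  # controls: people without it

# Odds = chance of something happening divided by the chance of it not happening
# (like a horse's odds of losing versus winning a race).
odds_exposure_cases = exposed_cases / unexposed_cases           # 0.67
odds_exposure_controls = exposed_controls / unexposed_controls  # 0.25

odds_ratio = odds_exposure_cases / odds_exposure_controls       # ~2.7
print(f"Odds ratio: {odds_ratio:.2f}")
# Note: from these data alone you cannot compute incidence or prevalence,
# because the ratio of cases to controls was fixed by the study design.
```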

Unlike many other books about "the art of reading medical articles", "study designs" or "Evidence Based Medicine", Marya's book is easy to read. It is written in a conversational tone and statements are illustrated with current, appealing examples, like the overestimation of the risk of death from the H1N1 virus, breast cancer screening and hormone replacement therapy.

Although I had printed this book in the wrong order (page 136 next to page 13, etc.), I was able to read (and understand) one third of the book (the more difficult Part II) during a 2-hour car trip…

Because this book is comprehensive, yet accessible, I recommend it highly to everyone, including fellow librarians.

Marya even mentions medical librarians as a separate target audience:

Medical librarians may find this book particularly helpful: Being at the forefront of evidence dissemination, they can lead the charge of separating credible science from rubbish.

(thanks Marya!)

In addition, this book may be indirectly useful to librarians, as it may help to choose appropriate methodological filters and search terms for certain EBM questions. In the case of etiology questions, words like "cohort", "case-control", "odds", "risk" and "regression" might help to find the "right" studies.

By the way, Marya Zilberberg is @murzee on Twitter and she writes at her blog Healthcare, etc.

p.s. 1 I want to apologize to Marya for writing this review more than a year after the book was published. For personal reasons I found little time to read and blog. Luckily the book lost none of its topicality.

p.s. 2 Patients who are not very familiar with critical reading of medical papers might benefit from reading "Your Medical Mind" first [1].







The Scatter of Medical Research and What to do About it.

18 05 2012

Paul Glasziou, GP and professor in Evidence Based Medicine, co-authored a new article in the BMJ [1]. Similar to another paper [2] I discussed before [3], this paper deals with the difficulty for clinicians of staying up to date with the literature. But where the previous paper [2,3] highlighted the mere increase in the number of research articles over time, the current paper looks at the scatter of randomized clinical trials (RCTs) and systematic reviews (SRs) across the different journals indexed in PubMed in one year (2009).

Hoffmann et al analyzed 7 specialties and 9 subspecialties that are considered to contribute most to the burden of disease in high-income countries.

They followed a relatively straightforward method for identifying the publications. Each search string consisted of a MeSH term (controlled term) to identify the selected disease or disorders, a publication type [pt] to identify the type of study, and the year of publication. For example, the search strategy for randomized trials in cardiology was: "heart diseases"[MeSH] AND randomized controlled trial[pt] AND 2009[dp]. (When searching "heart diseases" as a MeSH term, narrower terms are also searched.) Meta-analysis[pt] was used to identify systematic reviews.
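As an aside, such a search string can also be run programmatically. Below is a minimal sketch using Biopython's Entrez (E-utilities) wrapper to retrieve the hit count for the cardiology example; the email address is a placeholder (NCBI requires one), and this is merely an illustration, not part of the authors' method:

```python
from Bio import Entrez

Entrez.email = "your.name@example.com"  # placeholder; NCBI requires a contact address

# The cardiology example from the paper: MeSH term AND publication type AND year.
query = '"heart diseases"[MeSH] AND randomized controlled trial[pt] AND 2009[dp]'

handle = Entrez.esearch(db="pubmed", term=query, retmax=0)
result = Entrez.read(handle)
handle.close()

print(f"Number of records found: {result['Count']}")
```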

Using this approach Hoffmann et al found 14,343 RCTs and 3,214 SRs published in 2009 in the field of the selected (sub)specialties. There was a clear scatter across journals, but this scatter varied considerably among specialties:

“Otolaryngology had the least scatter (363 trials across 167 journals) and neurology the most (2770 trials across 896 journals). In only three subspecialties (lung cancer, chronic obstructive pulmonary disease, hearing loss) were 10 or fewer journals needed to locate 50% of trials. The scatter was less for systematic reviews: hearing loss had the least scatter (10 reviews across nine journals) and cancer the most (670 reviews across 279 journals). For some specialties and subspecialties the papers were concentrated in specialty journals; whereas for others, few of the top 10 journals were a specialty journal for that area.
Generally, little overlap occurred between the top 10 journals publishing trials and those publishing systematic reviews. The number of journals required to find all trials or reviews was highly correlated (r=0.97) with the number of papers for each specialty/ subspecialty.”

Previous work already suggested that this scatter of research has a long tail: half of the publications appear in a small minority of journals, whereas the remaining articles are scattered across many journals (see Fig below).
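The long tail is easy to make concrete: given the number of trials per journal, how many of the top journals are needed to cover half of the trials? A minimal sketch with made-up counts:

```python
from itertools import accumulate

# Hypothetical trial counts per journal (sorted descending), purely to illustrate the long tail.
trials_per_journal = [120, 90, 60, 40, 30, 20, 15, 10] + [5] * 30 + [1] * 200

total = sum(trials_per_journal)
cumulative = list(accumulate(trials_per_journal))

# Number of top journals needed to cover 50% of all trials.
journals_for_half = next(i + 1 for i, c in enumerate(cumulative) if c >= total / 2)
print(f"{len(trials_per_journal)} journals in total, "
      f"but the top {journals_for_half} already contain 50% of the trials.")
```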

Click to enlarge and see the legends at BMJ 2012;344:e3223 [CC]

The good news is that SRs are less scattered and that general journals appear more often in the top 10 journals publishing SRs. Indeed for 6 of the 7 specialties and 4 of the 9 subspecialties, the Cochrane Database of Systematic Reviews had published the highest number of systematic reviews, publishing between 6% and 18% of all the systematic reviews published in each area in 2009. The bad news is that even keeping up to date with SRs seems a huge, if not impossible, challenge.

In other words, it is not sufficient for clinicians to rely on personal subscriptions to a few journals in their specialty (which is common practice). Hoffmann et al suggest several solutions to help clinicians cope with the increasing volume and scatter of research publications.

  • a central library of systematic reviews (but apparently the Cochrane Library fails to fulfill such a role according to the authors, because many reviews are out of date and are perceived as less clinically relevant)
  • a registry of planned and completed systematic reviews, such as PROSPERO (this makes it easier to locate SRs and reduces bias)
  • syntheses of evidence and synopses, like the ACP Journal Club, which summarizes the best evidence in internal medicine
  • specialised databases that collate and critically appraise randomized trials and systematic reviews, like www.pedro.org.au for physical therapy. In my personal experience, however, this database is often out of date and not comprehensive
  • journal scanning services like EvidenceUpdates (from mcmaster.ca), which scans over 120 journals, filters articles on the basis of quality, has practising clinicians rate them for relevance and newsworthiness, and makes them available as email alerts and in a searchable database. I use this service too, but besides the fact that not all specialties are covered, the rating of evidence may not always be objective (see previous post [4])
  • the use of social media tools to alert clinicians to important new research.

Most of these are long-existing solutions that do not, or only partly, help to solve the information overload.

I was surprised that the authors didn't propose the use of personalized alerts. PubMed's My NCBI feature allows you to create automatic email alerts on a topic and to subscribe to electronic tables of contents (which could include ACP Journal Club). Suppose that a physician browses 10 journals roughly covering 25% of the trials. He/she does not need to read all the other journals from cover to cover to avoid missing one potentially relevant trial. Instead it is far more efficient to perform a topic search to filter relevant studies from journals that seldom publish trials on the topic of interest. One could even use the search of Hoffmann et al to achieve this.* In reality, though, most clinicians will have narrower fields of interest than all studies about endocrinology or neurology.

At our library we are working on creating deduplicated, easy-to-read alerts that combine the tables of contents of certain journals with topic (and author) searches in PubMed, EMBASE and other databases. There are existing tools that do the same.

Another way to reduce the individual work (reading) load is to organize journal clubs or, even better, regular CATs (critically appraised topics). In the Netherlands, CATs are a compulsory item for residents. A few doctors do the work for many. Usually they choose topics that are clinically relevant (or for which the evidence is unclear).

The authors briefly mention that their search strategy might have missed some eligible papers and included some that are not truly RCTs or SRs, because they relied on PubMed's publication type to retrieve RCTs and SRs. For systematic reviews this may be a greater problem than recognized, for the authors have used meta-analysis[pt] to identify systematic reviews. Unfortunately PubMed has no publication type for systematic reviews, but it should be clear that there are many more systematic reviews than meta-analyses. Systematic reviews might even have a different scatter pattern than meta-analyses (i.e. the latter might be preferentially included in core journals).

Furthermore, not all meta-analyses and systematic reviews are reviews of RCTs (thus it is not completely fair to compare MAs with RCTs only). On the other hand, it is a (not discussed) omission of this study that only interventions are considered. Nowadays physicians have many other questions than those related to therapy, like questions about prognosis, harm and diagnosis.

I did a little, imperfect search just to see whether the use of other search terms than meta-analysis[pt] would have any influence on the outcome. I searched for (1) meta-analysis[pt] and (2) systematic review[tiab] (title and abstract) in papers about endocrine diseases. Then I subtracted 1 from 2 (to analyse the systematic reviews not indexed as meta-analysis[pt]).

Thus:

(ENDOCRINE DISEASES[MESH] AND SYSTEMATIC REVIEW[TIAB] AND 2009[DP]) NOT META-ANALYSIS[PT]

I analyzed the top 10/11 journals publishing these study types.
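For those who want to repeat this kind of experiment, the journal tally can be scripted. Below is a rough sketch of how such a tally could be done with Biopython; it assumes the MEDLINE "TA" field for the journal title abbreviation, and the email address is a placeholder required by NCBI:

```python
from collections import Counter
from Bio import Entrez, Medline

Entrez.email = "your.name@example.com"  # placeholder; NCBI requires a contact address

query = ('(endocrine diseases[MeSH] AND systematic review[tiab] AND 2009[dp]) '
         'NOT meta-analysis[pt]')

# Get the PMIDs for the search...
handle = Entrez.esearch(db="pubmed", term=query, retmax=1000)
pmids = Entrez.read(handle)["IdList"]
handle.close()

# ...then fetch the MEDLINE records and tally the journal abbreviations ("TA" field).
handle = Entrez.efetch(db="pubmed", id=",".join(pmids), rettype="medline", retmode="text")
journals = Counter(rec.get("TA", "unknown") for rec in Medline.parse(handle))
handle.close()

for journal, n in journals.most_common(10):
    print(f"{n:3d}  {journal}")
```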

This little experiment suggests that:

  1. the precise scatter might differ per search: apparently the systematic review[tiab] search yielded different top 10/11 journals (for this sample) than the meta-analysis[pt] search (partially because Cochrane systematic reviews apparently don't mention "systematic review" in the title or abstract?).
  2. the authors underestimate the number of systematic reviews: simply searching for systematic review[tiab] already found approximately 50% additional systematic reviews compared to meta-analysis[pt] alone.
  3. as expected (by me at least), many of the SRs and MAs were NOT dealing with interventions; see for instance the first 5 hits (out of 108 and 236 respectively).
  4. together these findings indicate that the true information overload is far greater than shown by Hoffmann et al (not all systematic reviews are found, and of all available study designs only RCTs are searched).
  5. on the other hand, this indirectly shows that SRs are a better way to keep up to date than suggested: SRs also summarize non-interventional research (the ratio of SRs of RCTs to individual RCTs is much lower than suggested).
  6. it also means that the role of the Cochrane systematic reviews in aggregating RCTs is underestimated by the published graphs (the MA[pt] set is diluted with non-RCT systematic reviews, so the proportion of Cochrane SRs among the interventional MAs becomes larger).

Well anyway, these imperfections do not contradict the main point of this paper: that trials are scattered across hundreds of general and specialty journals, and that "systematic reviews" (or really, meta-analyses) do reduce the extent of scatter, but are still widely scattered and mostly in different journals from those publishing the randomized trials.

Indeed, personal subscriptions to journals seem insufficient for keeping up to date.
Besides supplementing subscriptions with methods such as journal scanning services, I would recommend the use of personalized alerts from PubMed and several prefiltered sources, including an EBM search engine like TRIP (www.tripdatabase.com).

* But I would broaden it to find all aggregate evidence, including ACP, Clinical Evidence, syntheses and synopses, not only meta-analyses.

**I do appreciate that one of the co-authors is a medical librarian: Sarah Thorning.

References

  1. Hoffmann, Tammy, Erueti, Chrissy, Thorning, Sarah, & Glasziou, Paul (2012). The scatter of research: cross sectional comparison of randomised trials and systematic reviews across specialties. BMJ, 344:e3223. DOI: 10.1136/bmj.e3223
  2. Bastian, H., Glasziou, P., & Chalmers, I. (2010). Seventy-Five Trials and Eleven Systematic Reviews a Day: How Will We Ever Keep Up? PLoS Medicine, 7 (9) DOI: 10.1371/journal.pmed.1000326
  3. How will we ever keep up with 75 trials and 11 systematic reviews a day (laikaspoetnik.wordpress.com)
  4. Experience versus Evidence [1]. Opioid Therapy for Rheumatoid Arthritis Pain. (laikaspoetnik.wordpress.com)




Evidence Based Point of Care Summaries [2] More Uptodate with Dynamed.

18 10 2011

This post is part of a short series about Evidence Based Point of Care Summaries, or POCs. In this series I will review 3 recent papers that objectively compare a selection of POCs.

In the previous post I reviewed a paper from Rita Banzi and colleagues from the Italian Cochrane Centre [1]. They analyzed 18 POCs with respect to their “volume”, content development and editorial policy. There were large differences among POCs, especially with regard to evidence-based methodology scores, but no product appeared the best according to the criteria used.

In this post I will review another paper by Banzi et al, published in the BMJ a few weeks ago [2].

This article examined the speed with which EBP-point of care summaries were updated using a prospective cohort design.

First the authors selected all the systematic reviews signaled by the American College of Physicians (ACP) Journal Club and Evidence-Based Medicine Primary Care and Internal Medicine from April to December 2009. In the same period the authors selected all the Cochrane systematic reviews labelled as “conclusion changed” in the Cochrane Library. In total 128 systematic reviews were retrieved, 68 from the literature surveillance journals (53%) and 60 (47%) from the Cochrane Library. Two months after the collection started (June 2009) the authors did a monthly screen for a year to look for potential citation of the identified 128 systematic reviews in the POCs.

Only those 5 POCs that ranked in the top quarter for at least 2 (out of 3) desirable dimensions were studied, namely: Clinical Evidence, Dynamed, EBM Guidelines, UpToDate and eMedicine. Surprisingly, eMedicine was among the selected POCs, despite having a rating of 1 on a scale of 0 to 15 for EBM methodology. One would think that evidence-based-ness is a fundamental prerequisite for EBM POCs…?!

Results were represented as a (rather odd, but clear) "survival analysis" ("death" = a citation in a summary).
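In such an analysis each systematic review contributes the time from its signalling to its first citation in a POC, or is censored at the end of follow-up if it is never cited. A minimal, self-contained sketch of a Kaplan-Meier estimate of the median time to citation, with invented data:

```python
# Time (in months) until a systematic review was cited in a POC summary.
# 'cited' is False if the review was still uncited at the end of follow-up (censored).
observations = [(1, True), (2, True), (2, True), (3, True), (5, True),
                (8, True), (12, False), (12, False), (12, False), (12, False)]

def kaplan_meier_median(obs):
    """Return the first time at which the Kaplan-Meier survival estimate drops to <= 0.5."""
    survival = 1.0
    at_risk = len(obs)
    for time, cited in sorted(obs):
        if cited:                       # a citation ("death") occurred
            survival *= (1 - 1 / at_risk)
        at_risk -= 1                    # cited and censored reviews both leave the risk set
        if survival <= 0.5:
            return time
    return None                         # median not reached within follow-up

print("Median time to citation (months):", kaplan_meier_median(observations))
```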

Fig.1 : Updating curves for relevant evidence by POCs (from [2])

I will be brief about the results.

Dynamed clearly beat all the other products in updating speed.

Expressed in figures, the updating speed of Dynamed was 78% and 97% greater than those of EBM Guidelines and Clinical Evidence, respectively. Dynamed had a median time to citation of around two months and EBM Guidelines of around 10 months, quite close to the limit of the follow-up; the citation rates of the other three point-of-care summaries (UpToDate, eMedicine, Clinical Evidence) were so slow that they exceeded the follow-up period and the authors could not compute the median.

Dynamed outperformed the other POCs in updating of systematic reviews, independent of the route by which the reviews were identified. EBM Guidelines and UpToDate had similar overall updating rates, but Cochrane systematic reviews were more likely to be cited by EBM Guidelines than by UpToDate (odds ratio 0.02, P<0.001). Perhaps not surprising, as EBM Guidelines has a formal agreement with the Cochrane Collaboration to use Cochrane contents and label its summaries as "Cochrane inside." On the other hand, UpToDate was faster than EBM Guidelines in updating systematic reviews signaled by the literature surveillance journals.

Dynamed's greater updating ability was not due to a difference in identifying important new evidence, but to the speed with which this new information was incorporated into its summaries. Possibly the central updating of Dynamed by the editorial team accounts for the more prompt inclusion of evidence.

As the authors rightly point out, slowness in updating could mean that new relevant information is ignored and could thus affect "the validity of point of care information services".

A slow updating rate may be considered a more serious shortcoming for POCs that "promise" to "continuously update their evidence summaries" (EBM Guidelines) or to "perform a continuous comprehensive review and to revise chapters whenever important new information is published, not according to any specific time schedule" (UpToDate). (See the table with the description of updating mechanisms.)

In contrast, eMedicine doesn't provide any detailed information on its updating policy, another reason why it doesn't belong in this list of best POCs.
Clinical Evidence, however, clearly states: "We aim to update Clinical Evidence reviews annually. In addition to this cycle, details of clinically important studies are added to the relevant reviews throughout the year using the BMJ Updates service." But BMJ Updates is not considered in the current analysis. Furthermore, patience is rewarded with excellent and complete summaries of evidence (in my opinion).

Indeed a major limitation of the current (and the previous) study by Banzi et al [1,2] is that they have looked at quantitative aspects and items that are relatively “easy to score”, like “volume” and “editorial quality”, not at the real quality of the evidence (previous post).

Although the findings were new to me, others have recently published similar results (studies were performed in the same time-span):

Shurtz and Foster [3] of the Texas A&M University Medical Sciences Library (MSL) also sought to establish a rubric for evaluating evidence-based medicine (EBM) point-of-care tools in a health sciences library.

They, too, looked at editorial quality and speed of updating plus reviewing content, search options, quality control, and grading.

Their main conclusion is that “differences between EBM tools’ options, content coverage, and usability were minimal, but that the products’ methods for locating and grading evidence varied widely in transparency and process”.

This is in line with what Banzi et al reported in their first paper. They also share Banzi's conclusion about differences in speed of updating:

“DynaMed had the most up-to-date summaries (updated on average within 19 days), while First Consult had the least up to date (updated on average within 449 days). Six tools claimed to update summaries within 6 months or less. For the 10 topics searched, however, only DynaMed met this claim.”

Table 3 from Shurtz and Foster [3] 

Ketchum et al [4] also conclude that DynaMed had the largest proportion of current (2007-2009) references (170/1131, 15%). In addition they found that DynaMed had the largest total number of references (1131/2330, 48.5%).

Yes, and you might have guessed it. The paper of Andrea Ketchum is the 3rd paper I’m going to review.

I also recommend reading the paper by the librarians Shurtz and Foster [3], which I found along the way. It has too much overlap with the Banzi papers to devote a separate post to it. Still, it provides better background information than the Banzi papers, it focuses on POCs that claim to be EBM, and it doesn't try to weigh one element over another.

References

  1. Banzi, R., Liberati, A., Moschetti, I., Tagliabue, L., & Moja, L. (2010). A Review of Online Evidence-based Practice Point-of-Care Information Summary Providers Journal of Medical Internet Research, 12 (3) DOI: 10.2196/jmir.1288
  2. Banzi, R., Cinquini, M., Liberati, A., Moschetti, I., Pecoraro, V., Tagliabue, L., & Moja, L. (2011). Speed of updating online evidence based point of care summaries: prospective cohort analysis BMJ, 343 (sep22 2) DOI: 10.1136/bmj.d5856
  3. Shurtz, S., & Foster, M. (2011). Developing and using a rubric for evaluating evidence-based medicine point-of-care tools Journal of the Medical Library Association : JMLA, 99 (3), 247-254 DOI: 10.3163/1536-5050.99.3.012
  4. Ketchum, A., Saleh, A., & Jeong, K. (2011). Type of Evidence Behind Point-of-Care Clinical Information Products: A Bibliometric Analysis Journal of Medical Internet Research, 13 (1) DOI: 10.2196/jmir.1539
  5. Evidence Based Point of Care Summaries [1] No “Best” Among the Bests? (laikaspoetnik.wordpress.com)
  6. How will we ever keep up with 75 Trials and 11 Systematic Reviews a Day? (laikaspoetnik.wordpress.com
  7. UpToDate or Dynamed? (Shamsha Damani at laikaspoetnik.wordpress.com)
  8. How Evidence Based is UpToDate really? (laikaspoetnik.wordpress.com)






Evidence Based Point of Care Summaries [1] No “Best” Among the Bests?

13 10 2011

For many of today's busy practicing clinicians, keeping up with the enormous and ever-growing amount of medical information poses substantial challenges [6]. It's impractical to do a PubMed search to answer each clinical question and then synthesize and appraise the evidence, simply because busy health care providers have limited time and many questions per day.

As repeatedly mentioned on this blog ([6,7]), it is far more efficient to try to find aggregate (or pre-filtered or pre-appraised) evidence first.

Haynes ‘‘5S’’ levels of evidence (adapted by [1])

There are several forms of aggregate evidence, often represented as the higher layers of an evidence pyramid (because they aggregate individual studies, represented by the lowest layer). There are confusingly many pyramids, however [8] with different kinds of hierarchies and based on different principles.

According to the "5S" paradigm [9] (now evolved into 6S [10]), the peak of the pyramid consists of the ideal, but not yet realized, computerized decision support systems, which link the individual patient's characteristics to the current best evidence. According to the 5S model, the next best sources are evidence-based textbooks.
(Note: EBM and textbooks almost seem a contradiction in terms to me; personally I would not put many of the POCs anywhere near the top. Also see my post: How Evidence Based is UpToDate really?)

Whatever their exact place in the EBM pyramid, these POCs are helpful to many clinicians. There are many different POCs (see HLWIKI Canada for a comprehensive overview [12]) with a wide range of costs, varying from free with ads (eMedicine) to very expensive site licenses (UpToDate). Because of the costs, hospital libraries have to choose among them.

Choices are often based on user preferences and satisfaction, balanced against costs, scope of coverage, etc. Such choices are often subjective, and people tend to stick to the databases they know.

Initial literature about POCs concentrated on user preferences and satisfaction. A New Zealand study [3] among 84 GPs showed no significant difference in preference for, or usage levels of DynaMed, MD Consult (including FirstConsult) and UpToDate. The proportion of questions adequately answered by POCs differed per study (see introduction of [4] for an overview) varying from 20% to 70%.
McKibbon and Fridsma ([5] cited in [4]) found that the information resources chosen by primary care physicians were seldom helpful in providing the correct answers, leading them to conclude that:

“…the evidence base of the resources must be strong and current…We need to evaluate them well to determine how best to harness the resources to support good clinical decision making.”

Recent studies have tried to objectively compare online point-of-care summaries with respect to their breadth, content development, editorial policy, the speed of updating and the type of evidence cited. I will discuss 3 of these recent papers, but will review each paper separately. (My posts tend to be pretty long and in-depth. So in an effort to keep them readable I try to cut down where possible.)

Two of the three papers are published by Rita Banzi and colleagues from the Italian Cochrane Centre.

In the first paper, reviewed here, Banzi et al [1] first identified English Web-based POCs using Medline, Google, librarian association websites, and information conference proceedings from January to December 2008. In order to be eligible, a product had to be an online-delivered summary that is regularly updated, claims to provide evidence-based information and is to be used at the bedside.

They found 30 eligible POCs, of which the following 18 databases met the criteria: 5-Minute Clinical Consult, ACP-Pier, BestBETs, CKS (NHS), Clinical Evidence, DynaMed, eMedicine,  eTG complete, EBM Guidelines, First Consult, GP Notebook, Harrison’s Practice, Health Gate, Map Of Medicine, Micromedex, Pepid, UpToDate, ZynxEvidence.

They assessed and ranked these 18 point-of-care products according to: (1) coverage (volume) of medical conditions, (2) editorial quality, and (3) evidence-based methodology. (For operational definitions see appendix 1)

From a quantitative perspective DynaMed, eMedicine, and First Consult were the most comprehensive (88%) and eTG complete the least (45%).

The best editorial quality of EBP was delivered by Clinical Evidence (15), UpToDate (15), eMedicine (13), Dynamed (11) and eTG complete (10). (Scores are shown in brackets)

Finally, BestBETs, Clinical Evidence, EBM Guidelines and UpToDate obtained the maximal score (15 points each) for best evidence-based methodology, followed by DynaMed and Map Of Medicine (12 points each).
As expected eMedicine, eTG complete, First Consult, GP Notebook and Harrison’s Practice had a very low EBM score (1 point each). Personally I would not have even considered these online sources as “evidence based”.

The calculations seem very “exact”, but assumptions upon which these figures are based are open to question in my view. Furthermore all items have the same weight. Isn’t the evidence-based methodology far more important than “comprehensiveness” and editorial quality?

Certainly because "volume" is "just" estimated by analyzing to what extent 4 random chapters of the ICD-10 classification are covered by the POCs. Some sources, like Clinical Evidence and BestBETs (scoring low for this item), don't aim to be comprehensive but only "answer" a limited number of questions: they are not textbooks.

Editorial quality is determined by scoring of the specific indicators of transparency: authorship, peer reviewing procedure, updating, disclosure of authors’ conflicts of interest, and commercial support of content development.

For the EB methodology, Banzi et al scored the following indicators:

  1. Is a systematic literature search or surveillance the basis of content development?
  2. Is the critical appraisal method fully described?
  3. Are systematic reviews preferred over other types of publication?
  4. Is there a system for grading the quality of evidence?
  5. When expert opinion is included, is it easily recognizable and distinguishable from studies' data and results?

The score for each of these indicators is 3 for "yes", 1 for "unclear", and 0 for "no" (if judged "not adequate" or "not reported").
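In other words, five indicators scored 3/1/0 give a total between 0 and 15. A trivial sketch of the scoring, with invented answers for a fictitious POC:

```python
SCORES = {"yes": 3, "unclear": 1, "no": 0}

# Invented answers for a fictitious POC, one per indicator listed above.
answers = {
    "systematic literature search or surveillance": "yes",
    "critical appraisal method fully described":    "unclear",
    "systematic reviews preferred":                  "yes",
    "grading system for quality of evidence":        "yes",
    "expert opinion recognizable":                   "no",
}

ebm_score = sum(SCORES[answer] for answer in answers.values())
print(f"EBM methodology score: {ebm_score} / 15")  # 10 in this example
```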

This leaves little room for qualitative differences and mainly relies upon adequate reporting. As discussed earlier in a post where I questioned the evidence-based-ness of UpToDate, there is a difference between tailored searches and checking a limited list of sources (indicator 1.). It also matters whether the search is mentioned or not (transparency), whether it is qualitatively ok and whether it is extensive or not. For lists, it matters how many sources are “surveyed”. It also matters whether one or both methods are used… These important differences are not reflected by the scores.

Furthermore some points may be more important than others. Personally I find step 1 the most important. For what good is appraising and grading if it isn’t applied to the most relevant evidence? It is “easy” to do a grading or to copy it from other sources (yes, I wouldn’t be surprised if some POCs are doing this).

On the other hand, a zero for one indicator can have too much weight on the score.

Dynamed got 12 instead of the maximum 15 points, because its editorial policy page didn't explicitly describe its absolute prioritization of systematic reviews, although it really adheres to that in practice (see the comment by editor-in-chief Brian Alper [2]). Had Dynamed received the deserved 3 points for this indicator, it would have had the highest overall score.

The authors further conclude that none of the dimensions turned out to be significantly associated with the other dimensions. For example, BestBETs scored among the worst on volume (comprehensiveness), with an intermediate score for editorial quality, and the highest score for evidence-based methodology.  Overall, DynaMed, EBM Guidelines, and UpToDate scored in the top quartile for 2 out of 3 variables and in the 2nd quartile for the 3rd of these variables. (but as explained above Dynamed really scored in the top quartile for all 3 variables)

On basis of their findings Banzi et al conclude that only a few POCs satisfied the criteria, with none excelling in all.

The finding that Pepid, eMedicine, eTG complete, First Consult, GP Notebook, Harrison's Practice and 5-Minute Clinical Consult only obtained 1 or 2 of the maximum 15 points for EBM methodology confirms my "intuitive grasp" that these sources really don't deserve the label "evidence based". Perhaps we should make a stricter distinction between "point of care" databases, as a point where patients and practitioners interact, "particularly referring to the context of the provider-patient dyad" (definition by Banzi et al), and truly evidence-based summaries. Only a few of the tested databases would fit the latter definition.

In summary, Banzi et al reviewed 18 Online Evidence-based Practice Point-of-Care Information Summary Providers. They comprehensively evaluated and summarized these resources with respect to (1) coverage (volume) of medical conditions, (2) editorial quality, and (3) evidence-based methodology.

Limitations of the study, also according to the authors, were the lack of a clear definition of these products, the arbitrariness of the scoring system and the emphasis on the quality of reporting. Furthermore, the study didn't really assess the products qualitatively (i.e. with respect to performance). Nor did it take into account that products might have different aims. Clinical Evidence, for instance, only summarizes evidence on the effectiveness of treatments for a limited number of diseases. Therefore it scores badly on volume while excelling on the other items.

Nevertheless it is helpful that POCs are objectively compared, and it may serve as a starting point for decisions about acquisition.

References (not in chronological order)

  1. Banzi, R., Liberati, A., Moschetti, I., Tagliabue, L., & Moja, L. (2010). A Review of Online Evidence-based Practice Point-of-Care Information Summary Providers Journal of Medical Internet Research, 12 (3) DOI: 10.2196/jmir.1288
  2. Alper, B. (2010). Review of Online Evidence-based Practice Point-of-Care Information Summary Providers: Response by the Publisher of DynaMed Journal of Medical Internet Research, 12 (3) DOI: 10.2196/jmir.1622
  3. Goodyear-Smith F, Kerse N, Warren J, & Arroll B (2008). Evaluation of e-textbooks. DynaMed, MD Consult and UpToDate. Australian family physician, 37 (10), 878-82 PMID: 19002313
  4. Ketchum, A., Saleh, A., & Jeong, K. (2011). Type of Evidence Behind Point-of-Care Clinical Information Products: A Bibliometric Analysis Journal of Medical Internet Research, 13 (1) DOI: 10.2196/jmir.1539
  5. McKibbon, K., & Fridsma, D. (2006). Effectiveness of Clinician-selected Electronic Information Resources for Answering Primary Care Physicians’ Information Needs Journal of the American Medical Informatics Association, 13 (6), 653-659 DOI: 10.1197/jamia.M2087
  6. How will we ever keep up with 75 Trials and 11 Systematic Reviews a Day? (laikaspoetnik.wordpress.com)
  7. 10 + 1 PubMed Tips for Residents (and their Instructors) (laikaspoetnik.wordpress.com)
  8. Time to weed the (EBM-)pyramids?! (laikaspoetnik.wordpress.com)
  9. Haynes RB. Of studies, syntheses, synopses, summaries, and systems: the “5S” evolution of information services for evidence-based healthcare decisions. Evid Based Med 2006 Dec;11(6):162-164. [PubMed]
  10. DiCenso A, Bayley L, Haynes RB. ACP Journal Club. Editorial: Accessing preappraised evidence: fine-tuning the 5S model into a 6S model. Ann Intern Med. 2009 Sep 15;151(6):JC3-2, JC3-3. PubMed PMID: 19755349 [free full text].
  11. How Evidence Based is UpToDate really? (laikaspoetnik.wordpress.com)
  12. Point_of_care_decision-making_tools_-_Overview (hlwiki.slais.ubc.ca)
  13. UpToDate or Dynamed? (Shamsha Damani at laikaspoetnik.wordpress.com)






#FollowFriday #FF @DrJenGunter: EBM Sex Health Expert Wielding the Lasso of Truth

19 08 2011

If you're on Twitter you have probably seen the #FF or #FollowFriday phenomenon. FollowFriday is a way to recommend people on Twitter to others, for at least 2 reasons: to acknowledge your favorite tweople and to make it easier for your followers to find new interesting people.

However, some #FollowFriday tweet-series are more like weekly spam. Almost 2 years ago I blogged about the misuse of FF-recommendations and I gave some suggestions on how to do #FollowFriday the right way: not by merely mentioning many people in numerous tweets, but by recommending one or a few people at a time and explaining why this person is so awesome to follow.

Twitter Lists are also useful tools for recommending people (see post). You could construct lists of your favorite Twitter people for others to follow. I have created a general FollowFridays list, where I list all the people I have recommended in a #FF-tweet and/or post.

In this post I would like to take up the tradition of highlighting the #FF favs at my blog.

This FollowFriday I recommend:  

Jennifer Gunter

Jennifer Gunter (@DrJenGunter on Twitter) is a beautiful lady, but she shouldn't be tackled without gloves, for she is a true defender of evidence-based medicine and wields the lasso of truth.

Her specialty is OB/GYN. She is a sexual health expert. No surprise, then, that many of her tweets are related to this topic, some very serious, some with a humorous undertone. And some are just fun (re)tweets, like:

LOL -> “@BackpackingDad: New Word: Fungry. Full-hungry. “I just ate a ton of nachos, but hot damn am I fungry for those Buffalo wings!””

Dr Jen Gunter has a blog, Dr. Jen Gunter (wielding the lasso of truth).

Again we find the same spectrum of posts, mostly in the field of ob/gyn. You need not be an ob/gyn nor an EBM expert to enjoy them. Jen’s posts are written in plain language, suitable for anyone to understand (including patients).

Some titles:

In addition, there are also hilarious posts like "Cosmo's sex position of the day proves they know nothing about good sex or women", where she criticizes Cosmo for tweeting impossible sex positions ("If you're over 40, I dare you to even GET into that position!"), which she thinks were created by one of the following:

A) a computer who has never had sex and is not programmed to understand how the female body bends.
B) a computer programmer who has never had sex and has no understanding of how the female body bends.
C) a Yogi master/Olympic athlete.

Sometimes the topic is blogging. Jen is a fierce proponent of medical blogging. She sees it as a way to "promote" yourself as a doctor, to learn from your readers, to contribute credible content that drowns out garbage medical information (true), and as an ideal platform to deliver content to your patients and like-minded medical professionals (great idea).

Read more at:

You can follow Jen at her Twitter account (http://twitter.com/#!/DrJenGunter) and/or you can follow my lists. She is on the ebm-cochrane-sceptics and the followfridays lists.

Of course you can also take a subscription to her blog http://drjengunter.wordpress.com/






PubMed versus Google Scholar for Retrieving Evidence

8 06 2010

A while ago a resident in dermatology told me she got many hits out of PubMed, but zero results out of TRIP. It appeared she had used the same search for both databases: alopecia areata and diphencyprone (a drug with a lot of synonyms). Searching TRIP for alopecia (in the title) only, we found a Cochrane review and a relevant NICE guideline.

Usually, each search engine has its own search and index features. When comparing databases one should compare "optimal" searches and keep in mind for what purpose the search engines are designed. TRIP is most suited to searching aggregate evidence, whereas PubMed is most suited to searching individual biomedical articles.

Michael Anders and Dennis Evans ignore this rule of thumb in their recent paper "Comparison of PubMed and Google Scholar Literature Searches". And this is not the only shortcoming of the paper.

The authors performed searches on 3 different topics to compare PubMed and Google Scholar search results. Their main aim was to see which database was the most useful to find clinical evidence in respiratory care.

Well quick guess: PubMed wins…

The 3 respiratory care topics were selected from a list of systematic reviews on the Website of the Cochrane Collaboration and represented in-patient care, out-patient care, and pediatrics.

The references in the three chosen Cochrane systematic reviews served as a reference (or "gold") standard. However, abstracts, conference proceedings, and responses to letters were excluded.

So far so good. But note that the outcome of the study only allows us to draw conclusions about interventional questions, which seek to find controlled clinical trials. Other principles may apply to other domains (diagnosis, etiology/harm, prognosis) or to other types of studies. And it certainly doesn't apply to non-EBM topics.

The authors designed ONE search for each topic, by taking 2 common clinical terms from the title of each Cochrane review, connected by the Boolean operator AND (see table; the quotation marks were not actually used in the searches). No synonyms were used, and the translation of the searches in PubMed wasn't checked (luckily the mapping was rather good).

“Mmmmm…”

Topic: Noninvasive positive-pressure ventilation for cardiogenic pulmonary edema
Search terms: "noninvasive positive-pressure ventilation" AND "pulmonary edema"

Topic: Self-management education and regular practitioner review for adults with asthma
Search terms: "asthma" AND "education"

Topic: Ribavirin for respiratory syncytial virus
Search terms: "ribavirin" AND "respiratory syncytial virus"

In PubMed they applied the narrow methodological filter, or Clinical Query, for the domain therapy.
This prefab search strategy, randomized controlled trial[Publication Type] OR (randomized[Title/Abstract] AND controlled[Title/Abstract] AND trial[Title/Abstract]), developed by Haynes, is suitable to quickly detect the available evidence (provided one is looking for RCTs and isn't doing an exhaustive search) (see previous posts 2, 3, 4).

Google Scholar, as we all probably know, does not have such methodological filters, but the authors "limited" their search by using the Advanced option, entering the 2 search terms in the "Find articles… with all of the words" box (so this is a Boolean AND) and limiting the search to the subject area "Medicine, Pharmacology, and Veterinary Science".

They did a separate search for publications that were available at their library, which has limited value for others, subscriptions being different for each library.

Next they determined the sensitivity (the number of relevant records retrieved as a proportion of the total number of records in the gold standard) and the precision, or positive predictive value: the fraction of returned positives that are true positives (explained in [3]).
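For clarity, here is a small sketch of both measures, using figures reported further down in this post (overall precision: 59 relevant of 467 retrieved in PubMed, 57 relevant of 80,730 in Google Scholar; recall for the ribavirin topic: 12/12 vs 7/12):

```python
def sensitivity(relevant_retrieved: int, gold_standard_total: int) -> float:
    """Recall: share of the gold-standard records that the search retrieved."""
    return relevant_retrieved / gold_standard_total

def precision(relevant_retrieved: int, total_retrieved: int) -> float:
    """Positive predictive value: share of retrieved records that are relevant."""
    return relevant_retrieved / total_retrieved

# Ribavirin/RSV topic: recall
print(f"PubMed recall:         {sensitivity(12, 12):.0%}")   # 100%
print(f"Google Scholar recall: {sensitivity(7, 12):.0%}")    # 58%

# Overall precision across the three searches
print(f"PubMed precision:         {precision(59, 467):.2%}")     # ~13%
print(f"Google Scholar precision: {precision(57, 80_730):.2%}")  # ~0.07%
```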

Let me guess: sensitivity might be equal or somewhat higher, and precision is undoubtedly much lower in Google Scholar. This is because in Google Scholar:

  • you can often search the full text instead of just the abstract, title and (added) keywords/MeSH
  • the results are inflated by finding one and the same reference cited in many different papers (which might not directly deal with the subject)
  • you can't limit on methodology, study type or "evidence"
  • there is no automatic mapping and explosion (which may provide a way to find more synonyms and thus more relevant studies)
  • the coverage is broader (grey literature, books, more topics)
  • updates from MEDLINE arrive later than in PubMed

Results: PubMed and Google Scholar had pretty much the same recall, but for ribavirin and RSV the recall was higher in PubMed, with PubMed finding 100% (12/12) of the included trials and Google Scholar 58% (7/12).

No discussion as to why. Since Google Scholar should find the words in the titles and abstracts of PubMed records, I repeated the search in PubMed restricted to the title/abstract fields, i.e. I searched ribavirin[tiab] AND respiratory syncytial virus[tiab]* and limited it with the narrow therapy filter: I found 26 papers instead of 32. The following titles were missing when I searched title and abstract only (between brackets: the relevant MeSH term (the reason why the paper was found), the absence of an abstract (thus only title and MeSH were searched), and whether it was a letter; in bold: why the terms were not found in title or abstract):

  1. Evaluation by survival analysis on effect of traditional Chinese medicine in treating children with respiratory syncytial viral pneumonia of phlegm-heat blocking Fei syndrome. [MeSH: Respiratory Syncytial Virus Infections]
  2. Ribavarin in ventilated respiratory syncytial virus bronchiolitis: a randomized, placebo-controlled trial. [MeSH: Respiratory Syncytial Virus Infections; NO ABSTRACT, LETTER]
  3. Study of interobserver reliability in clinical assessment of RSV lower respiratory illness. [MeSH: Respiratory Syncytial Virus Infections]
  4. Ribavirin for severe RSV infection. N Engl J Med. [MeSH: Respiratory Syncytial Viruses; NO ABSTRACT, LETTER]
  5. Stutman HR, Rub B, Janaim HK. New data on clinical efficacy of ribavirin. [MeSH: Respiratory Syncytial Viruses; NO ABSTRACT]
  6. Clinical studies with ribavirin. [MeSH: Respiratory Syncytial Viruses; NO ABSTRACT]

Three of the papers had the additional MeSH term respiratory syncytial viruses and the other three respiratory syncytial virus infections. Although not all of these papers (2 comments/letters) may be relevant, it illustrates why PubMed may yield results that are not retrieved by Google Scholar (if one doesn't use synonyms).

In contrast to Google Scholar, PubMed translates the search ribavirin AND respiratory syncytial virus so that the MeSH terms "ribavirin"[MeSH], "respiratory syncytial viruses"[MeSH] and (indirectly) "respiratory syncytial virus infections"[MeSH] are also searched.

Thus in Google Scholar, articles with terms like RSV and respiratory syncytial viral pneumonia (or with unspecific wording, like clinical efficacy) could have been missed with the above-mentioned search.

The other result of the study (the result section comprises 3 sentences) is that “For each individual search, PubMed had better precision”.

The precision was 59/467 (13%) in PubMed and 57/80,730 (0.07%) in Google Scholar (p<0.001)!!
(Note: they had to add author names to the Google Scholar search to find the papers in the haystack 😉 )

Héhéhé, how surprising. Well, why would no clinician or librarian ever think of using Google Scholar as the primary, let alone the only, source to search for medical evidence?
It should also ring a bell that [QUOTE**]:
"In the Cochrane reviews the researchers retrieved information from multiple databases, including MEDLINE, the Cochrane Airways Group trial register (derived from MEDLINE)***, CENTRAL, EMBASE, CINAHL, DARE, NHSEED, the Acute Respiratory Infections Group's specialized register, and LILACS…"
Note that Google Scholar isn't mentioned as a source! Google Scholar is only recommendable for finding work that cites (already found) relevant articles (so-called forward searching), if one has no access to Web of Science or SCOPUS. Thus only to catch the last fish.

Perhaps the paper could have been more interesting if the authors had looked at any ADDED VALUE of Google Scholar when exhaustively searching for evidence. Then it would have been crucial to look for grey literature too (instead of excluding it), because this could be a possible strong point of Google Scholar. Furthermore, one could have examined whether forward searching yielded extra papers.

The higher precision of PubMed is attributed to the narrow therapy filter that was used, but the vastly lower precision of Google Scholar is also due to its searching of the full text, including the reference lists.

For instance, searching for ribavirin AND respiratory syncytial virus in PubMed yields 523 hits. This can be reduced to 32 hits when applying the narrow therapy filter. This means a reduction by a factor of 16.
Yet a similar search in Google Scholar yields 4,080 hits. Thus, even without the filter, the yield of Google Scholar is still almost 8 times higher than that of PubMed.

That evokes another research idea: what would have happened if randomized (OR randomised) had been added to the Google Scholar search? Would this have increased the precision? In the case of the above search it lowers the yield by a factor of 2, and the first hits look very relevant.

It is really funny, but the authors undermine their own conclusion that "These results are important because efficient retrieval of the best available scientific evidence can inform respiratory care protocols, recommendations for clinical decisions in individual patients, and education, while minimizing information overload" by saying elsewhere that "It is unlikely that users consider more than the first few hundred search results, so RTs who conduct literature searches with Google Scholar on these topics will be much less likely to find references cited in Cochrane reviews."

Indeed, no one would take it into their head to try to find the relevant papers among those 4,080 retrieved hits. So what is this study worth from a practical point of view?

Well anyway, just as you can ask for the sake of asking, you can research for the sake of researching. Despite being an EBM addict, I prefer a good subjective overview of this topic over a weak scientific, quasi-evidence-based research paper.

Does this mean Google Scholar is useless? Does it mean that all those PhD’s hooked on Google Scholar are wrong?

No, Google Scholar serves certain purposes.

Just like the example of PubMed and TRIP, you need to know what is in it for you and how to use it.

I used Google Scholar when I was a researcher:

  • to quickly find a known reference
  • to find citing papers
  • to get an idea of how often articles have been cited / to find the most relevant papers in a quick and dirty way (i.e. by browsing)
  • for quick and dirty searches by putting word strings between quotation marks (phrase searching)
  • to search the full text. I used quite extensive searches to find out what methods were used (for instance methods AND (synonym1 OR syn2 OR syn3)). An interesting possibility is to do a second search for only the last few words of a retrieved string. This will often reveal the next words in the sentence. Often you can repeat this trick, reading a piece of the paper without needing access.

If you want to know more about the pros and cons of Google Scholar, I recommend the recent overview by the expert librarian Dean Giustini: "Sure Google Scholar is ideal for some things" [7]. He also compiled a "Google Scholar bibliography" with ~115 articles as of May 2010.

Speaking of librarians, why was the study performed by PhD RRTs (RNs), and why wasn't the university librarian involved?****

* This is a search string, and stricter than respiratory AND syncytial AND virus.
** Abbreviations are used instead of the full (database) names.
*** This is wrong: such a register contains references to controlled clinical trials from EMBASE, CINAHL and all kinds of databases in addition to MEDLINE.
**** Other than to read the manuscript afterwards.

References

  1. Anders ME, & Evans DP (2010). Comparison of PubMed and Google Scholar Literature Searches. Respiratory care, 55 (5), 578-83 PMID: 20420728
  2. This Blog: https://laikaspoetnik.wordpress.com/2009/11/26/adding-methodological-filters-to-myncbi/
  3. This Blog: https://laikaspoetnik.wordpress.com/2009/01/22/search-filters-1-an-introduction/
  4. This Blog: https://laikaspoetnik.wordpress.com/2009/06/30/10-1-pubmed-tips-for-residents-and-their-instructors/
  5. NeuroDojo (2010/05) Pubmed vs Google Scholar? [also gives a nice overview of pros and cons]
  6. GenomeWeb (2010/05/10) Content versus interface at the heart of Pubmed versus Scholar?/ [response to 5]
  7. The Search principle Blog (2010/05) Sure Google Scholar is ideal for some things.




An Evidence Pyramid that Facilitates the Finding of Evidence

20 03 2010

Earlier I described that there are so many search and EBM pyramids that it is confusing. I described 3 categories of pyramids:

  1. Search Pyramids
  2. Pyramids of EBM-sources
  3. Pyramids of EBM-levels (levels of evidence)

In my courses, where I train doctors and medical students how to find evidence quickly, I use a pyramid that is a mixture of types 1 and 2. This is a slide from a 2007 course.

This pyramid consists of 4 layers (from top down):

  1. EBM-(evidence based) guidelines.
  2. Synopses & Syntheses*: a synopsis is a summary and critical appraisal of one article, whereas a synthesis is a summary and critical appraisal of a topic (which may answer several questions and may cover many articles).
  3. Systematic Reviews (a systematic summary and critical appraisal of original studies) which may or may not include a meta-analysis.
  4. Original Studies.

The upper 3 layers represent "aggregate evidence". This is evidence from secondary sources that search, summarize and critically appraise original studies (the lowest layer of the pyramid).

The layers do not necessarily represent levels of evidence and should not be confused with pyramids of EBM levels (type 3). An evidence-based guideline can have a lower level of evidence than a good systematic review, for instance.
The present pyramid is only meant to lead the way through the labyrinth of sources and thus to speed up the process of searching. The relevance and the quality of the evidence should always be checked.

The idea is:

  • The higher the level in the pyramid the less publications it contains (the narrower it becomes)
  • Each level summarizes and critically appraises the underlying levels.

I advise people to try to find aggregate evidence first, thus to drill down (hence the drill in the figure).

The advantage: faster results and a lower number needed to read (NNR).

During the first courses I gave, I just made a pyramid in Word with the links to the main sources.

Our library ICT department converted it into an HTML document with clickable links.

However, although the pyramid looked quite complex, not all main evidence sources were included. Moreover, some sources belong to more than one layer: the TRIP database, for instance, searches sources from all layers.

Our ICT-department came up with a much better looking and better functioning 3-D pyramid, with databases like TRIP in the sidebar.

Moving the mouse over a pyramid layer invokes a pop-up with links to the databases belonging to that layer.

Furthermore, the sources included in the pyramid differ per specialty. For the department of Gynecology, for instance, we include POPLINE and MIDIRS in the lowest layer, and the RCOG and NVOG (Dutch) guidelines in the EBM-guidelines layer.

Together my colleagues and I decide whether a source is evidence based (we don't include UpToDate, for instance) and where it belongs. Each clinical librarian (we all serve different departments) then decides which databases to include. Clients can give suggestions.
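Conceptually, the pyramid behind the page is nothing more than an ordered mapping from layers to specialty-specific sources. A simplified, purely illustrative sketch of such a configuration (the real thing is an HTML/3-D page maintained by our ICT department; the source lists below are examples only):

```python
# Illustrative configuration: layers (top to bottom) with sources per specialty.
pyramid = {
    "Gynecology": {
        "EBM guidelines":       ["RCOG", "NVOG"],
        "Synopses & syntheses": ["ACP Journal Club", "Clinical Evidence"],
        "Systematic reviews":   ["Cochrane Library"],
        "Original studies":     ["PubMed", "EMBASE", "POPLINE", "MIDIRS"],
    },
}
sidebar = ["TRIP"]  # sources that span several layers

# "Drilling down": present the layers from aggregate evidence to original studies.
for layer, sources in pyramid["Gynecology"].items():
    print(f"{layer}: {', '.join(sources)}")
```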

Below is a short YouTube video showing how this pyramid can be used. Because of the rather poor quality, the video is best viewed in full-screen mode.
There is no audio (yet), so in short this is what you see:

Made with Screenr:  http://screenr.com/8kg

The pyramid is highly appreciated by our clients and students.

But it is just a start. My dream is to visualize the entire pathway from question to PICO, checklists, FAQs and database of results per type of question/reason for searching (fast question, background question, CAT etc.).

I’m just waiting for someone to fulfill the technical part of this dream.

————–

*Note that there may be different definitions as well. The top layers in the 5S pyramid of Brian Haynes are defined as follows: syntheses & synopses (succinct descriptions of selected individual studies or systematic reviews, such as those found in the evidence-based journals); summaries, which integrate the best available evidence from the lower layers to develop practice guidelines based on a full range of evidence (e.g. Clinical Evidence, National Guidelines Clearinghouse); and, at the peak of the model, systems, in which the individual patient's characteristics are automatically linked to the current best evidence that matches the patient's specific circumstances and the clinician is provided with key aspects of management (e.g. computerised decision support systems).

Begin with the richest source of aggregate (pre-filtered) evidence and work your way down, in order to decrease the number needed to read: there are fewer EBM guidelines than there are systematic reviews, and (certainly) fewer than individual papers.




Searching Skills Toolkit. Finding the Evidence [Book Review]

4 03 2010

Most books on Evidence Based Medicine give little attention to the first two steps of EBM: asking focused, answerable questions and searching for the evidence. Being able to appraise an article, but not being able to find the best evidence, can be challenging and frustrating for busy clinicians.

“Searching Skills Toolkit: Finding The Evidence” is a pocket-sized book that aims to instruct the clinician how to search for evidence. It is the third toolkit book in the series edited by Heneghan et al. (author of the CEBM blog Trust the Evidence). The authors, Caroline de Brún and Nicola Pearce-Smith, are experts in searching (a librarian and an information scientist respectively).

According to the description at Wiley’s, the distinguishing feature of this searching skills book is its user-friendliness. “The guiding principle is that readers do not want to become librarians, but they are faced with practical difficulties when searching for evidence, such as lack of skills, lack of time and information overload. They need to learn simple search skills, and be directed towards the right resources to find the best evidence to support their decision-making.”

Does this book give guidance that makes searching for evidence easy? Is this book the ‘perfect companion’ to doctors, nurses, allied health professionals, managers, researchers and students, as it promises?

I find it difficult to answer, partly because I’m not a clinician and partly because, being a medical information specialist myself, I would frequently tackle a search otherwise.

The booklet is pocket-sized and easy to take along. The lay-out is clear and pleasant. The approach is original and practical. Despite its small size, the booklet contains a wealth of information. Table 1, for instance, gives an overview of truncation symbols, wildcards and Boolean operators for Cochrane, Dialog, EBSCO, OVID, PubMed and WebSPIRS (see photo). And although this is mouth-watering for many medical librarians, one wonders whether this detailed information is really useful for the clinician.

Furthermore, 34 of the 102 pages (one third) are devoted to searching these specific healthcare databases. IMHO, of these databases only PubMed and the Cochrane Library are useful to the average clinician. In addition, most of the screenshots of the individual databases are too small to read. And due to the PubMed redesign, the PubMed description is no longer up to date.

The readers are guided to the chapters on searching by asking themselves beforehand:

  1. The time available to search: 5 minutes, an hour or time to do a comprehensive search. This is an important first step, which is often not considered by other books and short guides.
    Primary sources, secondary sources and ‘other’ sources are given per time available. This is all presented in a table with reference to key chapters and related chapters. These particular chapters enable the reader to perform these short, intermediate or long searches.
  2. What type of publication he is looking for: a guideline, a systematic review, patient information or an RCT (with tips where to find them).
  3. Whether the query is about a specific topic, i.e. drug or safety information or health statistics.

All useful information, but I would have discussed topic 3 before covering EBM, because it doesn’t fit into the ‘normal’ EBM search. For drug information, for instance, you could go directly to the FDA, WHO or EMEA website. Similarly, if my question were only to find a guideline, I would simply search one or more guideline databases.
Furthermore, it would have been easier to stack the short, intermediate and long searches on top of each other instead of presenting them side by side. The basic principle would be (in my opinion at least) to start with a PICO, to (almost) always search secondary sources first (fast), to search for primary publications (original research) in PubMed if necessary, and to broaden the search to other databases in the case of exhaustive searches. This is easy to remember, even without the schemes in the book.

Some minor points. There is an overemphasis on UK-sources. So the first source to find guidelines is the (UK) National Library of Guidelines, where I would put the National Guideline Clearinghouse (or the TRIP-database) first. And why is MedlinePlus not included as a source for patients, whereas NHS-choices is?

There is also an overemphasis on interventions. How PICO’s are constructed for other domains (diagnosis, etiology/harm and prognosis) is barely touched upon. It is much more difficult to make PICOs and search in these domains. More practical examples would also have been helpful.

Overall, I find this book very useful. The authors are clearly experts in searching and they fill a gap in the market: there is no comparable book on “the searching of the evidence”. Therefore, despite some critique and preferences for another approach, I do recommend this book to doctors who want to learn basic searching skills. As a medical information specialist I keep it in my pocket too: just in case…

Overview

What I liked about the book:

  • Pocket size, easy to take along.
  • Well written
  • Clear diagrams
  • Broad coverage
  • Good description of (many) databases
  • Step-by-step approach

What I liked less about it:

  • Screen dumps are often too small to read and therefore not useful
  • Emphasis on UK-sources
  • Domains other than “therapy” (etiology/harm, prognosis, diagnosis) are barely touched upon
  • Too few clinical examples
  • An overly strict division into short, intermediate and long searches: these are not intrinsically different

The Chapters

  1. Introduction.
  2. Where to start? Summary tables and charts.
  3. Sources of clinical information: an overview.
  4. Using search engines on the World Wide Web.
  5. Formulating clinical questions.
  6. Building a search strategy.
  7. Free text versus thesaurus.
  8. Refining search results.
  9. Searching specific healthcare databases.
  10. Citation pearl searching.
  11. Saving/recording citations for future use.
  12. Critical appraisal.
  13. Further reading by topic or PubMed ID.
  14. Glossary of terms.
  15. Appendix 1: Ten tips for effective searching.
  16. Appendix 2: Teaching tips

References

  1. Searching Skills Toolkit – Finding the Evidence (paperback, 2009) by Caroline De Brún and Nicola Pearce-Smith; Carl Heneghan et al. (editors). Wiley-Blackwell / BMJ Books
  2. Kamal R Mahtani. Evid Based Med 2009;14:189. doi:10.1136/ebm.14.6.189 (book review by a clinician)





#NotSoFunny #16 – Ridiculing RCTs & EBM

1 02 2010

I remember it well. As a young researcher I presented my findings in one of my first talks, at the end of which the chair killed my work with a remark that made the whole room of scientists laugh, but that was really beside the point. My supervisor, a truly original and very wise scientist, suppressed his anger. Afterwards, he said: “It is very easy to ridicule something that isn’t a mainstream thought. It’s the argument that counts. We will prove that we are right.” …And we did.

This was not my only encounter with scientists who try to win the debate by making fun of a theory, a finding or …people. But it is not only the witty scientist who is to *blame*, it is also the uncritical audience that just swallows it.

I have similar feelings about some journal articles or blog posts that try to ridicule EBM – or any other theory or approach. Funny, perhaps, but often misunderstood and misused by “the audience”.

Take for instance the well known spoof article in the BMJ:

“Parachute use to prevent death and major trauma related to gravitational challenge: systematic review of randomised controlled trials”

It is one of those Christmas spoof articles in the BMJ, meant to inject some medical humor into the normally serious scientific literature. The spoof parachute article pretends to be a systematic review of RCTs investigating whether parachutes can prevent death and major trauma. Of course, no such trial has been done or will be done: dropping people at random with and without a parachute to prove that you are better off jumping out of a plane with a parachute.

I found the article only mildly amusing. It is so unrealistic that it becomes absurd. Not that I don’t enjoy absurdities at times, but absurdities should not assume a life of their own. In this way the article doesn’t evoke a true discussion, but only reinforces the prejudice some people already have.

People keep referring to this 2003 article. Last Friday, Dr. Val (with whom I mostly agree) devoted a Friday Funny post to it at Get Better Health: “The Friday Funny: Why Evidence-Based Medicine Is Not The Whole Story”.* In 2008 the paper was also discussed by Not Totally Rad [3]. That EBM is not the whole story seems pretty obvious to me. It was never meant to be…

But let’s get specific. Which assumptions about RCTs and SRs are wrong, twisted or taken out of context? Please read the excellent comments below the article. These often put the finger on the sore spot.

1. EBM is cookbook medicine.
Many define EBM as “making clinical decisions based on a synthesis of the best available evidence about a treatment” (e.g. [3]). However, EBM is not cookbook medicine.

The accepted definition of EBM is “the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients” [4]. Sackett already emphasized back in 1996:

Good doctors use both individual clinical expertise and the best available external evidence, and neither alone is enough. Without clinical expertise, practice risks becoming tyrannised by evidence, for even excellent external evidence may be inapplicable to or inappropriate for an individual patient. Without current best evidence, practice risks becoming rapidly out of date, to the detriment of patients.


2. RCT’s are required for evidence.

Although a well-performed RCT provides the “best” evidence, RCTs are often not appropriate or indicated. That is especially true for domains other than therapy. In the case of prognostic questions, the most appropriate study design is usually an inception cohort. An RCT, for instance, can’t tell whether female age is a prognostic factor for clinical pregnancy rates following IVF: there is no way to randomize for “age” or for “BMI”. 😉

The same is true for etiologic or harm questions. In theory, the “best” answer is obtained by RCT. However, RCTs are often unethical or unnecessary. RCTs are out of the question to address whether substance X causes cancer. Observational studies will do. Sometimes cases provide sufficient evidence. If a woman gets hepatic veno-occlusive disease after drinking loads of a herbal tea, the finding of similar cases in the literature may be sufficient to conclude that the herbal tea probably caused the disease.

Diagnostic accuracy studies also require another study design (cross-sectional study, or cohort).

But even in the case of interventions, we can settle for less than an RCT. Evidence is not simply present or absent, but exists on a hierarchy. RCTs (if well performed) are the most robust, but if they are not available we have to rely on “lower” evidence.

BMJ Clinical Evidence even made a list of clinical questions unlikely to be answered by RCTs. In these cases Clinical Evidence searches for and includes the best appropriate form of evidence.

  1. where there are good reasons to think the intervention is not likely to be beneficial or is likely to be harmful;
  2. where the outcome is very rare (e.g. a 1/10000 fatal adverse reaction);
  3. where the condition is very rare;
  4. where very long follow up is required (e.g. does drinking milk in adolescence prevent fractures in old age?);
  5. where the evidence of benefit from observational studies is overwhelming (e.g. oxygen for acute asthma attacks);
  6. when applying the evidence to real clinical situations (external validity);
  7. where current practice is very resistant to change and/or patients would not be willing to take the control or active treatment;
  8. where the unit of randomisation would have to be too large (e.g. a nationwide public health campaign); and
  9. where the condition is acute and requires immediate treatment.

Of these, only the first case is categorical. For the rest, the cut-off point at which an RCT is no longer appropriate is not precisely defined.

3. Informed health decisions should be based on good science rather than EBM (alone).

Dr Val [2]: “EBM has been an over-reliance on “methodolatry” – resulting in conclusions made without consideration of prior probability, laws of physics, or plain common sense. (….) Which is why Steve Novella and the Science Based Medicine team have proposed that our quest for reliable information (upon which to make informed health decisions) should be based on good science rather than EBM alone.

Methodolatry is the profane worship of the randomized clinical trial as the only valid method of investigation. That EBM amounts to methodolatry is disproved in the previous sections.

The name “Science Based Medicine” suggests that it is opposed to “Evidence Based Medicine”. At their blog David Gorski explains: “We at SBM believe that medicine based on science is the best medicine and tirelessly promote science-based medicine through discussion of the role of science and medicine.”

While this may apply to a certain extent to quackery or homeopathy (the focus of SBM), there are many examples of the opposite: science or common sense that led to interventions which were ineffective or even damaging, including:

As a matter of fact, many side effects are not foreseen, and few in vitro or animal experiments have led to successful new treatments.

In the end, what is most relevant to the patient is that “it works” (and that the benefits outweigh the harms).

Furthermore, EBM is not – or should not be – without consideration of prior probability, laws of physics, or plain common sense. To me SBM and EBM are not mutually exclusive.

Why the example is unfair and unrealistic

I’ll leave it to the following comments (and yes the choice is biased) [1]

Nibu A George, Scientist:

First of all generalizing such reports of some selected cases and making it a universal truth is unhealthy and challenging the entire scientific community. Secondly, the comparing the parachute scenario with a pure medical situation is unacceptable since the parachute jump is rather a physical situation and it become a medical situation only if the jump caused any physical harm to the person involved.

Richard A. Davidson, MD, MPH:

This weak attempt at humor unfortunately reinforces one of the major negative stereotypes about EBM….that RCT’s are required for evidence, and that observational studies are worthless. If only 10% of the therapies that are paraded in front of us by journals were as effective as parachutes, we would have much less need for EBM. The efficacy of most of our current therapies are only mildly successful. In fact, many therapies can provide only a 25% or less therapeutic improvement. If parachutes were that effective, nobody would use them.
While it’s easy enough to just chalk this one up to the cliche of the cantankerous British clinician, it shows a tremendous lack of insight about what EBM is and does. Even worse, it’s just not funny.

Aviel Roy-Shapira, Senior Staff Surgeon:

Smith and Pell succeeded in amusing me, but I think their spoof reflects a common misconception about evidence based medicine. All too many practitioners equate EBM with randomized controlled trials, and metaanalyses.
EBM is about what is accepted as evidence, not about how the evidence is obtained. For example, an RCT which shows that a given drug lowers blood pressure in patients with mild hypertension, however well designed and executed, is not acceptable as a basis for treatment decisions. One has to show that the drug actually lowers the incidence of strokes and heart attacks.
RCT’s are needed only when the outcome is not obvious. If most people who fall from airplanes without a parachute die, this is good enough. There is plenty of evidence for that.

EBM is about using outcome data for making therapeutic decisions. That data can come from RCTs but also from observation

Lee A. Green, Associate Professor:

EBM is not RCTs. That’s probably worth repeating several times, because so often both EBM’s detractors and some of its advocates just don’t get it. Evidence is not binary, present or not, but exists on a heirarchy (Guyatt & Rennie, 2001). (….)
The methods and rigor of EBM are nothing more or less than ways of correcting for our
imperfect perceptions of our experiences. We prefer, cognitively, to perceive causal connections. We even perceive such connections where they do not exist, and we do so reliably and reproducibly under well-known sets of circumstances. RCTs aren’t holy writ, they’re simply a tool for filtering out our natural human biases in judgment and causal attribution. Whether it’s necessary to use that tool depends upon the likelihood of such bias occurring.

Scott D Ramsey, Associate Professor:

Parachutes may be a no-brainer, but this article is brainless.

Unfortunately, there are few if any parallels to parachutes in health care. The danger with this type of article is that it can lead to labeling certain medical technologies as “parachutes” when in fact they are not. I’ve already seen this exact analogy used for a recent medical technology (lung volume reduction surgery for severe emphysema). In uncontrolled studies, it quite literally looked like everyone who didn’t die got better. When a high quality randomized controlled trial was done, the treatment turned out to have significant morbidity and mortality and a much more modest benefit than was originally hypothesized.

Timothy R. Church, Professor:

On one level, this is a funny article. I chuckled when I first read it. On reflection, however, I thought “Well, maybe not,” because a lot of people have died based on physicians’ arrogance about their ability to judge the efficacy of a treatment based on theory and uncontrolled observation.

Several high profile medical procedures that were “obviously” effective have been shown by randomized trials to be (oops) killing people when compared to placebo. For starters to a long list of such failed therapies, look at antiarrhythmics for post-MI arrhythmias, prophylaxis for T. gondii in HIV infection, and endarterectomy for carotid stenosis; all were proven to be harmful rather than helpful in randomized trials, and in the face of widespread opposition to even testing them against no treatment. In theory they “had to work.” But didn’t.

But what the heck, let’s play along. Suppose we had never seen a parachute before. Someone proposes one and we agree it’s a good idea, but how to test it out? Human trials sound good. But what’s the question? It is not, as the author would have you believe, whether to jump out of the plane without a parachute or with one, but rather stay in the plane or jump with a parachute. No one was voluntarily jumping out of planes prior to the invention of the parachute, so it wasn’t to prevent a health threat, but rather to facilitate a rapid exit from a nonviable plane.

Another weakness in this straw-man argument is that the physics of the parachute are clear and experimentally verifiable without involving humans, but I don’t think the authors would ever suggest that human physiology and pathology in the face of medication, radiation, or surgical intervention is ever quite as clear and predictable, or that non-human experience (whether observational or experimental) would ever suffice.

The author offers as an alternative to evidence-based methods the “common sense” method, which is really the “trust me, I’m a doctor” method. That’s not worked out so well in many high profile cases (see above, plus note the recent finding that expensive, profitable angioplasty and coronary artery by-pass grafts are no better than simple medical treatment of arteriosclerosis). And these are just the ones for which careful scientists have been able to do randomized trials. Most of our accepted therapies never have been subjected to such scrutiny, but it is breathtaking how frequently such scrutiny reveals problems.

Thanks, but I’ll stick with scientifically proven remedies.

parachute experiments without humans

* on the same day as I posted Friday Foolery #15: The Man who pioneered the RCT. What a coincidence.

** Don’t forget to read the comments to the article. They are often excellent.


References

  1. Smith GC, Pell JP (2003). Parachute use to prevent death and major trauma related to gravitational challenge: systematic review of randomised controlled trials. BMJ, 327(7429), 1459–1461. DOI: 10.1136/bmj.327.7429.1459
  2. “The Friday Funny: Why Evidence-Based Medicine Is Not The Whole Story” (getbetterhealth.com) [2010.01.29]
  3. Call for randomized clinical trials of Parachutes (nottotallyrad.blogspot.com) [08-2008]
  4. Sackett DL, Rosenberg WM, Gray JA, Haynes RB, & Richardson WS (1996). Evidence based medicine: what it is and what it isn’t. BMJ (Clinical research ed.), 312 (7023), 71-2 PMID: 8555924




NOT ONE RCT on Swine Flu or H1N1?! – Outrageous!

16 12 2009

Last week doctorblogs (Annabel Bentley) tweeted: “Outrageous – there isn’t ONE randomised trial on swine flu or #H1N1”.

Annabel referred to an article at Trust the Evidence, the excellent blog of the Centre for Evidence-Based Medicine (CEBM) in Oxford, UK.

In the article “Is swine flu the most over-published and over-hyped disease ever?” Carl Heneghan first showed the results of a quick PubMed search using the terms ‘swine flu’ and ‘H1N1’: this yielded 4,475 articles on the subject, with approximately one third (1,437 articles) published in the last 7 months (search date: November 27th). Of these, 107, largely news articles, were published in the BMJ, followed by the Lancet and NEJM with 35 each.

Top news stories on H1N1 each generated approximately 2,000 to 4,000 news articles (in Google). Items included the outbreak of a new form of ‘swine flu’, which prompted the United States and the World Health Organization to declare a public health emergency (April), the Southern Hemisphere being mostly spared in the swine flu epidemic (May), Tamiflu, i.e. the effects of Tamiflu in children in the BMJ (co-authored by Carl) in August, and the availability of the H1N1 vaccine and vaccine clinics offering seasonal flu shots in November.

According to Heneghan this must be the most over-hyped disease ever, and he wonders: “are there any other infections out there?”

Finally he ends with: “Do you know what the killer fact is in all of this? There isn’t one randomized trial out there on swine flu or H1N1 – outrageous.”

My first thoughts were: “is H1N1 really so over-published compared to other (infectious) diseases?”, “is it really surprising that there are no RCTs yet? The H1N1 pandemic only started a few months ago!” and even “are RCTs really the study designs we urgently need right now?”

Now that the severity of the H1N1 flu seems less than feared, it is easy to be wise. Isn’t it logical that there are a lot of “exploratory studies” first: characterization of the virus, establishing the spread of H1N1 around the world, establishing mortality and morbidity, and patterns of vulnerability among the population? It is also understandable that a lot of news articles are published, in the BMJ or in online newspapers. We want to be informed. In the Netherlands we now have a small outbreak of Q-fever, partly because the official approach was slow and underestimated the public health implications of Q-fever. So the public was really underinformed. That is worse than being “overexposed”.

News often spreads like wildfire; that is no news. When I google “US Preventive Services Task Force” (which issued the controversial US breast cancer screening guidelines last month), 2,364 hits still pop up in Google News (over the last month). All papers and other news sources echo the news, so 2,000 hits are easily reached.

4,475 PubMed articles on ‘swine flu’ and ‘H1N1’ isn’t really that much. When I quickly search PubMed for the rather “new” disease Q-fever I get 3,752 hits, a search for HPV (Alphapapillomavirus OR papilloma infections OR HPV OR human papilloma virus) gives 19,543 hits (1,330 over the last 9 months), and a quick search for (aids) AND “last 9 months”[edat] yields 4,073 hits!
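As an aside, hit counts like these are easy to reproduce programmatically. Below is a minimal sketch (not part of the original post) that queries the NCBI E-utilities esearch endpoint; the search terms are simplified stand-ins for the ones above, and counts retrieved today will of course differ from the 2009 figures.

```python
# Minimal sketch: compare PubMed hit counts via NCBI E-utilities (esearch).
# The queries below are simplified approximations of the searches described above.
import json
import urllib.parse
import urllib.request

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_count(query: str) -> int:
    """Return the total number of PubMed records matching a query."""
    params = urllib.parse.urlencode(
        {"db": "pubmed", "term": query, "retmode": "json", "retmax": 0}
    )
    with urllib.request.urlopen(f"{ESEARCH}?{params}") as response:
        data = json.load(response)
    return int(data["esearchresult"]["count"])

queries = {
    "swine flu / H1N1": '"swine flu" OR H1N1',
    "Q fever": '"Q fever"',
    "HPV": "Alphapapillomavirus OR papillomavirus infections OR HPV OR human papilloma virus",
    "AIDS, last 9 months": 'aids AND "last 9 months"[edat]',
}

for label, query in queries.items():
    print(f"{label}: {pubmed_count(query)} PubMed records")
```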

The number of hits alone doesn’t mean much, certainly not if news, editorials and comments are included. But let’s turn to the second point: that there is “not ONE RCT on H1N1”.

Again, is it reasonable to expect even one RCT to be published and included in PubMed within a 9-month period? Any serious study takes time: from concept to initiation, patient enrollment, sufficient follow-up, collection of data, writing and submitting the article, peer review, publication, inclusion in PubMed and assignment of MeSH terms (including the publication type “Randomized Controlled Trial”).

Furthermore, RCTs are not always the most feasible or appropriate study design for answering certain questions, for instance questions related to harm, etiology, epidemiology, spread of the virus, characteristics, diagnosis and prognosis. RCTs are most suitable to evaluate the efficacy of treatment or prevention interventions; thus, in the case of H1N1, the efficacy of vaccines and of neuraminidase inhibitors to prevent or treat H1N1 flu. However, it may not always be ethical to do so (see below).

I’ve repeated the search, and using prefab “My NCBI filters” for RCTs discussed before I get the following results:

Using the Randomized Controlled Trials limit in PubMed I do get 7 hits, and using broader filters, like the Therapy/Narrow filter under Clinical Queries, I even find 2 more RCTs that have not yet been indexed with MeSH terms. With the Cochrane Highly Sensitive filter even more hits are obtained, most of which are “noise”, inherent to the use of a broad filter.

The RCTs found are safety/immunogenicity/stability studies of subunit or split vaccines to H1N1, H3N2, and B influenza strains. This means they are not restricted to H1N1, but that is true for the entire set of H1N1 publications. 40 of the 1,443 hits are even animal studies. Thus the total number of articles dealing with H1N1 only – and in humans – is far less than 1,443.
By the way, one of the 15 H1N1 hits in PubMed obtained with the SR filter (see Fig.) is a meta-analysis of RCTs in the BMJ, co-authored by Heneghan. It is not about H1N1, but it contains the sentence: “Their (neuraminidase inhibitors) effects on the incidence of serious complications, and on the current A/H1N1 influenza strain remain to be determined.”

More importantly, if studies have been undertaken in this field, they are probably not yet published. Thus, the place to look is a clinical trials register, like ClinicalTrials.gov (http://clinicaltrials.gov/), the WHO International Clinical Trials Registry Platform Search Portal (www.who.int/trialsearch), or national or pharmaceutical industry trial registers.

A search for H1N1 OR swine flu in ClinicalTrials.gov, which offers the best search functions, yields 132 studies, of which 116 were first received this year.

Again, most trials concern the safety and efficacy of H1N1 vaccines and include the testing of vaccines in subgroups, like pregnant women, children with asthma and people with AIDS. 30 trials are phase III.
Narrowing the search to H1N1 OR swine flu, with neuraminidase inhibitors OR oseltamivir OR zanamivir filled in in the field “Interventions”, yields 8 studies. One of the studies is a phase III trial.

This yield doesn’t seem bad per se. However, numbers of trials don’t mean a lot; a more pertinent issue is whether the most important and urgent questions are being investigated.
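For completeness: such register counts can also be retrieved programmatically. The sketch below is an illustration only and was not part of the original searches, which were done in the register’s own search form; it assumes the current ClinicalTrials.gov API v2 and its query.term, query.intr and countTotal parameters, so verify these names against the API documentation before relying on them.

```python
# Hedged sketch: count ClinicalTrials.gov records for the searches described above.
# Endpoint and parameter names (query.term, query.intr, countTotal) are assumptions
# based on the current API v2; check the official documentation before use.
import json
import urllib.parse
import urllib.request
from typing import Optional

API = "https://clinicaltrials.gov/api/v2/studies"

def trial_count(term: str, intervention: Optional[str] = None) -> int:
    """Return the total number of registered studies matching the search."""
    params = {"query.term": term, "countTotal": "true", "pageSize": 1}
    if intervention:
        # Corresponds to the register's "Interventions" search field.
        params["query.intr"] = intervention
    with urllib.request.urlopen(f"{API}?{urllib.parse.urlencode(params)}") as response:
        data = json.load(response)
    return int(data["totalCount"])

print(trial_count("H1N1 OR swine flu"))
print(trial_count("H1N1 OR swine flu",
                  "neuraminidase inhibitors OR oseltamivir OR zanamivir"))
```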

Three issues are important with respect to interventions:

  1. Are H1N1 vaccines safe and immunogenic? in subpopulations?
  2. Do H1N1 vaccines lower morbidity and mortality due to the H1N1 flu?
  3. Are neuraminidase inhibitors effective in preventing or treating H1N1 flu?
Question [1] will be answered by the current trials.
Older Cochrane Reviews on seasonal influenza (and their updates) cast doubt on the efficacy of both [2] vaccines (see the [poor*] Atlantic news article) and [3] neuraminidase inhibitors, in children (Cochrane 2007 and BMJ 2009) and adults (Cochrane 2006, update 2008 and BMJ 2009), against symptoms or complications of the seasonal flu. The possibility has even been raised that seasonal flu shots are linked to swine flu risk.
However, the current H1N1 isn’t a seasonal flu. It is a sudden, new pandemic that requires different actions. Overall H1N1 isn’t as deadly as the regular influenza strains, but it hits certain people harder: very young kids, people with asthma and pregnant women. About the latter group, Amy Tuteur (obstetrician-gynecologist blogging at The Skeptical OB) wrote a guest post at Kevin MD:
(…) the H1N1 influenza has had an unexpectedly devastating impact among pregnant women. According to the CDC, there have been approximately 700 reported cases of H1N1 in pregnant women since April.** Of these, 100 women have required admission to an intensive care unit and 28 have died. In other words, 1 out of every 25 pregnant women who contracted H1N1 died of it. By any standard, that is an appalling death rate. (……)
To put it in perspective, the chance of a pregnant woman dying from H1N1 is greater than the chance of a heart patient dying during triple bypass surgery. That is not a trivial risk.
The H1N1 flu has taken an extraordinary toll among pregnant women. A new vaccine is now available. Because of the nature of the emergency, there has not been time to do any long term studies of the vaccine. Yet pregnant women will need to make a decision as soon as possible on whether to be vaccinated. (Emphasis mine)
…. Given the dramatic threat and the fact that we know of no unusual complications of vaccination, the decision seems clear. Every pregnant woman should get vaccinated as soon as possible.
Thus the anticipated risks must be balanced against the anticipated benefits. Amy urges pregnant women to get vaccinated, even though no one can be sure about the side effects and about the true efficacy of the vaccine.
For scientific purposes it would be best to perform a double-blind randomized trial, with half of a series of pregnant women receiving the vaccine and the other half a placebo. This would provide the most rigorous evidence for the true efficacy and safety of the vaccine.
However, it would not be ethical to do so. As “Orac” of Orac Knows explains so well in his post “Vaccination for H1N1 ‘swine’ flu: Do The Atlantic, Shannon Brownlee, and Jeanne Lenzer matter?”, RCTs are only acceptable from an ethical standpoint if we truly do not know whether one treatment is superior to another or whether a treatment is better than a placebo. There is sufficient reason to believe that vaccination against H1N1 will be more efficacious than “doing nothing”. Leaving a control group unvaccinated would certainly mean that a substantial percentage of pregnant women is going to die. To study the efficacy of the H1N1 vaccine among pregnant women, observational studies (like cohort studies) are also suitable and more appropriate.
Among the studies found in ClinicalTrials.gov there are a few H1N1 vaccine clinical studies in pregnant women, including RCTs. But these RCTs never compare vaccinated with non-vaccinated women. All pregnant women are vaccinated, but the conditions vary.
In one Danish study the arms (study groups) are as follows:
Thus two doses of H1N1 vaccine with adjuvant are compared with a higher dose of H1N1 vaccine without adjuvant. As a control, non-pregnant women are vaccinated with the adjuvanted H1N1 vaccine.*** The RCT is performed within a prospective birth-cohort study recruiting 800 pregnant mothers between Q1 2009 and Q4 2010. As a natural control, women pregnant in the H1N1 season (Q4) will be compared with women pregnant outside the season. Please note that the completion date of this study is 2012, so we will have to wait a number of years before the study describing the results can be found in PubMed…
To give an impression of the idea behind the study, here is the summary of that trial in the register (not because it is particularly outstanding, but to highlight the underlying thoughts):
“Pregnant women are at particular risk during the imminent H1N1v influenza pandemic. The new H1N1v virus requires urgent political and medical decisions on vaccination strategies in order to minimize severe disease and death from this pandemic. However, there is a lack of evidence to build such decisions upon. A vaccine will be provided in the fourth quarter of 2009, but there is little knowledge on the immunogenicity. Particularly its clinical effectiveness and duration of immunity in pregnant women and their newborn infants is unknown. Therefore, it will be important to study the optimal vaccination regimens with respect to dosing and use of adjuvant to decide future health policies on vaccination of pregnant women. We have a unique possibility to study these aspects of H1N1v infection in pregnant women in our ongoing unselected, prospective, birth-cohort study recruiting 800 pregnant mothers between Q1- 2009 and Q4-2010. Pregnant women from East-Denmark are being enrolled during the 2nd trimester and their infant will undergo a close clinical follow-up. The H1N1v pandemic is expected to reach Denmark Q4-2009. The timing of this enrollment and the imminent pandemic allows for an “experiment of nature” whereby the first half of the mothers completes pregnancy before the H1N1v pandemic. The other half of this cohort will be pregnant while H1N1v is prevalent in the community and will require H1N1v vaccination.The aim of this randomized, controlled, trial is to compare and evaluate the dose-related immune protection conferred by vaccine and adjuvant (Novartis vaccine Focetria) in pregnant women and non-pregnant women. In addition the protocol will assess the passive immunity conferred to the newborn from these vaccine regimes. The study will provide evidence-based guidance for health policies on vaccination for the population of pregnant women during future H1N1v pandemics.”
Although appropriate studies with regard to H1N1 vaccination are being done, it is conceivable that certain measures might not be appropriate on the basis of what we know. For instance, pretreating people in the non-risk groups (healthy young adults) with neuraminidase inhibitors because they are “indispensable employees”. Perhaps Heneghan, who as you remember is a co-author of the BMJ paper on neuraminidase inhibitors in children with the seasonal flu, was thinking of this when writing his post.
Had Heneghan directed his arrows at certain interventions in certain circumstances in certain people, he might have had a good point, but now his arrows don’t hit any target. Revere of Effect Measure and Orac of Orac Knows might well have diagnosed him as someone who suffers from “methodolatry”, which is, as Revere puts it, the “profane worship of the randomized clinical trial as the only valid method of investigation.”
Notes
* But see the excellent post of Orac, who trashes the Atlantic paper in Flu vaccination: Do The Atlantic, Shannon Brownlee, and Jeanne Lenzer matter? (scienceblogs.com). He also critiques the attitude of the Cochrane author Jefferson, who has a different voice in the media compared to the Cochrane Reviews he co-authors; in the reviews he is far more neutral.
** There is no direct link to the data in the post. I’m not sure whether all pregnant women in the US are routinely tested for H1N1. (if not the percentage of H1N1 deaths among H1N1 infected pregnant women might be overestimated)
***In the US, vaccines given to pregnant women are without adjuvant.






#FollowFriday #FF the EBM-Skeptics @cochranecollab @EvidenceMatters @oracknows @ACPinternists

27 11 2009

FollowFriday is a twitter tradition in which twitter users recommend other users to follow (on Friday) by twittering their name(s), the hashtags #FF or #FollowFriday, and the reason for their recommendation(s).

Since the roll out of Twitter lists I add the #FollowFriday Recommendations to a (semi-)permanent #FollowFriday Twitter list: @laikas/followfridays-ff

This week I have added 4 people to the #FollowFriday list who are all twittering about EBM and/or are skeptics and/or belong to the Cochrane Collaboration. Since there are many interesting people in this field, I also made a separate Twitterlist: @laikas/ebm-cochrane-sceptics

The following people are added to both my #followfridays-ff (n=36) and ebm-cochrane-sceptics (n=46) lists. If you are on twitter you can follow these lists.
I’m sure I forgot somebody. If I did, let me know and I’ll see if I include that person.

All 4 tweople have twittered about the new and much discussed breast cancer screening guidelines.

  1. @ACPinternists* is the Communications Department of the American College of Physicians (ACP). I know the ACP from the ACP Journal Club, with its excellent critically appraised topics, in a section of the well-known Annals of Internal Medicine. The uproar over the new U.S. breast cancer screening guidelines started with the publication of 3 articles in Ann Intern Med.
    *Mmm, when I come to think of it, shouldn’t @ACPinternists be added to the biomedical journals Twitter lists as well?
  2. @EvidenceMatters is really an invaluable tweeter with a high output of many different kinds of tweets, often (no surprise) related to Evidence Based Medicine. He (?) is very inspiring. My post “screening can’t hurt, can it” was inspired by one of his tweets.
  3. @cochranecollab stands for the Cochrane Collaboration. Like @acpinternists the tweets are mostly unidirectional, but provide interesting information related to EBM and/or the Cochrane Collaboration. Disclosure: I’m not entirely neutral.
  4. @oracknows. Who doesn’t know Orac? Orac is “a (not so) humble pseudonymous surgeon/scientist with an ego just big enough to delude himself that someone might actually care about his miscellaneous”. His tweets are valuable because of the high-quality posts on his blog Respectful Insolence: Orac mostly uses Twitter as a publication platform. I can really recommend his excellent explanation of the new breast cancer guidelines.

You may also want to read:





Adding Methodological Filters to MyNCBI

26 11 2009

Idea: Arnold Leenders
Text: “Laika”

Methodological search filters can help to narrow down a search by enriching for studies with a certain study design or methodology. PubMed has built-in methodological filters: the so-called Clinical Queries for domains (like therapy and diagnosis) and for evidence-based papers (like the “Systematic Review subset” in PubMed). These searches are often useful to quickly find evidence on a topic or to perform a CAT (Critically Appraised Topic). More exhaustive searches require broader filters not incorporated in PubMed. (See Search Filters. 1. An Introduction.)

The redesign of PubMed has made it more difficult to apply Clinical Queries after a search has been optimized. You can still go directly to the Clinical Queries (on the front page) and fill in some terms, but we would rather advise building the strategy first, checking the terms and combining your search with filters afterwards.

Suppose you would like to find out whether spironolactone effectively reduces hirsutism in a female with PCOS (see 10 + 1 PubMed Tips for Residents and their Instructors, Tip 9). You first check that the main concepts hirsutism and spironolactone are o.k. (i.e. they map automatically to the correct MeSH terms). Applying the Clinical Queries at this stage would require you to scroll down the page each time you use them.

Instead, you can use filters in My NCBI for that purpose. My NCBI is your (free) personal space for saving searches and results, setting PubMed preferences, creating automatic email alerts and creating search filters.
The My NCBI option is at the upper right of the PubMed page. You first have to create a free account.

To activate or create filters, go to [1] My NCBI and click on [2] Search Filters.

Since our purpose is to make filters for PubMed, choose [3] PubMed from the list of NCBI-databases.

Under Frequently Requested Filters you find the most popular Limit options. You can choose any of the optional filters for future use. This works faster than searching for the appropriate limit each time. You can, for instance, use the Humans filter to exclude animal studies.

The Filters we are going to use are under “Browse Filters”, Subcategory Properties….

….. under Clinical Queries (Domains, i.e. therapy) and Subsets (Systematic Review Filters)

You can choose any filter you like. I choose the Systematic Review Filter (under Subsets) and the Therapy/Narrow Filter under  Clinical Queries.

In addition you can add custom filters. For instance you might want to add a sensitive Cochrane RCT filter, if you perform broad searches. Click Custom Filters, give the filter a name and copy/paste the search string you want to use as filter.

Check via “Run Filter” whether the filter works (the number of hits is shown) and SAVE the filter.

Next you have to activate the filters you want to use. Note there is a limit of 15 filters (including custom filters) that can be selected and listed in My Filters. [Edited July 5th; hat tip Tanya Feddern-Bekcan]

Under  My Filters you now see the Filters you have chosen or created.

From now on I can use these filters to limit my search. So let’s go to my original search in “Advanced Search”. Unfiltered, search #3 (hirsutism AND spironolactone) has 197 hits.

When you click on the number of hits you arrive at the results page.
At the right are the filters, with the number of results of your search combined with each filter (between brackets).

When you click on the Systematic Reviews link you see the 11 results, most of them very relevant. Filters (except the custom filters) can be appended to the search (and thus saved) by clicking the yellow + button.

Each time you do a search (while logged in to My NCBI), the filtered results are automatically shown at the right.
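For those who prefer to apply the same kind of methodological filters outside the My NCBI interface, the filter strings can simply be appended to a search. The sketch below is illustrative and not part of the original walkthrough; it assumes that the Systematic Reviews subset is invoked with systematic[sb] and uses a commonly cited version of the Therapy/Narrow Clinical Queries string, so check the current PubMed Clinical Queries page for the exact, up-to-date strings.

```python
# Minimal sketch: append methodological filter strings to a PubMed search via
# the NCBI E-utilities esearch endpoint. The filter strings below are assumptions;
# verify them against the current PubMed Clinical Queries documentation.
import json
import urllib.parse
import urllib.request

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

FILTERS = {
    "unfiltered": "",
    "systematic reviews subset": "systematic[sb]",
    "therapy/narrow": (
        "randomized controlled trial[Publication Type] OR "
        "(randomized[Title/Abstract] AND controlled[Title/Abstract] "
        "AND trial[Title/Abstract])"
    ),
}

def pubmed_hits(query: str) -> int:
    """Return the number of PubMed records for a query."""
    params = urllib.parse.urlencode(
        {"db": "pubmed", "term": query, "retmode": "json", "retmax": 0}
    )
    with urllib.request.urlopen(f"{ESEARCH}?{params}") as response:
        return int(json.load(response)["esearchresult"]["count"])

base = "hirsutism AND spironolactone"
for name, filter_string in FILTERS.items():
    query = f"({base}) AND ({filter_string})" if filter_string else base
    print(f"{name}: {pubmed_hits(query)} hits")
```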

Clinical Queries are often handy when you are looking for evidence or making a CAT (Critically Appraised Topic). In the new version of PubMed, however, the Clinical Queries are harder to find. It is therefore convenient to include certain Clinical Queries in My NCBI. These queries can be found under Browse Filters (an option under Search Filters).

It is also possible to create special search filters, such as the Cochrane highly sensitive filter for RCTs. This can be done under Custom Filters.

Do check via “Run Filter” whether the filter works, and then save it.

After that you still have to activate the filter by ticking its box. You could thus include all filters of the Clinical Study Category and activate them depending on the domain of your question.

This way you always have all filters at hand. The results are shown automatically (on the right).





Presentation at the #NVB09: “Help, the doctor is drowning”

16 11 2009

Last week I was invited to speak at the congress of the NVB, the Dutch association of librarians and information specialists. I replaced Josje Calff in the session “the professional”, chaired by Bram Donkers of the magazine InformatieProfessional. Other sessions were: “the client”, “the technique” and “the connection” (see program).

It was a very successful meeting, with Andrew Keen and Bas Haring in the plenary session. I understand from tweets and blog posts that @eppovannispen and @lykle, who were in parallel sessions, were especially interesting.
Some of the (Dutch) blog posts (not about my presentation… phew) are:

I promised to upload my presentation to Slideshare. And here it is.

Some slides are different from the original. First, Slideshare doesn’t allow animation (so slides had to be added to get a similar effect); second, I realized later that the article and search I showed in Ede were not yet published, so I put “top secret” in front of them.

The title refers to a Dutch book and film: “Help de dokter verzuipt” (“Help the doctor is drowning”).

Slides 2-4: NVB-tracks; why I couldn’t discuss “the professional” without explaining the changes with which the medical profession is confronted.

Slides 5-8: Clients of a medical librarian (dependent on where he/she works).

Slides 9-38: Changes to the medical profession (less time, opinion-based medicine gradually replaced by evidence based medicine, information overload, many sources, information literacy)

Slides 39-66: How medical librarians can help (‘electronic’ collection accessible from home, study landscape for medical students, less emphasis on books, up to date with alerts (email, RSS, netvibes), portals (i.e. for evidence based searching), education (i.e. courses, computer workshops, e-learning), active participation in curriculum, helping with searches or performing them).

Slides 67-68: Summary (Potential)

Slide 69: Barriers/Risks: money, support (management, contact persons at the departments/in the curriculum), doctors like to do it themselves (it looks easy), you have to find a way to reach them, training of medical information specialists.

Slides 70-73 Summary & Credits

Here are some tweets related to this presentation.





#Cochrane Colloquium 2009: Better Working Relationship between Cochrane and Guideline Developers

19 10 2009

Last week I attended the annual Cochrane Colloquium in Singapore. I will summarize some of the meetings.

Here is a summary of an interesting (parallel) special session: Creating a closer working relationship between Cochrane and Guideline Developers. This session was brought together as a partnership between the Guidelines International Network (G-I-N) and The Cochrane Collaboration to look at the current experience of guideline developers and their use of Cochrane reviews (see abstract).

Emma Tavender of the EPOC Australian Satellite, Australia, reported on the survey carried out by the UK Cochrane Centre to identify the use of Cochrane reviews in guidelines produced in the UK (I did not attend this presentation).

Pwee Keng Ho, Ministry of Health, Singapore, is leading the Health Technology Assessment (HTA) and guideline development program of the Singapore Ministry of Health. He spoke about the issues faced as a guideline developer using Cochrane reviews, or – in his own words – his task was “to summarize whether guideline developers like Cochrane systematic reviews or not”.

Keng Ho presented the results of 3 surveys of different guideline developers. Most surveys had very few respondents: 12-29, if I remember correctly.

Each survey had approximately the same questions, but in a different order. On the face of it, the 3 surveys gave the same picture.

Main points:

  • some guideline developers are not familiar with Cochrane Systematic Reviews
  • others have no access to it.
  • of those who are familiar with the Cochrane Reviews and do have access to it, most found the Cochrane reviews useful and reliable. (in one survey half of the respondents were neutral)
  • most importantly they actually did use the Cochrane reviews for most of their guidelines.
  • these guideline developers also used the Cochrane methodology to make their guidelines (whereas most physicians are not inclined to use the exhaustive search strategies and systematic approach of the Cochrane Collaboration)
  • An often-heard critique from guideline developers concerned the non-comprehensive coverage of topics by Cochrane Reviews. However, unlike in Western countries, the Singapore Ministry of Health mentioned acupuncture and herbs as missing topics (for certain diseases).

This incomplete coverage, caused by a choice of subjects that is not demand-driven, was a recurrent topic at this meeting and a main issue recognized by the entire Cochrane community. Priority setting for Cochrane systematic reviews is therefore one of the main topics addressed at this Colloquium and in the Cochrane strategic review.

Kay Dickersin of the US Cochrane Center, USA, reported on the issues raised at the stakeholders meeting held in June 2009 in the US (see here for agenda) on whether systematic reviews can effectively inform guideline development, with a particular focus on areas of controversy and debate.

The Stakeholder summit concentrated on using quality SR’s for guidelines. This is different from effectiveness research, for which the Institute of Medicine (IOM) sets the standards: local and specialist guidelines require a different expertise and approach.

All kinds of people are involved in the development of guidelines, e.g. nurses, consumers and physicians.
Important issues to address, point by point:

  • Some may not understand the need to be systematic
  • How to get physicians on board: they are not very comfortable with extensive searching and systematic work
  • Ongoing education, like how-to workshops, is essential
  • What to do if there is no evidence?
  • More transparency; handling conflicts of interest
  • Guidelines differ, including the rating of the evidence. Almost everyone in the Stakeholders meeting used GRADE to grade the evidence, but not as it was originally described. There were numerous variations on the same theme. One question is whether there should be one system or not.
  • Another -recurrent- issue was that Guidelines should be made actionable.

Here are podcasts covering the meeting

Gordon Guyatt, McMaster University, Canada, gave  an outline of the GRADE approach and the purpose of ‘Summary of Findings’ tables, and how both are perceived by Cochrane review authors and guideline developers.

Gordon Guyatt, whose magnificent book “Users’ Guides to the Medical Literature” (JAMAevidence) lies on my desk, was clearly in favor of adherence to the original GRADE guidelines. Forty organizations have adopted these GRADE guidelines.

GRADE stands for the “Grading of Recommendations Assessment, Development and Evaluation” system. It is used for grading evidence when submitting a clinical guidelines article. Six articles in the BMJ are specifically devoted to GRADE (see here for one (full text) and 2 (PubMed)). GRADE not only takes the rigor of the methods into account, but also the balance between the benefits and the risks, burdens, and costs.

Suppose a guideline would recommend thrombolysis to treat disease X, because good-quality small RCTs show thrombolysis to be slightly but significantly more effective than heparin in this disease. By relying only on direct evidence from the RCTs, it is not taken into account that observational studies have long shown that thrombolysis increases the risk of massive bleeding in diseases Y and Z. Clearly the risk of harm is the same in disease X: both benefits and harms should be weighed.
Guyatt gave several other examples illustrating the importance of grading the evidence and the understandable overview presented in the Summary of Findings Table.

Another issue is that guideline makers are distressingly ready to embrace surrogate endpoints instead of outcomes that are more relevant to the patient. For instance it is not very meaningful if angiographic outcomes are improved, but mortality or the recurrence of cardiovascular disease are not.
GRADE takes into account if indirect evidence is used: It downgrades the evidence rating.  Downgrading also occurs in case of low quality RCT’s or the non-trade off of benefits versus harms.

Guyatt pleaded for uniform use of GRADE, and advised everybody to get comfortable with it.

Although I must say that it can feel somewhat uncomfortable to attach absolute ratings to non-absolute differences. These are really man-made formulas that people agreed upon. On the other hand, it is a good thing that it is not only the outcome of the RCTs with respect to benefits (of sometimes surrogate markers) that counts.

A final remark from Guyatt: “Everybody makes the claim they are following an evidence based approach, but you have to teach them what that really means.”
Indeed, many people describe their findings and/or recommendations as evidence based, because “EBM sells well”, but upon closer examination many reports are hardly worth the name.
