Medpedia, the Medical Wikipedia, is Dead. And We Missed Its Funeral…

12 07 2013

In a 2009 post about Wikipedia I suggested that initiatives like Ganfyd or Medpedia might be a solution to Wikipedia's accuracy and credibility problems, because only health experts are allowed to edit or contribute to the content of these knowledge bases.

Medpedia was a more sophisticated platform than Ganfyd, which looks more like a simple medical encyclopedia. A similar online encyclopedia project with many medical topics, Google Knol, was discontinued by Google as of May 1, 2012.

But now it appears Medpedia may have followed Google Knol into the same blind alley.

Medpedia was founded in 2007 [2a] by James Currier, an entrepreneur and investor [2b] and an early proponent of social media. He founded the successful company Tickle in 1999, when the term Web 2.0 was coined but not yet mainstream. And his list of investments is impressive: Flickr, BranchOut and Goodreads, for instance.

On its homepage Medpedia was described as a “long term, worldwide project to evolve a new model for sharing and advancing knowledge about health, medicine and the body.”
It was developed in association with top medical schools and organizations such as Harvard, Stanford, the American College of Physicians, and the NHS. Medpedia ran on the same software and under the same license as Wikipedia and was aimed at both the public and experts. In contrast to Wikipedia, only experts were qualified to contribute to the main content (although others could suggest changes and new topics) [3, 4, 5, 6]. Unlike many other medical wikis, Medpedia featured a directory of medical editor profiles with general and Medpedia-specific information. This is far more transparent than wikis without individual author recognition [5].

Although promising, Medpedia never became a real success. Von Muhlen wrote in 2012 [4] that there were no articles reporting success metrics for Medpedia or similar projects. In contrast, Wikipedia remains immensely popular among patients and doctors.

Health 2.0 pioneers like E-Patient Dave (@ePatientDave) and Bertalan Meskó (@berci) saw Medpedia’s Achilles heel right from the start:

Bertalan Meskó at his blog Science Roll [7]:

We need Medpedia to provide reliable medical content? That’s what we are working on in Wikipedia.

I believe elitism kills content. Only the power of masses controlled by well-designed editing guidelines can lead to a comprehensive encyclopaedia.

E-patient Dave (a fierce proponent of participatory medicine, in which everyone, medical expert or not, works in partnership to produce accurate information) addressed his concerns in his post

“Medpedia: Who gets to say what info is reliable?” [8]

The title says it all. In Dave’s opinion it is “an error to presume that doctors inherently have the best answer” or as Dave summarizes his concern: “who will vet the vetters?”

In addition, Clay Shirky noted that some Wikipedia entries, like the biopsy entry, were far more robust than the corresponding Medpedia entries [9,10].

Ben Toth, on the other hand, found the Medpedia item on atrial fibrillation better than the corresponding Wikipedia page in some respects, but less up to date [11].

In her Medpedia review in the JMLA, medical librarian Melissa Rethlefsen [5] concludes that "the content of Medpedia is varied and not clearly developed", that it lacks topical breadth and depth, and that it is more a set of ideals than a workable reference source. Another issue is that Medpedia pages never ranked high in search results, which means its content was hardly findable in today's Google-centric world.

She concludes that, for now (2009), this "means that Wikipedia will continue to be the medical wiki of choice".

I fear that this will be forever, for Medpedia has ceased to exist.

I noticed it yesterday, totally by coincidence: both my Medpedia blog badge and Meskó's Webicina "Medical Librarianship in Social Medicine" wiki page were redirected to a faulty page.

I checked the Internet, but all I could find was a message at Wikipedia:

"It appears that Medpedia is now closed but there is no information about it closing. Their Facebook and Twitter feeds are still open but they have not been updated in a few years. Their webpage now goes to a spam site."

I checked the Wayback Machine and found the "last sparks of life" in January 2013:

[Screenshot of the Wayback Machine capture of Medpedia, January 2013]

This morning I contacted Medpedia’s founder James Currier, who kindly and almost instantly replied to all my questions.

His answers are shown below (with permission) in their entirety.

=============================================================================

[me: ] I hope that you don’t mind that I use LinkedIn to ask you some questions about Medpedia.

[James:] I don't mind at all!

Is Medpedia dead? And if so, why was it discontinued?

For now it is. We worked on it for 6 years, had a fantastic team of developers, had fantastic partners who supported us, had a fantastic core group of contributors like yourself, and I personally spent millions of dollars on it. In other words, we gave it a really good effort. But it never got the sort of scale it needed to become something important. So for the last two years, we kept looking for a new vision of what it could become, a new mission. We never found one, and it was expensive to keep running.
In the meantime, we had found a new mission that Medpedia could not be converted into, so we started a new company, Jiff, to pursue it. "Health Care in a Jiff" is the motto. Jiff continues the idea of digitizing healthcare, and making it simple and transparent for the individual, but goes after it in a very different way. More info about Jiff here: https://www.jiff.com and here https://www.jiff.com/static/news. Jiff has taken our time and attention, and hopefully will produce the kinds of benefits we were hoping to see from Medpedia.

Why weren’t people informed and  was Medpedia quietly shut down?

We definitely could have done a better job with that! I apologize. We were under a tight time frame due to several things, such as people leaving the effort, technical issues around where the site was being hosted, and corporate and tax issues after 6 years of operating. So it was rushed, and we should have figured out a way to do a better job of communicating.

Couldn't the redirection to the spam site be prevented? And can you do something about it?

I didn’t know about that! I’ll look into it and find out what’s going on.*

Your LinkedIn profile says you’re still working for MedPedia. Why is that? Are there plans to make a new start, perhaps? And how?

Yes, I haven’t updated my LinkedIn profile in a while. I just made that change. We have no current plans to restart Medpedia. But we’re always looking for a new mission that can be self sustaining! Let me know if you have one.

And/or do you have (plans for) other health 2.0 initiatives?

Jiff is our main effort now, and there’s a wonderful CEO, Derek Newell running it.

I know you are a busy man, but I think it is important to inform all people who thought that Medpedia was a good initiative.

Thank you for saying you thought it was a good initiative. I did too! I just wish it had gotten bigger. I really appreciate your questions, and your involvement. Not all projects flourish, but we’ll all keep trying new ideas, and hopefully one will break out and make the big difference we hope for.

* Somewhat later James gave an update about the redirection:

By the way, I asked about the redirect, and found out that that page is produced by our registrar that holds the URL medpedia.com.

We wanted to put up the following message and I thought it was up:

“Medpedia was a great experiment begun in 2007.
Unfortunately, it never reached the size to be self sustaining, and it ceased operations in early 2013.
Thank you to all who contributed!”

I’m going to work again on getting that up!

============================================================================

I have one question left: what happened to all the materials the experts produced? Google Knol gave people time to export their contributions. Perhaps James Currier can answer that question too.

I also wonder why nobody noticed that Medpedia was shut down. Apparently it isn’t missed.

Finally I would like to thank all who have contributed to this "experiment". As a medical librarian committed to providing reliable medical information, I still find it a shame that Medpedia didn't work out.

I wish James Currier all the best with his new initiatives.

References

  1. The Trouble with Wikipedia as a Source for Medical Information
    (http://laikaspoetnik.wordpress.com) (2009/09/14)
  2. [a] Medpedia and [b] James Currier, last edited on 6/30/13* and 7/12/13 respectively (crunchbase.com)
  3. Laurent M.R. & Vickers T.J. (2009). Seeking Health Information Online: Does Wikipedia Matter? Journal of the American Medical Informatics Association, 16 (4), 471-479.
  4. von Muhlen M. & Ohno-Machado L. (2012). Reviewing social media use by clinicians. Journal of the American Medical Informatics Association, 19 (5), 777-781.
  5. Rethlefsen M.L. (2009). Medpedia. Journal of the Medical Library Association: JMLA, 97 (4), 325-326.
  6. Medpedia: Reliable Crowdsourcing of Health and Medical Information (highlighthealth.com) (2009/7/24)
  7. Launching MedPedia: From the perspective of a Wikipedia administrator (scienceroll.com) (2009/2/20)
  8. Medpedia: Who gets to say what info is reliable? (e-patients.net/) (2009/2/20)
  9. Clay Shirky at MLA '11 – On the Need for Health Sciences Librarians to Rock the Boat (mbanks.typepad.com) (2011)
  10. Wikipedia vs Medpedia: The Crowd beats the Experts (http://blog.lib.uiowa.edu/hardinmd/2011/05/31)
  11. Medpedia and Wikipedia (nelh.blogspot.nl) (2009/10/08)
  12. Jiff wants to do for employer wellness programs what WordPress did for blogs (medcitynews.com)
  13. Jiff Unveils Health App Development Platform, Wellness Marketplace (eweek.com)




No, Google Scholar Shouldn’t be Used Alone for Systematic Review Searching

9 07 2013

Several papers have addressed the usefulness of Google Scholar as a source for systematic review searching. Unfortunately the quality of those papers is often well below the mark.

In 2010 I already [1] (in the words of Isla Kuhn [2]) "robustly rebutted" the Anders paper "PubMed versus Google Scholar for Retrieving Evidence" [3] on this blog.

But earlier this year another controversial paper was published [4]:

"Is the coverage of Google Scholar enough to be used alone for systematic reviews?"

It is one of the most highly accessed papers of BMC Medical Informatics and Decision Making and has been welcomed in (for instance) the Twittersphere.

Researchers seem to blindly accept the conclusions of the paper.

But don’t rush  and assume you can now forget about PubMed, MEDLINE, Cochrane and EMBASE for your systematic review search and just do a simple Google Scholar (GS) search instead.

You might  throw the baby out with the bath water….

… As has been immediately recognized by many librarians, either at their blogs (see the blogs of Dean Giustini [5], Patricia Anderson [6] and Isla Kuhn [2]) or in direct comments to the paper (by Tuulevi Ovaska, Michelle Fiander and Alison Weightman [7]).

In their paper, Jean-François Gehanno et al examined whether GS was able to retrieve all the 738 original studies included in 29 Cochrane and JAMA systematic reviews.

And YES! GS had a coverage of 100%!

WOW!

All those fools at the Cochrane who do exhaustive searches in multiple databases using controlled vocabulary and a lot of synonyms when a simple search in GS could have sufficed…

But it is a logical fallacy to conclude from their findings that GS alone will suffice for SR-searching.

Firstly, as Tuulevi [7] rightly points out:

“Of course GS will find what you already know exists”

Or in the words of one of the official reviewers [8]:

What the authors show is only that if one knows what studies should be identified, then one can go to GS, search for them one by one, and find out that they are indexed. But, if a researcher already knows the studies that should be included in a systematic review, why bother to also check whether those studies are indexed in GS?

Right!

Secondly, it is also the precision that counts.

As Dean explains on his blog, a 100% recall with a precision of 0.1% (and it can be worse!) means that in order to find 36 relevant papers you have to go through ~36,700 items.

Dean:

Are the authors suggesting that researchers consider a precision level of 0.1% acceptable for the SR? Who has time to sift through that amount of information?

It is like searching for needles in a haystack. Correction: it is like searching for particular hay stalks in a haystack. It is very difficult to find them if they are hidden among other hay stalks. Suppose the hay stalks were all labeled (title) and I had a powerful hay-stalk magnet ("title search"); it would then be a piece of cake to retrieve them. This is what we call a "known item search". But would you even consider going through the haystack and checking the stalks one by one? Because that is what we have to do if we use Google Scholar as a one-stop search tool for systematic reviews.
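To make the arithmetic explicit (a minimal back-of-the-envelope calculation based on Dean's figures, not a number taken from the Gehanno paper itself):

\[
\text{precision} = \frac{\text{relevant records retrieved}}{\text{all records retrieved}}
\quad\Longrightarrow\quad
\text{all records retrieved} = \frac{36}{0.001} \approx 36\,000
\]

which is indeed of the same order as the ~36,700 items Dean mentions.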

Another main point of criticism is that the authors show a grave and worrisome lack of understanding of systematic review methodology [6] and don't grasp the importance of the search interface and of knowledge of indexing, which are both integral to searching for systematic reviews [7].

One wonders why the paper even passed peer review, as one of the two reviewers (Miguel Garcia-Perez [8]) had already smashed the paper to pieces.

The authors’ method is inadequate and their conclusion is not logically connected to their results. No revision (major, minor, or discretionary) will save this work. (…)

Miguel's well-founded criticism was not adequately addressed by the authors [9]. Apparently the editors didn't see through it and relied on the second peer reviewer [10], who merely said it was a "great job" etcetera, but that recall should not be written with a capital R (and that was about the only revision the authors made).

Perhaps it needs another paper to convince Gehanno et al and the uncritical readers of their manuscript.

Such a paper might just have been published [11]. It is written by Dean Giustini and Maged Kamel Boulos and is entitled:

Google Scholar is not enough to be used alone for systematic reviews

It is a simple and straightforward paper, but it makes its points clearly.

Giustini and Kamel Boulos looked for a recent SR in their own area of expertise (Chou et al [12]) that included a comparable number of references to that of Gehanno et al. Next they tested GS' ability to locate these references.

Although most papers cited by Chou et al. (n=476/506; ~95%) were ultimately found in GS, numerous iterative searches were required to find the references, and each citation had to be managed one at a time. Thus GS was not able to locate all references found by Chou et al., and the whole exercise was rather cumbersome.

As expected, trying to find the papers with a "real-life" GS search was almost impossible. Due to its rudimentary structure, GS did not understand the expert search strings and was unable to translate them. Thus, using Chou et al.'s original search strategy and keywords yielded an unmanageable result of more than 750,000 items.

Giustini and Kamel Boulos note that GS' ability to search the full text of papers, combined with its PageRank algorithm, can be useful.

On the other hand GS’ changing content, unknown updating practices and poor reliability make it an inappropriate sole choice for systematic reviewers:

As searchers, we were often uncertain that results found one day in GS had not changed a day later and trying to replicate searches with date delimiters in GS did not help. Papers found today in GS did not mean they would be there tomorrow.

But most importantly, not all known items could be found and the search process and selection are too cumbersome.

Thus shall we now once and for all conclude that GS is NOT sufficient to be used alone for SR searching?

We don’t need another bad paper addressing this.

But I would really welcome a well-performed paper looking at the additional value of GS in SR searching. For I am sure that GS may be valuable for some questions and some topics in some respects. We have to find out which.

References

  1. PubMed versus Google Scholar for Retrieving Evidence 2010/06 (laikaspoetnik.wordpress.com)
  2. Google scholar for systematic reviews…. hmmmm  2013/01 (ilk21.wordpress.com)
  3. Anders M.E. & Evans D.P. (2010). Comparison of PubMed and Google Scholar literature searches. Respiratory Care, May;55(5):578-83.
  4. Gehanno J.F., Rollin L. & Darmoni S. (2013). Is the coverage of Google Scholar enough to be used alone for systematic reviews? BMC Medical Informatics and Decision Making, 13:7 (open access)
  5. Is Google scholar enough for SR searching? No. 2013/01 (blogs.ubc.ca/dean)
  6. What’s Wrong With Google Scholar for “Systematic” Review 2013/01 (etechlib.wordpress.com)
  7. Comments at Gehanno’s paper (www.biomedcentral.com)
  8. Official Reviewer’s report of Gehanno’s paper [1]: Miguel Garcia-Perez, 2012/09
  9. Authors response to comments  (www.biomedcentral.com)
  10. Official Reviewer’s report of Gehanno’s paper [2]: Henrik von Wehrden, 2012/10
  11. Giustini D. & Kamel Boulos M.N. (2013). Google Scholar is not enough to be used alone for systematic reviews. Online Journal of Public Health Informatics, 5 (2).
  12. Chou W.Y.S., Prestin A., Lyons C. & Wen K.Y. (2013). Web 2.0 for Health Promotion: Reviewing the Current Evidence. American Journal of Public Health, 103 (1), e9-e18.




#EAHIL2012 CEC 1: Drupal for Librarians

5 07 2012

This week I’m blogging at (and mostly about) the 13th EAHIL conference in Brussels. EAHIL stands for European Association for Health Information and Libraries.

I already blogged about the second Continuing Education Course (CEC) I followed, but one day earlier, on Monday, I had followed another CEC. That session was led by Patrice Chalon, who is a Knowledge Manager at KCE, the Belgian Health Care Knowledge Centre.

The first part was theoretical and easy to follow. Unfortunately there were quite a few mishaps with the practical part (some people could not install the program via the USB stick, parts of the website were deleted and the computers were slow), but the session was instructive anyway, even though I was about the only person (of 6) lacking CMS or HTML knowledge (rereading the course abstract, I now realize that was a prerequisite…).

Drupal is a freely available, easy-to-use, modular content management system (CMS), for which you don't need extensive programming (or HTML) experience.

Drupal was created by a Belgian student (Dries Buytaert) in 2000. It evolved from drop.org (a small news site with a built-in web board to share news among friends) to Drupal (pronounced "droo-puhl", derived from the English pronunciation of the Dutch word "druppel", which means "drop"). The purpose was to enable others to use and extend the experimentation platform so that more people could develop it further.

Drupal.org is a well established and active community with over 630,000 subscribed members.

This web application makes use of PHP as a programming language and MySQL as a database backend.

In Drupal every "page" is a node. You can define as many node types as you need (news, page, event, etc.) and create "child" pages if you like (and move them to another parent page if necessary).

The editing function is easy: you can edit the format without needing HTML (it looks quite like WordPress) and add files as if it were email. Therefore it could easily serve a wiki function as well.

Drupal is fitted with a very good taxonomy system. This helps to organize nodes and menus.

Nodes, account registration and maintenance, menu management, and system administration are all basic features of the standard release of Drupal, known as Drupal core.

But thanks to the large community, Drupal benefits from thousands of third-party modules that let you tailor Drupal to your needs. When choosing modules it is important to check for longevity (is the module still being adapted for new Drupal releases, and how many downloads does it have? The more downloads, the more popular the module and the more likely it is to stay).

There are also different themes.

Drupal is used a lot by libraries, and libraries in turn have developed specific modules suited to library purposes.

The Views module enables you to provide a view of the metadata, and you can use metadata as a filter to create lists. Patrice was very enthusiastic about the bibliographic function ("the EndNote within the content management system"). He showed that it was very easy to import and search for bibliographic records (and metadata) from PubMed, Google Scholar etc. (and maintain correct links over time), i.e. just enter the PMID, do a DOI lookup, etc. Keywords like MeSH are loaded correctly.

Forgive me if I don't remember (and may even be wrong about) the technical details, but it really looked like a great tool with many possible uses.

If you need more information you can contact Patrice (Twitter: @pchalon) or consult Drupal.org, especially the Drupal group "Libraries" and Drupalib.

And as said, there is a large, active community. After all, Drupal's motto is "Come for the software, stay for the community."

Examples of Drupal Websites:
http://www.cochrane.org/. The new face of the Cochrane was created by its webmaster Chris Mavergames, and it is far more inviting to read and more interactive than its boring predecessor. As a matter of fact, it was Chris' enthusiasm about Drupal and the new look of the Cochrane site that raised my interest in Drupal. Chris has a website about Drupal (& web development, linked data & information architecture in general) and a Twitter list of Drupal folks you can follow.

Another example is http://htai.org/vortal, created by Patrice. Here is a presentation by Patrice that shows more details about this website (and Drupal's versatility for creating library websites).

This blog post is largely based on the comprehensive course notes of Patrice Chalon's "Drupal for Librarians" (CC), supplemented with my own notes.





The Scatter of Medical Research and What to do About it.

18 05 2012

Paul Glasziou, GP and professor in Evidence Based Medicine, co-authored a new article in the BMJ [1]. Similar to another paper [2] I discussed before [3], this paper deals with the difficulty for clinicians of staying up to date with the literature. But where the previous paper [2,3] highlighted the mere increase in the number of research articles over time, the current paper looks at the scatter of randomized controlled trials (RCTs) and systematic reviews (SRs) across the different journals cited in one year (2009) in PubMed.

Hoffmann et al analyzed 7 specialties and 9 subspecialties that are considered the leading contributors to the burden of disease in high-income countries.

They followed a relatively straightforward method for identifying the publications. Each search string consisted of a MeSH term (controlled term) to identify the selected disease or disorders, a publication type [pt] to identify the type of study, and the year of publication. For example, the search strategy for randomized trials in cardiology was: "heart diseases"[MeSH] AND randomized controlled trial[pt] AND 2009[dp]. (When searching "heart diseases" as a MeSH term, narrower terms are also searched.) Meta-analysis[pt] was used to identify systematic reviews.
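Spelled out for one specialty, the two strategies look like this (the first string is quoted from the paper; the second is my reconstruction based on the stated use of meta-analysis[pt]):

"heart diseases"[MeSH] AND randomized controlled trial[pt] AND 2009[dp]
"heart diseases"[MeSH] AND meta-analysis[pt] AND 2009[dp]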

Using this approach Hoffmann et al found 14,343 RCTs and 3,214 SRs published in 2009 in the fields of the selected (sub)specialties. There was a clear scatter across journals, but this scatter varied considerably among specialties:

“Otolaryngology had the least scatter (363 trials across 167 journals) and neurology the most (2770 trials across 896 journals). In only three subspecialties (lung cancer, chronic obstructive pulmonary disease, hearing loss) were 10 or fewer journals needed to locate 50% of trials. The scatter was less for systematic reviews: hearing loss had the least scatter (10 reviews across nine journals) and cancer the most (670 reviews across 279 journals). For some specialties and subspecialties the papers were concentrated in specialty journals; whereas for others, few of the top 10 journals were a specialty journal for that area.
Generally, little overlap occurred between the top 10 journals publishing trials and those publishing systematic reviews. The number of journals required to find all trials or reviews was highly correlated (r=0.97) with the number of papers for each specialty/ subspecialty.”

Previous work had already suggested that this scatter of research has a long tail. Half of the publications appear in a small minority of journals, whereas the remaining articles are scattered among many journals (see figure below).

Click to enlarge and see legends at BMJ 2012;344:e3223 [CC]

The good news is that SRs are less scattered and that general journals appear more often in the top 10 journals publishing SRs. Indeed for 6 of the 7 specialties and 4 of the 9 subspecialties, the Cochrane Database of Systematic Reviews had published the highest number of systematic reviews, publishing between 6% and 18% of all the systematic reviews published in each area in 2009. The bad news is that even keeping up to date with SRs seems a huge, if not impossible, challenge.

In other words, it is not sufficient for clinicians to rely on personal subscriptions to a few journals in their specialty (which is common practice). Hoffmann et al suggest several solutions to help clinicians cope with the increasing volume and scatter of research publications.

  • a central library of systematic reviews (but apparently the Cochrane Library fails to fulfill such a role according to the authors, because many reviews are out of date and are perceived as less clinically relevant)
  • a registry of planned and completed systematic reviews, such as PROSPERO (this makes it easier to locate SRs and reduces bias)
  • Synthesis of evidence and synopses, like the ACP Journal Club, which summarizes the best evidence in internal medicine
  • Specialised databases that collate and critically appraise randomized trials and systematic reviews, like www.pedro.org.au for physical therapy. In my personal experience, however, this database is often out of date and not comprehensive
  • Journal scanning services like EvidenceUpdates (from mcmaster.ca), which scans over 120 journals, filters articles on the basis of quality, has practising clinicians rate them for relevance and newsworthiness, and makes them available as email alerts and in a searchable database. I use this service too, but besides the fact that not all specialties are covered, the rating of evidence may not always be objective (see previous post [4])
  • The use of social media tools to alert clinicians to important new research.

Most of these solutions are (long) existing solutions that do not or only partly help to solve the information overload.

I was surprised that the authors didn't propose the use of personalized alerts. PubMed's My NCBI feature allows you to create automatic email alerts on a topic and to subscribe to electronic tables of contents (which could include ACP Journal Club). Suppose that a physician browses 10 journals roughly covering 25% of the trials. He or she does not need to read all the other journals from cover to cover to avoid missing one potentially relevant trial. Instead it is far more efficient to perform a topic search to filter relevant studies from journals that seldom publish trials on the topic of interest. One could even use the search of Hoffmann et al to achieve this* (a sketch follows below), although in reality most clinical researchers will have narrower fields of interest than all studies about endocrinology and neurology.
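Such an alert could be as simple as a saved My NCBI search along these lines (a hypothetical illustration; the excluded journal titles are placeholders for whatever journals the clinician already browses):

"heart diseases"[MeSH] AND (randomized controlled trial[pt] OR meta-analysis[pt]) NOT ("Circulation"[ta] OR "European Heart Journal"[ta] OR "Journal of the American College of Cardiology"[ta])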

At our library we are working on creating deduplicated, easy-to-read alerts that collate the tables of contents of certain journals with topic (and author) searches in PubMed, EMBASE and other databases. There are existing tools that do the same.

Another way to reduce the individual work (reading) load is to organize journal clubs or, even better, regular CATs (critically appraised topics). In the Netherlands, CATs are a compulsory item for residents. A few doctors do the work for many. Usually they choose topics that are clinically relevant (or for which the evidence is unclear).

The authors briefly mention that their search strategy might have missed some eligible papers and included some that are not truly RCTs or SRs, because they relied on PubMed's publication types to retrieve RCTs and SRs. For systematic reviews this may be a greater problem than recognized, for the authors used meta-analysis[pt] to identify systematic reviews. Unfortunately PubMed has no publication type for systematic reviews, but it may be clear that there are many more systematic reviews than meta-analyses. Possibly systematic reviews might even have a different scatter pattern than meta-analyses (i.e. the latter might be preferentially included in core journals).

Furthermore, not all meta-analyses and systematic reviews are reviews of RCTs (thus it is not completely fair to compare MAs with RCTs only). On the other hand, it is an omission of this study (not discussed) that only interventions are considered. Nowadays physicians have many other questions than those related to therapy, like questions about prognosis, harm and diagnosis.

I did a little, imperfect search just to see whether using other search terms than meta-analysis[pt] would have any influence on the outcome. I searched for (1) meta-analysis[pt] and (2) systematic review[tiab] (title and abstract) for papers about endocrine diseases. Then I subtracted (1) from (2) to analyse the systematic reviews not indexed as meta-analysis[pt].

Thus:

(ENDOCRINE DISEASES[MESH] AND SYSTEMATIC REVIEW[TIAB] AND 2009[DP]) NOT META-ANALYSIS[PT]
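Written out, the two component searches, (1) and (2) above, would be (my reconstruction; the hit counts are given further down):

ENDOCRINE DISEASES[MESH] AND META-ANALYSIS[PT] AND 2009[DP]
ENDOCRINE DISEASES[MESH] AND SYSTEMATIC REVIEW[TIAB] AND 2009[DP]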

I analyzed the top 10/11 journals publishing these study types.

This little experiment suggests that:

  1. the precise scatter might differ per search: apparently the systematic review[tiab] search yielded different top 10/11 journals (for this sample) than the meta-analysis[pt] search (partially because Cochrane systematic reviews apparently don't mention "systematic review" in the title and abstract?).
  2. the authors underestimate the number of systematic reviews: simply searching for systematic review[tiab] already found approximately 50% additional systematic reviews compared to meta-analysis[pt] alone.
  3. As expected (by me at least), many of the SRs and MAs were NOT dealing with interventions; see, for example, the first 5 hits (out of 108 and 236 respectively).
  4. Together these findings indicate that the true information overload is far greater than shown by Hoffmann et al (not all systematic reviews are found, and of all available study designs only RCTs are searched).
  5. On the other hand, this indirectly shows that SRs are a better way to keep up to date than suggested: SRs also summarize non-interventional research (the ratio of SRs of RCTs to individual RCTs is much lower than suggested).
  6. It also means that the role of Cochrane systematic reviews in aggregating RCTs is underestimated in the published graphs (the meta-analysis[pt] set is diluted with non-RCT systematic reviews; thus the proportion of Cochrane SRs among the interventional MAs becomes larger).

Well anyway, these imperfections do not contradict the main point of this paper: that trials are scattered across hundreds of general and specialty journals and that “systematic reviews” (or meta-analyses really) do reduce the extent of scatter, but are still widely scattered and mostly in different journals to those of randomized trials.

Indeed, personal subscriptions to journals seem insufficient for keeping up to date.
Besides supplementing subscriptions with methods such as journal scanning services, I would recommend the use of personalized alerts from PubMed and several prefiltered sources, including an EBM search engine like TRIP (www.tripdatabase.com/).

*but I would broaden it to find all aggregate evidence, including ACP, Clinical Evidence, syntheses and synopses, not only meta-analyses.

**I do appreciate that one of the co-authors is a medical librarian: Sarah Thorning.

References

  1. Hoffmann, Tammy, Erueti, Chrissy, Thorning, Sarah, & Glasziou, Paul (2012). The scatter of research: cross sectional comparison of randomised trials and systematic reviews across specialties BMJ, 344 : 10.1136/bmj.e3223
  2. Bastian, H., Glasziou, P., & Chalmers, I. (2010). Seventy-Five Trials and Eleven Systematic Reviews a Day: How Will We Ever Keep Up? PLoS Medicine, 7 (9) DOI: 10.1371/journal.pmed.1000326
  3. How will we ever keep up with 75 trials and 11 systematic reviews a day (laikaspoetnik.wordpress.com)
  4. Experience versus Evidence [1]. Opioid Therapy for Rheumatoid Arthritis Pain. (laikaspoetnik.wordpress.com)




“Pharmacological Action” in PubMed has no True Equivalent in OVID MEDLINE

11 01 2012

Searching for EMBASE Subject Headings (the EMBASE index terms) for drugs is relatively straightforward in EMBASE.

When you want to search for aromatase inhibitors you first search for the Subject Heading mapping to aromatase inhibitors (aromatase inhibitor). Next you explode aromatase inhibitor/ if you are interested in all its narrower terms. If not, you search both for the general term aromatase inhibitor and those specific narrower terms you want to include.
Exploding aromatase inhibitor (exp aromatase inhibitor/) yields 15938 results. That is approximately twice what you get by searching aromatase inhibitor/ alone (not exploded). This yields 7434 hits.
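In Ovid EMBASE syntax the search history would look something like this (hit counts as they were at the time of writing; they will be higher today):

1  aromatase inhibitor/         7434
2  exp aromatase inhibitor/     15938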

It is different in MEDLINE. If you search for aromatase inhibitors in the MeSH database you get two suggestions.

The first index term, "Aromatase Inhibitors", is a MeSH term. It has no narrower terms.
Drug MeSH terms are generally not arranged by mechanism of action, but by chemical structure/type of compound. That is often confusing. Spironolactone, for instance, belongs to the MeSH terms Lactones (and Pregnenes), not to Aldosterone Antagonists or Androgen Antagonists. Most clinicians want to search for a group of compounds with the same mechanism of action, not the same biochemical family.

The second term, "Aromatase Inhibitors" [Pharmacological Action], however, does stand for the mechanism of action. It does have narrower terms, including 2 MeSH terms and various substance names, also called Supplementary Concepts.

For complete results you have to search for both MeSH and Pharmacological action: “Aromatase Inhibitors”[Mesh] yields 3930 records, whereas (“Aromatase Inhibitors”[Mesh]) OR “Aromatase Inhibitors” [Pharmacological Action] yields 6045. That is a lot more.
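Side by side (hit counts as reported above):

"Aromatase Inhibitors"[Mesh]                                                      3930
"Aromatase Inhibitors"[Mesh] OR "Aromatase Inhibitors"[Pharmacological Action]   6045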

I usually don’t search PubMed, but OVID MEDLINE.

I know that Pharmacological Action terms are important, so I tried to find the equivalent in OVID.

I found the MeSH term Aromatase Inhibitors, but, unlike PubMed, OVID showed only two narrower drug terms (called Non-MeSH here, versus MeSH in PubMed).

I found that odd.

I reasoned that "Pharmacological Action" might perhaps be combined with the MeSH in OVID MEDLINE. This was later confirmed by Melissa Rethlefsen (see the Twitter discussion below).

In Ovid MEDLINE I got 3937 hits with Aromatase Inhibitors/ and 5219 with exp Aromatase Inhibitors/ (thus including Aminoglutethimide or Fadrozole).

At this point I checked PubMed (shown above). Here I found that "Aromatase Inhibitors"[Mesh] OR "Aromatase Inhibitors" [Pharmacological Action] yielded 6045 hits in PubMed, against 5219 in OVID MEDLINE for exp Aromatase Inhibitors/.

The specific aromatase inhibitors Aminoglutethimide/ and Fadrozole/ [set 60] accounted fully for the difference between the exploded [set 59] and non-exploded Aromatase Inhibitors [set 58].

But what explained the gap of approximately 800 records between “Aromatase Inhibitors”[Mesh] OR “Aromatase Inhibitors”[Pharmacological Action]* in PubMed and exp aromatase inhibitors/ in OVID MEDLINE?

Could it be the substance names, mentioned under “Aromatase Inhibitors”[Pharmacological Action], I wondered?

Thus I added all the individual substance names in OVID MEDLINE (code= .rn.). See search set 61 below.

Indeed these accounted fully for the difference (set 62 = 59 OR 61: the total number of hits is now similar to that in PubMed).
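Reconstructed as an Ovid search history it would look roughly like this (the numbered sets referred to a search history shown as an image in the original post; the substance names in set 61 are illustrative examples, not the complete list):

58  Aromatase Inhibitors/                                                    3937
59  exp Aromatase Inhibitors/                                                5219
60  Aminoglutethimide/ or Fadrozole/
61  (anastrozole or letrozole or exemestane or vorozole or formestane).rn.
62  59 or 61                                                                 ~6045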

This obviously is a mistake in OVID MEDLINE, and I will inform them.

In the meantime, take care to add the individual substance names when you search for drug terms that have a Pharmacological Action equivalent in PubMed. The substance names are not automatically searched when exploding the MeSH term in OVID MEDLINE.

——–

For more info on Pharmacological action, see: http://www.nlm.nih.gov/bsd/disted/mesh/paterms.html

Twitter Discussion between me and Melissa Rethlefsen about the discrepancy between PubMed and OVID MEDLINE (again showing how helpful Twitter can be for immediate discussions and exchange of thoughts)

[Screenshot of the Twitter conversation; read from bottom to top]





Things to Keep in Mind when Searching OVID MEDLINE instead of PubMed

25 11 2011

When I search extensively for systematic reviews I prefer OVID MEDLINE to PubMed for several reasons. Among them: it is easier to build a systematic search in OVID, the search history has a more structured format that is easy to edit, the search features are more advanced, giving you more control over the search, and translation of a search to OVID EMBASE, PsycINFO and the Cochrane Library is "peanuts", relatively speaking.

However, there are at least two things to keep in mind when searching OVID MEDLINE instead of PubMed.

1. You may miss publications, most notably recent papers.

PubMed not only provides access to MEDLINE, but also contains some other citations, including in-process citations, which provide a record for an article before it is indexed with MeSH and added to MEDLINE.

As previously mentioned, I once missed a crucial RCT that was available in PubMed, but not yet available in OVID/MEDLINE.

A few weeks ago one of my clients said that she had found 3 important papers with a simple PubMed search that were not retrieved by my exhaustive OVID MEDLINE search (Doh!).
All articles were recent ones [Epub ahead of print, PubMed - as supplied by publisher]. I checked whether these articles were indeed not yet included in OVID MEDLINE, and they weren't.

As said, PubMed doesn't have all the search features of OVID MEDLINE, and I felt a certain reluctance to build a completely new exhaustive search in PubMed. I would probably retrieve many irrelevant papers, which I had tried to avoid by searching OVID*. I therefore decided to roughly translate the OVID search using textwords only (the missed articles had no MeSH attached). It was a matter of copy-pasting the single textwords from the OVID MEDLINE search (omitting adjacency operators) and adding the tag [tiab], which means that terms are searched as textwords (in title and abstract) in PubMed (#2; only part of the long search string is shown).

To see whether all articles missed in OVID were in the non-MEDLINE set, I added the command: NOT MEDLINE[sb] (#3). Of the 332 records (#2), 28 belonged to the non-MEDLINE subset. All 3 relevant articles, not found in OVID MEDLINE, were in this set.
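Schematically, the PubMed search looked something like this (the placeholder textwords stand for the actual topic terms, which are not shown here either):

#2  (textword1[tiab] OR textword2[tiab] OR ...) AND (textword3[tiab] OR textword4[tiab] OR ...)
#3  #2 NOT medline[sb]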

In total, there were 15 unique records not present in the OVID MEDLINE and EMBASE searches. This additional search in PubMed was certainly worth the effort, as it yielded more than 3 new relevant papers. (Apparently there had recently been a boom in relevant papers on the topic.)

In conclusion, when doing an exhaustive search in OVID MEDLINE it is worth doing an additional search in PubMed to find the non-MEDLINE papers. Often these are very relevant papers that you wouldn't like to have missed. Depending on your aim, a simpler, broader search for textwords only, limited with NOT MEDLINE[sb], may suffice.**

From now on, I will always include this PubMed step in my exhaustive searches. 

2. OVID MEDLINE contains duplicate records

I use Reference Manager to deduplicate the records retrieved from all databases, and I share the final database with my client. I keep track of the number of hits in each database and of the number of duplicates to facilitate the reporting of the search procedure later on (using the PRISMA flow chart). During this procedure, I noticed that I always got FEWER records in Reference Manager when I imported records from OVID MEDLINE, but not when I imported records from the other databases. Thus it appears that OVID MEDLINE contains duplicate records.

For me it was just a fact that there were duplicate records in OVID MEDLINE. But others were surprised to hear this.

Where everyone else just wrote down the total number of hits in OVID MEDLINE, I always used the number of hits after deduplication in Reference Manager. But this is quite a detour and not easy to explain in the PRISMA flow chart.

I wondered whether this deduplication could be done in OVID MEDLINE directly. I knew you could deduplicate a multifile search, but would it also be possible to deduplicate a set from one database only? According to the OVID help there should be a button somewhere, but I couldn't find it (curious if you can).

Googling, I found another OVID manual saying:

..dedup n = Removes duplicate records from multifile search results. For example, ..dedup 5 removes duplicate records from the multifile results set numbered 5.

Although the manual only talked about "multifile searches", I tried the command (..dedup 34) on the final search set (34) in OVID MEDLINE, and voilà, 21 duplicates were found (exactly the same number as removed by Reference Manager).

The duplicates had the same PubMed ID (PMID; the .an. field in OVID) and were identical or almost identical.

Differences that I noticed were minimal changes in the MeSH (i.e. one or more MeSH  and/or subheadings changed) and changes in journal format (abbreviation used instead of full title).

Why are these duplicates present in OVID MEDLINE and not in PubMed?

These are the details of the PMID 20846254 in OVID (2 records) and in PubMed (1 record)

The Electronic Date of Publication (PHST) was September 16th, 2010. Two days later the record was included in PubMed, but MeSH terms were added months later (MHDA: 2011/02/12). Around this date records are also entered in OVID MEDLINE. The only difference between the 2 records in OVID MEDLINE is that one record appears to have been revised on 2011-10-13, whereas the other was not.

The duplicate records of 18231698 again have the same creation date (20080527) and entry date (20081203), but one was revised on 2010-09-20 and updated on 2010-12-14, while the other was revised on 2011-08-18 and updated on 2011-08-19 (thus almost one year later).

Possibly PubMed changes some records, instantaneously replacing the old ones, but OVID only includes the new PubMed records during MEDLINE-updates and doesn’t delete the old version.

Anyway, wouldn't it be a good thing if OVID deduplicated its MEDLINE records on a daily basis, or replaced the old records when loading new ones from MEDLINE?

In the meantime, I would recommend applying the deduplication command yourself to get the exact number of unique records retrieved by your search in OVID MEDLINE.

*mostly because PubMed doesn’t have an adjacency-operator.
** Of course, only if you have already an extensive OVID MEDLINE search.





Evidence Based Point of Care Summaries [2] More Uptodate with Dynamed.

18 10 2011

This post is part of a short series about Evidence Based Point of Care Summaries, or POCs. In this series I will review 3 recent papers that objectively compare a selection of POCs.

In the previous post I reviewed a paper from Rita Banzi and colleagues from the Italian Cochrane Centre [1]. They analyzed 18 POCs with respect to their “volume”, content development and editorial policy. There were large differences among POCs, especially with regard to evidence-based methodology scores, but no product appeared the best according to the criteria used.

In this post I will review another paper by Banzi et al, published in the BMJ a few weeks ago [2].

This article examined, using a prospective cohort design, the speed with which EBP point-of-care summaries were updated.

First the authors selected all the systematic reviews signaled by the American College of Physicians (ACP) Journal Club and Evidence-Based Medicine Primary Care and Internal Medicine from April to December 2009. In the same period the authors selected all the Cochrane systematic reviews labelled as “conclusion changed” in the Cochrane Library. In total 128 systematic reviews were retrieved, 68 from the literature surveillance journals (53%) and 60 (47%) from the Cochrane Library. Two months after the collection started (June 2009) the authors did a monthly screen for a year to look for potential citation of the identified 128 systematic reviews in the POCs.

Only those 5 POCs that had been ranked in the top quarter for at least 2 (out of 3) desirable dimensions were studied, namely: Clinical Evidence, Dynamed, EBM Guidelines, UpToDate and eMedicine. Surprisingly, eMedicine was among the selected POCs, having a rating of "1" on a scale of 1 to 15 for EBM methodology. One would think that evidence-based-ness is a fundamental prerequisite for EBM POCs…?!

Results were represented as a (rather odd, but clear) "survival analysis" ("death" = a citation in a summary).

Fig. 1: Updating curves for relevant evidence by POCs (from [2])

I will be brief about the results.

Dynamed clearly beat all the other products in its updating speed.

Expressed in figures, the updating speed of Dynamed was 78% and 97% greater than those of EBM Guidelines and Clinical Evidence, respectively. Dynamed had a median citation time of around two months and EBM Guidelines around 10 months, quite close to the limit of the follow-up, but the citation rates of the other three point-of-care summaries (UpToDate, eMedicine, Clinical Evidence) were so slow that they exceeded the follow-up period and the authors could not compute the median.

Dynamed outperformed the other POCs in updating of systematic reviews independently of the route. EBM Guidelines and UpToDate had similar overall updating rates, but Cochrane systematic reviews were more likely to be cited by EBM Guidelines than by UpToDate (odds ratio 0.02, P<0.001). Perhaps not surprising, as EBM Guidelines has a formal agreement with the Cochrane Collaboration to use Cochrane content and label its summaries as "Cochrane inside." On the other hand, UpToDate was faster than EBM Guidelines in updating systematic reviews signaled by literature surveillance journals.

Dynamed‘s higher updating ability was not due to a difference in identifying important new evidence, but to the speed with which this new information was incorporated in their summaries. Possibly the central updating of Dynamed by the editorial team might account for the more prompt inclusion of evidence.

As the authors rightly point out, slowness in updating could mean that new relevant information is ignored and could thus affect "the validity of point of care information services".

A slow updating rate may be considered a more serious shortcoming for POCs that "promise" to "continuously update their evidence summaries" (EBM Guidelines) or to "perform a continuous comprehensive review and to revise chapters whenever important new information is published, not according to any specific time schedule" (UpToDate). (See the table with the description of updating mechanisms.)

In contrast, eMedicine doesn't provide any detailed information on its updating policy, another reason why it doesn't belong in this list of best POCs.
Clinical Evidence, however, clearly states: "We aim to update Clinical Evidence reviews annually. In addition to this cycle, details of clinically important studies are added to the relevant reviews throughout the year using the BMJ Updates service." But BMJ Updates is not considered in the current analysis. Furthermore, patience is rewarded with excellent and complete summaries of evidence (in my opinion).

Indeed a major limitation of the current (and the previous) study by Banzi et al [1,2] is that they have looked at quantitative aspects and items that are relatively “easy to score”, like “volume” and “editorial quality”, not at the real quality of the evidence (previous post).

Although the findings were new to me, others have recently published similar results (studies were performed in the same time-span):

Shurtz and Foster [3] of the Texas A&M University Medical Sciences Library (MSL) also sought to establish a rubric for evaluating evidence-based medicine (EBM) point-of-care tools in a health sciences library.

They, too, looked at editorial quality and speed of updating plus reviewing content, search options, quality control, and grading.

Their main conclusion is that “differences between EBM tools’ options, content coverage, and usability were minimal, but that the products’ methods for locating and grading evidence varied widely in transparency and process”.

Thus this is in line with what Banzi et al reported in their first paper. They also share Banzi's conclusion about differences in the speed of updating:

“DynaMed had the most up-to-date summaries (updated on average within 19 days), while First Consult had the least up to date (updated on average within 449 days). Six tools claimed to update summaries within 6 months or less. For the 10 topics searched, however, only DynaMed met this claim.”

Table 3 from Shurtz and Foster [3] 

Ketchum et al [4] also conclude that DynaMed had the largest proportion of current (2007-2009) references (170/1131, 15%). In addition they found that DynaMed had the largest total number of references (1131/2330, 48.5%).

Yes, and you might have guessed it. The paper of Andrea Ketchum is the 3rd paper I’m going to review.

I also recommend reading the paper by the librarians Shurtz and Foster [3], which I found along the way. It has too much overlap with the Banzi papers to devote a separate post to it. Still, it provides better background information than the Banzi papers, focuses on POCs that claim to be EBM, and doesn't try to weigh one element over another.

References

  1. Banzi, R., Liberati, A., Moschetti, I., Tagliabue, L., & Moja, L. (2010). A Review of Online Evidence-based Practice Point-of-Care Information Summary Providers Journal of Medical Internet Research, 12 (3) DOI: 10.2196/jmir.1288
  2. Banzi, R., Cinquini, M., Liberati, A., Moschetti, I., Pecoraro, V., Tagliabue, L., & Moja, L. (2011). Speed of updating online evidence based point of care summaries: prospective cohort analysis BMJ, 343 (sep22 2) DOI: 10.1136/bmj.d5856
  3. Shurtz, S., & Foster, M. (2011). Developing and using a rubric for evaluating evidence-based medicine point-of-care tools Journal of the Medical Library Association : JMLA, 99 (3), 247-254 DOI: 10.3163/1536-5050.99.3.012
  4. Ketchum, A., Saleh, A., & Jeong, K. (2011). Type of Evidence Behind Point-of-Care Clinical Information Products: A Bibliometric Analysis Journal of Medical Internet Research, 13 (1) DOI: 10.2196/jmir.1539
  5. Evidence Based Point of Care Summaries [1] No “Best” Among the Bests? (laikaspoetnik.wordpress.com)
  6. How will we ever keep up with 75 Trials and 11 Systematic Reviews a Day? (laikaspoetnik.wordpress.com)
  7. UpToDate or Dynamed? (Shamsha Damani at laikaspoetnik.wordpress.com)
  8. How Evidence Based is UpToDate really? (laikaspoetnik.wordpress.com)
