No, Google Scholar Shouldn’t be Used Alone for Systematic Review Searching

9 07 2013

Several papers have addressed the usefulness of Google Scholar as a source for systematic review searching. Unfortunately the quality of those papers is often well below the mark.

Back in 2010 I already (in the words of Isla Kuhn [2]) “robustly rebutted” the Anders paper “PubMed versus Google Scholar for Retrieving Evidence” [3] on this blog [1].

But earlier this year another controversial paper was published [4]:

“Is the coverage of Google Scholar enough to be used alone for systematic reviews?”

It is one of the most highly accessed papers of BMC Medical Informatics and Decision Making and has been welcomed in (for instance) the Twittersphere.

Researchers seem to blindly accept the conclusions of the paper:

https://twitter.com/jeffvallance/status/340562086524510208

But don’t rush to assume you can now forget about PubMed, MEDLINE, Cochrane and EMBASE for your systematic review search and just do a simple Google Scholar (GS) search instead.

You might throw the baby out with the bath water…

… as has been immediately recognized by many librarians, either on their blogs (see the blogs of Dean Giustini [5], Patricia Anderson [6] and Isla Kuhn [2]) or in direct comments to the paper (by Tuulevi Ovaska, Michelle Fiander and Alison Weightman [7]).

In their paper, Jean-François Gehanno et al examined whether GS was able to retrieve all 738 original studies included in 29 Cochrane and JAMA systematic reviews.

And YES! GS had a coverage of 100%!

WOW!

All those fools at the Cochrane Collaboration who do exhaustive searches in multiple databases, using controlled vocabulary and a lot of synonyms, when a simple search in GS could have sufficed…

But it is a logical fallacy to conclude from their findings that GS alone will suffice for SR-searching.

Firstly, as Tuulevi [7] rightly points out:

“Of course GS will find what you already know exists”

Or in the words of one of the official reviewers [8]:

What the authors show is only that if one knows what studies should be identified, then one can go to GS, search for them one by one, and find out that they are indexed. But, if a researcher already knows the studies that should be included in a systematic review, why bother to also check whether those studies are indexed in GS?

Right!

Secondly, it is also the precision that counts.

As Dean explains on his blog, a 100% recall with a precision of 0.1% (and it can be worse!) means that in order to find 36 relevant papers you have to go through ~36,700 items.

Dean:

Are the authors suggesting that researchers consider a precision level of 0.1% acceptable for the SR? Who has time to sift through that amount of information?

It is like searching for needles in a haystack. Correction: it is like searching for particular hay stalks in a haystack. It is very difficult to find them when they are hidden among all the other hay stalks. Suppose the hay stalks were all labeled (title) and I had a powerful hay-stalk magnet (“title search”): it would be a piece of cake to retrieve them. This is what we call a “known item search”. But would you even consider going through the haystack and checking the stalks one by one? Because that is what we would have to do if we used Google Scholar as a one-stop search tool for systematic reviews.
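
To make the recall/precision arithmetic concrete, here is a minimal sketch in Python (the numbers are the illustrative figures from Dean’s example, not values taken from the paper):

```python
# Recall = relevant found / all relevant; precision = relevant found / items screened.
# At 100% recall, the screening burden is determined entirely by precision.
relevant_papers = 36        # relevant studies the search must find
items_retrieved = 36700     # total hits returned by the GS search
precision = relevant_papers / items_retrieved   # ~0.1%

items_to_screen = relevant_papers / precision
print(f"Precision: {precision:.2%}")
print(f"Items to sift through: {items_to_screen:,.0f}")  # 36,700
```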

Another main point of criticism is that the authors show a grave and worrisome lack of understanding of systematic review methodology [6] and don’t grasp the importance of the search interface and of knowledge of indexing, both of which are integral to searching for systematic reviews [7].

One wonders how the paper even passed peer review, as one of the two reviewers (Miguel Garcia-Perez [8]) had already smashed it to pieces:

The authors’ method is inadequate and their conclusion is not logically connected to their results. No revision (major, minor, or discretionary) will save this work. (…)

Miguel’s well-founded criticism was not adequately addressed by the authors [9]. Apparently the editors didn’t see through it and relied on the second peer reviewer [10], who merely said it was a “great job” etcetera, but that recall should not be written with a capital R
(and that was about the only revision the authors made).

Perhaps it needs another paper to convince Gehanno et al and the uncritical readers of their manuscript.

Such a paper might just have been published [11]. It is written by Dean Giustini and Maged Kamel Boulos and is entitled:

Google Scholar is not enough to be used alone for systematic reviews

It is a simple and straightforward paper, but it makes its points clearly.

Giustini and Kamel Boulos looked for a recent SR in their own area of expertise (Chou et al [12]) that included a number of references comparable to that of Gehanno et al. Next they tested GS’s ability to locate these references.

Although most papers cited by Chou et al. (n=476/506; ~95%) were ultimately found in GS, numerous iterative searches were required to find the references, and each citation had to be managed one at a time. Thus GS was not able to locate all references found by Chou et al., and the whole exercise was rather cumbersome.

As expected, trying to find the papers with a “real-life” GS search was almost impossible. Owing to its rudimentary structure, GS did not understand the expert search strings and was unable to translate them. Thus using Chou et al.’s original search strategy and keywords yielded an unmanageable result of more than 750,000 items.

Giustini and Kamel Boulos note that GS’s ability to search the full text of papers, combined with its PageRank algorithm, can be useful.

On the other hand GS’ changing content, unknown updating practices and poor reliability make it an inappropriate sole choice for systematic reviewers:

As searchers, we were often uncertain that results found one day in GS had not changed a day later and trying to replicate searches with date delimiters in GS did not help. Papers found today in GS did not mean they would be there tomorrow.

But most importantly, not all known items could be found and the search process and selection are too cumbersome.

Thus, shall we now once and for all conclude that GS is NOT sufficient to be used alone for SR searching?

We don’t need another bad paper addressing this.

But I would really welcome a well-performed paper looking at the additional value of GS in SR searching, for I am sure that GS may be valuable for some questions and some topics in some respects. We have to find out which.

References

  1. PubMed versus Google Scholar for Retrieving Evidence 2010/06 (laikaspoetnik.wordpress.com)
  2. Google scholar for systematic reviews…. hmmmm  2013/01 (ilk21.wordpress.com)
  3. Anders M.E. & Evans D.P. (2010). Comparison of PubMed and Google Scholar literature searches. Respiratory Care, May;55(5):578-83.
  4. Gehanno J.F., Rollin L. & Darmoni S. (2013). Is the coverage of Google Scholar enough to be used alone for systematic reviews? BMC Medical Informatics and Decision Making, 13:7 (open access).
  5. Is Google scholar enough for SR searching? No. 2013/01 (blogs.ubc.ca/dean)
  6. What’s Wrong With Google Scholar for “Systematic” Review 2013/01 (etechlib.wordpress.com)
  7. Comments at Gehanno’s paper (www.biomedcentral.com)
  8. Official Reviewer’s report of Gehanno’s paper [1]: Miguel Garcia-Perez, 2012/09
  9. Authors response to comments  (www.biomedcentral.com)
  10. Official Reviewer’s report of Gehanno’s paper [2]: Henrik von Wehrden, 2012/10
  11. Giustini D. & Kamel Boulos M.N. (2013). Google Scholar is not enough to be used alone for systematic reviews. Online Journal of Public Health Informatics, 5(2).
  12. Chou W.Y.S., Prestin A., Lyons C. & Wen K.Y. (2013). Web 2.0 for Health Promotion: Reviewing the Current Evidence. American Journal of Public Health, 103(1), e9-e18.




The Scatter of Medical Research and What to do About it.

18 05 2012

Paul Glasziou, GP and professor in Evidence Based Medicine, co-authored a new article in the BMJ [1]. Like another paper [2] I discussed before [3], this paper deals with how difficult it is for clinicians to stay up to date with the literature. But where the previous paper [2,3] highlighted the mere increase in the number of research articles over time, the current paper looks at the scatter of randomized controlled trials (RCTs) and systematic reviews (SRs) across the different journals cited in one year (2009) in PubMed.

Hoffmann et al analyzed 7 specialties and 9 subspecialties that are considered the leading contributors to the burden of disease in high-income countries.

They followed a relatively straightforward method for identifying the publications. Each search string consisted of a MeSH term (controlled term) to identify the selected disease or disorder, a publication type [pt] to identify the type of study, and the year of publication. For example, the search strategy for randomized trials in cardiology was: “heart diseases”[MeSH] AND randomized controlled trial[pt] AND 2009[dp]. (When searching “heart diseases” as a MeSH term, narrower terms are also searched.) Meta-analysis[pt] was used to identify systematic reviews.
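
For readers who want to reproduce such counts, here is a minimal sketch using NCBI’s E-utilities esearch endpoint (a standard public API; today’s counts will of course differ from the 2009 snapshot analyzed in the paper):

```python
import json
import urllib.parse
import urllib.request

# Count PubMed hits for a Hoffmann-style search string via E-utilities esearch.
EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_count(term: str) -> int:
    params = urllib.parse.urlencode(
        {"db": "pubmed", "term": term, "rettype": "count", "retmode": "json"})
    with urllib.request.urlopen(f"{EUTILS}?{params}") as response:
        return int(json.load(response)["esearchresult"]["count"])

# The cardiology example from the paper:
term = '"heart diseases"[MeSH] AND randomized controlled trial[pt] AND 2009[dp]'
print(pubmed_count(term))
```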

Using this approach, Hoffmann et al found 14,343 RCTs and 3,214 SRs published in 2009 in the selected (sub)specialties. There was a clear scatter across journals, but this scatter varied considerably among specialties:

“Otolaryngology had the least scatter (363 trials across 167 journals) and neurology the most (2770 trials across 896 journals). In only three subspecialties (lung cancer, chronic obstructive pulmonary disease, hearing loss) were 10 or fewer journals needed to locate 50% of trials. The scatter was less for systematic reviews: hearing loss had the least scatter (10 reviews across nine journals) and cancer the most (670 reviews across 279 journals). For some specialties and subspecialties the papers were concentrated in specialty journals; whereas for others, few of the top 10 journals were a specialty journal for that area.
Generally, little overlap occurred between the top 10 journals publishing trials and those publishing systematic reviews. The number of journals required to find all trials or reviews was highly correlated (r=0.97) with the number of papers for each specialty/ subspecialty.”

Previous work already suggested that this scatter of research has a long tail: half of the publications are concentrated in a minority of journals, whereas the remaining articles are scattered across many journals (see the figure below).

Click to enlarge and see the legend at BMJ 2012;344:e3223 [CC]

The good news is that SRs are less scattered and that general journals appear more often in the top 10 journals publishing SRs. Indeed for 6 of the 7 specialties and 4 of the 9 subspecialties, the Cochrane Database of Systematic Reviews had published the highest number of systematic reviews, publishing between 6% and 18% of all the systematic reviews published in each area in 2009. The bad news is that even keeping up to date with SRs seems a huge, if not impossible, challenge.

In other words, it is not sufficient for clinicians to rely on personal subscriptions to a few journals in their specialty (which is common practice). Hoffmann et al suggest several solutions to help clinicians cope with the increasing volume and scatter of research publications.

  • a central library of systematic reviews (but apparently the Cochrane Library fails to fulfill such a role according to the authors, because many reviews are out of date and are perceived as less clinically relevant)
  • a registry of planned and completed systematic reviews, such as PROSPERO (this makes it easier to locate SRs and reduces bias)
  • syntheses of evidence and synopses, like ACP Journal Club, which summarizes the best evidence in internal medicine
  • Specialised databases that collate and critically appraise randomized trials and systematic reviews, like www.pedro.org.au for physical therapy. In my personal experience, however, this database is often out of date and not comprehensive
  • Journal scanning services like EvidenceUpdates (from mcmaster.ca), which scans over 120 journals, filters articles on the basis of quality, has practising clinicians rate them for relevance and newsworthiness, and makes them available as email alerts and in a searchable database. I use this service too, but besides the fact that not all specialties are covered, the rating of evidence may not always be objective (see previous post [4])
  • The use of social media tools to alert clinicians to important new research.

Most of these solutions are (long) existing solutions that do not or only partly help to solve the information overload.

I was surprised that the authors didn’t propose the use of personalized alerts. PubMed’s My NCBI feature allows you to create automatic email alerts on a topic and to subscribe to electronic tables of contents (which could include ACP Journal Club). Suppose a physician browses 10 journals roughly covering 25% of the trials. He or she does not need to read all the other journals from cover to cover to avoid missing one potentially relevant trial. Instead it is far more efficient to perform a topic search to filter relevant studies from journals that seldom publish trials on the topic of interest. One could even use the search of Hoffmann et al to achieve this.* In reality, though, most clinical researchers will have narrower fields of interest than all studies about endocrinology or neurology.

At our library we are working on creating deduplicated, easy-to-read alerts that collate the tables of contents of certain journals with topic (and author) searches in PubMed, EMBASE and other databases. There are existing tools that do the same.

Another way to reduce the individual work (reading) load is to organize journal clubs or, even better, regular CATs (critically appraised topics). In the Netherlands, CATs are a compulsory item for residents. A few doctors do the work for many. Usually they choose topics that are clinically relevant (or for which the evidence is unclear).

The authors briefly mention that their search strategy might have missed some eligible papers and included some that are not truly RCTs or SRs, because they relied on PubMed’s publication types to retrieve RCTs and SRs. For systematic reviews this may be a greater problem than recognized, as the authors used meta-analysis[pt] to identify systematic reviews. Unfortunately PubMed has no publication type for systematic reviews, but it should be clear that there are many more systematic reviews than meta-analyses. Systematic reviews might even have a different scatter pattern than meta-analyses (i.e. the latter might be preferentially included in core journals).

Furthermore, not all meta-analyses and systematic reviews are reviews of RCTs (thus it is not completely fair to compare MAs with RCTs only). On the other hand, it is a (not discussed) limitation of this study that only interventions are considered. Nowadays physicians have many questions other than those related to therapy, such as questions about prognosis, harm and diagnosis.

I did a little, imperfect search just to see whether using search terms other than meta-analysis[pt] would have any influence on the outcome. I searched for (1) meta-analysis[pt] and (2) systematic review[tiab] (title and abstract) in papers about endocrine diseases. Then I subtracted 1 from 2 (to analyse the systematic reviews not indexed as meta-analysis[pt]).

Thus:

(ENDOCRINE DISEASES[MESH] AND SYSTEMATIC REVIEW[TIAB] AND 2009[DP]) NOT META-ANALYSIS[PT]

I analyzed the top 10/11 journals publishing these study types.
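
A minimal sketch of how such a journal tally can be scripted: esearch collects the PMIDs, esummary returns the journal names (both standard E-utilities calls; the 200-record cap simply keeps the request URL short):

```python
import collections
import json
import urllib.parse
import urllib.request

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def fetch_json(endpoint: str, **params) -> dict:
    query = urllib.parse.urlencode({**params, "retmode": "json"})
    with urllib.request.urlopen(f"{EUTILS}/{endpoint}?{query}") as response:
        return json.load(response)

term = ("endocrine diseases[mesh] AND systematic review[tiab] "
        "AND 2009[dp] NOT meta-analysis[pt]")
search = fetch_json("esearch.fcgi", db="pubmed", term=term, retmax=200)
pmids = search["esearchresult"]["idlist"]

# esummary returns one record per PMID, including the full journal name.
summaries = fetch_json("esummary.fcgi", db="pubmed", id=",".join(pmids))["result"]
tally = collections.Counter(summaries[pmid]["fulljournalname"] for pmid in pmids)

for journal, hits in tally.most_common(11):
    print(hits, journal)
```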

This little experiment suggests that:

  1. the precise scatter might differ per search: apparently the systematic review[tiab] search yielded a different top 10/11 of journals (for this sample) than the meta-analysis[pt] search (partially because Cochrane systematic reviews apparently don’t mention “systematic review” in title or abstract?).
  2. the authors underestimate the number of systematic reviews: simply searching for systematic review[tiab] already found approx. 50% additional systematic reviews compared to meta-analysis[pt] alone.
  3. as expected (by me at least), many of the SRs and MAs were NOT dealing with interventions; see the first 5 hits (out of 108 and 236, respectively).
  4. together these findings indicate that the true information overload is far greater than shown by Hoffmann et al (not all systematic reviews are found, and of all available study designs only RCTs are searched).
  5. on the other hand, this indirectly shows that SRs are a better way to keep up to date than suggested: SRs also summarize non-interventional research (the ratio of SRs of RCTs to individual RCTs is much lower than suggested).
  6. it also means that the role of the Cochrane systematic reviews in aggregating RCTs is underestimated by the published graphs (the MA[pt] set is diluted with non-RCT systematic reviews; thus the proportion of Cochrane SRs among the interventional MAs becomes larger).

Well, anyway, these imperfections do not contradict the main point of this paper: trials are scattered across hundreds of general and specialty journals, and “systematic reviews” (really meta-analyses) do reduce the extent of scatter, but are still widely scattered, and mostly in journals different from those publishing the randomized trials.

Indeed, personal subscriptions to journals seem insufficient for keeping up to date.
Besides supplementing subscriptions with methods such as journal scanning services, I would recommend the use of personalized alerts from PubMed and from several prefiltered sources, including an EBM search engine like TRIP (www.tripdatabase.com).

*but I would broaden it to find all aggregate evidence, including ACP, Clinical Evidence, syntheses and synopses, not only meta-analyses.

**I do appreciate that one of the co-authors is a medical librarian: Sarah Thorning.

References

  1. Hoffmann, T., Erueti, C., Thorning, S., & Glasziou, P. (2012). The scatter of research: cross sectional comparison of randomised trials and systematic reviews across specialties. BMJ, 344, e3223. DOI: 10.1136/bmj.e3223
  2. Bastian, H., Glasziou, P., & Chalmers, I. (2010). Seventy-Five Trials and Eleven Systematic Reviews a Day: How Will We Ever Keep Up? PLoS Medicine, 7 (9) DOI: 10.1371/journal.pmed.1000326
  3. How will we ever keep up with 75 trials and 11 systematic reviews a day (laikaspoetnik.wordpress.com)
  4. Experience versus Evidence [1]. Opioid Therapy for Rheumatoid Arthritis Pain. (laikaspoetnik.wordpress.com)




“Pharmacological Action” in PubMed has no True Equivalent in OVID MEDLINE

11 01 2012

Searching for EMBASE Subject Headings (the EMBASE index terms) for drugs is relatively straightforward in EMBASE.

When you want to search for aromatase inhibitors, you first look up the Subject Heading to which aromatase inhibitors maps (aromatase inhibitor). Next you explode aromatase inhibitor/ if you are interested in all its narrower terms. If not, you search both for the general term aromatase inhibitor and for those specific narrower terms you want to include.
Exploding aromatase inhibitor (exp aromatase inhibitor/) yields 15,938 results, approximately twice the 7,434 hits you get by searching aromatase inhibitor/ alone (not exploded).

It is different in MEDLINE. If you search for aromatase inhibitors in the MeSH database you get two suggestions.

The first index term, “Aromatase Inhibitors”, is a MeSH term. It has no narrower terms.
Drug MeSH terms are generally arranged not by mechanism of action but by chemical structure/type of compound. That is often confusing. Spironolactone, for instance, belongs to the MeSH terms Lactones (and Pregnenes), not to the MeSH terms Aldosterone Antagonists or Androgen Antagonists. Most clinicians want to search for a group of compounds with the same mechanism of action, not the same biochemical family.

The second term, “Aromatase Inhibitors” [Pharmacological Action], however, does stand for the mechanism of action. It does have narrower terms, including 2 MeSH terms (highlighted) and various substance names, also called Supplementary Concepts.

For complete results you have to search for both the MeSH term and the Pharmacological Action: “Aromatase Inhibitors”[Mesh] yields 3,930 records, whereas (“Aromatase Inhibitors”[Mesh]) OR “Aromatase Inhibitors”[Pharmacological Action] yields 6,045. That is a lot more.
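
You can verify the difference yourself. A minimal sketch using the E-utilities (today’s counts will be higher than the 2012 figures above, but the gap between the two queries is the point):

```python
import json
import urllib.parse
import urllib.request

def pubmed_count(term: str) -> int:
    params = urllib.parse.urlencode(
        {"db": "pubmed", "term": term, "rettype": "count", "retmode": "json"})
    url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + params
    with urllib.request.urlopen(url) as response:
        return int(json.load(response)["esearchresult"]["count"])

mesh = '"Aromatase Inhibitors"[Mesh]'
pa = '"Aromatase Inhibitors"[Pharmacological Action]'
print(pubmed_count(mesh))               # the MeSH term alone
print(pubmed_count(f"{mesh} OR {pa}"))  # MeSH OR Pharmacological Action
```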

I usually don’t search PubMed, but OVID MEDLINE.

I know that the Pharmacological Action terms are important, so I tried to find their equivalent in OVID.

I found the MeSH term Aromatase Inhibitors, but -unlike PubMed- OVID showed only two narrower drug terms (called Non-MeSH here, versus MeSH in PubMed).

I found that odd.

I reasoned that “Pharmacological Action” might perhaps be combined with the MeSH term in OVID MEDLINE. This was later confirmed by Melissa Rethlefsen (see the Twitter discussion below).

In OVID MEDLINE I got 3,937 hits with Aromatase Inhibitors/ and 5,219 with exp Aromatase Inhibitors/ (thus including aminoglutethimide/ and fadrozole/).

At this point I checked PubMed (shown above). There I found that “Aromatase Inhibitors”[Mesh] OR “Aromatase Inhibitors”[Pharmacological Action] yielded 6,045 hits in PubMed, against 5,219 in OVID MEDLINE for exp Aromatase Inhibitors/.

The specific aromatase inhibitors aminoglutethimide/ and fadrozole/ [set 60] fully accounted for the difference between the exploded [set 59] and non-exploded [set 58] Aromatase Inhibitors.

But what explained the gap of approximately 800 records between “Aromatase Inhibitors”[Mesh] OR “Aromatase Inhibitors”[Pharmacological Action] in PubMed and exp aromatase inhibitors/ in OVID MEDLINE?

Could it be the substance names, mentioned under “Aromatase Inhibitors”[Pharmacological Action], I wondered?

Thus I added all the individual substance names in OVID MEDLINE (field code .rn.). See search set 61 below.

Indeed, these fully accounted for the difference (set 62 = 59 OR 61: the total number of hits is similar to that in PubMed).

It obviously is a mistake in OVID MEDLINE, and I will inform them.

In the meantime, take care to add the individual substance names when you search for drug terms that have a Pharmacological Action equivalent in PubMed. The substance names are not automatically searched when you explode the MeSH term in OVID MEDLINE.

——–

For more info on Pharmacological action, see: http://www.nlm.nih.gov/bsd/disted/mesh/paterms.html

Twitter Discussion between me and Melissa Rethlefsen about the discrepancy between PubMed and OVID MEDLINE (again showing how helpful Twitter can be for immediate discussions and exchange of thoughts)

[read from bottom to top]





Things to Keep in Mind when Searching OVID MEDLINE instead of PubMed

25 11 2011

When I search extensively for systematic reviews I prefer OVID MEDLINE to PubMed for several reasons: it is easier to build a systematic search in OVID, the search history has a more structured format that is easy to edit, the search features are more advanced, giving you more control over the search, and translating a search to OVID EMBASE, PsycINFO and the Cochrane Library is “peanuts”, relatively speaking.

However, there are at least two things to keep in mind when searching OVID MEDLINE instead of PubMed.

1. You may miss publications, most notably recent papers.

PubMed doesn’t only provide access to MEDLINE; it also contains some other citations, including in-process citations, which provide a record for an article before it is indexed with MeSH and added to MEDLINE.

As previously mentioned, I once missed a crucial RCT that was available in PubMed, but not yet available in OVID/MEDLINE.

A few weeks ago one of my clients said that she had found 3 important papers with a simple PubMed search that were not retrieved by my exhaustive OVID MEDLINE search (Doh!).
All three were recent articles [Epub ahead of print, PubMed - as supplied by publisher]. I checked, and these articles were indeed not yet included in OVID MEDLINE.

As said, PubMed doesn’t have all the search features of OVID MEDLINE, and I felt a certain reluctance to build a completely new exhaustive search in PubMed. I would probably retrieve many irrelevant papers, which I had tried to avoid by searching OVID*. I therefore decided to roughly translate the OVID search using textwords only (the missed articles had no MeSH terms attached). It was a matter of copy-pasting the single textwords from the OVID MEDLINE search (omitting the adjacency operators) and adding the command [tiab], which means that terms are searched as textwords (in title and abstract) in PubMed (#2; only part of the long search string is shown).

To see whether all articles missed in OVID were in the non-MEDLINE set, I added the command NOT MEDLINE[sb] (#3). Of the 332 records (#2), 28 belonged to the non-MEDLINE subset. All 3 relevant articles not found in OVID MEDLINE were in this set.

In total, there were 15 unique records not present in the OVID MEDLINE and EMBASE searches. This additional search in PubMed was certainly worth the effort, as it yielded more than 3 new relevant papers. (Apparently there had recently been a boom in relevant papers on the topic.)

In conclusion, when doing an exhaustive search in OVID MEDLINE it is worth doing an additional search in PubMed to find the non-MEDLINE papers. These regularly turn out to be very relevant papers that you wouldn’t want to have missed. Depending on your aim, a simpler, broader search for textwords only, limited with NOT MEDLINE[sb], can suffice.**
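
A sketch of such a follow-up search in script form (the topic terms are hypothetical placeholders for your own textwords; NOT medline[sb] is the real PubMed subset syntax):

```python
import json
import urllib.parse
import urllib.request

# Textword-only version of an Ovid strategy (hypothetical example terms),
# restricted to records outside the MEDLINE subset, i.e. what Ovid MEDLINE lacks.
topic = "(aromatase inhibitor*[tiab] OR anastrozole[tiab]) AND breast cancer[tiab]"
query = f"({topic}) NOT medline[sb]"

params = urllib.parse.urlencode(
    {"db": "pubmed", "term": query, "retmax": 100, "retmode": "json"})
url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + params
with urllib.request.urlopen(url) as response:
    result = json.load(response)["esearchresult"]

print(result["count"], "non-MEDLINE records; first PMIDs:", result["idlist"][:5])
```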

From now on, I will always include this PubMed step in my exhaustive searches. 

2. OVID MEDLINE contains duplicate records

I use Reference Manager to deduplicate the records retrieved from all databases, and I share the final database with my client. I keep track of the number of hits in each database and of the number of duplicates to facilitate the reporting of the search procedure later on (using the PRISMA flow diagram, see above). During this procedure I noticed that I always got FEWER records in Reference Manager when I imported records from OVID MEDLINE, but not when I imported records from the other databases. Thus it appears that OVID MEDLINE contains duplicate records.

For me it was just a fact that there were duplicate records in OVID MEDLINE. But others were surprised to hear this.

Where everyone else just wrote down the total number of hits in OVID MEDLINE, I always used the number of hits after deduplication in Reference Manager. But this is quite a detour and not easy to explain in the PRISMA flow diagram.

I wondered whether this deduplication could be done in OVID MEDLINE directly. I knew you could deduplicate a multifile search, but would it also be possible to deduplicate a set from one database only? According to the OVID help there should be a button somewhere, but I couldn’t find it (curious if you can).

Googling, I found another OVID manual saying:

..dedup n = Removes duplicate records from multifile search results. For example, ..dedup 5 removes duplicate records from the multifile results set numbered 5.

Although the manual only talked about “multifile searches”, I tried the command (..dedup 34) on the final search set (34) in OVID MEDLINE, and voilà, 21 duplicates were found (exactly the same number as removed by Reference Manager).

The duplicates had the same PubMed ID (PMID; the .an. field in OVID) and were identical or almost identical.

The differences I noticed were minimal: changes in the MeSH terms (i.e. one or more MeSH terms and/or subheadings changed) and changes in the journal format (abbreviation used instead of full title).
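
If you export the records, the same PMID-based deduplication takes only a few lines of script. A minimal sketch, assuming the records have already been parsed into dicts with hypothetical "pmid" and "revised" fields:

```python
from datetime import date

# Keep only the most recently revised record per PMID (hypothetical parsed records).
records = [
    {"pmid": "20846254", "revised": date(2011, 10, 13), "title": "..."},
    {"pmid": "20846254", "revised": date(2011, 2, 12), "title": "..."},
]

latest: dict[str, dict] = {}
for record in records:
    kept = latest.get(record["pmid"])
    if kept is None or record["revised"] > kept["revised"]:
        latest[record["pmid"]] = record

unique_records = list(latest.values())
print(f"{len(records) - len(unique_records)} duplicate(s) removed")
```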

Why are these duplicates present in OVID MEDLINE and not in PubMed?

These are the details of the PMID 20846254 in OVID (2 records) and in PubMed (1 record)

The Electronic Date of Publication (PHST) was September 16th, 2010. Two days later the record was included in PubMed, but MeSH terms were added 3 months later (MHDA: 2011/02/12). Around this date records are also entered in OVID MEDLINE. The only difference between the 2 records in OVID MEDLINE is that one record appears to have been revised on 2011-10-13, whereas the other has not.

The duplicate records of 18231698 again have the same creation date (20080527) and entry date (20081203), but one was revised on 2010-09-20 and updated on 2010-12-14, while the other was revised on 2011-08-18 and updated on 2011-08-19 (thus almost one year later).

Possibly PubMed changes some records, instantaneously replacing the old ones, whereas OVID only picks up the new PubMed records during MEDLINE updates and doesn’t delete the old versions.

Anyway, wouldn’t it be a good thing if OVID deduplicated its MEDLINE records on a daily basis, or replaced the old records when loading new ones from MEDLINE?

In the meantime, I would recommend applying the deduplication command yourself to get the exact number of unique records retrieved by your search in OVID MEDLINE.

*mostly because PubMed doesn’t have an adjacency-operator.
** Of course, only if you already have an extensive OVID MEDLINE search.





FUTON Bias. Or Why Limiting to Free Full Text Might not Always be a Good Idea.

8 09 2011

A few weeks ago I was discussing possible relevant papers for the Twitter Journal Club (hashtag #TwitJC), a successful initiative on Twitter that I have discussed previously here and here [7,8].

I proposed an article that appeared behind a paywall. Annemarie Cunningham (@amcunningham) immediately shot the idea down, stressing that open access (OA) is a prerequisite for the TwitJC journal club.

One of the TwitJC organizers, Fi Douglas (@fidouglas on Twitter), argued that using paid-for journals would defeat the objective that #TwitJC is open to everyone. I can imagine that fee-based articles could set too high a threshold for many doctors. In addition, I sympathize with promoting OA.

However, I disagree with Annemarie that an OA (or rather free) paper is a prerequisite if you really want to talk about what might impact practice. On the contrary, limiting to free full text (FFT) papers in PubMed might lead to bias: picking the “low-hanging fruit of convenience” might mean that the paper isn’t representative and/or doesn’t reflect the current best evidence.

But is there evidence for my theory that selecting FFT papers might lead to bias?

Let’s first look at the extent of the problem. What percentage of papers do we miss by limiting to free-access papers?

A survey in PLoS ONE by Björk et al [1] found that one in five peer-reviewed research papers published in 2008 were freely available on the internet. Overall, 8.5% of the articles published in 2008 (and 13.9% in medicine) were freely available at the publishers’ sites (gold OA). For an additional 11.9%, free manuscript versions could be found via the green route, i.e. copies in repositories and on websites (7.8% in medicine).
As a commenter rightly stated, the lag time is also important, as we would like immediate access to recently published research, yet some publishers (37%) impose an access embargo of 6-12 months or more. (These papers were largely missed, as the 2008 OA status was assessed late in 2009.)

PLOS 2009

The strength of the paper is that it measures OA prevalence on an article basis, rather than by calculating the share of journals that are OA: an OA journal generally contains a lower number of articles.
The authors randomly sampled from 1.2 million articles using the advanced search facility of Scopus. They measured what share of OA copies the average researcher would find using Google.

Another paper, published in the J Med Libr Assoc (2009) [2] and using methods similar to those of the PLoS survey, examined the state of open access (OA) specifically in the biomedical field. Because of its broad coverage and popularity in the biomedical field, PubMed was chosen to collect the target sample of 4,667 articles. Matsubayashi et al used four different databases and search engines to identify full-text copies. The authors reported an OA percentage of 26.3 for peer-reviewed articles (70% of all articles), which is comparable to the results of Björk et al. More than 70% of the OA articles were provided through journal websites. The percentages of green OA articles on the websites of authors or in institutional repositories were quite low (5.9% and 4.8%, respectively).

In their discussion of the findings of Matsubayashi et al, Björk et al [1] quickly assessed the OA status in PubMed by using the then-new “link to Free Full Text” search facility. First they searched for all “journal articles” published in 2005, then repeated this with the further restriction “link to FFT”. The PubMed OA percentages obtained this way were 23.1 for 2005 and 23.3 for 2008.
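
The same quick assessment can be run against today’s PubMed. A minimal sketch using the free full text[sb] subset filter (the current equivalent of the “link to FFT” facility):

```python
import json
import urllib.parse
import urllib.request

def pubmed_count(term: str) -> int:
    params = urllib.parse.urlencode(
        {"db": "pubmed", "term": term, "rettype": "count", "retmode": "json"})
    url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + params
    with urllib.request.urlopen(url) as response:
        return int(json.load(response)["esearchresult"]["count"])

base = "journal article[pt] AND 2008[dp]"
share = pubmed_count(f"{base} AND free full text[sb]") / pubmed_count(base)
print(f"Free full text share for 2008: {share:.1%}")
```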

This proportion of biomedical OA papers is gradually increasing. A chart in Nature’s News Blog [9] shows that the proportion of freely available papers indexed in PubMed each year has increased from 23% in 2005 to above 28% in 2009.
(Methods are not shown, though. The 2008 figure is higher than that of Björk et al, who noticed little difference from 2005. The data for this chart, however, are from David Lipman, NCBI director and driving force behind the digital OA archive PubMed Central.)
Again, because of the embargo periods, not all literature is immediately available at the time it is published.

In summary, we would miss about 70% of biomedical papers by limiting to FFT papers. Among recently published papers we would miss an even larger proportion, because of the embargo periods.

Of course, the key question is whether ignoring relevant studies not available in full text really matters.

Reinhard Wentz of the Imperial College Library and Information Service already argued in a visionary 2002 Lancet letter [3] that the availability of full-text articles on the internet might have created a new form of bias: FUTON bias (Full Text On the Net bias).

Wentz reasoned that FUTON bias will not affect researchers who are used to comprehensive searches of published medical studies, but that it will affect staff and students with limited experience in doing searches and that it might have the same effect in daily clinical practice as publication bias or language bias when doing systematic reviews of published studies.

Wentz also hypothesized that FUTON bias (together with “no abstract available” (NAA) bias) will affect the visibility and the impact factor of OA journals. He makes a reasonable case that NAA bias will affect publications on new, peripheral, and under-discussion subjects more than established topics covered in substantive reports.

The study of Murali et al [4], published in Mayo Clinic Proceedings in 2004, confirms that the availability of journals on MEDLINE as FUTON or NAA affects their impact factor.

Of the 324 journals screened by Murali et al, 38.3% were FUTON, 19.1% NAA and 42.6% had abstracts only. The mean impact factors were 3.24 (±0.32), 1.64 (±0.30), and 0.14 (±0.45), respectively! The authors confirmed this finding by showing a difference in impact factors for journals available in both the pre- and post-internet era (n=159).

Murali et al informally questioned many physicians and residents at multiple national and international meetings in 2003. These doctors uniformly admitted relying on FUTON articles on the web to answer a sizable proportion of their questions. A study by Carney et al (2004) [5] showed that 98% of US primary care physicians used the internet as a resource for clinical information at least once a week, and mostly used FUTON articles to aid decisions about patient care, patient education, and medical student or resident instruction.

Murali et al therefore conclude that failure to consider FUTON bias may not only affect a journal’s impact factor, but could also limit consideration of the medical literature by ignoring relevant for-fee articles, thereby influencing medical education akin to publication or language bias.

This proposed effect of the FFT limit on citation retrieval for clinical questions was examined in a more recent study (2008), published in the J Med Libr Assoc [6].

Across all 4 questions based on a research agenda for physical therapy, the FFT limit reduced the number of citations to 11.1% of the total number of citations retrieved without the FFT limit in PubMed.

Even more importantly, high-quality evidence such as systematic reviews and randomized controlled trials was missed when the FFT limit was used.

For example, when searching without the FFT limit, 10 systematic reviews of RCTs were retrieved, against one when the FFT limit was used. Likewise, 28 RCTs were retrieved without the FFT limit, and only one with it.

The proportion of missed studies (approx. 90%) is higher than in the studies mentioned above. Possibly this is because real searches were tested, and only relevant clinical studies were considered.

The authors rightly conclude that consistently missing high-quality evidence when searching clinical questions is problematic because it undermines the process of Evidence Based Practice. Krieger et al finally conclude:

“Librarians can educate health care consumers, scientists, and clinicians about the effects that the FFT limit may have on their information retrieval and the ways it ultimately may affect their health care and clinical decision making.”

It is the hope of this librarian that she has done a little educating in this respect and has clarified the point that limiting to free full text might not always be a good idea, especially if the aim is to critically appraise a topic, to educate, or to discuss current best medical practice.

References

  1. Björk, B., Welling, P., Laakso, M., Majlender, P., Hedlund, T., & Guðnason, G. (2010). Open Access to the Scientific Journal Literature: Situation 2009 PLoS ONE, 5 (6) DOI: 10.1371/journal.pone.0011273
  2. Matsubayashi, M., Kurata, K., Sakai, Y., Morioka, T., Kato, S., Mine, S., & Ueda, S. (2009). Status of open access in the biomedical field in 2005 Journal of the Medical Library Association : JMLA, 97 (1), 4-11 DOI: 10.3163/1536-5050.97.1.002
  3. Wentz, R. (2002). Visibility of research: FUTON bias. The Lancet, 360 (9341), 1256. DOI: 10.1016/S0140-6736(02)11264-5
  4. Murali NS, Murali HR, Auethavekiat P, Erwin PJ, Mandrekar JN, Manek NJ, & Ghosh AK (2004). Impact of FUTON and NAA bias on visibility of research. Mayo Clinic proceedings. Mayo Clinic, 79 (8), 1001-6 PMID: 15301326
  5. Carney PA, Poor DA, Schifferdecker KE, Gephart DS, Brooks WB, & Nierenberg DW (2004). Computer use among community-based primary care physician preceptors. Academic medicine : journal of the Association of American Medical Colleges, 79 (6), 580-90 PMID: 15165980
  6. Krieger, M., Richter, R., & Austin, T. (2008). An exploratory analysis of PubMed’s free full-text limit on citation retrieval for clinical questions Journal of the Medical Library Association : JMLA, 96 (4), 351-355 DOI: 10.3163/1536-5050.96.4.010
  7. The #TwitJC Twitter Journal Club, a new Initiative on Twitter. Some Initial Thoughts. (laikaspoetnik.wordpress.com)
  8. The Second #TwitJC Twitter Journal Club (laikaspoetnik.wordpress.com)
  9. How many research papers are freely available? (blogs.nature.com)




PubMed’s Higher Sensitivity than OVID MEDLINE… & other Published Clichés.

21 08 2011

Is it just me, or are biomedical papers about searching for a systematic review often of low quality or just too damn obvious? I’m seldom excited about papers dealing with optimal search strategies or peculiarities of PubMed, even though it is my specialty.
It is my impression that many of the lower-quality and/or less relevant papers are written by clinicians/researchers instead of information specialists (or at least have no medical librarian as the first author).

I can’t help thinking that many of those authors just happen to see an odd feature in PubMed or encounter an unexpected phenomenon in the process of searching for a systematic review.
They think: “Hey, that’s interesting” or “That’s odd. Let’s write a paper about it.” An easy way to boost their scientific output!
What they don’t realize is that the published findings are often common knowledge to experienced MEDLINE searchers.

Let me give two recent examples of what I think are redundant papers.

The first example is a letter under the heading “Clinical Observation” in Annals of Internal Medicine, entitled:

“Limitations of the MEDLINE Database in Constructing Meta-analyses”.[1]

As the authors rightly state, “a thorough literature search is of utmost importance in constructing a meta-analysis”. Since the PubMed interface from the National Library of Medicine is a cornerstone of many meta-analyses, the authors (two MDs) focused on the freely available PubMed (with MEDLINE as its largest part).

The objective was:

“To assess the accuracy of MEDLINE’s “human” and “clinical trial” search limits, which are used by authors to focus literature searches on relevant articles.” (emphasis mine)

O.k…. Stop! I know enough. This paper should have been titled: “Limitations of Limits in MEDLINE”.

Limits are NOT DONE when searching for a systematic review, for the simple reason that most limits (except language and dates) rely on indexing: MeSH terms or publication types.
It takes a while before the indexers have assigned these to the papers, and not all papers are correctly (or consistently) indexed. Thus, by using limits you will automatically miss recent, not-yet-indexed, or incorrectly indexed papers, whereas your goal is (or should be) to find as many relevant papers as possible for your systematic review. And wouldn’t it be sad if you missed that one important RCT that was published just the other day?

On the other hand, one doesn’t want to drown in irrelevant papers. How can one reduce “noise” while minimizing the risk of losing relevant papers?

  1. Use both MeSH terms and textwords to “limit” your search, i.e. also search for trial as a textword, in title and abstract: trial[tiab].
  2. Use more synonyms and truncation (random*[tiab] OR placebo[tiab]).
  3. Don’t actively limit, but use double negation. Thus, to get rid of animal studies, don’t limit to humans (this is the same as combining with the MeSH term [mh]), but safely exclude animals as follows: NOT (animals[mh] NOT humans[mh]) (= exclude papers indexed with “animals”, except when these papers are also indexed with “humans”).
  4. Use existing methodological filters (ready-made search strategies) designed to help focus on study types. These filters are based on one or more of the above-mentioned principles (see earlier posts here and here).
    Simple methodological filters can be found at the PubMed Clinical Queries. For instance, the narrow filter for Therapy not only searches for the publication type “Randomized Controlled Trial” (a limit), but also for randomized, controlled and trial as textwords.
    Usually broader (more sensitive) filters are used for systematic reviews. The Cochrane Handbook proposes the following filter, maximizing sensitivity and precision, to identify randomized trials in PubMed (see http://www.cochrane-handbook.org/; applied in the sketch after this list):
    (randomized controlled trial [pt] OR controlled clinical trial [pt] OR randomized [tiab] OR placebo [tiab] OR clinical trials as topic [mesh: noexp] OR randomly [tiab] OR trial [ti]) NOT (animals [mh] NOT humans [mh]).
    When few hits are obtained, one can either use a broader filter or no filter at all.
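
To illustrate how such a filter is combined with a topic search in practice, here is a minimal sketch (the topic line is a hypothetical example; the filter string is the Cochrane filter quoted above):

```python
import json
import urllib.parse
import urllib.request

COCHRANE_RCT_FILTER = (
    "(randomized controlled trial[pt] OR controlled clinical trial[pt] "
    "OR randomized[tiab] OR placebo[tiab] OR clinical trials as topic[mesh:noexp] "
    "OR randomly[tiab] OR trial[ti]) NOT (animals[mh] NOT humans[mh])")

topic = "arthritis, rheumatoid[mesh] OR rheumatoid arthritis[tiab]"  # hypothetical topic
query = f"({topic}) AND {COCHRANE_RCT_FILTER}"

params = urllib.parse.urlencode(
    {"db": "pubmed", "term": query, "rettype": "count", "retmode": "json"})
url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + params
with urllib.request.urlopen(url) as response:
    print(json.load(response)["esearchresult"]["count"], "candidate RCTs")
```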

In other words, it is a beginner’s mistake to use limits when searching for a systematic review.
Besides publishing what should be common knowledge (even our medical students learn it), the authors make many other (little) mistakes; their precise search is difficult to reproduce and far from complete. This has already been addressed by Dutch colleagues in a comment [2].

The second paper is:

PubMed had a higher sensitivity than Ovid-MEDLINE in the search for systematic reviews [3], by Katchamart et al.

Again this paper focuses on the usefulness of PubMed for identifying RCTs for a systematic review, but it concentrates on the differences between PubMed and OVID in this respect. The paper starts by explaining that PubMed:

provides access to bibliographic information in addition to MEDLINE, such as in-process citations (..), some OLDMEDLINE citations (….) citations that precede the date that a journal was selected for MEDLINE indexing, and some additional life science journals that submit full texts to PubMed Central and receive a qualitative review by NLM.

Given these “facts”, am I exaggerating when I say that the authors are pushing at an open door with their main conclusion that PubMed retrieved more citations overall than Ovid MEDLINE? The one (!) relevant article missed in OVID was a 2005 study published in a Japanese journal that MEDLINE only started indexing in 2007. It was therefore in PubMed, but not in OVID MEDLINE.

This is an important aspect to keep in mind when searching OVID MEDLINE (I have discussed it earlier, here and here). But is it worth a paper?

Recently, after finishing an exhaustive search in OVID MEDLINE, we noticed that we had missed an RCT in PubMed that was not yet available in OVID MEDLINE. I just added one sentence to the search methods:

Additionally, PubMed was searched for randomized controlled trials ahead of print, not yet included in OVID MEDLINE. 

Of course, I could have devoted a separate article to this finding. But it is so self-evident that I don’t think it would be worth it.

The authors have expressed their findings in terms of sensitivity (85% for Ovid MEDLINE vs. 90% for PubMed; the 5% difference being that ONE missed paper), precision, and number needed to read (comparable for Ovid MEDLINE and PubMed).

If I may venture another opinion: it looks like editors of medical and epidemiology journals quickly fall for “diagnostic parameters” on a topic that they don’t understand very well: library science.

The sensitivity/precision data found have little general value, because:

  • it concerns a single search on a single topic;
  • there are few relevant papers (17-18);
  • useful features of OVID MEDLINE that are not available in PubMed are not used, e.g. adjacency searching (words searched within a specified maximal distance of each other) could enhance the retrieval of relevant papers in OVID MEDLINE;
  • the searches are not comparable, nor are the search field commands.

The latter is very important if one doesn’t wish to compare apples and oranges.

Let’s take a look at the first part of the search (which is in itself well structured and covers many synonyms).
First part of the search - Click to enlarge
This part of the search deals with the P: patients with rheumatoid arthritis (RA). The authors first search for relevant MeSH terms (sets 1-5) and then for a few textwords. The MeSH terms are fine. The authors have chosen to use Arthritis, Rheumatoid and a few narrower terms (MeSH tree shown at the right). The authors have taken care to use the [mesh:noexp] command in PubMed to prevent the automatic explosion of narrower terms (although this is superfluous for MeSH terms that have no narrower terms, like Caplan Syndrome etc.).

But the fields chosen for the free text search (sets 6-9) are not comparable at all.

In OVID the .mp. field is used, whereas all fields or even no field tags are used in PubMed.

I am not even fond of the uncontrolled use of .mp. (I would rather search in title and abstract; remember, we already have the proper MeSH terms), but all fields is even broader than .mp.

In general, a .mp. search looks in the Title, Original Title, Abstract, Subject Heading, Name of Substance, and Registry Word fields. All fields would be .af. in OVID, not .mp.

Searching for rheumatism in OVID using the .mp. field yields 7,879 hits, against 31,390 hits when one searches in the .af. field.

Thus 4 times as many. Extra fields searched include, for instance, the journal and address fields. One finds all articles in the journal Arthritis & Rheumatism, for instance [line 6], or papers co-authored by someone at a department of rheumatoid surgery [line 9].

Worse, in PubMed the [All Fields] command doesn’t prevent automatic term mapping.

In PubMed, Rheumatism[All Fields] is translated as follows:

“rheumatic diseases”[MeSH Terms] OR (“rheumatic”[All Fields] AND “diseases”[All Fields]) OR “rheumatic diseases”[All Fields] OR “rheumatism”[All Fields]

Oops: Rheumatism[All Fields] is searched as the (exploded!) MeSH term Rheumatic Diseases, i.e. rheumatic diseases (not included in the authors’ MeSH search) plus all its narrower terms! This makes the entire first part of the PubMed search obsolete (where the authors searched for non-exploded specific terms). It explains the large difference in hits for rheumatism between PubMed and OVID MEDLINE: 11,910 vs 6,945.
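
You can inspect what PubMed actually does with a term yourself: esearch returns its query translation alongside the count. A minimal sketch (today’s translation may differ from the 2011 expansion quoted above):

```python
import json
import urllib.parse
import urllib.request

params = urllib.parse.urlencode(
    {"db": "pubmed", "term": "Rheumatism[All Fields]", "retmode": "json"})
url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + params
with urllib.request.urlopen(url) as response:
    result = json.load(response)["esearchresult"]

print(result["count"])
print(result.get("querytranslation"))  # how PubMed expanded the term
```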

Not only do the authors use these .mp. and [All Fields] commands instead of the preferred [tiab] field, they also apply this broader field to the existing (optimized) Cochrane filter, which uses [tiab]. Finally, they use limits!

Well anyway, I hope I have made my point: useful comparisons between strategies can only be made if optimal and comparable strategies are used. Sensitivity doesn’t mean anything here.

Coming back to my original point: I do think that some conclusions of these papers are “good to know”. As a matter of fact, this should be basic knowledge for anyone planning an exhaustive search for a systematic review. We do not need bad studies to show this.

Perhaps an expert paper (or a series) on this topic, understandable for clinicians, would be of more value.

Or the recognition that such search papers should be designed and written by librarians with ample experience in searching for systematic reviews.

NOTE:
* = truncation = search for different word endings; [tiab] = title and abstract; [ti] = title; [mh] = MeSH term; [pt] = publication type

Photo credit

The image is taken from the Dragonfly blog; there the Flickr image Brain Vocab Sketch by labguest was adapted by adding the PubMed logo.

References

  1. Winchester DE, & Bavry AA (2010). Limitations of the MEDLINE database in constructing meta-analyses. Annals of internal medicine, 153 (5), 347-8 PMID: 20820050
  2. Leclercq E, Kramer B, & Schats W (2011). Limitations of the MEDLINE database in constructing meta-analyses. Annals of internal medicine, 154 (5) PMID: 21357916
  3. Katchamart W, Faulkner A, Feldman B, Tomlinson G, & Bombardier C (2011). PubMed had a higher sensitivity than Ovid-MEDLINE in the search for systematic reviews. Journal of clinical epidemiology, 64 (7), 805-7 PMID: 20926257
  4. Search OVID EMBASE and Get MEDLINE for Free…. without knowing it (laikaspoetnik.wordpress.com 2010/10/19/)
  5. 10 + 1 PubMed Tips for Residents (and their Instructors) (laikaspoetnik.wordpress.com 2009/06/30)
  6. Adding Methodological filters to myncbi (laikaspoetnik.wordpress.com 2009/11/26/)
  7. Search filters 1. An Introduction (laikaspoetnik.wordpress.com 2009/01/22/)




3rd Call for Submissions for “Medical Information Matters”: Tools for Searching the Biomedical Literature

8 05 2011

It takes some doing to breathe life into “Medical Information Matters” (the blog carnival about medical information).
A month ago I wrote a 2nd call-for-submissions post for this blog carnival. Unfortunately the next host, Martin Fenner, didn’t have time to finish a blog post and has come up with a new (interesting) variation on the theme “A wish list for better medical information”.

Martin asks you to philosophize, blog and/or comment about “Tools for Searching the Biomedical Literature”.

You can base your contribution on a recent (editable) survey of 28 different PubMed derivative tools by Zhiyong Lu (NCBI) [1].

Thus, write your thoughts on the various PubMed derivative tools mentioned there, or write about your own favorite 3rd-party PubMed tool (included or not).

For details, see Martin’s blog post announcing this upcoming edition. The Blog Carnival FAQs are here.

And if you don’t have time to write about this topic, you may still find the survey useful, as well as the views of others. So check out Martin’s blog Gobbledygook once in a while to see when the new edition has been posted.

Note [1]: If you have already submitted a post to the carnival, or would like to write about another theme, we will make sure that your post (if relevant) is included in this or the next edition. You can always submit here.

Note [2]: Would you like to host “Medical Information Matters” at your blog? Please comment here or write to: laika dot spoetnik at gmail dot com. We need hosts for June, July, August and September (submission deadline first Saturday of every month, posting on the next Tuesday)

  1. Lu Z. PubMed and beyond: a survey of web tools for searching biomedical literature. Database. 2011 Jan;2011:baq036. DOI: 10.1093/database/baq036




PubMed’s Shutdown Averted… For Now.

12 04 2011

MEDLINE is the National Library of Medicine‘s (NLM) premier bibliographic database of citations from biomedical journals. The content of MEDLINE is available via commercial, fee-for-service MEDLINE vendors, like OVID.

On June 26, 1997, Vice President Al Gore officially announced free MEDLINE access via PubMed. This was one of the consequences of the Freedom of Information Act (FOIA), a federal law that allows for the full or partial disclosure of previously unreleased information and documents controlled by the United States Government (http://www.nih.gov/icd/od/foia/index.htm). The National Library of Medicine (which is “just” one of the NIH web servers) gives access to many other databases besides PubMed/MEDLINE: MeSH, UMLS, ClinicalTrials.gov, MedlinePlus, TOXNET.

I may complain about PubMed once in a while, and I may criticize some of its new features, but I cannot imagine a working life without it. Probably this is even more true for biomedical scientists and physicians who only have access to the freely available PubMed, and not to OVID MEDLINE, EMBASE and Web of Science, as I do. PubMed and many other NLM databases have become an indispensable source of medical information.

We are so used to these free sources that we take them for granted. Who would imagine that PubMed -or any other great free NLM/NIH database- would cease to exist? Still, a shutdown of these databases was imminent last weekend. Remarkably, it largely went unnoticed, especially by people outside the U.S.

Did you know that there was a great chance of PubMed being killed last weekend?

I happened to get the news via my Twitter stream. I joined in around Friday midnight -Dutch time, 3-4 days ago.

Here are some selected tweets. Have a look. See and feel the panic:

As somebody far from the epicenter, I find it hard to unravel the logic (?) behind the shutdown threat.

I understand that the near-shutdown was the result of the disagreement between the Democrats and the Republicans on how to cut federal costs. By refusing to pass a bill allowing the federal government to be funded, the Republican-dominated House of Representatives was forcing a showdown with the White House and Barack Obama. The arrows of the Republicans were mainly directed at Planned Parenthood, the health organisation that Republicans portray as primarily focused on performing abortions, using American taxpayer dollars to do it. However, Planned Parenthood provides an array of services, from screenings for cancer to testing for sexually transmitted diseases (see the Huffington Post).

Well the tweet of Sarah Palin illustrates the view of the Tea Party (in typical Palin style).

For now, the threat has been averted. The Republicans forced the Democrats to agree to $39bn (£23bn) in spending cuts in this year’s budget to September, $6bn more than the Democrats were prepared to accept earlier in the week. In return, the Republicans dropped their demand to cut funding for Planned Parenthood (Guardian). But no one knows whether the threat has been averted definitively.

This post isn’t meant to dive deep into the US political debate. It is just meant to reflect on the possibility that one of the federal databases on which we rely could be wiped away overnight, seriously affecting our usual workflows.

Some consequences if PubMed (and MEDLINE?) were to disappear:

  • Many Doctors can no longer search efficiently for medical information (only brows medical journals,  “Google” or look up outdated info).
  • The same is true for many scientists. Look at FlutesUD remarks about the references for her thesis.
  • The disappearance of Pubmed would especially affect rural areas and third world countries.
  • EBM would become difficult to practice:
    • The comprehensive search of PubMed, obligatory for systematic reviews, would have to be skipped.
    • It would become almost impossible to do a critically appraised topic (interns are often used to searching PubMed and/or only have access to PubMed).
    • CENTRAL (the largest database of controlled trials) can no longer retrieve its records from PubMed.
  • Librarians can delete many tutorials, e-learning materials and (even) classes.
  • Perhaps many librarians can even say goodbye to their jobs?
  • MyNCBI saved searches and alerts are gone.
  • MyNCBI saved papers (collections) are no more.
  • Third-party PubMed tools (Novoseek, GoPubMed, HubMed) would also cease to exist.
  • Commercially available MEDLINE sources will be affected as well.
  • By the way, ClinicalTrials.gov, TOXNET etc. would also stop. Another hit for librarians, doctors and patients.

For many, the disappearance of PubMed is a relatively “minor” event compared to the shutdown of other services like NASA or healthcare institutions. The near-disappearance of PubMed made me realize how fragile this excellent service is, on which we -librarians, physicians, medical students and scientists- rely. On the other hand, it also made me realize how thankful we should be that such a database is available to us for free (yes, even for people outside the US).

Note: (Per 2011-04-14)

I have changed the title from “PubMed’s Sudden Death averted” to “PubMed’s Shutdown averted”, because Death is permanent and it was unknown if the shutdown, if any, would be permanent.

I have also changed some words in the text (blue), thus changed disappearance to “shutdown” for the same reasons as mentioned above.

On the other hand I’ve added some tweets which clearly indicate that the shutdown was not “nothing to worry about”.

The tweets mentioned are not from official sources. And that is partly what this post is about: (1) the panic that results when reliable information is lacking. The other main points are (2) the importance of PubMed for biomedical information, and (3) that PubMed’s permanent (free) existence is not a given.

Nikki D at Eagledawg describes the event (lack of info and panic) very clearly in her post: Pubmed. Keep Calm and Carry On?






Search OVID EMBASE and Get MEDLINE for Free…. without knowing it

19 10 2010

I have the impression that OVIDSP listens more to librarians than the NLM does; the NLM considers the end users of databases like PubMed more important, mainly because there are more of them. On the other hand, the NLM communicates PubMed’s changes better (NLM Technical Bulletin) and has easier-to-find tutorials & FAQs, namely at the PubMed homepage.

I gather that the new changes to the OVIDSP interface are the reason why two older OVID posts are the recent number 2 and 3 hits on my blog. My guess is that people are looking for some specific information on OVID’s interface changes that they can’t easily access otherwise.

But this post won’t address the technical changes. I will write about this later.

I just want to mention a few changes to the OVIDSP databases MEDLINE and EMBASE, some of them temporary, that could have been easily missed.

[1] First, somewhere in August, OVID MEDLINE contained only indexed PubMed articles. I know that OVID MEDLINE misses some papers PubMed already has (namely the “as supplied by publisher” subset), but this time the difference was dramatic: “in data review” and “in process” papers weren’t found either. I almost panicked, because if I missed that much in OVID MEDLINE, I would have to search PubMed as well and adapt the search strategy… and, since I had already lost hours because of OVID’s extreme slowness at that time, I wasn’t looking forward to this.

According to an OVID representative this change was not new, but had already been in place for (many) months. Had I been blind? I checked the printed search results of a search I performed in June. It was clear that the newer update found fewer records, meaning that some records were missed in the current (August) update. Furthermore, the old Reference Manager database contained non-indexed records. So I hadn’t been blind after all.

But to make a long story short: don’t worry, this change disappeared as quickly as it came.
I would have doubted my own eyes if my colleague hadn’t seen it too.

If you have done a MEDLINE OVID search in the second half of August, you might like to check the results.

[2] Simultaneously there was another change. A change that is still there.

Did you know that OVID EMBASE contains MEDLINE records as well? I knew that you could search EMBASE.com for MEDLINE and EMBASE records using the “highly praised EMTREE“, but not that OVID EMBASE recently added these records too.

They are automatically found by text-word searches, and by EMTREE searches as well, because EMTREE already includes all of MeSH.

Should I be happy that I get these records for free?

No, I am not.

I always start with a MEDLINE search, which is optimized for MEDLINE (with regard to the MeSH).

Since indexing by EMTREE is deep, I usually have (much) more noise (irrelevant hits) in EMBASE.

I do not want to have an extra number of MEDLINE-records in an uncontrolled way.

I can imagine though, that it would be worthwhile in case of a quick search in EMBASE alone: that could save time.
In my case, doing extensive searches for systematic reviews, I want to be in control. I also want to show the number of articles from MEDLINE and the number of extra hits from EMBASE.

(Later I realized that a figure shown by the OVID representative wasn’t fair: they showed, in Venn diagrams, the hits obtained when searching EMBASE, MEDLINE and other databases. MEDLINE offered little extra beyond EMBASE, which is self-evident, considering that EMBASE now includes almost all MEDLINE records.)

Including these MEDLINE records is no problem if that is what you want, and it is easy to exclude them if not.

You can limit to MEDLINE or to EMBASE records.

Suppose your last search set is 26.

Click Limits > Additional Limits > EMBASE (or MEDLINE)

Alternatively type: limit 26 to embase (resp. limit 26 to medline). Added together, the two subsets make up 100% of the original set.
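
Schematically, the end of such an Ovid session might then look like this (a sketch with an arbitrary set number; sets 1-25 stand for whatever subject search you built yourself):

26. (your final subject search)
27. limit 26 to medline
28. limit 26 to embase

Sets 27 and 28 together add up to set 26 again.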

If only they had told us….


[3] EMBASE OVID now also adds conference abstracts.

A good thing if you do an exhaustive search and want to include unpublished material as well (50% of conference abstracts never get published as full papers).

You can still exclude them if you like (see publication types to the right).

Here is what is written at EMBASE.com

Embase now contains almost 800 conferences and more than 260,000 conference abstracts, primarily from journals and journal supplements published in 2009 and 2010. Currently, conference abstracts are being added to Embase at the rate of 1,000 records per working day, each indexed with Emtree.
Conference information is not available from PubMed, and is significantly greater than BIOSIS conference coverage. (…)

[4] And did you know that OVID has eliminated stopwords from MEDLINE and EMBASE? For a few years now you can search for words or phrases like is there hope.tw., which is a very good thing, because it broadens the possibility to search for certain word strings. However, it isn’t generally known.

OVID changed it after complaints by many, including me and a few Cochrane colleagues. I thought I had written a post on it before, but apparently I haven’t ;).
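
A few more phrase searches that would have failed when stopwords were still in place (my own illustrations, not from OVID’s documentation; the exact former stopword list may have differed per database):

is there hope.tw.
quality of life.ti,ab.
out of hospital.tw.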

Credits

Thanks to Joost Daams who always has the latest news on OVID.






Problems with Disappearing Set Numbers in PubMed’s Clinical Queries

18 10 2010

In some upcoming posts I will address various problems related to the changing interfaces of bibliographic databases.

We, librarians and end users, are overwhelmed by a flood of so-called upgrades, which often fail to bring the improvements that were promised….. or which go hand-in-hand with temporary glitches.

Christina of Christina’s LIS Rant even made a rundown of the new interfaces of last summer. Although she didn’t include OVID MEDLINE/EMBASE, the Cochrane Library and Reference Manager in her list, the total number of changed interfaces reached 22!

As a matter of fact, the Cochrane Library was suffering some outages yesterday, to repair some bugs. So I will postpone my coverage of the Cochrane bugs a little.

And OVID sent out a notice last week: “This week Ovid will be deploying a software release of the OvidSP platform that will add new functionality and address improvements to some existing functionality.”

In this post I will confine myself to the PubMed Clinical Queries. According to Christina the PubMed changes “were a bit ago”, but PubMed continuously tweaks its interface, often without paying much attention to the effects.

Back in July, I already covered that the redesign of the PubMed Clinical Queries was no improvement for people who wanted to do more than a quick and dirty search.

It was no longer possible to enter a set number in the Clinical Queries search bar. Thus it wasn’t possible to set up a search in PubMed first and to then enter the final set number in the Clinical Queries. This bug was repaired promptly.

From then on, the set number could be entered again in the clinical queries.

However, one bug was replaced by another: next, search numbers were disappearing from the search history.

I will use the example I used before: I want to know if spironolactone reduces hirsutism in women with PCOS, and if it works better than cyproterone acetate.

Since little is published about this topic, I only search for hirsutism and spironolactone. These terms map correctly to MeSH terms. In the MeSH database I also see (under “see also”) that spironolactone belongs to the aldosterone antagonists, so I broaden spironolactone (#2) with “Aldosterone Antagonists”[Pharmacological Action] using “OR” (set #7). My last set (#8) consists of #1 (hirsutism) AND #7 (#2 OR #6).

Next I go to the Clinical Queries in the Advanced Search and enter #8 (now possible again).

I change the Therapy filter from “broad” to “narrow”, because the broad filter gives too much noise.

In the clinical queries you see only the first five results.

Apparently even the clinical queries are now designed to just take a quick look at the most recent results, but of course, that is NOT what we are trying to achieve when we search for (the best) evidence.

To see all results for the narrow therapy filter, I have to go back to the Clinical Queries again and click on “see all (27)” [5]

A bit of a long way around. But it gets longer…


The 27 hits that result from combining the narrow therapy filter with my search #8 appear. This is set #9.
Note that it is a lower number than set #11 (search + systematic review filter).

Meanwhile set #9 has disappeared from my history.

This is a nuisance if I want to use this set further or if I want to give an overview of my search, e.g. for a presentation.

There are several tricks by which this flaw can be overcome. But they are all cumbersome.

  1. Just add the set number (#11 in this case, which is the last search (#8) + 3 more) to the search history (you have to remember that set number, though).

This is the set number remembered by the system. As you see in the history, you “miss” certain sets: #3 to #5, for instance, are searches you performed in the MeSH database, which show up in the history of the MeSH database, but not in PubMed’s history.

The Clinical Query set number is still there, but it doesn’t show either. Apparently the 3 Clinical Query subsets each get a separate set number, whether the search is truly performed or not. In this case: #11 for (#8) AND systematic[sb], #9 for (#8) AND (Therapy/Narrow[filter]), and #10 for (#8) AND the medical genetics filter.

In this way you have all results in your history. It isn’t immediately clear, however, what these sets represent.

2. Use the commands rather than going to the clinical queries.

Thus type in the search bar: #8 AND systematic[sb]

And then: #8 AND (Therapy/Narrow[filter])

It is easiest to keep all filters in Word/Notepad and copy/paste them each time you need a filter (the commands are sketched after this list).

3. Add clinical queries as filters to your personal NCBI account so that the filters show up each time you do a search in PubMed. This post describes how to do it.
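
For the record, trick 2 boils down to commands like the ones below. The first two are given in the post above; the other Clinical Study Categories follow the same pattern (the Diagnosis line is my own example of that pattern; the medical genetics filter has its own syntax, which I leave out here):

#8 AND systematic[sb]
#8 AND (Therapy/Narrow[filter])
#8 AND (Diagnosis/Narrow[filter])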

Anyway, these remain just tricks to try to make right something that is wrong.

Furthermore, it makes it more difficult to explain the usefulness of the Clinical Queries to doctors and medical students. Explaining option 3 takes too long in a short course, option 1 seems illogical, and option 2 is hard to remember.

Thus we want to keep the set numbers in the history, at least.

A while ago Dieuwke Brand notified the NLM of this problem.

Only recently she received an answer saying that:

we are aware of the continuing problem.  The problem remains on our programmers’ list of items to investigate.  Unfortunately, because this problem appears to be limited to very few users, it has been listed as a low priority.

Only after a second Dutch medical librarian confirmed the problem to the NLM, saying it not only affects one or two librarians, but all the students we teach (~1000-2000 students per university yearly), did they realize that it was a more widespread problem than Dieuwke Brand’s personal problem. Now the problem has a higher priority.

Whatever happened to the days when a problem was simply taken for what it was? As another librarian sighed: apparently something is only a problem if many people complain about it.

Now that I know this (I regarded Dieuwke as a delegate of all Dutch clinical librarians), I realize that I have to “complain” myself each time I and/or my colleagues encounter a problem.






A Filter for Finding “All Studies on Animal Experimentation in PubMed”

29 09 2010

For an introduction to search filters you can first read this post.

Most people searching PubMed try to get rid of publications about animals. But basic scientists and lab animal technicians just want to find those animal studies.

PubMed has built-in filters for that: the Limits. There is a limit for “humans” and a limit for “animals”. But that is not good enough to find each and every article about humans or animals, respectively. The limits are MeSH (Medical Subject Headings, or index terms), and these are by definition not yet added to new articles that haven’t been indexed. To name the main disadvantage…
Thus to find all papers one should at least search for other relevant MeSH and textwords (words in title and abstract) too.

A recent paper published in Laboratory Animals describes a filter for finding “all studies on animal experimentation in PubMed“, to facilitate “writing a systematic review (SR) of animal research” .

As the authors rightly emphasize, SRs are not common practice in the field of animal research. Knowing what has already been done can prevent unnecessary duplication of animal experiments and thus unnecessary animal use. The authors have interesting ideas, like registration of animal studies (similar to clinical trial registers).

In this article they describe the design of an animal filter for PubMed. The authors describe their filter as follows:

“By using our effective search filter in PubMed, all available literature concerning a specific topic can be found and read, which will help in making better evidence-based decisions and result in optimal experimental conditions for both science and animal welfare.”

Is this conclusion justified?

Design of the filter

Their filter is subjectively derived: the terms are “logically” chosen.

[1] The first part of the animal filter consists of only MeSH-terms.

You can’t use animals[mh] (mh = MeSH) as a search term, because MeSH terms are automatically exploded in PubMed. This means that narrower terms (lower in the tree) are also searched. If “Animals” were allowed to explode, the search would include the MeSH “Humans”, which is at the end of one tree (primates etc., see Fig. below).

Therefore the MeSH part of their search consists of the following (sketched after the list):

  1. animals[mh:noexp]: only articles are found that are indexed with “animals”, but not with its narrower terms (notably, this is identical to the PubMed Limit “animals”).
  2. Exploded animal-specific MeSH terms not having humans as a narrower term, e.g. “fishes”[MeSH Terms].
  3. Non-exploded MeSH terms in those cases where humans occur in the same branch, like “primates”[MeSH Terms:noexp].
  4. In addition, two other MeSH terms are used: “animal experimentation”[MeSH Terms] and “models, animal”[MeSH Terms].
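
Abbreviated, the MeSH part of the filter thus has this shape (my own compression for illustration; the complete string, with all the animal branches written out, is in the authors’ appendix):

animals[mh:noexp] OR "animal experimentation"[MeSH Terms] OR "models, animal"[MeSH Terms] OR "fishes"[MeSH Terms] OR "primates"[MeSH Terms:noexp] OR (… all other animal branches)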

[2] The second part of the search filter consists of terms in the title and abstract (command: [tiab]).

The terms are taken from relevant MeSH, two reports about animal experimentation in the Netherlands and in Europe, and the experience of the authors, who are experts in the field.

The authors use this string for non-indexed records (command: NOT medline[sb]). Thus this part is only meant to find records that haven’t (yet) been indexed, but in which (specific) animals are mentioned by the author in the title or abstract. Synonyms and spelling variants have been taken into account.

Apparently the authors have chosen NOT to search for text words in indexed records. Presumably, applying the text words to indexed records as well would give too much noise. However, the authors do not discuss why this restriction was necessary.

This search string is extremely long. Partly because truncation isn’t used for the longer words: e.g. nematoda[tiab] OR nematode[tiab] OR nematodes[tiab] instead of simply nematod*[tiab]. Partly because they aim for completeness. However, the usefulness of the individual terms hasn’t been verified (see below).

Search strategies can be freely accessed here.

Validation

The filter is mainly validated against the PubMed Limit “Animals”.

The authors assume that the PubMed Limits are “the most easily available and most obvious method”. However, I know few librarians or authors of systematic reviews who would solely apply this so-called ‘regular method’. In the past I have used exactly the same MeSH terms (1) and the main text words (2) as included in their filter.

Considering that the filter includes the PubMed limit “Animals” [1.1], it does not come as a surprise that the sensitivity of the filter exceeds that of the PubMed limit Animals…

Still, the sensitivity (106%) is not really dramatic: 6% more records are found, with the PubMed Limit “animals” set at 100%.

Apparently records are very well indexed with the MeSH “animals”. Few true animal records are missed, because “animals” is a check tag. A check tag is a MeSH that is looked for routinely by indexers in every journal article. It is added to the record even if it isn’t the main (or major) point of an article.

Is an increased sensitivity of appr. 6% sufficient to conclude that this filter “performs much better than the current alternative in PubMed”?

No. It is not only important that MORE is found, but also to what degree the extra hits are relevant. Surprisingly, the authors ONLY determined SENSITIVITY, not specificity or precision.

There are many irrelevant hits, partly caused by the inclusion of animal population groups[mesh], which has some narrower terms that are often not used for experimentation, e.g. endangered species.

Thus even after omission of animal population groups[mesh], the filter still gives hits like:

These are evidently NOT laboratory animal experiments, and they are mainly caused by the inclusion of invertebrates like plankton.

Most other MeSH terms are not extremely useful either. Even terms such as animal experimentation[mh] and models, animal[mh] are seldom assigned to experimental studies that lack animals as a MeSH.

According to the authors, the MeSH “Animals” will not retrieve studies solely indexed with the MeSH term Mice. However, the first records missed with mice[mesh] NOT animals[mh:noexp] are from 1965, when “animals” apparently wasn’t yet used as a check tag in addition to the specific ‘animal’ MeSH.

Thus presumably the MeSH filter can be much shorter: it need only contain the specific animal MeSH (rats[mh], mice[mh] etc.) when publications from 1965 and earlier are also required.
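
Something like this minimal sketch (my own illustration of the point, not a validated filter):

animals[mh:noexp] OR mice[mh] OR rats[mh] OR (other specific animal MeSH, needed only for the pre-1965 records)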

[Figure: the types of vertebrate animals used in lab research. Image via Wikipedia]

Their text word string (2) is also extremely long. Apart from the lack of truncation, most animal terms are not relevant for most searches. Two thirds of the experiments are done with rodents (see Fig.). The other animals are either used for specific experiments (zebrafish, Drosophila) or appear in another context, not related to animal experiments, such as:

swine flu, avian flu, milk production by cows, allergy to milk products or mites, stings by insects and bites by dogs, and of course fish, birds, cattle and poultry as food, fetal calf serum in culture medium, but also vaccination with “mouse products” in humans. Thus most of the terms produce noise for most topics. An example below (found with birds[mesh]) 🙂

On the other hand, strains of mice and rats are missing from the search string: e.g. BALB/c, Wistar.

Extremely long search strings (one page) are also annoying to use. However, the main issue is whether the extra noise matters, because the filter is meant to find all experimental animal studies.

As Carlijn Hooijmans correctly notes, such filters are never used on their own, only in combination with topic search terms.

Hooijmans et al have therefore “validated” their filter with two searches. “Validated” between quotation marks because they have only compared the number of hits, thus the increase in sensitivity.

Their first topic is the use of probiotics in experimental pancreatitis (see appendix).

Their filter (combined with the topic search) retrieved 37 items against 33 items with the so-called “regular method”: an increase in sensitivity of 21.1%.

After updating the search I got 38 vs. 35 hits. Two of the 3 extra hits obtained with the broad filter are relevant and are missed with the PubMed limit for animals, because those records haven’t been indexed. They could also have been found with the text words pig*[tiab] or dog*[tiab]. Thus the filter is OK for this purpose, but unnecessarily long. The MeSH part of the filter had NO added value compared to animals[mh:noexp].

Since there are only 148 hits without the use of any filter, researchers could also simply screen all hits. Alternatively, there is a trick to safely exclude human studies:

NOT (humans[mh] NOT animals[mh:noexp])

With this double negation you exclude PubMed records that are indexed with humans[mh], as long as these records aren’t indexed with animals[mh:noexp] too. It is far “safer” than limiting to “animals”[mesh:noexp] only. We use a similar approach to “exclude” animals when we search for human studies.

This extremely simple filter yields 48 hits, finding all hits found with the large animal filter (plus 10 irrelevant hits).

Such a simple filter can easily be used for searches with relatively few hits, but gives too many irrelevant hits in case of a high yield.
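
Applied to the first topic, this would (simplified; the actual topic search in the appendix is more elaborate) look like:

(pancreatitis AND probiotics) NOT (humans[mh] NOT animals[mh:noexp])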

The second topic is food restriction. 9280 records were obtained with the Limit “animals”, whereas this strategy combined with the complete filter retrieved 9650 items. The sensitivity of this search strategy was therefore 104%: 4% extra hits were obtained.

The MeSH search added little to the search: only 21 extra hits. The relevant hits were (again) only from before 1965.

The text-word part of the search finds relevant new articles, although there are quite a few irrelevant findings too, e.g. dieting, and obtaining proteins from chicken.

4% isn’t a lot extra, but the aim of the researchers is to find all there is.

However, it is questionable whether researchers want to find every single experiment or observation done in the animal kingdom. If I were to plan an experiment on whether food restriction lowers the risk of prostate cancer in a transgenic mouse, need I know what the effects of food restriction are on Drosophila, nematodes, salmon or even chicken, on whatever outcome? Would I like to screen 10,000 hits?

Probably most researchers would like separate filters for rodents and other laboratory animals (primates, dogs) and for work on Drosophila or fish. In some fields there might also be a need to filter out clinical trials and reviews.

Furthermore, it is not only important to have a good filter but also a good search.

The topic searches in the current paper are not ideal: they contain overlapping terms (“food restriction” is also found by “food” AND “restriction”) and miss important MeSH terms (Food Deprivation, Fasting, and “Energy Intake”, the broader term of Caloric Restriction, are assigned more often to records about food deprivation than Caloric Restriction itself).

Their search:

(“food restriction”[tiab] OR (“food”[tiab] AND “restriction”[tiab]) OR “feed restriction”[tiab] OR (“feed”[tiab] AND “restriction”[tiab]) OR “restricted feeding”[tiab] OR (“feeding”[tiab] AND “restricted”[tiab]) OR “energy restriction”[tiab] OR (“energy”[tiab] AND “restriction”[tiab]) OR “dietary restriction”[tiab] OR (“dietary”[tiab] AND “restriction”[tiab]) OR “caloric restriction”[MeSH Terms] OR (“caloric”[tiab] AND “restriction”[tiab]) OR “caloric restriction”[tiab])
might for instance be changed to:

Energy Intake[mh] OR Food Deprivation[mh] OR Fasting[mh] OR food restrict*[tiab] OR feed restrict*[tiab] OR restricted feed*[tiab] OR energy restrict*[tiab] OR dietary restrict*[tiab] OR caloric restrict*[tiab] OR calorie restrict*[tiab] OR diet restrict*[tiab]

You do not expect such incomplete strategies from people who repeatedly stress that “most scientists do not know how to use PubMed effectively” and that “many researchers do not use ‘Medical Subject Headings’ (MeSH terms), even though they work with PubMed every day”…..

Combining this modified search with their animal filter yields 21,920 hits instead of the 10,335 found with their “food deprivation” search and their animal filter. A sensitivity of 212%!!! Now we are talking! 😉 (And yes, there are many new relevant hits found.)

Summary

The paper describes the performance of a subjective search filter meant to find all experimental studies performed with laboratory animals. The authors have merely “validated” this filter against the PubMed Limit “animals”. In addition, they only determined sensitivity: on average 7% more hits were obtained with the new animal filter than with the PubMed limit alone.

The authors have not determined the specificity or precision of the filter, not even for the 2 topics to which they applied it. A quick look at the results shows that the MeSH terms other than the PubMed limit “animals” contributed little to the enhanced sensitivity. The text-word part of the filter yields more relevant hits. Still, depending on the topic, there are many irrelevant records found, because it is difficult to separate animals as food, allergens etc. from laboratory animals used in experiments, and because the filter is developed to find every single animal in the animal kingdom, including poultry, fish, nematodes, flies, endangered species and plankton. Another (hard-to-avoid) “contamination” comes from in vitro experiments with animal cells, animal products used in clinical trials, and narrative reviews.

In practice, only parts of the search filter seem useful for most systematic reviews, especially if these reviews are not meant to give an overview of all findings in the universe, but are needed to check whether a similar experiment hasn’t already been done. It seems impractical if researchers have to make a systematic review, checking, summarizing and appraising 10,000 records, each time they start a new experiment.

Perhaps I’m somewhat too critical, but the cheering and triumphant tone of the paper, in combination with an overly simple design and a lack of proper testing of the filter, asked for a critical response.

Credits

Thanks to Gerben ter Riet for alerting me to the paper. He also gave the tip that the paper can be obtained here for free.

References

  1. Hooijmans CR, Tillema A, Leenaars M, & Ritskes-Hoitinga M (2010). Enhancing search efficiency by means of a search filter for finding all studies on animal experimentation in PubMed. Laboratory animals, 44 (3), 170-5 PMID: 20551243

———————-





Thoughts on the PubMed Clinical Queries Redesign

7 07 2010

Added 2010-07-09:  It is possible to enter the set numbers again, but the results are not yet reliable. They are probably working on it.

Last Wednesday (June 30th 2010) the PubMed Clinical Queries were redesigned.

Clinical Queries are prefab search filters that enable you to find aggregate evidence (Systematic Reviews filter) or articles in a certain domain (Clinical Study Category filters, like diagnosis and therapy), as well as papers in the field of Medical Genetics (not shown below).

This was how it looked:

Since there were several different boxes you had to re-enter your search each time you tried another filter.

Now the Clinical Queries page has been reconfigured with columns to preview the first five citations of the results for all three research areas.

So this is how it looks now (search= PCOS spironolactone cyproterone hirsutism (PubMed automatically connects with “AND”))

Click to enlarge

Most quick responses to the change are “Neat”, “improved”, “tightened up”…….

This change might be a stylistic improvement for those who are used to entering words in the Clinical Queries without optimizing the search. At least you see “what you get”, you can preview the results of 3 filters, and you can still see “all” results by clicking on “see all”. However, if you want to see all the results of another filter, you still have to go back to the Clinical Queries again.

But… I was not pleased to notice that it is no longer possible to enter a set number (e.g. #9) in the Clinical Queries search bar.

….Especially since the actual change was just before the start of an EBM-search session. I totally relied on this feature….

  1. Laika (Jacqueline)
    laikas Holy shit. #Pubmed altered the clinical queries, so that I can’t optimize my search first and enter the setnumber in the clin queries later.
  2. Laika (Jacqueline)
    laikas Holy shit 2 And I have a search class in 15 minutes. Can’t prepare changes. I hate this #pubmed #fail
  3. Mark MacEachern
    markmac perfect timing (for an intrface chnge) RT @laikas Holy shit 2 And I have a search class in 15 min. Can’t prepare changes. #pubmed #fail


Furthermore, the Clinical Study Category now defaults to “therapy broad” instead of narrow. This means a lot more noise: the broad filter searches for (all) clinical trials, while the narrow filter is meant to find randomized controlled trials only.

Normally I optimize the search first before entering the final search set number into the Clinical Queries (see Tip 9 of “10+1 PubMed tips for residents and their instructors”). For instance, the above search would not include PCOS (which doesn’t map to the proper MeSH and isn’t required) and cyproterone, but would consist of hirsutism AND spironolactone (both mapping to the appropriate MeSH).

The set number of the “optimized” search is then entered in the search box of the Systematic Reviews filter. This yields 9 more hits, including Cochrane systematic reviews. The narrow therapy filter gives more hits (24), which are more relevant as well.

The example that is shown in the NLM Technical Bulletin (dementia stroke) yields 142 systematic reviews and 1318 individual trials, of which only the 5 most recent are shown. Not very helpful to doctors and scientists, IMHO.

Anyway, we “lost” a (roundabout) way to optimize the search before entering it into the search box.

The preview of 3 boxes is OK, the looks are OK, but why was this functionality lost?

For the moment I decided to teach my class another option that I use myself: adding clinical queries to your personal NCBI account, so that the filters show up each time you perform a search in PubMed (this post describes how to do it).

It takes some time to make NCBI accounts and to explain the procedure to the class, time you would rather save for the searches themselves (in a 1-2 hr workshop). But it is the most practical solution.

We notified PubMed, but it is not clear whether they plan to restore this function.

Note: 2010-07-09:  It is possible to enter the set numbers again, but the results are not yet reliable. They are probably working on it.

Still, for advanced users, adding filters to your NCBI may be most practical.

——-

* Re-entering spironolactone and hirsutism in the clinical queries is doable here, but often the search is more complex and differs per filter. For instance, I might add a third concept when looking for an individual trial.





PubMed versus Google Scholar for Retrieving Evidence

8 06 2010

A while ago a resident in dermatology told me she got many hits out of PubMed, but zero results out of TRIP. It appeared she had used the same search for both databases: alopecia areata and diphencyprone (a drug with a lot of synonyms). Searching TRIP for alopecia (in the title) only, we found a Cochrane review and a relevant NICE guideline.

Usually, each search engine has its own search and index features. When comparing databases one should compare “optimal” searches and keep in mind for what purpose the search engines were designed. TRIP is best suited to searching for aggregate evidence, whereas PubMed is best suited to searching for individual biomedical articles.

Michael Anders and Dennis Evans ignore this rule of thumb in their recent paper “Comparison of PubMed and Google Scholar Literature Searches”. And this is not the only shortcoming of the paper.

The authors performed searches on 3 different topics to compare PubMed and Google Scholar search results. Their main aim was to see which database was the most useful to find clinical evidence in respiratory care.

Well quick guess: PubMed wins…

The 3 respiratory care topics were selected from a list of systematic reviews on the Website of the Cochrane Collaboration and represented in-patient care, out-patient care, and pediatrics.

The references in the three chosen Cochrane systematic reviews served as a reference (or “gold”) standard. However, abstracts, conference proceedings, and responses to letters were excluded.

So far so good. But note that the outcome of the study only allows us to draw conclusions about interventional questions that seek to find controlled clinical trials. Other principles may apply to other domains (diagnosis, etiology/harm, prognosis) or to other types of studies. And it certainly doesn’t apply to non-EBM topics.

The authors designed ONE search for each topic, taking 2 common clinical terms from the title of each Cochrane review, connected by the Boolean operator “AND” (see Table; quotation marks were not used). No synonyms were used, and the translation of the searches in PubMed wasn’t checked (luckily the mapping was rather good).

“Mmmmm…”

Topic / Search terms:

  • Noninvasive positive-pressure ventilation for cardiogenic pulmonary edema: “noninvasive positive-pressure ventilation” AND “pulmonary edema”
  • Self-management education and regular practitioner review for adults with asthma: “asthma” AND “education”
  • Ribavirin for respiratory syncytial virus: “ribavirin” AND “respiratory syncytial virus”

In PubMed they applied the narrow methodological filter, or Clinical Query, for the domain therapy.
This prefab search strategy (randomized controlled trial[Publication Type] OR (randomized[Title/Abstract] AND controlled[Title/Abstract] AND trial[Title/Abstract])), developed by Haynes, is suitable to quickly detect the available evidence (provided one is looking for RCTs and doesn’t do an exhaustive search) (see previous posts 2, 3, 4).
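
Written out for the asthma topic from the table, the combined PubMed query thus becomes (my own expansion, pasting the two search terms into the filter string quoted above):

("asthma" AND "education") AND (randomized controlled trial[Publication Type] OR (randomized[Title/Abstract] AND controlled[Title/Abstract] AND trial[Title/Abstract]))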

Google Scholar, as we all probably know, does not have such methodological filters. The authors “limited” their search by using the Advanced option, entering the 2 search terms in the “Find articles… with all of the words” box (a Boolean “AND”), and limiting the search to the subject area “Medicine, Pharmacology, and Veterinary Science”.

They did a separate search for publications that were available at their library, which has limited value for others, subscriptions being different for each library.

Next they determined the sensitivity (the number of relevant records retrieved as a proportion of the total number of records in the gold standard) and the precision or positive predictive value, the fraction of returned positives that are true positives (explained in 3).

Let me guess: sensitivity might be equal or somewhat higher, and precision is undoubtedly much lower in Google Scholar. This is because (in) Google Scholar:

  • you can often search the full text instead of just the abstract, title and (added) keywords/MeSH
  • the results are inflated by finding one and the same reference cited in many different papers (that might not directly deal with the subject)
  • you can’t limit on methodology, study type or “evidence”
  • there is no automatic mapping and explosion (which may provide a way to find more synonyms and thus more relevant studies)
  • it has a broader coverage (grey literature, books, more topics)
  • it lags behind PubMed in receiving updates from MEDLINE

Results: PubMed and Google Scholar had pretty much the same recall, but for ribavirin and RSV the recall was higher in PubMed: PubMed found 100% (12/12) of the included trials, Google Scholar 58% (7/12).

No discussion as to why. Since Google Scholar should find the words in the titles and abstracts of PubMed records, I repeated the search in PubMed restricted to the title/abstract field: I searched ribavirin[tiab] AND respiratory syncytial virus[tiab]* and limited it with the narrow therapy filter. I found 26 papers instead of 32. The following titles were missing when I only searched title and abstract (between brackets: the relevant MeSH, i.e. the reason why the paper was found, plus the absence of an abstract (thus only title and MeSH) and whether it was a letter; in bold: why the terms in title/abstract were not found):

  1. Evaluation by survival analysis on effect of traditional Chinese medicine in treating children with respiratory syncytial viral pneumonia of phlegm-heat blocking Fei syndrome. [MeSH: Respiratory Syncytial Virus Infections]
  2. Ribavarin in ventilated respiratory syncytial virus bronchiolitis: a randomized, placebo-controlled trial. [MeSH: Respiratory Syncytial Virus Infections; NO ABSTRACT, LETTER]
  3. Study of interobserver reliability in clinical assessment of RSV lower respiratory illness. [MeSH: Respiratory Syncytial Virus Infections]
  4. Ribavirin for severe RSV infection. N Engl J Med. [MeSH: Respiratory Syncytial Viruses; NO ABSTRACT, LETTER]
  5. Stutman HR, Rub B, Janaim HK. New data on clinical efficacy of ribavirin. [MeSH: Respiratory Syncytial Viruses; NO ABSTRACT]
  6. Clinical studies with ribavirin. [MeSH: Respiratory Syncytial Viruses; NO ABSTRACT]

Three of the papers had the additional MeSH Respiratory Syncytial Virus Infections and the other three Respiratory Syncytial Viruses. Although not all papers (2 comments/letters) may be relevant, it illustrates why PubMed may yield results that are not retrieved by Google Scholar (if one doesn’t use synonyms).

In contrast to Google Scholar, PubMed translates the search ribavirin AND respiratory syncytial virus so that the MeSH terms “ribavirin”[MeSH Terms], “respiratory syncytial viruses”[MeSH Terms] and (indirectly) “respiratory syncytial virus infections”[MeSH] are also found.

Thus, with the above-mentioned search, Google Scholar could have missed articles that use terms like RSV or respiratory syncytial viral pneumonia (or that lack specifics, like “clinical efficacy”).
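
Roughly, PubMed’s automatic term mapping expands the simple search into something like this (a sketch; the exact translation changes over time and can be checked in the “Search details” box on PubMed’s results page):

("ribavirin"[MeSH Terms] OR "ribavirin"[All Fields]) AND ("respiratory syncytial viruses"[MeSH Terms] OR "respiratory syncytial virus"[All Fields])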

The other result of the study (the result section comprises 3 sentences) is that “For each individual search, PubMed had better precision”.

The precision was 59/467 (13%) in PubMed and 57/80,730 (0.07%) in Google Scholar (p<0.001)!!
(Note: they had to add author names to the Google Scholar search to find the papers in the haystack 😉 )

Héhéhé, how surprising. Well, why would it be that no clinician or librarian would ever think of using Google Scholar as the primary, let alone the only, source to search for medical evidence?
It should also ring a bell that [QUOTE**]:
“In the Cochrane reviews the researchers retrieved information from multiple databases, including MEDLINE, the Cochrane Airways Group trial register (derived from MEDLINE)***, CENTRAL, EMBASE, CINAHL, DARE, NHSEED, the Acute Respiratory Infections Group’s specialized register, and LILACS…”
Note that Google Scholar isn’t mentioned as a source! Google Scholar is only recommendable for finding work that cites (already found) relevant articles (this is called forward searching), and then only if one doesn’t have access to Web of Science or Scopus. Thus only to catch the last fish.

Perhaps the paper would have been more interesting if the authors had looked at any ADDED VALUE of Google Scholar when exhaustively searching for evidence. Then it would have been crucial to look for grey literature too (instead of excluding it), because this could be a possible strong point of Google Scholar. Furthermore, one could have researched whether forward searching yielded extra papers.

The precision of PubMed is attributable to the narrow therapy filter used, but the vastly lower precision of Google Scholar is also due to its searching of the full text, including the reference lists.

For instance, searching for ribavirin AND respiratory syncytial virus in PubMed yields 523 hits. This can be reduced to 32 hits by applying the narrow therapy filter: a reduction by a factor of 16.
Yet a similar search in Google Scholar yields 4,080 hits. Thus, even without the filter, Google Scholar gives an almost 8 times higher yield than PubMed.

That evokes another research idea: what would have happened if randomized (OR randomised) had been added to the Google Scholar search? Would this have increased the precision? In the case of the above search it lowers the yield by a factor of 2, and the first hits look very relevant.
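
In the Advanced Scholar Search form that would amount to something like this (my sketch, only checked against the first pages of results):

Find articles with all of the words: ribavirin "respiratory syncytial virus"
with at least one of the words: randomized randomised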

It is really funny, but the authors undermine their own conclusion that “These results are important because efficient retrieval of the best available scientific evidence can inform respiratory care protocols, recommendations for clinical decisions in individual patients, and education, while minimizing information overload” by saying elsewhere that “It is unlikely that users consider more than the first few hundred search results, so RTs who conduct literature searches with Google Scholar on these topics will be much less likely to find references cited in Cochrane reviews.”

Indeed, no one would take it into one’s head to try to find the relevant papers among those 4,080 hits retrieved. So what is this study worth from a practical point of view?

Well, anyway: just as you can ask for the sake of asking, you can research for the sake of researching. Despite being an EBM addict, I prefer a good subjective overview on this topic to a weak scientific, quasi-evidence-based research paper.

Does this mean Google Scholar is useless? Does it mean that all those PhDs hooked on Google Scholar are wrong?

No, Google Scholar serves certain purposes.

Just like the example of PubMed and TRIP, you need to know what is in it for you and how to use it.

I used Google Scholar when I was a researcher:

  • to quickly find a known reference
  • to find citing papers
  • to get an idea of how often articles have been cited / to find the most relevant papers in a quick and dirty way (i.e. by browsing)
  • for quick and dirty searches, by putting word strings between quotation marks
  • to search the full text. I used quite extensive searches to find out which methods were used (for instance methods AND (synonym1 OR syn2 OR syn3)). An interesting possibility is to do a second search for only the last few words (in a string); this will often reveal the next words in the sentence. Often you can repeat this trick, reading a piece of the paper without needing access.

If you want to know more about the pros and cons of Google Scholar, I recommend the recent overview by the expert librarian Dean Giustini: “Sure Google Scholar is ideal for some things” [7]. He also compiled a “Google Scholar bibliography” with ~115 articles as of May 2010.

Speaking of librarians, why was the study performed by PhD RRTs (RN) and wasn’t the university librarian involved?****

* This is a search string, and thus more strict than respiratory AND syncytial AND virus.
** Abbreviations used instead of full (database) names.
*** This is wrong: such a register contains references to controlled clinical trials from EMBASE, CINAHL and all kinds of databases in addition to MEDLINE.
**** Other than to read the manuscript afterwards.

References

  1. Anders ME, & Evans DP (2010). Comparison of PubMed and Google Scholar Literature Searches. Respiratory care, 55 (5), 578-83 PMID: 20420728
  2. This Blog: https://laikaspoetnik.wordpress.com/2009/11/26/adding-methodological-filters-to-myncbi/
  3. This Blog: https://laikaspoetnik.wordpress.com/2009/01/22/search-filters-1-an-introduction/
  4. This Blog: https://laikaspoetnik.wordpress.com/2009/06/30/10-1-pubmed-tips-for-residents-and-their-instructors/
  5. NeuroDojo (2010/05) Pubmed vs Google Scholar? [also gives a nice overview of pros and cons]
  6. GenomeWeb (2010/05/10) Content versus interface at the heart of Pubmed versus Scholar? [response to 5]
  7. The Search principle Blog (2010/05) Sure Google Scholar is ideal for some things.




Ten Years of PubMed Central: a Good Thing that’s Only Going to Get Better.

26 05 2010

PubMed Central (PMC) is a free digital archive of biomedical and life sciences journal literature at the U.S. National Institutes of Health (NIH), developed and managed by NIH’s National Center for Biotechnology Information (NCBI) in the National Library of Medicine (NLM) (see PMC overview).
PMC is a central repository for biomedical peer-reviewed literature, in the same way as NCBI’s GenBank is the public archive of DNA sequences. The idea behind it is “that giving all users free access to the material in PubMed Central is the best way to ensure the durability and utility of the electronic archive as technology changes over time and to integrate the literature with other information resources at NLM”.
Many journals are already involved, although most of them adhere to restrictions (e.g. availability only after 1 year). For a list see http://www.ncbi.nlm.nih.gov/pmc/journals/

PMC, the brainchild of Harold Varmus, once the Director of the National Institutes of Health, celebrated its 10-year anniversary earlier this year.

For this occasion Dr. Lipman, Director of the NCBI, gave an overview of past and future plans for the NIH’s archive of biomedical research articles. See the videotape from the Columbia University Libraries below:

[Video: “Ten Years of PubMed Central”, Columbia University Libraries (the embedded Vodpod video is no longer available)]

The main points raised by David Lipman (approximate times given if you want to learn more; the text below is not a transcription, but a summary in my own words):

PAST/PRESENT

  • >7:00 BioMed Central (taken over by Springer) and PLoS ONE show that Open Access can be a sustainable way of publishing science.
  • 13:23 The publisher keeps the copyright. A publisher may stop depositing, but the content already deposited remains in PMC.
  • 13:50 PMC is also an obligatory repository for author manuscripts under various funding agencies’ mandates, like those of the NIH and the UK Wellcome Trust.
  • 14:31 One of the ideas from the beginning was to crosslink the literature with the underlying molecular and other databases. For instance, NCBI is capable of mining the information in the archived text and connecting it to the compound and protein structure databases.
  • 16:50 There is back-issue digitization for the journals that are participating, enabling you to find research that you wouldn’t easily have found otherwise.
  • PMC has become international (not restricted to USA)
  • The PMC archive becomes more useful if it becomes more comprehensive
  • Before PMC you could do a Google Scholar search and find a paper in PubMed that appeared to be funded by the NIH, but then you had to pay $30 in order to get it. That’s hard to explain to taxpayers (Lipman had a hard time explaining it to his dad, who was looking for medical information online). This was the impetus for making the results of NIH-sponsored research freely available.

PRESENT/FUTURE

  • 23:00 Discovery initiative: the use of tracking tools to find out which changes to the website work for users and which don’t. Thus modifications should lead to alterations in user behavior (statistics are easy with millions of users). The discovery initiative led to the development and improvement of sensors, like sensors for disease names, drug names, genes and citations. What is measured is whether people click through (if it isn’t interesting, they usually don’t) and how quickly they find results. Motto: train the machine, not the users.
  • 30:37 We changed the looks of PMC. Planning to make a better presentation on the iPhone and on wide monitors.
  • 31:40 There are almost 2 million articles in PubMed Central; 585 journals fully participate in PMC.
  • 32:30 It takes very long to publish a paper, even in Open Access journals. Therefore a lot of people are not publishing little discoveries that are not important enough to put a lot of time into. Publishing should be almost as easy as writing a blog, but with peer review. This requires a new type of journal: with peer review, but with instant feedback from readers and reviewers and rapid response to comments. The Google Knol authoring system offers a fast and simple authoring system where authors (with a Google profile) can collaborate and compose the article on the server. Uploading of documents and figures is easy, article updates are simple and fast, and there is a simple workflow for moderators. After the paper is accepted you press a button, the paper is immediately available, and the next day PMC automatically gets the XML content. There is also a simple reference manager included to paste citations.
  • Principle: how you can start a journal with this system (see Figure). Until now: 60 articles in PLoS Currents Influenza. There are also plans for other journals: the CDC is announcing a Systematic Reviews journal, for instance.

QUESTIONS (>39:30):

  • Process by which a “Knol journal” is considered for inclusion in NLM?
    • Decide: is it in scope? Is the implicit policy (health peer review) being followed? Who are the people involved? Look at a dozen articles.
  • As the content in PMC increases, will it become possible to search in the full text, just like in Google Scholar?
    • Actually, the full text is searchable in PMC, as opposed to PubMed, but we are not that happy with the full-text retrieval. Even with a really good approach, searching the full text works just a little bit better than searching PubMed.
      We are incorporating more of the information from PMC into PubMed, and we are working on a separate image database with all the figures from books and articles in PMC (with other search possibilities). Subsets of book chapters (like practice guidelines) will get PubMed abstracts and become searchable in PubMed as well.
  • Are there ways to track a full list of our institution’s OA articles in PMC (not picking up everything in PubMed)?
    • Likely the NIH will be contacting the offices responsible for research, to let them know which articles are out of compliance and to get their assistance in making sure that those get in.
    • Authors can easily update the electronic My Bibliography (in My NCBI in PubMed).
    • The Author ID project involves computational disambiguation, where you are asked whether you are the author of a paper you didn’t include. It may also become possible to have automatic reporting to the institutions.
  • What did it take politically to get the appropriation bill (PMC initiative) passed?
    • Congress always pushed more open access, because it was already spending money on the research. Most of the initiative came more from librarians (i.e. small libraries not having sufficient access) and government, than from the NIH.
  • Is there a way to narrow down to free full text papers from the NIH in PMC?
    • In PubMed, you can filter free full text articles in general via the Limits (a sketch follows this list).
  • Are all the articles deposited in PMC submitted as the final manuscript?
    • Generally, yes.
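
For example (my own illustration; free full text[sb] is PubMed’s general free-full-text subset, and as far as I know there is no simple extra tag for “NIH-funded” on top of it):

ribavirin AND free full text[sb]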

HT: @bentoth on Twitter