Things to Keep in Mind when Searching OVID MEDLINE instead of PubMed

25 11 2011

When I search extensively for systematic reviews I prefer OVID MEDLINE to PubMed, for several reasons. Among them: it is easier to build a systematic search in OVID, the search history has a more structured format that is easy to edit, the search features are more advanced, giving you more control over the search, and translating a search to OVID EMBASE, PsycINFO and the Cochrane Library is “peanuts”, relatively speaking.

However, there are at least two things to keep in mind when searching OVID MEDLINE instead of PubMed.

1. You may miss publications, most notably recent papers.

PubMed doesn’t only provide access to MEDLINE, but also contains some other citations, including in-process citations, which provide a record for an article before it is indexed with MeSH and added to MEDLINE.

As previously mentioned, I once missed a crucial RCT that was available in PubMed, but not yet available in OVID/MEDLINE.

A few weeks ago one of my clients told me that she had found 3 important papers with a simple PubMed search that were not retrieved by my exhaustive OVID MEDLINE search (D’oh!).
All articles were recent ones [Epub ahead of print, PubMed - as supplied by publisher]. I checked whether these articles were indeed not yet included in OVID MEDLINE, and they weren’t.

As said, PubMed doesn’t have all the search features of OVID MEDLINE, and I felt a certain reluctance to build a completely new exhaustive search in PubMed. I would probably retrieve many irrelevant papers, which I had tried to avoid by searching OVID*. I therefore decided to roughly translate the OVID search using textwords only (the missed articles had no MeSH attached). It was a matter of copy-pasting the single textwords from the OVID MEDLINE search (omitting the adjacency operators) and adding the tag [tiab], which means that terms are searched as textwords (in title and abstract) in PubMed (#2, only part of the long search string is shown).

To see whether all articles missed in OVID were in the non-MEDLINE set, I added the command: NOT MEDLINE[sb] (#3). Of the 332 records (#2), 28 belonged to the non-MEDLINE subset. All 3 relevant articles, not found in OVID MEDLINE, were in this set.
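The rough translation described above can be sketched in a few lines of Python. This is a minimal illustration only; the function name and the two example terms are mine, not the actual (much longer) search string from this post:

```python
# Sketch: join textwords copied from an OVID search into a PubMed query,
# searching each term in title/abstract ([tiab]) and optionally
# restricting to the non-MEDLINE subset with NOT medline[sb].
# The example terms are illustrative, not the real search.

def pubmed_textword_query(terms, non_medline_only=False):
    """Search each term as [tiab]; optionally exclude MEDLINE records."""
    query = " OR ".join(f"{term}[tiab]" for term in terms)
    if non_medline_only:
        query = f"({query}) NOT medline[sb]"
    return query

print(pubmed_textword_query(["hirsutism", "spironolactone"], non_medline_only=True))
# → (hirsutism[tiab] OR spironolactone[tiab]) NOT medline[sb]
```

The resulting string can be pasted directly into the PubMed search bar.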

In total, there were 15 unique records not present in the OVID MEDLINE and EMBASE search. This additional search in PubMed was certainly worth the effort, as it yielded more than 3 new relevant papers. (Apparently there was a recent boom in relevant papers on the topic.)

In conclusion, when doing an exhaustive search in OVID MEDLINE it is worth doing an additional search in PubMed to find the non-MEDLINE papers. These are often very relevant papers that you wouldn’t like to have missed. Depending on your aim, a simpler, broader search for textwords only, limited with NOT MEDLINE[sb], may suffice.**

From now on, I will always include this PubMed step in my exhaustive searches. 

2. OVID MEDLINE contains duplicate records

I use Reference Manager to deduplicate the records retrieved from all databases, and I share the final database with my client. I keep track of the number of hits in each database and of the number of duplicates, to facilitate the reporting of the search procedure later on (using the PRISMA flowchart, see above). During this procedure, I noticed that I always got FEWER records in Reference Manager when I imported records from OVID MEDLINE, but not when I imported records from the other databases. Thus it appears that OVID MEDLINE contains duplicate records.

For me it was just a fact that there were duplicate records in OVID MEDLINE. But others were surprised to hear this.

Where everyone else just wrote down the total number of hits in OVID MEDLINE, I always used the number of hits after deduplication in Reference Manager. But this is quite a detour and not easy to explain in the PRISMA flowchart.

I wondered whether this deduplication could be done in OVID MEDLINE directly. I knew you could deduplicate a multifile search, but would it also be possible to deduplicate a set from one database only? According to the OVID help there should be a button somewhere, but I couldn’t find it (curious if you can).

Googling, I found another OVID manual saying:

..dedup n = Removes duplicate records from multifile search results. For example, ..dedup 5 removes duplicate records from the multifile results set numbered 5.

Although the manual only talked about “multifile searches”, I tried the command (..dedup 34) on the final search set (34) in OVID MEDLINE, and voilà, 21 duplicates were found (exactly the same number as removed by Reference Manager).
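For readers who deduplicate exported records outside OVID: the behaviour of Reference Manager (and, apparently, of ..dedup) can be approximated by keeping the first record per PMID. A minimal Python sketch, with made-up records:

```python
# Sketch: deduplicate exported records on PMID, keeping the first
# occurrence -- roughly what Reference Manager (or OVID's ..dedup n)
# does. The record fields below are illustrative.

def dedupe_by_pmid(records):
    """Return records with only the first occurrence of each PMID kept."""
    seen, unique = set(), []
    for rec in records:
        if rec["pmid"] not in seen:
            seen.add(rec["pmid"])
            unique.append(rec)
    return unique

records = [
    {"pmid": "20846254", "revised": None},
    {"pmid": "20846254", "revised": "2011-10-13"},  # duplicate, later revision
    {"pmid": "18231698", "revised": "2011-08-18"},
]
print(len(dedupe_by_pmid(records)))  # → 2
```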

The duplicates had the same PubMed ID (PMID; the .an. field in OVID) and were identical or almost identical.

Differences that I noticed were minimal changes in the MeSH (i.e. one or more MeSH terms and/or subheadings changed) and changes in journal format (abbreviation used instead of full title).

Why are these duplicates present in OVID MEDLINE and not in PubMed?

These are the details of PMID 20846254 in OVID (2 records) and in PubMed (1 record).

The Electronic Date of Publication (PHST) was September 16th 2010. Two days later the record was included in PubMed, but MeSH were added almost 5 months later (MHDA: 2011/02/12). Around this date records are also entered in OVID MEDLINE. The only difference between the 2 records in OVID MEDLINE is that one record appears to have been revised on 2011-10-13, whereas the other has not.

The duplicate records of PMID 18231698 again have the same creation date (20080527) and entry date (20081203), but one was revised 2010-09-20 and updated 2010-12-14, while the other was revised 2011-08-18 and updated 2011-08-19 (thus almost one year later).

Possibly PubMed changes some records, instantaneously replacing the old ones, but OVID only includes the new PubMed records during MEDLINE-updates and doesn’t delete the old version.

Anyway, wouldn’t it be a good thing if OVID deduplicated its MEDLINE records on a daily basis, or replaced the old records when loading new ones from MEDLINE?

In the meantime, I would recommend applying the deduplication command yourself to get the exact number of unique records retrieved by your search in OVID MEDLINE.

* Mostly because PubMed doesn’t have an adjacency operator.
** Of course, only if you already have an extensive OVID MEDLINE search.





PubMed’s Higher Sensitivity than OVID MEDLINE… & other Published Clichés.

21 08 2011

Is it just me, or are biomedical papers about searching for a systematic review often of low quality, or just too damn obvious? I’m seldom excited about papers dealing with optimal search strategies or peculiarities of PubMed, even though it is my specialty.
It is my impression that many of the lower-quality and/or less relevant papers are written by clinicians/researchers instead of information specialists (or at least without a medical librarian as the first author).

I can’t help thinking that many of those authors just happen to see an odd feature in PubMed or encounter an unexpected  phenomenon in the process of searching for a systematic review.
They think: “Hey, that’s interesting” or “That’s odd. Let’s write a paper about it.” An easy way to boost their scientific output!
What they don’t realize is that the published findings are often common knowledge to the experienced MEDLINE searchers.

Let me give two recent examples of what I think are redundant papers.

The first example is a letter under the heading “Clinical Observation” in Annals of Internal Medicine, entitled:

“Limitations of the MEDLINE Database in Constructing Meta-analyses”.[1]

As the authors rightly state, “a thorough literature search is of utmost importance in constructing a meta-analysis”. Since the PubMed interface from the National Library of Medicine is a cornerstone of many meta-analyses, the authors (two MDs) focused on the freely available PubMed (with MEDLINE as its largest part).

The objective was:

“To assess the accuracy of MEDLINE’s “human” and “clinical trial” search limits, which are used by authors to focus literature searches on relevant articles.” (emphasis mine)

O.k…. Stop! I know enough. This paper should have been titled: “Limitations of Limits in MEDLINE”.

Limits are NOT DONE when searching for a systematic review, for the simple reason that most limits (except language and dates) are MeSH terms.
It takes a while before the indexers have assigned MeSH terms to a paper, and not all papers are correctly (or consistently) indexed. Thus, by using limits you will automatically miss recent, not yet indexed, or incorrectly indexed papers, whereas your goal is (or should be) to find as many relevant papers as possible for your systematic review. And wouldn’t it be sad if you missed that one important RCT that was published just the other day?

On the other hand, one doesn’t want to drown in irrelevant papers. How can one reduce “noise” while minimizing the risk of losing relevant papers?

  1. Use both MeSH and textwords to “limit” your search, i.e. also search “trial” as a textword, i.e. in title and abstract: trial[tiab]
  2. Use more synonyms and truncation (random*[tiab] OR  placebo[tiab])
  3. Don’t actively limit, but use double negation. Thus, to get rid of animal studies, don’t limit to humans (this is the same as ANDing with the MeSH term humans[mh]), but safely exclude animals as follows: NOT (animals[mh] NOT humans[mh]) (= exclude papers indexed with “animals”, except when these papers are also indexed with “humans”).
  4. Use existing Methodological Filters (ready-made search strategies) designed to help focusing on study types. These filters are based on one or more of the above-mentioned principles (see earlier posts here and here).
    Simple methodological filters can be found at the PubMed Clinical Queries. For instance, the narrow filter for Therapy not only searches for the Publication Type “Randomized controlled trial” (a limit), but also for randomized, controlled and trial as textwords.
    Usually broader (more sensitive) filters are used for systematic reviews. The Cochrane handbook proposes to use the following filter maximizing precision and sensitivity to identify randomized trials in PubMed (see http://www.cochrane-handbook.org/):
    (randomized controlled trial [pt] OR controlled clinical trial [pt] OR randomized [tiab] OR placebo [tiab] OR clinical trials as topic [mesh: noexp] OR randomly [tiab] OR trial [ti]) NOT (animals [mh] NOT humans [mh]).
    When few hits are obtained, one can either use a broader filter or no filter at all.
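Applying the Cochrane filter quoted above to a topic search (instead of using limits) is mechanical. A small Python sketch; the helper function and the example topic query are mine, and the filter is written here without the optional spaces inside the tags:

```python
# Sketch: AND a topic search together with the Cochrane Handbook filter
# for randomized trials (quoted above), rather than applying limits.

COCHRANE_RCT_FILTER = (
    "(randomized controlled trial[pt] OR controlled clinical trial[pt] OR "
    "randomized[tiab] OR placebo[tiab] OR clinical trials as topic[mesh:noexp] OR "
    "randomly[tiab] OR trial[ti]) NOT (animals[mh] NOT humans[mh])"
)

def with_rct_filter(topic_query):
    """Wrap the topic search in parentheses and append the RCT filter."""
    return f"({topic_query}) AND {COCHRANE_RCT_FILTER}"

print(with_rct_filter("hirsutism AND spironolactone"))
```

The double negation NOT (animals[mh] NOT humans[mh]) at the end is the same trick as in point 3 above.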

In other words, it is a beginner’s mistake to use limits when searching for a systematic review.
Besides publishing what should be common knowledge (even our medical students learn it), the authors make many other small mistakes; their precise search is difficult to reproduce and far from complete. This has already been addressed by Dutch colleagues in a comment [2].

The second paper is:

PubMed had a higher sensitivity than Ovid-MEDLINE in the search for systematic reviews [3], by Katchamart et al.

Again this paper focuses on the usefulness of PubMed to identify RCTs for a systematic review, but it concentrates on the differences between PubMed and OVID in this respect. The paper starts by explaining that PubMed:

provides access to bibliographic information in addition to MEDLINE, such as in-process citations (..), some OLDMEDLINE citations (….) citations that precede the date that a journal was selected for MEDLINE indexing, and some additional life science journals that submit full texts to PubMed Central and receive a qualitative review by NLM.

Given these “facts”, am I exaggerating when I say that the authors are pushing at an open door when their main conclusion is that PubMed retrieved more citations overall than Ovid-MEDLINE? The one (!) relevant article missed in OVID was a 2005 study published in a Japanese journal that MEDLINE started indexing in 2007. It was therefore in PubMed, but not in OVID MEDLINE.

An important aspect to keep in mind when searching OVID/MEDLINE (as I have discussed earlier, here and here). But worth a paper?

Recently, after finishing an exhaustive search in OVID/MEDLINE, we noticed that we had missed an RCT in PubMed that was not yet available in OVID/MEDLINE. I just added one sentence to the search methods:

Additionally, PubMed was searched for randomized controlled trials ahead of print, not yet included in OVID MEDLINE. 

Of course, I could have devoted a separate article to this finding. But it is so self-evident, that I don’t think it would be worth it.

The authors have expressed their findings in sensitivity (85% for Ovid-MEDLINE vs. 90% for PubMed; the 5% difference is that ONE missing paper), precision, and number needed to read (comparable for Ovid-MEDLINE and PubMed).

If I may venture another opinion: it looks like editors of medical and epidemiology journals quickly fall for “diagnostic parameters” on a topic that they don’t understand very well: library science.

The sensitivity/precision data found have little general value, because:

  • it concerns a single search on a single topic
  • there are few relevant papers (17-18)
  • useful features of OVID MEDLINE that are not available in PubMed are not used, e.g. adjacency searching could enhance the retrieval of relevant papers in OVID MEDLINE (adjacency = words searched within a specified maximal distance of each other)
  • the searches are not comparable, nor are the search field commands.

The latter is very important, if one doesn’t wish to compare apples and oranges.

Let’s take a look at the first part of the search (which is in itself well structured and covers many synonyms).
[Figure: first part of the search]
This part of the search deals with the P: patients with rheumatoid arthritis (RA). The authors first search for relevant MeSH (sets 1-5) and then for a few textwords. The MeSH are fine. The authors have chosen to use Arthritis, Rheumatoid and a few narrower terms (MeSH tree shown at the right). The authors have taken care to use the [mesh:noexp] command in PubMed to prevent the automatic explosion of narrower terms (although this is superfluous for MeSH terms that have no narrower terms, like Caplan Syndrome etc.).

But the fields chosen for the free text search (sets 6-9) are not comparable at all.

In OVID the .mp. field is used, whereas “all fields” or even no field tags are used in PubMed.

I am not even fond of the uncontrolled use of .mp. (I’d rather search in title and abstract; remember, we already have the proper MeSH terms), but all fields is even broader than .mp.

In general, a .mp. search looks in the Title, Original Title, Abstract, Subject Heading, Name of Substance, and Registry Word fields. All fields would be .af. in OVID, not .mp.

Searching for rheumatism in OVID using the .mp. field yields 7879 hits, against 31390 hits when one searches in the .af. field.

Thus 4 times as many. Extra fields searched include, for instance, the journal and the address fields. One finds all articles in the journal Arthritis & Rheumatism, for instance [line 6], or papers co-authored by someone at a dept. of rheumatoid surgery [line 9].

Worse, in PubMed the “all fields” command doesn’t prevent the automatic mapping.

In PubMed, Rheumatism[All Fields] is translated as follows:

“rheumatic diseases”[MeSH Terms] OR (“rheumatic”[All Fields] AND “diseases”[All Fields]) OR “rheumatic diseases”[All Fields] OR “rheumatism”[All Fields]

Oops: Rheumatism[All Fields] is searched as the (exploded!) MeSH term rheumatic diseases, thus rheumatic diseases (not included in the MeSH search) plus all its narrower terms! This makes the entire first part of the PubMed search redundant (where the authors searched for non-exploded specific terms). It explains the large difference in hits for rheumatism between PubMed and OVID/MEDLINE: 11910 vs 6945.

Not only do the authors use the .mp. and [All Fields] commands instead of the preferred [tiab] field, they also apply this broader field to the existing (optimized) Cochrane filter, which uses [tiab]. Finally, they use limits!

Well, anyway, I hope I have made my point that a useful comparison between strategies can only be made if optimal and comparable strategies are used. Sensitivity doesn’t mean anything here.

Coming back to my original point: I do think that some conclusions of these papers are “good to know”. As a matter of fact, they should be basic knowledge for anyone planning an exhaustive search for a systematic review. We do not need bad studies to show this.

Perhaps an expert paper (or a series) on this topic, understandable for clinicians, would be of more value.

Or the recognition that such search papers should be designed and written by librarians with ample experience in searching for systematic reviews.

NOTE:
* = truncation=search for different word endings; [tiab] = title and abstract; [ti]=title; mh=mesh; pt=publication type

Photo credit

The image is taken from the Dragonfly-blog; here the Flickr-image Brain Vocab Sketch by labguest was adapted by adding the Pubmed logo.

References

  1. Winchester DE, & Bavry AA (2010). Limitations of the MEDLINE database in constructing meta-analyses. Annals of Internal Medicine, 153 (5), 347-8. PMID: 20820050
  2. Leclercq E, Kramer B, & Schats W (2011). Limitations of the MEDLINE database in constructing meta-analyses. Annals of Internal Medicine, 154 (5). PMID: 21357916
  3. Katchamart W, Faulkner A, Feldman B, Tomlinson G, & Bombardier C (2011). PubMed had a higher sensitivity than Ovid-MEDLINE in the search for systematic reviews. Journal of Clinical Epidemiology, 64 (7), 805-7. PMID: 20926257
  4. Search OVID EMBASE and Get MEDLINE for Free…. without knowing it (laikaspoetnik.wordpress.com 2010/10/19/)
  5. 10 + 1 PubMed Tips for Residents (and their Instructors) (laikaspoetnik.wordpress.com 2009/06/30)
  6. Adding Methodological filters to myncbi (laikaspoetnik.wordpress.com 2009/11/26/)
  7. Search filters 1. An Introduction (laikaspoetnik.wordpress.com 2009/01/22/)




Problems with Disappearing Set Numbers in PubMed’s Clinical Queries

18 10 2010

In some upcoming posts I will address various problems related to the changing interfaces of bibliographic databases.

We, librarians and end users, are overwhelmed by a flood of so-called upgrades, which often fail to bring the improvements that were promised….. or which go hand-in-hand with temporary glitches.

Christina of Christina’s LIS Rant even made a rundown of the new interfaces of last summer. Although she didn’t include OVID MEDLINE/EMBASE, the Cochrane Library and Reference Manager in her list, the total number of changed interfaces reached 22!

As a matter of fact, the Cochrane Library was suffering some outages yesterday, to repair some bugs. So I will postpone my coverage of the Cochrane bugs a little.

And OVID sent out a notice last week: “This week Ovid will be deploying a software release of the OvidSP platform that will add new functionality and address improvements to some existing functionality.”

In this post I will confine myself to the PubMed Clinical Queries. According to Christina the PubMed changes “were a bit ago”, but PubMed continuously tweaks its interface, often without paying much attention to the effects.

Back in July, I already covered that the redesign of the PubMed Clinical Queries was no improvement for people who wanted to do more than a quick and dirty search.

It was no longer possible to enter a set number in the Clinical Queries search bar. Thus it wasn’t possible to set up a search in PubMed first and to then enter the final set number in the Clinical Queries. This bug was repaired promptly.

From then on, the set number could be entered again in the clinical queries.

However, one bug was replaced by another: next, search numbers were disappearing from the search history.

I will use the example I used before: I want to know if spironolactone reduces hirsutism in women with PCOS, and if it works better than cyproterone acetate.

Since little is published about this topic, I only search for hirsutism and spironolactone. These terms map correctly to MeSH terms. In the MeSH database I also see (under “see also”) that spironolactone belongs to the aldosterone antagonists, so I broaden spironolactone (#2) with “Aldosterone Antagonists”[Pharmacological Action] using “OR” (set #7). My last set (#8) consists of #1 (hirsutism) AND #7 (#2 OR #6).

Next I go to the Clinical Queries in the Advanced Search and enter #8 (now possible again).

I change the Therapy Filter from “broad”  to “narrow”, because the broad filter gives too much noise.

In the clinical queries you see only the first five results.

Apparently even the clinical queries are now designed for just taking a quick look at the most recent results, but of course that is NOT what we are trying to achieve when we search for (the best) evidence.

To see all results for the narrow therapy filter, I have to go back to the Clinical Queries again and click on “see all (27)” [5].

A bit of a long way around. But it gets longer…


The 27 hits that result from combining the narrow therapy filter with my search #8 appear. This is set #9.
Note that it has a lower number than set #11 (search + systematic review filter).

Meanwhile set #9 has disappeared from my history.

This is a nuisance if I want to use this set further or if I want to give an overview of my search, i.e. for a presentation.

There are several tricks by which this flaw can be overcome. But they are all cumbersome.

1. Just add the set number (#11 in this case, which is the last search (#8) + 3 more) to the search history (you have to remember the search set number, though).

This is the set number remembered by the system. As you see in the history, you “miss” certain sets. Sets #3 to #5, for instance, are searches performed in the MeSH database, which show up in the history of the MeSH database, but not in PubMed’s history.

The clinical query set number is still there, but it doesn’t show either. Apparently the 3 clinical-query subsets each get a separate set number, whether the search is truly performed or not: in this case #11 for (#8) AND systematic[sb], #9 for (#8) AND (Therapy/Narrow[filter]), and #10 for (#8) AND the medical genetics filter.

In this way you have all results in your history. It isn’t immediately clear, however, what these sets represent.

2. Use the commands rather than going to the clinical queries.

Thus type in the search bar: #8 AND systematic[sb]

And then: #8 AND (Therapy/Narrow[filter])

It is easiest to keep all filters in Word/Notepad and copy/paste each time you need a filter.

3. Add clinical queries as filters to your personal NCBI account so that the filters show up each time you do a search in PubMed. This post describes how to do it.
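Option 2 above (typing the filter commands yourself) lends itself to keeping the filters as named strings rather than in Word/Notepad. A small Python sketch; the filter names in the dictionary are my own labels:

```python
# Sketch: store the clinical-query filters as named strings and paste
# one onto a search set by its set number (option 2 above).

FILTERS = {
    "systematic reviews": "systematic[sb]",
    "therapy (narrow)": "(Therapy/Narrow[filter])",
}

def apply_filter(set_number, filter_name):
    """Combine an existing search set with a named filter."""
    return f"#{set_number} AND {FILTERS[filter_name]}"

print(apply_filter(8, "therapy (narrow)"))  # → #8 AND (Therapy/Narrow[filter])
```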

Anyway, these remain just tricks to try to make right something that is wrong.

Furthermore, it makes it more difficult to explain the usefulness of the clinical queries to doctors and medical students. Explaining option 3 takes too long in a short course, option 1 seems illogical, and option 2 is hard to remember.

Thus we want to keep the set numbers in the history, at least.

A while ago Dieuwke Brand notified the NLM of this problem.

Only recently she received an answer saying that:

we are aware of the continuing problem.  The problem remains on our programmers’ list of items to investigate.  Unfortunately, because this problem appears to be limited to very few users, it has been listed as a low priority.

Only after a second Dutch medical librarian confirmed the problem to the NLM, saying it not only affects one or two librarians but all the students we teach (~1000-2000 students per university yearly), did they realize that it was more widespread than Dieuwke Brand’s personal problem. Now the problem has a higher priority.

Where is the time that a problem was taken for what it was? As another librarian sighed: Apparently something is only a problem if many people complain about it.

Now that I know this (I regarded Dieuwke as a delegate of all Dutch clinical librarians), I realize that I have to “complain” myself each time I and/or my colleagues encounter a problem.






Thoughts on the PubMed Clinical Queries Redesign

7 07 2010

Added 2010-07-09:  It is possible to enter the set numbers again, but the results are not yet reliable. They are probably working on it.

Last Wednesday (June 30th 2010) the PubMed Clinical Queries were redesigned.

Clinical Queries are prefab search filters that enable you to find aggregate evidence (the Systematic Reviews filter) or articles in a certain domain (the Clinical Study Category filters, like diagnosis and therapy), as well as papers in the field of Medical Genetics (not shown below).

This was how it looked:

Since there were several different boxes you had to re-enter your search each time you tried another filter.

Now the Clinical Queries page has been reconfigured with columns to preview the first five citations of the results for all three research areas.

So this is how it looks now (search= PCOS spironolactone cyproterone hirsutism (PubMed automatically connects with “AND”))


Most quick responses to the change are “Neat”, “improved”, “tightened up”…….

This change might be a stylistic improvement for those who are used to entering words in the clinical queries without optimizing the search. At least you see “what you get”, you can preview the results of 3 filters, and you can still see “all” results by clicking on “see all”. However, if you want to see all the results of another filter, you still have to go back to the clinical queries again.

But… I was not pleased to notice that it is no longer possible to enter a set number (i.e. #9) in the clinical queries search bar.

….Especially since the actual change was just before the start of an EBM-search session. I totally relied on this feature….

  1. Laika (Jacqueline)
    laikas Holy shit. #Pubmed altered the clinical queries, so that I can’t optimize my search first and enter the setnumber in the clin queries later.
  2. Laika (Jacqueline)
    laikas Holy shit 2 And I have a search class in 15 minutes. Can’t prepare changes. I hate this #pubmed #fail
  3. Mark MacEachern
    markmac perfect timing (for an intrface chnge) RT @laikas Holy shit 2 And I have a search class in 15 min. Can’t prepare changes. #pubmed #fail


Furthermore, the clinical study category now defaults to “therapy broad” instead of narrow. This means a lot more noise: the broad filter searches for (all) clinical trials, while the narrow filter is meant to find randomized controlled trials only.

Normally I optimize the search first before entering the final search set number into the clinical queries (see Tip 9 of “10 + 1 PubMed Tips for Residents (and their Instructors)“). For instance, the above search would not include PCOS (which doesn’t map to the proper MeSH and isn’t required) and cyproterone, but would consist of hirsutism AND spironolactone (both mapping to the appropriate MeSH).

The set number of the “optimized” search is then entered in the search box of the systematic review filter. This yields 9 more hits, including Cochrane systematic reviews. The narrow therapy filter also gives more hits, which are more relevant as well (24).

The example that is shown in the NLM technical bulletin (dementia stroke) yields 142 systematic reviews and 1318 individual trials of which only the 5 most recent trials are shown. Not very helpful to doctors and scientists, IMHO.

Anyway, we “lost” a (roundabout) way to optimize the search before entering it into the search box.

The preview of 3 boxes is o.k., the looks are o.k., but why was this functionality lost?

For the moment I decided to teach my class another option that I use myself: adding the clinical queries as filters to your personal NCBI account, so that they show up each time you perform a search in PubMed (this post describes how to do it).

It only takes some time to create NCBI accounts and to explain the procedure to the class, time you would like to save for the searches themselves (in a 1-2 hr workshop). But it is the most practical solution.

We notified PubMed, but it is not clear whether they plan to restore this function.


Still, for advanced users, adding filters to your NCBI may be most practical.

——-

* Re-entering spironolactone and hirsutism in the clinical queries is doable here, but often the search is more complex and differs per filter. For instance, I might add a third concept when looking for an individual trial.





PubMed versus Google Scholar for Retrieving Evidence

8 06 2010

A while ago a resident in dermatology told me she got many hits in PubMed, but zero results from TRIP. It appeared she had used the same search for both databases: alopecia areata and diphencyprone (a drug with a lot of synonyms). Searching TRIP for alopecia (in the title) only, we found a Cochrane review and a relevant NICE guideline.

Usually, each search engine has its own search and index features. When comparing databases one should compare “optimal” searches and keep in mind for what purpose the search engines were designed. TRIP is best suited to searching aggregate evidence, whereas PubMed is best suited to searching individual biomedical articles.

Michael Anders and Dennis Evans ignore this “rule of thumb” in their recent paper “Comparison of PubMed and Google Scholar Literature Searches”. And this is not the only shortcoming of the paper.

The authors performed searches on 3 different topics to compare PubMed and Google Scholar search results. Their main aim was to see which database was the most useful to find clinical evidence in respiratory care.

Well quick guess: PubMed wins…

The 3 respiratory care topics were selected from a list of systematic reviews on the Website of the Cochrane Collaboration and represented in-patient care, out-patient care, and pediatrics.

The references in the three chosen Cochrane systematic reviews served as the “reference” (or “gold”) standard. However, abstracts, conference proceedings, and responses to letters were excluded.

So far so good. But note that the outcome of the study only allows us to draw conclusions about intervention questions, which seek to find controlled clinical trials. Other principles may apply to other domains (diagnosis, etiology/harm, prognosis) or to other types of studies. And it certainly doesn’t apply to non-EBM topics.

The authors designed ONE search for each topic, by taking 2 common clinical terms from the title of each Cochrane review, connected by the Boolean operator “AND” (see table; the quotation marks were not used). No synonyms were used, and the translation of the searches in PubMed wasn’t checked (luckily the mapping was rather good).

“Mmmmm…”

Topic → search terms:

  • Noninvasive positive-pressure ventilation for cardiogenic pulmonary edema: “noninvasive positive-pressure ventilation” AND “pulmonary edema”
  • Self-management education and regular practitioner review for adults with asthma: “asthma” AND “education”
  • Ribavirin for respiratory syncytial virus: “ribavirin” AND “respiratory syncytial virus”

In PubMed they applied the narrow methodological filter, or Clinical Query, for the therapy domain.
This prefab search strategy (randomized controlled trial[Publication Type] OR (randomized[Title/Abstract] AND controlled[Title/Abstract] AND trial[Title/Abstract])), developed by Haynes, is suitable for quickly detecting the available evidence (provided one is looking for RCTs and isn’t doing an exhaustive search) (see previous posts 2, 3, 4).

Google Scholar, as we all probably know, does not have such methodological filters. The authors “limited” their search by using the Advanced option, entering the 2 search terms in the “Find articles… with all of the words” box (a Boolean “AND”), and restricting the search to the subject areas “Medicine, Pharmacology, and Veterinary Science”.

They did a separate search for publications that were available at their own library, which is of limited value to others, since subscriptions differ per library.

Next they determined the sensitivity (the number of relevant records retrieved as a proportion of the total number of records in the gold standard) and the precision, or positive predictive value: the fraction of retrieved records that are relevant (explained in 3).
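In code, these two measures are simple ratios. A minimal sketch (function names are mine; the numbers are the ones reported in the study for the ribavirin/RSV recall and the pooled precision):

```python
def sensitivity(relevant_retrieved, gold_standard_total):
    """Relevant records retrieved as a proportion of the gold standard."""
    return relevant_retrieved / gold_standard_total

def precision(relevant_retrieved, total_retrieved):
    """Positive predictive value: relevant hits / all hits returned."""
    return relevant_retrieved / total_retrieved

print(f"PubMed recall (ribavirin/RSV): {sensitivity(12, 12):.0%}")   # 100%
print(f"Google Scholar recall:         {sensitivity(7, 12):.0%}")    # 58%
print(f"PubMed precision:              {precision(59, 467):.1%}")
print(f"Google Scholar precision:      {precision(57, 80730):.2%}")
```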

Let me guess: sensitivity might be equal or somewhat higher, and precision is undoubtedly much lower in Google Scholar. This is because Google Scholar:

  • often searches the full text instead of just the title, abstract and (added) keywords/MeSH
  • inflates the results by retrieving one and the same reference cited in many different papers (which may not directly deal with the subject)
  • offers no limits for methodology, study type or “evidence”
  • has no automatic mapping and explosion (which in PubMed adds synonyms and thus more relevant studies)
  • has broader coverage (grey literature, books, more topics)
  • lags behind PubMed in receiving updates from MEDLINE

Results: PubMed and Google Scholar had pretty much the same recall, except for ribavirin and RSV, where recall was higher in PubMed: PubMed found 100% (12/12) of the included trials, Google Scholar 58% (7/12).

The authors don’t discuss why. Since Google Scholar should find the words in the titles and abstracts of PubMed records, I repeated the search in PubMed restricted to the title/abstract field (ribavirin[tiab] AND respiratory syncytial virus[tiab]*) and limited it with the narrow therapy filter: I found 26 papers instead of 32. The following titles were missing when I searched title and abstract only (between brackets: the relevant MeSH term, i.e. the reason the paper was found; absence of an abstract, thus only title and MeSH; and letter status — the terms in the titles show why they were not found in title or abstract):

  1. Evaluation by survival analysis on effect of traditional Chinese medicine in treating children with respiratory syncytial viral pneumonia of phlegm-heat blocking Fei syndrome.
    [MeSH: Respiratory Syncytial Virus Infections]
  2. Ribavarin in ventilated respiratory syncytial virus bronchiolitis: a randomized, placebo-controlled trial.
    [MeSH: Respiratory Syncytial Virus Infections] [NO ABSTRACT, LETTER]
  3. Study of interobserver reliability in clinical assessment of RSV lower respiratory illness.
    [MeSH: Respiratory Syncytial Virus Infections*]
  4. Ribavirin for severe RSV infection. N Engl J Med.
    [MeSH: Respiratory Syncytial Viruses] [NO ABSTRACT, LETTER]
  5. Stutman HR, Rub B, Janaim HK. New data on clinical efficacy of ribavirin.
    [MeSH: Respiratory Syncytial Viruses] [NO ABSTRACT]
  6. Clinical studies with ribavirin.
    [MeSH: Respiratory Syncytial Viruses] [NO ABSTRACT]
Three of the papers had the additional MeSH term respiratory syncytial virus infections and the other three respiratory syncytial viruses. Although not all papers may be relevant (two are comments/letters), this illustrates why PubMed may yield results that are not retrieved by Google Scholar (if one doesn’t use synonyms).

In contrast to Google Scholar, PubMed translates the search ribavirin AND respiratory syncytial virus so that the MeSH terms “ribavirin”[MeSH], “respiratory syncytial viruses”[MeSH] and (indirectly) “respiratory syncytial virus infections”[MeSH] are also searched.

Thus, with the above-mentioned search, Google Scholar could have missed articles using terms like RSV or respiratory syncytial viral pneumonia (or unspecific wording, like clinical efficacy).

The other result of the study (the result section comprises 3 sentences) is that “For each individual search, PubMed had better precision”.

The precision was 59/467 (13%) in PubMed and 57/80,730 (0.07%) in Google Scholar (p<0.001)!!
(note: they had to add author names to the Google Scholar search to find the papers in the haystack ;)

Héhéhé, how surprising. Why do you think no clinician or librarian would ever consider using Google Scholar as the primary, let alone the only, source to search for medical evidence?
It should also ring a bell that [QUOTE**]:
In the Cochrane reviews the researchers retrieved information from multiple databases, including MEDLINE, the Cochrane Airways Group trial register (derived from MEDLINE)***, CENTRAL, EMBASE, CINAHL, DARE, NHSEED, the Acute Respiratory Infections Group’s specialized register, and LILACS… “
Note that Google Scholar isn’t mentioned as a source! Google Scholar is only recommended for finding work that cites (already found) relevant articles (so-called forward searching), if one doesn’t have access to Web of Science or SCOPUS. Thus only to catch the last fish.

Perhaps the paper would have been more interesting if the authors had looked at any ADDED VALUE of Google Scholar when exhaustively searching for evidence. Then it would have been crucial to include grey literature (instead of excluding it), because this could be a strong point of Google Scholar. Furthermore, one could have examined whether forward searching yielded extra papers.

The higher precision of PubMed is attributed to the narrow therapy filter used, but the vastly lower precision of Google Scholar is also due to its searching of the full text, including the reference lists.

For instance, searching for ribavirin AND respiratory syncytial virus in PubMed yields 523 hits, which applying the narrow therapy filter reduces to 32: a reduction by a factor of 16.
Yet a similar search in Google Scholar yields 4,080 hits. Thus, even without the filter, Google Scholar yields almost 8 times as many hits as PubMed.
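The two factors mentioned above are easy to verify (hit counts as reported in this post):

```python
pubmed_unfiltered = 523    # ribavirin AND respiratory syncytial virus in PubMed
pubmed_filtered = 32       # the same search with the narrow therapy filter
scholar_unfiltered = 4080  # the same search in Google Scholar

# The narrow therapy filter shrinks the PubMed result set ~16-fold
print(round(pubmed_unfiltered / pubmed_filtered))    # 16

# Even unfiltered, Google Scholar returns ~8x as many hits as PubMed
print(round(scholar_unfiltered / pubmed_unfiltered)) # 8
```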

That evokes another research idea: what would have happened if randomized (OR randomised) had been added to the Google Scholar search? Would this have increased the precision? For the above search it lowers the yield by a factor of 2, and the first hits look very relevant.

It is really funny, but the authors undermine their own conclusion that “These results are important because efficient retrieval of the best available scientific evidence can inform respiratory care protocols, recommendations for clinical decisions in individual patients, and education, while minimizing information overload” by saying elsewhere that “It is unlikely that users consider more than the first few hundred search results, so RTs who conduct literature searches with Google Scholar on these topics will be much less likely to find references cited in Cochrane reviews.”

Indeed, no one would take it into their head to try to pick the relevant papers out of those 4,080 hits. So what is this study worth from a practical point of view?

Well, anyway, just as you can ask for the sake of asking, you can research for the sake of researching. Despite being an EBM addict, I prefer a good subjective overview of this topic over a weak scientific, quasi-evidence-based research paper.

Does this mean Google Scholar is useless? Does it mean that all those PhDs hooked on Google Scholar are wrong?

No, Google Scholar serves certain purposes.

Just like the example of PubMed and TRIP, you need to know what is in it for you and how to use it.

I used Google Scholar when I was a researcher:

  • to quickly find a known reference
  • to find citing papers
  • to get an idea of how often articles have been cited / to find the most relevant papers in a quick and dirty way (i.e. by browsing)
  • for quick and dirty searches, by putting word strings between quotation marks (phrase searching)
  • to search the full text. I used quite extensive searches to find out which methods were used (for instance methods AND (synonym1 OR syn2 OR syn3)). An interesting possibility is to do a second search for only the last few words of a retrieved string. This will often reveal the next words in the sentence. Often you can repeat this trick, reading a piece of the paper without needing access.

If you want to know more about the pros and cons of Google Scholar, I recommend the recent overview by the expert librarian Dean Giustini: “Sure Google Scholar is ideal for some things” [7]. He also compiled a “Google Scholar bibliography” with ~115 articles as of May 2010.

Speaking of librarians, why was the study performed by PhD RRT (RN)’s and wasn’t the university librarian involved?****

* this is a phrase search, stricter than respiratory AND syncytial AND virus
** abbreviations used instead of full (database) names
*** this is wrong: the register contains references to controlled clinical trials from EMBASE, CINAHL and all kinds of databases in addition to MEDLINE.
**** other than to read the manuscript afterwards.

References

  1. Anders ME, & Evans DP (2010). Comparison of PubMed and Google Scholar Literature Searches. Respiratory care, 55 (5), 578-83 PMID: 20420728
  2. This Blog: http://laikaspoetnik.wordpress.com/2009/11/26/adding-methodological-filters-to-myncbi/
  3. This Blog: http://laikaspoetnik.wordpress.com/2009/01/22/search-filters-1-an-introduction/
  4. This Blog: http://laikaspoetnik.wordpress.com/2009/06/30/10-1-pubmed-tips-for-residents-and-their-instructors/
  5. NeuroDojo (2010/05) Pubmed vs Google Scholar? [also gives a nice overview of pros and cons]
  6. GenomeWeb (2010/05/10) Content versus interface at the heart of Pubmed versus Scholar?/ [response to 5]
  7. The Search principle Blog (2010/05) Sure Google Scholar is ideal for some things.




An Evidence Pyramid that Facilitates the Finding of Evidence

20 03 2010

Earlier I described that there are so many search- and EBM-pyramids that it is confusing. I distinguished 3 categories of pyramids:

  1. Search Pyramids
  2. Pyramids of EBM-sources
  3. Pyramids of EBM-levels (levels of evidence)

In my courses where I train doctors and medical students how to find evidence quickly, I use a pyramid that is a mixture of 1. and 2. This is a slide from a 2007 course.

This pyramid consists of 4 layers (from top down):

  1. EBM-(evidence based) guidelines.
  2. Synopses & Syntheses*: a synopsis is a summary and critical appraisal of one article, whereas synthesis is a summary and critical appraisal of a topic (which may answer several questions and may cover many articles).
  3. Systematic Reviews (a systematic summary and critical appraisal of original studies) which may or may not include a meta-analysis.
  4. Original Studies.

The upper 3 layers represent “aggregate evidence”: evidence from secondary sources that search, summarize and critically appraise original studies (the lowest layer of the pyramid).

The layers do not necessarily represent the levels of evidence and should not be confused with Pyramids of EBM-levels (type 3). An Evidence Based guideline can have a lower level of evidence than a good systematic review, for instance.
The present pyramid is only meant to lead the way in the labyrinth of sources, and thus to speed up the process of searching. The relevance and the quality of the evidence should always be checked.

The idea is:

  • The higher the level in the pyramid the less publications it contains (the narrower it becomes)
  • Each level summarizes and critically appraises the underlying levels.

I advise people to try to find aggregate evidence first, thus to drill down (hence the drill in the figure).

The advantage: faster results and a lower number needed to read (NNR).

During the first courses I gave, I just made a pyramid in Word with the links to the main sources.

Our library ICT department converted it into a HTML document with clickable links.

However, although the pyramid looked quite complex, not all main evidence sources were included, and some sources belong to more than one layer. The Trip Database, for instance, searches sources from all layers.

Our ICT-department came up with a much better looking and better functioning 3-D pyramid, with databases like TRIP in the sidebar.

Moving the mouse over a pyramid layer invokes a pop-up with links to the databases belonging to that layer.

Furthermore, the sources included in the pyramid differ per specialty. For the Gynecology department, for example, we include POPLINE and MIDIRS in the lowest layer, and the RCOG and NVOG (Dutch) guidelines in the EBM-guidelines layer.

Together my colleagues and I decide whether a source is evidence based (we don’t include UpToDate, for instance) and where it belongs. Each clinical librarian (we all serve different departments) then decides which databases to include. Clients can give suggestions.

Below is a short YouTube video showing how this pyramid can be used. Because of the rather poor quality, the video is best viewed in full-screen mode.
There is no audio (yet), so in short this is what you see:

Made with Screenr:  http://screenr.com/8kg

The pyramid is highly appreciated by our clients and students.

But it is just a start. My dream is to visualize the entire pathway from question to PICO, checklists, FAQs and database of results per type of question/reason for searching (fast question, background question, CAT etc.).

I’m just waiting for someone to fulfill the technical part of this dream.

————–

*Note that there may be different definitions as well. The top layers in the 5S pyramid of Brian Haynes are defined as follows: syntheses & synopses (succinct descriptions of selected individual studies or systematic reviews, such as those found in the evidence-based journals); summaries, which integrate the best available evidence from the lower layers to develop practice guidelines based on a full range of evidence (e.g. Clinical Evidence, National Guideline Clearinghouse); and, at the peak of the model, systems, in which the individual patient’s characteristics are automatically linked to the current best evidence that matches the patient’s specific circumstances and the clinician is provided with key aspects of management (e.g., computerized decision support systems).





Adding Methodological Filters to MyNCBI

26 11 2009

Idea: Arnold Leenders
Text: “Laika”

Methodological search filters can help to narrow down a search by enriching for studies with a certain study design or methodology. PubMed has built-in methodological filters, the so-called Clinical Queries, for domains (like therapy and diagnosis) and for evidence-based papers (like the “Systematic Review subset” in PubMed). These searches are often useful to quickly find evidence on a topic or to perform a CAT (Critically Appraised Topic). More exhaustive searches require broader filters not incorporated in PubMed. (See Search Filters. 1. An Introduction.)

The redesign of PubMed has made it more difficult to apply Clinical Queries after a search has been optimized. You can still go directly to the Clinical Queries (on the front page) and fill in some terms, but we advise building the strategy first, checking the terms, and combining the search with filters afterwards.

Suppose you would like to find out whether spironolactone effectively reduces hirsutism in a female with PCOS (see 10 + 1 PubMed Tips for Residents and their Instructors, Tip 9). You first check that the main concepts hirsutism and spironolactone are OK (i.e. they map automatically to the correct MeSH terms). Applying the Clinical Queries at this stage would require you to scroll down the page each time you use them.

Instead you can use filters in My NCBI for that purpose. My NCBI is your (free) personal space for saving searches, results, PubMed preferences, for creating automatic email alerts and for creating Search Filters.
The My NCBI-option is at the upper right of the PubMed page. You first have to create a free account.

To activate or create filters, go to [1] My NCBI and click on [2] Search Filters.

Since our purpose is to make filters for PubMed, choose [3] PubMed from the list of NCBI-databases.

Under Frequently Requested Filters you find the most popular limit options. You can choose any of these filters for future use, which works faster than looking up the appropriate limit each time. You can, for instance, use the Humans filter to exclude animal studies.

The filters we are going to use are under “Browse Filters”, subcategory Properties: under Clinical Queries (domains, i.e. therapy) and Subsets (Systematic Review filters).

You can choose any filter you like. I chose the Systematic Review filter (under Subsets) and the Therapy/Narrow filter (under Clinical Queries).

In addition you can add custom filters. For instance, if you perform broad searches, you might want to add a sensitive Cochrane RCT filter. Click Custom Filters, give the filter a name, and copy/paste the search string you want to use as a filter.
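For example, one published version of the Cochrane Highly Sensitive Search Strategy for identifying randomized trials in PubMed (the sensitivity-maximizing version from the 2008 Handbook revision; do verify it against the current Cochrane Handbook before relying on it) could be pasted in as such a custom filter:

```
(randomized controlled trial[pt] OR controlled clinical trial[pt] OR randomized[tiab] OR placebo[tiab] OR drug therapy[sh] OR randomly[tiab] OR trial[tiab] OR groups[tiab]) NOT (animals[mh] NOT humans[mh])
```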

Check via “Run Filter” whether the filter works (the number of hits is shown) and SAVE the filter.

Next you have to activate the filters you want to use. Note there is a limit of 15 filters (including custom filters) that can be selected and listed in My Filters. [edited July 5th, hat tip Tanya Feddern-Bekcan]

Under My Filters you now see the filters you have chosen or created.

From now on I can use these filters to limit my search. So let’s go to my original search in “Advanced Search”. Unfiltered, search #3 (hirsutism AND spironolactone) has 197 hits.

When you click on the number of hits you arrive at the results page.
At the right are the filters with the number of results of your search combined with these filters (between brackets).

When you click on the Systematic Reviews link you see the 11 results, most of them very relevant. Filters (except the custom filters) can be appended to the search (and thus saved) by clicking the yellow + button.

Each time you do a search (while logged in to My NCBI), the filtered results are automatically shown at the right.

In short: Clinical Queries are often useful when you are looking for evidence or making a CAT (Critically Appraised Topic). In the new version of PubMed, however, the Clinical Queries are harder to find. It is therefore convenient to include certain Clinical Queries in My NCBI. These queries can be found under Browse Filters (an option under Search Filters).

It is also possible to create special search filters, such as the Cochrane highly sensitive filter for RCTs. This can be done under Custom Filters.

Do check via “Run Filter” whether the filter works, and then save it.

After that you still have to activate the filter by ticking its checkbox. You could, for instance, include all filters of the Clinical Study Category and activate them depending on the domain of your question.

This way you always have all the filters at hand. The results are shown automatically (at the right).





Presentation at the #NVB09: “Help, the doctor is drowning”

16 11 2009

Last week I was invited to speak at the NVB congress of the Dutch society for librarians and information specialists. I replaced Josje Calff in the session “the professional”, chaired by Bram Donkers of the magazine InformatieProfessional. Other sessions were “the client”, “the technique” and “the connection” (see program).

It was a very successful meeting, with Andrew Keen and Bas Haring in the plenary session. I understand from tweets and blogposts that @eppovannispen and @lykle, who spoke in parallel sessions, were especially interesting.
Some of the (Dutch) blogposts (not about my presentation… phew) are:

I promised to upload my presentation to Slideshare. And here it is.

Some slides differ from the original. First, Slideshare doesn’t allow animation (so slides had to be added to get a similar effect); second, I realized later that the article and search I showed in Ede were not yet published, so I put “top secret” in front of them.

The title refers to a Dutch book and film: “Help de dokter verzuipt” (“Help the doctor is drowning”).

Slides 2-4: NVB-tracks; why I couldn’t discuss “the professional” without explaining the changes with which the medical profession is confronted.

Slides 5-8: Clients of a medical librarian (dependent on where he/she works).

Slides 9-38: Changes to the medical profession (less time, opinion-based medicine gradually replaced by evidence based medicine, information overload, many sources, information literacy)

Slides 39-66: How medical librarians can help (‘electronic’ collection accessible from home, study landscape for medical students, less emphasis on books, up to date with alerts (email, RSS, netvibes), portals (i.e. for evidence based searching), education (i.e. courses, computer workshops, e-learning), active participation in curriculum, helping with searches or performing them).

Slides 67-68: Summary (Potential)

Slide 69: Barriers/risks: money, support (management, contact persons at the departments/in the curriculum), doctors like to do it themselves (it looks easy), you have to find a way to reach them, training medical information specialists.

Slides 70-73 Summary & Credits

Here are some tweets related to this presentation.





Grey Literature: Time to make it systematic

6 09 2009

Guest author: Shamsha Damani (@shamsha)

Grey literature is a term I first encountered in library school; I remember dubbing it “the-wild-goose-chase search” because it is time consuming, totally un-systematic, and a huge pain altogether. Things haven’t changed much in the grey literature arena, as I found out last week, when my boss asked me to help with the grey literature part of a systematic review.

Let me back up a bit and offer the official definition for grey literature by the experts of the Grey Literature International Steering Committee: “Information produced on all levels of government, academics, business and industry in electronic and print formats not controlled by commercial publishing i.e. where publishing is not the primary activity of the producing body.” Grey literature can include things such as policy documents, government reports, academic papers, theses, dissertations, bibliographies, conference abstracts/proceedings/papers, newsletters, PowerPoint presentations, standards/best practice documents, technical specifications, working papers and more! (Benzies et al 2006). So what is so time consuming about all this? There is no one magic database that will search all these at once. Translation: you have to search a gazillion places separately, which means you have to learn how to search each of these gazillion websites/databases separately. Now if doing searches for systematic reviews is your bread-and-butter, then you are probably scoffing already. But for a newbie like me, I was drowning big time.

After spending what seemed like an eternity to finish my search, I went back to the literature to see why inclusion of grey literature was so important. I know that grey literature adds to the evidence base and results in a comprehensive search, but it is often not peer-reviewed, and the quality of some of the documents is often questionable. So what I dug up was a bit surprising. The first was a Cochrane Review from 2007 titled “Grey literature in meta-analyses of randomized trials of health care interventions (review).” The authors concluded that not including grey literature in meta-analyses produced inflated results when looking at treatment effects. So the reason for inclusion of grey literature made sense: to reduce publication bias. Another paper published in the Bulletin of the World Health Organization concluded that grey literature tends to be more current, provides global coverage, and may have an impact on cost-effectiveness of various treatment strategies. This definitely got my attention because of the new buzzword in Washington: Comparative Effectiveness Research (CER). A lot of the grey literature is comprised of policy documents so it definitely has a big role to play in systematic reviews as well. However, the authors also pointed out that there is no systematic way to search the grey literature and undertaking such a search can be very expensive and time consuming. This validated my frustrations, but gave no solutions.

When I was struggling to get through my search, I was delighted to find a wonderful resource from the Canadian Agency for Drugs and Technologies in Health. They have created a document called “Grey Matters: A Practical Search Tool for Evidence-Based Medicine”, which is a 34-page checklist of many of the popular websites for searching grey literature, including a built-in documentation system. It was still tedious work because I had to search a ton of places, many resulting in no hits. But at least I had a start and a transparent way of documenting my work.

However, I’m still at a loss for why there are no official guidelines for librarians to search for grey literature. There are clear guidelines for authors of grey literature. Benzies and colleagues give compelling reasons for inclusion of grey literature in a systematic review, complete with a checklist for authors! Why not have guidelines for searching too? I know that every search would require different tools; but I think that a master list can be created, sort of like a must-search-these-first type of a list. It surely would help a newbie like me. I know that many libraries have such lists but they tend to be 10 pages long, with bibliographies for bibliographies! Based on my experience, I would start with the following resources the next time I encounter a grey literature search:

  1. National Guideline Clearinghouse
  2. Centre for Reviews and Dissemination
  3. Agency for Healthcare Research and Quality (AHRQ)
  4. Health Technology Assessment International (HTAI)
  5. Turning Research Into Practice (TRIP)

Some databases like Mednar, Deep Dyve, RePORTer, OAIster, and Google Scholar also deserve a mention but I have not had much luck with them. This is obviously not meant to be an exhaustive list. For that, I present my delicious page: http://delicious.com/shamsha/greylit, which is also ever-growing.

Finally, a request for the experts out there: if you have any tips on how to make this process less painful, please share it here. The newbies of the world will appreciate it.

Shamsha Damani

Clinical Librarian





Time to weed the (EBM-)pyramids?!

26 09 2008

Information overload is a major barrier to finding the particular medical information you’re really looking for. Search- and EBM-pyramids are designed as (search) guidance for physicians, med students and information specialists alike. Pyramids can be very handy for getting a quick overview of which sources to use and which evidence to look for, in which order.

But look at the small collection of pyramids I retrieved from Internet plus the ones I made myself (8,9)………

ALL DIFFERENT!!!!

What may be particularly confusing is that these pyramids serve different goals. As pyramids look alike (they are all pyramids) this may not be directly obvious.

There are 3 main kinds of pyramids (or hierarchies), plus mixtures:

  1. Search Pyramids (no true example; 4, 5 and 6 come closest)
    Guide searches to answer a clinical question as promptly as possible. Begin with the easiest/richest source, for instance UpToDate, Harrison’s (books), local hospital protocols or useful websites. If the answer isn’t found or is doubtful, search aggregate evidence and then the best original studies.
  2. Pyramids of EBM-sources (3, 4, 8)
    Begin with the richest source of aggregate (pre-filtered) evidence and work down, in order to decrease the number needed to read: there are fewer EBM guidelines than Systematic Reviews and (certainly) fewer than individual papers.
  3. Pyramids of EBM-levels (1, 2, 5, 7, 9)
    Begin by looking for the original papers with the highest level of evidence.
    Often only individual papers/original research, including Systematic Reviews, are considered (1, 9), but sometimes the pyramid is a mixture of original and aggregated literature (2, 5).
  4. A mixture of 1, 2 and/or 3 (2, 5)

Further discrepancies:

  • Hierarchies.
    • Some place Cochrane Systematic Reviews higher than ‘other systematic reviews’; others place meta-analyses above systematic reviews (2, 6). The former is unnecessary, the latter wrong. (I’ll come back to that in another post.)
    • Sometimes Systematic Reviews are on top, sometimes Systems (I never found out what those are), sometimes meta-analyses or evidence-based guidelines
    • Synopses (critically appraised individual articles) may be placed above or below Syntheses (critically appraised topics).
    • Textbooks and reviews may be at the base of the pyramid or a little higher up.
    • etcetera
  • Nomenclature
    • Evidence Summaries ?= Summaries of the evidence? = Evidence Syntheses? = critically appraised topics?
    • Etcetera
  • Categorization
    • UpToDate is sometimes placed at the top of the pyramid, among the Summaries (4), OR at the base, among the Textbooks (5), where I think it belongs in terms of evidence levels, but not in terms of usefulness.
    • DARE is considered a review source, but it really contains synopses (critically appraised summaries) of Systematic Reviews.

Isn’t it about time to weed the pyramids rigorously?

Are pyramids really serving the aim of making it easier for the meds to find their information?

Like to hear your thoughts about this.

What are my thoughts? I will give a hint: I would rather guide the information seeker through different routes, depending on his background, question, available time and goal. The pyramid of evidence sources and the levels of evidence would ideally just be part of that scheme.

Will be continued….





Related Articles = Lateral Navigation

18 05 2008

What I did pick up from the WordPress announcement by Matt on possibly related posts (see previous post) is the term “lateral navigation” for navigating from one post to another. Why is this such a nice term?

Well, in my classes on systematic searching I teach people to perform (1) backward searching (checking the citations in the reference lists of selected papers), (2) forward searching (looking for papers that cite relevant papers) and (3) browsing Related Articles in PubMed or using “Find Similar” in OVID (MEDLINE, EMBASE). This approach is called snowballing, or the pearl-growing method. It serves to find papers that you might have missed, but even more so to find new terms to add to your search, so that you catch these ‘missing studies’ with your final search strategy.

The term lateral searching is so perfect because you can easily visualize what the word stands for, and it fits in with backward and forward searching.

So lateral searching will now be added to my slides! (see figure)

[Figure: lateral searching]

———————————————-


Matt’s (WordPress) post about “possibly related posts” (see my previous post) introduced me to a term that was new to me: “lateral navigation”, or in my case even better, “lateral searching”. I find this a very apt term for searching for related articles.

In my courses on systematic searching I advise people to use the included (selected) articles to systematically find missing studies by (1) “backward searching” (checking the reference lists), (2) “forward searching” (looking for citing articles) and (3) going through Related Articles in PubMed or “Find Similar” in OVID (MEDLINE, EMBASE). This search method is also called the snowball or pearl-growing method. It serves not only to find the missing articles, but above all to find new terms with which to perfect your search, so that your final search strategy catches these and other articles.

The term “lateral searching” fits so nicely with the terms backward and forward searching because all three express a movement, of which the lateral movement seems the least goal-directed, and it is. If you’re not careful you drift from one study to the next, and with that you lose your systematic approach. Nice if you want to come up with new ideas; not good if you want to search systematically.

So from now on, “lateral searching” will appear on my PowerPoint slides! (see figure)

——————————

Previous posts on related articles/posts at this blog:
http://laikaspoetnik.wordpress.com/2008/05/16/possibly-an-announcement-about-possibly-related-posts/








