Call for Submissions: Medical Grand Rounds at Laika’s MedLibLog

18 10 2011

Grand Rounds is a weekly round-up of the best health blog posts on the Internet. Each week a different blogger hosts and summarizes the best submissions of the week.

On October 25th I will be your host. Again… for I have hosted Grand Rounds once before. Then we made a trip around the library.

This time the theme will be “INFORMATION”.

Difficult? Not at all. Almost anything may fit into this theme. Examples: searching for information, information overload, lack of information, misinformation, the hardest information you had to share, the way the doctor (mis)informed you about a disease, how pharma deals with information… the way information is interpreted (you can also choose psychiatric topics here). Nice or noteworthy articles or books you read. Or you may review an app, Web 2.0 tools, social media, data carriers. Ah well, if you sell it the right way and your post is of good quality, I will accept almost everything…

I have one slight problem though. Grand Rounds is traveling all the way from India to the Netherlands this week and I am away for the weekend. You would help me tremendously if you submit your post this Tuesday or Wednesday!

Official Deadline: Sunday October 23rd, 20:00 Central European Time. This is 14:00 EDT (New York).

Please email your submissions to:

And include:

  •  “Submission for Grand Rounds” in the subject line of your e-mail.
  • Your name (blog author), the name of your blog, and the URL of your specific blog-post submission.
  • A short summary (1 to 3 sentences) of your blog post.

I look forward to receiving your submissions and featuring them here next week. Thank you!

Jacqueline aka Laika.

Photo Credits (CC):  Picture by mag3737 (Flickr)





Health Experts & Patient Advocates Beware: 10 Reasons Why you Shouldn’t be a Curator at Organized Wisdom!! #OrganizedWisdom

11 05 2011

Last year I aired my concern about Organized Wisdom in a post called Expert Curators, WisdomCards & The True Wisdom of @organizedwisdom.

Organized Wisdom shares health links of health experts or advocates who (according to OW’s FAQ) either requested a profile or were recommended by OW’s Medical Review Board. I was one of those so-called Expert Curators. However, I had never requested a profile, and I seriously doubt whether anyone from the medical board had actually read any of my tweets or blog posts.

This was one of the many issues with Organized Wisdom, but the main issue was its lack of credibility and transparency. I vented my complaints, removed my profile from OW, stopped following its updates on Twitter, and informed some fellow curators.

I had almost forgotten about it, till Simon Sikorski, MD, commented on my blog, informing me that my complaints hadn’t been fully addressed and convincing me that things were even worse than I thought.

He has started a campaign to do something about this Unethical Health Information Content Farming by Organized Wisdom (OW).

While discussing this affair with a few health experts and patient advocates I was disappointed by the reluctant reactions of a few people: “Well, our profiles are everywhere”, “Thanks I will keep an eye open”, “cannot say much yet”. How much evidence does one need?

Of course there were also people – well-known MDs and researchers – who immediately removed their profile and compared OW’s approach with that of Wellsphere, which scammed the health blogosphere. Yes, OW also scrapes and steals your intellectual property (blog and/or tweet content), but the difference is: OW doesn’t ask you to join; it just puts up your profile and shares it with the world.

As a medical librarian and e-patient I find the quality, reliability and objectivity of health information of utmost importance. I believe in the emancipation of patients (“Patient is not a third person word”, e-patient Dave), but it can only work if patients are truly well informed. This is difficult enough, because of the information overload and the conflicting data. We don’t need any further misinformation and non-transparency.

I believe that Organized Wisdom puts the reputation of its “curators” at stake and that it is neither a trustworthy nor a useful resource for health information, for the following reasons (x: see also Simon’s blog post and slides; his emphasis is more on content theft):

1. Profiles of Expert Curators are set up without their knowledge and consent
Most curators I asked didn’t know they were expert curators. Simon has spoken with 151 of the 5700 expert curators and not one of those persons knew he/she was listed on OW. (x)

2. The name Expert Curator suggests that you (can) curate information, but you cannot.
The information is automatically produced and is shown unfiltered (and often shown in duplicate, because many different people can link to the same source). It is not possible to edit the cards.
Ideally, curating should even be more than filtering (see this nice post about Social Media Content Curators, where curation is defined as the act of synthesizing and interpreting in order to present a complete record of a concept).

3. OW calls your profile address: “A vanity URL¹”.

Is that how they see you? Well, it must be said, they try to win you over by pure flattery. And they often succeed…

¹Quote OW: “We credit, honor, and promote our Health Experts, including offering: A vanity URL to promote so visitors can easily share your Health Profile with others, e.g. my.organizedwisdom.com/ePatientDave.”
Note: this too is quite similar to Wellsphere’s approach (read more at e-patients.net)

4. Bots tap into your tweets and/or scrape the content off your website
(x: see healthcare content farms monetizing scheme)

5. Scraping your content can affect your search rankings (x)
This probably affects new/small blogs the most. I checked two posts from well-known blogs, and their own websites still came up first.

6. The site is funded/sponsored by pharmaceutical companies.
“Tailored” ads show up next to the so-called Wisdom Cards dealing with the same topic. If no pharmaceutical business has responded, Google ads show up instead.
See the form where they actually invite pharma companies to select a target condition for advertising. Note that the target conditions fit the OW topics.

7. The Wisdom Cards are no more than links to your tweets or posts. They have no added value. 

8. Worse, tweets and links are shown out of context.
I provided various examples in my previous post (mainly in the comment section).

A Cancer and Homeopathy WisdomCard™ shows Expert Curator Liz Ditz sharing a link about cancer and homeopathy. The link she shares is a dangerous article by a doctor working at a Homeopathic General Hospital in India, “reporting” several cases of miraculous cures by Conium 1M, Thuja 50M and other watery dilutions. I’m sure that Liz Ditz didn’t say anything positive about the “article”. Still, it seems she “backs it up”. Perhaps she tweeted: “Look what dangerous crap.”
When I informed her, Liz said: “AIEEEE…. didn’t sign up with Organized Wisdom that I know of”. She felt she was used as credulous support for homeopathy & naturopathy.

Note: Liz’ card has disappeared (because she opted out), but I was surprised to find that the link (http://organizedwisdom.com/Cancer-and-Homeopathy/wt/med) still works and links to other “evidence” on the same topic.


9. There is no quality control. Not of the wisdom cards and not of the expert curators.
Many curators are not what I would call true experts, and I’m not alone: @holly comments at a TechCrunch post: “I am glad you brought up the ‘written by people who do not have a clue, let alone ANY medical training [of any kind] at all.’ I have no experience with any kind of medical education, knowledge or even the slightest clue of a tenth of the topics covered on OW, yet for some reason they tried to recruit me to review cards there!?!”

The emphasis is also on alternative treatments: prevention of cancer, asthma or ADHD by herbs etc. In addition to “Health Centers”, there are also Wellness Centers (Aging, Diet, Fitness etc.) and Living Centers (Beauty, Cooking, Environment). A single card can share information from 2 or 3 centers (diabetes and multivitamins, for example).

And as said, all links of expert curators are placed unfiltered, even when you make a joke or mention you’re on vacation. Whether you’re a top health expert or advocate (there is a regular shout-out) just depends on the number of links you share, thus NOT on quality. For this reason the real experts often rank lower.

Some cards are just link bait.

 

10. Organized Wisdom is heavily promoting its site.
Last year it launched activitydigest, automatic digests meant to stimulate “engagement” of expert curators. It tries to connect with top health experts, pharma people and patient advocates, hoping they will support OW. This leads to uncritical interviews, such as those at Pixels and Pills, at Health Interview (Reader’s Digest + Organized Wisdom = Wiser Patients) and at Xconomy.com (Organized Wisdom recruits experts to filter health information on the web).

What can you do?

  • Check whether you have a profile at Organized Wisdom here.
  • Take a good look at Organized Wisdom and what it offers. It isn’t difficult and it doesn’t take much time to see through the facade.
  • If you don’t agree with what it represents, please consider opting out.
  • You can email info@organizedwisdom.com to have your profile as expert curator removed.
  • If you agree that what OW does is not good practice, you could do the following (most are suggestions of Simon’s):
  • spread the word and inform others
  • join the conversation on Twitter: #EndToFarms
  • join the tweetup on what you can do about this scandal and how to protect yourself from liability (more details will be offered by Simon at his regularly updated blog post)
  • If you don’t agree that this Content Farm deserves HONcode certification, notify HON at https://www.healthonnet.org/HONcode/Conduct.html?HONConduct444558
Please don’t sit back and think that being a wisdom curator does not matter. Don’t show off with an Organized Wisdom badge, widget or link on your blog or website. Resist the flattery of being called an expert curator, because it doesn’t mean anything in this context. And by being part of Organized Wisdom, you indirectly support their practice. This may seriously affect your own reputation, and indirectly you may contribute to misinformation.

Or as Heidi commented on my previous post:

I am flabbergasted that people’s reputations are being used to endorse content without their say-so.
Even more so that they cannot delete their profile and withdraw their support.*

For me those two things on their own signal big red flags:

The damage to a health professional’s reputation as a result could be great.
Misleading the general public with poor (yes, dangerous) information is another.

Altogether unethical.

*This was difficult at that time.

Update May 10, 2011: News from Simon: 165 individuals & 5 hospitals have now spoken up about the unfolding scandal and are doing something about it (Tuesday).

Update May 12, 2011: If I failed to convince you, please read the post by Ramona Bates MD (@rlbates on Twitter, plastic surgeon, blogger at Suture for a Living), called “More Organized Wisdom Un-Fair Play”. Ramona asked for her profile to be removed from OW half a year ago. Recommended pages at her blog seem to be written by other people.
She concludes:

“Once again, I encourage my fellow healthcare bloggers (doctors, nurses, patient advocates, etc.) to remove yourself from any association with Organized Wisdom and other sites like them.”






How will we ever keep up with 75 Trials and 11 Systematic Reviews a Day?

6 10 2010

An interesting paper was published in PLOS Medicine [1]. As an information specialist working part-time for the Cochrane Collaboration* (see below), I hold this topic close to my heart.

The paper is written by Hilda Bastian and two of my favorite EBM devotees ànd critics, Paul Glasziou and Iain Chalmers.

Their article gives a good overview of the rise in the number of trials, of systematic reviews (SR’s) of interventions, and of medical papers in general. The paper (published as a Policy Forum) raises some important issues, but the message is not as sharp and clear as usual.

Take the title for instance.

Seventy-Five Trials and Eleven Systematic Reviews a Day:
How Will We Ever Keep Up?

What do you consider its most important message?

  1. That doctors suffer from an information overload that is only going to get worse, as I did (and probably also, in part, @kevinclauson, who tweeted about it to medical librarians)
  2. That the solution to this information overload consists of Cochrane systematic reviews (because they aggregate the evidence from individual trials), as @doctorblogs tweeted
  3. That it is just about “too many systematic reviews (SR’s)?”, the title of the PLOS press release (so the other way around)
  4. That it is about too much of everything, and about the not always good quality of SR’s: @kevinclauson and @pfanderson discussed that they both use the same “#Cochrane Disaster” example (see Kevin’s blog) in their teaching
  5. That Archie Cochrane’s* dream is unachievable and ought perhaps to be replaced by something less Utopian. The comment by Richard Smith, former editor of the BMJ, combines points 1, 3, 4 and 5, plus a new aspect: SR’s should not only include randomized controlled trials (RCT’s)

The paper reads easily, but matters of importance are often only touched upon. Even after reading it twice, I wondered: a lot is being said, but what is really their main point, and what are their answers/suggestions?

But let’s look at their arguments and pieces of evidence. (Black text is from their paper; my remarks are in blue.)

The landscape

I often start my presentations on “searching for evidence” by showing the figure to the right, which is from an older PLOS article. It illustrates the information overload. Sometimes I also show another slide, with 5-10 years older data, saying that there are 55 trials a day, 1400 new records added per day to MEDLINE and 5000 biomedical articles a day. I also add that specialists have to read 17-22 articles a day to keep up to date with the literature. GP’s have to read even more, because they are generalists. So those 75 trials, and the subsequent information overload, are not really a shock to me.

Indeed, the authors start by saying that “keeping up with information in health care has never been easy.” They give an interesting overview of the driving forces behind the increase in trials and the initiation of SR’s and critical appraisals, which synthesize the evidence from all individual trials to overcome the information overload (SR’s and other forms of aggregate evidence decrease the number needed to read).

In Box 1 they give an overview of the earliest systematic reviews. These SR’s often had a great impact on medical practice (see for instance an earlier discussion on the role of the CRASH trial and of the first Cochrane review).
They also touch upon the founding of the Cochrane Collaboration. The Cochrane Collaboration is named after Archie Cochrane, who “reproached the medical profession for not having managed to organise a ‘critical summary, by speciality or subspecialty, adapted periodically, of all relevant randomised controlled trials’”. He inspired the establishment of the international Oxford Database of Perinatal Trials and encouraged the use of systematic reviews of randomized controlled trials (RCT’s).

A timeline with some of the key events is shown in Figure 1.

Where are we now?

The second paragraph shows many interesting graphs (Figs. 2-4).

Annoyingly, PLOS only allows one-sentence legends. The details are in the (Word) supplement, without proper referral to the actual figure numbers. Grrrr! This is completely unnecessary in reviews/editorials/policy forums. And, as said, annoying, because you have to read a Word file to understand where the data actually come from.

Bastian et al. have used MEDLINE’s publication types (i.e. case reports [pt], reviews [pt], controlled clinical trial [pt]) and search filters (the Montori SR filter and the Haynes narrow therapy filter, which is built into PubMed’s Clinical Queries) to estimate the yearly rise in the number of study types. The total numbers of clinical trials in CENTRAL (the largest database of controlled clinical trials, abbreviated as CCTR in the article) and in the Cochrane Database of Systematic Reviews (CDSR) are easy to retrieve, because the numbers are published quarterly (now monthly) by the Cochrane Library. By definition, CDSR only contains SR’s, and CENTRAL (as I prefer to call it) almost invariably contains controlled clinical trials.
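Such counts are fairly easy to approximate yourself. Below is a minimal sketch (my own illustration, not the method of Bastian et al.) that asks NCBI’s E-utilities how many PubMed records carry a given publication type per publication year; the esearch endpoint and its JSON output are part of NCBI’s standard public interface, but treat the resulting numbers as indicative only.

    # Minimal sketch (my illustration, not the authors' method): count PubMed
    # records per year for a given publication type via NCBI E-utilities.
    import time
    import requests

    ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

    def yearly_count(pub_type: str, year: int) -> int:
        """Number of PubMed records tagged with pub_type, published in year."""
        term = f'"{pub_type}"[pt] AND {year}[dp]'
        resp = requests.get(ESEARCH, params={"db": "pubmed", "term": term, "retmode": "json"})
        resp.raise_for_status()
        return int(resp.json()["esearchresult"]["count"])

    for year in range(1950, 2008):
        print(year, yearly_count("randomized controlled trial", year))
        time.sleep(0.4)  # stay within NCBI's courtesy rate limit

The same loop with the Montori SR filter or the Haynes therapy filter as the search term would approximate the filter-based estimates, with all the caveats about filter sensitivity discussed below.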

In short, these are the conclusions from their three figures:

  • Fig. 2: The number of published trials has risen sharply from 1950 till 2010
  • Fig. 3: The number of systematic reviews and meta-analyses has risen tremendously as well
  • Fig. 4: But systematic reviews and clinical trials are still far outnumbered by narrative reviews and case reports

O.k., that’s clear, and they raise a good point: an “astonishing growth has occurred in the number of reports of clinical trials since the middle of the 20th century, and in reports of systematic reviews since the 1980s—and a plateau in growth has not yet been reached”.
Plus, indirectly: the increase in systematic reviews didn’t lead to a lower number of trials and narrative reviews. Thus the information overload is still increasing.
But instead of discussing these findings, they go into an endless discussion of the actual data and of the fact that we “still do not know exactly how many trials have been done”, only to end by saying that “Even though these figures must be seen as more illustrative than precise…” And then you think: so what? Furthermore, I don’t really get the point of this part of their article.

 

Fig. 2: The number of published trials, 1950 to 2007.

 

 

With regard to Figure 2 they say for instance:

The differences between the numbers of trial records in MEDLINE and CCTR (CENTRAL) (see Figure 2) have multiple causes. Both CCTR and MEDLINE often contain more than one record from a single study, and there are lags in adding new records to both databases. The NLM filters are probably not as efficient at excluding non-trials as are the methods used to compile CCTR. Furthermore, MEDLINE has more language restrictions than CCTR. In brief, there is still no single repository reliably showing the true number of randomised trials. Similar difficulties apply to trying to estimate the number of systematic reviews and health technology assessments (HTAs).

Sorry, but although some of these points may be true, Bastian et al. don’t go into the main reason for the difference between both graphs, i.e. the higher number of trial records in CCTR (CENTRAL) than in MEDLINE: this difference can simply be explained by the fact that CENTRAL contains records from MEDLINE as well as from many other electronic databases and from hand-searched materials (see this post).
With respect to other details: I don’t know which NLM filter they refer to, but if they mean the narrow therapy filter, this filter is specifically meant to find randomized controlled trials, and it is far more specific, and less sensitive, than the Cochrane methodological filters for retrieving controlled clinical trials. In addition, MEDLINE does not have more language restrictions per se: it just contains an (extensive) selection of journals. (Plus, people more easily use language limits in MEDLINE, but that is beside the point.)
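For the curious, here is (approximately) what that difference looks like; both wordings have been revised over the years, so this is illustrative rather than authoritative. PubMed’s therapy/narrow Clinical Queries filter is essentially:

    (randomized controlled trial[Publication Type] OR
     (randomized[Title/Abstract] AND controlled[Title/Abstract] AND trial[Title/Abstract]))

whereas the Cochrane Highly Sensitive Search Strategy for PubMed ORs together many more, looser terms (along the lines of randomized controlled trial[pt] OR controlled clinical trial[pt] OR randomized[tiab] OR placebo[tiab] OR randomly[tiab] OR trial[tiab] OR groups[tiab], minus animal-only records). The first is tuned for precision, the second for recall, which is exactly why CENTRAL catches trials that the narrow filter misses.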

Elsewhere the authors say:

In Figures 2 and 3 we use a variety of data sources to estimate the numbers of trials and systematic reviews published from 1950 to the end of 2007 (see Text S1). The number of trials continues to rise: although the data from CCTR suggest some fluctuation in trial numbers in recent years, this may be misleading because the Cochrane Collaboration virtually halted additions to CCTR as it undertook a review and internal restructuring that lasted a couple of years.

As I recall it, the situation is like this: till 2005 the Cochrane Collaboration ran the so-called “retag project”, in which they searched for controlled clinical trials in MEDLINE and EMBASE (with a very broad methodological filter). All controlled trial articles were loaded into CENTRAL, and the NLM retagged the controlled clinical trials that weren’t tagged with the appropriate publication type in MEDLINE. The Cochrane Collaboration stopped the laborious retag project in 2005, but still continues the (now) monthly electronic search updates performed by the various Cochrane groups (for their topics only). They also still continue handsearching. So they didn’t (virtually?!) halt additions to CENTRAL, although it seems likely that stopping the retag project caused the plateau. Again, the authors’ main points are dwarfed by not very accurate details.

Some interesting points in this paragraph:

  • We still do not know exactly how many trials have been done.
  • For a variety of reasons, a large proportion of trials have remained unpublished (negative publication bias!) (note: Cochrane Reviews try to lower this kind of bias by applying no language limits and by including unpublished data, e.g. conference proceedings)
  • Many trials have been published in journals without being electronically indexed as trials, which makes them difficult to find (note: this has improved tremendously since the CONSORT statement, an evidence-based, minimum set of recommendations for reporting RCTs, and through the Cochrane retag project, discussed above)
  • Astonishing growth has occurred in the number of reports of clinical trials since the middle of the 20th century, and in reports of systematic reviews since the 1980s—and a plateau in growth has not yet been reached.
  • Trials are now registered in prospective trial registers at inception, theoretically enabling an overview of all published and unpublished trials (note: this will also make it easier to find out the reasons for not publishing data, or alterations of primary outcomes)
  • Once the International Committee of Medical Journal Editors announced that their journals would no longer publish trials that had not been prospectively registered, far more ongoing trials were registered per week (200 instead of 30). In 2007, the US Congress made detailed prospective trial registration legally mandatory.

The authors do not discuss that better reporting of trials and the retag project might have facilitated the indexing and retrieval of trials.

How Close Are We to Archie Cochrane’s Goal?

According to the authors there are various reasons why Archie Cochrane’s goal will not be achieved without some serious changes in course:

  • The increase in systematic reviews didn’t displace other less reliable forms of information (Figs 3 and 4)
  • Only a minority of trials have been assessed in systematic reviews
  • The workload involved in producing reviews is increasing
  • The bulk of systematic reviews are now many years out of date.

Where to Now?

In this paragraph the authors discuss what should be changed:

  • Prioritize trials
  • Wider adoption of the concept that trials will not be supported unless a SR has shown the trial to be necessary.
  • Prioritizing SR’s: reviews should address questions that are relevant to patients, clinicians and policymakers.
  • Choose between elaborate reviews that answer a part of the relevant questions, or “leaner” reviews of most of what we want to know. Apparently the authors have already chosen the latter; they prefer:
    • shorter and less elaborate reviews
    • faster production ànd updating of SR’s
    • no unnecessary inclusion of study types other than randomized trials (unless it is about less common adverse effects)
  • More international collaboration, and thereby a better use of resources for SR’s and HTAs. As an example of a good initiative they mention “KEEP Up”, which will aim to harmonise updating standards and aggregate updating results; it is initiated and coordinated by the German Institute for Quality and Efficiency in Health Care (IQWiG) and involves key systematic reviewing and guidelines organisations such as the Cochrane Collaboration, Duodecim, the Scottish Intercollegiate Guidelines Network (SIGN), and the National Institute for Health and Clinical Excellence (NICE).

Summary and comments

The main aim of this paper is to discuss to what extent the medical profession has managed to make “critical summaries, by speciality or subspeciality, adapted periodically, of all relevant randomized controlled trials”, as proposed 30 years ago by Archie Cochrane.

The emphasis of the paper is mostly on the numbers of trials and systematic reviews, not on qualitative aspects. Furthermore, there is too much emphasis on the methods used to determine the number of trials and reviews.

The main conclusion of the authors is that an astonishing growth has occurred in the number of reports of clinical trials as well as in the number of SR’s, but that these systematic pieces of evidence shrink into insignificance compared to the non-systematic narrative reviews and case reports published. That is an important, but not an unexpected, conclusion.

Bastian et al. don’t address whether systematic reviews have made the growing number of trials easier to access or digest. Neither do they go into developments that have facilitated the retrieval of clinical trials and aggregate evidence from databases like PubMed: the Cochrane retag project, the CONSORT statement, and the existence of publication types and search filters (which they themselves use to filter out trials and systematic reviews). They also skip sources other than systematic reviews that make it easier to find the evidence: databases with evidence-based guidelines, the TRIP database, Clinical Evidence.
As Clay Shirky said: “It’s Not Information Overload. It’s Filter Failure.”

It is also good to note that case reports and narrative reviews serve other aims. For medical practitioners, rare case reports can be very useful in clinical practice, and good narrative reviews can be valuable for getting an overview of a field or for keeping up to date. You just have to know when to look for what.

Bastian et al. have several suggestions for improvement, but these suggestions are not always substantiated. For instance, they propose access to all systematic reviews and trials. Perfect. But how can this be attained? We could stimulate authors to publish their trials as open access papers. For Cochrane reviews this would be desirable but difficult, as we cannot demand that authors who work for months, for free, on writing a SR also pay for the publication themselves. The Cochrane Collaboration is an international organization that does not receive subsidies for this. So how could this be achieved?

In my opinion, we can expect the most important benefits from the prioritizing of trials ànd SR’s, the faster production ànd updating of SR’s, more international collaboration and less duplication. It is a pity the authors do not mention projects other than “KEEP Up”. As discussed in previous posts, the Cochrane Collaboration also recognizes the many issues raised in this paper, and aims to speed up the updates and to produce evidence on priority topics (see here and here). Evidence Aid is an example of a successful effort. But this is only the Cochrane Collaboration; many more non-Cochrane systematic reviews are produced.

And then we arrive at the next issue: not all systematic reviews are created equal. There are a lot of so-called “systematic reviews” that aren’t the conscientious, explicit and judiciously created syntheses of evidence they ought to be.

Therefore, I do not think the proposal that each single trial should be preceded by a systematic review is a very good idea.
In the Netherlands, writing a SR is already required for NWO grants. In practice, people just approach me, as a searcher, in the days before Christmas, planning to submit the grant proposal (including the SR) early in January. This evidently is a fast procedure, but it doesn’t result in a high-standard SR upon which others can rely.

Another point is that this simple and fast production of SR’s will only lead to a larger increase in the number of SR’s, an effect the authors wanted to prevent.

Of course it is necessary to get a (reliable) picture of what has already been done, and to prevent unnecessary duplication of trials and systematic reviews. The best solution would be a triplet (nano-publication)-like repository of the trials and systematic reviews done.

Ideally, researchers and doctors should first check such a database for existing systematic reviews. Only if no recent SR is present should they continue writing a SR themselves. Perhaps it sometimes suffices to search for trials and write a short synthesis.

There is another point I do not agree with. I do not think that SR’s of interventions should only include RCT’s. We should include those study types that are relevant. If RCT’s furnish clear proof, then RCT’s are all we need. But sometimes – or for some topics/specialties – RCT’s are not available. Inclusion of other study designs, rated with GRADE (proposed by Guyatt), gives a better overall picture. (Also see the post: #notsofunny: ridiculing RCT’s and EBM.)

The authors strive for simplicity. However, the real world isn’t that simple. In this paper they have limited themselves to evidence of the effects of health care interventions. Finding and assessing prognostic, etiological and diagnostic studies is methodologically even more difficult. Still, many clinicians have these kinds of questions. Therefore systematic reviews of other study designs (e.g. of diagnostic accuracy or observational studies) are also of great importance.

In conclusion, while I do not agree with all the points raised, this paper touches upon a lot of important issues and achieves what can be expected from a discussion paper: a thorough shake-up and a lot of discussion.

References

  1. Bastian, H., Glasziou, P., & Chalmers, I. (2010). Seventy-Five Trials and Eleven Systematic Reviews a Day: How Will We Ever Keep Up? PLoS Medicine, 7(9). DOI: 10.1371/journal.pmed.1000326






Searching Skills Toolkit. Finding the Evidence [Book Review]

4 03 2010

Most books on Evidence Based Medicine give little attention to the first two steps of EBM: asking focused, answerable questions and searching for the evidence. Being able to appraise an article but not being able to find the best evidence can be challenging and frustrating for busy clinicians.

“Searching Skills Toolkit: Finding the Evidence” is a pocket-sized book that aims to instruct the clinician in how to search for evidence. It is the third toolkit book in the series edited by Heneghan et al. (author of the CEBM blog Trust the Evidence). The authors, Caroline de Brún and Nicola Pearce-Smith, are experts in searching (a librarian and an information scientist, respectively).

According to the description at Wiley’s, the distinguishing feature of this searching skills book is its user-friendliness. “The guiding principle is that readers do not want to become librarians, but they are faced with practical difficulties when searching for evidence, such as lack of skills, lack of time and information overload. They need to learn simple search skills, and be directed towards the right resources to find the best evidence to support their decision-making.”

Does this book give guidance that makes searching for evidence easy? Is this book the ‘perfect companion’ to doctors, nurses, allied health professionals, managers, researchers and students, as it promises?

I find it difficult to answer, partly because I’m not a clinician and partly because, being a medical information specialist myself, I would frequently tackle a search differently.

The booklet is pocket-sized, easy to take along. The layout is clear and pleasant. The approach is original and practical. Despite its small size, the booklet contains a wealth of information. Table 1, for instance, gives an overview of truncation symbols, wildcards and Boolean operators for Cochrane, Dialog, EBSCO, Ovid, PubMed and WebSPIRS (see photo). And although this is mouth-watering for many medical librarians, one wonders whether such detailed information is really useful to the clinician.
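To give a flavour of what such a table captures (a simplified example of my own, not copied from the book): in PubMed a search like

    diabet* AND (insulin OR metformin)

uses * for truncation, whereas Ovid uses $ for truncation and adds adjacency operators such as adj3; every interface has its own symbols, and that is exactly what Table 1 tabulates.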

Furthermore, 34 of the 102 pages (one third) are devoted to searching specific health care databases. IMHO, of these databases only PubMed and the Cochrane Library are useful to the average clinician. In addition, most of the screenshots of the individual databases are too small to read. And due to the PubMed redesign, the PubMed description is no longer up to date.

The readers are guided to the chapters on searching by asking themselves beforehand:

  1. The time available to search: 5 minutes, an hour or time to do a comprehensive search. This is an important first step, which is often not considered by other books and short guides.
    Primary sources, secondary sources and ‘other’ sources are given per time available. This is all presented in a table with reference to key chapters and related chapters. These particular chapters enable the reader to perform these short, intermediate or long searches.
  2. What type of publication they are looking for: a guideline, a systematic review, patient information or an RCT (with tips on where to find them).
  3. Whether the query is about a specific topic, i.e. drug or safety information or health statistics.

All useful information, but I would have discussed topic 3 before covering EBM, because it doesn’t fit into the ‘normal’ EBM search. So for drug information you could go directly to the FDA, WHO or EMEA website. Similarly, if my question were only to find a guideline, I would simply search one or more guideline databases.
Furthermore, it would be easier to stack the short, intermediate and long searches on top of each other instead of presenting them side by side. The basic principle would be (in my opinion at least) to start with a PICO, to (almost) always search secondary sources first (fast), to search for primary publications (original research) in PubMed if necessary, and to broaden the search to other databases (broad search) for exhaustive searches. This is easy to remember, even without the schemes in the book.
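To make the stacking concrete, here is a hypothetical sketch (my own example, not taken from the book) of one PICO reused at every level, written out in Python for clarity; the topic terms are invented for illustration.

    # Hypothetical sketch of the layered approach argued for above:
    # one PICO core, reused for the short, intermediate and broad search.
    pico = {
        "P": '"type 2 diabetes"[tiab] OR "diabetes mellitus, type 2"[mh]',
        "I": 'metformin[tiab] OR metformin[mh]',
        "O": '"glycemic control"[tiab] OR "hemoglobin a1c"[tiab]',
    }
    core = " AND ".join(f"({part})" for part in pico.values())

    quick = f"{core} AND systematic[sb]"                    # 5 minutes: secondary sources first
    deeper = f"{core} AND randomized controlled trial[pt]"  # an hour: primary studies (RCTs)
    broad = core  # comprehensive: no filters; rerun in EMBASE, CENTRAL etc. with adapted syntax

    print(quick)

The point is not the code but the order: the same question, first against aggregate evidence, then against primary studies, and only then broadened across databases.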

Some minor points: there is an overemphasis on UK sources. The first source given for finding guidelines is the (UK) National Library of Guidelines, where I would put the National Guideline Clearinghouse (or the TRIP database) first. And why is MedlinePlus not included as a source for patients, whereas NHS Choices is?

There is also an overemphasis on interventions. How PICO’s are constructed for other domains (diagnosis, etiology/harm and prognosis) is barely touched upon, yet it is much more difficult to construct PICO’s and to search in these domains. More practical examples would also have been helpful.

Overall, I find this book very useful. The authors are clearly experts in searching, and they fill a gap in the market: there is no comparable book on “searching the evidence”. Therefore, despite some critique and my preference for another approach, I do recommend this book to doctors who want to learn basic searching skills. As a medical information specialist I keep it in my pocket too: just in case…

Overview

What I liked about the book:

  • Pocket size, easy to take along.
  • Well written
  • Clear diagrams
  • Broad coverage
  • Good description of (many) databases
  • Step-by-step approach

What I liked less about it:

  • Screen dumps are often too small to read and therefore not useful
  • Emphasis on UK-sources
  • Domains other than “therapy” (etiology/harm, prognosis, diagnosis) are barely touched upon
  • Too few clinical examples
  • Too strict a division into short, intermediate and long searches: these are not intrinsically different

The Chapters

  1. Introduction.
  2. Where to start? Summary tables and charts.
  3. Sources of clinical information: an overview.
  4. Using search engines on the World Wide Web.
  5. Formulating clinical questions.
  6. Building a search strategy.
  7. Free text versus thesaurus.
  8. Refining search results.
  9. Searching specific healthcare databases.
  10. Citation pearl searching.
  11. Saving/recording citations for future use.
  12. Critical appraisal.
  13. Further reading by topic or PubMed ID.
  14. Glossary of terms.
  15. Appendix 1: Ten tips for effective searching.
  16. Appendix 2: Teaching tips

References

  1. Searching Skills Toolkit – Finding the Evidence (Paperback, 2009/02/17) by Caroline De Brún and Nicola Pearce-Smith; Carl Heneghan et al. (Editors). Wiley-Blackwell / BMJ Books
  2. Kamal R Mahtani. Evid Based Med 2009;14:189. doi:10.1136/ebm.14.6.189 (book review by a clinician)
