How will we ever keep up with 75 Trials and 11 Systematic Reviews a Day?

6 10 2010

An interesting paper was published in PLoS Medicine [1]. As an information specialist who works part time for the Cochrane Collaboration* (see below), I hold this topic close to my heart.

The paper is written by Hilda Bastian and two of my favorite EBM devotees and critics, Paul Glasziou and Iain Chalmers.

Their article gives a good overview of the rise in the number of trials, of systematic reviews (SR’s) of interventions, and of medical papers in general. The paper (published as a Policy Forum) raises some important issues, but the message is not as sharp and clear as usual.

Take the title for instance.

Seventy-Five Trials and Eleven Systematic Reviews a Day:
How Will We Ever Keep Up?

What do you consider its most important message?

  1. That doctors suffer from an information overload that is only going to get worse; that was my reading, and probably in part also that of @kevinclauson, who tweeted about it to medical librarians
  2. that the solution to this information overload consists of Cochrane systematic reviews (because they aggregate the evidence from individual trials), as @doctorblogs tweeted
  3. that it is just about “too many systematic reviews (SR’s)?”, the title of the PLoS press release (so the other way around),
  4. that it is about too much of everything and about SR’s of not always good quality: @kevinclauson and @pfanderson discussed that they both use the same “#Cochrane Disaster” (see Kevin’s blog) in their teaching,
  5. that Archie Cochrane’s* dream is unachievable and ought perhaps to be replaced by something less Utopian (a comment by Richard Smith, former editor of the BMJ, combining points 1, 3, 4 and 5 plus a new aspect: SR’s should not only include randomized controlled trials (RCT’s))

The paper reads easily, but matters of importance are often only touched upon. Even after reading it twice, I wondered: a lot is being said, but what is their main point, and what are their answers or suggestions?

But let’s look at their arguments and pieces of evidence. (Quoted passages below are from their paper; the remarks in between are mine.)

The landscape

I often start my presentations on “searching for evidence” by showing the figure to the right, which is from an older PLoS article. It illustrates the information overload. Sometimes I also show another slide, with 5-10 year older data, saying that there are 55 trials a day, 1,400 new records added per day to MEDLINE and 5,000 biomedical articles a day. I also add that specialists have to read 17-22 articles a day to keep up to date with the literature. GPs have to read even more, because they are generalists. So those 75 trials and the ensuing information overload are not really a shock to me.

Indeed, the authors start by saying that “Keeping up with information in health care has never been easy.” They give an interesting overview of the driving forces behind the increase in trials and behind the initiation of SR’s and critical appraisals, which synthesize the evidence from all individual trials to overcome the information overload (SR’s and other forms of aggregate evidence decrease the number needed to read).

In box 1 they give an overview of the earliest systematic reviews. These SR’s often had a great impact on medical practice (see for instance an earlier discussion on the role of the Crash trial and of the first Cochrane review).
They also touch upon the founding of the Cochrane Collaboration. The Cochrane Collaboration is named after Archie Cochrane, who “reproached the medical profession for not having managed to organise a ‘critical summary, by speciality or subspecialty, adapted periodically, of all relevant randomised controlled trials’”. He inspired the establishment of the international Oxford Database of Perinatal Trials and encouraged the use of systematic reviews of randomized controlled trials (RCT’s).

A timeline with some of the key events is shown in Figure 1.

Where are we now?

The second section presents several interesting graphs (Figs. 2-4).

Annoyingly, PLoS only allows one-sentence legends. The details are in the (Word) supplement, without proper references to the actual figure numbers. Grrrr..! This is completely unnecessary in reviews/editorials/policy forums. And – as said – annoying, because you have to read a Word file to understand where the data actually come from.

Bastian et al. have used MEDLINE’s publication types (e.g. Case Reports [pt], Review [pt], Controlled Clinical Trial [pt]) and search filters (the Montori SR filter and the Haynes narrow therapy filter, which is built into PubMed’s Clinical Queries) to estimate the yearly rise in the number of each study type. The total numbers of clinical trials in CENTRAL (the largest database of controlled clinical trials, abbreviated as CCTR in the article) and of reviews in the Cochrane Database of Systematic Reviews (CDSR) are easy to retrieve, because the numbers are published quarterly (now monthly) by the Cochrane Library. By definition, CDSR only contains SR’s, and CENTRAL (as I prefer to call it) almost exclusively contains controlled clinical trials.
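
For readers who want a rough feel for such numbers, here is a minimal sketch of how publication-type counts per year can be pulled from PubMed via NCBI’s E-utilities. This is not the method Bastian et al. used (their exact filters are described in the supplement); the queries below are deliberately crude proxies.

```python
# Rough sketch: counting PubMed records per year by publication type via E-utilities.
# Not the filters used by Bastian et al.; just an illustration of the general approach.
import json
import time
import urllib.parse
import urllib.request

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_count(query: str) -> int:
    """Return the number of PubMed records matching `query`."""
    url = EUTILS + "?" + urllib.parse.urlencode(
        {"db": "pubmed", "term": query, "retmode": "json"})
    with urllib.request.urlopen(url) as response:
        return int(json.load(response)["esearchresult"]["count"])

if __name__ == "__main__":
    for year in range(2000, 2008):
        trials = pubmed_count(f"randomized controlled trial[pt] AND {year}[dp]")
        reviews = pubmed_count(f"meta-analysis[pt] AND {year}[dp]")
        print(f"{year}: {trials} RCT records, {reviews} meta-analysis records")
        time.sleep(0.4)  # stay under NCBI's request-rate limit
```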

In short, these are the conclusions from their three figures:

  • Fig 2: The number of published trials has risen sharply from 1950 to 2010
  • Fig 3: The number of systematic reviews and meta-analyses has risen tremendously as well
  • Fig 4: But systematic reviews and clinical trials are still far outnumbered by narrative reviews and case reports.

OK, that’s clear, and they raise a good point: an “astonishing growth has occurred in the number of reports of clinical trials since the middle of the 20th century, and in reports of systematic reviews since the 1980s—and a plateau in growth has not yet been reached”.
Plus, indirectly: the increase in systematic reviews didn’t lead to a lower number of trials and narrative reviews. Thus the information overload is still increasing.
But instead of discussing these findings, they go into an endless discussion of the actual data and of the fact that we “still do not know exactly how many trials have been done”, only to end the discussion by saying that “Even though these figures must be seen as more illustrative than precise…”. And then you think: so what? I don’t really get the point of this part of their article.

 

Fig. 2: The number of published trials, 1950 to 2007.

With regard to Figure 2 they say for instance:

The differences between the numbers of trial records in MEDLINE and CCTR (CENTRAL) (see Figure 2) have multiple causes. Both CCTR and MEDLINE often contain more than one record from a single study, and there are lags in adding new records to both databases. The NLM filters are probably not as efficient at excluding non-trials as are the methods used to compile CCTR. Furthermore, MEDLINE has more language restrictions than CCTR. In brief, there is still no single repository reliably showing the true number of randomised trials. Similar difficulties apply to trying to estimate the number of systematic reviews and health technology assessments (HTAs).

Sorry, but although some of these points may be true, Bastian et al. don’t go into the main reason for the difference between the two graphs, i.e. the higher number of trial records in CCTR (CENTRAL) than in MEDLINE: the difference can simply be explained by the fact that CENTRAL contains records from MEDLINE as well as from many other electronic databases and from hand-searched materials (see this post).
With respect to the other details: I don’t know which NLM filter they refer to, but if they mean the narrow therapy filter: this filter is specifically meant to find randomized controlled trials, and it is far more specific and less sensitive than the Cochrane methodological filters for retrieving controlled clinical trials. In addition, MEDLINE does not have more language restrictions per se: it just contains an (extensive) selection of journals. (Plus, people more easily apply language limits in MEDLINE, but that is beside the point.)

Elsewhere the authors say:

In Figures 2 and 3 we use a variety of data sources to estimate the numbers of trials and systematic reviews published from 1950 to the end of 2007 (see Text S1). The number of trials continues to rise: although the data from CCTR suggest some fluctuation in trial numbers in recent years, this may be misleading because the Cochrane Collaboration virtually halted additions to CCTR as it undertook a review and internal restructuring that lasted a couple of years.

As I recall it, the situation is like this: until 2005 the Cochrane Collaboration ran the so-called “retag project”, in which controlled clinical trials were searched for in MEDLINE and EMBASE (with a very broad methodological filter). All controlled trial articles were loaded into CENTRAL, and the NLM retagged the controlled clinical trials that weren’t tagged with the appropriate publication type in MEDLINE. The Cochrane Collaboration stopped the laborious retag project in 2005, but still continues the (now) monthly electronic search updates performed by the various Cochrane groups (for their own topics only). They also still continue handsearching. So they didn’t (virtually?!) halt additions to CENTRAL, although it seems likely that stopping the retag project caused the plateau. Again, the authors’ main points are overshadowed by not very accurate details.

Some interesting points in this paragraph:

  • We still do not know exactly how many trials have been done.
  • For a variety of reasons, a large proportion of trials have remained unpublished (publication bias!) (note: Cochrane reviews try to lower this kind of bias by applying no language limits and by including unpublished data, e.g. conference proceedings)
  • Many trials have been published in journals without being electronically indexed as trials, which makes them difficult to find. (note: this has improved tremendously since the CONSORT statement, an evidence-based minimum set of recommendations for reporting RCTs, and through the Cochrane retag project discussed above)
  • Astonishing growth has occurred in the number of reports of clinical trials since the middle of the 20th century, and in reports of systematic reviews since the 1980s—and a plateau in growth has not yet been reached.
  • Trials are now registered in prospective trial registers at inception, theoretically enabling an overview of all published and unpublished trials (note: this will also make it easier to find out why data were not published, or whether primary outcomes were altered)
  • Once the International Committee of Medical Journal Editors announced that their journals would no longer publish trials that had not been prospectively registered, far more ongoing trials were being registered per week (200 instead of 30). In 2007, the US Congress made detailed prospective trial registration legally mandatory.

The authors do not discuss that better reporting of trials and the retag project might have facilitated the indexing and retrieval of trials.

How Close Are We to Archie Cochrane’s Goal?

According to the authors there are various reasons why Archie Cochrane’s goal will not be achieved without some serious changes in course:

  • The increase in systematic reviews didn’t displace other less reliable forms of information (Figs 3 and 4)
  • Only a minority of trials have been assessed in systematic reviews
  • The workload involved in producing reviews is increasing
  • The bulk of systematic reviews are now many years out of date.

Where to Now?

In this paragraph the authors discuss what should be changed:

  • Prioritize trials
  • Wider adoption of the concept that trials will not be supported unless an SR has shown the trial to be necessary.
  • Prioritizing SR’s: reviews should address questions that are relevant to patients, clinicians and policymakers.
  • Choose between elaborate reviews that answer part of the relevant questions or “leaner” reviews of most of what we want to know. Apparently the authors have already chosen the latter: they prefer:
    • shorter and less elaborate reviews
    • faster production and updating of SR’s
    • no unnecessary inclusion of study types other than randomized trials (unless the review concerns less common adverse effects)
  • More international collaboration and thereby a better use  of resources for SR’s and HTAs. As an example of a good initiative they mention “KEEP Up,” which will aim to harmonise updating standards and aggregate updating results, initiated and coordinated by the German Institute for Quality and Efficiency in Health Care (IQWiG) and involving key systematic reviewing and guidelines organisations such as the Cochrane Collaboration, Duodecim, the Scottish Intercollegiate Guidelines Network (SIGN), and the National Institute for Health and Clinical Excellence (NICE).

Summary and comments

The main aim of this paper is to discuss to what extent the medical profession has managed to make “critical summaries, by speciality or subspeciality, adapted periodically, of all relevant randomized controlled trials”, as proposed 30 years ago by Archie Cochrane.

The emphasis of the paper is mostly on the number of trials and systematic reviews, not on qualitative aspects. Furthermore, there is too much emphasis on the methods used to determine the number of trials and reviews.

The main conclusion of the authors is that an astonishing growth has occurred in the number of reports of clinical trials as well as in the number of SR’s, but that these systematic pieces of evidence are dwarfed by the unsystematic narrative reviews and case reports being published. That is an important, but not an unexpected, conclusion.

Bastian et al. don’t address whether systematic reviews have made the growing number of trials easier to access or digest. Neither do they go into developments that have facilitated the retrieval of clinical trials and aggregate evidence from databases like PubMed: the Cochrane retag project, the CONSORT statement, and the existence of publication types and search filters (which they themselves use to filter out trials and systematic reviews). They also skip sources other than systematic reviews that make it easier to find the evidence: databases with evidence-based guidelines, the TRIP database, Clinical Evidence.
As Clay Shirky said: “It’s Not Information Overload. It’s Filter Failure.”

It is also good to note that case reports and narrative reviews serve other aims. For medical practitioners rare case reports can be very useful in clinical practice, and good narrative reviews can be valuable for getting an overview of a field or for keeping up to date. You just have to know when to look for what.

Bastian et al. have several suggestions for improvement, but these suggestions are not always substantiated. For instance, they propose access to all systematic reviews and trials. Perfect. But how can this be attained? We could stimulate authors to publish their trials as open access papers. For Cochrane reviews this would be desirable but difficult, as we cannot demand that authors who work for months, for free, to write an SR also pay the publication costs themselves. The Cochrane Collaboration is an international organization that does not receive subsidies for this. So how could this be achieved?

In my opinion, we can expect the most important benefits from prioritizing trials and SR’s, faster production and updating of SR’s, more international collaboration and less duplication. It is a pity the authors do not mention projects other than “KEEP Up”. As discussed in previous posts, the Cochrane Collaboration also recognizes many of the issues raised in this paper, and aims to speed up updates and to produce evidence on priority topics (see here and here). Evidence Aid is an example of a successful effort. But this is only the Cochrane Collaboration. Many more non-Cochrane systematic reviews are produced.

And then we arrive at the next issue: not all systematic reviews are created equal. There are a lot of so-called “systematic reviews” that are not the conscientious, explicit and judiciously created syntheses of evidence they ought to be.

Therefore, I do not think that the proposal that each single trial should be preceded by a systematic review is a very good idea.
In the Netherlands writing an SR is already required for NWO grants. In practice, people just approach me, as a searcher, in the days before Christmas, intending to submit the grant proposal (including the SR) early in January. This is evidently a fast procedure, but it does not result in a high-standard SR upon which others can rely.

Another point is that this simple and fast production of SR’s will only lead to a larger increase in the number of SR’s, an effect the authors wanted to prevent.

Of course it is necessary to get a (reliable) picture of what has already been done and to prevent unnecessary duplication of trials and systematic reviews. The best solution would be a triplet (nano-publication)-like repository of the trials and systematic reviews already done.

Ideally, researchers and doctors should first check such a database for existing systematic reviews. Only if no recent SR is available should they go on to write an SR themselves. Perhaps it sometimes suffices to search for trials and write a short synthesis.

There is another point I do not agree with. I do not think that SR’s of interventions should only include RCT’s. We should include those study types that are relevant. If RCT’s furnish clear proof, then RCT’s are all we need. But sometimes – or in some topics/specialties – RCT’s are not available. Including other study designs and rating them with GRADE (proposed by Guyatt) gives a better overall picture (also see the post: #notsofunny: ridiculing RCT’s and EBM).

The authors strive for simplicity. However, the real world isn’t that simple. In this paper they have limited themselves to evidence of the effects of health care interventions. Finding and assessing prognostic, etiological and diagnostic studies is methodologically even more difficult. Still, many clinicians have these kinds of questions. Therefore systematic reviews of other study designs (diagnostic accuracy or observational studies) are also of great importance.

In conclusion, whereas I do not agree with all points raised, this paper touches upon a lot of important issues and achieves what can be expected from a discussion paper:  a thorough shake-up and a lot of discussion.

References

  1. Bastian, H., Glasziou, P., & Chalmers, I. (2010). Seventy-Five Trials and Eleven Systematic Reviews a Day: How Will We Ever Keep Up? PLoS Medicine, 7(9). DOI: 10.1371/journal.pmed.1000326





I’ve got Good News and I’ve got Bad News

26 01 2010

If someone tells you: “I’ve got Good News and I’ve got Bad News”, you probably ask this person: “Well, tell me the bad news first!”

Laika’s MedLibLog has good and bad news for you.

The bad news is that this blog didn’t make it to the finals of the sixth annual Medical Weblog Awards, organized by Medgadget (see earlier post).

The Good news is that this keeps me from the stress that inevitably comes with following the stats and seeing how your blog is lagging more and more behind. Plus you don’t have to waste time desperately trying to mobilize your husband to just press the *$%# vote button (choosing the right person: me), no matter how many times he says he doesn’t care a bit – (“and wouldn’t it be better to spend less time on blogging anyway?”)

This reminds me of something I’ve tried to suppress, namely that this blog didn’t make it to the shortlists of the Dutch Bloggies 2009 either (see Laika’s MedLibLog on the Longlist of the DutchBloggies!)

The good news is that many high-quality blogs did make it to the finals, including The Blog that Ate Manhattan, Clinical Cases and Images, Musings of a Distractible Mind (Best Medical Weblog), other things amanzi (Best Literary Medical Weblog), Allergy Notes, Clinical Cases and Images, Life in the Fast Lane (Best Clinical Sciences Weblog), and ScienceRoll (Best Medical Technologies/Informatics Weblog).

Best of all, the superb blog I nominated for Best Medical Weblog, Dr Shock MD PhD, made it to the finals as well!!

But it is hard to understand that blogs like EverythingHealth and Body in Mind with many nominations are not among the finalists. That underlines that contests are very subjective, but so are individual preferences for blogs. It is all in the game.

Anyway you can start voting for your favorite blogs tomorrow. Please have a look at the finalists here at Medgadget, so you can decide who deserves your votes.

Finally I would like to conclude with positive news concerning this blog. This week’s “Cochrane in the news” features the post on Cochrane Evidence Aid. It is on the Cochrane homepage today.





#FollowFriday #FF the EBM-Skeptics @cochranecollab @EvidenceMatters @oracknows @ACPinternists

27 11 2009

FollowFriday is a Twitter tradition in which Twitter users recommend other users to follow (on Friday) by tweeting their name(s), the hashtag #FF or #FollowFriday, and the reason for their recommendation(s).

Since the roll out of Twitter lists I add the #FollowFriday Recommendations to a (semi-)permanent #FollowFriday Twitter list: @laikas/followfridays-ff

This week I have added 4 people to the #FollowFriday list who all tweet about EBM and/or are skeptics and/or belong to the Cochrane Collaboration. Since there are many interesting people in this field, I also made a separate Twitter list: @laikas/ebm-cochrane-sceptics

The following people have been added to both my #followfridays-ff (n=36) and ebm-cochrane-sceptics (n=46) lists. If you are on Twitter you can follow these lists.
I’m sure I have forgotten somebody. If I did, let me know and I’ll see if I can include that person.

All 4 tweople have tweeted about the new and much-discussed breast cancer screening guidelines.

  1. @ACPinternists* is the Communications Department of the American College of Physicians (ACP). I know ACP from the ACP Journal Club, with its excellent critically appraised topics, in a section of the well-known Annals of Internal Medicine. The uproar over the new U.S. breast cancer screening guidelines started with the publication of 3 articles in Ann Intern Med.
    *Mmm, when I come to think of it, shouldn’t @ACPinternists be added to the biomedical journals Twitter lists as well?
  2. @EvidenceMatters is really an invaluable tweeter with a high output of many different kinds of tweets, often (no surprise) related to Evidence Based Medicine. He (?) is very inspiring. My post “screening can’t hurt, can it” was inspired by one of his tweets.
  3. @cochranecollab stands for the Cochrane Collaboration. Like @acpinternists the tweets are mostly unidirectional, but provide interesting information related to EBM and/or the Cochrane Collaboration. Disclosure: I’m not entirely neutral.
  4. @oracknows. Who doesn’t know Orac? Orac is “a (not so) humble pseudonymous surgeon/scientist with an ego just big enough to delude himself that someone might actually care about his miscellaneous”. His tweets are valuable because of his high quality posts on his blog Respectful Insolence: Orac mostly uses Twitter as a publication platform. I really can recommend his excellent explanation of the new breast cancer guidelines.





Role of Consumer Networks in Evidence Based Health Information

11 11 2009

Guest author: Janet Wale
member of the Cochrane Consumer Network

People are still struggling with evidence, or modern medicine – clinicians, patients, health consumers, carers and the public alike. Part of this is because we always thought medicine was based on quality research, or evidence. It is not only that. For evidence to be used most effectively in healthcare systems, researchers, clinicians and ‘the existing or potential patients and carers’ have to communicate and resonate with each other – to share knowledge and responsibilities, both in developing the evidence and in individual decision making. At the broader population level, this may include consultation, but it is best achieved by developing partnerships.

The Cochrane Collaboration develops a large number of the published systematic reviews of best evidence on healthcare interventions, available electronically on The Cochrane Library. Systematic reviews are integral to the collation of evidence to inform clinical practice guidelines. They are also an integral part of health technology assessments, where the cost-effectiveness of healthcare interventions is determined for a particular health system.

With the availability of the Internet we are able to share information readily. We are also acutely aware of the disadvantage faced by many of the world’s populations. What this has meant is pooled effort. Now we have not only the World Health Organization but also The Cochrane Collaboration, the Guidelines International Network, and Health Technology Assessment International. What is common among these organizations? They involve the users of health care, including patients, consumers and carers. The latter three organizations have a formal consumer/patient and citizen group that informs their work. In this way we work to make the evidence relevant, accessible and used. We all have to be discerning about whatever knowledge we are given and apply it to ourselves.

This is  a short post on request.
It also appeared as a comment at:
http://e-patients.net/archives/2009/11/tell-the-fda-the-whole-story-please.html





#Cochrane Colloquium 2009: Better Working Relationship between Cochrane and Guideline Developers

19 10 2009

Last week I attended the annual Cochrane Colloquium in Singapore. I will summarize some of the meetings.

Here is a summary of an interesting (parallel) special session: Creating a closer working relationship between Cochrane and Guideline Developers. This session was brought together as a partnership between the Guidelines International Network (G-I-N) and The Cochrane Collaboration to look at the current experience of guideline developers and their use of Cochrane reviews (see abstract).

Emma Tavender of the EPOC Australian Satellite, Australia, reported on the survey carried out by the UK Cochrane Centre to identify the use of Cochrane reviews in guidelines produced in the UK (I did not attend this presentation).

Pwee Keng Ho, Ministry of Health, Singapore, is leading the Health Technology Assessment (HTA) and guideline development program of the Singapore Ministry of Health. He spoke about the issues faced as a guideline developer using Cochrane reviews, or – in his own words – his task was “to summarize whether guideline developers like Cochrane systematic reviews or not”.

Keng Ho presented the results of 3 surveys of different guideline developers. Most surveys had very few respondents: 12-29, if I remember correctly.

Each survey had approximately the same questions, but in a different order. On the face of it, the 3 surveys gave the same picture.

Main points:

  • some guideline developers are not familiar with Cochrane Systematic Reviews
  • others have no access to it.
  • of those who are familiar with the Cochrane Reviews and do have access to it, most found the Cochrane reviews useful and reliable. (in one survey half of the respondents were neutral)
  • most importantly they actually did use the Cochrane reviews for most of their guidelines.
  • these guideline developers also used the Cochrane methodology to make their guidelines (whereas most physicians are not inclined to use the exhaustive search strategies and systematic approach of the Cochrane Collaboration)
  • An often-heard critique from guideline developers concerned the non-comprehensive coverage of topics by Cochrane Reviews. However, unlike guideline developers in Western countries, the Singapore Ministry of Health mentioned acupuncture and herbs as missing topics (for certain diseases).

This incomplete coverage, caused by a choice of subjects that is not demand-driven, was a recurrent topic at this meeting and a main issue recognized by the entire Cochrane community. Priority setting for Cochrane systematic reviews was therefore one of the main topics addressed at this Colloquium and in the Cochrane strategic review.

Kay Dickersin of the US Cochrane Center, USA, reported on the issues raised at the stakeholders meeting held in June 2009 in the US (see here for agenda) on whether systematic reviews can effectively inform guideline development, with a particular focus on areas of controversy and debate.

The Stakeholder summit concentrated on using quality SR’s for guidelines. This is different from effectiveness research, for which the Institute of Medicine (IOM) sets the standards: local and specialist guidelines require a different expertise and approach.

All kinds of people are involved in the development of guidelines, e.g. nurses, consumers and physicians.
Important issues to address, point by point:

  • Some may not understand the need to be systematic
  • How to get physicians on board: they are not very comfortable with extensive searching and systematic work
  • Ongoing education, like how-to workshops, is essential
  • What to do if there is no evidence?
  • More transparency; handling conflicts of interest
  • Guidelines differ, including the rating of the evidence. Almost everyone in the Stakeholders meeting used GRADE to grade the evidence, but not as it was originally described. There were numerous variations on the same theme. One question is whether there should be one system or not.
  • Another – recurrent – issue was that guidelines should be made actionable.

Here are podcasts covering the meeting

Gordon Guyatt, McMaster University, Canada, gave  an outline of the GRADE approach and the purpose of ‘Summary of Findings’ tables, and how both are perceived by Cochrane review authors and guideline developers.

Gordon Guyatt, whose magnificent book “Users’ Guides to the Medical Literature” (JAMAevidence) lies on my desk, was clearly in favor of adherence to the original GRADE guidelines. Forty organizations have adopted these GRADE guidelines.

GRADE stands for the “Grading of Recommendations Assessment, Development and Evaluation” system. It is used for grading evidence when submitting a clinical guidelines article. Six articles in the BMJ are specifically devoted to GRADE (see here for one (full text); and 2 (PubMed)). GRADE not only takes the rigor of the methods into account, but also the balance between the benefits and the risks, burdens, and costs.

Suppose a guideline were to recommend thrombolysis to treat disease X, because good-quality small RCTs show thrombolysis to be slightly but significantly more effective than heparin in this disease. By relying only on direct evidence from the RCT’s, however, it is not taken into account that observational studies have long shown that thrombolysis increases the risk of massive bleeding in diseases Y and Z. Clearly the risk of harm is the same in disease X: both benefits and harms should be weighed.
Guyatt gave several other examples illustrating the importance of grading the evidence and the understandable overview presented in the Summary of Findings Table.

Another issue is that guideline makers are distressingly ready to embrace surrogate endpoints instead of outcomes that are more relevant to the patient. For instance it is not very meaningful if angiographic outcomes are improved, but mortality or the recurrence of cardiovascular disease are not.
GRADE takes into account whether indirect evidence is used: it downgrades the evidence rating. Downgrading also occurs in the case of low-quality RCT’s or when benefits and harms do not clearly trade off against each other.

Guyatt pleaded for uniform use of GRADE, and advised everybody to get comfortable with it.

Although I must say that it can feel somewhat uncomfortable to attach absolute rates to non-absolute differences. These are really man-made formulas that people have agreed upon. On the other hand, it is a good thing that it is not only the outcomes of the RCT’s with respect to benefits (sometimes of surrogate markers) that count.

A final remark of Guyatt’s: “Everybody makes the claim they are following an evidence-based approach, but you have to teach them what that really means.”
Indeed, many people describe their findings and/or recommendations as evidence based, because “EBM sells well”, but upon closer examination many reports are hardly worth the name.





Cochrane 2.0 Workshop at the Cochrane Colloquium #CC2009

12 10 2009

Today Chris Mavergames and I held a workshop at the Cochrane Colloquium, entitled “Web 2.0 for Cochrane” (see the previous post and the abstract of the workshop).

First I gave an introduction to Medicine 2.0 and (thus) Web 2.0. Chris, Web Operations Manager and Information Architect of the Cochrane Collaboration, talked more about which Web 2.0 tools are already used by the Cochrane Collaboration and which other Web 2.0 tools might be useful.

We had half an hour for discussion, which was easily filled. In this group there was no doubt about the usefulness of Web 2.0 for the Cochrane Collaboration. Therefore, there was ample room for discussing technical aspects, like:

  • Can you load your RSS feed of a PubMed search in Reference Manager? (According to Chris you can)
  • How can you deal with this flood of information? (By following a specific subject, or not too many people – then there are not too many updates on a daily basis; you don’t have to follow it all, just pick up the headlines when you can)
  • Are you involved in a Wiki that is successful? (it appears very difficult to involve people)
  • What happens if people comment or upload pictures on the Facebook page (of the Cochrane Collaboration) in an inappropriate way? (Chris: it hasn’t happened, but you have to check and remove them)
  • How do you follow tweets? (we showed Tweetdeck, hashtags (#) and #followfridays)
  • What is the worst thing that has happened to you (regarding Web 2.0)? Chris and I thought for a long time. Chris: that I revealed something that wasn’t officially public yet (though it appeared to be o.k.). Me: spam (but I remove it/don’t approve it).
    Later I remembered two better (worse) examples: the “Clinical Reader” social misbehaviour, a good example of how “branding” should not be done, and sites that publish top 50 and top 100 lists of bloggers just to get more traffic to their spam websites

Below is my presentation on Slideshare.

The (awful) green background color indicates that I went “live” on the web. As a reminder of what I did, I included some screenshots.

The current workshop was just meant to introduce and discuss Medicine 2.0 and Cochrane 2.0.

I hope we have a lively discussion on Wednesday, when the plenary lectures deal with Cochrane 2.0.

The answers to my question on Twitter

  1. Why Web 2.0 is useful? (or not)
  2. Why we need Cochrane 2.0? (or not)

can be found on Visibletweets (temporary) and saved as: Quoteurl.com/sggq0 (permanent selection).

I think it would be good if these points were taken into account during the Cochrane 2.0 plenary discussions.

* possible WIKI (+ links) might appear at http://medicine20.wetpaint.com/page/Cochrane+2.0





This week I will blog from…..

10 10 2009

Picture taken by Chris Mavergames: http://twitpic.com/kxrnl

Chris and I will facilitate a web 2.0 workshop for the Cochrane (see here, for all workshops see here).
The entire program can be viewed at the Cochrane Colloquium site.

Chris Mavergames, Web Operations Manager and Information Architect of the Cochrane Collaboration, will also give a plenary presentation entitled: “Cochrane for the Twitter generation: inserting ourselves into the ‘conversation’”.

The session has the promising title: The Cochrane Library – brave new world?

Here is the introductory text of the session:

The Cochrane Collaboration is not unique in facing a considerable challenge to the way it packages and disseminates healthcare information. The proliferation of communication platforms and social networking sites provides opportunities to reach new audiences, but how far can or should the Collaboration go in embracing these new media? In this session we hear from speakers who are at the heart of the discussions about The Cochrane Library’s future direction, including the Library’s Editor in Chief. We finish the session with reflections on the week’s discussions with respect to the Strategic Review (…)

Request (for the workshop, not the plenary session):
If you’re on Twitter, could you please tell the participants of the (small) Web 2.0 workshop your opinion on the following, using the hashtag #CC20.*

  1. Why Web 2.0 is useful? (or not)
  2. Why we need Cochrane 2.0? (or not)

An example of such an answer (from @Berci):

#CC20 Web 2.0 opens up the world and eases communication. Cochrane 2.0 is needed bc such an important database should have a modern platform

If you don’t have Twitter you can add your comment here and I will post it for you (if you leave a name).

Thanks for all who have contributed so far.

—–

*This is only for our small-scale workshop; I propose to use #CC2009 for the conference itself.





New Cochrane Handbook: altered search policies

14 11 2008

The Cochrane Handbook for Systematic Reviews of Interventions is the official document that describes in detail the process of preparing and maintaining Cochrane systematic reviews on the effects of healthcare interventions.

The current version of the Handbook, 5.0.1 (updated September 2008), is available either for purchase from John Wiley & Sons, Ltd or as a download for members of The Cochrane Collaboration only (via the Collaboration’s information management system, Archie).
Version 5.0.0, updated February 2008, is freely available in browseable format here. It should be noted, however, that this version is not as up to date as version 5.0.1. The methodological search filters, for instance, are not completely identical.

As an information specialist I will concentrate on Chapter 6: Searching for studies.

This chapter consists of the following sections:

  • 6.1 Introduction
  • 6.2 Sources to search
  • 6.3 Planning the search process
  • 6.4 Designing search strategies
  • 6.5 Managing references
  • 6.6 Documenting and reporting the search process
  • 6.7 Chapter information
  • 6.8 References

As in the previous versions, the essence of the Cochrane search is to perform a comprehensive (sensitive) search for relevant studies (RCTs) in order to minimize bias. The most prominent changes are:

1. More emphasis on the central role of the Trials Search Co-ordinator (TSC) in the search process.
Practically every section summary begins with advice to consult the TSC, e.g. in 6.1: Cochrane review authors should seek advice from the Trials Search Co-ordinator of their Cochrane Review Group (CRG) before starting a search.

One of the main roles of TSC’s is assisting authors with searching, although the range of assistance may vary from advice on how to run searches to designing, running and sending the searches to authors.

I know from experience that most authors do not have enough search literacy to be able to complete the entire search satisfactorily on their own. Not all librarians may be equipped to perform such exhaustive searches either. That is why the Handbook says: “If a CRG is currently without a Trials Search Co-ordinator authors should seek the guidance of a local healthcare librarian or information specialist, where possible one with experience of conducting searches for systematic reviews.”

Another core function of the TSC is the development and maintenance of the Specialized Register, containing all relevant studies in the group’s area of interest, and the submission of this register to CENTRAL (The Cochrane Central Register of Controlled Trials) on a quarterly basis. CENTRAL is the most comprehensive source of reports of controlled trials (~500,000 records), available in “The Cochrane Library” (where it is called CLINICAL TRIALS). CENTRAL is available to all Cochrane Library subscribers, whereas the Specialized Register is only available via the TSC.


Redrawn from the Handbook Fig. 6.3.a: The contents of CENTRAL

2. Trial registers are therefore an increasingly important source of information. CENTRAL is considered to be the best single source of reports of trials that might be eligible for inclusion in Cochrane reviews. However, contrary to what would be expected (at least by many authors), a search of MEDLINE (PubMed) alone is not considered adequate.

The approach now is: Specialized Registers/CENTRAL and MEDLINE should be searched as a minimum, together with EMBASE if it is available (apart from topic-specific databases and snowballing). MEDLINE should be searched from 2005 onwards, since CENTRAL contains all records from MEDLINE indexed with the publication type ‘Randomized Controlled Trial’ or ‘Controlled Clinical Trial’ (a substantial proportion of these MEDLINE records have been retagged as a result of the work of The Cochrane Collaboration (Dickersin 2002)).

Personally, for non-Cochrane searches, I would rather search the other way around: MEDLINE (OVID) first, then EMBASE (OVID) and finally CENTRAL, and deduplicate the results afterwards (in Reference Manager, for instance). The (Wiley) Cochrane Library is not easy to search for non-experienced users (e.g. you have to know the MeSH terms beforehand; there is as yet no mapping). If you start your search in MEDLINE (OVID) you can easily translate it to EMBASE and subsequently CENTRAL (using MeSH and EMBASE keywords as well as textwords).

3. The full search strategies for each database searched need to be included in an appendix, with the total number of hits retrieved by the electronic searches included in the Results section. Indeed, the reporting has been very variable, with some authors referring only to the general search strategy of their group. This made the search part of reviews less transparent.

4. Two new Cochrane Highly Sensitive Search Strategies for identifying randomized trials in MEDLINE have been developed: a sensitivity-maximizing version and a sensitivity- and precision-maximizing version. These filters (which are to be combined with the subject search) were designed for MEDLINE-indexed records. Therefore, a separate search is needed to find non-indexed records as well. An EMBASE RCT filter is still under development.
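
To give a flavour of what such a filter looks like, here is a sketch of the sensitivity-maximizing version as I remember it, rendered in PubMed rather than Ovid syntax and combined with an arbitrary example subject search; check Chapter 6 of the Handbook for the authoritative wording before using it in a real review.

```python
# Sketch only: an approximation (from memory) of the sensitivity-maximizing RCT filter
# in PubMed syntax. Verify against Handbook Chapter 6 before relying on it.
RCT_FILTER = (
    "(randomized controlled trial[pt] OR controlled clinical trial[pt] "
    "OR randomized[tiab] OR placebo[tiab] OR drug therapy[sh] "
    "OR randomly[tiab] OR trial[tiab] OR groups[tiab]) "
    "NOT (animals[mh] NOT humans[mh])"
)

subject = "asthma AND inhaled corticosteroids"  # hypothetical subject search
full_query = f"({subject}) AND ({RCT_FILTER})"
print(full_query)  # paste into PubMed, or feed to an E-utilities call as sketched earlier
```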

These methodological filters will be exhaustively discussed in another post.





CC (2) Duodecim: Connecting patients (and doctors) to the best-evidence

5 10 2008

This is the second post in the series Cochrane Colloquium (CC) 2008.

In the previous post, I mentioned a very interesting opening session.

Here I will summarize one of the presentations in that opening session, i.e. the presentation by Pekka Mustonen, called:

Connecting patients to the best-evidence through technology: An effective solution or “the great seduction”?

Pekka essentially showed us what the Finnish have achieved with their Duodecim database.

Duodecim was started as a health portal for professionals only. It is a database (a decision support system) made by doctors for doctors. It contains Evidence-Based Medicine (EBM) Guidelines with:

  • regularly updated recommendations
  • links to evidence, including guidelines and Cochrane Systematic Reviews
  • commentaries

Busy clinicians don’t have the time to perform an extensive search for the best available evidence each time they have a clinical question. Ideally, they would only have to carry out one search, taking no more than one minute, to find the right information.

This demand seems to be reasonably met by Duodecim.

Notably, Duodecim is not only very popular as a source for clinicians and nurses; the guidelines are also actually read and followed by them. Those familiar with healthcare know that this is the main obstacle: getting doctors and nurses to actually use guidelines.

According to Pekka, patients are even more important than doctors for implementing guidelines: half of all patients don’t seem to follow their doctor’s advice. If the advice is to stay on inhaled steroids for the long-term management of asthma, for instance, many patients won’t follow it. “When you reach patients, small changes can have large benefits”, he said.

However, although many patients rely on the internet to find health information, formal health information sites face fierce competition there, and it is difficult for consumers to separate the wheat from the chaff.

Still, Duodecim has managed to make a website for the general public that is now as popular as the original physicians’ database is among doctors, the only difference being that doctors use the database continuously, whereas the general public consults it only when confronted with a health problem.
The database contains 1000 EBM key articles, and the content is integrated with personal health records. The site looks rather straightforward, neither glitzy nor flashy – intentionally, in order to look like a serious and trustworthy professional health care site.

A survey revealed that Duodecim performed a lot better than Google in answering health care questions, and it does lead to more people either deciding NOT to consult a physician (because they are reassured) or deciding to consult one (because the symptoms might be more serious than thought). Thus it can make a difference!

The results are communicated differently to patients than to doctors. For instance, whether it is useful to wear stockings during long-haul flights to prevent deep venous thrombosis (DVT) in patients who have either a low or a high risk of thrombosis is explained to the physician in terms of RR, ARR, RRR and NNT (a small worked example follows after the two lists below).
Patients see a table with red columns (high-risk patients) and green columns (low-risk patients). The conclusions are translated as follows:

If 1000 patients with a low risk of DVT wear stockings on long-haul flights:

  • 9 will avoid it
  • 1 will get it
  • 1 out of 1000 (will get it)
  • 990 use stockings in vain

If 1000 patients at high risk for DVT wear stockings on long-haul flights:

  • 27 will avoid it
  • 3 will get it
  • 1 out of 333 (will get it)
  • 970 use stockings in vain
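
For comparison with the professional-facing numbers (RR, ARR, RRR, NNT), here is a small worked calculation derived from the figures above. The baseline risks (10 per 1000 low-risk and 30 per 1000 high-risk travellers) are inferred from those figures, not quoted from Duodecim itself.

```python
# Worked example: ARR, RRR and NNT derived from the patient-friendly numbers above.
# Baseline risks (10/1000 and 30/1000) are back-calculated, not quoted from Duodecim.
def summarize(label: str, risk_without: float, risk_with: float) -> None:
    arr = risk_without - risk_with   # absolute risk reduction
    rrr = arr / risk_without         # relative risk reduction
    nnt = 1 / arr                    # number needed to treat
    print(f"{label}: ARR={arr:.3f}, RRR={rrr:.0%}, NNT={nnt:.0f}")

summarize("Low risk ", 10 / 1000, 1 / 1000)   # ARR=0.009, RRR=90%, NNT=111
summarize("High risk", 30 / 1000, 3 / 1000)   # ARR=0.027, RRR=90%, NNT=37
```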

This database will be integrated with permanent health records and virtual health checks. It is also linked to a TV program aimed at changing people’s way of living. Online you can take a life expectancy test to see what age you would reach if you continued your lifestyle as it is (compare “je echte leeftijd”, “your real age” [Dutch]).

“What young people don’t realize”, Pekka said, is that most older people find that the best part of life starts at the age of 60 (?!). So it doesn’t end at 30, as most youngsters think. But young people will only notice this if they reach old age in good health. To do so, they must change their habits while still young.

The Finnish database is free for Finnish people.

Quite coincidentally (while asking for a free USB stick at the Wiley stand 😉) I found out that Wiley’s database EBM Guidelines links to the Duodecim platform (see below). It might be interesting to take out a trial subscription, I think.

(Although this presumably is only the professional part of Duodecim, not the patient-oriented database.)





Attend the virtual Cochrane Colloquium

5 10 2008

The annual Cochrane Colloquium is now ongoing in Freiburg, Germany. The theme is “Evidence in the Era of Globalisation.”

As many readers may already know, the Cochrane Collaboration (CC) is an international, independent, not-for-profit organization, dedicated to making up-to-date, accurate information about the effects of healthcare readily available worldwide. It produces and disseminates systematic reviews of healthcare interventions and promotes the search for evidence in the form of clinical trials and other studies of interventions (see Glossary).

The yearly Cochrane Colloquium is meant for members of the CC, and those interested in the organization.

For those that cannot attend the meeting, there is an opportunity to virtually view the following items:

To go to the individual virtual items you can click one of the items above.

You can also go to “Welcome” by following this link and go to the Virtual Colloquium. It is easier to switch to another item from there.

It should be noted that there is also a lot that is not covered by the virtual media: the meetings and workshops (of course), as well as the (non-plenary) oral sessions and even the very interesting opening session with the following speakers: Gerd Antes (German Cochrane Centre), Tikki Pang (WHO) and Pekka Mustonen (Duodecim).

An overview of the colloquium program can be found here.





Thesis Mariska Leeflang: Systematic Reviews of Diagnostic Test Accuracy.

22 08 2008

While I was on vacation, Mariska Leeflang received her PhD. The ceremony was on July 1st, 2008.

Her thesis is entitled: Systematic Reviews of Diagnostic Test Accuracy.

Mariska is a colleague working (part time) at the Dutch Cochrane Centre (DCC). She studied veterinary science in Utrecht, but gradually noticed that she was more interested in research than in veterinary practice. Four years ago she applied for a job at the department of Clinical Epidemiology, Biostatistics and Bioinformatics (KEBB) at the Amsterdam Academic Medical Centre (AMC). Having a CV with all kinds of odd subjects, like livestock and courses on delivering anesthetic drugs from a distance, she thought she would never make it, but she did.

Those 4 years have been very fruitful. She did research on diagnostic accuracy, is a member of the Cochrane Diagnostic Test Accuracy Working Group and is first author of one of the Cochrane pilot reviews of diagnostic test accuracy (chapter 7 of the thesis). [Note: Cochrane Diagnostic Test Accuracy Reviews are a new initiative; until recently all Cochrane systematic reviews were about healthcare interventions.]
Mariska also supports authors of Cochrane systematic reviews, has given many presentations and has led many workshops. In fact, she also gave in-service training in diagnostic studies to our group of clinical librarians, and together we have given several courses on Evidence Based Medicine and Systematic Reviews. In her leisure time she is Chair of “Stichting DIO” (Veterinary Science & Development Cooperation).

She will continue to work for the Cochrane Collaboration, including the DCC, but has also accepted a job at the Royal Tropical Institute (http://www.kit.nl).

Because of her background, Mariska often gives her work a light “vet” touch.

“The cover of her thesis, for instance, is inspired by Celtic artwork and reflects the process of a systematic review: parts become a whole. The anthropomorphic (human-like) and zoomorphic (animal-like) creatures represent the background of the author. The stethoscopes and the corners refer specifically to diagnostic test accuracy reviews. The snakes eating their own tails stand in Celtic mythology for longevity and the ever-lasting life cycle.”

Also, she often closes her presentations with a slide showing swimming pigs, the pig being a symbol of “luck”.

So I would like to close this post in turn by wishing Mariska: “Good Luck”

Thesis: ISBN: 978-90-9023139-6
Digital Version at : http://dare.uva.nl
Index (I’ll come back to chapters 1 and 2 another time):
Chapter 1: Systematic Reviews of Diagnostic Test Accuracy – New Developments within The Cochrane Collaboration – Submitted
Chapter 2: The use of methodological search filters to identify diagnostic accuracy studies can lead to the omission of relevant studies – J Clin Epidemiol. 2006;59(3):234-40
Chapter 3: Impact of adjustment for quality on results of meta-analyses of diagnostic accuracy – Clin Chem. 2007;53(2):164-72
Chapter 4: Bias in sensitivity and specificity caused by data driven selection of optimal cut-off values: mechanisms, magnitude and solutions – Clin Chem. 2008; 54(4):729-37
Chapter 5: Diagnostic accuracy may vary with prevalence: Implications for evidence-based diagnosis – Accepted by J Clin Epidemiol
Chapter 6: Accuracy of fibronectin tests for the prediction of pre-eclampsia: a systematic review – Eur J Obstet Gynecol Reprod Biol. 2007;133(1):12-9
Chapter 7: Galactomannan detection for the diagnosis of invasive aspergillosis in immunocompromized patients. A Cochrane Review of Diagnostic Test Accuracy – Conducted as a pilot Cochrane Diagnostic Test Accuracy review

——————-

Mariska Leeflang received her PhD from the University of Amsterdam on July 1st, 2008, on the subject “Systematic Reviews of Diagnostic Test Accuracy”.

Mariska is actually a colleague of mine. We work together part time at the Dutch Cochrane Centre (DCC). She studied veterinary science in Utrecht, but gradually realized that she would rather be a researcher than a practising veterinarian. When she applied four years ago for a job at the department of Clinical Epidemiology, Biostatistics and Bioinformatics (KEBB) of the AMC, she gave herself little chance, with subjects like grassland management and a course in ‘anesthetizing from a distance’ on her CV. But she was hired. And rightly so!

Those 4 years have been very fruitful. She did diagnostic research, is a member of the Cochrane Diagnostic Test Accuracy Working Group and is first author of a pilot diagnostic accuracy review (chapter 7 of the thesis). Cochrane systematic reviews of diagnostic accuracy studies are a new type of systematic review, alongside the existing Cochrane reviews of interventions.
Mariska has given many presentations and workshops, also within the Cochrane Collaboration. She even gave our clinical librarians extra training in the field of diagnostic studies. Together with her I teach EBM courses and the course “Systematic Reviews” for Cochrane authors. In her spare time she is chair of the Stichting DIO (Veterinary Medicine in Development Cooperation).

She will continue to work for the Cochrane Collaboration, but recently she also started working 2 days a week at the Royal Tropical Institute (KIT).

Because of her background as a veterinarian, Mariska often makes a link to animals in her work.

The cover of her thesis, which is based on Celtic artwork, depicts the process of a systematic review as follows: all the parts together become a whole. The human-like and animal-like creatures represent Mariska’s background. The stethoscopes and the corners stand for the diagnostic test accuracy reviews. The snakes eating their own tails stand in Celtic mythology for longevity and the ever-lasting life cycle.

She also often closes her presentations with a picture of swimming piglets, which stand for “luck”.

That seems a fitting close here as well: Good luck, Mariska!!





Two new Cochrane Groups

8 05 2008

Two groups have officially joined the Cochrane Collaboration: the Cochrane Public Health Review Group and the Cochrane Prognosis Methods Group.

The Cochrane Public Health Review group belongs to the Cochrane Review Groups, i.e. groups that produce Cochrane Reviews in specific medical topic areas.

The Cochrane Prognosis Methods Group will be the 13th Cochrane Methods Group. This group will have two primary roles: (1) work with existing Cochrane entities, including Methods Groups, to ensure the best use of prognostic evidence in Cochrane reviews; and (2) conduct research to advance the methods of prognosis reviews and other types of reviews where similar methods apply.

By calling into existence Methods Groups like the Prognosis, Adverse Effects, Screening and Diagnostic Tests, and Qualitative Research Methods Groups, the Cochrane Collaboration will no longer concentrate solely on systematic reviews of randomized controlled trials of interventions. That focus has been a major criticism of Cochrane systematic reviews.

For people not familiar with the structure of the Cochrane Collaboration, see the schematic picture below or follow this link

******************************************************

De Cochrane Collaboration heeft er 2 nieuwe groepen bij, de Cochrane Public Health Review Group en de Cochrane Prognosis Methods Group.

Zoals de naam al zegt is de Cochrane Public Health Review Group een Cochrane Review Group, d.w.z. een groep die Cochrane Reviews schrijft over een bepaald medisch onderwerp, in dit geval dus volksgezondheid.

De Cochrane Prognosis Methods Group is de 13e Cochrane Method Group. Deze groep ondersteunt andere Cochrane groepen zodat ze evidence op het gebied van prognose goed implementeren en voert onderzoek uit om de de methodologie van prognostische reviews te verbeteren.

It is a good thing that the Cochrane Collaboration has created the Prognosis Methods Group, as well as several other groups such as the Cochrane Adverse Effects, Cochrane Screening and Diagnostic Tests and Cochrane Qualitative Research Methods Groups. In doing so it addresses the often-voiced criticism that Cochrane Systematic Reviews focus too much on the usefulness of interventions and rely 'only' on (randomized) controlled trials.

For those not familiar with the structure of the Cochrane Collaboration, see the picture above and this link.





FREE online course on evidence-based healthcare

14 04 2008

The U.S. Cochrane Center has launched a new online course on evidence-based health care.

Although the free course is expressly designed for consumer advocates, it is open and available to anyone seeking competence in evidence-based medicine. The course also seems perfectly suited for medical librarians who want to become more acquainted with EBM principles and critical appraisal; at least in the Netherlands I know many librarians who would like to improve their EBM skills. The course consists of six modules that illustrate key concepts in evidence-based health care through real-world examples. In all, the modules comprise 5 to 6 hours of lectures and case studies, divided into 10- to 15-minute segments. The course is expected to be completed within three months of registration.

The course objectives are listed in the picture above.

The six modules are as follows:
  • Module 1. INTRO: What is evidence-based healthcare and why is it important?
  • Module 2. ASK: The importance of research questions in evidence based healthcare.
  • Module 3. ALIGN: Research design, bias and levels of evidence.
  • Module 4. ACQUIRE: Searching for healthcare information. Assessing harms and benefits.
  • Module 5. APPRAISE: Behind the numbers: Understanding healthcare statistics. Science, speed and the search for best evidence.
  • Module 6. APPLY: Critical appraisal and making better decisions for evidence-based healthcare. Determining causality.

More information at:

———————————–

The US Cochrane Centre has developed an online course for "consumers" (patients and others who use health care) to help them better understand the basic principles of evidence-based medicine. After all, a starting point of the Cochrane Collaboration is that patients learn to recognise the requirements a good study has to meet, so that they too can make well-considered choices about matters that affect their own health. Although the free online course was designed for this target group, anyone can take part. The course seems to me eminently suitable for medical information specialists/clinical librarians who want to expand their EBM knowledge and skills.





Cochrane Library underused in the US??

10 02 2008

While browsing around I have come across several interesting blogs. One of them is the Krafty Librarian's. Everything she has to report in my area of interest is well worth knowing. I have even already managed to get an RSS feed of her blog, so I am a day ahead of the course (I accidentally clicked the RSS feed button and was automatically offered a choice from umpteen RSS readers, from which I more or less at random downloaded Google's – and it worked – for a single feed).

What I have selected can be read under "Laika's selecties" (Laika's selections) in the right-hand column, an option that Google offers.

Among other things, the Krafty Librarian pointed to a recent article in Obstetrics & Gynecology. I want to highlight it here because it ties in so nicely with the request for financial support to make the Cochrane Library accessible to everyone in Europe. Methodologically it is not very strong, but it does put its finger on the sore spot: (1) not all so-called evidence-based reviews are evidence based, and (2) how do you make sure that the evidence (neatly laid out in a systematic review) is actually found and used?

Grimes DA, Hou MY, Lopez LM, Nanda K. Do clinical experts rely on the Cochrane Library? Obstet Gynecol 2008;111:420-2.

Abstract:
In part because of limited public access, Cochrane reviews are underused in the United States compared with other developed nations. To assess use of these reviews by opinion leaders, we examined citation of Cochrane reviews in the Clinical Expert Series of Obstetrics & Gynecology from inception through June of 2007. We reviewed all 54 articles for mention of Cochrane reviews, then searched for potentially relevant Cochrane reviews that the authors could have cited. Thirty-six of 54 Clinical Expert Series articles had one or more relevant Cochrane reviews published at least two calendar quarters before the Clinical Expert Series article. Of these 36 articles, 19 (53%) cited one or more Cochrane reviews. We identified 187 instances of relevant Cochrane reviews, of which 40 (21%) were cited in the Clinical Expert Series articles. No temporal trends were evident in citation of Cochrane reviews. Although about one half of Clinical Expert Series articles cited relevant Cochrane reviews, most eligible reviews were not referenced. Wider use of Cochrane reviews could strengthen the scientific basis of this popular series.

In short: the Clinical Expert Series in Obstetrics & Gynecology is a popular series devoted to practical evidence-based overviews in this field, set against the clinical expertise of the author (an opinion leader). The authors examined, over a defined period, in how many of these articles relevant Cochrane reviews were cited and in how many they were wrongly left uncited. Only 21% of the relevant reviews were cited, which is remarkable, given that Cochrane reviews are regarded as a very high level of evidence.

How can this be explained? According to Grimes et al.:

  • Some authors are unaware of the Cochrane Library. This seems unlikely, because Cochrane abstracts have been included in PubMed since 2000.
  • Although authors are asked to base their manuscripts on evidence, they are not explicitly asked to search the Cochrane Library for relevant reviews.
  • Limited access to the full text of Cochrane Reviews. The authors, however, work at medical schools, which in general do have access to the Cochrane Library.
  • A preference for citing the primary sources (specific RCTs, randomized controlled trials) rather than the systematic reviews that summarise those RCTs.
  • Cochrane Reviews were found but not considered relevant.
  • Cochrane Reviews are not user-friendly (format, emphasis on methodology and inaccessible wording). [9]

……. the following excerpt illustrates what happens when important evidence is not found:

Despite their clinical usefulness, Cochrane systematic reviews of randomized controlled trials are underused in the United States. For example, a Cochrane review documenting that magnesium sulfate is ineffective as a tocolytic agent received little attention in the United States and Canada, where this treatment has dominated practice for several decades.[10] This therapy had been abandoned in other industrialized nations, where access to the Cochrane Library is easier.[11] Citizens of many countries have free online access to the Cochrane Library through governmental or other funding. In the United States, only Wyoming residents have public access through libraries, thanks to funding by its State Legislature.

9. Rowe BH, Wyer PC, Cordell WH. Evidence-based emergency medicine. Improving the dissemination of systematic reviews in emergency medicine. Ann Emerg Med 2002;39:293-5.
10. Crowther CA, Hiller JE, Doyle LW. Magnesium sulphate for preventing preterm birth in threatened preterm labour. Cochrane Database Syst Rev 2002;(4):CD001060.
11. Grimshaw J. So what has the Cochrane Collaboration ever done for us? A report card on the first 10 years. CMAJ 2004;171:747-9.