How will we ever keep up with 75 Trials and 11 Systematic Reviews a Day?

6 10 2010

An interesting paper was published in PLOS Medicine [1]. As an information specialist working part time for the Cochrane Collaboration* (see below), I hold this topic close to my heart.

The paper is written by Hilda Bastian and two of my favorite EBM devotees and critics, Paul Glasziou and Iain Chalmers.

Their article gives a good overview of the rise in the number of trials, of systematic reviews (SRs) of interventions, and of medical papers in general. The paper (published as a Policy Forum) raises some important issues, but the message is not as sharp and clear as usual.

Take the title for instance.

Seventy-Five Trials and Eleven Systematic Reviews a Day:
How Will We Ever Keep Up?

What do you consider its most important message?

  1. That doctors suffer from an information overload that is only going to get worse: my own first impression, and probably also, in part, that of @kevinclauson, who tweeted about it to medical librarians
  2. That the solution to this information overload consists of Cochrane systematic reviews (because they aggregate the evidence from individual trials), as @doctorblogs tweeted
  3. That it is just about “too many systematic reviews (SRs)?”, the title of the PLOS press release (so the other way around)
  4. That it is about too much of everything, and about the not always good quality of SRs: @kevinclauson and @pfanderson discussed that they both use the same “#Cochrane Disaster” (see Kevin’s blog) in their teaching
  5. That Archie Cochrane’s* dream is unachievable and ought perhaps to be replaced by something less Utopian (comment by Richard Smith, former editor of the BMJ: points 1, 3, 4 and 5 together, plus a new aspect: SRs should not only include randomized controlled trials (RCTs))

The paper reads easily, but matters of importance are often only touched upon. Even after reading it twice, I wondered: a lot is being said, but what is really their main point, and what are their answers and suggestions?

But let’s look at their arguments and pieces of evidence. (Black text is from their paper; blue text contains my remarks.)

The landscape

I often start my presentations on “searching for evidence” by showing the figure to the right, taken from an older PLOS article, which illustrates the information overload. Sometimes I also show another slide, with data that are 5–10 years older, saying that there are 55 trials a day, 1,400 new records added per day to MEDLINE, and 5,000 biomedical articles a day. I also add that specialists have to read 17–22 articles a day to keep up to date with the literature. GPs have to read even more, because they are generalists. So those 75 trials, and the resulting information overload, are not really a shock to me.

Indeed, the authors start by saying that “keeping up with information in health care has never been easy.” They give an interesting overview of the driving forces behind the increase in trials, and of the introduction of SRs and critical appraisals to synthesize the evidence from all individual trials and so overcome the information overload (SRs and other forms of aggregate evidence decrease the number needed to read).

In Box 1 they give an overview of the earliest systematic reviews. These SRs often had a great impact on medical practice (see, for instance, an earlier discussion on the role of the CRASH trial and of the first Cochrane review).
They also touch upon the founding of the Cochrane Collaboration. The Cochrane Collaboration is named after Archie Cochrane, who “reproached the medical profession for not having managed to organise a ‘critical summary, by speciality or subspecialty, adapted periodically, of all relevant randomised controlled trials’.” He inspired the establishment of the international Oxford Database of Perinatal Trials, and he encouraged the use of systematic reviews of randomized controlled trials (RCTs).

A timeline with some of the key events is shown in Figure 1.

Where are we now?

The second section shows many interesting graphs (Figs. 2–4).

Annoyingly, PLOS only allows one-sentence legends. The details are in a Word supplement, without proper referral to the actual figure numbers. Grrrr..! This is completely unnecessary in reviews/editorials/policy forums. And, as said, annoying, because you have to read a Word file to understand where the data actually come from.

Bastian et al. have used MEDLINE’s publication types (i.e., Case Reports [pt], Review [pt], Controlled Clinical Trial [pt]) and search filters (the Montori SR filter and the Haynes narrow therapy filter, which is built into PubMed’s Clinical Queries) to estimate the yearly rise in the number of each study type. The total numbers of clinical trials in CENTRAL (the largest database of controlled clinical trials, abbreviated as CCTR in the article) and of reviews in the Cochrane Database of Systematic Reviews (CDSR) are easy to retrieve, because these numbers are published quarterly (now monthly) by the Cochrane Library. By definition, CDSR contains only SRs, and CENTRAL (as I prefer to call it) contains almost exclusively controlled clinical trials.
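As an aside for fellow searchers: yearly counts of this kind can be approximated with NCBI’s E-utilities. The sketch below is my own illustration, not code from the paper, and it assumes the plain publication-type tag rather than the exact Montori or Haynes filter strings; it builds an eSearch URL that returns only the hit count for one publication type in one publication year.

```python
# Sketch: count PubMed records of one publication type per year,
# roughly the way Bastian et al. estimated yearly trial numbers.
from urllib.parse import urlencode

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def yearly_count_url(pub_type: str, year: int) -> str:
    """Build an eSearch URL; rettype=count asks for the hit count only."""
    term = f"{pub_type}[pt] AND {year}[dp]"
    return ESEARCH + "?" + urlencode(
        {"db": "pubmed", "term": term, "rettype": "count"}
    )

# One URL per year gives the trend line of, e.g., published RCTs:
urls = [yearly_count_url("Randomized Controlled Trial", y)
        for y in range(1950, 2008)]
print(urls[-1])
```

Fetching each URL returns XML whose Count element holds the number of hits; plotting those counts per year would reproduce the shape, though certainly not the exact figures, of their Figure 2.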

In short, these are the conclusions from their three figures:

  • Fig 2: The number of published trials rose sharply from 1950 to 2010
  • Fig 3: The number of systematic reviews and meta-analyses has risen tremendously as well
  • Fig 4: But systematic reviews and clinical trials are still far outnumbered by narrative reviews and case reports.

O.k., that’s clear, and they raise a good point: an “astonishing growth has occurred in the number of reports of clinical trials since the middle of the 20th century, and in reports of systematic reviews since the 1980s—and a plateau in growth has not yet been reached.”
Plus, indirectly: the increase in systematic reviews did not lead to a lower number of trials and narrative reviews. Thus the information overload is still increasing.
But instead of discussing these findings, they go into an endless discussion of the actual data and the fact that we “still do not know exactly how many trials have been done,” only to end the discussion by saying that “even though these figures must be seen as more illustrative than precise…” And then you think: so what? I do not really see the point of this part of their article.

 

Fig. 2: The number of published trials, 1950 to 2007.

 

 

With regard to Figure 2 they say for instance:

The differences between the numbers of trial records in MEDLINE and CCTR (CENTRAL) (see Figure 2) have multiple causes. Both CCTR and MEDLINE often contain more than one record from a single study, and there are lags in adding new records to both databases. The NLM filters are probably not as efficient at excluding non-trials as are the methods used to compile CCTR. Furthermore, MEDLINE has more language restrictions than CCTR. In brief, there is still no single repository reliably showing the true number of randomised trials. Similar difficulties apply to trying to estimate the number of systematic reviews and health technology assessments (HTAs).

Sorry, but although some of these points may be true, Bastian et al. do not go into the main reason for the difference between the two graphs, that is, the higher number of trial records in CCTR (CENTRAL) than in MEDLINE: the difference can simply be explained by the fact that CENTRAL contains records from MEDLINE as well as from many other electronic databases and from hand-searched materials (see this post).
With respect to other details: I do not know which NLM filter they refer to, but if they mean the narrow therapy filter, this filter is specifically meant to find randomized controlled trials, and it is far more specific, and less sensitive, than the Cochrane methodological filters for retrieving controlled clinical trials. In addition, MEDLINE does not have more language restrictions per se: it just contains an (extensive) selection of journals. (Plus, people more easily use language limits in MEDLINE, but that is beside the point.)

Elsewhere the authors say:

In Figures 2 and 3 we use a variety of data sources to estimate the numbers of trials and systematic reviews published from 1950 to the end of 2007 (see Text S1). The number of trials continues to rise: although the data from CCTR suggest some fluctuation in trial numbers in recent years, this may be misleading because the Cochrane Collaboration virtually halted additions to CCTR as it undertook a review and internal restructuring that lasted a couple of years.

As I recall it, the situation is like this: until 2005 the Cochrane Collaboration ran the so-called “retag project,” in which it searched for controlled clinical trials in MEDLINE and EMBASE (with a very broad methodological filter). All controlled trial articles were loaded into CENTRAL, and the NLM retagged the controlled clinical trials that had not been tagged with the appropriate publication type in MEDLINE. The Cochrane Collaboration stopped the laborious retag project in 2005, but it still continues the (now) monthly electronic search updates performed by the various Cochrane groups (for their own topics only), as well as handsearching. So they did not (virtually?!) halt additions to CENTRAL, although it seems likely that stopping the retag project caused the plateau. Again, the authors’ main points are dwarfed by not very accurate details.

Some interesting points in this paragraph:

  • We still do not know exactly how many trials have been done.
  • For a variety of reasons, a large proportion of trials have remained unpublished (negative publication bias!) (note: Cochrane reviews try to lower this kind of bias by applying no language limits and by including unpublished data, e.g., conference proceedings)
  • Many trials have been published in journals without being electronically indexed as trials, which makes them difficult to find (note: this has improved tremendously since the CONSORT statement, an evidence-based minimum set of recommendations for reporting RCTs, and since the Cochrane retag project discussed above)
  • Astonishing growth has occurred in the number of reports of clinical trials since the middle of the 20th century, and in reports of systematic reviews since the 1980s—and a plateau in growth has not yet been reached.
  • Trials are now registered in prospective trial registers at inception, theoretically enabling an overview of all published and unpublished trials (note: this will also make it easier to find out the reasons for not publishing data, or for the alteration of primary outcomes)
  • Once the International Committee of Medical Journal Editors announced that their journals would no longer publish trials that had not been prospectively registered, far more ongoing trials were registered per week (200 instead of 30). In 2007, the US Congress made detailed prospective trial registration legally mandatory.

The authors do not discuss that better reporting of trials and the retag project may have facilitated the indexing and retrieval of trials.

How Close Are We to Archie Cochrane’s Goal?

According to the authors there are various reasons why Archie Cochrane’s goal will not be achieved without some serious changes in course:

  • The increase in systematic reviews did not displace other, less reliable forms of information (Figs. 3 and 4)
  • Only a minority of trials have been assessed in a systematic review
  • The workload involved in producing reviews is increasing
  • The bulk of systematic reviews are now many years out of date.

Where to Now?

In this paragraph the authors discuss what should be changed:

  • Prioritize trials
  • Wider adoption of the concept that trials will not be supported unless an SR has shown the trial to be necessary
  • Prioritize SRs: reviews should address questions that are relevant to patients, clinicians and policymakers
  • Choose between elaborate reviews that answer part of the relevant questions and “leaner” reviews of most of what we want to know. Apparently the authors have already opted for the latter: they prefer:
    • shorter and less elaborate reviews
    • faster production and updating of SRs
    • no unnecessary inclusion of study types other than randomized trials (unless the question concerns less common adverse effects)
  • More international collaboration, and thereby better use of resources for SRs and HTAs. As an example of a good initiative they mention “KEEP Up,” which will aim to harmonise updating standards and aggregate updating results, initiated and coordinated by the German Institute for Quality and Efficiency in Health Care (IQWiG) and involving key systematic reviewing and guidelines organisations such as the Cochrane Collaboration, Duodecim, the Scottish Intercollegiate Guidelines Network (SIGN), and the National Institute for Health and Clinical Excellence (NICE).

Summary and comments

The main aim of this paper is to discuss to what extent the medical profession has managed to make “critical summaries, by speciality or subspeciality, adapted periodically, of all relevant randomized controlled trials,” as proposed 30 years ago by Archie Cochrane.

The emphasis of the paper is mostly on the number of trials and systematic reviews, not on qualitative aspects. Furthermore, there is too much emphasis on the methods used to determine the numbers of trials and reviews.

The authors’ main conclusion is that astonishing growth has occurred in the number of reports of clinical trials as well as in the number of SRs, but that these systematic pieces of evidence shrink into insignificance compared with the unsystematic narrative reviews and case reports published. That is an important, but not an unexpected, conclusion.

Bastian et al. do not address whether systematic reviews have made the growing number of trials easier to access or digest. Neither do they go into developments that have facilitated the retrieval of clinical trials and aggregate evidence from databases like PubMed: the Cochrane retag project, the CONSORT statement, and the existence of publication types and search filters (which they themselves use to filter out trials and systematic reviews). They also skip sources other than systematic reviews that make it easier to find the evidence: databases with evidence-based guidelines, the TRIP database, and Clinical Evidence.
As Clay Shirky said: “It’s Not Information Overload. It’s Filter Failure.”

It is also good to note that case reports and narrative reviews serve other aims. For medical practitioners rare case reports can be very useful for their clinical practice and good narrative reviews can be valuable for getting an overview in the field or for keeping up-to-date. You just have to know when to look for what.

Bastian et al. have several suggestions for improvement, but these suggestions are not always substantiated. For instance, they propose access to all systematic reviews and trials. Perfect. But how can this be attained? We could encourage authors to publish their trials in open access journals. For Cochrane reviews this would be desirable but difficult, as we cannot ask authors who work for months, for free, on an SR to also pay the publication fees themselves. The Cochrane Collaboration is an international organization that does not receive subsidies for this. So how could this be achieved?

In my opinion, we can expect the most important benefits from the prioritizing of trials and SRs, faster production and updating of SRs, more international collaboration, and less duplication. It is a pity the authors do not mention projects other than “KEEP Up.” As discussed in previous posts, the Cochrane Collaboration also recognizes many of the issues raised in this paper, and it aims to speed up updates and to produce evidence on priority topics (see here and here). Evidence Aid is an example of a successful effort. But this is only the Cochrane Collaboration; many more non-Cochrane systematic reviews are produced.

And then we arrive at the next issue: not all systematic reviews are created equal. There are a lot of so-called “systematic reviews” that are not the conscientiously, explicitly and judiciously created syntheses of evidence they ought to be.

Therefore, I do not think the proposal that each single trial should be preceded by a systematic review is a very good idea.
In the Netherlands, writing an SR is already required for NWO grants. In practice, people just approach me, as a searcher, in the days before Christmas, with the idea of submitting the grant proposal (including the SR) early in January. This is evidently a fast procedure, but it does not result in a high-standard SR upon which others can rely.

Another point is that such simple and fast production of SRs will only lead to a larger increase in the number of SRs, an effect the authors wanted to prevent.

Of course it is necessary to get a (reliable) picture of what has already been done, and to prevent unnecessary duplication of trials and systematic reviews. The best solution would be a triplet (nano-publication)-like repository of the trials and systematic reviews that have been done.

Ideally, researchers and doctors would first check such a database for existing systematic reviews. Only if no recent SR is present should they go on to write an SR themselves. Perhaps it sometimes suffices to search for trials and write a short synthesis.

There is another point I do not agree with. I do not think that SRs of interventions should include only RCTs. We should include those study types that are relevant. If RCTs furnish clear proof, then RCTs are all we need. But sometimes, or for some topics and specialties, RCTs are not available. Including other study designs and rating them with GRADE (proposed by Guyatt) gives a better overall picture. (Also see the post #NotSoFunny: ridiculing RCTs and EBM.)

The authors strive for simplicity. However, the real world is not that simple. In this paper they have limited themselves to evidence of the effects of health care interventions. Finding and assessing prognostic, etiological and diagnostic studies is methodologically even more difficult, yet many clinicians have exactly these kinds of questions. Therefore, systematic reviews of other study designs (diagnostic accuracy studies or observational studies) are also of great importance.

In conclusion, although I do not agree with all the points raised, this paper touches upon a lot of important issues and achieves what can be expected from a discussion paper: a thorough shake-up and a lot of discussion.

References

  1. Bastian, H., Glasziou, P., & Chalmers, I. (2010). Seventy-Five Trials and Eleven Systematic Reviews a Day: How Will We Ever Keep Up? PLoS Medicine, 7 (9) DOI: 10.1371/journal.pmed.1000326





I’ve got Good News and I’ve got Bad News

26 01 2010

If someone tells you: “I’ve got Good News and I’ve got Bad News”, you probably ask this person: “Well, tell me the bad news first!”

Laika’s MedLibLog has good and bad news for you.

The Bad News is, that this blog didn’t make it to the Finals of the sixth annual Medical Weblog Awards, organized by Medgadget. (see earlier post)

The Good news is that this keeps me from the stress that inevitably comes with following the stats and seeing how your blog is lagging more and more behind. Plus you don’t have to waste time desperately trying to mobilize your husband to just press the *$%# vote button (choosing the right person: me), no matter how many times he says he doesn’t care a bit – (“and wouldn’t it be better to spend less time on blogging anyway?”)

This reminds me of something I’ve tried to suppress, namely that this blog didn’t make it to the shortlists of the Dutch Bloggies 2009 either (see Laika’s MedLibLog on the Longlist of the DutchBloggies!)

The Good news is that many high quality blogs did make it to the finals. Including The Blog that Ate Manhattan, Clinical Cases and Images, Musings of a Distractible Mind (Best Medical Weblog) , other things amanzi (Best Literary Medical Weblog), Allergy Notes, Clinical Cases and Images, Life in the Fast Lane (Best Clinical Sciences Weblog), ScienceRoll (Best Medical Technologies/Informatics Weblog).

Best of all, the superb blog I nominated for Best Medical Weblog, Dr Shock MD PhD, made it to the finals as well!!

But it is hard to understand that blogs with many nominations, like EverythingHealth and Body in Mind, are not among the finalists. That underlines that contests are very subjective, but so are individual preferences for blogs. It is all in the game.

Anyway, you can start voting for your favorite blogs tomorrow. Please have a look at the finalists at Medgadget, so you can decide who deserves your votes.

Finally I would like to conclude with positive news concerning this blog. This week’s “Cochrane in the news” features the post on Cochrane Evidence Aid. It is on the Cochrane homepage today.





#FollowFriday #FF the EBM-Skeptics @cochranecollab @EvidenceMatters @oracknows @ACPinternists

27 11 2009

FollowFriday is a twitter tradition in which twitter users recommend other users to follow (on Friday) by twittering their name(s), the hashtags #FF or #FollowFriday, and the reason for their recommendation(s).

Since the rollout of Twitter lists, I have been adding the #FollowFriday recommendations to a (semi-)permanent #FollowFriday Twitter list: @laikas/followfridays-ff

This week I have added four people to the #FollowFriday list who all tweet about EBM, and/or are skeptics, and/or belong to the Cochrane Collaboration. Since there are many interesting people in this field, I have also made a separate Twitter list: @laikas/ebm-cochrane-sceptics

The following people have been added to both my #followfridays-ff (n=36) and ebm-cochrane-sceptics (n=46) lists. If you are on Twitter you can follow these lists.
I’m sure I have forgotten somebody. If I have, let me know and I’ll see if I can include that person.

All four tweople have tweeted about the new and much-discussed breast cancer screening guidelines.

  1. @ACPinternists* is the Communications Department of the American College of Physicians (ACP). I know the ACP from the ACP Journal Club, with its excellent critically appraised topics, a section of the well-known Annals of Internal Medicine. The uproar over the new U.S. breast cancer screening guidelines started with the publication of three articles in Ann Intern Med.
    *Mmm, come to think of it, shouldn’t @ACPinternists be added to the biomedical journals Twitter lists as well?
  2. @EvidenceMatters is a truly invaluable tweeter with a high output of many different kinds of tweets, often (no surprise) related to Evidence Based Medicine. He (?) is very inspiring. My post “Screening can’t hurt, can it?” was inspired by one of his tweets.
  3. @cochranecollab stands for the Cochrane Collaboration. Like @acpinternists, the tweets are mostly unidirectional, but they provide interesting information related to EBM and/or the Cochrane Collaboration. Disclosure: I’m not entirely neutral.

You may also want to read:





Role of Consumer Networks in Evidence Based Health Information

11 11 2009

Guest author: Janet Wale
member of the Cochrane Consumer Network

People are still struggling with evidence, or modern medicine – clinicians, patients, health consumers, carers and the public alike. This is partly because we always thought medicine was based on quality research, or evidence. It is not only that. For evidence to be used most effectively in healthcare systems, researchers, clinicians and ‘the existing or potential patients and carers’ have to communicate and resonate with each other – to share knowledge and responsibilities, both in developing the evidence and in individual decision making. At the broader population level, this may include consultation, but it is best achieved by developing partnerships.

The Cochrane Collaboration develops a large number of the published systematic reviews of best evidence on healthcare interventions, available electronically on The Cochrane Library. Systematic reviews are integral to the collation of evidence to inform clinical practice guidelines. They are also an integral part of health technology assessments, where the cost-effectiveness of healthcare interventions is determined for a particular health system.

With the availability of the Internet we can readily share information. We are also acutely aware of the disadvantage experienced by many of the world’s populations. What this has meant is pooled efforts. Now we have not only the World Health Organization but also The Cochrane Collaboration, the Guidelines International Network, and Health Technology Assessment International. What is common among these organizations? They involve the users of health care, including patients, consumers and carers. The latter three organizations have a formal consumer/patient and citizen group that informs their work. In this way we work to make the evidence relevant, accessible, and used. We all have to be discerning about whatever knowledge we are given, and apply it to ourselves.

This is a short post, written on request.
It also appeared as a comment at:
http://e-patients.net/archives/2009/11/tell-the-fda-the-whole-story-please.html





#Cochrane Colloquium 2009: Better Working Relationship between Cochrane and Guideline Developers

19 10 2009

Last week I attended the annual Cochrane Colloquium in Singapore. I will summarize some of the meetings.

Here is a summary of an interesting (parallel) special session: Creating a closer working relationship between Cochrane and Guideline Developers. This session was brought together as a partnership between the Guidelines International Network (G-I-N) and The Cochrane Collaboration to look at the current experience of guideline developers and their use of Cochrane reviews (see abstract).

Emma Tavender, of the EPOC Australian Satellite, Australia, reported on the survey carried out by the UK Cochrane Centre to identify the use of Cochrane reviews in guidelines produced in the UK (I did not attend this presentation).

Pwee Keng Ho, Ministry of Health, Singapore, is leading the Health Technology Assessment (HTA) and guideline development program of the Singapore Ministry of Health. He spoke about the issues faced as a guideline developer using Cochrane reviews, or, in his own words, his task was “to summarize whether guideline developers like Cochrane systematic reviews or not.”

Keng Ho presented the results of three surveys of different guideline developers. Most surveys had very few respondents: 12–29, if I remember correctly.

Each survey had approximately the same questions, but in a different order. On the face of it, the 3 surveys gave the same picture.

Main points:

  • some guideline developers are not familiar with Cochrane systematic reviews
  • others have no access to them
  • of those who are familiar with Cochrane reviews and do have access to them, most found the reviews useful and reliable (in one survey half of the respondents were neutral)
  • most importantly, they actually did use Cochrane reviews for most of their guidelines
  • these guideline developers also used the Cochrane methodology to make their guidelines (whereas most physicians are not inclined to use the exhaustive search strategies and systematic approach of the Cochrane Collaboration)
  • an often-heard critique from guideline developers concerned the non-comprehensive coverage of topics by Cochrane reviews. However, unlike developers in Western countries, the Singapore Ministry of Health mentioned acupuncture and herbs as missing topics (for certain diseases).

This incomplete coverage, caused by a choice of subjects that is not demand-driven, was a recurrent topic at this meeting and a main issue recognized by the entire Cochrane community. Therefore, priority setting for Cochrane systematic reviews is one of the main topics addressed at this Colloquium and in the Cochrane strategic review.

Kay Dickersin of the US Cochrane Center, USA, reported on the issues raised at the stakeholders meeting held in June 2009 in the US (see here for agenda) on whether systematic reviews can effectively inform guideline development, with a particular focus on areas of controversy and debate.

The Stakeholder summit concentrated on using quality SRs for guidelines. This is different from effectiveness research, for which the Institute of Medicine (IOM) sets the standards: local and specialist guidelines require a different expertise and approach.

All kinds of people are involved in the development of guidelines, i.e. nurses, consumers, physicians.
Important issues to address, point by point:

  • Some may not understand the need to be systematic
  • How to get physicians on board: they are not very comfortable with extensive searching and systematic work
  • Ongoing education, like how-to workshops, is essential
  • What to do if there is no evidence?
  • More transparency; handling conflicts of interest
  • Guidelines differ, including in how they rate the evidence. Almost everyone at the Stakeholders meeting used GRADE to grade the evidence, but not as it was originally described: there were numerous variations on the same theme. One question is whether there should be a single system or not.
  • Another recurrent issue was that guidelines should be made actionable.

Here are podcasts covering the meeting.

Gordon Guyatt, McMaster University, Canada, gave an outline of the GRADE approach and the purpose of ‘Summary of Findings’ tables, and of how both are perceived by Cochrane review authors and guideline developers.

Gordon Guyatt, whose magnificent book “Users’ Guides to the Medical Literature” (JAMA-Evidence) lies on my desk, was clearly in favor of adherence to the original GRADE guidelines. Forty organizations have adopted these guidelines.

GRADE stands for the “Grading of Recommendations Assessment, Development and Evaluation” system. It is used for grading evidence when submitting a clinical guidelines article. Six articles in the BMJ are specifically devoted to GRADE (see here for one (full text); and 2 (PubMed)). GRADE takes into account not only the rigor of the methods, but also the balance between the benefits and the risks, burdens, and costs.

Suppose a guideline recommended thrombolysis to treat disease X because good-quality small RCTs show thrombolysis to be slightly but significantly more effective than heparin in this disease. By relying only on direct evidence from the RCTs, one ignores that observational studies have long shown that thrombolysis increases the risk of massive bleeding in diseases Y and Z. The risk of harm is presumably the same in disease X: both benefits and harms should be weighed.
Guyatt gave several other examples illustrating the importance of grading the evidence, and of the understandable overview presented in the Summary of Findings table.

Another issue is that guideline makers are distressingly ready to embrace surrogate endpoints instead of outcomes that are more relevant to the patient. For instance, it is not very meaningful if angiographic outcomes improve while mortality and the recurrence of cardiovascular disease do not.
GRADE takes into account whether indirect evidence is used: it downgrades the evidence rating. Downgrading also occurs in the case of low-quality RCTs, or when the benefits do not clearly outweigh the harms.

Guyatt pleaded for uniform use of GRADE, and advised everybody to get comfortable with it.

I must say, though, that it can feel somewhat uncomfortable to assign absolute ratings to non-absolute differences: these are man-made formulas that people have agreed upon. On the other hand, it is a good thing that not only the RCT outcomes with respect to benefits (sometimes surrogate markers) count.

A final remark of Guyatt: “Everybody makes the claim they are following an evidence-based approach, but you have to teach them what that really means.”
Indeed, many people say their findings and/or recommendations are evidence based, because “EBM sells well”, but upon closer examination many reports are hardly worth the name.





Cochrane 2.0 Workshop at the Cochrane Colloquium #CC2009

12 10 2009

Today Chris Mavergames and I held a workshop at the Cochrane Colloquium, entitled:  Web 2.0 for Cochrane (see previous post and abstract of the workshop)

First I gave an introduction to Medicine 2.0 and (thus) Web 2.0. Chris, Web Operations Manager and Information Architect of the Cochrane Collaboration, talked about which Web 2.0 tools are already used by the Cochrane Collaboration and which others might be useful.

We had half an hour for discussion, which was easily filled. There was no doubt in this group about the usefulness of Web 2.0 for the Cochrane Collaboration. Therefore, there was ample room for discussing practical aspects, like:

  • Can you load the RSS feed of a PubMed search into Reference Manager? (According to Chris you can.)
  • How can you deal with such a lot of information? (Follow a specific subject, or not too many people; you don’t have to follow it all, just pick up the headlines when you can.)
  • Are you involved in a wiki that is successful? (It appears very difficult to get people involved.)
  • What happens if people comment or upload pictures on the Cochrane Collaboration’s Facebook page in an inappropriate way? (Chris: it hasn’t happened yet, but you have to check and remove such content.)
  • How do you follow tweets? (We showed TweetDeck, hashtags (#) and #followfriday.)
  • What is the worst thing that happened to you (regarding Web 2.0)? Chris and I had to think a long time. Chris: that I revealed something that wasn’t officially public yet (though it appeared to be o.k.). Me: spam (but I remove it / don’t approve it).
    Later I remembered two better (or rather worse) examples: the “Clinical Reader” social misbehaviour, a good example of how “branding” should not be done, and sites that publish top 50 and top 100 lists of bloggers just to draw more traffic to their spam websites.
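On the first question above: the idea is that a PubMed search can be saved as an RSS feed, and the feed’s XML can then be parsed to extract citations for import into a reference manager. A minimal sketch of that parsing step, using Python’s standard library rather than Reference Manager itself; the feed snippet is a made-up example of the RSS 2.0 structure such feeds use:

```python
import xml.etree.ElementTree as ET

# Hypothetical sample of an RSS 2.0 feed as produced by a saved PubMed search.
SAMPLE_FEED = """<rss version="2.0"><channel>
  <title>PubMed search: cochrane reviews</title>
  <item><title>Trial A: thrombolysis vs heparin</title>
        <link>https://pubmed.ncbi.nlm.nih.gov/000001/</link></item>
  <item><title>Systematic review B</title>
        <link>https://pubmed.ncbi.nlm.nih.gov/000002/</link></item>
</channel></rss>"""

def feed_items(feed_xml):
    """Return (title, link) pairs for each item in an RSS 2.0 feed."""
    root = ET.fromstring(feed_xml)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

for title, link in feed_items(SAMPLE_FEED):
    print(title, "->", link)
```

A real workflow would fetch the feed URL from PubMed and export the items in a format the reference manager can import (e.g. RIS).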

Below is my presentation on Slideshare.

The (awful) green background color indicates where I went “live” on the web. As a reminder of what I did, I included some screenshots.

The current workshop was just meant to introduce and discuss Medicine 2.0 and Cochrane 2.0.

I hope we will have a lively discussion on Wednesday, when the plenary lectures deal with Cochrane 2.0.

The answers to my question on Twitter

  1. Why is Web 2.0 useful (or not)?
  2. Why do we need Cochrane 2.0 (or not)?

can be found on Visibletweets (temporarily) and are saved at Quoteurl.com/sggq0 (a permanent selection).

I think it would be good if these points were taken into account during the Cochrane 2.0 plenary discussions.

* possible WIKI (+ links) might appear at http://medicine20.wetpaint.com/page/Cochrane+2.0





This week I will blog from…..

10 10 2009

[Photo of the Singapore Colloquium, taken by Chris Mavergames: http://twitpic.com/kxrnl]

Chris and I will facilitate a Web 2.0 workshop for the Cochrane Collaboration (see here; for all workshops see here).
The entire program can be viewed at the Cochrane Colloquium site.

Chris Mavergames, Web Operations Manager and Information Architect of the Cochrane Collaboration, will also give a plenary presentation entitled:
“Cochrane for the Twitter generation: inserting ourselves into the ‘conversation’”.

The session has the promising title: The Cochrane Library – brave new world?

Here is the introductory text of the session:

The Cochrane Collaboration is not unique in facing a considerable challenge to the way it packages and disseminates healthcare information. The proliferation of communication platforms and social networking sites provides opportunities to reach new audiences, but how far can or should the Collaboration go in embracing these new media? In this session we hear from speakers who are at the heart of the discussions about The Cochrane Library’s future direction, including the Library’s Editor in Chief. We finish the session with reflections on the week’s discussions with respect to the Strategic Review (…)

Request (for the workshop, not the plenary session):
If you’re on Twitter, could you please tell the participants of the (small) Web 2.0 workshop your opinion on the following, using the hashtag #CC20*:

  1. Why is Web 2.0 useful (or not)?
  2. Why do we need Cochrane 2.0 (or not)?

An example of such an answer (from @Berci):

#CC20 Web 2.0 opens up the world and eases communication. Cochrane 2.0 is needed bc such an important database should have a modern platform

If you don’t have Twitter you can add your comment here and I will post it for you (if you leave a name).

Thanks for all who have contributed so far.

—–

*This hashtag is only for our small-scale workshop; I propose using #CC2009 for the conference itself.
