The Scatter of Medical Research and What to Do About It

18 05 2012

Paul Glasziou, GP and professor in Evidence Based Medicine, co-authored a new article in the BMJ [1]. Similar to another paper [2] I discussed before [3], this paper deals with the difficulty for clinicians of staying up to date with the literature. But where the previous paper [2,3] highlighted the mere increase in the number of research articles over time, the current paper looks at the scatter of randomized clinical trials (RCTs) and systematic reviews (SRs) across the different journals cited in one year (2009) in PubMed.

Hoffmann et al analyzed 7 specialties and 9 subspecialties that are considered to contribute most to the burden of disease in high-income countries.

They followed a relatively straightforward method for identifying the publications. Each search string consisted of a MeSH term (controlled vocabulary term) to identify the selected disease or disorders, a publication type [pt] to identify the type of study, and the year of publication. For example, the search strategy for randomized trials in cardiology was: “heart diseases”[MeSH] AND randomized controlled trial[pt] AND 2009[dp]. (When searching “heart diseases” as a MeSH term, narrower terms are also searched.) Meta-analysis[pt] was used to identify systematic reviews.
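Such a search is straightforward to reproduce programmatically via NCBI's E-utilities. Below is a minimal Python sketch (the helper names are my own, and the optional live hit count would require network access to eutils.ncbi.nlm.nih.gov) that builds the same style of query and the corresponding esearch URL:

```python
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_query(mesh_term: str, pub_type: str, year: int) -> str:
    """Build a query in the style used by Hoffmann et al: a MeSH term
    (which also searches narrower terms), a publication type, and a
    publication year."""
    return f'"{mesh_term}"[MeSH] AND {pub_type}[pt] AND {year}[dp]'

def esearch_url(term: str, retmax: int = 0) -> str:
    """E-utilities URL returning the hit count (retmax=0 fetches no IDs)."""
    params = urlencode({"db": "pubmed", "term": term,
                        "retmax": retmax, "retmode": "json"})
    return f"{EUTILS}?{params}"

query = pubmed_query("heart diseases", "randomized controlled trial", 2009)
print(query)
# "heart diseases"[MeSH] AND randomized controlled trial[pt] AND 2009[dp]
```

Looping such a query over the MeSH terms for the selected (sub)specialties and over the two publication types would reproduce the raw counts of the paper.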

Using this approach, Hoffmann et al found 14,343 RCTs and 3,214 SRs published in 2009 in the selected (sub)specialties. There was a clear scatter across journals, but this scatter varied considerably among specialties:

“Otolaryngology had the least scatter (363 trials across 167 journals) and neurology the most (2770 trials across 896 journals). In only three subspecialties (lung cancer, chronic obstructive pulmonary disease, hearing loss) were 10 or fewer journals needed to locate 50% of trials. The scatter was less for systematic reviews: hearing loss had the least scatter (10 reviews across nine journals) and cancer the most (670 reviews across 279 journals). For some specialties and subspecialties the papers were concentrated in specialty journals; whereas for others, few of the top 10 journals were a specialty journal for that area.
Generally, little overlap occurred between the top 10 journals publishing trials and those publishing systematic reviews. The number of journals required to find all trials or reviews was highly correlated (r=0.97) with the number of papers for each specialty/ subspecialty.”

Previous work already suggested that this scatter of research has a long tail: half of the publications appear in a minority of journals, whereas the remaining articles are scattered across many journals (see figure below).

Click to enlarge and see the legend at BMJ 2012;344:e3223 [CC]

The good news is that SRs are less scattered and that general journals appear more often in the top 10 journals publishing SRs. Indeed, for 6 of the 7 specialties and 4 of the 9 subspecialties, the Cochrane Database of Systematic Reviews had published the highest number of systematic reviews, publishing between 6% and 18% of all systematic reviews in each area in 2009. The bad news is that even keeping up to date with SRs alone seems a huge, if not impossible, challenge.

In other words, it is not sufficient for clinicians to rely on personal subscriptions to a few journals in their specialty (which is common practice). Hoffmann et al suggest several solutions to help clinicians cope with the increasing volume and scatter of research publications.

  • a central library of systematic reviews (but apparently the Cochrane Library fails to fulfill such a role according to the authors, because many reviews are out of date and are perceived as less clinically relevant)
  • a registry of planned and completed systematic reviews, such as PROSPERO (this makes it easier to locate SRs and reduces bias)
  • Syntheses of evidence and synopses, like ACP Journal Club, which summarizes the best evidence in internal medicine
  • Specialised databases that collate and critically appraise randomized trials and systematic reviews, like www.pedro.org.au for physical therapy. In my personal experience, however, this database is often out of date and not comprehensive
  • Journal scanning services like EvidenceUpdates (from mcmaster.ca), which scans over 120 journals, filters articles on the basis of quality, has practising clinicians rate them for relevance and newsworthiness, and makes them available as email alerts and in a searchable database. I use this service too, but besides the fact that not all specialties are covered, the rating of evidence may not always be objective (see previous post [4])
  • The use of social media tools to alert clinicians to important new research.

Most of these are long-existing solutions that do not, or only partly, help to solve the information overload.

I was surprised that the authors didn’t propose the use of personalized alerts. PubMed’s My NCBI feature allows users to create automatic email alerts on a topic and to subscribe to electronic tables of contents (which could include ACP Journal Club). Suppose that a physician browses 10 journals roughly covering 25% of the trials. He/she does not need to read all the other journals from cover to cover to avoid missing one potentially relevant trial. Instead, it is far more efficient to perform a topic search to filter relevant studies from journals that seldom publish trials on the topic of interest. One could even use the search of Hoffmann et al to achieve this.* In reality, though, most clinicians will have narrower fields of interest than all studies about endocrinology or neurology.

At our library we are working on creating deduplicated, easy-to-read alerts that collate the tables of contents of certain journals with topic (and author) searches in PubMed, EMBASE and other databases. There are existing tools that do the same.

Another way to reduce the individual work (reading) load is to organize journal clubs or, even better, regular CATs (critically appraised topics). In the Netherlands, CATs are a compulsory item for residents. A few doctors do the work for many. Usually they choose topics that are clinically relevant (or for which the evidence is unclear).

The authors briefly mention that their search strategy might have missed some eligible papers and included some that are not truly RCTs or SRs, because they relied on PubMed’s publication types to retrieve RCTs and SRs. For systematic reviews this may be a greater problem than recognized, for the authors used meta-analysis[pt] to identify systematic reviews. Unfortunately PubMed has no publication type for systematic reviews, but it should be clear that there are many more systematic reviews than meta-analyses. Systematic reviews might even have a different scatter pattern than meta-analyses (i.e. the latter might be preferentially included in core journals).

Furthermore, not all meta-analyses and systematic reviews are reviews of RCTs (thus it is not entirely fair to compare MAs with RCTs only). On the other hand, it is an undiscussed omission of this study that only interventions are considered. Nowadays physicians have many other questions than those related to therapy, such as questions about prognosis, harm and diagnosis.

I did a little, imperfect search just to see whether the use of other search terms than meta-analysis[pt] would have any influence on the outcome. I searched for (1) meta-analysis[pt] and (2) systematic review[tiab] (title and abstract) among papers about endocrine diseases. Then I subtracted 1 from 2 (to analyse the systematic reviews not indexed as meta-analysis[pt]).

Thus:

(ENDOCRINE DISEASES[MESH] AND SYSTEMATIC REVIEW[TIAB] AND 2009[DP]) NOT META-ANALYSIS[PT]

I analyzed the top 10/11 journals publishing these study types.
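The underlying scatter statistic, how many of the most prolific journals you need to cover a given fraction of the retrieved papers, is easy to compute once each record's journal name is known. A minimal sketch with made-up data (the journal names and counts are purely illustrative):

```python
from collections import Counter

def journals_for_coverage(journals, fraction=0.5):
    """Return how many journals (most prolific first) are needed to
    cover the given fraction of all retrieved papers -- the metric
    Hoffmann et al report per (sub)specialty."""
    counts = Counter(journals)
    target = fraction * sum(counts.values())
    covered = needed = 0
    for _, n in counts.most_common():
        covered += n
        needed += 1
        if covered >= target:
            return needed
    return needed

# Hypothetical example: 10 papers scattered over 5 journals.
sample = ["A"] * 4 + ["B"] * 2 + ["C"] * 2 + ["D"] + ["E"]
print(journals_for_coverage(sample))  # 2  (A and B cover 6 of 10 papers)
```

A real analysis would feed in the journal field of each PubMed record retrieved by the searches above instead of the toy list.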

This little experiment suggests that:

  1. The precise scatter might differ per search: apparently the systematic review[tiab] search yielded different top 10/11 journals (for this sample) than the meta-analysis[pt] search (partly because Cochrane systematic reviews apparently don’t mention “systematic review” in the title or abstract?).
  2. The authors underestimate the number of systematic reviews: simply searching for systematic review[tiab] already found approximately 50% additional systematic reviews compared to meta-analysis[pt] alone.
  3. As expected (by me at least), many of the SRs and MAs did NOT deal with interventions; see, for instance, the first 5 hits (out of 108 and 236, respectively).
  4. Together these findings indicate that the true information overload is far greater than shown by Hoffmann et al (not all systematic reviews are found, and of all available study designs only RCTs are searched).
  5. On the other hand, this indirectly shows that SRs are a better way to keep up to date than suggested: SRs also summarize non-interventional research (the ratio of SRs of RCTs to individual RCTs is much lower than suggested).
  6. It also means that the role of the Cochrane systematic reviews in aggregating RCTs is underestimated by the published graphs (the MA[pt] section is diluted with non-RCT systematic reviews, so the proportion of Cochrane SRs among the interventional MAs becomes larger).

Anyway, these imperfections do not contradict the main point of this paper: that trials are scattered across hundreds of general and specialty journals, and that “systematic reviews” (or really meta-analyses) do reduce the extent of scatter, but are still widely scattered, and mostly in journals different from those publishing the randomized trials.

Indeed, personal subscriptions to journals seem insufficient for keeping up to date.
Besides supplementing subscriptions with methods such as journal scanning services, I would recommend the use of personalized alerts from PubMed and several prefiltered sources, including an EBM search engine like TRIP (www.tripdatabase.com).

*but I would broaden it to find all aggregate evidence, including ACP, Clinical Evidence, syntheses and synopses, not only meta-analyses.

**I do appreciate that one of the co-authors is a medical librarian: Sarah Thorning.

References

  1. Hoffmann T, Erueti C, Thorning S, & Glasziou P (2012). The scatter of research: cross sectional comparison of randomised trials and systematic reviews across specialties. BMJ, 344 DOI: 10.1136/bmj.e3223
  2. Bastian, H., Glasziou, P., & Chalmers, I. (2010). Seventy-Five Trials and Eleven Systematic Reviews a Day: How Will We Ever Keep Up? PLoS Medicine, 7 (9) DOI: 10.1371/journal.pmed.1000326
  3. How will we ever keep up with 75 trials and 11 systematic reviews a day (laikaspoetnik.wordpress.com)
  4. Experience versus Evidence [1]. Opioid Therapy for Rheumatoid Arthritis Pain. (laikaspoetnik.wordpress.com)




Experience versus Evidence [1]. Opioid Therapy for Rheumatoid Arthritis Pain.

5 12 2011

Rheumatoid arthritis (RA) is a chronic autoimmune disease that causes inflammation of the joints, eventually leading to progressive joint destruction and deformity. Patients have swollen, stiff and painful joints. The main aim of treatment is to reduce swelling and inflammation, to alleviate pain and stiffness, and to maintain normal joint function. While there is no cure, it is important to manage pain properly.

The mainstays of therapy in RA are disease-modifying anti-rheumatic drugs (DMARDs) and non-steroidal anti-inflammatory drugs (NSAIDs). These drugs primarily target inflammation. However, since inflammation is not the only factor that causes pain in RA, patients may not be (fully) responsive to treatment with these medications.
Opioids are another class of pain-relieving substances (analgesics). They are frequently used in RA, but their role in chronic non-cancer pain, including RA, is not firmly established.

A recent Cochrane Systematic Review [1] assessed the beneficial and harmful effects of opioids in RA.

Eleven studies (672 participants) were included in the review.

Four studies assessed only the efficacy of single doses of different analgesics, often given on consecutive days. In each study opioids reduced pain (a bit) more than placebo. There were no differences in effectiveness between the opioids.

Seven studies, between 1 and 6 weeks in duration, assessed 6 different oral opioids either alone or combined with non-opioid analgesics.
The only strong opioid investigated was controlled-release morphine sulphate, in a single study with 20 participants.
Six studies compared an opioid (often combined with a non-opioid analgesic) to placebo. Opioids were slightly better than placebo at improving the patient-reported global impression of clinical change (PGIC) (3 studies, 324 participants: relative risk (RR) 1.44, 95% CI 1.03 to 2.03), but did not lower the number of withdrawals due to inadequate analgesia in 4 studies.
Notably, none of the 11 studies reported the primary and probably more clinically relevant outcome “proportion of participants reporting ≥ 30% pain relief”.

On the other hand, adverse events (most commonly nausea, vomiting, dizziness and constipation) were more frequent in patients receiving opioids compared to placebo (4 studies, 371 participants: odds ratio 3.90, 95% CI 2.31 to 6.56). Withdrawal due to adverse events was non-significantly higher in the opioid-treated group.

Comparing opioids to other analgesics instead of placebos seems more relevant. Among the 11 studies, only 1 study compared an opioid (codeine with paracetamol) to an NSAID (diclofenac). This study found no difference in efficacy or safety between the two treatments.

The 11 included studies were very heterogeneous (i.e. different opioids studied, with or without concurrent use of non-opioid analgesics, and different outcomes measured) and the risk of bias was generally high. Furthermore, most studies were published before 2000, when treatment of RA was less optimal than it is today.

The authors therefore conclude:

In light of this, the quantitative findings of this review must be interpreted with great caution. At best, there is weak evidence in favour of the efficacy of opioids for the treatment of pain in patients with RA but, as no study was longer than six weeks in duration, no reliable conclusions can be drawn regarding the efficacy or safety of opioids in the longer term.

This was the evidence, now the opinion.

I found this Cochrane Review via an EvidenceUpdates email alert from the BMJ Group and McMaster PLUS.

EvidenceUpdate alerts are meant to “provide you with access to current best evidence from research, tailored to your own health care interests, to support evidence-based clinical decisions. (…) All citations are pre-rated for quality by research staff, then rated for clinical relevance and interest by at least 3 members of a worldwide panel of practicing physicians”

I usually don’t care about the rating, because it is mostly 5-6 on a scale of 7. This was also true for the current SR.

There is a more detailed rating available (when clicking the link; free registration required). Usually the newsworthiness of SRs scores relatively low (because they summarize ‘old’ studies?). Personally I would have thought that the relevance and newsworthiness would be higher for the special interest group, pain.

But the comment of the first of the 3 clinical raters was most revealing. He/she comments:

As a Palliative care physician and general internist, I have had excellent results using low potency opiates for RA and OA pain. The palliative care literature is significantly more supportive of this approach vs. the Cochrane review.

So personal experience wins out over evidence?* How did this palliative care physician assess effectiveness? Just by giving a single dose of an opiate? How did he/she rate the effectiveness of the opioids? Did he/she compare it to placebo or an NSAID (did he/she compare it at all?), and did he/she measure adverse effects?

And what is “the palliative care literature” the commenter is referring to? Apparently not this Cochrane review. Apparently not the 11 controlled trials included in the Cochrane review. Apparently not the several other Cochrane reviews on the use of opioids for chronic non-cancer pain, nor the guidelines, syntheses and synopses I found via the TRIP database. All conclude that using opioids to treat chronic non-cancer pain is supported by very limited evidence, that adverse effects are common, and that long-term use may lead to opioid addiction.

I’m sorry to note that although the alerting service is great as an alert, such personal ratings are not very helpful for interpreting and *truly* rating the evidence.

I would prefer a truly objective, structured critical appraisal, like this one on a similar topic by DARE (“Opioids for chronic noncancer pain: a meta-analysis of effectiveness and side effects”), and/or an objective piece that puts the new data into clinical perspective.

*Just to be clear: the clinician’s own expertise and the opinions of experts are also important in decision making. Rightly, Sackett [2] emphasized that good doctors use both individual clinical expertise and the best available external evidence. However, that doesn’t mean that one personal opinion and/or preference replaces all the existing evidence.

References 

  1. Whittle SL, Richards BL, Husni E, & Buchbinder R (2011). Opioid therapy for treating rheumatoid arthritis pain. Cochrane database of systematic reviews (Online), 11 PMID: 22071805
  2. Sackett DL, Rosenberg WM, Gray JA, Haynes RB, & Richardson WS (1996). Evidence based medicine: what it is and what it isn’t. BMJ (Clinical research ed.), 312 (7023), 71-2 PMID: 8555924




Evidence Based Point of Care Summaries [2] More Uptodate with Dynamed.

18 10 2011

This post is part of a short series about evidence-based point-of-care summaries, or POCs. In this series I will review 3 recent papers that objectively compare a selection of POCs.

In the previous post I reviewed a paper from Rita Banzi and colleagues from the Italian Cochrane Centre [1]. They analyzed 18 POCs with respect to their “volume”, content development and editorial policy. There were large differences among POCs, especially with regard to evidence-based methodology scores, but no product appeared the best according to the criteria used.

In this post I will review another paper by Banzi et al, published in the BMJ a few weeks ago [2].

This article used a prospective cohort design to examine the speed with which evidence-based point-of-care summaries are updated.

First the authors selected all the systematic reviews signaled by the American College of Physicians (ACP) Journal Club and by Evidence-Based Medicine Primary Care and Internal Medicine from April to December 2009. In the same period the authors selected all the Cochrane systematic reviews labelled as “conclusion changed” in the Cochrane Library. In total 128 systematic reviews were retrieved: 68 (53%) from the literature surveillance journals and 60 (47%) from the Cochrane Library. Starting two months after collection began (June 2009), the authors screened the POCs monthly for a year, looking for citations of the 128 identified systematic reviews.

Only the 5 POCs that ranked in the top quarter for at least 2 (out of 3) desirable dimensions were studied, namely: Clinical Evidence, Dynamed, EBM Guidelines, UpToDate and eMedicine. Surprisingly, eMedicine was among the selected POCs, despite a rating of “1” on a scale of 1 to 15 for EBM methodology. One would think that evidence-based-ness is a fundamental prerequisite for EBM POCs…?!

Results were represented as a (rather odd, but clear) “survival analysis” (“death” = a citation in a summary).

Fig.1 : Updating curves for relevant evidence by POCs (from [2])

I will be brief about the results.

Dynamed clearly beat all the other products in its updating speed.

Expressed in figures, the updating speed of Dynamed was 78% and 97% greater than those of EBM Guidelines and Clinical Evidence, respectively. Dynamed had a median citation time of around two months and EBM Guidelines around 10 months, quite close to the limit of the follow-up, but the citation rates of the other three point-of-care summaries (UpToDate, eMedicine, Clinical Evidence) were so slow that they exceeded the follow-up period and the authors could not compute a median.
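That the median could not be computed for three products is a general property of censored, survival-type data: when fewer than half of the reviews are cited within the follow-up window, the median citation time falls beyond the observed data and is undefined. A toy illustration (my own simplification, not the authors' actual survival analysis):

```python
def median_citation_days(lags):
    """lags: days until a review is cited in a summary, or None if it
    was never cited within follow-up (a censored observation).
    Returns the median time-to-citation, or None when fewer than half
    the reviews were cited before follow-up ended (censored values act
    as +infinity, so the median would lie beyond the data)."""
    n = len(lags)
    cited = sorted(d for d in lags if d is not None)
    if len(cited) < (n + 1) // 2:  # median falls beyond follow-up
        return None
    return cited[(n - 1) // 2]     # middle of all n observations

# Fast updater: most reviews cited quickly -> median exists.
print(median_citation_days([30, 45, 60, None, 90]))   # 60
# Slow updater: most reviews still uncited -> median unknown.
print(median_citation_days([200, None, None, None]))  # None
```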

Dynamed outperformed the other POCs in updating of systematic reviews independent of the route. EBM Guidelines and UpToDate had similar overall updating rates, but Cochrane systematic reviews were more likely to be cited by EBM Guidelines than by UpToDate (odds ratio 0.02, P<0.001). Perhaps not surprisingly, as EBM Guidelines has a formal agreement with the Cochrane Collaboration to use Cochrane contents and label its summaries as “Cochrane inside.” On the other hand, UpToDate was faster than EBM Guidelines in updating systematic reviews signaled by literature surveillance journals.

Dynamed’s higher updating ability was not due to a difference in identifying important new evidence, but to the speed with which this new information was incorporated into its summaries. Possibly the central updating of Dynamed by its editorial team accounts for the more prompt inclusion of evidence.

As the authors rightly point out, slowness in updating could mean that new relevant information is ignored and could thus “affect the validity of point of care information services”.

A slow updating rate is arguably a bigger problem for POCs that “promise” to “continuously update their evidence summaries” (EBM Guidelines) or to “perform a continuous comprehensive review and to revise chapters whenever important new information is published, not according to any specific time schedule” (UpToDate). (See the table with descriptions of the updating mechanisms.)

In contrast, eMedicine doesn’t provide any detailed information on its updating policy, another reason it doesn’t belong in this list of best POCs.
Clinical Evidence, however, clearly states: “We aim to update Clinical Evidence reviews annually. In addition to this cycle, details of clinically important studies are added to the relevant reviews throughout the year using the BMJ Updates service.” But BMJ Updates is not considered in the current analysis. Furthermore, patience is rewarded with excellent and complete summaries of evidence (in my opinion).

Indeed, a major limitation of the current (and the previous) study by Banzi et al [1,2] is that they looked at quantitative aspects and items that are relatively “easy to score”, like “volume” and “editorial quality”, not at the real quality of the evidence (see previous post).

Although the findings were new to me, others have recently published similar results (studies were performed in the same time-span):

Shurtz and Foster [3] of the Texas A&M University Medical Sciences Library (MSL) also sought to establish a rubric for evaluating evidence-based medicine (EBM) point-of-care tools in a health sciences library.

They, too, looked at editorial quality and speed of updating plus reviewing content, search options, quality control, and grading.

Their main conclusion is that “differences between EBM tools’ options, content coverage, and usability were minimal, but that the products’ methods for locating and grading evidence varied widely in transparency and process”.

This is in line with what Banzi et al reported in their first paper. They also share Banzi’s conclusion about differences in speed of updating:

“DynaMed had the most up-to-date summaries (updated on average within 19 days), while First Consult had the least up to date (updated on average within 449 days). Six tools claimed to update summaries within 6 months or less. For the 10 topics searched, however, only DynaMed met this claim.”

Table 3 from Shurtz and Foster [3] 

Ketchum et al [4] also conclude that DynaMed had the largest proportion of current (2007-2009) references (170/1131, 15%). In addition they found that DynaMed had the largest total number of references (1131/2330, 48.5%).

Yes, and you might have guessed it. The paper of Andrea Ketchum is the 3rd paper I’m going to review.

I also recommend reading the paper of the librarians Shurtz and Foster [3], which I found along the way. It has too much overlap with the Banzi papers to devote a separate post to it. Still, it provides better background information than the Banzi papers, focuses on POCs that claim to be EBM, and doesn’t try to weigh one element over another.

References

  1. Banzi, R., Liberati, A., Moschetti, I., Tagliabue, L., & Moja, L. (2010). A Review of Online Evidence-based Practice Point-of-Care Information Summary Providers Journal of Medical Internet Research, 12 (3) DOI: 10.2196/jmir.1288
  2. Banzi, R., Cinquini, M., Liberati, A., Moschetti, I., Pecoraro, V., Tagliabue, L., & Moja, L. (2011). Speed of updating online evidence based point of care summaries: prospective cohort analysis BMJ, 343 (sep22 2) DOI: 10.1136/bmj.d5856
  3. Shurtz, S., & Foster, M. (2011). Developing and using a rubric for evaluating evidence-based medicine point-of-care tools Journal of the Medical Library Association : JMLA, 99 (3), 247-254 DOI: 10.3163/1536-5050.99.3.012
  4. Ketchum, A., Saleh, A., & Jeong, K. (2011). Type of Evidence Behind Point-of-Care Clinical Information Products: A Bibliometric Analysis Journal of Medical Internet Research, 13 (1) DOI: 10.2196/jmir.1539
  5. Evidence Based Point of Care Summaries [1] No “Best” Among the Bests? (laikaspoetnik.wordpress.com)
  6. How will we ever keep up with 75 Trials and 11 Systematic Reviews a Day? (laikaspoetnik.wordpress.com)
  7. UpToDate or Dynamed? (Shamsha Damani at laikaspoetnik.wordpress.com)
  8. How Evidence Based is UpToDate really? (laikaspoetnik.wordpress.com)






MedLibs Round: Update & Call for Submissions June 2010

4 06 2010

In the past months we have had some excellent hosts of the round, truly “la crème de la crème” of the medical information/library blogosphere:

2010 was heralded by Dr Shock MD PhD, followed by Emerging Technologies Librarian (@pfanderson), The Krafty Librarian (@krafty) and @Eagledawg (Nikki Dettmar).

Nikki hosted the round for a second time, but now on her new blog, Eagledawg.net. The title: E(Patients)-I(Pad)-O(pportunities): Medlibs Round

Last month the round was hosted by Danni (Danni4info) at The Health Informaticist, my favorite English EBM library blog. It is a great round again, “dealing with PubMed trending analysis, liability in information provision, the ‘splinternet’, a search engine optimisation (SEO) teaser from CILIP’s fresh off the presses Update magazine, and more.” Missed it? You can read it here.

And now we have a few days left to submit our posts for the Next MedLibs Round, hosted by yet another excellent EBM/librarian blogger: @creaky at EBM and Clinical Support Librarians@UCHC.

She would like posts about “Reference Questions (or People) I Won’t Forget” (thus “memorable” encounters that took place in a public service/reference desk setting, over your career) or “how the library/librarian” has helped you.
But as always other relevant and good quality posts related to medical information and medical librarianship will also be considered.

For more details see the (2nd!) Call for submissions post at EBM and Clinical Support Librarians@UCHC

I am sure you all have a story to tell. So please share it with @creaky and us!

As always, you can submit the permalink (URL) (of your post(s) on your blog) here.

************

I would also like to take the opportunity to ask if there are any med or medlib bloggers out there who would like to host the MedLibs Round in August, September or October.

The MedLibs Round is still called the MedLibs Round because I got too little response (6 votes, including mine) to the poll with other name suggestions. Neither did I get any suggestions for the design of the MedLibs logo, which Robin of Survive the Journey has offered to make [for details see the request here]. I hope you will take the time to fill in the poll below, and to think about suggestions for a logo. Thanks!

The @ links refer to Twitter accounts.





When more is less: Truncation, Stemming and Pluralization in the Cochrane Library

5 01 2010

I’m on two mailing lists of the Cochrane Collaboration: one is the TSC list (TSC = Trials Search Coordinator) and the other the IRMG list (IRMG = Information Retrieval Methods Group of the Cochrane Collaboration). Sometimes difficult search problems are posted on the list, and it is challenging to try to find the solutions. I can’t remember a single case in which a solution was not found.

A while ago a member of the list was puzzled why he got the following retrieval result from the Cochrane Library:

ID   Search                   Hits
#1   (breast near tumour*)     254
#2   (breast near tumour)      640
#3   (breast near tumor*)      428
#4   (breast near tumor)       640

where near = adjacent (thus breast should be directly before tumour) and the asterisk (*) is the truncation symbol: an asterisk at the end of a word root retrieves all terms that begin with that root. Thus tumour* should find tumours as well as tumour, and thus broaden the search.

The results are odd, because #2 (without truncation) gives more hits than #1 (with truncation), and the same is true for #4 versus #3. One would expect truncation to give more results. What could be the reason behind it?

I suspected the problem had to do with the truncation. I searched for breast and tumour with and without truncation (#1 to #4), and only tumour* gave odd results: tumour* gave far fewer results than tumour. (To exclude an effect of the fields being searched, I searched only the fields ti (title), ab (abstract) and kw (keywords).)

Records found with tumour, but not with tumour*, contained the word tumor (not shown). Thus tumour automatically also searches for tumor (and vice versa). This process is called stemming.

According to the Help-function of the Cochrane Library:

Stemming: The stemming feature within the search allows words with small spelling variants to be matched. The term tumor will also match tumour.

In addition, as I realized later, the Cochrane Library has pluralization and singularization features.

Pluralization and singularization matches: pluralized forms of words also match singular versions, and vice versa. The term drugs will find both drug and drugs. To match just the singular or plural form of a term, use an exact match search and include the word in quotation marks.

Indeed (tumor* OR tumour*) (or, shorter, tumo*r*) retrieves a little more than tumor OR tumour: words like tumoral, tumorous, tumorectomy. Not particularly useful, although it may not be disadvantageous when used adjacent to breast, as this will filter out most of the noise.

tumor spelling variants searched in the title (ti) only: it doesn't matter how you spell tumor (#8, #9, #10,#11), as long as you don't truncate (while using a single variant)

Thus stemming, pluralization and singularization only work without truncation. When you truncate, you should add the spelling variants yourself, in case stemming/pluralization would otherwise have covered them. Truncation remains useful if you’re interested in other word variants that are not automatically accounted for.

Put another way: knowing that stemming and pluralization take place, you can simply search for the singular or plural form, in American or British spelling. So breast near tumor (or simply breast tumor) would have been fine. This is the reason these features were introduced in the first place. 😉
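The behaviour can be summed up in a toy model (my own illustration; the Cochrane Library's real matching is of course more sophisticated): stemming and pluralization expand an exact term into its known spelling variants, while a truncated term is matched as a literal prefix, bypassing those expansions.

```python
# Known variant groups the search engine can expand (toy data only).
STEM_GROUPS = [{"tumour", "tumor"}, {"drug", "drugs"}]

def expand(term: str) -> set:
    """Exact term: apply stemming/pluralization expansions."""
    for group in STEM_GROUPS:
        if term in group:
            return group
    return {term}

def matches(query: str, word: str) -> bool:
    if query.endswith("*"):                 # truncation: literal prefix,
        return word.startswith(query[:-1])  # stemming is switched off
    return word in expand(query)            # exact: variants also match

# "tumour" finds both spellings; "tumour*" misses "tumor".
print(matches("tumour", "tumor"))     # True  (stemming)
print(matches("tumour*", "tumor"))    # False (truncation disables it)
print(matches("tumour*", "tumours"))  # True  (prefix match)
```

This is exactly the pattern in the table above: the truncated search loses the records that only the stemmed spelling variant would have matched.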

By the way, truncation and stemming (but not pluralization) are also features in PubMed, where they can cause similar and other problems. But that will be dealt with in another blog post.





#CECEM David Tovey -the Cochrane Library’s First Editor in Chief

13 06 2009

This week I was attending another congress: the Continental European Cochrane Entities Meeting (CECEM).

This annual meeting is meant for staff of Cochrane Entities based in Continental Europe: Centre staff, RGCs (Review Group Coordinators), TSCs (Trial Search Coordinators) and other staff members of the Cochrane Collaboration.

CECEM 2009 was held in Maastricht, the beautiful old Roman city in the south of the Netherlands; the city where my father was born and where I spent many holidays.

One interesting presentation was by the Cochrane Collaboration's first Editor in Chief, David Tovey, previously a GP in an urban practice in London for 14 years and Editorial Director of the BMJ Group's ‘Knowledge’ division (responsible for BMJ Clinical Evidence and its sister product Best Treatments; see the announcement in Medical News Today).

David began by saying that the end user is really the key person and that the impact of Cochrane Reviews is most important.

“How is it that a Senior health manager in the UK may shrug his shoulders when you ask him if he has ever heard of Cochrane?”

“How do we make sure that our work has impact? Should we make use of user generated content?”

Quality is central, but quality depends on four pillars. Cochrane reviews should be reliable, timely, relevant and accessible.


How quality is perceived depends on the end users. There are several kinds of end users, each with their own priorities.

  1. doctor: wants comprehensive and up-to-date info, wants to understand and get answers quickly.
  2. patient: trustworthiness, up-to-date, wants to be able to make sense of it.
  3. scientist: wants to see how the conclusions are derived.
  4. policy and guideline-makers.

Reliable: several articles have shown Cochrane systematic reviews to be more reliable than other systematic reviews (Moher, PLoS, BMJ)*

Timely: first it takes time to submit a title for a Cochrane Review, and then it takes at least two years before a protocol becomes a review. Some reviews take even longer. So there is room for improvement.

Patients are also very important end users. Strikingly, the systematic review on cranberries for preventing recurrent urinary tract infections is the most frequently viewed article, and this is surely not because doctors are most interested in this particular treatment….

Doctors: Doctors often rely on their colleagues for a quick and trustworthy answer. Challenge: “can we make consulting the Cochrane Library as easy as asking a colleague: thus timely and easy?”

Solutions?

  • making plain language summaries more understandable
  • Summary of Findings
  • podcasts of systematic reviews (very successful so far), e.g. see an earlier post.
  • Web 2.0 innovations

Key challenges:

  • ensure and develop consistent quality
  • (timely) updating
  • putting the customer first: applicability & prioritization
  • web delivery
  • resources (not every group has the same resources)
  • make clear what an update means and how important this update is: are there new studies found? are these likely to change conclusions or not? When was the last amendment to the search?

I found the presentation very interesting. What I also liked is that David stayed with us for two days, also during the social program, and was easily approachable. I very much support the idea of a user-centric approach. However, I had expected the emphasis to be less on timeliness (of updates, for instance) and more on how users (patients, doctors) can become more involved, and on how to review the subjects that are most urgently needed. Indeed, when I twittered that Tovey suggested we “make consulting the Cochrane Library as easy as asking a colleague”, Jon Brassey of TRIP answered that a lot has to be done to achieve this, as the Cochrane Library only answers 2 out of 350+ questions asked by GPs in the UK, a statement that appeared to be based on his own experience (Jon is the founder of the TRIP database).

But in principle I think Jon is correct: right now too few questions (in the field of interventions) are directly answered by Cochrane systematic reviews, and too little is done to reach and involve the Cochrane Library's users.

[Screenshot: Twitter discussion about the CECEM presentation, 13-6-2009; click to enlarge]

During the CECEM other speakers addressed some of these issues in more detail. André Knottnerus, Chair of the Dutch Health Council, discussed “the impact of Cochrane Reviews”, Rob de Bie of the Rehabilitation & Related Therapies field discussed “Bridging the gap between evidence based practice and practice based evidence”, and Dave Brooker launched ideas about how to implement Web 2.0 tools. I hope to summarize these (and other) presentations in a later blogpost.

*have to look this up

NOTE (2009-11-10).

I had forgotten about this blank “citation” until this post was cited in quite another context (see this comment: http://e-patients.net/archives/2009/11/tell-the-fda-the-whole-story-please.html). Someone commented that the asterisk to the “amazing statement” still had to be looked up, indirectly arguing that the statement was therefore not reliable, and continued by giving an example of a flawed Cochrane Review that hit the headlines 4 years ago: a typical exception to the rule that “Cochrane systematic reviews are more reliable than other systematic reviews”. Of course, when it is said that A is more trustworthy than B, this is meant on average. I'm a searcher, and on average Cochrane searches are excellent, but if I do my best I can surely find some that are not good at all. Without doubt that also pertains to other parts of Cochrane systematic reviews.
In addition -and that was the topic of the presentation- there is room for improvement.

Now about the asterisk, which according to Susannah should have been (YIKES!) 100 times bigger. This post was based on a live presentation, and I couldn't pick up all the references on the slides while taking notes. I had hoped that David Tovey would make his slides public, so that I could check the references he gave, but he didn't, and so I forgot about it. Now I have looked some references up and, although they may not be identical to the references David mentioned, they are in line with what he said:

  1. Moher D, Tetzlaff J, Tricco AC, Sampson M, Altman DG, 2007. Epidemiology and Reporting Characteristics of Systematic Reviews. PLoS Med 4(3): e78. doi:10.1371/journal.pmed.0040078 (free full text)
  2. The PLoS Medicine Editors 2007 Many Reviews Are Systematic but Some Are More Transparent and Completely Reported than Others. PLoS Med 4(3): e147. doi:10.1371/journal.pmed.0040147 (free full text; editorial comment on [1])
  3. Tricco AC, Tetzlaff J, Pham B, Brehaut J, Moher D, 2009. Non-Cochrane vs. Cochrane reviews were twice as likely to have positive conclusion statements: cross-sectional study. J Clin Epidemiol. Apr;62(4):380-386.e1. Epub 2009 Jan 6. [PubMed -citation]
  4. Anders W Jørgensen, Jørgen Hilden, Peter C Gøtzsche, 2006. Cochrane reviews compared with industry supported meta-analyses and other meta-analyses of the same drugs: systematic review BMJ  2006;333:782, doi: 10.1136/bmj.38973.444699.0B (free full text)
  5. Alejandro R Jadad, Michael Moher, George P Browman, Lynda Booker, Christopher Sigouin, Mario Fuentes, Robert Stevens (2000) Systematic reviews and meta-analyses on treatment of asthma: critical evaluation BMJ 2000;320:537-540, doi: 10.1136/bmj.320.7234.537 (free full text)

In previous posts (Merck's Ghostwriters, Haunted Papers and Fake Elsevier Journals; One Third of the Clinical Cancer Studies Report Conflict of Interest) I regularly discussed that pharma-sponsored trials rarely produce results that are unfavorable to the companies' products [e.g. see here for an overview, and the many papers of Lisa Bero].

Also pertinent to the above-mentioned discussion at e-patients.net is my earlier post: The Trouble with Wikipedia as a Source for Medical Information. (References still not in the correct order. Yikes!)





Podcasts: Cochrane Library and MedlinePlus

13 12 2008

I added two podcasts to the Google-spreadsheet wiki Best Medical Podcasts, made by Ves Dimov (see my previous post here): Cochrane Reviews and MedlinePlus.

Ves Dimov has described his top 5 podcasts in another post [1]. For other medical podcasts see [2,3,4].

A podcast is nothing more than a digital audio or video file, just like any other song or MP3 file on your computer. Podcasts can be listened to, saved and shared on the internet. Although they were initially meant for iPods (hence the name), you can also subscribe to podcasts via other podcast readers, web browsers or RSS readers.
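Under the hood a podcast feed is just an RSS (XML) file whose items carry audio “enclosures”; a podcast reader periodically fetches the feed and downloads any new enclosures. A minimal sketch using only Python's standard library (the sample feed below is entirely made up, including the example.org URLs):

```python
import xml.etree.ElementTree as ET

# A made-up two-episode podcast feed, trimmed to the parts a reader needs.
SAMPLE_FEED = """<rss version="2.0"><channel>
  <title>Example Review Podcast</title>
  <item><title>Episode 1</title>
        <enclosure url="http://example.org/ep1.mp3" type="audio/mpeg"/></item>
  <item><title>Episode 2</title>
        <enclosure url="http://example.org/ep2.mp3" type="audio/mpeg"/></item>
</channel></rss>"""

def list_episodes(feed_xml):
    """Return (title, audio_url) pairs -- the list a podcast reader
    compares against what it already downloaded."""
    channel = ET.fromstring(feed_xml).find("channel")
    episodes = []
    for item in channel.findall("item"):
        title = item.findtext("title")
        enclosure = item.find("enclosure")
        episodes.append((title, enclosure.get("url")))
    return episodes
```

This is all “subscribing” really is: the reader re-fetches the feed on a schedule and grabs enclosures it has not seen before.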

I would like to review the two podcasts briefly.

1. Cochrane Reviews (Click here for Feed)

The Cochrane Library, published by John Wiley for The Cochrane Collaboration, is updated and expanded every three months.
The Cochrane podcasts are freely available audio summaries of:

  • highlights of each quarterly issue. This is just a summary of main topics. Example below (with bad handling of the microphone):
  • a selection of systematic reviews from The Cochrane Library. I found the ones below very interesting and may blog about them later.
    It is often said that Cochrane Reviews are difficult to understand and that even physicians find them hard to read. The podcasts I’ve heard are very informative and understandable for doctors, journalists, librarians and patients. The essentials of the conclusions are very clear. I think it would be a good thing if all Cochrane Reviews were podcasted this way.

Adverse events of formoterol (and salmeterol) in asthma

St John’s wort for major depression


Cochrane podcasts of issue 4, 2008: you can listen or subscribe to and/or download/embed the podcasts

2. MedlinePlus (click here for feed)

The MedlinePlus podcast is a weekly series of highlights of health news and accompanying information from MedlinePlus. The update is generally given by Donald A.B. Lindberg, M.D., Director of the National Library of Medicine.
It is very clearly indicated how you can listen or subscribe to these podcasts. There is also a transcript.

The last audio is about the negative results of the huge Vitamin E-Selenium (SELECT) prostate cancer trial, which I described almost a month ago in this post.
It is rather long (with disclaimers and links like “go to double u double u double u …dot com etcetera”), but understandable and about interesting topics.


More Reading, viewing or listening (click on grey):

  1. MD Ves Dimov has described his top 5 podcasts, including the JAMA Audio Commentary and the NEJM This Week podcast, at his blog. He also gives a short description of how you can subscribe to the podcasts/videocasts.
  2. Very good and complete medical podcasts-directory at learnoutloud.com. Not only podcast-series, but also individual podcasts, such as class lessons of statistics (which are difficult to follow without seeing figures) or psychology.
  3. Dean Giustini: [pdf] “Podcasting” howto + select list of medical podcasts http://weblogs.elearning.ubc.ca/googlescholar/CHLA_ABSC_podcasting.pdf
  4. See also Dean Giustini's UBC Health and Library Wiki: Podcasts and Videocasts (very comprehensive!)
  5. And if you want to know more about why podcasts are useful, view this short Common Craft YouTube video.

——-

A podcast is simply a digital audio or video file, just like any other MP3 file on your computer. You can listen to, download and share them. Although podcasts were originally meant for iPods (hence the name), you can also subscribe to them via other podcast readers, web browsers or RSS readers.

Here I discuss two podcasts that I have added to the Google-spreadsheet wiki Best Medical Podcasts (see my earlier post): Cochrane Reviews and MedlinePlus.

Ves Dimov has described his top 5 podcasts on his blog [1]. For other medical podcasts, see [2,3].

1. Cochrane Reviews (Click here for the feed)

The Cochrane podcasts are free audio summaries of:

  • the main topics of each quarterly update of the Cochrane Library.

2. MedlinePlus (click here for the feed)

The MedlinePlus podcasts are a weekly series of highlights from the health news of MedlinePlus. The update is usually given by Donald A.B. Lindberg, M.D., Director of the National Library of Medicine.
It is clearly indicated how you can listen to the podcasts and how you can subscribe to the feed. There is also a transcript, which comes in handy: the text is clear, but very dry and long (including disclaimers and links: “go to double u double u double u… dot com etcetera”).

Here is an audio from last week about the negative results of the large-scale Vitamin E-Selenium (SELECT) prostate cancer trial, which I already described on this blog a month ago.

More reading: see the links in the English section.