Evidence Based Point of Care Summaries [2] More Uptodate with Dynamed.

18 10 2011

This post is part of a short series about Evidence Based Point of Care Summaries (POCs). In this series I will review 3 recent papers that objectively compare a selection of POCs.

In the previous post I reviewed a paper from Rita Banzi and colleagues from the Italian Cochrane Centre [1]. They analyzed 18 POCs with respect to their “volume”, content development and editorial policy. There were large differences among POCs, especially with regard to evidence-based methodology scores, but no product appeared the best according to the criteria used.

In this post I will review another paper by Banzi et al, published in the BMJ a few weeks ago [2].

This article examined the speed with which evidence-based point-of-care summaries were updated, using a prospective cohort design.

First the authors selected all the systematic reviews signaled by the American College of Physicians (ACP) Journal Club and by Evidence-Based Medicine Primary Care and Internal Medicine from April to December 2009. In the same period the authors selected all the Cochrane systematic reviews labelled as “conclusion changed” in the Cochrane Library. In total 128 systematic reviews were retrieved: 68 (53%) from the literature surveillance journals and 60 (47%) from the Cochrane Library. Starting two months after collection began (June 2009), the authors screened the POCs monthly for a year, looking for citations of the 128 identified systematic reviews.

Only the 5 POCs that ranked in the top quarter for at least 2 of the 3 desirable dimensions were studied, namely: Clinical Evidence, Dynamed, EBM Guidelines, UpToDate and eMedicine. Surprisingly, eMedicine was among the selected POCs, despite a rating of 1 on a scale of 1 to 15 for EBM methodology. One would think that evidence-based-ness is a fundamental prerequisite for EBM-POCs…?!

Results were represented as a (rather odd, but clear) “survival analysis” (“death” = a citation in a summary).

Fig.1 : Updating curves for relevant evidence by POCs (from [2])

I will be brief about the results.

Dynamed clearly beat all the other products in its updating speed.

Expressed in figures, the updating speed of Dynamed was 78% and 97% greater than those of EBM Guidelines and Clinical Evidence, respectively. Dynamed had a median time to citation of around two months, and EBM Guidelines around 10 months, quite close to the limit of the follow-up. The citation rates of the other three point-of-care summaries (UpToDate, eMedicine, Clinical Evidence) were so low that the median exceeded the follow-up period and the authors could not compute it.
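The “survival” framing is easy to picture with a toy sketch (Python; all citation times below are invented for illustration, and censoring is simplified to the end of follow-up, unlike the paper's full analysis):

```python
import math

FOLLOW_UP = 12  # months of follow-up; reviews not cited by then are censored

def median_time_to_citation(times, follow_up=FOLLOW_UP):
    """times: months until a POC cites a review, or None if never cited
    within follow-up (censored). Returns the median time-to-citation,
    or None when the median is not reached within follow-up."""
    n = len(times)
    cited = sorted(t for t in times if t is not None and t <= follow_up)
    k = math.ceil(n / 2)      # the review at which 50% have been cited
    if len(cited) < k:
        return None           # fewer than half cited: median not reached
    return cited[k - 1]

# Hypothetical data: a fast updater vs. a slow one.
fast = [1, 2, 2, 3, 5, None]            # median reached at 2 months
slow = [8, 11, None, None, None, None]  # median exceeds follow-up
print(median_time_to_citation(fast))    # 2
print(median_time_to_citation(slow))    # None
```

This mirrors why no median could be reported for UpToDate, eMedicine and Clinical Evidence: less than half of the relevant reviews had been cited when follow-up ended.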

Dynamed outperformed the other POCs in updating of systematic reviews regardless of the route by which they were signaled. EBM Guidelines and UpToDate had similar overall updating rates, but Cochrane systematic reviews were more likely to be cited by EBM Guidelines than by UpToDate (odds ratio 0.02, P<0.001). This is perhaps not surprising, as EBM Guidelines has a formal agreement with the Cochrane Collaboration to use Cochrane content and label its summaries as “Cochrane inside.” On the other hand, UpToDate was faster than EBM Guidelines in updating systematic reviews signaled by literature surveillance journals.

Dynamed’s higher updating ability was not due to a difference in identifying important new evidence, but to the speed with which this new information was incorporated into its summaries. The centralized updating of Dynamed by its editorial team might account for the more prompt inclusion of evidence.

As the authors rightly point out, slowness in updating could mean that new relevant information is ignored and could thus “affect the validity of point of care information services”.

A slow updating rate weighs more heavily for POCs that “promise” to “continuously update their evidence summaries” (EBM Guidelines) or to “perform a continuous comprehensive review and to revise chapters whenever important new information is published, not according to any specific time schedule” (UpToDate). (See the table describing the updating mechanisms.)

In contrast, eMedicine doesn’t provide any detailed information on its updating policy, another reason why it doesn’t belong in this list of best POCs.
Clinical Evidence, however, clearly states: “We aim to update Clinical Evidence reviews annually. In addition to this cycle, details of clinically important studies are added to the relevant reviews throughout the year using the BMJ Updates service.” But BMJ Updates is not considered in the current analysis. Furthermore, patience is rewarded with excellent and complete summaries of evidence (in my opinion).

Indeed a major limitation of the current (and the previous) study by Banzi et al [1,2] is that they have looked at quantitative aspects and items that are relatively “easy to score”, like “volume” and “editorial quality”, not at the real quality of the evidence (previous post).

Although the findings were new to me, others have recently published similar results (studies were performed in the same time-span):

Shurtz and Foster [3] of the Texas A&M University Medical Sciences Library (MSL) also sought to establish a rubric for evaluating evidence-based medicine (EBM) point-of-care tools in a health sciences library.

They, too, looked at editorial quality and speed of updating, as well as content, search options, quality control, and grading of evidence.

Their main conclusion is that “differences between EBM tools’ options, content coverage, and usability were minimal, but that the products’ methods for locating and grading evidence varied widely in transparency and process”.

Thus this is in line with what Banzi et al reported in their first paper. They also share Banzi’s conclusion about differences in speed of updating:

“DynaMed had the most up-to-date summaries (updated on average within 19 days), while First Consult had the least up to date (updated on average within 449 days). Six tools claimed to update summaries within 6 months or less. For the 10 topics searched, however, only DynaMed met this claim.”

Table 3 from Shurtz and Foster [3] 

Ketchum et al [4] also conclude that DynaMed had the largest proportion of current (2007-2009) references (170/1131, 15%). In addition they found that Dynamed had the largest total number of references (1131/2330, 48.5%).

Yes, and you might have guessed it. The paper of Andrea Ketchum is the 3rd paper I’m going to review.

I also recommend reading the paper by the librarians Shurtz and Foster [3], which I found along the way. It has too much overlap with the Banzi papers to devote a separate post to it. Still, it provides better background information than the Banzi papers, focuses on POCs that claim to be EBM, and doesn’t try to weigh one element over another.

References

  1. Banzi, R., Liberati, A., Moschetti, I., Tagliabue, L., & Moja, L. (2010). A Review of Online Evidence-based Practice Point-of-Care Information Summary Providers Journal of Medical Internet Research, 12 (3) DOI: 10.2196/jmir.1288
  2. Banzi, R., Cinquini, M., Liberati, A., Moschetti, I., Pecoraro, V., Tagliabue, L., & Moja, L. (2011). Speed of updating online evidence based point of care summaries: prospective cohort analysis BMJ, 343 (sep22 2) DOI: 10.1136/bmj.d5856
  3. Shurtz, S., & Foster, M. (2011). Developing and using a rubric for evaluating evidence-based medicine point-of-care tools Journal of the Medical Library Association : JMLA, 99 (3), 247-254 DOI: 10.3163/1536-5050.99.3.012
  4. Ketchum, A., Saleh, A., & Jeong, K. (2011). Type of Evidence Behind Point-of-Care Clinical Information Products: A Bibliometric Analysis Journal of Medical Internet Research, 13 (1) DOI: 10.2196/jmir.1539
  5. Evidence Based Point of Care Summaries [1] No “Best” Among the Bests? (laikaspoetnik.wordpress.com)
  6. How will we ever keep up with 75 Trials and 11 Systematic Reviews a Day? (laikaspoetnik.wordpress.com)
  7. UpToDate or Dynamed? (Shamsha Damani at laikaspoetnik.wordpress.com)
  8. How Evidence Based is UpToDate really? (laikaspoetnik.wordpress.com)





Evidence Based Point of Care Summaries [1] No “Best” Among the Bests?

13 10 2011

For many of today’s busy practicing clinicians, keeping up with the enormous and ever-growing amount of medical information poses substantial challenges [6]. It’s impractical to do a PubMed search to answer each clinical question and then synthesize and appraise the evidence, simply because busy health care providers have limited time and many questions per day.

As repeatedly mentioned on this blog [6,7], it is far more efficient to try to find aggregate (or pre-filtered or pre-appraised) evidence first.

Haynes ‘‘5S’’ levels of evidence (adapted by [1])

There are several forms of aggregate evidence, often represented as the higher layers of an evidence pyramid (because they aggregate individual studies, which form the lowest layer). Confusingly, however, there are many pyramids [8], with different kinds of hierarchies based on different principles.

According to the “5S” paradigm [9] (now evolved to 6S [10]), the peak of the pyramid is formed by the ideal, but not yet realized, computer decision support systems that link individual patient characteristics to the current best evidence. According to the 5S model, the next best sources are evidence-based textbooks.
(Note: EBM and textbooks almost seem a contradiction in terms to me, personally I would not put many of the POCs somewhere at the top. Also see my post: How Evidence Based is UpToDate really?)

Whatever their exact place in the EBM-pyramid, these POCs are helpful to many clinicians. There are many different POCs (see The HLWIKI Canada for a comprehensive overview [11]) with a wide range of costs, varying from free with ads (e-Medicine) to very expensive site licenses (UpToDate). Because of the costs, hospital libraries have to choose among them.

Choices are often based on user preferences and satisfaction and balanced against costs, scope of coverage etc. Choices are often subjective and people tend to stick to the databases they know.

Initial literature about POCs concentrated on user preferences and satisfaction. A New Zealand study [3] among 84 GPs showed no significant difference in preference for, or usage levels of DynaMed, MD Consult (including FirstConsult) and UpToDate. The proportion of questions adequately answered by POCs differed per study (see introduction of [4] for an overview) varying from 20% to 70%.
McKibbon and Fridsma ([5] cited in [4]) found that the information resources chosen by primary care physicians were seldom helpful in providing the correct answers, leading them to conclude that:

“…the evidence base of the resources must be strong and current…We need to evaluate them well to determine how best to harness the resources to support good clinical decision making.”

Recent studies have tried to objectively compare online point-of-care summaries with respect to their breadth, content development, editorial policy, the speed of updating and the type of evidence cited. I will discuss 3 of these recent papers, but will review each paper separately. (My posts tend to be pretty long and in-depth. So in an effort to keep them readable I try to cut down where possible.)

Two of the three papers are published by Rita Banzi and colleagues from the Italian Cochrane Centre.

In the first paper, reviewed here, Banzi et al [1] first identified English Web-based POCs using Medline, Google, librarian association websites, and information conference proceedings from January to December 2008. In order to be eligible, a product had to be an online-delivered summary that is regularly updated, claims to provide evidence-based information and is to be used at the bedside.

They found 30 eligible POCs, of which the following 18 databases met the criteria: 5-Minute Clinical Consult, ACP-Pier, BestBETs, CKS (NHS), Clinical Evidence, DynaMed, eMedicine, eTG complete, EBM Guidelines, First Consult, GP Notebook, Harrison’s Practice, Health Gate, Map Of Medicine, Micromedex, Pepid, UpToDate, ZynxEvidence.

They assessed and ranked these 18 point-of-care products according to: (1) coverage (volume) of medical conditions, (2) editorial quality, and (3) evidence-based methodology. (For operational definitions see appendix 1)

From a quantitative perspective DynaMed, eMedicine, and First Consult were the most comprehensive (88%) and eTG complete the least (45%).

The best editorial quality of EBP was delivered by Clinical Evidence (15), UpToDate (15), eMedicine (13), Dynamed (11) and eTG complete (10). (Scores are shown in brackets)

Finally, BestBETs, Clinical Evidence, EBM Guidelines and UpToDate obtained the maximal score (15 points each) for best evidence-based methodology, followed by DynaMed and Map Of Medicine (12 points each).
As expected eMedicine, eTG complete, First Consult, GP Notebook and Harrison’s Practice had a very low EBM score (1 point each). Personally I would not have even considered these online sources as “evidence based”.

The calculations seem very “exact”, but the assumptions upon which these figures are based are, in my view, open to question. Furthermore, all items have the same weight. Isn’t the evidence-based methodology far more important than “comprehensiveness” and editorial quality?

Especially since “volume” is “just” estimated by analyzing to what extent 4 random chapters of the ICD-10 classification are covered by the POCs. Some sources, like Clinical Evidence and BestBETs (scoring low on this item), don’t aim to be comprehensive but only “answer” a limited number of questions: they are not textbooks.

Editorial quality is determined by scoring of the specific indicators of transparency: authorship, peer reviewing procedure, updating, disclosure of authors’ conflicts of interest, and commercial support of content development.

For the EB methodology, Banzi et al scored the following indicators:

  1. Is a systematic literature search or surveillance the basis of content development?
  2. Is the critical appraisal method fully described?
  3. Are systematic reviews preferred over other types of publication?
  4. Is there a system for grading the quality of evidence?
  5. When expert opinion is included, is it easily recognizable from studies’ data and results?

The score for each of these indicators is 3 for “yes”, 1 for “unclear”, and 0 for “no” (if judged “not adequate” or “not reported”).
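Read as a rubric, the tally can be sketched in a few lines of Python (my own illustration; the indicator labels are paraphrased, not the paper’s exact wording):

```python
# Sketch of Banzi et al.'s EBM-methodology rubric: five indicators,
# each scored 3 ("yes"), 1 ("unclear"), or 0 ("no"), maximum 15 points.
SCORES = {"yes": 3, "unclear": 1, "no": 0}

INDICATORS = [
    "systematic literature search or surveillance",
    "critical appraisal method fully described",
    "systematic reviews preferred",
    "system for grading quality of evidence",
    "expert opinion recognizable",
]

def ebm_methodology_score(answers):
    """answers: dict mapping each indicator to 'yes', 'unclear' or 'no'."""
    return sum(SCORES[answers[i]] for i in INDICATORS)

# A hypothetical product whose editorial pages report everything clearly
# except a stated preference for systematic reviews:
answers = dict.fromkeys(INDICATORS, "yes")
answers["systematic reviews preferred"] = "no"
print(ebm_methodology_score(answers))  # 12 out of a maximum 15
```

The example also shows how coarse the rubric is: a single “no” on one indicator costs 3 points, whatever the product actually does in practice.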

This leaves little room for qualitative differences and mainly relies upon adequate reporting. As discussed earlier in a post where I questioned the evidence-based-ness of UpToDate, there is a difference between tailored searches and checking a limited list of sources (indicator 1). It also matters whether the search is mentioned at all (transparency), whether it is of good quality, and whether it is extensive. For lists, it matters how many sources are “surveyed”. It also matters whether one or both methods are used. These important differences are not reflected in the scores.

Furthermore, some points may be more important than others. Personally I find step 1 the most important. For what good is appraising and grading if it isn’t applied to the most relevant evidence? It is “easy” to do a grading or to copy it from other sources (yes, I wouldn’t be surprised if some POCs do this).

On the other hand, a zero for one indicator can have too much weight on the score.

Dynamed got 12 instead of the maximum 15 points because its editorial policy page didn’t explicitly describe its absolute prioritization of systematic reviews, although it really adheres to that in practice (see the comment by editor-in-chief Brian Alper [2]). Had Dynamed received the deserved 15 points for this indicator, it would have had the highest overall score.

The authors further conclude that none of the dimensions turned out to be significantly associated with the others. For example, BestBETs scored among the worst on volume (comprehensiveness), with an intermediate score for editorial quality and the highest score for evidence-based methodology. Overall, DynaMed, EBM Guidelines, and UpToDate scored in the top quartile for 2 out of 3 variables and in the 2nd quartile for the third (but, as explained above, Dynamed really scored in the top quartile for all 3 variables).

On basis of their findings Banzi et al conclude that only a few POCs satisfied the criteria, with none excelling in all.

The finding that Pepid, eMedicine, eTG complete, First Consult, GP Notebook, Harrison’s Practice and 5-Minute Clinical Consult obtained only 1 or 2 of the maximum 15 points for EBM methodology confirms my “intuitive grasp” that these sources really don’t deserve the label “evidence based”. Perhaps we should make a stricter distinction between “point of care” databases as a point where patients and practitioners interact, “particularly referring to the context of the provider-patient dyad” (definition by Banzi et al), and truly evidence-based summaries. Only a few of the tested databases would fit the latter definition.

In summary, Banzi et al reviewed 18 online evidence-based practice point-of-care information summary providers. They comprehensively evaluated and ranked these resources with respect to (1) coverage (volume) of medical conditions, (2) editorial quality, and (3) evidence-based methodology.

Limitations of the study, also according to the authors, were the lack of a clear definition of these products, the arbitrariness of the scoring system, and the emphasis on the quality of reporting. Furthermore, the study didn’t really assess the products qualitatively (i.e. with respect to performance), nor did it take into account that products might have different aims. Clinical Evidence, for instance, only summarizes evidence on the effectiveness of treatments for a limited number of diseases. Therefore it scores badly on volume while excelling on the other items.

Nevertheless, it is helpful that POCs are objectively compared, and it may serve as a starting point for decisions about acquisition.

References (not in chronological order)

  1. Banzi, R., Liberati, A., Moschetti, I., Tagliabue, L., & Moja, L. (2010). A Review of Online Evidence-based Practice Point-of-Care Information Summary Providers Journal of Medical Internet Research, 12 (3) DOI: 10.2196/jmir.1288
  2. Alper, B. (2010). Review of Online Evidence-based Practice Point-of-Care Information Summary Providers: Response by the Publisher of DynaMed Journal of Medical Internet Research, 12 (3) DOI: 10.2196/jmir.1622
  3. Goodyear-Smith F, Kerse N, Warren J, & Arroll B (2008). Evaluation of e-textbooks. DynaMed, MD Consult and UpToDate. Australian family physician, 37 (10), 878-82 PMID: 19002313
  4. Ketchum, A., Saleh, A., & Jeong, K. (2011). Type of Evidence Behind Point-of-Care Clinical Information Products: A Bibliometric Analysis Journal of Medical Internet Research, 13 (1) DOI: 10.2196/jmir.1539
  5. McKibbon, K., & Fridsma, D. (2006). Effectiveness of Clinician-selected Electronic Information Resources for Answering Primary Care Physicians’ Information Needs Journal of the American Medical Informatics Association, 13 (6), 653-659 DOI: 10.1197/jamia.M2087
  6. How will we ever keep up with 75 Trials and 11 Systematic Reviews a Day? (laikaspoetnik.wordpress.com)
  7. 10 + 1 PubMed Tips for Residents (and their Instructors) (laikaspoetnik.wordpress.com)
  8. Time to weed the (EBM-)pyramids?! (laikaspoetnik.wordpress.com)
  9. Haynes RB. Of studies, syntheses, synopses, summaries, and systems: the “5S” evolution of information services for evidence-based healthcare decisions. Evid Based Med 2006 Dec;11(6):162-164. [PubMed]
  10. DiCenso A, Bayley L, Haynes RB. ACP Journal Club. Editorial: Accessing preappraised evidence: fine-tuning the 5S model into a 6S model. Ann Intern Med. 2009 Sep 15;151(6):JC3-2, JC3-3. PubMed PMID: 19755349 [free full text].
  11. How Evidence Based is UpToDate really? (laikaspoetnik.wordpress.com)
  12. Point_of_care_decision-making_tools_-_Overview (hlwiki.slais.ubc.ca)
  13. UpToDate or Dynamed? (Shamsha Damani at laikaspoetnik.wordpress.com)






UpToDate or Dynamed?

5 07 2009

Guest author: Shamsha Damani (@shamsha) ;
Submission for the July Medlib’s Round

Doctors and other healthcare providers are busy folks. They often don’t have time to go through all the primary literature, find the best evidence, critique it and apply it to their patients in real-time. This is where point-of-care resources shine and make life a bit easier. There are several such tools out there, but the two that I use on a regular basis are UpToDate and DynaMed. There are others like InfoPoems, ACP’s PIER, MD Consult and BMJ’s Point of Care. I often get asked which ones are the best to use and why. The librarian answer to this question: depends on what you are looking for! Not a fair answer I admit, so I wanted to highlight some pros and cons of UpToDate and DynaMed to help you better determine what route to take the next time you find yourself in need of a quick answer to a clinical question.

UpToDate

Pros:

  • Comprehensive coverage
  • Easy-to-read writing style
  • The introduction of grading the evidence is certainly very welcome!

Cons:

  • Expensive
  • Conflict of interest policy a bit perplexing
  • Search feature could use a makeover
  • Remote access at a high premium
  • Not accessible via smart phones
  • They didn’t come to MLA’09 this year and medical librarians felt snubbed (ok, that is not a con, just an observation!)

DynaMed

Pros:

  • Bulleted format is easy to read
  • Remote access part of subscription
  • No conflict of interest with authors
  • A lot of the evidence is graded
  • Accessible on PDAs (iPhones and Blackberries included!)

Cons:

  • The user interface is a bit 1990s and could use a makeover
  • The coverage is not as extensive yet, though they keep adding more topics

A lot has been written about UpToDate and DynaMed, both in PubMed as well as on various blogs. Jacqueline also did a fabulous post on the evidence-based-ness of UpToDate not too long ago. I used to think that I should pick one and stick to it, but have recently found myself re-thinking this attitude. I think that we need to keep in mind that these are point-of-care tools and should not be utilized as one’s only source of information. Use the tool to get an idea about current evidence and combine it with your own clinical judgment when needed at point-of-care. If suspicious, look up the primary literature the good old way by using MEDLINE or other such databases. A point-of-care database will get you started; however, it is not meant to be a one-stop-shop.

I can almost hear people saying: so which one do you prefer anyways? That’s like asking me if I prefer Coke or Pepsi. My honest answer: both! (databases as well as beverages!). So what is a busy clinician to do? If you have access to both (or more), spend some time playing with them and see which one you like. Everyone has a different searching and learning style and it is sometimes a matter of preference. DynaMed’s concise structure may be appealing to newbies, whereas seasoned clinicians may prefer UpToDate’s narrative approach. Based on my very unscientific observation of Twitter conversations, it appears that clinicians in general prefer UpToDate whereas librarians prefer DynaMed. Could this be because UpToDate markets heavily to clinicians and snubs librarians? Or could it be the price? Or could it be the age-old debate on what is evidence? I don’t know the answer, partly because I find it all a bit too political. I’ve seen healthcare providers often use Google or Wikipedia for medical answers, which is quite sad. If you are using either UpToDate or DynaMed (or another similar product), you have already graduated to the big leagues and are a true EBM player! So relax and don’t feel like you have to pick a side. I find myself using both on a regular basis; the degree of success I have with each can be gauged by my daily Twitter feed!

Shamsha Damani





How Evidence Based is UpToDate really?

5 04 2009

KevinMD, or Kevin Pho, is one of the top physician bloggers. He writes many posts per day, often provocatively commenting on breaking medical news or other blog posts.

A few weeks ago Kevin wrote a post on comparative effectiveness research [5] (tweet below), which is “(funded) research to evaluate and compare clinical outcomes, effectiveness, risk, and benefits of two or more medical treatments or services that address a particular medical condition” (definition from DB Medical Rants).

[Image: tweet by KevinMD]

Kevin stated that doctors certainly need an authoritative, unbiased, source to base their decisions on and that that kind of information is already there in the form of UpToDate®. According to Kevin:

For those who don’t know, UpToDate is a peer-reviewed, evidence-based, medical encyclopedia [1] available via DVD or online that’s revised every 3 months. It does not carry advertisements, and is funded entirely via paid subscriptions [3]. I am a big proponent, and like many other doctors, could not practice medicine effectively without it by my side.[2]

Kevin Pho also refers to a recent study showing that hospitals that used UpToDate scored better on patient safety and complication measures, as well as length of stay, when compared to institutions that did not use the resource.[4]

Kevin’s post actually summarized a post by yet another well-known blogger, Val Jones MD (Dr Val) of Better Health. In her blog post Dr Val wonders whether we should incentivize hospitals and providers to use UpToDate more regularly. Incentives could range from pay-for-performance bonuses to malpractice immunity for physicians who adhere to UpToDate’s evidence-based, unbiased clinical recommendations. According to Dr Val, this might be an effective and easy way to target the problem of inconsistent practice styles on a national level, since many physicians know and respect UpToDate.[5]

The tweet of KevinMD elicited many responses on Twitter. To read most of the discussion on twitter follow this link.

Below I shall discuss the points addressed in the blogpost of KevinMD and DrVal. When relevant I will show/discuss the tweets as well.***

[1] Is UpToDate Evidence based?
The main discussion point on Twitter was to what extent UpToDate is evidence based. As you can see below (the oldest tweet is at the bottom, the newest at the top), opinions differ as to the level of UpToDate’s “evidence-basedness”. They vary from the one extreme of UpToDate doing systematic reviews and being entirely evidence based (drval) to ‘a slant of EBM*’ (@kevinmd) and UpToDate being an online book with narrative reviews.

[Image: Twitter discussion on UpToDate’s evidence-basedness]

UpToDate used to be entirely an online book with (excellent) narrative reviews written by experts in the field. From 2006 onwards UpToDate began grading recommendations for treatment and screening using a modification of the GRADE system. Nowadays UpToDate calls its database an evidence-based, peer-reviewed information resource. According to UpToDate, the evidence is compiled from:

  • Hand-searching of over 400 peer-reviewed journals
  • Electronic searching of databases including MEDLINE, The Cochrane Database, Clinical Evidence, and ACP Journal Club
  • Guidelines that adhere to principles of evidence evaluation
  • Published information regarding clinical trials such as reports from the FDA and NIH
  • Proceedings of major national meetings
  • The clinical experience and observations of our authors, editors, and peer reviewers

Although it is an impressive list of EBM sources, this does not mean that UpToDate itself is evidence based. A selection of journals to be ‘handsearched’ will undoubtedly lead to positive publication bias (most positive results reach the major journals). The electronic searches (if done) are not displayed, and therefore the quality of any search performed cannot be checked. It is also unclear on which basis articles are included or excluded. And although UpToDate may summarize evidence from systematic reviews, including Cochrane systematic reviews, it does not perform systematic reviews itself. At most it gives a synthesis of the evidence, which is (still) gathered in a rather nontransparent way. Thus the definition of @kevinmd comes closest: “it gives an evidence based slant”. After all, “Evidence-based medicine is a set of procedures, pre-appraised resources and information tools to assist practitioners to apply evidence from research in the care of individual patients” (McKibbon, K.A.; see definitions on the ScHARR webpage). Merely summarizing and/or referring to evidence is not enough to be evidence based.
It is also not clear what “peer reviewed” implies, i.e. can articles (chapters) be rejected by peer reviewers?

As a consequence the chapters differ in quality. Regularly I don’t find the available evidence in UpToDate. That is also true for students and docs preparing a Critically Appraised Topic (CAT). In my experience, UpToDate is hardly ever useful for finding recent evidence on a not too common question. @Allergynotes tweeted a specific example on chronic urticaria and H. pylori, where the available evidence could not be found in UpToDate.
In an older post (2007)*** @Allergynotes (Ves Dimov) commented on an interesting post by Dr. RW, “Are you UpToDate dependent?”, by citing an old proverb: “Beware the man of a single book (homo unius libri), which describes people with limited knowledge. The current version of the Internet has billions of scientific journal pages and the answer to your questions must be somewhere out there.” Ves:

“I don’t think anybody should be dependent on a single source. If one cannot practice medicine without UpToDate, may be one should not practice at all.”

Likewise, an anonymous commenter on Kevin’ posts stated:

“Don’t overlook the fact that there is a lot of good research outside of UpToDate. This is a great source, but if it’s your only source you’re closing off a tremendous amount of the literature. The articles are also written by people, and are subject to the biases of individuals.”

In another comment Dr. Matthew Mintz of the excellent blog with the same name puts forward that many of the authors have substantial ties to the pharmaceutical industry, meaning that UptoDate (although not financed) is not completely unbiased.

[2] Usefulness
@Allergynotes rightly states that usability/perceived usefulness may be more important to physicians (than real usefulness) and that we should look at what makes UpToDate so useful rather than just say “it’s not EBM”. In one of his posts Ves Dimov (@allergynotes) refers to a (Dutch) paper showing that answers to questions posed during daily patient care are more likely to be answered by UpToDate than by PubMed.** At my hospital some doctors (especially intern med docs) consider UpToDate their Bible. It is without doubt a very useful source for clinicians, patients and even librarians. It is ideal for background questions (How can disease X be treated? What is the differential diagnosis? What might be the cause of this disease?), to look things up, and as a starting point. And it has broad coverage. However, the point here was not whether UpToDate is a useful source for clinicians, but whether it is a sufficiently unbiased, evidence-based source to incentivize docs to follow its recommendations and its recommendations alone. Or as Shamsha puts it: “I don’t like putting all my eggs in UpToDate’s basket.”

[3] Disadvantages/Alternatives
As highlighted by the twitter discussions (read from down up), the major disadvantages of UpToDate are its high pricing, its ridigity, monopolistic tendencies and strict denial of remote access. I don’t know if you have seen the recent post of David Rothman on a very unpolite, aggressive vendor trying to push a trial. Most of David readers guess the vendor was from UpToDate (2nd: MD consult). Is it reasonable to positively discriminate in favor of UpToDate, while not everyone may be able to afford this costly database or may prefer another source? Incentives will only enhance UpToDate’s monopolistic position.
The ideal situation would be an open-source UpToDate, as suggested by @nursedan. Allergynotes thinks this should be possible. A role for Web 2.0 in EBM?

It should be noted that (besides the databases mentioned in the tweets) there are also other freely available evidence-based sources.


[4] Hospitals using UpToDate score better?
Kevin and Dr Val also refer to a study in the International Journal of Medical Informatics showing that hospitals that used UpToDate scored better than hospitals that didn’t (even in a dose-response fashion). This study is displayed prominently on UpToDate’s site.

Now let’s just “score” the Evidence.

First, one can wonder how representative this article is. A quick and dirty Google search gives many hits on the very same subject that do not (directly) involve UpToDate. For instance, a paper published in the January issue of Ann Intern Med reports the results of a large-scale study of more than 40 hospitals and 160,000 patients showing that when health information technologies replace paper forms and handwritten notes, both hospitals and patients benefit substantially (fewer complications, lower mortality rates, and lower costs). Etcetera. One would like to know how the evidence in the “UpToDate paper” relates to other studies, or, better still, see a head-to-head comparison of UpToDate with another (specific) evidence-based source.

The Impact Factor of Int J Med Inform is 1.579. This says nothing about its value, but such a paper would be unlikely to turn up on UpToDate’s handsearch list.

More importantly, two of the four authors are from UpToDate. This is an important source of bias.

Furthermore, the study is retrospective and observational, comparing hospitals with online access to UpToDate to other acute care hospitals. According to the GRADE system this would automatically yield a grade C: low-quality evidence from observational studies. Most importantly, as the authors admit, the study could not fully account for additional features of the included hospitals that may also have been associated with better health outcomes. It is easy to imagine, for instance, that a hospital able to subscribe to UpToDate has a medical staff that was already predisposed to delivering higher-quality care, or might simply have a greater budget ;). And although the average, severity-adjusted length of stay was significantly shorter in UpToDate® hospitals than in other hospitals, with a P value of less than 0.0001, the mean difference was only 0.167 days, with a not very impressive 95% confidence interval of 0.081–0.252 days.
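That last point is worth making concrete: with tens of thousands of admissions, even a trivially small difference becomes “highly significant”. A minimal Python sketch, back-calculating the standard error from the reported 95% confidence interval under a normal approximation (the paper’s actual model may differ, so treat these figures as illustrative):

```python
import math

# Figures as reported in the Int J Med Inform study (quoted above)
mean_diff = 0.167                # mean reduction in length of stay, in days
ci_low, ci_high = 0.081, 0.252   # reported 95% confidence interval

# Back-calculate the standard error from the CI width (normal approximation)
se = (ci_high - ci_low) / (2 * 1.96)

# z statistic and two-sided p value, using the normal CDF via math.erf
z = mean_diff / se
p = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

print(f"SE ≈ {se:.3f} days, z ≈ {z:.2f}, two-sided p ≈ {p:.5f}")
```

The arithmetic shows why a P value below 0.0001 is compatible with a clinical effect of roughly four hours per stay: statistical significance here says more about the sample size than about the relevance of the difference.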

[5] Incentives?
Based on the above arguments I don’t think it would be reasonable, effective or fair to reward those hospitals or doctors that consult (and can afford) UpToDate, or to indirectly punish those that don’t (because they don’t have the money, or because they have a good alternative).

Furthermore, such positive discrimination would not solve the problem of the lack of head-to-head comparisons, which is what this was all about. Dr Mintz explains this very clearly in his comment on Kevin’s post.

“… the authors of UptoDate are providing their own summary of already published data, which most is funded by industry. This is similarly true of other so-called unbiased sources.(..)
The problem goes even deeper than the potential bias of industry funded research, which has been consistently shown to be favorable to the sponsor. The fact that most research, and virtually all therapeutic research is funded by the industry allows the industry to dictate what scientific knowledge is available, and by default clinical practice.(…)

There are hundreds of important studies that are never done because the industry only takes a “safe” bet.
We need comparative effectiveness not just to see whether the more expensive treatment is worth the cost, but we also need it to answer scientifically important questions that the industry will unlikely fund.”

*EBM = evidence based medicine
** It should be noted, though, that another interface to PubMed is used in this hospital, to allow recording of the queries. The study participants were doctors in internal medicine.
*** I’ve added this sentence because people thought I merely summarized the tweets. In addition I added some new references.

References:

You may also want to read: