Of Mice and Men Again: New Genomic Study Helps Explain Why Mouse Models of Acute Inflammation Do Not Work in Men

25 02 2013


This post was updated after a discussion on Twitter with @animalevidence, who pointed me to a great blog post at Speaking of Research ([18], a repost of [19]) highlighting the shortcomings of the current study, which used just one single inbred strain of mice (C57Bl6) [2013-02-26]. The main changes are in blue.

A recent paper published in PNAS [1] caused quite a stir both inside and outside the scientific community. The study challenges the validity of using mouse models to test what works as a treatment in humans. At least this is what many online news sources seem to conclude: “drug testing may be a waste of time”[2], “we are not mice” [3, 4], or a bit more to the point: mouse models of inflammation are worthless [5, 6, 7].

But basically the current study looks only at one specific area, the area of inflammatory responses that occur in critically ill patients after severe trauma and burns (SIRS, Systemic Inflammatory Response Syndrome). In these patients a storm of events may eventually lead to organ failure and death. It is similar to what may occur in sepsis (where the cause is a systemic infection).

Furthermore the study only uses a single approach: it compares the gene response patterns in serious human injuries (burns, trauma) and a human model partially mimicking these inflammatory diseases (healthy human volunteers receiving a low dose of endotoxin) with the corresponding three animal models (burns, trauma, endotoxin).

And, as highlighted by Bill Barrington of “Understand Nutrition” [8], the researchers have only tested the gene profiles in one single strain of mice: C57Bl6 (B6 for short). If B6 were the only model used in practice this would be less of a problem. But according to Mark Wanner of the Jackson Laboratory [18, 19]:

 It is now well known that some inbred mouse strains, such as the C57BL/6J (B6 for short) strain used, are resistant to septic shock. Other strains, such as BALB and A/J, are much more susceptible, however. So use of a single strain will not provide representative results.

The results in themselves are very clear. The figures show at a glance that there is no correlation whatsoever between the human and B6 mouse expression data.

Seok and 36 other researchers from across the USA looked at approximately 5500 human genes and their mouse analogs. In humans, burns and traumatic injuries (and to a certain extent the human endotoxin model) triggered the activation of a vast number of genes that were not triggered in the present C57Bl6 mouse models. In addition, the genomic response is longer lasting in human injuries. Furthermore, the top 5 most activated and most suppressed pathways in human burns and trauma had no correlates in mice. Finally, analysis of existing data in the Gene Expression Omnibus (GEO) database showed that the lack of correlation between mouse and human studies also held for other acute inflammatory responses, like sepsis and acute infection.
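For readers who want a feel for what “no correlation” means here: the core of the analysis boils down to correlating the expression changes of orthologous genes in the human condition and in the corresponding mouse model. A minimal sketch of such a comparison, using hypothetical log2 fold-changes rather than the actual Seok et al. data:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical log2 fold-changes (injury vs. control) for a handful of
# orthologous genes; the real study compared ~5,500 human genes with
# their mouse analogs.
human_logfc = np.array([2.1, -1.4, 0.8, 3.0, -0.2, 1.7])   # human burn/trauma response
mouse_logfc = np.array([0.3,  0.1, -0.5, 0.4, 0.0, -0.1])  # matching B6 mouse model

r, p_value = pearsonr(human_logfc, mouse_logfc)
print(f"Pearson r = {r:.2f}, R^2 = {r**2:.2f} (p = {p_value:.2f})")
# An R^2 near zero -- as reported for the mouse models -- means the mouse
# response explains almost none of the variation in the human response.
```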

This is a high quality study with interesting results. However, the results are not as groundbreaking as some media suggest.

As discussed by the authors [1], mice are known to be far more resilient to inflammatory challenge than humans: a million-fold higher dose of endotoxin than the dose causing shock in humans is lethal to mice. This, and the fact that “none of the 150 candidate agents that progressed to human trials has proved successful in critically ill patients”, already indicates that the current approach fails.

[This is not entirely correct: the endotoxin/LPS dose required to induce severe disease with shock in mice is 1,000–10,000 times the dose required in humans [20], and mice that are resilient to endotoxin may still be susceptible to infection. It may well be that the endotoxin response is not a good model for the late effects of sepsis.]

The disappointing trial results have forced many researchers not only to question the usefulness of the current mouse models for acute inflammation [9, 10; refs from 11], but also to rethink key aspects of the human response itself and the way these clinical trials are performed [12, 13, 14]. For instance, emphasis has always been on the exuberant inflammatory reaction, but the subsequent immunosuppression may also be a major contributor to the disease. There is also substantial heterogeneity among patients [13, 14] that may explain why some patients have a good prognosis and others do not. And some of the initially positive results in human trials have not been reproduced in later studies either (benefit of intense glucose control and corticosteroid treatment) [12]. Thus, is it fair to blame only the mouse studies?


The coverage by some media is grist to the mill of people who think animal studies are worthless anyway. But one cannot extrapolate these findings to other diseases. Furthermore, as referred to above, the researchers have only tested the gene profiles in a single strain of mice, C57Bl6, meaning that “The findings of Seok et al. are solely applicable to the B6 strain of mice in the three models of inflammation they tested. They unduly generalize these findings to mouse models of inflammation in general.” [8]

It is true that animal studies, including rodent studies, have their limitations. But what are the alternatives? In vitro studies are often even more artificial, and direct clinical testing of new compounds in humans is not ethical.

Obviously, the final proof of effectiveness and safety of new treatments can only be established in human trials. No one will question that.

A lot can be said about why animal studies often fail to translate directly to the clinic [15]. Clinical disparities between the animal models and the clinical trials testing the treatment (as in sepsis) are one reason. Other important reasons may be methodological flaws in animal studies (e.g. no randomization, wrong statistics) and publication bias: non-publication of “negative” results appears to be prevalent in laboratory animal research [15-16]. Despite their shortcomings, animal studies and in vitro studies offer a way to examine certain aspects of a process, disease or treatment.

In summary, this study confirms that the existing (C57Bl6) mouse model doesn’t resemble the human situation in the systemic response following acute traumatic injury or sepsis: the genomic response is entirely different, in magnitude, duration and types of changes in expression.

The findings are not new: the shortcomings of the mouse model(s) have long been known. It remains enigmatic why the researchers chose only one inbred strain of mice, and of all mice the B6 strain, which is less sensitive to endotoxin and only develops acute kidney injury (part of organ failure) at old age (young mice were used) [21]. In that paper from 2009 (!) [21] various reasons are given why the animal models didn’t properly mimic the human disease and how this can be improved. The authors stress that:

 “the genetically heterogeneous human population should be more accurately represented by outbred mice, reducing the bias found in inbred strains that might contain or lack recessive disease susceptibility loci, depending on selective pressures.”

Both Bill Barrington [8] and Mark Wanner [18, 19] propose the use of “diversity outbred cross or collaborative cross mice that provide additional diversity.” Indeed, “replicating genetic heterogeneity and critical clinical risk factors such as advanced age and comorbid conditions (..) led to improved models of sepsis and sepsis-induced AKI (acute kidney injury)” [21].

The authors of the PNAS paper suggest that genomic analysis can aid further in revealing which genes play a role in the perturbed immune response in acute inflammation, but it remains to be seen whether this will ultimately lead to effective treatments of sepsis and other forms of acute inflammation.

It also remains to be seen whether comprehensive genomic characterization will be useful in other disease models. The authors suggest, for instance, that genetic profiling may serve as a guide in developing animal models. A shotgun analysis of the expression of thousands of genes was useful in the present situation, because “the severe inflammatory stress produced a genomic storm affecting all major cellular functions and pathways in humans which led to sufficient perturbations to allow comparisons between the genes in the human conditions and their analogs in the murine models”. But a rough analysis of overall expression profiles may give little insight into the usefulness of other animal models, where genetic responses are more subtle.

And predicting what will happen is far less easy than confirming what is already known…

NOTE: as said, the coverage in news and blogs is again quite biased. The conclusion of a generally good Dutch science news site (the headline and lead suggested that animal models of immune diseases are crap [6]) was adapted after a critical discussion on Twitter (see here and here), and a link was added to this blog post. I wish this occurred more often…
In my opinion the most balanced summaries can be found at the science-based blogs ScienceBased Medicine [11] and the NIH Director’s Blog [17], whereas “Understand Nutrition” [8] has an original point of view, which is further elaborated by Mark Wanner at Speaking of Research [18] and the Genetics and Your Health blog [19].

References

  1. Seok, J., Warren, H., Cuenca, A., Mindrinos, M., Baker, H., Xu, W., Richards, D., McDonald-Smith, G., Gao, H., Hennessy, L., Finnerty, C., Lopez, C., Honari, S., Moore, E., Minei, J., Cuschieri, J., Bankey, P., Johnson, J., Sperry, J., Nathens, A., Billiar, T., West, M., Jeschke, M., Klein, M., Gamelli, R., Gibran, N., Brownstein, B., Miller-Graziano, C., Calvano, S., Mason, P., Cobb, J., Rahme, L., Lowry, S., Maier, R., Moldawer, L., Herndon, D., Davis, R., Xiao, W., Tompkins, R., Abouhamze, A., Balis, U., Camp, D., De, A., Harbrecht, B., Hayden, D., Kaushal, A., O’Keefe, G., Kotz, K., Qian, W., Schoenfeld, D., Shapiro, M., Silver, G., Smith, R., Storey, J., Tibshirani, R., Toner, M., Wilhelmy, J., Wispelwey, B., & Wong, W. (2013). Genomic responses in mouse models poorly mimic human inflammatory diseases. Proceedings of the National Academy of Sciences. DOI: 10.1073/pnas.1222878110
  2. Drug Testing In Mice May Be a Waste of Time, Researchers Warn 2013-02-12 (science.slashdot.org)
  3. Susan M Love We are not mice 2013-02-14 (Huffingtonpost.com)
  4. Elbert Chu  This Is Why It’s A Mistake To Cure Mice Instead Of Humans 2012-12-20(richarddawkins.net)
  5. Derek Lowe. Mouse Models of Inflammation Are Basically Worthless. Now We Know. 2013-02-12 (pipeline.corante.com)
  6. Elmar Veerman. Waardeloos onderzoek. Proeven met muizen zeggen vrijwel niets over ontstekingen bij mensen. 2013-02-12 (wetenschap24.nl)
  7. Gina Kolata. Mice Fall Short as Test Subjects for Humans’ Deadly Ills. 2013-02-12 (nytimes.com)

  8. Bill Barrington. Are Mice Reliable Models for Human Disease Studies? 2013-02-14 (understandnutrition.com)
  9. Raven, K. (2012). Rodent models of sepsis found shockingly lacking Nature Medicine, 18 (7), 998-998 DOI: 10.1038/nm0712-998a
  10. Nemzek JA, Hugunin KM, & Opp MR (2008). Modeling sepsis in the laboratory: merging sound science with animal well-being. Comparative medicine, 58 (2), 120-8 PMID: 18524169
  11. Steven Novella. Mouse Model of Sepsis Challenged 2013-02-13 (http://www.sciencebasedmedicine.org/index.php/mouse-model-of-sepsis-challenged/)
  12. Wiersinga WJ (2011). Current insights in sepsis: from pathogenesis to new treatment targets. Current opinion in critical care, 17 (5), 480-6 PMID: 21900767
  13. Khamsi R (2012). Execution of sepsis trials needs an overhaul, experts say. Nature medicine, 18 (7), 998-9 PMID: 22772540
  14. Hotchkiss RS, Coopersmith CM, McDunn JE, & Ferguson TA (2009). The sepsis seesaw: tilting toward immunosuppression. Nature medicine, 15 (5), 496-7 PMID: 19424209
  15. van der Worp, H., Howells, D., Sena, E., Porritt, M., Rewell, S., O’Collins, V., & Macleod, M. (2010). Can Animal Models of Disease Reliably Inform Human Studies? PLoS Medicine, 7 (3) DOI: 10.1371/journal.pmed.1000245
  16. ter Riet, G., Korevaar, D., Leenaars, M., Sterk, P., Van Noorden, C., Bouter, L., Lutter, R., Elferink, R., & Hooft, L. (2012). Publication Bias in Laboratory Animal Research: A Survey on Magnitude, Drivers, Consequences and Potential Solutions PLoS ONE, 7 (9) DOI: 10.1371/journal.pone.0043404
  17. Dr. Francis Collins. Of Mice, Men and Medicine 2013-02-19 (directorsblog.nih.gov)
  18. Tom/ Mark Wanner Why mice may succeed in research when a single mouse falls short (2013-02-15) (speakingofresearch.com) [repost, with introduction]
  19. Mark Wanner. Why mice may succeed in research when a single mouse falls short (2013-02-13) (http://community.jax.org) [original post]
  20. Warren, H. (2009). Editorial: Mouse models to study sepsis syndrome in humans Journal of Leukocyte Biology, 86 (2), 199-201 DOI: 10.1189/jlb.0309210
  21. Doi, K., Leelahavanichkul, A., Yuen, P., & Star, R. (2009). Animal models of sepsis and sepsis-induced kidney injury Journal of Clinical Investigation, 119 (10), 2868-2878 DOI: 10.1172/JCI39421




BAD Science or BAD Science Journalism? – A Response to Daniel Lakens

10 02 2013

Two weeks ago there was a hot debate among Dutch tweeps on “bad science, bad science journalism and bad science communication“. This debate was started and fueled by different Dutch blog posts on this topic [1, 4-6].

A controversial post, with both fierce proponents and fierce opposition was the post by Daniel Lakens [1], an assistant professor in Applied Cognitive Psychology.

I was among the opponents. Not because I don’t like a fresh point of view, but because of flawed reasoning and because Daniel continuously compares apples and oranges.

Since Twitter debates can’t go in-depth and lack structure, and since I cannot comment on his Google Sites blog, I pursue my discussion here.

The title of Daniel’s post is (freely translated, like the rest of his post):

Is this what one calls good science?” 

In his post he criticizes a Dutch science journalist, Hans van Maanen, and specifically his recent column [2], where Hans discusses a paper published in Pediatrics [3].

This longitudinal study tested the Music Marker theory among 309 Dutch kids. The researchers gathered information about the kids’ favorite types of music and tracked incidents of “minor delinquency”, such as shoplifting or vandalism, from the time they were 12 until they reached age 16 [4]. The researchers conclude that liking music that goes against the mainstream (rock, heavy metal, gothic, punk, African American music, and electronic dance music) at age 12 is a strong predictor of future minor delinquency at 16, in contrast to chart pop, classical music, and jazz.

The university press office sent out a press release [5], which was picked up by news media [4, 6], and one of the Dutch authors of this study, Loes Keijsers, tweeted enthusiastically: “Want to know whether a 16 year old will suffer from delinquency, then look at his music taste at age 12!”

According to Hans, Loes could easily have broadcast (more) balanced tweets, like “Music preference doesn’t predict shoplifting” or “12-year-olds who like Bach keep quiet about shoplifting when 16.” But even then, Hans argues, the tweets wouldn’t have been scientifically underpinned either.

In column style Hans explains why he thinks the study isn’t methodologically strong: no absolute numbers are given; 7 out of 11 (!) music styles are positively associated with delinquency, but these correlations are not impressive: the strongest predictor (gothic music preference) can explain no more than 9% of the variance in delinquent behaviour, which can include anything from shoplifting, vandalism, fighting and graffiti spraying to switching price tags. Furthermore, the risks of later “delinquent” behavior are small: on a scale from 1 (never) to 4 (4 times or more) the mean score was 1.12. Hans also wonders whether it is a good idea to monitor kids with a certain music taste.
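To put those numbers in perspective: explaining 9% of the variance corresponds to a (bivariate) correlation of roughly 0.3, and a mean of 1.12 on a 1–4 frequency scale sits very close to the “never” end. A quick back-of-the-envelope check (figures taken from the column; the bivariate interpretation of the 9% is my assumption):

```python
import math

variance_explained = 0.09          # strongest predictor (gothic music preference)
r = math.sqrt(variance_explained)  # corresponding bivariate correlation
print(f"r = {r:.2f}")              # about 0.30 -- a modest association at best

mean_score = 1.12                  # delinquency, scale from 1 (never) to 4 (4+ times)
fraction_of_scale = (mean_score - 1) / (4 - 1)
print(f"Mean score {mean_score} lies only {fraction_of_scale:.0%} above 'never' on the scale.")
```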

Thus Hans concludes “this study isn’t good science”. Daniel, however, concludes that Hans’ writing is not good science journalism.

First Daniel recalls that he and other PhD students took a course on how to peer review scientific papers. On the basis of their peer review of a (published) article, 90% of the students decided to reject it. The two main lessons Daniel learned were:

  • It is easy to criticize a scientific paper and grind it down. No single contribution to science (no single article) is perfect.
  • New scientific insights, although imperfect, are worth sharing, because they help science evolve. *¹

According to Daniel, science journalists often make the same mistakes as the peer-reviewing PhD students: criticizing individual studies without a “meta-view” on science.

Peer review and journalism however are different things (apples and oranges if you like).

Peer review (with all its imperfections) serves to filter, check and improve the quality of individual scientific papers, (usually) before they are published [10]. My papers that passed peer review were generally accepted. Of course there were the negative reviewers, often the ignorant ones, and the naggers, but many reviewers had criticism that helped to improve my paper, sometimes substantially. As a peer reviewer myself, I only try to separate the wheat from the chaff and to enhance the quality of the papers that pass.

Science journalism also has a filter function: it filters already peer-reviewed scientific papers* for its readership, “the public”, by selecting novel, relevant science and translating the scientific, jargon-laden language into language readers can understand and appreciate. Of course science journalists should put the publication into perspective (call it “meta”).

Surely the PhD students’ finger exercise resembles the normal peer review process as much as peer review resembles science journalism.

I understand that pure nitpicking seldom serves a goal, but this rarely occurs in science journalism. The opposite, however, is commonplace.

Daniel disapproves of Hans van Maanen’s criticism, because Hans isn’t “meta” enough. Daniel: “Arguing whether an effect size is small or mediocre is nonsense, because no individual study gives a good estimate of the effect size. You need to do more research and combine the results in a meta-analysis.”

Apples and oranges again.

Being “meta” has little to do with meta-analysis. Being meta is … uh … pretty meta. You could think of it as seeing beyond (meta) the findings of one single study*.

A meta-analysis, however, is a statistical technique for combining the findings of independent but comparable (homogeneous) studies in order to estimate the true effect size more powerfully (pretty exact). This is an important but difficult methodological task for a scientist, not a journalist. If a meta-analysis on the topic exists, journalists should take it into account, of course (and so should the researchers). If not, they should put the single study in a broader perspective (what does the study add to existing knowledge?) and show why this single study is or is not well done.
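For readers unfamiliar with the mechanics: a basic fixed-effect meta-analysis weights each study’s effect estimate by the inverse of its variance, so that larger, more precise studies count more, and the pooled estimate comes with a narrower confidence interval than any single study. A minimal sketch with made-up numbers (not data from the Pediatrics study):

```python
import math

# Hypothetical effect estimates and variances from three independent studies
# (illustrative numbers only).
effects   = [0.30, 0.12, 0.22]
variances = [0.010, 0.004, 0.008]

weights = [1.0 / v for v in variances]          # inverse-variance weights
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
se = math.sqrt(1.0 / sum(weights))              # standard error of the pooled estimate

print(f"Pooled effect = {pooled:.3f}, 95% CI = "
      f"[{pooled - 1.96 * se:.3f}, {pooled + 1.96 * se:.3f}]")
# No single study pins down the effect size; the pooled estimate and its
# confidence interval are what a meta-analysis adds beyond any one study.
```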

Daniel takes this further by stating that “one study is no study” and that journalists who simply echo the press release of a study and journalists who just amply criticize a single publication (like Hans) are clueless about science.

Apples and oranges! How can one lump science communicators (“media releases”), echoing journalists (“the media”) and critical journalists together?

I see more value in a critical analysis than in blind rejoicing over hot air. As long as the criticism guides the reader in appreciating the study.

And if there is just one single novel study, that seems important enough to get media attention, shouldn’t we judge the research on its own merits?

Then Daniel asks himself: “If I do criticize those journalists, shouldn’t I criticize those scientists who published just a single study and wrote a press release about it? “

His conclusion? “No”.

Daniel explains: science never provides absolute certainty; at most the evidence is strong enough to state what is likely true. This can only be achieved by a lot of research by different investigators.

Therefore you should believe in your ideas and encourage other scientists to pursue your findings. It doesn’t help when you say that music preference doesn’t predict shoplifting. It does help when you use the media to draw attention to your research. Many researchers are now aware of the “Music Marker Theory”. Thus the press release had its desired effect. By expressing a firm belief in their conclusions, they encourage other scientists to spend their sparse time on this topic. These scientists will try to repeat and falsify the study, an essential step in Cumulative Science. At a time when science is under pressure, scientists shouldn’t stop writing enthusiastic press releases or tweets. 

The latter paragraph is sheer nonsense!

Critical analysis of one study by a journalist isn’t what undermines public confidence in science. Rather, it’s the media circus that blows the implications of scientific findings out of proportion.

As exemplified by the hilarious PhD Comic below, research results are propagated by PR (science communication), picked up by the media, broadcast, and spread via the internet. At the end of the cycle, conclusions are reached that are not backed up by (sufficient) evidence.

PhD Comics – The news Cycle

Daniel is right about some things. First one study is indeed no study, in the sense that concepts are continuously tested and corrected: falsification is a central property of science (Popper). He is also right that science doesn’t offer absolute certainty (an aspect that is often not understood by the public). And yes, researchers should believe in their findings and encourage other scientists to check and repeat their experiments.

Though not primarily via the media, but via the normal scientific route. Good scientists will keep track of new findings in their field anyway. What if only findings trumpeted in the media were pursued by other scientists?


And authors shouldn’t make overstatements. They shouldn’t raise expectations to a level that cannot be met. The Dutch study only shows weak associations. It simply isn’t true that the Dutch study allows us to “predict” at an individual level whether a 12-year-old will “act out” at 16.

This doesn’t help laypeople to understand the findings and to appreciate science.

The idea that media should just serve to spotlight a paper, seems objectionable to me.

Going back to the meta-level: what about the role of science communicators, media, science journalists and researchers?

According to the journalist Maarten Keulemans, we should just get rid of all science communicators as a layer between scientists and journalists [7]. But Michel van Baal [9] and Roy Meijer [8] have a point when they say that journalists do a lot of PR too, and they should do better than rehash news releases.*²

Now what about Daniel’s criticism of van Maanen? In my opinion, van Maanen is one of those rare critical journalists who serve as an antidote against uncritical media diarrhea (see the figure above). Comparable to another lone voice in the media: Ben Goldacre. It didn’t surprise me that Daniel didn’t approve of him (and his book Bad Science) either [11].

Does this mean that I find Hans van Maanen a terrific science journalist? No, not really. I often agree with him (see for instance this post [12]). He is one of those rare journalists with real expertise in research methodology. However, his columns don’t seem to be written for a large audience: they seem too complex for most lay people. One thing I learned during a scientific journalism course is that one should explain all jargon to one’s audience.

Personally I find this critical Dutch blog post [13] about the Music Marker Theory far more balanced. After a clear description of the study, Linda Duits concludes that the results of the study are pretty obvious, but that the mini-hype surrounding this research is caused by the positive tone of the press release. She stresses that prediction is not predetermination and that the musical genres themselves are not important: hip-hop doesn’t lead to criminal activity, nor metal to vandalism.

And this critical piece in Jezebel [14],  reaches far more people by talking in plain, colourful language, hilarious at times.

It also has a swell title: “Delinquents Have the Best Taste in Music”. Now that is an apt conclusion!

———————-

*¹ Since Daniel refers neither to open (trial) data access nor to the fact that peer review may fail, I ignore these aspects for the sake of the discussion.

*² Coincidence? Keulemans covered the music marker study quite uncritically (positively).

Photo Credits

http://www.phdcomics.com/comics/archive.php?comicid=1174

References

  1. Daniel Lakens: Is dit nou goede Wetenschap? – Jan 24, 2013 (sites.google.com/site/lakens2/blog)
  2. Hans van Maanen: De smaak van boefjes in de dop,De Volkskrant, Jan 12, 2013 (vanmaanen.org/hans/columns/)
  3. ter Bogt, T., Keijsers, L., & Meeus, W. (2013). Early Adolescent Music Preferences and Minor Delinquency PEDIATRICS DOI: 10.1542/peds.2012-0708
  4. Lindsay Abrams: Kids Who Like ‘Unconventional Music’ More Likely to Become Delinquent, the Atlantic, Jan 18, 2013
  5. Muziekvoorkeur belangrijke voorspeller voor kleine criminaliteit. Jan 8, 2013 (pers.uu.nl)
  6. Maarten Keulemans: Muziek is goede graadmeter voor puberaal wangedrag – De Volkskrant, 12 januari 2013  (volkskrant.nl)
  7. Maarten Keulemans: Als we nou eens alle wetenschapscommunicatie afschaffen? – Jan 23, 2013 (denieuwereporter.nl)
  8. Roy Meijer: Wetenschapscommunicatie afschaffen, en dan? – Jan 24, 2013 (denieuwereporter.nl)
  9. Michel van Baal. Wetenschapsjournalisten doen ook aan PR – Jan 25, 2013 (denieuwereporter.nl)
  10. What peer review means for science (guardian.co.uk)
  11. Daniel Lakens. Waarom raadde Maarten Keulemans me Bad Science van Goldacre aan? Oct 25, 2012
  12. Why Publishing in the NEJM is not the Best Guarantee that Something is True: a Response to Katan – Sept 27, 2012 (laikaspoetnik.wordpress.com)
  13. Linda Duits: Debunk: worden pubers crimineel van muziek? (dieponderzoek.nl)
  14. Lindy West: Science: “Delinquents Have the Best Taste in Music” (jezebel.com)




Why Publishing in the NEJM is not the Best Guarantee that Something is True: a Response to Katan

27 10 2012

In a previous post [1] I reviewed a recent Dutch study, published in the New England Journal of Medicine (NEJM) [2], about the effects of sugary drinks on the body mass index of schoolchildren.

The study was widely covered by the media. The NRC, for which the main author Martijn Katan works as a science columnist, spent two full (!) pages on the topic, without a single critical comment [3].
As if this wasn’t enough, Katan’s latest column again dealt with his article (text freely available at mkatan.nl) [4].

I found Katan’s column “Col hors Catégorie” [4] quite arrogant, especially because he tried to belittle a, as he called it, “know-it-all” journalist who had criticized his work in a rival newspaper. This wasn’t fair, because the journalist had raised important points [5, 1] about the work.

The piece focused on the long road to getting papers published in a top journal like the NEJM.
Katan considers the NEJM the “Tour de France” among medical journals: it is a top achievement to publish in this journal.

Katan also states that “publishing in the NEJM is the best guarantee something is true”.

I think the latter statement is wrong for a number of reasons.*

  1. First, most published findings are false [6]. Thus journals can never “guarantee” that published research is true.
    Factors that make it less likely that research findings are true include a small effect size, a greater number and lesser preselection of tested relationships, selective outcome reporting, the “hotness” of the field (all applying more or less to Katan’s study; he also changed the primary outcomes during the trial [7]), a small study, a great financial interest, and a low pre-study probability (the latter not applicable here).
  2. It is true that the NEJM has a very high impact factor. This is a measure of how often papers in that journal are cited by others. Of course researchers want to get their paper published in a high-impact journal. But journals with high impact factors often go for trendy topics and positive results. In other words, it is far more difficult to publish a good-quality study with negative results, certainly in a high-impact English-language journal. This is called publication bias (and language bias) [8]. Positive studies will also be more frequently cited (citation bias) and will more likely be published more than once (multiple publication bias) (indeed, Katan et al already published about the trial [9], and have not presented all their data yet [1, 7]). All forms of bias are a distortion of the “truth”.
    (This is the reason why the search for a (Cochrane) systematic review must be very sensitive [8] and not restricted to core clinical journals, but should even include unpublished studies: these studies might be “true”, but have failed to get published.)
  3. Indeed, the group of Ioannidis just published a large-scale statistical analysis [10] showing that medical studies reporting “very large effects” seldom stand up when other researchers try to replicate them. Often studies with large effects measure laboratory and/or surrogate markers (like BMI) instead of truly clinically relevant outcomes (diabetes, cardiovascular complications, death).
  4. More specifically, the NEJM does regularly publish studies about pseudoscience or bogus treatments. See for instance this blog post [11] at Science-Based Medicine on Acupuncture Pseudoscience in the New England Journal of Medicine (which, by the way, is just a review). A publication in the NEJM doesn’t guarantee it isn’t rubbish.
  5. Importantly, the NEJM has the highest proportion of trials (RCTs) with sole industry support (35%, compared to 7% in the BMJ) [12]. On several occasions I have discussed these conflicts of interest and their impact on the outcome of studies [13, 14]; see also [15, 16]. In their study, Gøtzsche and his colleagues from the Nordic Cochrane Centre [12] also showed that industry-supported trials were more frequently cited than trials with other types of support, and that omitting them from the impact factor calculation decreased journal impact factors. The impact factor decrease was even 15% for the NEJM (versus 1% for the BMJ in 2007)! (A minimal sketch of this impact-factor arithmetic follows this list.) For the journals that provided data, income from the sale of reprints contributed 3% and 41% of the total income for the BMJ and The Lancet, respectively.
    A recent study, co-authored by Ben Goldacre (MD & science writer) [17], confirms that funding by the pharmaceutical industry is associated with high numbers of reprint orders. Again, only the BMJ and The Lancet provided all the necessary data.
  6. Finally, and most relevant to the topic, a study [18], also discussed at Retraction Watch [19], shows that articles in journals with higher impact factors are more likely to be retracted and, surprise surprise, the NEJM clearly stands on top. Although other factors, like higher readership and scrutiny, may also play a role [20], this conflicts with Katan’s idea that “publishing in the NEJM is the best guarantee something is true”.
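To see how a heavily cited subset of papers can prop up an impact factor (points 2 and 5 above): the two-year impact factor is simply the citations received in a given year to items published in the two preceding years, divided by the number of citable items in those years. Below is a minimal sketch of that arithmetic with purely hypothetical numbers (not the actual NEJM or BMJ figures):

```python
# Two-year impact factor: citations received in year Y to articles published in
# years Y-1 and Y-2, divided by the number of citable items published in Y-1 and Y-2.
def impact_factor(citations, citable_items):
    return citations / citable_items

# Hypothetical journal: 1,000 citable items, of which 50 are industry-supported
# trials that attract a disproportionate share of the citations (made-up numbers).
total_citations, total_items = 52_000, 1_000
trial_citations, trial_items = 12_000, 50

with_trials = impact_factor(total_citations, total_items)
without_trials = impact_factor(total_citations - trial_citations,
                               total_items - trial_items)

print(f"Impact factor with trials:    {with_trials:.1f}")
print(f"Impact factor without trials: {without_trials:.1f}")
print(f"Decrease: {100 * (1 - without_trials / with_trials):.0f}%")
# Removing the heavily cited trial subset lowers the ratio markedly --
# the kind of effect Lundh et al. [12] quantified for real journals.
```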

I wasn’t aware of the latter study and would like to thank @DrVes and Ivan Oransky for responding to my crowdsourcing on Twitter.

References

  1. Sugary Drinks as the Culprit in Childhood Obesity? a RCT among Primary School Children (laikaspoetnik.wordpress.com)
  2. de Ruyter JC, Olthof MR, Seidell JC, & Katan MB (2012). A trial of sugar-free or sugar-sweetened beverages and body weight in children. The New England journal of medicine, 367 (15), 1397-406 PMID: 22998340
  3. Wim Köhler. Eén kilo lichter. NRC, Zaterdag 22-09-2012 (http://archief.nrc.nl/)
  4. Martijn Katan. Col hors Catégorie [Dutch], published in NRC, 20 October 2012 (www.mkatan.nl)
  5. Hans van Maanen. Suiker uit fris, De Volkskrant, 29 september 2012 (freely accessible at http://www.vanmaanen.org/)
  6. Ioannidis, J. (2005). Why Most Published Research Findings Are False PLoS Medicine, 2 (8) DOI: 10.1371/journal.pmed.0020124
  7. Changes to the protocol http://clinicaltrials.gov/archive/NCT00893529/2011_02_24/changes
  8. Publication Bias. The Cochrane Collaboration open learning material (www.cochrane-net.org)
  9. de Ruyter JC, Olthof MR, Kuijper LD, & Katan MB (2012). Effect of sugar-sweetened beverages on body weight in children: design and baseline characteristics of the Double-blind, Randomized INtervention study in Kids. Contemporary clinical trials, 33 (1), 247-57 PMID: 22056980
  10. Pereira, T., Horwitz, R.I., & Ioannidis, J.P.A. (2012). Empirical Evaluation of Very Large Treatment Effects of Medical Interventions. JAMA: The Journal of the American Medical Association, 308 (16) DOI: 10.1001/jama.2012.13444
  11. Acupuncture Pseudoscience in the New England Journal of Medicine (sciencebasedmedicine.org)
  12. Lundh, A., Barbateskovic, M., Hróbjartsson, A., & Gøtzsche, P. (2010). Conflicts of Interest at Medical Journals: The Influence of Industry-Supported Randomised Trials on Journal Impact Factors and Revenue – Cohort Study PLoS Medicine, 7 (10) DOI: 10.1371/journal.pmed.1000354
  13. One Third of the Clinical Cancer Studies Report Conflict of Interest (laikaspoetnik.wordpress.com)
  14. Merck’s Ghostwriters, Haunted Papers and Fake Elsevier Journals (laikaspoetnik.wordpress.com)
  15. Lexchin, J. (2003). Pharmaceutical industry sponsorship and research outcome and quality: systematic review BMJ, 326 (7400), 1167-1170 DOI: 10.1136/bmj.326.7400.1167
  16. Smith R (2005). Medical journals are an extension of the marketing arm of pharmaceutical companies. PLoS medicine, 2 (5) PMID: 15916457 (free full text at PLOS)
  17. Handel, A., Patel, S., Pakpoor, J., Ebers, G., Goldacre, B., & Ramagopalan, S. (2012). High reprint orders in medical journals and pharmaceutical industry funding: case-control study BMJ, 344 (jun28 1) DOI: 10.1136/bmj.e4212
  18. Fang, F., & Casadevall, A. (2011). Retracted Science and the Retraction Index Infection and Immunity, 79 (10), 3855-3859 DOI: 10.1128/IAI.05661-11
  19. Is it time for a Retraction Index? (retractionwatch.wordpress.com)
  20. Agrawal A, & Sharma A (2012). Likelihood of false-positive results in high-impact journals publishing groundbreaking research. Infection and immunity, 80 (3) PMID: 22338040

——————————————–

* Addendum: my (unpublished) letter to the NRC

Tour de France.
After the NRC had earlier devoted two full (!) pages of praise to Katan’s new study, Katan found it necessary to do it all over again in his own column. Referring to your own work is allowed, even in a column, but then we as readers should become wiser for it. So what is the message of this piece, “Col hors Catégorie“? It mainly describes the long road to getting a scientific study published in a top journal, in this case the New England Journal of Medicine (NEJM), “the Tour de France among medical journals”. The piece ends with a tackle on a journalist “who thought he knew better”. But so what, if the whole world is cheering? Quite unsporting, because that journalist (van Maanen, de Volkskrant) did in fact score on several points. Much can also be said against Katan’s key point that an NEJM publication “is the best guarantee that something is true”. The NEJM does indeed have a high impact factor, a measure of how often articles are cited. But the NEJM also has the highest article-retraction index. It likewise has the highest percentage of industry-sponsored clinical trials, which push up the overall impact factor. Moreover, top journals go mainly for “positive results” and “trendy topics”, which invites publication bias. To extend the comparison with the Tour de France: completing this prestigious race is no guarantee that the participants have not used banned substances. Despite the strict doping controls.




#EAHIL2012 CEC 2: Visibility & Impact – Library’s New Role to Enhance Visibility of Researchers

4 07 2012

This week I’m blogging at (and mostly about) the 13th EAHIL conference in Brussels. EAHIL stands for European Association for Health Information and Libraries.

The second Continuing Education Course (CEC) I followed was given by Tiina Heino and Katri Larmo of the Terkko Meilahti Campus Library at the University of Helsinki in Finland.

The full title of the course was Visibility and impact – library’s new role: How the library can support the researcher to get visibility and generate impact to researcher’s work. You can read the abstract here.

The hands-on workshop mainly concentrated on the social bookmarking sites Connotea and Mendeley, and on Altmetric.

Furthermore we got information on CiteULike, ORCID, Faculty of 1000 Posters and Pinterest. Services developed at Terkko, such as ScholarChart and TopCited Articles, were also briefly demonstrated.

What I especially liked about the hands-on session is that the tutors had prepared a wikispace with all the information and links on the main page (https://visibility2012.wikispaces.com) and a separate page for each participant to edit (here is my page). You could add links to your created accounts and embed widgets for Mendeley.

There was sufficient time to practice and try the tools. And despite the great number of participants there was ample room for questions (& even for making a blog draft ;)).

The main message of the tutors is that the process of publishing scientific research doesn’t end with publishing the article: what happens after the research has been published is equally important. Visibility and impact in the scientific community and in society are crucial for moving the research forward, as well as for obtaining research funding and promoting the researcher’s career. The figure below (taken from the presentation) visualizes this process.

The tutors discussed ORCID, the Open Researcher and Contributor ID, which will be introduced later this year. It is meant to solve the author-name ambiguity problem in scholarly communication through a central registry of unique identifiers for each author (because author names can’t be used to reliably identify all scholarly authors). It will be possible for authors to create, manage and share their ORCID record without a membership fee. For further information see several publications and presentations by Martin Fenner. I found this one during the course while browsing Mendeley.

Once published, the author’s work can be promoted using bookmarking tools like CiteULike, Connotea and Mendeley. You can easily register for Connotea and Mendeley using your Facebook account. These social bookmarking tools are also useful for networking, i.e. to discover individuals and groups with the same field of interest. It is easy to synchronize your Mendeley and CiteULike accounts.

Mendeley is available in a desktop and a web version. The web version offers a public profile for researchers, a catalog of documents, and collaborative groups (the cloud side of Mendeley). The desktop version of Mendeley is especially suited for reference management and organizing your PDFs. That said, Mendeley seems most suitable for serendipitous use (clicking and importing a reference you happen to see and like) and less useful for managing and deduplicating large numbers of records, e.g. for a systematic review.
Also, during the course it was not possible to import several PubMed records at once into either CiteULike or Mendeley.

What struck me when I tried Mendeley is that there were many small or dead groups. A search for “cochrane”, for instance, yielded one large group, the Cochrane QES Register, owned by Andrew Booth, and 3 groups with one member (thus not really groups), with 0 (!) to 6 papers each! It looks like people try Mendeley and other tools just for a short while. Indeed, most papers I looked up in PubMed were not bookmarked at all. It makes you wonder how widespread the use of these bookmarking tools is. It probably doesn’t help that there are so many tools with different purposes and possibilities.

Another tool that we tried was Altmetric. This is a free bookmarklet for scholarly articles that allows you to track the conversations around scientific articles online. It shows the tweets, blog posts, Google+ and Facebook mentions, and the numbers of bookmarks on Mendeley, CiteULike and Connotea.
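The same numbers the bookmarklet displays can also be retrieved programmatically. A minimal sketch, assuming the free public Altmetric v1 DOI endpoint (no API key needed for basic lookups) and an example DOI; the exact field names in the returned JSON may differ:

```python
import json
import urllib.error
import urllib.request

# Look up the altmetric data for one article by DOI (example DOI used here).
doi = "10.1371/journal.pmed.1000326"
url = f"https://api.altmetric.com/v1/doi/{doi}"  # public v1 endpoint (assumed)

try:
    with urllib.request.urlopen(url) as response:
        data = json.load(response)
    # Print a few fields if present; names are assumptions about the API output.
    for key in ("title", "score", "cited_by_tweeters_count", "cited_by_feeds_count"):
        print(key, ":", data.get(key))
except urllib.error.HTTPError as err:
    # The API returns an HTTP error when no altmetric data exist for the DOI.
    print("No altmetric data found for this DOI (HTTP", err.code, ")")
```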

I tried the tool on a paper I blogged about, i.e. Seventy-Five Trials and Eleven Systematic Reviews a Day: How Will We Ever Keep Up?

The bookmarklet showed the tweets and the blogposts mentioning the paper.

Indeed, Altmetric did correctly refer to my blog (even to 2 posts).

I liked Altmetric*, but saying that it is suitable for scientific metrics is a step too far. For people interested in this topic I would like to refer, again, to a post by Martin Fenner on altmetrics (in general). He stresses that “usage metrics” have their limitations because of their proneness to “gaming” (cheating).

But the current workshop didn’t address the shortcomings of the tools, for it was meant as a first practical acquaintance with the web 2.0 tools.

For the other tools (Faculty of 1000 Posters, Pinterest) and the services developed at Terkko, such as ScholarChart and TopCited Articles, see the wikipage and the presentation.

*Coincidentally, I’m preparing a post on handy Chrome extensions for finding tweets about a webpage. Altmetric is another tool that seems very suitable for this purpose.






Jeffrey Beall’s List of Predatory, Open-Access Publishers, 2012 Edition

19 12 2011

Perhaps you remember that I previously wrote [1] about non-existent and/or low-quality scammy open access journals. I specifically wrote about the Medical Science Journals of the http://www.sciencejournals.cc/ series, which comprises 45 titles, none of which has published any article yet.

Another blogger, David M [2], also had negative experiences with fake peer review invitations from sciencejournals. He even noticed plagiarism.

Later I occasionally found other posts about open access spam, like the posts of Per Ola Kristensson [3] (specifically about the Bentham, Hindawi and InTech OA publishers), of Peter Murray-Rust [4], a chemist interested in OA (about spam journals and conferences, specifically Scientific Research Publishing), and of Alan Dove PhD [5] (specifically about The Journal of Computational Biology and Bioinformatics Research (JCBBR), published by Academic Journals).

But now it appears that there is an entire list of “Predatory, Open-Access Publishers”. This list was created by Jeffrey Beall, academic librarian at the University of Colorado Denver. He just updated the list for 2012 here (PDF-format).

According to Jeffrey predatory, open-access publishers

are those that unprofessionally exploit the author-pays model of open-access publishing (Gold OA) for their own profit. Typically, these publishers spam professional email lists, broadly soliciting article submissions for the clear purpose of gaining additional income. Operating essentially as vanity presses, these publishers typically have a low article acceptance threshold, with a false-front or non-existent peer review process. Unlike professional publishing operations, whether subscription-based or ethically-sound open access, these predatory publishers add little value to scholarship, pay little attention to digital preservation, and operate using fly-by-night, unsustainable business models.

Jeffrey recommends not to do business with the following (illegitimate) publishers, including submitting article manuscripts, serving on editorial boards, buying advertising, etc. According to Jeffrey, “there are numerous traditional, legitimate journals that will publish your quality work for free, including many legitimate, open-access publishers”.

(For sake of conciseness, I only describe the main characteristics, not always using the same wording; please see the entire list for the full descriptions.)

Watchlist: Publishers, that may show some characteristics of  predatory, open-access publisher
  • Hindawi – way too many journals than can be properly handled by one publisher
  • MedKnow Publications – vague business model; it charges for the PDF version
  • PAGEPress – many dead links, a prominent link to PayPal
  • Versita Open – paid subscription for print form, unclear business model

An asterisk (*) indicates that the publisher is appearing on this list for the first time.

How complete and reliable is this list?

Clearly, this list is quite exhaustive. Jeffrey did a great job listing many dodgy OA journals. We should watch (many of) these OA publishers with caution. Another good thing is that the list is updated annually.

(http://www.sciencejournals.cc/, described in my previous post, is not (yet) on the list ;) but I will inform Jeffrey.)

Personally, I would have preferred a distinction between real bogus or spammy journals and journals that simply seem to have “too many journals to properly handle” or that ask (too much) money for subscriptions or from the author. The scientific content may still be good (enough).

Furthermore, I would rather see a neutral description of what exactly is wrong with a journal, especially because “Beall’s list” is a list and not a blog post (or is it?). Sometimes the description doesn’t convince me that the journal is really bogus or predatory.

Examples of subjective portrayals:

  • Dove Press:  This New Zealand-based medical publisher boasts high-quality appearing journals and articles, yet it demands a very high author fee for publishing articles. Its fleet of journals is large, bringing into question how it can properly fulfill its promise to quickly deliver an acceptance decision on submitted articles.
  • Libertas Academica: “The tag line under the name on this publisher’s page is “Freedom to research.” It might better say “Freedom to be ripped off.”
  • Hindawi  .. This publisher has way too many journals than can be properly handled by one publisher, I think (…)

I do like funny posts, but only if it is clear that the post is intended to be funny. Like the one by Alan Dove PhD about JCBBR.

JCBBR is dedicated to increasing the depth of research across all areas of this subject.

Translation: we’re launching a new journal for research that can’t get published anyplace else.

The journal welcomes the submission of manuscripts that meet the general criteria of significance and scientific excellence in this subject area.

We’ll take pretty much any crap you excrete.

Hat tip: Catherine Arnott Smith, PhD, at the MedLib-L list.

  1. I Got the Wrong Request from the Wrong Journal to Review the Wrong Piece. The Wrong kind of Open Access Apparently, Something Wrong with this Inherently… (laikaspoetnik.wordpress.com)
  2. A peer-review phishing scam (blog.pita.si)
  3. Academic Spam and Open Access Publishing (blog.pokristensson.com)
  4. What’s wrong with Scholarly Publishing? New Journal Spam and “Open Access” (blogs.ch.cam.ac.uk)
  5. From the Inbox: Journal Spam (alandove.com)
  6. Beall’s List of Predatory, Open-Access Publishers. 2012 Edition (http://metadata.posterous.com)
  7. Silly Sunday #42 Open Access Week around the Globe (laikaspoetnik.wordpress.com)




FUTON Bias. Or Why Limiting to Free Full Text Might not Always be a Good Idea.

8 09 2011

A few weeks ago I was discussing possible relevant papers for the Twitter Journal Club (hashtag #TwitJC), a successful initiative on Twitter, which I have discussed previously here and here [7, 8].

I proposed an article that appeared behind a paywall. Annemarie Cunningham (@amcunningham) immediately ran the idea down, stressing that open access (OA) is a prerequisite for the TwitJC journal club.

One of the TwitJC organizers, Fi Douglas (@fidouglas on Twitter), argued that using paid-for journals would defeat the objective that  #TwitJC is open to everyone. I can imagine that fee-based articles could set a too high threshold for many doctors. In addition, I sympathize with promoting OA.

However, I disagree with Annemarie that an OA (or rather free) paper is a prerequisite if you really want to talk about what might impact on practice. On the contrary, limiting to free full text (FFT) papers in PubMed might lead to bias: picking the “low-hanging fruit of convenience” might mean that the paper isn’t representative and/or doesn’t reflect the current best evidence.

But is there evidence for my theory that selecting FFT papers might lead to bias?

Let’s first look at the extent of the problem. What percentage of papers do we miss by limiting to free-access papers?

A survey in PLoS ONE by Björk et al [1] found that one in five peer-reviewed research papers published in 2008 was freely available on the internet. Overall, 8.5% of the articles published in 2008 (and 13.9% in medicine) were freely available at the publishers’ sites (gold OA). For an additional 11.9%, free manuscript versions could be found via the green route, i.e. copies in repositories and on websites (7.8% in medicine).
As a commenter rightly stated, the lag time is also important, as we would like to have immediate access to recently published research, yet some publishers (37%) impose an access embargo of 6–12 months or more. (These papers were largely missed, as the 2008 OA status was assessed in late 2009.)

[Figure from the PLoS ONE survey by Björk et al. [1]: open access availability of articles published in 2008]

The strength of the paper is that it measures OA prevalence on an article basis, not by calculating the share of journals that are OA: an OA journal generally contains a lower number of articles.
The authors randomly sampled from 1.2 million articles using the advanced search facility of Scopus. They measured what share of OA copies the average researcher would find using Google.

Another paper, published in J Med Libr Assoc (2009) [2] and using methods similar to those of the PLoS survey, examined the state of open access (OA) specifically in the biomedical field. Because of its broad coverage and popularity in the biomedical field, PubMed was chosen to collect the target sample of 4,667 articles. Matsubayashi et al used four different databases and search engines to identify full text copies. The authors reported an OA percentage of 26.3 for peer-reviewed articles (70% of all articles), which is comparable to the results of Björk et al. More than 70% of the OA articles were provided through journal websites. The percentages of green OA articles from the websites of authors or in institutional repositories were quite low (5.9% and 4.8%, respectively).

In their discussion of the findings of Matsubayashi et al, Björk et al [1] quickly assessed the OA status in PubMed by using the new “link to Free Full Text” search facility. First they searched for all “journal articles” published in 2005 and then repeated this with the further restriction of “link to FFT”. The PubMed OA percentages obtained this way were 23.1 for 2005 and 23.3 for 2008.
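This kind of quick estimate is easy to reproduce yourself. A minimal sketch using Biopython’s Entrez module, assuming PubMed’s free full text subset filter (free full text[sb]) corresponds to the “link to FFT” restriction described above; replace the e-mail address with your own, as NCBI requests:

```python
from Bio import Entrez

Entrez.email = "your.name@example.org"  # NCBI asks for a contact e-mail address

def pubmed_count(query):
    """Return the number of PubMed records matching a query."""
    handle = Entrez.esearch(db="pubmed", term=query, retmax=0)
    record = Entrez.read(handle)
    handle.close()
    return int(record["Count"])

base = '"journal article"[pt] AND 2008[dp]'
total = pubmed_count(base)
free = pubmed_count(base + " AND free full text[sb]")

print("Journal articles from 2008:", total)
print(f"With a free full text link: {free} ({100 * free / total:.1f}%)")
# The share found this way should be in the same ballpark as the ~23% above;
# note that embargoed papers only enter the free subset after 6-12 months.
```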

This proportion of freely available biomedical papers is gradually increasing. A chart on Nature’s News Blog [9] shows that the proportion of free papers indexed in PubMed each year has increased from 23% in 2005 to above 28% in 2009.
(Methods are not shown, though. The 2008 figure is higher than that of Björk et al, who noticed little difference from 2005. The data for this chart, however, are from David Lipman, NCBI director and driving force behind the digital OA archive PubMed Central.)
Again, because of the embargo periods, not all literature is immediately available at the time that it is published.

In summary, we would miss about 70% of biomedical papers by limiting to FFT papers. Among recently published papers we would miss an even larger proportion, because of the embargo periods.

Of course, the key question is whether ignoring relevant studies not available in full text really matters.

Reinhard Wentz of the Imperial College Library and Information Service already argued in a visionary 2002 Lancet letter [3] that the availability of full-text articles on the internet might have created a new form of bias: FUTON bias (Full Text On the Net bias).

Wentz reasoned that FUTON bias will not affect researchers who are used to comprehensive searches of published medical studies, but that it will affect staff and students with limited experience in doing searches and that it might have the same effect in daily clinical practice as publication bias or language bias when doing systematic reviews of published studies.

Wentz also hypothesized that FUTON bias (together with no-abstract-available (NAA) bias) will affect the visibility and the impact factor of OA journals. He makes a reasonable case that NAA bias will affect publications on new, peripheral, and under-discussion subjects more than established topics covered in substantive reports.

The study by Murali et al [4], published in Mayo Clinic Proceedings in 2004, confirms that the availability of journals on MEDLINE as FUTON or NAA affects their impact factor.

Of the 324 journals screened by Murali et al, 38.3% were FUTON, 19.1% NAA and 42.6% had abstracts only. The mean impact factors were 3.24 (±0.32), 1.64 (±0.30), and 0.14 (±0.45), respectively! The authors confirmed this finding by showing a difference in impact factors for journals available in both the pre- and the post-internet era (n=159).

Murali et al informally questioned many physicians and residents at multiple national and international meetings in 2003. These doctors uniformly admitted relying on FUTON articles on the web to answer a sizable proportion of their questions. A study by Carney et al (2004) [5] showed that 98% of US primary care physicians used the internet as a resource for clinical information at least once a week and mostly used FUTON articles to aid decisions about patient care or patient education and medical student or resident instruction.

Murali et al therefore conclude that failure to consider FUTON bias may not only affect a journal’s impact factor, but could also limit consideration of medical literature by ignoring relevant for-fee articles and thereby influence medical education akin to publication or language bias.

This proposed effect of the FFT limit on citation retrieval for clinical questions was examined in a more recent study (2008), published in J Med Libr Assoc [6].

Across all 4 questions based on a research agenda for physical therapy, the FFT limit reduced the number of citations to 11.1% of the total number of citations retrieved without the FFT limit in PubMed.

Even more important, high-quality evidence such as systematic reviews and randomized controlled trials were missed when the FFT limit was used.

For example, when searching without the FFT limit, 10 systematic reviews of RCTs were retrieved against one when the FFT limit was used. Likewise when searching without the FFT limit, 28 RCTs were retrieved and only one was retrieved when the FFT limit was used.

The proportion of missed studies (approximately 90%) is higher than in the studies mentioned above, possibly because real searches were tested and only relevant clinical studies were considered.

The authors rightly conclude that consistently missing high-quality evidence when searching clinical questions is problematic because it undermines the process of Evidence-Based Practice. Krieger et al finally conclude:

“Librarians can educate health care consumers, scientists, and clinicians about the effects that the FFT limit may have on their information retrieval and the ways it ultimately may affect their health care and clinical decision making.”

It is the hope of this librarian that she did a little education in this respect and clarified the point that limiting to free full text might not always be a good idea. Especially if the aim is to critically appraise a topic, to educate or to discuss current best medical practice.

References

  1. Björk, B., Welling, P., Laakso, M., Majlender, P., Hedlund, T., & Guðnason, G. (2010). Open Access to the Scientific Journal Literature: Situation 2009 PLoS ONE, 5 (6) DOI: 10.1371/journal.pone.0011273
  2. Matsubayashi, M., Kurata, K., Sakai, Y., Morioka, T., Kato, S., Mine, S., & Ueda, S. (2009). Status of open access in the biomedical field in 2005 Journal of the Medical Library Association : JMLA, 97 (1), 4-11 DOI: 10.3163/1536-5050.97.1.002
  3. Wentz, R. (2002). Visibility of research: FUTON bias. The Lancet, 360 (9341), 1256. DOI: 10.1016/S0140-6736(02)11264-5
  4. Murali NS, Murali HR, Auethavekiat P, Erwin PJ, Mandrekar JN, Manek NJ, & Ghosh AK (2004). Impact of FUTON and NAA bias on visibility of research. Mayo Clinic proceedings. Mayo Clinic, 79 (8), 1001-6 PMID: 15301326
  5. Carney PA, Poor DA, Schifferdecker KE, Gephart DS, Brooks WB, & Nierenberg DW (2004). Computer use among community-based primary care physician preceptors. Academic medicine : journal of the Association of American Medical Colleges, 79 (6), 580-90 PMID: 15165980
  6. Krieger, M., Richter, R., & Austin, T. (2008). An exploratory analysis of PubMed’s free full-text limit on citation retrieval for clinical questions Journal of the Medical Library Association : JMLA, 96 (4), 351-355 DOI: 10.3163/1536-5050.96.4.010
  7. The #TwitJC Twitter Journal Club, a new Initiative on Twitter. Some Initial Thoughts. (laikaspoetnik.wordpress.com)
  8. The Second #TwitJC Twitter Journal Club (laikaspoetnik.wordpress.com)
  9. How many research papers are freely available? (blogs.nature.com)




To Retract or Not to Retract… That’s the Question

7 06 2011

In the previous post [1] I discussed how the editors of Science asked for the retraction of a paper linking the XMRV retrovirus to ME/CFS.

The decision of the editors was based on the failure of at least 10 other studies to confirm these findings and on growing support that the results were caused by contamination. When the authors refused to retract their paper, Science issued an Expression of Concern [2].

In my opinion retraction is premature. Science should at least await the results of two multi-center studies that were designed to confirm or disprove the results. These studies will continue anyway… the budget is already allocated.

Furthermore, I can’t suppress the idea that Science asked for a retraction to exonerate itself for the poor peer review (the paper had serious flaws) and its eagerness to swiftly publish a possibly groundbreaking study.

And what about the other studies linking XMRV to ME/CFS or other diseases: will these also be retracted?
And what happens in the improbable case that the multi-center studies confirm the 2009 paper? Would Science republish the retracted paper?

Thus, in my opinion, it is up to other scientists to confirm or disprove published findings. Remember that falsifiability was Karl Popper’s basic scientific principle. My conclusion was that “fraud is a reason to retract a paper and doubt is not”.

This is my opinion, but is this opinion shared by others?

When should editors retract a paper? Is fraud the only reason? When should editors issue a letter of concern? Are there guidelines?

Let me first say that even editors don’t agree. Schekman, the editor-in-chief of PNAS, has no direct plans to retract another paper reporting XMRV-like viruses in CFS [3].

Schekman considers it “an unusual situation to retract a paper even if the original findings in a paper don’t hold up: it’s part of the scientific process for different groups to publish findings, for other groups to try to replicate them, and for researchers to debate conflicting results.”

Back at the Virology Blog [4] there was also a vivid discussion about the matter. Prof. Vincent Racaniello gave the following answer in response to a reader’s question:

I don’t have any hard numbers on how often journals ask scientists to retract a paper, only my sense that it is very rare. Author retractions are more frequent, but I’m only aware of a handful of those in a year. I can recall a few other cases in which the authors were asked to retract a paper, but in those cases scientific fraud was involved. That’s not the case here. I don’t believe there is a standard policy that enumerates how such decisions are made; if they exist they are not public.

However, there is a guideline for editors: Retractions: Guidance from the Committee on Publication Ethics (COPE) (PDF) [5].

Ivan Oransky, of the great blog Retraction Watch, linked to it when we discussed reasons for retraction.

With regard to retraction the COPE-guidelines state that journal editors should consider retracting a publication if:

  1. they have clear evidence that the findings are unreliable, either as a result of misconduct (e.g. data fabrication) or honest error (e.g. miscalculation or experimental error)
  2. the findings have previously been published elsewhere without proper crossreferencing, permission or justification (i.e. cases of redundant publication)
  3. it constitutes plagiarism
  4. it reports unethical research

According to the same guidelines journal editors should consider issuing an expression of concern if:

  1. they receive inconclusive evidence of research or publication misconduct by the authors 
  2. there is evidence that the findings are unreliable but the authors’ institution will not investigate the case 
  3. they believe that an investigation into alleged misconduct related to the publication either has not been, or would not be, fair and impartial or conclusive 
  4. an investigation is underway but a judgement will not be available for a considerable time

Thus, in the case of the Science XMRV/CFS paper, an expression of concern certainly applies (all 4 points) and one might even consider a retraction, because the results seem unreliable (point 1). But it is not 100% established that the findings are false. There is only serious doubt…

The guidelines seem to leave room for case-by-case decisions. Retracting a paper in case of plain fraud is not under discussion. But when is an error sufficiently established and important to warrant retraction?

Apparently retractions are on the rise. Although still rare (0.02% of all publications by the late 2000s), there has been a tenfold increase in retractions compared to the early 1980s (see the review at Scholarly Kitchen [6] about two papers: [7] and [8]). However, it is unclear whether increasing rates of retraction reflect more fraudulent or erroneous papers or greater diligence. The first paper [7] also highlights that, out of fear of litigation, editors are generally hesitant to retract an article without the author’s permission.

At the blog Nerd Alert they give a nice overview [9] (based on Retraction Watch, but summarized in one post ;) ). They clarify that papers are retracted for “less dastardly reasons than those cases that hit the national headlines and involve purposeful falsification of data”, such as the fraudulent papers of Andrew Wakefield (autism caused by vaccination). Besides the mistaken publication of the same paper twice, data over-interpretation, plagiarism and the like, the reason can also be more trivial: ordering the wrong mice or using an incorrectly labeled bottle.

Still, scientists don’t unanimously agree that such errors should lead to retraction.

Drug Monkey blogs about his discussion [10] with @ivanoransky over a recent post at Retraction Watch, which asks whether a failure to replicate a result justifies a retraction [11]. Ivan Oransky presents a case where a researcher (B) couldn’t reproduce the findings of another lab (A) and demonstrated mutations in the published protein sequence that excluded the mechanism proposed in A’s paper. The paper wasn’t retracted, possibly because B didn’t follow A’s published experimental protocols in all details. (This reminds me of the XMRV controversy.)

Drugmonkey says (quote; cross-posted at Scientopia here — hmmpf, isn’t that an example of redundant publication?):

“I don’t give a fig what any journals might wish to enact as a policy to overcompensate for their failures of the past.
In my view, a correction suffices” (provided that search engines like Google and PubMed make clear that the paper was in fact corrected).

Drug Monkey has a point there. A clear watermark should suffice.

However, we should note that most papers are retracted by authors, not by the editors/journals, and that the majority of “retracted papers” remain available. Just 13.2% are deleted from the journal’s website, and 31.8% are not clearly labelled as such.

Summary of how the naïve reader is alerted to paper retraction (from Table 2 in [7], see: Scholarly Kitchen [6])

  • Watermark on PDF (41.1%)
  • Journal website (33.4%)
  • Not noted anywhere (31.8%)
  • Note appended to PDF (17.3%)
  • PDF deleted from website (13.2%)

My conclusion?

Of course fraudulent papers should be retracted. Also papers with obvious errors that invalidate the conclusions.

However, we should be extremely hesitant to retract papers that can’t be reproduced, if there is no undisputed evidence of error.

Otherwise we should retract almost all published papers at one point or another. Because if Professor Ioannidis is right (and he probably is), “much of what medical researchers conclude in their studies is misleading, exaggerated, or flat-out wrong” (see previous post [12], “Lies, Damned Lies, and Medical Science” [13], and Ioannidis’ crushing article “Why most published research findings are false” [14]).

All retracted papers (and papers with major deficiencies and shortcomings) should be clearly labeled as such (as Drugmonkey proposed, not only in the PDF and on the journal website, but also by search engines and biomedical databases).

Or let’s hope, with Biochembelle [15], that the future of scientific publishing will make retractions for technical issues obsolete (whether in the form of nano-publications [16] or otherwise):

One day the scientific community will trade the static print-type approach of publishing for a dynamic, adaptive model of communication. Imagine a manuscript as a living document, one perhaps where all raw data would be available, others could post their attempts to reproduce data, authors could integrate corrections or addenda….

NOTE: Retraction Watch (@ivanoransky) and @laikas have voted in @drugmonkeyblog‘s poll about what a retracted paper means [here]. Have you?

References

  1. Science Asks to Retract the XMRV-CFS Paper, it Should Never Have Accepted in the First Place. (laikaspoetnik.wordpress.com 2011-06-02)
  2. Alberts B. Editorial Expression of Concern. Science. 2011-05-31.
  3. Given Doubt Cast on CFS-XMRV Link, What About Related Research? (blogs.wsj.com)
  4. XMRV is a recombinant virus from mice  (Virology Blog : 2011/05/31)
  5. Retractions: Guidance from the Committee on Publication Ethics (COPE) Elizabeth Wager, Virginia Barbour, Steven Yentis, Sabine Kleinert on behalf of COPE Council:
    http://www.publicationethics.org/files/u661/Retractions_COPE_gline_final_3_Sept_09__2_.pdf
  6. Retract This Paper! Trends in Retractions Don’t Reveal Clear Causes for Retractions (scholarlykitchen.sspnet.org)
  7. Wager E, Williams P. Why and how do journals retract articles? An analysis of Medline retractions 1988-2008. J Med Ethics. 2011 Apr 12. [Epub ahead of print] 
  8. Steen RG. Retractions in the scientific literature: is the incidence of research fraud increasing? J Med Ethics. 2011 Apr;37(4):249-53. Epub 2010 Dec 24.
  9. Don’t touch that blot. (nerd-alert.net/blog/weeklies/ : 2011/02/25)
  10. What_does_a_retracted_paper_mean? (scienceblogs.com/drugmonkey: 2011/06/03)
  11. So when is a retraction warranted? The long and winding road to publishing a failure to replicate (retractionwatch.wordpress.com : 2011/06/03/)
  12. Much Ado About ADHD-Research: Is there a Misrepresentation of ADHD in Scientific Journals? (laikaspoetnik.wordpress.com 2011-06-02)
  13. “Lies, Damned Lies, and Medical Science” (theatlantic.com :2010/11/)
  14. Ioannidis, J. (2005). Why Most Published Research Findings Are False. PLoS Medicine, 2 (8) DOI: 10.1371/journal.pmed.0020124
  15. Retractions: What are they good for? (biochembelle.wordpress.com : 2011/06/04/)
  16. Will Nano-Publications & Triplets Replace The Classic Journal Articles? (laikaspoetnik.wordpress.com 2011-06-02)






How a Valentine’s Editorial about Chocolate & Semen Led to the Resignation of Top Surgeon Greenfield

27 04 2011
[Image: children’s Valentine card, via Wikipedia]

Dr. Lazar Greenfield recently won the election as the new president of the ACS (American College of Surgeons), a position that would crown his achievements, for Greenfield was a truly pre-eminent surgeon. He is best known for the development of the intracaval filter bearing his name, a device that has probably saved many lives by preventing blood clots from reaching the lungs. He has also been highly productive, having authored more than 360 scientific articles in peer-reviewed journals, 128 book chapters and 2 textbooks.

Greenfield also happened to have a minor side job as the editor-in-chief of Elsevier’s Surgery News. Surgery News is not a peer-reviewed journal, but what Greenfield later described as a monthly throwaway newspaper (of the kind Elsevier produces in abundance).

As editor-in-chief, Greenfield wrote open editorials (opinion pieces) for Surgery News. He found a very suitable theme for the February issue: Valentine’s Day.

Valentine’s Day is about love, and the editorial was about romantic gut feelings possibly having a physiological basis; in other words, the world of sexual chemical signals that give you that butterfly feeling. The editorial jumps from the mating preferences of fruit flies, stressed female rotifers turning into males, and the synchronization of menstrual cycles of women who live together, to a study suggesting that “exposure” to semen makes female college students less depressed. All four topics are based on scientific research published in peer-reviewed papers.

Valentine’s Day calls for giving this “scientific” story a twist, so he concluded the editorial as follows:

“So there’s a deeper bond between men and women than St. Valentine would have suspected, and now we know there’s a better gift for that day than chocolates.”

Now, everybody knows that that conclusion ain’t supported by the data.
This would have required at least a double-blind randomized trial comparing the mood-enhancing effects of chocolate with those of ……. (yikes!).

Just joking, of course…, just as dear Lazar was trying to be funny…

No, the editorial wasn’t particularly funny.

And somehow it isn’t pleasant to think of a man’s love fluid wrapped in a ribbon and a box with hearts when you were expecting some chocolates. Furthermore, it suggests that sperm is something a man just gives/donates/injects, not the result of mutual love.

However this was the opposite of what Greenfield had in mind:

“The biochemical properties of semen that were reviewed have been documented in peer-reviewed journals and represent the remarkable way that Nature promotes bonding between men and women, not something demeaning.”

Thus the man just tried to “amuse his readers” and highlight research on “some fascinating new findings related to semen.”

I would have appreciated a more subtle ending of the editorial, but I would take no offense.

….Unlike many of his female colleagues in surgery. The Women in Surgery Committee and the Association of Women Surgeons considered his editorial “demeaning to women” (NY Times).

He offered his sincere apologies and resigned as editor-in-chief of the paper. The publication was retracted. In fact, the entire February issue of Surgery News was taken off the ACS website. Luckily, Retraction Watch published the editorial in its entirety.

Greenfield’s apologies weren’t enough: women surgeons brought the issue to the Board of Regents, which asked him to resign as president-elect, which he eventually did.

A few weeks later he wrote a resentful letter. This was not a smart thing to do, but it is understandable for several reasons. First, he didn’t mean to be offensive and made his apologies. Second, he has an exemplary career as a longtime mentor and advocate of women in surgery. Third, the true reason for his forced resignation wasn’t the implicit plea for unprotected sex, but rather that the editorial reflected “a macho culture in surgery that needed to change.” Fourth, his life is ruined over something trivial.

Why can’t one write a lighthearted opinion piece on Valentine’s Day without being forced to resign? Is it because admitting that the “bond between men and women” is natural and runs deep is one of those truths you cannot utter (Paul Rahe)?

Is this perhaps typically American?

Elmar Veerman (Dutch journalist, science editor at VPRO) comments at Retraction Watch:

(…) Frankly, I don’t see the problem. I find it rather funny and harmless. Perhaps because I’m from Europe, where most people have a more relaxed attitude towards sex. Something like ‘nipplegate’ could never happen here (a nipple on tv, so what).  (…) I have been wondering for years why so many Americans seem to think violence is fine and sex is scary.

Not only female surgeons objected to the editorial. Well-known male (US) surgeons “fillet” the editorial on their blogs: Jeffrey Parks at Buckeye Surgeon (1 and 2), Orac at Respectful Insolence (1 and 2) and Skeptical Scalpel (the latter quite mildly).

Jeffrey and Orac not only think the man is humorless and sexist, but also that the science behind the mood-enhancing aspects of semen is crap.

Although Jeffrey only regards “the ‘science’ a little suspect as per Orac”…. Because, of course, “Orac knows.”

Orac exaggerates what Greenfield said in the “breathtakingly inappropriate and embarrassing article for Surgery News”, as he calls it [1]: the “mood-enhancing effects of semen” become, in Orac’s words, the cure for female depression and “a woman needs a man to inject his seed into her in order to be truly happy“.
Of course, it is not fair to twist words this way.

Orac’s criticism of the science that supports Dr. Greenfield’s joke is as follows: the first two studies are not related to human biology, and the “semen study” is “about as lame a study as can be imagined. Not only is it a study in which causation is implied by correlation, but to me the evidence of correlation is not even that compelling.”

Orac is right about that. In his second post Orac continues (in response to the authors of the semen paper, who defend Greenfield and suggest they had obtained “more evidence”):

(..)so I was curious about where they had published their “replication.” PubMed has a wonderful feature in which it pops up “related citations” in the right sidebar of any citation you look up. I didn’t recall seeing any related citations presenting confirmatory data for Gallup et al’s study. I searched PubMed using the names of all three authors of the original “semen” study and found no publications regarding the antidepressant properties of semen since the original 2002 study cited by Dr. Greenfield. I found a lot of publications about yawning and mental states, but no followup study or replication of the infamous “semen” study. color me unimpressed” [2](..)

Again, I agree with Orac: the authors didn’t publish any confirmatory data.
But looking at related citations is not a good way to check whether confirmatory studies have been published: PubMed creates this set by comparing words from the title, abstract, and MeSH terms using a word-weighted algorithm. Its goal is mainly to increase serendipity.
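
PubMed’s real related-citations ranking is a probabilistic, word-weighted model, but the underlying point is easy to illustrate with a toy sketch: ranking by word overlap rewards topical similarity, not confirmation. The snippet below (my own illustration with invented placeholder “abstracts”, not PubMed’s actual algorithm) uses TF-IDF cosine similarity to mimic the idea.

```python
# Toy illustration (not PubMed's real algorithm): rank "related" abstracts
# by word-weighted (TF-IDF) cosine similarity to a seed abstract.
# The snippets below are invented placeholders, not real PubMed records.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

seed = "semen exposure and depressive symptoms in female college students"
candidates = [
    "yawning frequency and self-reported mental states in students",
    "condom use, sexual behaviour and mood in young adults",
    "replication study of antidepressant properties of seminal plasma",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([seed] + candidates)

# Similarity of each candidate to the seed abstract
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
for text, score in sorted(zip(candidates, scores), key=lambda x: -x[1]):
    print(f"{score:.2f}  {text}")
```

A candidate can share many topical words with the seed article and still not be a replication, which is exactly why related citations are a serendipity tool rather than a way to find confirmatory studies.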

I didn’t have time to do a proper PubMed search, which should include all kinds of synonyms for sperm and mood/depression. I just checked the papers citing Gallup’s original article in Google Scholar and found 29 hits (indeed no Gallup papers), including various articles by Costa & Brody, e.g. the freely available letter (discussing their research): Greater Frequency of Penile–Vaginal Intercourse Without Condoms is Associated with Better Mental Health. This letter was a response to an opposite finding, by the way.

I didn’t look at the original articles and I don’t really expect much of them. However, it just shows the Gallup study is not the only study linking semen to positive health effects.

Assuming Greenfield had more than a joke in mind, and wanted to reflect on the state of the art of the health aspects of semen, it surprises me that he didn’t dig any further than this article from 2002.

Is it because he really based his editorial on a review in Scientific American from 2010, called “An ode to the many evolved virtues of human semen” [3,4], which describes Gallup’s study and, strikingly, also starts by discussing menstrual synchrony?

Greenfield could have discussed other, better documented properties of semen, like its putative protection from pre-eclampsia (see references in Wikipedia) [5].

Or even better, he could have cited other sexual chemical signals that give you that butterfly feeling, like smell!

Instead of “Gut Feelings” the title could have been “In the Nose of the Beholder” or “The Smell of Love” [6].

And Greenfield could have concluded:

“So there’s more in the air than St. Valentine would have suspected, and now we know there’s a better gift for that day than chocolates: perfume.

And no one would have been bothered, and the paper would have been treated as one usually treats throwaways.

Notes

  1. Coincidentally, while reading Orac’s post I saw a Research Blogging post mentioned in the side bar: masturbation-and-restless-leg-syndrome. …Admittedly, this was a friday-weird-science post and a thorough review of a case study.
  2. It would probably have been easier to check their website with an overview of publications
  3. Mentioned in a comment somewhere, but I can’t track it down.
  4. If Greenfield used Scientific American as a source he should have read it all to the end, where the author states: I bid adieu, please accept, in all sincerity, my humblest apologies for what is likely to be a flood of bad, off-color jokes—men saying, “I’m not a medical doctor, but my testicles are licensed pharmaceutical suppliers” and so on—tracing its origins back to this innocent little article. Ladies, forgive me for what I have done.”
  5. Elmar Veerman has written a review on this topic in 2000 at Kennislink: http://www.kennislink.nl/publicaties/sperma-als-natuurlijke-bescherming (Dutch)
  6. As a matter of fact these are actual titles of scientific papers.




Friday Foolery #39. Peer Review LOL, How to Write a Comment & The Best Rejection Letter Evvah!

15 04 2011

LOL? Peer review?! Comments?

Peer review is never funny, you think.
It is hard to review papers, especially when they are poorly written. From the author’s point of view, it is annoying and frustrating to see a paper rejected on the basis of comments from peer reviewers who either don’t understand the paper or thwart your attempts to get the paper published, for instance because they are competitors in the field.

Still, from a (great) distance the peer review process can be funny… in some respects.

Read for instance a collection of memorable quotes from peer review critiques of the past year in Environmental Microbiology (EM does this each December). Here are some excerpts:

  • Done! Difficult task, I don’t wish to think about constipation and faecal flora during my holidays!
  • This paper is desperate. Please reject it completely and then block the author’s email ID so they can’t use the online system in future.
  • It is sad to see so much enthusiasm and effort go into analyzing a dataset that is just not big enough.
  • The abstract and results read much like a laundry list.
  • .. I would suggest that EM is setting up a fund that pays for the red wine reviewers may need to digest manuscripts like this one.
  • I have to admit that I would have liked to reject this paper because I found the tone in the Reply to the Reviewers so annoying.
  • I started to review this but could not get much past the abstract.
  • This paper is awfully written. There is no adequate objective and no reasonable conclusion. The literature is quoted at random and not in the context of argument…
  • Stating that the study is confirmative is not a good start for the Discussion.
  • I suppose that I should be happy that I don’t have to spend a lot of time reviewing this dreadful paper; however I am depressed that people are performing such bad science.
  • Preliminary and intriguing results that should be published elsewhere.
  • Reject – More holes than my grandad’s string vest!
  • The writing and data presentation are so bad that I had to leave work and go home early and then spend time to wonder what life is about.
  • Very much enjoyed reading this one, and do not have any significant comments. Wish I had thought of this one.
  • This is a long, but excellent report. [...] It hurts me a little to have so little criticism of a manuscript.

More seriously, the Top 20 Reasons (Negative Comments) Written by Reviewers Recommending Rejection of 123 Medical Education Manuscripts can be found in Academic Medicine (vol. 76, no. 9, 2001). The top 5 are:

  1. Statistics: inappropriate, incomplete, or insufficiently described, etc. (11.2%)
  2. Overinterpretation of the results (8.7%)
  3. Inappropriate, suboptimal, insufficiently described instrument (7.3%)
  4. Sample too small or biased (5.6%)
  5. Text difficult to follow, to understand (3.9%)

Neuroskeptic describes 9 types of review decisions in The Wheel of Peer Review. Was your paper reviewed by “Bee-in-your-Bonnet” or by “Cite Me, Me, Me!”?

Rejections are of all times. Perhaps the best rejection letter ever was written by Sir David Brewster, editor of The Edinburgh Journal of Science, to Charles Babbage on July 3, 1821. It is noted in James Gleick’s The Information: A History, a Theory, a Flood.

Excerpt at Marginal Revolution (HT @TwistedBacteria):

The subjects you propose for a series of Mathematical and Metaphysical Essays are so very profound, that there is perhaps not a single subscriber to our Journal who could follow them. 

Responses to a rejection are also of all ages. See this video anno 1945 (yes, this scene has been used tons of times for other purposes).

Need tips?

Read How to Publish a Scientific Comment in 1 2 3 Easy Steps (well, literally 123 steps) by Prof. Rick Trebino. Based on real life. It is hilarious!

PhD Comics made a paper review worksheet (you don’t even have to read the manuscript!) and gives advice on how NOT to address reviewer comments. LOL.

And here is a Sample Cover Letter for Journal Manuscript Resubmissions. Ain’t that easy?

Yet if you are still unsuccessful and want a definitive decision rendered within hours of submission, you can always send your paper to the Journal of Universal Rejection.





Kaleidoscope 2: 2010 wk 31

8 08 2010

Almost a year ago I started a new series, Kaleidoscope, with a “kaleidoscope” of facts, findings, views and news gathered over the preceding 1-2 weeks.
It never got beyond the first edition. Perhaps the introduction of this Kaleidoscope was too overwhelming & dazzling: let’s say it was very rich in content. Or as
Andrew Spong tweeted: “Part cornucopia, part cabinet of wonders, it’s @laikas Kaleidoscope 2009 wk 47″

This is a reprise in a (somewhat) “shorter” format. Let’s see how it turns out.

This edition will concentrate on social media (blogging, Twitter, Google Wave). I fear that I won’t keep that promise if I deal with more topics.

Medical Grand Rounds and News from the Blogosphere

Life in the Fast Lane is the host of this week’s Grand Rounds. This edition is truly terrific, if not terrifying. Not only does it contain “killer posts”, each medblogger has also been coupled to his or her preferred deadly Aussie critter.
Want to know how a full-time ER doctor/educator/textbook author/blogger/editor/health search engine director manages to complete work-related tasks… when the kids are either at school or asleep(!)? Then read this recent interview with Mike Cadogan, the founder of Life in the Fast Lane.

Don’t forget to submit your medical blog post to next week’s Grand Rounds over at Dispatch From Second Base. Instructions and theme details can be found in the post “You are invited to Grand Rounds!“ (update here).

And certainly don’t forget to submit your post related to medical information to the MedLibs Round here. More details can be found at Laika’s MedLibLog and at Highlight Health, the host of the upcoming edition.
(Sorry, writing this post took longer than I thought: you have one day left for submission.)

Dr Shock of the blog with the same name advises us to submit good quality, easy-to-understand posts dealing with science, environment or medicine to Scientia Pro Publica via the blog carnival submission form.

There is a new online science blogging community, Scientopia, so far mostly consisting of bloggers who left ScienceBlogs after (but not because of) Pepsigate. New members can only be added to the collective by invitation (?). Obviously, Pepsi researchers will not be invited, but it remains to be seen who will… Hopefully it doesn’t become an elitist club.
Virginia Heffernan (NY Times) has an outspoken opinion about the (ex-)ScienceBloggers, illustrated by this one-liner:

“ScienceBlogs has become Fox News for the religion-baiting, peak-oil crowd.”

Although I don’t appreciate the ranting style of some of the blogs myself (the sub-“South Park” blasphemy style of PZ Myers, as Virginia puts it), I don’t think most ScienceBlogs deserve to be labelled “preoccupied with trivia, name-calling and saber rattling”.
See balanced responses at Neurodojo, Neuron Culture & Neuroanthropology (anything with neuro- makes sense, I guess).
Want to understand more about ScienceBlogs and why it was such a terrific community? Then read Bora Z’s (rather long) ScienceBlogs farewell post.

Oh… and there is yet another new science blogging platform, http://www.labspaces.net/, which has evolved from a science news aggregator. It looks slick.

Social Media

Speaking of Twitter, did you know that Twitter reached its 20 billionth tweet over the weekend, a milestone that came just a few months after hitting the 10 billion tweet mark!? (Read more in the Guardian.)

Well and if you have no idea WHAT THE FUCK IS MY SOCIAL MEDIA “STRATEGY”? you might click the link to get some (new) ideas. You probably need to refresh the site a couple of times to find the right answer.

First-year medical school and master’s of medicine students at Stanford University will receive an iPad at the start of the year. The extremely tech-savvy students do appreciate the gift:

“Especially in medicine, we’re using so many different resources, including all the syllabuses and slides. I’m able to pull them up and search them whenever I need to. It’s a fantastic idea.”

Good news for Facebook friends: VoIP giant Vonage has just introduced a new iPhone, iPod touch and Android app that allows users to call their Facebook friends for free (Mashable).

It was a shock – or wasn’t it – that Google pulled the plug on Google Wave (RRW) after it had been available to the general public for only 78 days. The unparalleled tool “could change the web”, but was too complex to be understood. Here are some thoughts on why Google Wave failed. Since much of the code is open source, ambitious developers may pick up where Google left off.

Votes down for the social media site Digg.com: an undercover investigation has exposed that a group of influential conservative members were involved in censorship, deliberately trying to bury progressives by voting them down, which effectively means these progressives don’t get enough “diggs” to reach the front page, where most users spend their time.

Votes up for Healthcare Social Media Europe (#HCSMEU), which just celebrated its first birthday.

Miscellanous

A very strange move: a journal changed the stated conclusion of a previously published paper after a Reuters Health story about serious shortcomings in the report. Read more about it at Gary Schwitzer’s HealthNewsReview Blog.

Finally, for the EBM addicts among us: the Centre for Evidence-Based Medicine released a new (downloadable) Levels of Evidence table. At the CEBM blog they stress that hierarchies of evidence have been used somewhat inflexibly, but are essentially a heuristic, or shortcut, for finding the likely best evidence. At first sight the new table looks simpler and easier to use.

Are you a Twitter user? Tweet this!





MedLibs Round 2.6

11 07 2010

Welcome to this month’s edition of MedLib’s Round, a blog carnival of “excellent blog posts in the field of medical information”.

This round is a little belated because of late submissions and my absence earlier this week.
But let’s wait no longer…!

Peer Review, Impact Factors & Conflict of Interest

Walter Jessen at Highlight HEALTH writes about the NIH peer review process. Included is an interesting video that provides an inside look at how scientists from across the US review NIH grant applications for scientific and technical merit. These scientists do seem to take their job seriously.

But what about peer review of scientific papers? Richard Smith, doctor, former editor of the BMJ and a proponent of open access publishing, wrote a controversial post at the BMJ Group Blogs called “scrap peer review and beware of ‘top journals’”. Indeed, the “top journals” publish the sexy stuff, whereas evidence comprises both the glamorous and the unglamorous. But is prepublication peer review really that bad, and should we only filter afterwards?

In a thoughtful post at his Nature blog Confessions of a (former) Lab Rat, another Richard (Grant) argues that although peer review suffers terribly from several shortcomings, it is still required. Richard Grant also clears up one misconception:

Peer review, done properly, might guarantee that work is done correctly and to the best of our ability and best intentions, but it will not tell you if a particular finding is right–that’s the job of other experimenters everywhere; to repeat the experiments and to build on them.

At Scholarly Kitchen (about what is hot and cooking in scholarly publishing) they don’t think peer review is a clear concept, since the list of ingredients differs per journal and article. Read their critical analysis and suggestions for improvement of the standard recipe here.

The science blogosphere was buzzing in outrage about the addition of a corporate nutrition blog sponsored by PepsiCo to ScienceBlogs (e.g. see this post at the Guardian Science Blog). ScienceBlogs is the platform of eminent science bloggers, like Orac, Pharyngula and Molecule of the Day. After some bloggers left ScienceBlogs and others threatened to do so, the PepsiCo blog was withdrawn.

An interesting view is presented by David Crotty at Scholarly Kitchen. He states that it is “hypocritical for ScienceBlog’s bloggers to have objected so strenuously: ScienceBlogs has never been a temple of purity, free of bias or agenda.” Furthermore the bloggers enjoy more traffic and a fee for being a scienceblogger, and promote their “own business” too. David finds it particularly ironic that these complaints come from the science blogosphere, which has regularly been a bastion of support for the post-publication review philosophy. Read more here.

Indeed, according to a note from ScienceBlogs at the now-removed blog, their intention was “to engage industry in pursuit of science-driven social change”, although this was clearly not the right way to do it.

The partiality of business, including pharma, makes its presence in and use of social media somewhat tricky. Still, it is important for pharma to get involved in web 2.0. Interested in a discussion on this topic? Then follow the tags #HCSM (HealthCare Social Media) and #HCSMEU (Europe) on Twitter.
Andrew Spong has launched an open wiki where you can read all about #HCSMEU.

The value of journal impact factors is also debatable. In the third part of the series “Show me the evidence” Kathleen Crea at EBM and Clinical Support Librarians @ UCHC starts with an excerpt of an article with the intriguing title “The Top-Ten in Journal Impact Factor Manipulation”:

The assumption that Impact Factor (IF) is a number absolutely proportional to science quality has led to misuses beyond the index’s original scope, even in the opinion of its devisor.”

The post itself (Teaching & Learning in Medicine, Research Methodology, Biostatistics: Show Me the Evidence (Part 3b)) is not so much about evidence, but offers a wealth of information about journal impact factors, comparisons of sites for citation analysis, and some educational materials for teaching others about citation analysis. Not only are Journal Citation Reports and SCOPUS discussed, but also the Eigenfactor, h-index and JANE.

Perhaps we need another system of publishing and peer review? Will the future be to publish triplets and have them peer reviewed via Twitter by as many reviewers as possible? Read about this proposal by Barend Mons (of the same group that created JANE) at this blog. Here you can also find a critical review of an article comparing Google Scholar and PubMed for retrieving evidence.

Social Media, Blogs & Web 2.0 tools

There are several tools to manage scientific articles, like CiteULike and Mendeley. At his blog Gobbledygook, Martin Fenner discusses the pros and cons of a new web-based tool specifically for discussing papers in journal clubs: JournalFire.

At The Health Informaticists they found an interesting new feature of Skype: screen sharing. Here you can read all about it.

Andrew Spong explains at his blog STweM how to create a PDF archive of hashtagged tweets using whatthehashtag?! and Google Docs, Scribd or Slideshare. A tweet archive is very useful for live-tweet or stream sessions at conferences (each tweet is then labeled with a # or hashtag, but tweets are lost after a few days if not archived).

At Cool Toy of the Day, Patricia Anderson posts a lot about healthcare tools. She submitted “Cool Toys Pic of the day – Eyewriter”, a tool that allows persons with ALS and paralysis to draw artwork with their eyes. But you can find a lot more posts worth reading at this blog and at her main blog, Emerging Technologies Librarian.

Heidi Allen at Heidi Allen Digital Strategy started a discussion on the meaning of social medicine for physicians. The link to the original submission doesn’t work right now, but if you follow this link you will see several posts on social medicine, including “Physicians in Social Media”, where 3 well-known physicians give their view on the meaning of social medicine.

Dr Shock at Dr Shock MD PhD wonders whether “the information on postpartum depression in popular lay magazines correspond to scientific knowledge”. Would it surprise you that this is not the case for many articles on this topic?

The post by Guus van den Brekel at DigiCMB with the inspiring title Discovering new seas of knowledge is partly about the seas of knowledge gained at the EAHIL2010 (European Association for Health Information and Libraries) meeting, with an overview of many sessions and materials where possible. And I should stress where possible, because the other part of the post is about the difficulty of obtaining access to this sea of knowledge. Guus wonders:

In this age of Open Access, web 2.0 and the expectancy of the “users” -being us librarians (…) one would assume that much (if not all) is freely available via Conferences websites and/or social media. Why then do I find it hard to find the extra info about those events, including papers and slides and possibly even webcasts? Are we still not into the share-mode and overprotective to one’s own achievements(….)

Guus makes a good point, especially in this era, when not all of us are able to go and visit faraway places. Luckily we have Guus, who did a good job of compiling as much material as possible.

Wondering about the evidence for the usefulness of web 2.0? Then have a look at this excellent wiki by Dean Giustini: http://hlwiki.slais.ubc.ca/index.php/Evidence-based_web_2.0.
The Health Librarianship Wiki Canada (the mother wiki) has a great new design and is a very rich source of information for medical librarians.

Another good source for recent peer reviewed papers about using social media in medicine and healthcare is a new series by Bertalan Mesko at Science Roll. First it was called Evidence Based Social Media News and now Social media journal club.

EHR and the clinical librarian.

Nikki Dettmar presents two posts on Electronic Health Records at Eagledawg.net, inspired by a recent Medical Library Association meeting that included a lot about electronic health records (EHRs). In the first part “Electronic Health Records: Not All About the Machine” she mentions the launch of an OpenNotes study that “evaluates the impact on both patients and physicians of sharing, through online medical record portals, the comments and observations made by physicians after each patient encounter.” The second post is entitled “a snapshot of ephemeral chaos“. And yes the title says it all.

Bertalan Mesko at Science Roll describes a tryout by a cardiology resident and research fellow in Google Wave to see whether that platform is suitable for creating a database of the electronic records of a virtual patient. The database looks fine at first glance, but is it safe?

Alisha764’s Blog celebrated its one-year anniversary in February. Alisha Miles’ aim for the next year is not only to post more but to focus on hospital libraries, including her experience as a hospital librarian. Excellent idea, Alisha! I liked the post Rounding: A solo medical librarian’s perspective, with several practical tips for joining rounds as a librarian. I hope you can find time to write more like this, Alisha!

Our next host is Walter Jessen at Highlight HEALTH. You can already start submitting the link to a (relevant) post you have written here.

See the MedLibs Archive for more information.






Will Nano-Publications & Triplets Replace The Classic Journal Articles?

23 06 2010

ResearchBlogging.org
“Libraries and journal articles as we know them will cease to exist,” said Barend Mons at the symposium in honor of our library’s 25th anniversary (June 3rd). “Possibly we will have another kind of party in another 25 years”…. he continued, grinning.

What he had to say in the next half hour intrigued me. And although I had no pen with me (it was our party, remember), I thought it was interesting enough to devote a post to it.

I’m basing this post not only on my memory (we had a lot of Italian wine at the buffet), but also on an article Mons referred to [1], a Dutch newspaper article [2], other articles [3-6] and PowerPoints [7-9] on the topic.

This is a field I know little about, so I will try to keep it simple (also for my sake).

Mons started by touching on a problem that is very familiar to doctors, scientists and librarians: information overload from a growing web of linked data. He showed a picture that looked like the one at the right (though I’m sure those are Twitter networks).

As he said elsewhere [3]:

(..) the feeling that we are drowning in information is widespread (..) we often feel that we have no satisfactory mechanisms in place to make sense of the data generated at such a daunting speed. Some pharmaceutical companies are apparently seriously considering refraining from performing any further genome-wide association studies (… whole genome association –…) as the world is likely to produce many more data than these companies will ever be able to analyze with currently available methods .

With the current search engines we have to do a lot of digging to get the answers [8]. Computers are central to this digging, because there is no way people can stay updated, even in their own field.

However, computers can’t deal with the current web and with scientific information as produced in classic articles (even the electronic versions), for the following reasons:

  1. Homonyms. Words that sound alike or are spelled the same but have a different meaning. Acronyms are notorious in this respect. Barend gave PSA as an example, but, without realizing it, he used a better example: PPI. This means Proton Pump Inhibitor to me, but apparently Protein-Protein Interaction to him.
  2. Redundancy. To keep journal articles readable we often use different words to denote the same. These do not add to the real new findings in a paper. In fact the majority of digital information is duplicated repeatedly. For example “Mosquitoes transfer malaria”, is a factual statement repeated in many consecutive papers on the subject.
  3. The connection between words is not immediately clear (for a computer). For instance, anti-TNF inhibitors can be used to treat skin disorders, but the same drugs can also cause it.
  4. Data are not structured beforehand.
  5. Weight: some “facts” are “harder” than others.
  6. Not all data are available or accessible. Many data are either not published (e.g. negative studies), not freely available or not easy to find.  Some portals (GoPubmed, NCBI) provide structural information (fields, including keywords), but do not enable searching full text.
  7. Data are spread. Data are kept in “data silos” not meant for sharing [8](ppt2). One would like to simultaneously query 1000 databases, but this would require semantic web standards for publishing, sharing and querying knowledge from diverse sources…..

In a nutshell, the problem is as Barend put it: “Why bury data first and then mine it again?” [9]

Homonyms, redundancy and connection can be tackled, at least in the field Barend is working in (bioinformatics).

Different terms denoting the same concept (i.e. synonyms) can be mapped to a single concept identifier (i.e. a list of synonyms), whereas identical terms used to indicate different concepts (i.e. homonyms) can be resolved by a disambiguation algorithm.
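
As a toy illustration of what such mapping and disambiguation might look like (the concept identifiers and context vocabularies below are invented, not taken from Peregrine or any real thesaurus):

```python
# Toy sketch of concept mapping: synonyms map to one concept identifier,
# and an ambiguous term (homonym) is resolved by looking at context words.
# All identifiers and context vocabularies here are invented examples.
SYNONYMS = {
    "estrogen receptor 1": "CONCEPT:ESR1",
    "esr1": "CONCEPT:ESR1",
    "eralpha": "CONCEPT:ESR1",
}

HOMONYMS = {
    # "PPI" can mean different concepts depending on the surrounding text
    "ppi": {
        "CONCEPT:ProtonPumpInhibitor": {"omeprazole", "gastric", "reflux"},
        "CONCEPT:ProteinProteinInteraction": {"binding", "interactome", "protein"},
    }
}

def to_concept(term, context):
    """Map a term to a concept identifier, using context words for homonyms."""
    term = term.lower()
    if term in SYNONYMS:
        return SYNONYMS[term]
    if term in HOMONYMS:
        # pick the sense whose context vocabulary overlaps most with the text
        senses = HOMONYMS[term]
        return max(senses, key=lambda c: len(senses[c] & context))
    return f"CONCEPT:UNKNOWN:{term}"

print(to_concept("ERalpha", set()))                         # CONCEPT:ESR1
print(to_concept("PPI", {"protein", "binding", "assay"}))   # protein-protein interaction sense
print(to_concept("PPI", {"omeprazole", "reflux", "dose"}))  # proton pump inhibitor sense
```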

The shortest meaningful sentence is a triplet: a combination of subject, predicate and object. A triplet indicates the connection and its direction. “Mosquitoes cause/transfer malaria” is such a triplet, where mosquitoes and malaria are concepts. In the field of proteins, “UNIPROT 05067 is a protein” is a triplet (where UNIPROT 05067 and protein are concepts), as are “UNIprotein 05067 is located in the membrane” and “UNIprotein 0506 interacts with UNIprotein 0506″ [8]. Since these triplets (statements) derive from different databases, consistent naming and availability of information are crucial to find them. Barend and colleagues are the people behind WikiProteins, an open, collaborative wiki focusing on proteins and their role in biology and medicine [4-6].

Concepts and triplets are widely accepted in the world of bioinformatics. To get an idea of what this means for searching, see the search engine Quertle, which allows semantic search of PubMed & full-text biomedical literature with automatic extraction of key concepts. Searching for ESR1 $BiologicalProcess will find abstracts mentioning all kinds of processes in which ESR1 (aka ERα, ERalpha, Estrogen Receptor 1) is involved. The search can be refined by choosing ‘narrower terms’ like “proliferation” or “transcription”.

The new aspect is that Mons wants to turn those triplets into (what he calls) nano-publications. Because not every statement is equally ‘hard’, nano-publications are weighted by assigning numbers from 0 (uncertain) to 1 (very certain). The nano-publication “mosquitoes transfer malaria” will get a number approaching 1.
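
To make the idea concrete, a nano-publication can be thought of as a triplet plus a certainty weight and provenance. The sketch below is my own minimal rendering of that idea; the field names and example values are illustrative, not Mons’ actual data model:

```python
# Minimal sketch of a nano-publication: a subject-predicate-object triplet
# annotated with provenance and a certainty weight between 0 and 1.
# Field names and example values are illustrative, not an official schema.
from dataclasses import dataclass

@dataclass
class NanoPublication:
    subject: str      # concept identifier
    predicate: str    # relation between the concepts
    obj: str          # concept identifier
    certainty: float  # 0 = uncertain, 1 = very certain
    source: str       # citation of the original finding

npub = NanoPublication(
    subject="CONCEPT:Anopheles_mosquito",
    predicate="transmits",
    obj="CONCEPT:Malaria",
    certainty=0.99,   # a well-established statement approaches 1
    source="PMID:0000000 (placeholder citation)",
)

print(f"{npub.subject} --{npub.predicate}--> {npub.obj} "
      f"(certainty {npub.certainty}, from {npub.source})")
```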

Such nano-publications offer little room for nuance, interpretation and discussion. Mons does not propose to entirely replace traditional articles with nano-publications. Quote [3]:

While arguing that research results should be available in the form of nano-publications, [we] are emphatically not saying that traditional, classical papers should not be published any longer. But their role is now chiefly for the official record, the “minutes of science”, and not so much as the principle medium for the exchange of scientific results. That exchange, which increasingly needs the assistance of computers to be done properly and comprehensively, is best done with machine-readable, semantically consistent nano-publications.

According to Mons, authors and their funders should start requesting and expecting the papers that they have written and funded to be semantically coded when published, preferably by the publisher and otherwise by libraries: the technology exists to provide Web browsers with the functionality for users to identify nano-publications, and annotate them.

Like the wikiprotein-wiki, nano-publications will be entirely open access. It will suffice to properly cite the original finding/publication.

In addition, there is a new kind of “peer review”. An expert network is set up to immediately assess a twittered nano-publication when it comes out, so that the publication is assessed by perhaps 1000 experts instead of 2 or 3 reviewers.

On a small scale this is already happening. Nano-publications are sent as tweets to people like Gert Jan van Ommen (past president of HUGO and co-author of 5 of my publications, or vice versa), who then gives a red (don’t believe) or a green light (believe) with one click on his BlackBerry.

As Mons put it, it looks like a subjective event, quite similar to “dislike” and “like” on social media platforms like Facebook.

Barend often referred to a PLoS ONE paper by van Haagen et al. [1], showing the superiority of the concept-profile based approach not only in detecting explicitly described PPIs, but also in inferring new PPIs.

[You can skip the part below if you're not interested in details of this paper]

Van Haagen et al. first established a set of 61,807 known human PPIs and of many more probable Non-Interacting Protein Pairs (NIPPs) from online human-curated databases (and NIPPs also from the IntAct database).

For the concept-based approach they used the concept-recognition software Peregrine, which includes synonyms and spelling variations  of concepts and uses simple heuristics to resolve homonyms.

This concept-profile based approach was compared with several other approaches, all depending on co-occurrence (of words or concepts):

  • Word-based direct relation. This approach uses direct PubMed queries (words) to detect if proteins co-occur in the same abstract (thus the names of two proteins are combined with the boolean ‘AND’). This is the simplest approach and represents how biologists might use PubMed to search for information.
  • Concept-based direct relation (CDR). This approach uses concept-recognition software to find PPIs, taking synonyms into account, and resolving homonyms. Here two concepts (h.l. two proteins) are detected if they co-occur in the same abstract.
  • STRING. The STRING database contains a text mining score which is based on direct co-occurrences in literature.

The results show that, using concept profiles, 43% of the known PPIs were detected, with a specificity of 99%, and 66% of all known PPIs with a specificity of 95%. In contrast, the direct relations methods and STRING show much lower scores:

                            Word-based   CDR    Concept profiles   STRING
Sensitivity at spec = 99%      28%       37%          43%           39%
Sensitivity at spec = 95%      33%       41%          66%           41%
Area under Curve               0.62      0.69         0.90          0.69
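
For readers unfamiliar with these metrics: “sensitivity at spec = 99%” is simply the fraction of known PPIs recovered while keeping the false positive rate at or below 1%. Here is a small sketch (with made-up scores and labels, not the study’s data) of how such numbers and the area under the ROC curve can be computed from predicted similarity scores and known interaction labels:

```python
# Sketch of the evaluation behind the table: sensitivity at a fixed
# specificity and the area under the ROC curve, computed from predicted
# similarity scores and known interaction labels. Data here are made up.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)   # 1 = known PPI, 0 = non-interacting pair
y_score = y_true * rng.normal(1.0, 1.0, 1000) + rng.normal(0.0, 1.0, 1000)

def sensitivity_at_specificity(y_true, y_score, spec):
    fpr, tpr, _ = roc_curve(y_true, y_score)
    # best sensitivity while the false positive rate stays at or below 1 - spec
    return tpr[fpr <= (1 - spec)].max()

print("Sensitivity at spec = 99%:", round(sensitivity_at_specificity(y_true, y_score, 0.99), 2))
print("Sensitivity at spec = 95%:", round(sensitivity_at_specificity(y_true, y_score, 0.95), 2))
print("Area under curve:         ", round(roc_auc_score(y_true, y_score), 2))
```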

These findings suggest that not all proteins with high similarity scores are known to interact, but they may be related in another way: e.g. they could be involved in the same pathway or be part of the same protein complex without physically interacting. Indeed, concept-based profiling was superior in predicting relationships between proteins potentially present in the same complex or pathway (thus A-C inferred from co-occurring protein pairs A-B and B-C).

Since there is often a substantial time lag between the first publication of a finding and the time the PPI is entered in a database, a retrospective study was performed to examine how many of the PPIs that would have been predicted by the different methods in 2005 were confirmed in 2007. Indeed, using concept profiles, PPIs could be efficiently predicted before they entered PPI databases and before their interaction was explicitly described in the literature.

The practical value of the method for the discovery of novel PPIs is illustrated by the experimental confirmation of the inferred physical interaction between CAPN3 and PARVB, which was based on the frequent co-occurrence of both proteins with concepts like Z-disc, dysferlin, and alpha-actinin. The predicted relationships between proteins are broader than PPIs and include proteins in the same complex or pathway. Depending on the type of relationships deemed useful, the precision of the method can be as high as 90%.
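
The inference step (relating two proteins because both are strongly associated with the same intermediate concepts, even if they are never mentioned together) can be illustrated with a toy concept-profile comparison. The weights below are invented, and the real method uses a more sophisticated weighting of concept associations:

```python
# Toy illustration of concept-profile matching: two proteins that are never
# mentioned together can still get a high similarity score if they are both
# strongly associated with the same concepts. Weights are invented.
import numpy as np

concepts = ["Z-disc", "dysferlin", "alpha-actinin", "apoptosis", "kinase"]

profiles = {
    # association weight of each protein with each concept above
    "CAPN3": np.array([0.8, 0.9, 0.7, 0.1, 0.2]),
    "PARVB": np.array([0.7, 0.6, 0.8, 0.0, 0.1]),
    "TP53":  np.array([0.0, 0.0, 0.1, 0.9, 0.3]),
}

def cosine(a, b):
    """Cosine similarity between two concept-profile vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print("CAPN3 vs PARVB:", round(cosine(profiles["CAPN3"], profiles["PARVB"]), 2))  # high: shared concepts
print("CAPN3 vs TP53: ", round(cosine(profiles["CAPN3"], profiles["TP53"]), 2))   # low: different context
```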

In line with their open access policy, they have made the full set of predicted interactions available in a downloadable matrix and through the webtool Nermal, which lists the most likely interaction partners for a given protein.

According to Mons, this framework will be a very rich source for new discoveries, as it will enable scientists to prioritize potential interaction partners for further testing.

Barend Mons started with the statement that nano-publications will replace classic articles (and the need for libraries). However, things are never as black as they seem.
Mons showed that a nano-publication is basically a “peer-reviewed, openly available” triplet. Triplets can be effectively retrieved and inferred from available databases/papers using a concept-based approach.
Nevertheless, their effectiveness needs to be enhanced by semantically coding triplets when they are published.

What will this mean for clinical medicine? Bioinformatics is quite another discipline, with better structured and more straightforward data (interaction, identity, place). Interestingly, Mons and van Haagen plan to do further studies, in which they will evaluate whether the use of concept profiles can also be applied to the prediction of other types of relations, for instance between drugs or genes and diseases. The future will tell whether the above-mentioned approach is also useful in clinical medicine.

Implementation of the following (implicit) recommendations would be advisable, independent of the possible success of nano-publications:

  • Less emphasis on “publish or perish” (thus more on the data themselves, whether positive, negative, trendy or not)
  • Better structured data, partly by structuring articles. This has already improved over the years by introducing structured abstracts, availability of extra material (appendices, data) online and by guidelines, such as STARD (The Standards for Reporting of Diagnostic Accuracy)
  • Open Access
  • Availability of full text
  • Availability of raw data

One might argue that disclosing data is unlikely when pharma is involved. It is very hopeful, therefore, that a group of major pharmaceutical companies has announced that they will share pooled data from failed clinical trials in an attempt to figure out what is going wrong in the studies and what can be done to improve drug development [10].

Unfortunately I don’t have Mons’ presentation at my disposal. Therefore, here are two other presentations about triplets, concepts and the semantic web.

[Two embedded slide presentations]

References

  1. van Haagen HH, ‘t Hoen PA, Botelho Bovo A, de Morrée A, van Mulligen EM, Chichester C, Kors JA, den Dunnen JT, van Ommen GJ, van der Maarel SM, Kern VM, Mons B, & Schuemie MJ (2009). Novel protein-protein interactions inferred from literature context. PloS one, 4 (11) PMID: 19924298
  2. Twitteren voor de wetenschap (Twittering for Science), Maartje Bakker, Volkskrant (2010-06-05)
  3. Barend Mons and Jan Velterop (?) Nano-Publication in the e-science era (Concept Web Alliance, Netherlands BioInformatics Centre, Leiden University Medical Center.) http://www.nbic.nl/uploads/media/Nano-Publication_BarendMons-JanVelterop.pdf, assessed June 20th, 2010.
  4. Mons, B., Ashburner, M., Chichester, C., van Mulligen, E., Weeber, M., den Dunnen, J., van Ommen, G., Musen, M., Cockerill, M., Hermjakob, H., Mons, A., Packer, A., Pacheco, R., Lewis, S., Berkeley, A., Melton, W., Barris, N., Wales, J., Meijssen, G., Moeller, E., Roes, P., Borner, K., & Bairoch, A. (2008). Calling on a million minds for community annotation in WikiProteins Genome Biology, 9 (5) DOI: 10.1186/gb-2008-9-5-r89
  5. Science Daily (2008/05/08) Large-Scale Community Protein Annotation — WikiProteins
  6. Boing Boing: (2008/05/28) WikiProteins: a collaborative space for biologists to annotate proteins
  7. (ppt1) SWAT4LS 2009: Semantic Web Applications and Tools for Life Sciences, http://www.swat4ls.org/, Amsterdam, Science Park, Friday, 20th of November 2009
  8. (ppt2) Michel Dumontier: triples for the people scientists liberating biological knowledge with the semantic web
  9. (ppt3, only slide shown): Bibliography 2.0: A citeulike case study from the Wellcome Trust Genome Campus – by Duncan Hill (EMBL-EBI)
  10. WSJ (2010/06/11) Drug Makers Will Share Data From Failed Alzheimer’s Trials




The Trouble with Wikipedia as a Source for Medical Information

14 09 2009

This post was chosen as an Editor's Selection for ResearchBlogging.org

Do you ever use Wikipedia? I do, and so do many other people. It is free, easy to use, and covers many subjects.

But do you ever use Wikipedia to look up scientific or medical information? Probably everyone does so once in a while. Dave Munger (Researchblogging) concluded a discussion on Twitter as follows:


“Wikipedia’s information quality is better than any encyclopedia, online or off. And, yes, it’s also easy to use”.

Wikipedia is an admirable initiative. It is a large online collaborative, multilingual encyclopedia written by contributors around the world.
But the key question is whether you can rely on Wikipedia as the sole source for medical, scientific or even popular information.

Well, you simply can’t, and here are a few examples and findings to substantiate this point.

RANKING AND USE

E-patients

When you search for diabetes in Google (EN), Wikipedia’s entry about diabetes ranks second, below the American Diabetes Association Home Page. A recent study published in the J Am Med Inform Assoc [1] confirms what you would expect: the English Wikipedia is a prominent source of online health information. Wikipedia ranked among the first ten results for more than 70% of the search engines and health keywords tested, and outranked other sources for rare-disease-related keywords. Wikipedia’s articles were viewed more frequently than the corresponding MedlinePlus topic pages. This corroborates another study, which can be downloaded from the internet here [10]. This study by Envision Solutions, LLC, licensed under a Creative Commons License, concluded that Internet users’ exposure to health-related user-generated media (UGM) is significant, with Wikipedia being the most-referenced resource on Google and Yahoo.

The following (also from envisionsolutionsnow.com, from 2007 [10]) illustrates the impact of this finding:

According to the Pew Internet & American Life Project*, 10 million US adults search online for information on health each day [1]. Most (66%) begin their research on a search engine like Yahoo or Google. In addition, Americans are saying that the information they find on the Internet is having an impact. According to Pew, “53% of health seekers report that their most recent health information session [influenced] how they take care of themselves or care for someone else.” In addition, 56% say the information they find online has boosted their confidence in their healthcare decision-making abilities.

And according to an update from the Pew Internet & American Life Project (2009) [11]:

In 2000, 46% of American adults had access to the internet, 5% of U.S. households had broadband connections, and 25% of American adults looked online for health information. Now, 74% of American adults go online, 57% of American households have broadband connections, and 61% of adults look online for health information.

Thus a lot of people look online for health care questions and are more inclined to use highly ranked sources.
This is not unique to health topics but is a general phenomenon; see for example this mini-study performed by a curious individual: 96.6% of Wikipedia Pages Rank in Google’s Top 10 [12]. The extremely high traffic to Wikipedia due to search referrals has even been denounced by SEO people (see here) [13]: if you type “holiday”, Wikipedia provides little value when ranking in the top 10: everybody knows what a holiday is ;)

Medical students use it too.

A nightmare for most educators in the curriculum is that students rely on UGM or Web 2.0 sites as a source of medical information. Just walk past medical students as they work at their computers and take a quick glance at the pages they are consulting. These webpages often belong to the above category.

AnneMarie Cunningham, GP and Clinical Lecturer in the UK, did a little informal “survey” on the subject. She asked 31 first-year medical students about their early clinical attachments in primary and secondary care and summarized the results on her blog Wishful Thinking in Medical Education [14]. By far the most common choice for looking up unfamiliar clinical topics was Wikipedia.

AnneMarie:

‘Many students said I know I shouldn’t but….’ and then qualified that they used Wikipedia first because it was easy to understand, they felt it was reasonably reliable, and accessible. One student used it to search directly from her phone when on placement..

50% of the doctors use it!

But these are only medical students. Practicing doctors won’t use Wikipedia to solve their clinical questions, because they know where to find reliable medical information.

Wrong!

The New Scientist cites a report [15] from the US healthcare consultancy Manhattan Research (April 2009), stating that 50 percent of doctors turn to Wikipedia for medical information.

A recent qualitative study published in Int J Med Inform [2] examined “Web 2.0″ use by 35 junior physicians in the UK, using diaries and interviews encompassing 177 days of internet use, or 444 search incidents, analyzed via thematic analysis. Although the concepts are loosely defined (Web 2.0, internet and UGM are not properly distinguished; Google, for instance, is counted as a Web 2.0 tool (!), see Annemarie’s critical review [16]), the results clearly show that 89% of these young physicians use at least one “Web 2.0 tool” (including Google!) in their medical practice, with 80% (28/35) reporting the use of wikis. The visits to wikis are largely accounted for by visits to Wikipedia: it was the second most commonly visited site, used in 26% (115/444) of cases and by 70% (25/35) of all physicians. Notably, only one respondent made regular contributions to a medical wiki site.

The main motivations for using the Internet for information seeking were its accessibility and ease of use compared with other tools (like textbooks), its up-to-dateness, its broad coverage and extras such as interactive images. On the other hand, most clinicians realized that there were limitations to the quality or usefulness of the information found. It is reassuring that most doctors used UGM like Wikipedia for background or open questions, to fulfill the need for more in-depth knowledge on a subject, or to find information for patients, not for the immediate solving of clinical questions.

The Int J Med Inform article has been widely covered by blogs: i.e. see Wishful Thinking in Medical Education [16], Dr Shock, MD, PhD [17], Life in the Fast Lane [18], Clinical Cases and Images Blog [19] and Scienceroll [20].

Apparently some doctors rely so heavily on Wikipedia that they refer to Wikipedia articles in publications (see the Int J Cardiol PubMed [3] abstract below)!

[Screenshot: Wikipedia references cited in an Int J Cardiol article]

WHY WIKIPEDIA IS NOT (YET) A TRUSTWORTHY AND HIGH QUALITY HEALTH SITE

Whether the common use of Wikipedia by e-patients, medical students and doctors is disadvantageous depends on the quality and trustworthiness of the Wikipedia articles, which in turn depend on who writes them.

Basically, the strength of Wikipedia is its weakness: anyone can write anything on any subject, and anyone can edit it, anonymously.

Negative aspects include its coverage (the choice of subjects, but also the depth of coverage), the “overlinking”, the sometimes frustrating interactions between authors and editors, regularly leading to (often polite) “revision wars“, but above all the lack of ‘expert’ authors or peer review. This may result in incomplete, wrong or distorted information.

Positive aspects are its accessibility, currency, availability in many languages, and the collective “authorship” (which is an admirable concept).

The following humorous video shows how the wisdom of the crowds can lead to chaos and to incorrect and variable information.

SCOPE AND ACCURACY (what has been covered, how deeply and how well):

Too much, too little, too ….

With respect to its coverage one study in the Journal of Computer-Mediated Communication (2008) [4] concludes:

Differences in the interests and attention of Wikipedia’s editors mean that some areas, in the traditional sciences, for example, are better covered than others. (…)
Overall, we found that the degree to which Wikipedia is lacking depends heavily on one’s perspective. Even in the least covered areas, because of its sheer size, Wikipedia does well, but since a collection that is meant to represent general knowledge is likely to be judged by the areas in which it is weakest, it is important to identify these areas and determine why they are not more fully elaborated. It cannot be a coincidence that two areas that are particularly lacking on Wikipedia—law and medicine—are also the purview of licensed experts.

It is not unexpected though that Wikipedia’s topical coverage is driven by the interests of its users.

Sometimes data are added to Wikipedia that are in themselves correct, but controversial. Recently, Wikipedia published the 10 inkblots (Scienceroll, [21]) of the Rorschach test, along with common responses for each. This has led to complaints by psychologists, who argue that the site is jeopardizing one of the oldest continuously used psychological assessment tests (NY Times [22]).

The actual coverage of medical subjects may vary greatly. In one study [5] (abstract format, 2007), Wikipedia entries were screened for the most commonly performed inpatient surgical procedures in the U.S. Of the 39 procedures, 35 were indexed on Wikipedia. 85.7% of these articles were deemed appropriate for patients. All 35 articles presented accurate content, although only 62.9% (n=22) were free of critical omissions. Risks of the procedures were significantly underreported. There was a correlation between an entry’s quality and how often it was edited.

Wikipedia may be even less suitable for drug information questions, which one-third of all Internet health-seekers search for. A study in the Annals of Pharmacotherapy [6] comparing the scope, completeness, and accuracy of drug information in Wikipedia with a free, online, traditionally edited database (Medscape Drug Reference [MDR]) showed that Wikipedia answered significantly fewer drug information questions (40.0%) than MDR (82.5%; p < 0.001) and that Wikipedia’s answers were less complete. Although no factual errors were found, errors of omission were more frequent in Wikipedia (n = 48) than in MDR (n = 14). The authors did notice a marked improvement in Wikipedia over time. The authors conclude:

This study suggests that Wikipedia may be a useful point of engagement for consumers looking for drug information, but that it should be supplementary to, rather than the sole source of, drug information. This is due, in part, to our findings that Wikipedia has a more narrow scope, is less complete, and has more errors of omission versus the comparator database. Consumers relying on incomplete entries for drug information risk being ill-informed with respect to important safety features such as adverse drug events, contraindications, drug interactions, and use in pregnancy. These errors of omission may prove to be a substantial and largely hidden danger associated with exclusive use of user-edited drug information sources.

Alternatively, user-edited sites may serve as an effective means of disseminating drug information and are promising as a means of more actively involving consumers in their own care. However, health professionals should not use user-edited sites as authoritative sources in their clinical practice, nor should they recommend them to patients without knowing the limitations and providing sufficient additional information and counsel…

Not Evidence Based

German researchers found [7], not surprisingly, that Wikipedia (as well as two major German statutory health insurances):

“…failed to meet relevant criteria, and key information such as the presentation of probabilities of success on patient-relevant outcomes, probabilities of unwanted effects, and unbiased risk communication was missing. On average items related to the objectives of interventions, the natural course of disease and treatment options were only rated as “partially fulfilled”. (..)  In addition, the Wikipedia information tended to achieve lower comprehensibility. In conclusion(..) Wikipedia (..) does not meet important criteria of evidence-based patient and consumer information though…”

Wrong, misleading, inaccurate

All the above studies point at the incompleteness of Wikipedia. Even more serious is the fact that some Wikipedia additions are wrong or misleading, sometimes on purpose. The 15 biggest Wikipedia blunders [23] include the death announcements of Ted Kennedy (when he was still alive), Robert Byrd and others. Almost hilarious are the real-time Wikipedia revisions after the presumed death of Kennedy and the death of Ken Lay (suicide, murder, heart attack? [24]).

In the field of medicine, several drug companies have been caught altering Wikipedia entries. The first drug company caught messing with Wikipedia was AstraZeneca. References claiming that Seroquel allegedly made teenagers “more likely to think about harming or killing themselves” were deleted by a user of a computer registered to the drug company [25], according to The Times [26]. Employees of Abbott Laboratories have also been altering entries in Wikipedia to “eliminate information questioning the safety of its top-selling drugs” (see the WSJ blog [27], brandweeknrx.com [28], and recently Kevin MD [29]).

These are “straightforward” examples of fraudulent material. But sometimes the Wikipedia articles are more subtly colored by positive or negative bias.

Take for instance the English entry on Evidence Based Medicine (in fact the reason why I started this post). With an open mind I checked the entry, a link to which had been automatically generated in one of my posts by Zemanta. First I was surprised by the definition of EBM:

Evidence-based medicine (EBM) aims to apply the best available evidence gained from the scientific method to medical decision making. It seeks to assess the quality of evidence of the risks and benefits of treatments (including lack of treatment).

instead of the usually cited Sackett definition (which is only cited at the end of the article):

“the practice of evidence based medicine means integrating individual clinical expertise with the best available external clinical evidence from systematic research”

In short, the whole article lacks cohesion: the definitions of EBM are not correct, there is too much emphasis on information that is not directly relevant (4 ways to grade the evidence and 3 statistical measures), and the limitations are overemphasized (compare section 7 with section 6 in the figure below) and put out of perspective.

Apparently this has also been noted by Wikipedia, because there is a notice on the Evidence Based Medicine Page saying:

This article has been nominated to be checked for its neutrality. Discussion of this nomination can be found on the talk page. (May 2009)

[Screenshot: the start of the Wikipedia Evidence Based Medicine entry]

Much to my surprise, the article had been written by Mr-Natural-Health, whose account seems not to have been in use since 2004 and who is currently active as User:John Gohde. Mr Natural Health is a member of WikiProject Alternative medicine.

Now why on earth would an advocate of CAM write the Wikipedia EBM entry? I can think of 4 (not mutually exclusive) reasons:

  1. When you’re an EBM-nonbeliever or opponent this is THE chance to misinform readers about EBM (to the advantage of CAM).
  2. The author was invited to write this entry.
  3. No EBM-specialist or epidemiologist is willing to write the entry, or to write for Wikipedia in general (perhaps because they find Wikipedia lacks trustworthiness?)
  4. EBM specialists/epidemiologists are not “allowed” to, or are hindered from, making major amendments to the text, let alone rewriting it.

According to Mr Natural Health, point 2 is THE reason he wrote this article. The next question, then, is: exactly by whom was he invited? The TALK page reveals that Mr Natural Health makes it a tough job for other, better qualified writers to edit the page (point 4). To see how difficult it is for someone to re-edit the page, take a look at that TALK page. In fact, one look at it discourages me from ever trying to make any amendments to any Wikipedia text.

SOLUTIONS?

Changes to Wikipedia’s organization

Wikipedia has long grasped that its Achilles heel is its free editability (see for instance this interview with Wikipedia’s founder [30]). Therefore, “WikiProjects” were initiated to help coordinate and organize the writing and editing of articles on a certain topic. Outside Wikipedia there is “Citizendium”, an English-language wiki-based free encyclopedia project that aims to improve on the Wikipedia model by providing a “reliable” encyclopedia. “It hopes to achieve this by requiring all contributors to use their real names, by strictly moderating the project for unprofessional behavior, by providing what it calls “gentle expert oversight” of everyday contributors, and also through its “approved articles,” which have undergone a form of peer-review by credentialed topic experts and are closed to real-time editing.”

Starting this fall, Wikipedia will launch an optional feature called “WikiTrust” that will color-code every word of the encyclopedia based on the reliability of its author and the length of time it has persisted on the page: text from questionable sources starts out with a bright orange background, while text from trusted authors gets a lighter shade.
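To illustrate the idea (and only the idea; this is not WikiTrust’s actual algorithm), here is a toy sketch in Python of how a word’s background shade could move from bright orange to near-white as the author’s reputation and the text’s survival time grow. The weights and scales are invented purely for illustration.

```python
# Toy illustration of trust-based text shading, loosely inspired by the WikiTrust
# description above. The formula and parameters are hypothetical.
def trust_shade(author_reputation: float, revisions_survived: int) -> str:
    """Return a hex background colour for a word.

    author_reputation: assumed score from 0 (unknown/questionable) to 1 (trusted).
    revisions_survived: number of page revisions the word has persisted through.
    """
    # Trust grows with the author's reputation and with how long the text survived.
    trust = min(1.0, 0.6 * author_reputation + 0.4 * min(revisions_survived / 20, 1.0))
    # Interpolate from bright orange (low trust) towards white (high trust).
    r = 255
    g = int(165 + (255 - 165) * trust)
    b = int(255 * trust)
    return f"#{r:02x}{g:02x}{b:02x}"

print(trust_shade(0.1, 0))   # bright orange: brand-new text from an unknown author
print(trust_shade(0.9, 50))  # near-white: long-surviving text from a reputable author
```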

[Screenshot: Wikipedia WikiProject Medicine]

The Wikipedia EBM article is within the scope of these two projects, and that is good news. However, Wikipedia still clings to the idea that “Everyone is welcome to join in this endeavor (regardless of medical qualifications!).” In my opinion, it would be better if Wikipedia gave precedence to experts over hobbyists and people from other fields, because the former can be expected to know what they are talking about. As it stands, contributing is quite off-putting for experts. See this shout-out:

Who are these so-called experts who will qualify material? From what I’ve seen so far, being an academic expert in a particular field hardly protects one from edit wars–Julie and 172 are two primary examples of this. Meanwhile, the only qualification I have seen so far is that they have a B.A. Gimme a friggin’ break! (and before I get accused of academic elitism, I make it known that I dropped out of college and spend an inordinate amount of time at work correcting the BS from the BAs, MAs, and PhDs).

While anyone can still edit entries, the site is testing pages that require changes to be approved by an experienced Wikipedia editor before they show up, the so-called flagged protection and patrolled revisions (see Wikimedia). This proposal applies only to articles that are currently under the normal mechanisms of protection (e.g. the Obama article cannot be edited by a newcomer).

Although this seems logical, it is questionable whether “experienced” editors are by definition better qualified than newcomers. A recent interesting analysis by the Augmented Social Cognition group [31] (cited in the Guardian [32]) shows a slowdown in the growth of Wikipedia activity, with activity slightly declining in all classes of editors except the highest-frequency class (1000+ edits), whose monthly edits have actually increased.

In addition, the study shows a growing resistance from the Wikipedia community to new content: the total percentage of reverted edits increased steadily over the years and, more interestingly, low-frequency or occasional editors experienced visibly greater resistance than high-frequency editors.

This is more or less in line with an earlier finding [8] showing that Wikipedia members feel more comfortable expressing themselves on the net than offline and score lower on agreeableness and openness than non-Wikipedians, a finding that was interpreted as consistent with the possibility that contributing to Wikipedia serves mainly egocentric motives.


Encouraging students, doctors and scientists

One way of improving content is to encourage experts to write. To ensure that the information on Wikipedia is of the highest quality and up to date, the NIH is encouraging its scientists and science writers to edit and even initiate Wikipedia articles in their fields [36]. It joined with the Wikimedia Foundation to host a training session on the tools and rules of wiki culture at NIH headquarters in Bethesda.

A less noncommittal approach is the demand to “publish in Wikipedia or perish”, as described in Nature News [9]: anyone submitting to a section of the journal RNA Biology will, in the future, be required to also submit a Wikipedia page that summarizes the work. The journal will then peer-review the page before publishing it in Wikipedia. The project is described in detail here [10] and the wiki can be viewed here.

Wikis for experts.

One possible solution is for scientists and medical experts to contribute to wikis other than Wikipedia. One such wiki is wiki-surgery [5]. PubDrugRxWiki, WikiProteins [11] and Gene Wiki [12] are other examples. In general, scientists are more inclined to contribute to these specialist wikis, which have oversight and formal contributions by fellow practitioners (this is also true for the RNA wiki).

A medical Wikipedia

Yet another solution is a medical wikipedia, such as Ganfyd or Medpedia. Ganfyd is written by medical professionals. To qualify to edit or contribute to the main content of Medpedia, approved editors must have an M.D., D.O., or Ph.D. in a biomedical field. Others, however, may contribute by writing in suggestions for changes to the site using the “Make a suggestion” link at the top of each page. Suggestions are reviewed by approved editors. Whether these medical wikipedias will succeed depends on the input of experts and on their popularity: to what extent will they be consulted by people with health questions?

I would like to end with a quote from Berci during a twitterview (link in Wikipedia):

@Berci : @diariomedico And as Wikipedians say, Wikipedia is the best source to start with in your research, but should never be the last one. #DM1 9 months ago

REFERENCES

ResearchBlogging.org
Scientific Articles

  1. Laurent, M., & Vickers, T. (2009). Seeking Health Information Online: Does Wikipedia Matter? Journal of the American Medical Informatics Association, 16 (4), 471-479 DOI: 10.1197/jamia.M3059
  2. Hughes, B., Joshi, I., Lemonde, H., & Wareham, J. (2009). Junior physician’s use of Web 2.0 for information seeking and medical education: A qualitative study International Journal of Medical Informatics, 78 (10), 645-655 DOI: 10.1016/j.ijmedinf.2009.04.008
  3. Lee, C., Teo, C., & Low, A. (2009). Fulminant dengue myocarditis masquerading as acute myocardial infarction International Journal of Cardiology, 136 (3) DOI: 10.1016/j.ijcard.2008.05.023
  4. Halavais, A., & Lackaff, D. (2008). An Analysis of Topical Coverage of Wikipedia Journal of Computer-Mediated Communication, 13 (2), 429-440 DOI: 10.1111/j.1083-6101.2008.00403.x
  5. Devgan, L., Powe, N., Blakey, B., & Makary, M. (2007). Wiki-Surgery? Internal validity of Wikipedia as a medical and surgical reference Journal of the American College of Surgeons, 205 (3) DOI: 10.1016/j.jamcollsurg.2007.06.190
  6. Clauson, K., Polen, H., Boulos, M., & Dzenowagis, J. (2008). Scope, Completeness, and Accuracy of Drug Information in Wikipedia Annals of Pharmacotherapy, 42 (12), 1814-1821 DOI: 10.1345/aph.1L474 (free full text)
  7. Mühlhauser I, & Oser F (2008). [Does WIKIPEDIA provide evidence-based health care information? A content analysis] Zeitschrift fur Evidenz, Fortbildung und Qualitat im Gesundheitswesen, 102 (7), 441-8 PMID: 19209572
  8. Amichai–Hamburger, Y., Lamdan, N., Madiel, R., & Hayat, T. (2008). Personality Characteristics of Wikipedia Members CyberPsychology & Behavior, 11 (6), 679-681 DOI: 10.1089/cpb.2007.0225
  9. Butler, D. (2008). Publish in Wikipedia or perish Nature DOI: 10.1038/news.2008.1312
  10. Daub, J., Gardner, P., Tate, J., Ramskold, D., Manske, M., Scott, W., Weinberg, Z., Griffiths-Jones, S., & Bateman, A. (2008). The RNA WikiProject: Community annotation of RNA families RNA, 14 (12), 2462-2464 DOI: 10.1261/rna.1200508
  11. Mons, B., Ashburner, M., Chichester, C., van Mulligen, E., Weeber, M., den Dunnen, J., van Ommen, G., Musen, M., Cockerill, M., Hermjakob, H., Mons, A., Packer, A., Pacheco, R., Lewis, S., Berkeley, A., Melton, W., Barris, N., Wales, J., Meijssen, G., Moeller, E., Roes, P., Borner, K., & Bairoch, A. (2008). Calling on a million minds for community annotation in WikiProteins Genome Biology, 9 (5) DOI: 10.1186/gb-2008-9-5-r89
  12. Huss, J., Orozco, C., Goodale, J., Wu, C., Batalov, S., Vickers, T., Valafar, F., & Su, A. (2008). A Gene Wiki for Community Annotation of Gene Function PLoS Biology, 6 (7) DOI: 10.1371/journal.pbio.0060175
Other Publications, blog posts
(numbers in the text still need to be adapted to this list)

  13. Envision Solutions, LLC. Diving Deeper Into Online Health Search – Examining Why People Trust Internet Content & The Impact Of User-Generated Media (2007) http://www.envisionsolutionsnow.com/pdf/Studies/Online_Health_Search.pdf Accessed August 2009 (CC)
  14. New data from the Pew Internet & American Life Project are available here.
  15. http://www.thegooglecache.com/white-hat-seo/966-of-wikipedia-pages-rank-in-googles-top-10/
  16. http://www.seoptimise.com/blog/2008/05/why-wikipedias-google-rankings-are-a-joke.html
  17. http://wishfulthinkinginmedicaleducation.blogspot.com/2009/06/where-do-first-year-medical-students.html
  18. http://www.newscientist.com/article/mg20327185.500-should-you-trust-health-advice-from-the-web.html?page=1
  19. http://wishfulthinkinginmedicaleducation.blogspot.com/2009/07/where-do-junior-doctors-look-things-up.html
  20. http://www.shockmd.com/2009/07/06/how-and-why-junior-physicians-use-web-20/
  21. http://sandnsurf.medbrains.net/2009/07/how-and-why-junior-docs-use-web-20/
  22. Wikipedia used by 70% of junior physicians, dominates search results for health queries (casesblog.blogspot.com)
  23. http://scienceroll.com/2009/07/06/junior-physicians-and-web-2-0-call-for-action/
  24. http://scienceroll.com/2009/08/03/rorschach-test-scandal-on-wikipedia-poll/
  25. http://www.nytimes.com/2009/07/29/technology/internet/29inkblot.html (Rorschach)
  26. http://www.pcworld.com/article/170874/the_15_biggest_wikipedia_blunders.html
  27. http://www.futureofthebook.org/blog/archives/2006/07/reuters_notices_wikipedia_revi.html
  28. http://en.wikipedia.org/w/index.php?diff=prev&oldid=144007397
  29. http://business.timesonline.co.uk/tol/business/industry_sectors/media/article2264150.ece
  30. http://blogs.wsj.com/health/2007/08/30/abbott-labs-in-house-wikipedia-editor/
  31. http://www.brandweeknrx.com/2007/08/abbott-caught-a.html
  32. http://www.kevinmd.com/blog/2009/08/op-ed-wikipedia-isnt-really-the-patients-friend.html
  33. http://www.businessweek.com/technology/content/dec2005/tc20051214_441708.htm?campaign_id=topStories_ssi_5
  34. http://asc-parc.blogspot.com/2009/08/part-2-more-details-of-changing-editor.html
  35. http://www.guardian.co.uk/technology/2009/aug/12/wikipedia-deletionist-inclusionist
  36. http://www.washingtonpost.com/wp-dyn/content/article/2009/07/27/AR2009072701912.html




MedLib’s Round, First Edition

13 02 2009

Welcome to the first edition of MedLib’s Round, a blog carnival of the “best blog posts in the field of medical librarianship”.

Starting a new blog carnival is often difficult. You have to recruit bloggers who want to participate by submitting blog posts and/or hosting future editions (see this older post on Scienceroll – thanks @hleman).

I didn’t sound out people to find out if they were interested, but just gave it a try. Therefore, I was very pleased that the idea was so enthusiastically received by many medical librarians ànd physicians from all over the world. Emergency physician Mike Cadogan (@sandnsurf) of Life in the Fast Lane already added the MedLib’s Round to his listing of Blogs Rankings and Rounds before it had even started.

Blog carnivals are meant to spread the word not only about established bloggers, but also about new ones. I’m therefore delighted that several librarians were inspired to (re)start blogging.

Shamsha Damani (@shamsha) accepted the invitation to become a guest writer on this blog to be able to submit a post (see below).

Alisha Miles (@alisha764), who started tweeting in January, started her own blog Alisha 764 with the post “I am a Tree”, saying: “I am no longer a mushroom, I am now a tree. Thank you to all of the other librarians’ posts & tweets that inspired me to start this blog.” This clearly refers to the comment of @sandnsurf on the blogpost “What I learned in 2008 (about Web 2.0)“: “the most important thing is that you are actually a tree in this ecosystem, you are out there experimenting, thinking and trying to drive the revolution further…Most of my colleagues are still mushrooms…

The Pilgrimthinker (a librarian explores health literacy, patient education and consumer health issues) even wrote a blogpost: “Thank you, Laika, for taking the initiative to start up a MedLib Blog Carnival. It was just the kick in the pants I needed to get back to blogging, with the added promise of some increased interest and posting from everyone.”

Thus, apart from being a post aggregator, a blog carnival can also inspire people with similar interests and connect them. From my own experience I know you can feel lonely as a blogger. So please take a look at the above-mentioned blogs and Twitter accounts and help them flourish into full-grown trees, so we can all enjoy their fruits (and vice versa).

AND NOW FOR…..THE FIRST MEDLIB’S ROUND

The MedLib’s Round is about medical librarian stuff. This field is much broader than searching PubMed or interlibrary loans; it relates to all stages of the publication and medical information cycles (searching, citing, managing, writing, publishing, social networking).

This carnival covers many facets of that cycle.

SEARCHING THE WEB

For medical librarians searching is an important facet of their job. There are different sources to search, including “the World Wide Web” and bibliographic databases like PubMed.

Hope Leman of AltSearchEngines has compiled a list of the Top 10 Health Search Engines of 2008. She urges all those interested in medical search to give these tools a spin. Her Top 10 bears a great resemblance to the Top 8 Bedside Health Search Engines 2008 of @sandnsurf (Mike Cadogan), indicating that the same engines are appreciated and used by physicians as well.
GoPubMed ranks second in both lists. According to Hope, “GoPubMed is a useful complement to PubMed proper, particularly to determine who the leading authorities are on particular topics.”
For further details on how to use GoPubMed see an earlier post of Mike’s and several posts of David Rothman’s (here and here).

In first position in both lists is the federated search engine Mednar. Hope submitted a second post devoted entirely to this health search engine: Mednar Search…and Hope said, “It is good.” Well, if Hope, an expert in search engines, recommends Mednar, it must be good. According to Hope, Mednar is useful for (medical) librarians as well as busy front-line clinicians and clinical researchers. Its main advantages are its ease of use, its elegant interface and “the access to an array of databases that are simply not mined by other health search engines, also called “The Invisible Web” (gray literature and similar hard-to-find content)“. It is a useful complement to PubMed in that there is a shorter lag time before the very latest articles can be found.
Recently others have also reviewed Mednar, including (of course) @sandnsurf, as well as Creaky of EBM and Clinical Support Librarians@UCHC, who concluded: “I liked the results well enough, but won’t give up using the precise technical limits and search filters available in PubMed, or the comprehensive, deep searches available by using the 15,000 journals indexed in Scopus”.

SEARCHING PUBMED (and Widgets)

Guus van den Brekel of DigiCMB, who just won the Alliance Virtual Library Golden Leaf Awards 2009 (Second Life), told me that PubMed is by far the most frequently used search database among the hospital staff and students of the University Medical Center Groningen, where he works. In 2007, somebody used the PubMed link EVERY 2 MINUTES, and every 30 seconds somebody clicked the SFX link resolver in PubMed. Guus believes that such a tool needs to be published to as many platforms as possible, and in any format the patrons would like. So far a toolbar, widget, HTML box and OpenSearch pretty much cover that wish. The widgets can be found at PubMed Search & News Widget.

PubMed has introduced (or rather continuously introduces) several changes, which have been amply discussed here. Major changes include the Advanced Search, the citation sensor and the way terms typed in the search bar are translated. Non-librarians often don’t know that PubMed automatically maps the words typed, but the way this is done has changed; for instance, multi-term phrases are now split. In her post Mapping door PubMed (Mapping by PubMed), written in Dutch and English, de Bibliotheker shows that this altered mapping can have both unexpected positive and negative effects, and that it is always important to check the Details tab.

Among the things that Nicole Dettmar (Eagle Dawg) addresses in her post Hidden in the Bookshelf: PubMed & Discovery Initiative on the Eagle Dawg Blog is the new Discovery Initiative of the NCBI, an effort to make the full potential of the NCBI Web services and underlying databases more available to users. Nicole gives various interesting links, which will tell you more about the upcoming changes.

MANAGING INFORMATION AND REFERENCES

Like many of her colleagues, medical librarian Anne Welsh of First Person Narrative noticed that clinicians prefer to perform one-word Google-style searches (hé, does that sound familiar!). However, realizing that her medical library “expert opinion” was based on nothing more than a series of anecdotes, Anne decided to have a fish around for research on clinicians’ search strategies and information needs. Curious about the outcome? Then read the summary of the evidence in her well-written research blogging post “Limiting the Dataset”.

Indeed, it is hard to keep up with the literature. Apart from specific (often Google-style) searches, most clinicians also try to read a few interesting journals, for instance the BMJ and the Lancet. Instead of going to the library, it is also possible to set up an email alert or an RSS feed for the journals of your choice. You can generate custom RSS feeds in PubMed for your favorite search and/or journal, but this is a somewhat cumbersome procedure for people not used to it (see for instance my earlier post in Dutch and this post of David Rothman – a must-read for people not acquainted with the use of RSS for this purpose).
Physician and medicine 2.0 pioneer Ves Dimov of the Clinical Cases and Images Blog has another solution for setting up RSS feeds to journals, which I found astonishingly simple and pretty awesome because of the convenient arrangement of the results. All you need is a free Google account to create Your Own “Medical Journal” with an iGoogle Personalized Page. Want to know how it works? Then read his easy-to-follow post, which he has specially updated for this occasion. Ves has also included some ready-made RSS feeds of the “Big Five” medical journals (NEJM, JAMA, BMJ, Lancet and Annals) plus 2-3 subspecialty journals, as well as several podcasts, in iGoogle.
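For readers who prefer a small script (or their own feed reader) over iGoogle, here is a minimal sketch using the Python feedparser library; the feed URL is a hypothetical placeholder, to be replaced by the feed you generated in PubMed or by a journal’s own RSS feed.

```python
# Minimal sketch: print the latest items from a journal RSS feed.
# The URL below is a hypothetical placeholder, not a real feed.
import feedparser

FEED_URL = "https://example.org/my-journal-feed.rss"

feed = feedparser.parse(FEED_URL)
for entry in feed.entries[:10]:   # the ten most recent items
    print(entry.title)
    print(entry.link)
    print()
```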

Now, once you have the PDFs of the papers you like, you will want to store them in a handy way. Another physician, the Dutch psychiatrist Dr Shock MD PhD, with a very eloquent blog of the same name, explores the use of Mendeley, free social software for managing and sharing research papers and a Web 2.0 site for discovering research trends and connecting to like-minded academics (see Mendeley Manage Share and Discover Research Papers). Dr. Shock hasn’t made up his mind yet whether he prefers Mendeley or Labmeeting (described in another post) as an online library. But offline he uses Sente, which he finds absolutely perfect. A chimera between Sente and one of the other tools would be his ideal management system.

PUBLISHING

Michelle Kraft of The Krafty Librarian was totally blown away by a presentation on Interactive Science Publishing at the PSP 2009 Annual Conference (where she also gave a presentation herself). I didn’t know what interactive science publishing really meant, but Michelle can illustrate things so well that you can readily imagine it all. This was needed, as I could not access the examples she referred to without the risk of my computer becoming too slow, or worse. But I understand from Michelle that it is a revolutionary new method of viewing online journals, although there are some questions to be addressed as well (see her post).

Imagine having the “PDF” of an article on congenital heart defects and being able to hear the heart sounds plus the video recording of the heart. The video would be more than just a snippet; it would be the entire video, sectioned into “chapters” referenced within the various areas of the article. So while you are reading the article you can click on the link within the text referencing the image, sound, etc., and the video immediately jumps to that section. Imagine the data behind a large randomized controlled trial available in its entirety to all readers, to be manipulated, reused, and viewed.

Another new publishing format is discussed by Shamsha Damani (@shamsha) on this blog (see: “How to make EBM easy to swallow“). Shamsha informs us that the BMJ will be publishing two summaries for each research article published. One, called BMJ PICO, is prepared by the authors and breaks down the article into the different EBM elements. The other, called Short Cuts, is written by the BMJ itself. Here she hopes the BMJ will shine, providing an easy-to-follow, unbiased view of the article. Indeed, it would be very welcome if more papers were in a ready-appraised format, similar to that found in the ACP Journal Club. However, in the BMJ it is the PICO format, written by the authors themselves, which has the EBM structure and is most preferred by the readers. According to some (including me) the Short Cuts are a bit woolly. Or as Shamsha says: “Personally I think it would have been better to have the BMJ reviewers write the PICO format, and do a bit more thorough critiquing”.

SOCIAL MEDIA & NLM, GOVERNMENTAL ORGANIZATIONS AND MEDICAL LIBRARIANS

In the same blogpost mentioned above, @Eagledawg notes that the recent introduction of the #pubmed tag on Twitter (with the aim that you can later search for messages with this tag; see real-time results here) led to various responses, which are not really appreciated as useful by the NLM because of the extremely short length of the tweets (140 characters including the tag). It strikes Nicole that the NLM is not present on Twitter (in contrast to the FDA and the CDC; also see a post of David Rothman). A good example of how the government can use social media to respond to citizens is given by Andrew Wilson, a member of the recently introduced social media team of the Department of Health and Human Services, who responded to the peanut-butter-and-salmonella recall issue on Twitter.

An interview with Andrew Wilson can be found here.
And, by the way, The Library of Congress (see Dean Giustini’s blog) and the Cochrane Collaboration have also joined Twitter.

Health 2.0 people are well represented on Twitter. See for instance this list of Twitter doctors, medical students and medicine-related accounts made by @medicalstudent. There is also a great SlideShare presentation by @PhilBaumann on 140 health care uses for Twitter.

But how is Twitter used by medical librarians? David Rothman is not a huge fan of Twitter (he prefers FriendFeed), but he does refer to a list of great & growing resources for libraries/librarians on Twitter. Dean Giustini of the UBC Academic Search – Google Scholar Blog wonders why there aren’t More Canadian (Maple Leaf) Librarians on Twitter? Well, I don’t know whether this is typical for Canadians; I don’t see many Dutch medical librarians either. Dean plans to write something about Twitter for an upcoming issue of a health library journal. Want to have an idea of what Twitter is about? Then read his short post on Twitter. Already on Twitter but “looking for twitterers in all the wrong places”? Then forget one bad idea and follow the half dozen good ideas Patricia gives in her excellent post on Twitter.

And what about the presence of the above-mentioned contributors to this first Round? Without exception they are all on Twitter, and all but one use it on a regular basis. Now, assuming that most medical librarians aren’t on Twitter, doesn’t that tell us something about this group? I wonder whether Twitter presence is not the main reason for the swift start of this first MedLib’s Round.

That’s it for this edition.

I hope you enjoyed this first MedLib’s Round.
I surely enjoyed reading the many interesting and good quality posts that were submitted.

The next round will be hosted by Dragonfly, March 10. Please submit your favorite blog article to the next edition of MedLib’s Round before March 8 by using the carnival submission form (here)(!). Submission via the form makes it easier for the host to summarize the articles.

p.s. Perhaps you would like to host a future edition as well. If so, please let me know which edition (as of May) you would like to host.

Jacqueline (“Laika”)


Photo credits (Flickr-CC)

Librarian’s Costume by Librarian Avenger

Namro Orman, SL

Another Dead Librarian by Doug!







