Of Mice and Men Again: New Genomic Study Helps Explain Why Mouse Models of Acute Inflammation Do Not Work in Men

25 02 2013


This post was updated after a discussion on Twitter with @animalevidence, who pointed me to a great blog post at Speaking of Research ([18], a repost of [19]) highlighting the shortcomings of the current study, which used just one single inbred strain of mice (C57Bl6) [2013-02-26]. Main changes are in blue.

A recent paper published in PNAS [1] caused quite a stir both inside and outside the scientific community. The study challenges the validity of using mouse models to test what works as a treatment in humans. At least this is what many online news sources seem to conclude: “drug testing may be a waste of time”[2], “we are not mice” [3, 4], or a bit more to the point: mouse models of inflammation are worthless [5, 6, 7].

But the current study basically looks at only one specific area: the inflammatory responses that occur in critically ill patients after severe trauma and burns (SIRS, Systemic Inflammatory Response Syndrome). In these patients a storm of events may eventually lead to organ failure and death. It is similar to what may occur in sepsis (where the cause is a systemic infection).

Furthermore, the study uses only a single approach: it compares the gene response patterns in serious human injuries (burns, trauma) and in a human model partially mimicking these inflammatory diseases (healthy volunteers receiving a low dose of endotoxin) with the corresponding three mouse models (burns, trauma, endotoxin).

And, as highlighted by Bill Barrington of “Understand Nutrition” [8], the researchers tested the gene profiles in only one single strain of mice: C57Bl6 (B6 for short). If B6 were the only model used in practice this would be less of a problem. But according to Mark Wanner of the Jackson Laboratory [18, 19]:

 It is now well known that some inbred mouse strains, such as the C57BL/6J (B6 for short) strain used, are resistant to septic shock. Other strains, such as BALB and A/J, are much more susceptible, however. So use of a single strain will not provide representative results.

The results themselves are very clear. The figures show at a glance that there is no correlation whatsoever between the human and B6 mouse expression data.

Seok and 36 other researchers from across the USA looked at approximately 5500 human genes and their mouse analogs. In humans, burns and traumatic injuries (and to a certain extent the human endotoxin model) triggered the activation of a vast number of genes that were not triggered in the present C57Bl6 mouse models. In addition, the genomic response is longer lasting in human injuries. Furthermore, the top 5 most activated and most suppressed pathways in human burns and trauma had no correlates in mice. Finally, analysis of existing data in the Gene Expression Omnibus (GEO) database showed that the lack of correlation between mouse and human studies also held for other acute inflammatory responses, like sepsis and acute infection.
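
To make this kind of comparison concrete: at its core such an analysis correlates the direction and magnitude of gene expression changes in the two species. Below is a minimal sketch of such a calculation; the genes and fold-change values are invented for illustration and are not data from Seok et al.

```python
# Minimal sketch of a human-vs-mouse expression comparison.
# NOTE: gene list and fold changes are invented for illustration only.
from scipy.stats import pearsonr

# log2 fold changes (injury vs. control) for a few hypothetical orthologous genes
human_log2fc = {"IL6": 4.2, "TNF": 3.1, "SOCS3": 2.8, "ALB": -1.5, "CD3E": -2.0}
mouse_log2fc = {"IL6": 0.4, "TNF": -0.2, "SOCS3": 1.9, "ALB": 0.1, "CD3E": 0.3}

genes = sorted(set(human_log2fc) & set(mouse_log2fc))
h = [human_log2fc[g] for g in genes]
m = [mouse_log2fc[g] for g in genes]

r, p = pearsonr(h, m)
print(f"Pearson r between human and mouse responses: {r:.2f} (p = {p:.2f})")
# Seok et al. report correlations close to zero for the real gene sets,
# i.e. the mouse response hardly predicts the human one.
```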

This is a high quality study with interesting results. However, the results are not as groundbreaking as some media suggest.

As discussed by the authors [1], mice are known to be far more resilient to inflammatory challenge than humans: a dose of endotoxin a million-fold higher than the dose causing shock in humans is needed to be lethal to mice. This, and the fact that “none of the 150 candidate agents that progressed to human trials has proved successful in critically ill patients”, already indicates that the current approach fails.

[This is not entirely correct: the endotoxin/LPS dose in mice is 1,000–10,000 times the dose required to induce severe disease with shock in humans [20], and mice that are resilient to endotoxin may still be susceptible to infection. It may well be that the endotoxin response is not a good model for the late effects of sepsis.]

The disappointing trial results have forced many researchers not only to question the usefulness of the current mouse models for acute inflammation [9, 10; refs from 11], but also to rethink key aspects of the human response itself and the way these clinical trials are performed [12, 13, 14]. For instance, emphasis has always been on the exuberant inflammatory reaction, but the subsequent immunosuppression may also be a major contributor to the disease. There is also substantial heterogeneity among patients [13, 14] that may explain why some patients have a good prognosis and others do not. And some of the initially positive results in human trials have not been reproduced in later studies either (benefit of intensive glucose control and corticosteroid treatment) [12]. Thus, is it fair to blame only the mouse studies?


The coverage by some media is grist to the mill of people who think animal studies are worthless anyway. But one cannot extrapolate these findings to other diseases. Furthermore, as mentioned above, the researchers tested the gene profiles in only one single strain of mice, C57Bl6, meaning that “The findings of Seok et al. are solely applicable to the B6 strain of mice in the three models of inflammation they tested. They unduly generalize these findings to mouse models of inflammation in general.” [8]

It is true that animal studies, including rodent studies, have their limitations. But what are the alternatives? In vitro studies are often even more artificial, and direct clinical testing of new compounds in humans is not ethical.

Obviously, the final proof of effectiveness and safety of new treatments can only be established in human trials. No one will question that.

A lot can be said about why animal studies often fail to translate directly to the clinic [15]. Clinical disparities between the animal models and the clinical trials testing the treatment (as in sepsis) are one reason. Other important reasons may be methodological flaws in animal studies (e.g. no randomization, wrong statistics) and publication bias: non-publication of “negative” results appears to be prevalent in laboratory animal research [15, 16]. Despite their shortcomings, animal studies and in vitro studies offer a way to examine certain aspects of a process, disease or treatment.

In summary, this study confirms that the existing (C57Bl6) mouse model doesn’t resemble the human situation in the systemic response following acute traumatic injury or sepsis: the genomic response is entirely different, in magnitude, duration and types of changes in expression.

The findings are not new: the shortcomings of the mouse model(s) have long been known. It remains enigmatic why the researchers chose only one inbred strain of mice, and of all mice the B6 strain, which is less sensitive to endotoxin and only develops acute kidney injury (part of organ failure) at old age (young mice were used) [21]. In that paper from 2009 (!) [21], various reasons are given why the animal models didn’t properly mimic the human disease and how this can be improved. The authors stress that:

“the genetically heterogeneous human population should be more accurately represented by outbred mice, reducing the bias found in inbred strains that might contain or lack recessive disease susceptibility loci, depending on selective pressures.”

Both Bill Barrington [8] and Mark Wanner [18, 19] propose the use of “diversity outbred cross or collaborative cross mice that provide additional diversity.” Indeed, “replicating genetic heterogeneity and critical clinical risk factors such as advanced age and comorbid conditions (..) led to improved models of sepsis and sepsis-induced AKI (acute kidney injury).”

The authors of the PNAS paper suggest that genomic analysis can aid further in revealing which genes play a role in the perturbed immune response in acute inflammation, but it remains to be seen whether this will ultimately lead to effective treatments of sepsis and other forms of acute inflammation.

It also remains to be seen whether comprehensive genomic characterization will be useful in other disease models. The authors suggest, for instance, that genetic profiling may serve as a guide to develop animal models. A shotgun analysis of the expression of thousands of genes was useful in the present situation, because “the severe inflammatory stress produced a genomic storm affecting all major cellular functions and pathways in humans which led to sufficient perturbations to allow comparisons between the genes in the human conditions and their analogs in the murine models”. But a rough analysis of overall expression profiles may give little insight into the usefulness of other animal models, where genetic responses are more subtle.

And predicting what will happen is far less easy than confirming what is already known…

NOTE: as said, the coverage in news and blogs is again quite biased. The conclusion of a generally good Dutch science news site (the headline and lead suggested that animal models of immune diseases are crap [6]) was adapted after a critical discussion on Twitter (see here and here), and a link was added to this blog post. I wish this occurred more often…
In my opinion the most balanced summaries can be found at the science-based blogs: Science-Based Medicine [11] and the NIH Director’s Blog [17], whereas “Understand Nutrition” [8] has an original point of view, which is further elaborated by Mark Wanner at Speaking of Research [18] and the Genetics and Your Health blog [19].

References

  1. Seok, J., Warren, H., Cuenca, A., Mindrinos, M., Baker, H., Xu, W., Richards, D., McDonald-Smith, G., Gao, H., Hennessy, L., Finnerty, C., Lopez, C., Honari, S., Moore, E., Minei, J., Cuschieri, J., Bankey, P., Johnson, J., Sperry, J., Nathens, A., Billiar, T., West, M., Jeschke, M., Klein, M., Gamelli, R., Gibran, N., Brownstein, B., Miller-Graziano, C., Calvano, S., Mason, P., Cobb, J., Rahme, L., Lowry, S., Maier, R., Moldawer, L., Herndon, D., Davis, R., Xiao, W., Tompkins, R., Abouhamze, A., Balis, U., Camp, D., De, A., Harbrecht, B., Hayden, D., Kaushal, A., O’Keefe, G., Kotz, K., Qian, W., Schoenfeld, D., Shapiro, M., Silver, G., Smith, R., Storey, J., Tibshirani, R., Toner, M., Wilhelmy, J., Wispelwey, B., & Wong, W. (2013). Genomic responses in mouse models poorly mimic human inflammatory diseases. Proceedings of the National Academy of Sciences. DOI: 10.1073/pnas.1222878110
  2. Drug Testing In Mice May Be a Waste of Time, Researchers Warn 2013-02-12 (science.slashdot.org)
  3. Susan M Love We are not mice 2013-02-14 (Huffingtonpost.com)
  4. Elbert Chu. This Is Why It’s A Mistake To Cure Mice Instead Of Humans 2012-12-20 (richarddawkins.net)
  5. Derek Lowe. Mouse Models of Inflammation Are Basically Worthless. Now We Know. 2013-02-12 (pipeline.corante.com)
  6. Elmar Veerman. Waardeloos onderzoek. Proeven met muizen zeggen vrijwel niets over ontstekingen bij mensen. 2013-02-12 (wetenschap24.nl)
  7. Gina Kolata. Mice Fall Short as Test Subjects for Humans’ Deadly Ills. 2013-02-12 (nytimes.com)

  8. Bill Barrington. Are Mice Reliable Models for Human Disease Studies? 2013-02-14 (understandnutrition.com)
  9. Raven, K. (2012). Rodent models of sepsis found shockingly lacking Nature Medicine, 18 (7), 998-998 DOI: 10.1038/nm0712-998a
  10. Nemzek JA, Hugunin KM, & Opp MR (2008). Modeling sepsis in the laboratory: merging sound science with animal well-being. Comparative medicine, 58 (2), 120-8 PMID: 18524169
  11. Steven Novella. Mouse Model of Sepsis Challenged 2013-02-13 (http://www.sciencebasedmedicine.org/index.php/mouse-model-of-sepsis-challenged/)
  12. Wiersinga WJ (2011). Current insights in sepsis: from pathogenesis to new treatment targets. Current opinion in critical care, 17 (5), 480-6 PMID: 21900767
  13. Khamsi R (2012). Execution of sepsis trials needs an overhaul, experts say. Nature medicine, 18 (7), 998-9 PMID: 22772540
  14. Hotchkiss RS, Coopersmith CM, McDunn JE, & Ferguson TA (2009). The sepsis seesaw: tilting toward immunosuppression. Nature medicine, 15 (5), 496-7 PMID: 19424209
  15. van der Worp, H., Howells, D., Sena, E., Porritt, M., Rewell, S., O’Collins, V., & Macleod, M. (2010). Can Animal Models of Disease Reliably Inform Human Studies? PLoS Medicine, 7 (3) DOI: 10.1371/journal.pmed.1000245
  16. ter Riet, G., Korevaar, D., Leenaars, M., Sterk, P., Van Noorden, C., Bouter, L., Lutter, R., Elferink, R., & Hooft, L. (2012). Publication Bias in Laboratory Animal Research: A Survey on Magnitude, Drivers, Consequences and Potential Solutions PLoS ONE, 7 (9) DOI: 10.1371/journal.pone.0043404
  17. Dr. Francis Collins. Of Mice, Men and Medicine 2013-02-19 (directorsblog.nih.gov)
  18. Tom / Mark Wanner. Why mice may succeed in research when a single mouse falls short (2013-02-15) (speakingofresearch.com) [repost, with introduction]
  19. Mark Wanner. Why mice may succeed in research when a single mouse falls short (2013-02-13) (http://community.jax.org) [original post]
  20. Warren, H. (2009). Editorial: Mouse models to study sepsis syndrome in humans Journal of Leukocyte Biology, 86 (2), 199-201 DOI: 10.1189/jlb.0309210
  21. Doi, K., Leelahavanichkul, A., Yuen, P., & Star, R. (2009). Animal models of sepsis and sepsis-induced kidney injury Journal of Clinical Investigation, 119 (10), 2868-2878 DOI: 10.1172/JCI39421




Why Publishing in the NEJM is not the Best Guarantee that Something is True: a Response to Katan

27 10 2012

In a previous post [1] I reviewed a recent Dutch study published in the New England Journal of Medicine (NEJM) [2] about the effects of sugary drinks on the body mass index of school children.

The study got widely covered by the media. The NRC, for which the main author Martijn Katan works as a science columnist, spent two full (!) pages on the topic, without a single critical comment [3].
As if this wasn’t enough, Katan’s latest column again dealt with his article (text freely available at mkatan.nl) [4].

I found Katan’s column “Col hors Catégorie” [4] quite arrogant, especially because he tried to belittle a (as he called it) “know-it-all” journalist who had criticized his work in a rival newspaper. This wasn’t fair, because the journalist had raised important points [5, 1] about the work.

The piece focused on the long road to getting papers published in a top journal like the NEJM.
Katan considers the NEJM the “Tour de France” among medical journals: it is a top achievement to publish in this journal.

Katan also states that “publishing in the NEJM is the best guarantee something is true”.

I think the latter statement is wrong for a number of reasons.*

  1. First, most published findings are false [6]. Thus journals can never “guarantee” that published research is true.
    Factors that make it less likely that research findings are true include a small effect size, a greater number and lesser preselection of tested relationships, selective outcome reporting, the “hotness” of the field (all applying more or less to Katan’s study; he also changed the primary outcomes during the trial [7]), a small study, a great financial interest and a low pre-study probability (not applicable).
  2. It is true that the NEJM has a very high impact factor. This is a measure of how often papers in that journal are cited by others. Of course researchers want to get their paper published in a high-impact journal. But journals with high impact factors often go for trendy topics and positive results. In other words, it is far more difficult to publish a good-quality study with negative results, certainly in a high-impact English-language journal. This is called publication bias (and language bias) [8]. Positive studies will also be cited more frequently (citation bias) and will more likely be published more than once (multiple publication bias) (indeed, Katan et al had already published about the trial [9], and have not presented all their data yet [1, 7]). All forms of bias are a distortion of the “truth”.
    (This is the reason why the search for a (Cochrane) systematic review must be very sensitive [8] and not be restricted to core clinical journals, but should even include unpublished studies: for these studies might be “true”, but have failed to get published.)
  3. Indeed, the group of Ioannidis just published a large-scale statistical analysis [10] showing that medical studies revealing “very large effects” seldom stand up when other researchers try to replicate them. Often studies with large effects measure laboratory and/or surrogate markers (like BMI) instead of truly clinically relevant outcomes (diabetes, cardiovascular complications, death).
  4. More specifically, the NEJM does regularly publish studies about pseudoscience or bogus treatments. See for instance this blog post [11] of Science-Based Medicine on Acupuncture Pseudoscience in the New England Journal of Medicine (which, by the way, is just a review). A publication in the NEJM doesn’t guarantee it isn’t rubbish.
  5. Importantly, the NEJM has the highest proportion of trials (RCTs) with sole industry support (35%, compared to 7% in the BMJ) [12]. On several occasions I have discussed these conflicts of interest and their impact on the outcome of studies [13, 14; see also 15, 16]. In their study, Gøtzsche and his colleagues from the Nordic Cochrane Centre [12] also showed that industry-supported trials were more frequently cited than trials with other types of support, and that omitting them from the impact factor calculation decreased journal impact factors. The impact factor decrease was as much as 15% for the NEJM (versus 1% for the BMJ in 2007)! For the journals that provided data, income from the sales of reprints contributed 3% and 41% of the total income for the BMJ and The Lancet, respectively.
    A recent study, co-authored by Ben Goldacre (MD & science writer) [17], confirms that funding by the pharmaceutical industry is associated with high numbers of reprint orders. Again, only the BMJ and the Lancet provided all the necessary data.
  6. Finally, and most relevant to the topic, is a study [18], also discussed at Retraction Watch [19], showing that articles in journals with higher impact factors are more likely to be retracted, and, surprise surprise, the NEJM clearly stands on top. Although other reasons like higher readership and scrutiny may also play a role [20], it conflicts with Katan’s idea that “publishing in the NEJM is the best guarantee something is true”.

I wasn’t aware of the latter study and would like to thank DrVes and Ivan Oransky for responding to my crowdsourcing on Twitter.

References

  1. Sugary Drinks as the Culprit in Childhood Obesity? a RCT among Primary School Children (laikaspoetnik.wordpress.com)
  2. de Ruyter JC, Olthof MR, Seidell JC, & Katan MB (2012). A trial of sugar-free or sugar-sweetened beverages and body weight in children. The New England journal of medicine, 367 (15), 1397-406 PMID: 22998340
  3. Wim Köhler. Eén kilo lichter. NRC, Zaterdag 22-09-2012 (http://archief.nrc.nl/)
  4. Martijn Katan. Col hors Catégorie [Dutch], published in the NRC, 20 October 2012 (www.mkatan.nl)
  5. Hans van Maanen. Suiker uit fris, De Volkskrant, 29 september 2012 (freely accessible at http://www.vanmaanen.org/)
  6. Ioannidis, J. (2005). Why Most Published Research Findings Are False PLoS Medicine, 2 (8) DOI: 10.1371/journal.pmed.0020124
  7. Changes to the protocol http://clinicaltrials.gov/archive/NCT00893529/2011_02_24/changes
  8. Publication Bias. The Cochrane Collaboration open learning material (www.cochrane-net.org)
  9. de Ruyter JC, Olthof MR, Kuijper LD, & Katan MB (2012). Effect of sugar-sweetened beverages on body weight in children: design and baseline characteristics of the Double-blind, Randomized INtervention study in Kids. Contemporary clinical trials, 33 (1), 247-57 PMID: 22056980
  10. Pereira, T., Horwitz, R.I., & Ioannidis, J.P.A. (2012). Empirical Evaluation of Very Large Treatment Effects of Medical Interventions. JAMA: The Journal of the American Medical Association, 308 (16) DOI: 10.1001/jama.2012.13444
  11. Acupuncture Pseudoscience in the New England Journal of Medicine (sciencebasedmedicine.org)
  12. Lundh, A., Barbateskovic, M., Hróbjartsson, A., & Gøtzsche, P. (2010). Conflicts of Interest at Medical Journals: The Influence of Industry-Supported Randomised Trials on Journal Impact Factors and Revenue – Cohort Study PLoS Medicine, 7 (10) DOI: 10.1371/journal.pmed.1000354
  13. One Third of the Clinical Cancer Studies Report Conflict of Interest (laikaspoetnik.wordpress.com)
  14. Merck’s Ghostwriters, Haunted Papers and Fake Elsevier Journals (laikaspoetnik.wordpress.com)
  15. Lexchin, J. (2003). Pharmaceutical industry sponsorship and research outcome and quality: systematic review BMJ, 326 (7400), 1167-1170 DOI: 10.1136/bmj.326.7400.1167
  16. Smith R (2005). Medical journals are an extension of the marketing arm of pharmaceutical companies. PLoS medicine, 2 (5) PMID: 15916457 (free full text at PLOS)
  17. Handel, A., Patel, S., Pakpoor, J., Ebers, G., Goldacre, B., & Ramagopalan, S. (2012). High reprint orders in medical journals and pharmaceutical industry funding: case-control study BMJ, 344 (jun28 1) DOI: 10.1136/bmj.e4212
  18. Fang, F., & Casadevall, A. (2011). Retracted Science and the Retraction Index Infection and Immunity, 79 (10), 3855-3859 DOI: 10.1128/IAI.05661-11
  19. Is it time for a Retraction Index? (retractionwatch.wordpress.com)
  20. Agrawal A, & Sharma A (2012). Likelihood of false-positive results in high-impact journals publishing groundbreaking research. Infection and immunity, 80 (3) PMID: 22338040

——————————————–

* Addendum: my (unpublished) letter to the NRC (translated from Dutch)

Tour de France.
After the NRC had earlier devoted two full pages of praise to Katan’s new study, Katan felt it necessary to repeat the exercise in his own column. Referring to your own work is allowed, even in a column, but then we as readers should learn something from it. So what is the message of this piece, “Col hors Catégorie”? It mainly describes the long road to getting a scientific study published in a top journal, in this case the New England Journal of Medicine (NEJM), “the Tour de France among medical journals”. The piece ends with a tackle on a journalist “who thought he knew better”. But so what, as long as the whole world is cheering? Very unsporting, because that journalist (van Maanen, Volkskrant) did in fact score on a number of points. Katan’s key point, that an NEJM publication “is the best guarantee that something is true”, can also be seriously disputed. The NEJM does indeed have a high impact factor, a measure of how often articles are cited. However, the NEJM also has the highest article-retraction index. The NEJM also has the highest percentage of industry-sponsored clinical trials, which push up the overall impact factor. In addition, top journals mainly go for “positive results” and “trendy topics”, which encourages publication bias. If we extend the Tour de France comparison: completing this prestigious race does not guarantee that the participants have not used banned substances. Despite the strict doping controls.




#EAHIL2012 CEC 2: Visibility & Impact – Library’s New Role to Enhance Visibility of Researchers

4 07 2012

This week I’m blogging at (and mostly about) the 13th EAHIL conference in Brussels. EAHIL stands for European Association for Health Information and Libraries.

The second Continuing Education Course (CEC) I followed was given by Tiina Heino and Katri Larmo of the Terkko Meilahti Campus Library at the University of Helsinki in Finland.

The full title of the course was Visibility and impact – library’s new role: How the library can support the researcher to get visibility and generate impact to researcher’s work. You can read the abstract here.

The hands-on workshop mainly concentrated on the social bookmarking sites Connotea and Mendeley, and on Altmetric.

Furthermore, we got information on CiteULike, ORCID, Faculty of 1000 Posters and Pinterest. Services developed in Terkko, such as ScholarChart and TopCited Articles, were also briefly demonstrated.

What I especially liked about the hands-on session is that the tutors had prepared a wikispace with all the information and links on the main page (https://visibility2012.wikispaces.com) and a separate page for each participant to edit (here is my page). You could add links to your created accounts and embed widgets for Mendeley.

There was sufficient time to practice and try the tools. And despite the great number of participants there was ample room for questions (& even for making a blog draft ;)).

The main message of the tutors is that the process of publishing scientific research doesn’t end with publishing the article: what happens after the research has been published is equally important. Visibility and impact in the scientific community and in society are crucial for moving the research forward, as well as for getting research funding and promoting the researcher’s career. The figure below (taken from the presentation) visualizes this process.

The tutors discussed ORCID (Open Researcher and Contributor ID), which will be introduced later this year. It is meant to solve the author name ambiguity problem in scholarly communication through a central registry of unique identifiers for each author (because author names can’t be used to reliably identify all scholarly authors). It will be possible for authors to create, manage and share their ORCID record without a membership fee. For further information see several publications and presentations by Martin Fenner. I found this one during the course while browsing Mendeley.
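
As a small technical aside: an ORCID iD is a 16-character identifier (e.g. 0000-0002-1825-0097, the example iD from the ORCID documentation) whose last character is a checksum. Below is a minimal sketch of the check-digit calculation, assuming the publicly documented MOD 11-2 scheme.

```python
# Sketch: the MOD 11-2 check-digit algorithm used by ORCID iDs
# (assumption: this follows the publicly documented ISO 7064 / ISNI scheme).
def orcid_check_digit(base_digits: str) -> str:
    """Return the check character for the first 15 digits of an ORCID iD."""
    total = 0
    for d in base_digits:
        total = (total + int(d)) * 2
    result = (12 - total % 11) % 11
    return "X" if result == 10 else str(result)

example = "0000-0002-1825-0097"          # example iD from the ORCID documentation
digits = example.replace("-", "")
print(orcid_check_digit(digits[:15]))    # -> '7', matching the last character
```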

Once published, the author’s work can be promoted using bookmarking tools like CiteULike, Connotea and Mendeley. You can easily register for Connotea and Mendeley using your Facebook account. These social bookmarking tools are also useful for networking, i.e. to discover individuals and groups with the same field of interest. It is easy to synchronize your Mendeley account with your CiteULike account.

Mendeley is available in a desktop and a web version. The web version offers a public profile for researchers, a catalog of documents, and collaborative groups (the cloud side of Mendeley). The desktop version of Mendeley is especially suited for reference management and organizing your PDFs. That said, Mendeley seems most suitable for serendipitous use (clicking and importing a reference you happen to see and like) and less useful for managing and deduplicating large numbers of records, e.g. for a systematic review.
Also (during the course) it was not possible to import several PubMed records at once into either CiteULike or Mendeley.

What struck me when I tried Mendeley is that there were many small or dead groups. A search for “cochrane”, for instance, yielded one large group, Cochrane QES Register, owned by Andrew Booth, and 3 groups with one member each (thus not really groups), with 0 (!) to 6 papers each! It looks like people are trying Mendeley and other tools just for a short while. Indeed, most papers I looked up in PubMed were not bookmarked at all. It makes you wonder how widespread the use of these bookmarking tools is. It probably doesn’t help that there are so many tools with different purposes and possibilities.

Another tool that we tried was Altmetric. This is a free bookmarklet for scholarly articles which allows you to track the conversations around scientific articles online. It shows the tweets, blog posts, Google+ and Facebook mentions, and the numbers of bookmarks on Mendeley, CiteULike and Connotea.
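
For readers who prefer a script over the bookmarklet: Altmetric also offers a public API that returns such counts as JSON. The sketch below is my own illustration; it assumes the free, key-less endpoint at api.altmetric.com is available for the DOI you query (here the DOI of the Bastian et al paper mentioned below).

```python
# Sketch: querying the public Altmetric API for one article (assumed free endpoint).
import requests

doi = "10.1371/journal.pmed.1000326"  # Bastian et al., the paper discussed below
resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}", timeout=10)

if resp.status_code == 200:
    data = resp.json()
    print(data.get("title"))
    # print every "mentioned by ..." counter the API returns for this DOI
    for key, value in sorted(data.items()):
        if key.startswith("cited_by_"):
            print(f"{key}: {value}")
else:
    print(f"No Altmetric record found (HTTP {resp.status_code})")
```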

I tried the tool on a paper I blogged about, i.e. Seventy-Five Trials and Eleven Systematic Reviews a Day: How Will We Ever Keep Up?

The bookmarklet showed the tweets and the blogposts mentioning the paper.

Indeed, Altmetric did correctly refer to my blog (even to 2 posts).

I liked Altmetric*, but saying that it is suitable for scientific metrics is a step too far. For people interested in this topic I would like to refer (again) to a post by Martin Fenner on altmetrics in general. He stresses that “usage metrics” have their limitations because of their proneness to “gaming” (cheating).

But the current workshop didn’t address the shortcomings of the tools, for it was meant as a first practical acquaintance with these web 2.0 tools.

For the other tools (Faculty of 1000 Posters, Pinterest) and the services developed in Terkko, such as ScholarChart and TopCited Articles, see the wikipage and the presentation.

*Coincidentally I’m preparing a post on handy Chrome extensions to look for tweets about a webpage. Altmetric is another tool which seems very suitable for this purpose.






The Scatter of Medical Research and What to do About it.

18 05 2012

Paul Glasziou, GP and professor in Evidence Based Medicine, co-authored a new article in the BMJ [1]. Similar to another paper [2] I discussed before [3], this paper deals with the difficulty for clinicians of staying up to date with the literature. But where the previous paper [2, 3] highlighted the mere increase in the number of research articles over time, the current paper looks at the scatter of randomized clinical trials (RCTs) and systematic reviews (SRs) across the different journals publishing them in one year (2009), as indexed in PubMed.

Hoffmann et al analyzed 7 specialties and 9 sub-specialties that are considered to make the leading contributions to the burden of disease in high-income countries.

They followed a relatively straightforward method for identifying the publications. Each search string consisted of a MeSH term (controlled term) to identify the selected disease or disorders, a publication type [pt] to identify the type of study, and the year of publication. For example, the search strategy for randomized trials in cardiology was: “heart diseases”[MeSH] AND randomized controlled trial[pt] AND 2009[dp]. (When searching “heart diseases” as a MeSH term, narrower terms are also searched.) Meta-analysis[pt] was used to identify systematic reviews.
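
For those who want to reproduce such counts, the same search strings can be sent to PubMed programmatically via the NCBI E-utilities. A minimal sketch using Biopython (fill in your own e-mail address, as NCBI requests; counts retrieved today will of course differ from the 2009-era figures in the paper):

```python
# Sketch: counting PubMed records for one of the search strings above via E-utilities.
from Bio import Entrez

Entrez.email = "your.name@example.org"  # placeholder; NCBI asks for a real address

query = '"heart diseases"[MeSH] AND randomized controlled trial[pt] AND 2009[dp]'
handle = Entrez.esearch(db="pubmed", term=query, retmax=0)
record = Entrez.read(handle)
handle.close()

print(f"{record['Count']} records for: {query}")
```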

Using this approach, Hoffmann et al found 14,343 RCTs and 3,214 SRs published in 2009 in the field of the selected (sub)specialties. There was a clear scatter across journals, but this scatter varied considerably among specialties:

“Otolaryngology had the least scatter (363 trials across 167 journals) and neurology the most (2770 trials across 896 journals). In only three subspecialties (lung cancer, chronic obstructive pulmonary disease, hearing loss) were 10 or fewer journals needed to locate 50% of trials. The scatter was less for systematic reviews: hearing loss had the least scatter (10 reviews across nine journals) and cancer the most (670 reviews across 279 journals). For some specialties and subspecialties the papers were concentrated in specialty journals; whereas for others, few of the top 10 journals were a specialty journal for that area.
Generally, little overlap occurred between the top 10 journals publishing trials and those publishing systematic reviews. The number of journals required to find all trials or reviews was highly correlated (r=0.97) with the number of papers for each specialty/ subspecialty.”

Previous work had already suggested that this scatter of research has a long tail. Half of the publications appear in a minority of journals, whereas the remaining articles are scattered among many journals (see figure below).

Click to enlarge and see the legend at BMJ 2012;344:e3223 [CC]

The good news is that SRs are less scattered and that general journals appear more often in the top 10 journals publishing SRs. Indeed for 6 of the 7 specialties and 4 of the 9 subspecialties, the Cochrane Database of Systematic Reviews had published the highest number of systematic reviews, publishing between 6% and 18% of all the systematic reviews published in each area in 2009. The bad news is that even keeping up to date with SRs seems a huge, if not impossible, challenge.

In other words, it is not sufficient for clinicians to rely on personal subscriptions to a few journals in their specialty (which is common practice). Hoffmann et al suggest several solutions to help clinicians cope with the increasing volume and scatter of research publications.

  • a central library of systematic reviews (but apparently the Cochrane Library fails to fulfill such a role according to the authors, because many reviews are out of date and are perceived as less clinically relevant)
  • a registry of planned and completed systematic reviews, such as PROSPERO (this makes it easier to locate SRs and reduces bias)
  • Syntheses of evidence and synopses, like the ACP Journal Club, which summarizes the best evidence in internal medicine
  • Specialised databases that collate and critically appraise randomized trials and systematic reviews, like www.pedro.org.au for physical therapy. In my personal experience, however, this database is often out of date and not comprehensive
  • Journal scanning services like EvidenceUpdates (from mcmaster.ca), which scans over 120 journals, filters articles on the basis of quality, has practising clinicians rate them for relevance and newsworthiness, and makes them available as email alerts and in a searchable database. I use this service too, but besides the fact that not all specialties are covered, the rating of evidence may not always be objective (see previous post [4])
  • The use of social media tools to alert clinicians to important new research.

Most of these solutions are (long) existing solutions that do not or only partly help to solve the information overload.

I was surprised that the authors didn’t propose the use of personalized alerts. PubMed’s My NCBI feature allows you to create automatic email alerts on a topic and to subscribe to electronic tables of contents (which could include ACP Journal Club). Suppose that a physician browses 10 journals roughly covering 25% of the trials. He or she does not need to read all the other journals from cover to cover to avoid missing one potentially relevant trial. Instead it is far more efficient to perform a topic search to filter relevant studies from journals that seldom publish trials on the topic of interest. One could even use the search of Hoffmann et al to achieve this.* Although in reality, most clinical researchers will have narrower fields of interest than all studies about endocrinology and neurology.

At our library we are working on creating deduplicated, easy-to-read alerts that collate tables of contents of certain journals with topic (and author) searches in PubMed, EMBASE and other databases. There are existing tools that do the same.

Another way to reduce the individual work (reading) load is to organize journal clubs or, even better, regular CATs (critically appraised topics). In the Netherlands, CATs are a compulsory item for residents. A few doctors do the work for many. Usually they choose topics that are clinically relevant (or for which the evidence is unclear).

The authors briefly mention that their search strategy might have missed some eligible papers and included some that are not truly RCTs or SRs, because they relied on PubMed’s publication type to retrieve RCTs and SRs. For systematic reviews this may be a greater problem than recognized, for the authors used meta-analysis[pt] to identify systematic reviews. Unfortunately PubMed has no publication type for systematic reviews, but it may be clear that there are many more systematic reviews than meta-analyses. Possibly systematic reviews might even have a different scatter pattern than meta-analyses (i.e. the latter might be preferentially included in core journals).

Furthermore, not all meta-analyses and systematic reviews are reviews of RCTs (thus it is not completely fair to compare MAs with RCTs only). On the other hand, it is an omission of this study (not discussed) that only interventions are considered. Nowadays physicians have many other questions than those related to therapy, like questions about prognosis, harm and diagnosis.

I did a little, imperfect search just to see whether the use of search terms other than meta-analysis[pt] would have any influence on the outcome. I searched for (1) meta-analysis[pt] and (2) systematic review[tiab] (title and abstract) for papers about endocrine diseases. Then I subtracted 1 from 2 (to analyse the systematic reviews not indexed as meta-analysis[pt]).

Thus:

(ENDOCRINE DISEASES[MESH] AND SYSTEMATIC REVIEW[TIAB] AND 2009[DP]) NOT META-ANALYSIS[PT]

I analyzed the top 10/11 journals publishing these study types.
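
For what it is worth, such a tally can also be scripted. The sketch below (Biopython again; shown purely as an illustration of the approach, not as the exact method used for this post) fetches the MEDLINE records for the second search and counts the journals:

```python
# Sketch: tallying the journals that published the reviews retrieved by the search above.
from collections import Counter
from Bio import Entrez, Medline

Entrez.email = "your.name@example.org"  # placeholder; NCBI asks for a real address

query = ('endocrine diseases[MeSH] AND systematic review[tiab] '
         'AND 2009[dp] NOT meta-analysis[pt]')
handle = Entrez.esearch(db="pubmed", term=query, retmax=500)
ids = Entrez.read(handle)["IdList"]
handle.close()

handle = Entrez.efetch(db="pubmed", id=",".join(ids), rettype="medline", retmode="text")
journals = Counter(rec.get("TA", "unknown") for rec in Medline.parse(handle))
handle.close()

for journal, n in journals.most_common(11):
    print(f"{n:3d}  {journal}")
```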

This little experiment suggests that:

  1. the precise scatter might differ per search: apparently the systematic review[tiab] search yielded different top 10/11 journals (for this sample) than the meta-analysis[pt] search (partially because Cochrane systematic reviews apparently don’t mention “systematic review” in the title and abstract?).
  2. the authors underestimate the number of systematic reviews: simply searching for systematic review[tiab] already found approx. 50% additional systematic reviews compared to meta-analysis[pt] alone.
  3. As expected (by me at least), many of the SRs and MAs were NOT dealing with interventions, e.g. see the first 5 hits (out of 108 and 236 respectively).
  4. Together these findings indicate that the true information overload is far greater than shown by Hoffmann et al (not all systematic reviews are found, and of all available study designs only RCTs are searched).
  5. On the other hand this indirectly shows that SRs are a better way to keep up to date than suggested: SRs also summarize non-interventional research (the ratio of SRs of RCTs to individual RCTs is much lower than suggested).
  6. It also means that the role of the Cochrane systematic reviews in aggregating RCTs is underestimated by the published graphs (the MA[pt] set is diluted with non-RCT systematic reviews; thus the proportion of the Cochrane SRs among the interventional MAs becomes larger).

Well anyway, these imperfections do not contradict the main point of this paper: that trials are scattered across hundreds of general and specialty journals, and that “systematic reviews” (or really meta-analyses) do reduce the extent of scatter, but are still widely scattered, and mostly in different journals from those of the randomized trials.

Indeed, personal subscriptions to journals seem insufficient for keeping up to date.
Besides supplementing subscriptions with methods such as journal scanning services, I would recommend the use of personalized alerts from PubMed and several prefiltered sources, including an EBM search engine like TRIP (www.tripdatabase.com/).

*but I would broaden it to find all aggregate evidence, including ACP, Clinical Evidence, syntheses and synopses, not only meta-analyses.

**I do appreciate that one of the co-authors is a medical librarian: Sarah Thorning.

References

  1. Hoffmann, Tammy, Erueti, Chrissy, Thorning, Sarah, & Glasziou, Paul (2012). The scatter of research: cross sectional comparison of randomised trials and systematic reviews across specialties. BMJ, 344 DOI: 10.1136/bmj.e3223
  2. Bastian, H., Glasziou, P., & Chalmers, I. (2010). Seventy-Five Trials and Eleven Systematic Reviews a Day: How Will We Ever Keep Up? PLoS Medicine, 7 (9) DOI: 10.1371/journal.pmed.1000326
  3. How will we ever keep up with 75 trials and 11 systematic reviews a day (laikaspoetnik.wordpress.com)
  4. Experience versus Evidence [1]. Opioid Therapy for Rheumatoid Arthritis Pain. (laikaspoetnik.wordpress.com)




Friday Foolery #49: The Shortest Abstract Ever! [2]

30 03 2012

In a previous Friday Foolery post I mentioned what I thought was the shortest abstract ever.

 “Probably not”.

But a reader (Trollface) pointed out in a comment that there was an even shorter (and much older) abstract of a paper in the Bulletin of the Seismological Society of America. It was published in 1974.

The abstract simply says: Yes.

It could only be beaten by an abstract saying: “No”, “!”, “?” or a blank one.





Jeffrey Beall’s List of Predatory, Open-Access Publishers, 2012 Edition

19 12 2011

Perhaps you remember that I previously wrote [1] about non-existing and/or low-quality scammy open access journals. I specifically wrote about the Medical Science Journals of the http://www.sciencejournals.cc/ series, which comprises 45 titles, none of which has published any article yet.

Another blogger, David M [2], also had negative experiences with fake peer review invitations from sciencejournals. He even noticed plagiarism.

Later I occasionally found other posts about open access spam, like the posts of Per Ola Kristensson [3] (specifically about the Bentham, Hindawi and InTech OA publishers), of Peter Murray-Rust [4], a chemist interested in OA (about spam journals and conferences, specifically about Scientific Research Publishing), and of Alan Dove PhD [5] (specifically about The Journal of Computational Biology and Bioinformatics Research (JCBBR) published by Academic Journals).

But now it appears that there is an entire list of “Predatory, Open-Access Publishers”. This list was created by Jeffrey Beall, academic librarian at the University of Colorado Denver. He just updated the list for 2012 here (PDF-format).

According to Jeffrey, predatory open-access publishers

are those that unprofessionally exploit the author-pays model of open-access publishing (Gold OA) for their own profit. Typically, these publishers spam professional email lists, broadly soliciting article submissions for the clear purpose of gaining additional income. Operating essentially as vanity presses, these publishers typically have a low article acceptance threshold, with a false-front or non-existent peer review process. Unlike professional publishing operations, whether subscription-based or ethically-sound open access, these predatory publishers add little value to scholarship, pay little attention to digital preservation, and operate using fly-by-night, unsustainable business models.

Jeffrey recommends not doing business with the following (illegitimate) publishers, including submitting article manuscripts, serving on editorial boards, buying advertising, etc. According to Jeffrey, “there are numerous traditional, legitimate journals that will publish your quality work for free, including many legitimate, open-access publishers”.

(For sake of conciseness, I only describe the main characteristics, not always using the same wording; please see the entire list for the full descriptions.)

Watchlist: publishers that may show some characteristics of predatory, open-access publishers
  • Hindawi: way too many journals than can be properly handled by one publisher
  • MedKnow Publications: vague business model. It charges for the PDF version
  • PAGEPress: many dead links, a prominent link to PayPal
  • Versita Open: paid subscription for print form ... unclear business model

An asterisk (*) indicates that the publisher is appearing on this list for the first time.

How complete and reliable is this list?

Clearly, this list is quite exhaustive. Jeffrey did a great job listing many dodgy OA journals. We should watch (many of) these OA publishers with caution. Another good thing is that the list is updated annually.

(http://www.sciencejournals.cc/, described in my previous post, is not (yet) on the list ;) but I will inform Jeffrey.)

Personally, I would have preferred a distinction between really bogus or spammy journals and journals that merely seem to have “too many journals to properly handle” or that ask (too much) money for subscriptions or from authors. The scientific content may still be good (enough).

Furthermore, I would rather see a neutral description of what exactly is wrong with a journal. Especially because “Beall’s list” is a list and not a blog post (or is it?). Sometimes the description doesn’t convince me that the journal is really bogus or predatory.

Examples of subjective portrayals:

  • Dove Press: This New Zealand-based medical publisher boasts high-quality appearing journals and articles, yet it demands a very high author fee for publishing articles. Its fleet of journals is large, bringing into question how it can properly fulfill its promise to quickly deliver an acceptance decision on submitted articles.
  • Libertas Academica: “The tag line under the name on this publisher’s page is “Freedom to research.” It might better say “Freedom to be ripped off.””
  • Hindawi: “This publisher has way too many journals than can be properly handled by one publisher, I think (…)”

I do like funny posts, but only if it is clear that the post is intended to be funny. Like the one by Alan Dove PhD about JCBBR.

JCBBR is dedicated to increasing the depth of research across all areas of this subject.

Translation: we’re launching a new journal for research that can’t get published anyplace else.

The journal welcomes the submission of manuscripts that meet the general criteria of significance and scientific excellence in this subject area.

We’ll take pretty much any crap you excrete.

Hat tip: Catherine Arnott Smith, PhD, at the MedLib-L list.

  1. I Got the Wrong Request from the Wrong Journal to Review the Wrong Piece. The Wrong kind of Open Access Apparently, Something Wrong with this Inherently… (laikaspoetnik.wordpress.com)
  2. A peer-review phishing scam (blog.pita.si)
  3. Academic Spam and Open Access Publishing (blog.pokristensson.com)
  4. What’s wrong with Scholarly Publishing? New Journal Spam and “Open Access” (blogs.ch.cam.ac.uk)
  5. From the Inbox: Journal Spam (alandove.com)
  6. Beall’s List of Predatory, Open-Access Publishers. 2012 Edition (http://metadata.posterous.com)
  7. Silly Sunday #42 Open Access Week around the Globe (laikaspoetnik.wordpress.com)




Friday Foolery #44. The Shortest Abstract Ever?

2 12 2011

This is the shortest abstract I’ve ever seen:

“probably not”

With many thanks to Michelynn McKnight, PhD, AHIP, Associate Professor, School of Library and Information Science, Louisiana State University, who put it on the MEDLIB-L listserv, saying: “Not exactly structured …. but a great laugh!”

According to Zemanta (articles related to this post) Future Twit also blogged about it.






FUTON Bias. Or Why Limiting to Free Full Text Might not Always be a Good Idea.

8 09 2011

A few weeks ago I was discussing possible relevant papers for the Twitter Journal Club (hashtag #TwitJC), a successful initiative on Twitter, which I have discussed previously here and here [7, 8].

I proposed an article that appeared behind a paywall. Annemarie Cunningham (@amcunningham) immediately ran the idea down, stressing that open access (OA) is a prerequisite for the TwitJC journal club.

One of the TwitJC organizers, Fi Douglas (@fidouglas on Twitter), argued that using paid-for journals would defeat the objective that #TwitJC is open to everyone. I can imagine that fee-based articles could set too high a threshold for many doctors. In addition, I sympathize with promoting OA.

However, I disagree with Annemarie that an OA (or rather free) paper is a prerequisite if you really want to talk about what might impact on practice. On the contrary, limiting to free full text (FFT) papers in PubMed might lead to bias: picking the “low-hanging fruit of convenience” might mean that the paper isn’t representative and/or doesn’t reflect the current best evidence.

But is there evidence for my theory that selecting FFT papers might lead to bias?

Let’s first look at the extent of the problem. What percentage of papers do we miss by limiting to free-access papers?

A survey in PLoS ONE by Björk et al [1] found that one in five peer-reviewed research papers published in 2008 were freely available on the internet. Overall, 8.5% of the articles published in 2008 (and 13.9% in Medicine) were freely available at the publishers’ sites (gold OA). For an additional 11.9%, free manuscript versions could be found via the green route, i.e. copies in repositories and on websites (7.8% in Medicine).
As a commenter rightly stated, the lag time is also important, as we would like to have immediate access to recently published research, yet some publishers (37%) impose an access embargo of 6-12 months or more. (These papers were largely missed, as the 2008 OA status was assessed in late 2009.)

PLOS 2009

The strength of the paper is that it measures OA prevalence on an article basis, not by calculating the share of journals that are OA: an OA journal generally contains a lower number of articles.
The authors randomly sampled from 1.2 million articles using the advanced search facility of Scopus. They measured what share of OA copies the average researcher would find using Google.

Another paper, published in J Med Libr Assoc (2009) [2] and using methods similar to the PLoS survey, examined the state of open access (OA) specifically in the biomedical field. Because of its broad coverage and popularity in the biomedical field, PubMed was chosen to collect the target sample of 4,667 articles. Matsubayashi et al used four different databases and search engines to identify full-text copies. The authors reported an OA percentage of 26.3% for peer-reviewed articles (70% of all articles), which is comparable to the results of Björk et al. More than 70% of the OA articles were provided through journal websites. The percentages of green OA articles from the websites of authors or in institutional repositories were quite low (5.9% and 4.8%, respectively).

In their discussion of the findings of Matsubayashi et al, Björk et al [1] quickly assessed the OA status in PubMed by using the new “link to Free Full Text” search facility. First they searched for all “journal articles” published in 2005 and then repeated this with the further restriction of “link to FFT”. The PubMed OA percentages obtained this way were 23.1% for 2005 and 23.3% for 2008.
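
Such a quick assessment is easy to repeat. A sketch of the calculation, assuming that PubMed’s current free full text[sb] subset filter corresponds to the “link to Free Full Text” limit described above:

```python
# Sketch: estimating the share of free full-text (FFT) articles for one publication year.
from Bio import Entrez

Entrez.email = "your.name@example.org"  # placeholder; NCBI asks for a real address

def pubmed_count(query: str) -> int:
    handle = Entrez.esearch(db="pubmed", term=query, retmax=0)
    count = int(Entrez.read(handle)["Count"])
    handle.close()
    return count

year = "2008"
all_articles = pubmed_count(f"journal article[pt] AND {year}[dp]")
free_articles = pubmed_count(f"journal article[pt] AND {year}[dp] AND free full text[sb]")

print(f"{year}: {free_articles}/{all_articles} = "
      f"{100 * free_articles / all_articles:.1f}% free full text")
```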

This proportion of biomedical OA papers is gradually increasing. A chart in Nature’s News Blog [9] shows that the proportion of freely available papers indexed in PubMed each year has increased from 23% in 2005 to above 28% in 2009.
(Methods are not shown, though. The 2008 figure is higher than that of Björk et al, who noticed little difference with 2005. The data for this chart, however, are from David Lipman, NCBI director and driving force behind the digital OA archive PubMed Central.)
Again, because of the embargo periods, not all literature is immediately available at the time that it is published.

In summary, we would miss about 70% of biomedical papers by limiting to FFT papers. Because of the embargo periods, we would miss an even larger proportion of recently published papers.

Of course, the key question is whether ignoring relevant studies not available in full text really matters.

Reinhard Wentz of the Imperial College Library and Information Service already argued in a visionary 2002 Lancet letter [3] that the availability of full-text articles on the internet might have created a new form of bias: FUTON bias (Full Text On the Net bias).

Wentz reasoned that FUTON bias will not affect researchers who are used to comprehensive searches of published medical studies, but that it will affect staff and students with limited experience in doing searches and that it might have the same effect in daily clinical practice as publication bias or language bias when doing systematic reviews of published studies.

Wentz also hypothesized that FUTON bias (together with “no abstract available” (NAA) bias) will affect the visibility and the impact factor of OA journals. He makes a reasonable case that NAA bias will affect publications on new, peripheral, and under-discussion subjects more than established topics covered in substantive reports.

The study of Murali et al [4] published in Mayo Proceedings 2004 confirms that the availability of journals on MEDLINE as FUTON or NAA affects their impact factor.

Of the 324 journals screened by Murali et al, 38.3% were FUTON, 19.1% NAA, and 42.6% had abstracts only. The mean impact factor was 3.24 (±0.32), 1.64 (±0.30), and 0.14 (±0.45), respectively! The authors confirmed this finding by showing a difference in impact factors for journals available in both the pre- and the post-internet era (n=159).

Murali et al informally questioned many physicians and residents at multiple national and international meetings in 2003. These doctors uniformly admitted relying on FUTON articles on the web to answer a sizable proportion of their questions. A study by Carney et al (2004) [5] showed that 98% of the surveyed US primary care physicians used the internet as a resource for clinical information at least once a week and mostly used FUTON articles to aid decisions about patient care or patient education and medical student or resident instruction.

Murali et al therefore conclude that failure to consider FUTON bias may not only affect a journal’s impact factor, but could also limit consideration of the medical literature by ignoring relevant for-fee articles, and thereby influence medical education in a way akin to publication or language bias.

This proposed effect of the FFT limit on citation retrieval for clinical questions was examined in a more recent study (2008), published in J Med Libr Assoc [6].

Across all 4 questions based on a research agenda for physical therapy, the FFT limit reduced the number of citations to 11.1% of the total number of citations retrieved without the FFT limit in PubMed.

Even more important, high-quality evidence such as systematic reviews and randomized controlled trials were missed when the FFT limit was used.

For example, when searching without the FFT limit, 10 systematic reviews of RCTs were retrieved against one when the FFT limit was used. Likewise when searching without the FFT limit, 28 RCTs were retrieved and only one was retrieved when the FFT limit was used.

The proportion of missed studies (approx. 90%) is higher than in the studies mentioned above. Possibly this is because real searches were tested and only relevant clinical studies were considered.

The authors rightly conclude that consistently missing high-quality evidence when searching clinical questions is problematic because it undermines the process of Evidence-Based Practice. Krieger et al finally conclude:

“Librarians can educate health care consumers, scientists, and clinicians about the effects that the FFT limit may have on their information retrieval and the ways it ultimately may affect their health care and clinical decision making.”

It is the hope of this librarian that she did a little education in this respect and clarified the point that limiting to free full text might not always be a good idea. Especially if the aim is to critically appraise a topic, to educate or to discuss current best medical practice.

References

  1. Björk, B., Welling, P., Laakso, M., Majlender, P., Hedlund, T., & Guðnason, G. (2010). Open Access to the Scientific Journal Literature: Situation 2009 PLoS ONE, 5 (6) DOI: 10.1371/journal.pone.0011273
  2. Matsubayashi, M., Kurata, K., Sakai, Y., Morioka, T., Kato, S., Mine, S., & Ueda, S. (2009). Status of open access in the biomedical field in 2005 Journal of the Medical Library Association : JMLA, 97 (1), 4-11 DOI: 10.3163/1536-5050.97.1.002
  3. Wentz, R. (2002). Visibility of research: FUTON bias. The Lancet, 360 (9341), 1256-1256 DOI: 10.1016/S0140-6736(02)11264-5
  4. Murali NS, Murali HR, Auethavekiat P, Erwin PJ, Mandrekar JN, Manek NJ, & Ghosh AK (2004). Impact of FUTON and NAA bias on visibility of research. Mayo Clinic proceedings. Mayo Clinic, 79 (8), 1001-6 PMID: 15301326
  5. Carney PA, Poor DA, Schifferdecker KE, Gephart DS, Brooks WB, & Nierenberg DW (2004). Computer use among community-based primary care physician preceptors. Academic medicine : journal of the Association of American Medical Colleges, 79 (6), 580-90 PMID: 15165980
  6. Krieger, M., Richter, R., & Austin, T. (2008). An exploratory analysis of PubMed’s free full-text limit on citation retrieval for clinical questions Journal of the Medical Library Association : JMLA, 96 (4), 351-355 DOI: 10.3163/1536-5050.96.4.010
  7. The #TwitJC Twitter Journal Club, a new Initiative on Twitter. Some Initial Thoughts. (laikaspoetnik.wordpress.com)
  8. The Second #TwitJC Twitter Journal Club (laikaspoetnik.wordpress.com)
  9. How many research papers are freely available? (blogs.nature.com)




I Got the Wrong Request from the Wrong Journal to Review the Wrong Piece. The Wrong kind of Open Access Apparently, Something Wrong with this Inherently…

27 08 2011

Meanwhile you might want to listen to “Wrong” (Depeche Mode)


Yesterday I screened my spam folder. Among all the male enhancement and lottery winner announcements, and the phishing mails for my bank account, there was an invitation to peer review a paper in “SCIENCE JOURNAL OF PATHOLOGY”.

Such an invitation doesn’t belong in the spam folder, does it? So I had a closer look and quickly screened the letter.

I don’t know what alarmed me first: the odd hard returns, the journal using a Gmail address, an invitation on a topic (autism) I knew nothing about, an abstract that didn’t make sense and had nothing to do with pathology, or the odd style of the letter: the informal but impersonal introduction (“How are you? I am sure you are busy with many activities right now”) combined with turgid phrasing (“the paper addresses issues of value to our broad-based audience, and that it cuts through the thick layers of theory and verbosity for them and makes sense of it all in a clean, cohesive manner”) and some misspellings. And I had never before received an invitation from an editor that opened with the impersonal “Colleagues”…

But still it was odd. Why would someone take the trouble of writing such an invitation letter? For what purpose? And apparently the person did know that I was a scientist who does (or is able to) peer review medical scientific papers. Since the mail was sent to my Laika Gmail account, the most likely source for my contact info must have been my pseudonymous blog. I seldom use this mail account for scientific purposes.

What triggered my caution flag the most was the topic: autism. I immediately linked this to the anti-vaccination quackery movement, which tries to give skeptic bloggers a hard time and fights a personal, not a scientific, battle. I also linked it to #epigate, which was exposed at Liz Ditz’s I Speak of Dreams, a blog with autism as a niche topic.

#Epigate is the story of René Najera, aka @EpiRen, a popular epidemiologist blogger who was asked by his employers to stop engaging in social media after a series of complaints by a Mr. X, who also threatened other pseudonymous commenters/bloggers criticizing his actions. According to Mr. X no one will be safe, because “all i have to do is file a john doe – or hire a cyber investigator. these courses of action cost less than $10,000 each; which means every person who is afraid of the light can be exposed”. In another comment at Liz Ditz’s he actually says he will go after a specific individual: “Anarchic Teapot”.

OK, I admit that the two issues might be totally coincidental, and they probably are, but I’m hypersensitive to people trying to silence me via my employers (because that did happen to me in the past). Anyway, asking a pseudonymous blogger to peer review might be a way to hack the real identity of such a blogger. Perhaps far-fetched, I know.

But what would the “editor” do if I replied and said “yes”?

I became curious. Does The Science Journal of Pathology even exist?

Not in PubMed!!

But the Journal “Science Journal of Pathology” does exist on the Internet…. and John Morrison is the editor. But he is the only one. As a matter of fact he is the entire staff…. There are “search”, “current” and “archives” tabs, but the latter two are EMPTY.

So I would have the dubious honor of reviewing the first paper for this journal?…. ;)

So what could be the purpose of this “journal”? David (with whom I discussed this) and I came up with the following possible explanations:

  1. (First assumption – David) High school kids are looking for someone to peer review (and thus improve) their essays to get better grades.
    (me: school kids could also be replaced by “non-successful or starting scientists”)
  2. (Second assumption – David) Perhaps they are only looking to fill out their sucker lists. If you’ve done a bad review, they may blackmail you in order to keep it quiet.
  3. (me) The journal site might be a cover-up for anything (still no clue what).
  4. (me) The site might get a touch of credibility if the (upcoming) articles are stamped with: “peer-reviewed by…”
  5. (David & me) The scammers target PhDs, or people who the “editors” think have little experience in peer reviewing and/or consider it an honor to do so.
  6. (David & me) It is a phishing scam. You have to register on the journal’s website in order to be able to review or submit, so they get your credentials. My intuition was that they might just try to track down the real name, address and department of a pseudonymous blogger, but I think that David’s assumption is more plausible. David thinks that a couple of people in Nigeria are just after your password for your mail, Amazon, PayPal etc., for “the vast majority of people uses the same password for all logins, which is terribly bad practice, but they don’t want to forget it.”

With David, I would like to warn you about this “very interesting phishing scheme”, which targets academics and especially PhDs. We have no clue as to their real intentions, but it looks scammy.

Besides the fact that the scam may affect you personally, such non-existent and/or low-quality open access journals do a disservice to the existing, high-quality open access journals.

There should be ways to remove such scam websites from the net.

Notes

“Academic scams – my wife just received a version of this for an Autism article, PhD/DPhil/Masters students beware” – a post that mentions receipt of a similar autism-related invitation.




To Retract or Not to Retract… That’s the Question

7 06 2011

In the previous post [1] I discussed that the editors of Science asked for the retraction of a paper linking the XMRV retrovirus to ME/CFS.

The decision of the editors was based on the failure of at least 10 other studies to confirm these findings and on growing support that the results were caused by contamination. When the authors refused to retract their paper, Science issued an Expression of Concern [2].

In my opinion retraction is premature. Science should at least await the results of two multi-center studies that were designed to confirm or disprove the results. These studies will continue anyway… the budget is already allocated.

Furthermore, I can’t suppress the idea that Science asked for a retraction to exonerate themselves for the bad peer review (the paper had serious flaws) and their eagerness to swiftly publish the possibly groundbreaking study.

And what about the other studies linking the XMRV to ME/CFS or other diseases: will these also be retracted?
And what happens in the improbable case that the multi-center studies confirm the 2009 paper? Would Science republish the retracted paper?

Thus in my opinion, it is up to other scientists to confirm or disprove findings published. Remember that falsifiability was Karl Popper’s basic scientific principle. My conclusion was that “fraud is a reason to retract a paper and doubt is not”. 

This is my opinion, but is this opinion shared by others?

When should editors retract a paper? Is fraud the only reason? When should editors issue a letter of concern? Are there guidelines?

Let me first say that even editors don’t agree. Schekman, the editor-in-chief of PNAS, has no direct plans to retract another paper reporting XMRV-like viruses in CFS [3].

Schekman considers it “an unusual situation to retract a paper even if the original findings in a paper don’t hold up: it’s part of the scientific process for different groups to publish findings, for other groups to try to replicate them, and for researchers to debate conflicting results.”

Back at the Virology Blog [4] there was also a vivid discussion about the matter. Prof. Vincent Racaniello gave the following answer in response to a question from a reader:

I don’t have any hard numbers on how often journals ask scientists to retract a paper, only my sense that it is very rare. Author retractions are more frequent, but I’m only aware of a handful of those in a year. I can recall a few other cases in which the authors were asked to retract a paper, but in those cases scientific fraud was involved. That’s not the case here. I don’t believe there is a standard policy that enumerates how such decisions are made; if they exist they are not public.

However, there is a Guideline for editors, the Guidance from the Committee on Publication Ethics (COPE) (PDF) [5]

Ivan Oransky, of the great blog Retraction Watch, linked to it when we discussed reasons for retraction.

With regard to retraction the COPE-guidelines state that journal editors should consider retracting a publication if:

  1. they have clear evidence that the findings are unreliable, either as a result of misconduct (e.g. data fabrication) or honest error (e.g. miscalculation or experimental error)
  2. the findings have previously been published elsewhere without proper cross-referencing, permission or justification (i.e. cases of redundant publication)
  3. it constitutes plagiarism
  4. it reports unethical research

According to the same guidelines journal editors should consider issuing an expression of concern if:

  1. they receive inconclusive evidence of research or publication misconduct by the authors 
  2. there is evidence that the findings are unreliable but the authors’ institution will not investigate the case 
  3. they believe that an investigation into alleged misconduct related to the publication either has not been, or would not be, fair and impartial or conclusive 
  4. an investigation is underway but a judgement will not be available for a considerable time

Thus in the case of the Science XMRV/CFS paper an expression of concern certainly applies (all 4 points), and one might even consider a retraction because the results seem unreliable (point 1). But it is not 100% established that the findings are false. There is only serious doubt…

The guidelines seem to leave room for separate decisions. To retract a paper in case of plain fraud is not under discussion. But when is an error sufficiently established and important enough to warrant retraction?

Apparently retractions are on the rise. Although still rare (0.02% of all publications by the late 2000s), there has been a tenfold increase in retractions compared to the early 1980s (see the review at Scholarly Kitchen [6] about two papers: [7] and [8]). However, it is unclear whether increasing rates of retraction reflect more fraudulent or erroneous papers or greater diligence. The first paper [7] also highlights that, out of fear of litigation, editors are generally hesitant to retract an article without the author’s permission.

At the blog Nerd Alert they give a nice overview [9] (based on Retraction Watch, but summarized in one post ;) ). They clarify that papers are retracted for “less dastardly reasons than those cases that hit the national headlines and involve purposeful falsification of data”, such as the fraudulent papers of Andrew Wakefield (autism caused by vaccination). Besides the mistaken publication of the same paper twice, data over-interpretation, plagiarism and the like, the reason can also be more trivial: ordering the wrong mice or using an incorrectly labeled bottle.

Still, scientists don’t unanimously agree that such errors should lead to retraction.

Drug Monkey blogs about his discussion [10] with @ivanoransky over a recent post at Retraction Watch, which asks whether a failure to replicate a result justifies a retraction [11]. Ivan Oransky presents a case where a researcher (B) couldn’t reproduce the findings of another lab (A) and demonstrated mutations in the published protein sequence that excluded the mechanism proposed in A’s paper. This wasn’t retracted, possibly because B didn’t follow A’s published experimental protocols in all details. (This reminds me of the XMRV controversy.)

Drugmonkey says (quote; cross-posted at Scientopia here – hmmpf, isn’t that an example of redundant publication?):

“I don’t give a fig what any journals might wish to enact as a policy to overcompensate for their failures of the past.
In my view, a correction suffices” (provided that search engines like Google and PubMed make clear that the paper was in fact corrected).

Drug Monkey has a point there. A clear watermark should suffice.

However, we should note that most papers are retracted by authors, not the editors/journals, and that the majority of “retracted papers” remain available: just 13.2% are deleted from the journal’s website, and 31.8% are not clearly labelled as such.

Summary of how the naïve reader is alerted to paper retraction (from Table 2 in [7], see: Scholarly Kitchen [6])

  • Watermark on PDF (41.1%)
  • Journal website (33.4%)
  • Not noted anywhere (31.8%)
  • Note appended to PDF (17.3%)
  • PDF deleted from website (13.2%)

My conclusion?

Of course fraudulent papers should be retracted. Also papers with obvious errors that invalidate the conclusions.

However, we should be extremely hesitant to retract papers that can’t be reproduced, if there is no undisputed evidence of error.

Otherwise we should retract almost all published papers at one point or another. Because if Professor Ioannidis is right (and he probably is), “much of what medical researchers conclude in their studies is misleading, exaggerated, or flat-out wrong” (see the previous post [12], “Lies, Damned Lies, and Medical Science” [13] and Ioannidis’ crushing article “Why most published research findings are false” [14]).

All retracted papers (and papers with major deficiencies and shortcomings) should be clearly labeled as such (as Drugmonkey proposed, not only on the PDF and the journal website, but also by search engines and biomedical databases).

Or let’s hope, with Biochembelle [15], that the future of scientific publishing will make retractions for technical issues obsolete (whether in the form of nano-publications [16] or otherwise):

One day the scientific community will trade the static print-type approach of publishing for a dynamic, adaptive model of communication. Imagine a manuscript as a living document, one perhaps where all raw data would be available, others could post their attempts to reproduce data, authors could integrate corrections or addenda….

NOTE: Retraction Watch (@ivanoransky) and @laikas have voted in @drugmonkeyblog‘s poll about what a retracted paper means [here]. Have you?

References

  1. Science Asks to Retract the XMRV-CFS Paper, it Should Never Have Accepted in the First Place. (laikaspoetnik.wordpress.com 2011-06-02)
  2. Alberts B. Editorial Expression of Concern. Science. 2011-05-31.
  3. Given Doubt Cast on CFS-XMRV Link, What About Related Research? (blogs.wsj.com)
  4. XMRV is a recombinant virus from mice  (Virology Blog : 2011/05/31)
  5. Retractions: Guidance from the Committee on Publication Ethics (COPE) Elizabeth Wager, Virginia Barbour, Steven Yentis, Sabine Kleinert on behalf of COPE Council:
    http://www.publicationethics.org/files/u661/Retractions_COPE_gline_final_3_Sept_09__2_.pdf
  6. Retract This Paper! Trends in Retractions Don’t Reveal Clear Causes for Retractions (scholarlykitchen.sspnet.org)
  7. Wager E, Williams P. Why and how do journals retract articles? An analysis of Medline retractions 1988-2008. J Med Ethics. 2011 Apr 12. [Epub ahead of print] 
  8. Steen RG. Retractions in the scientific literature: is the incidence of research fraud increasing? J Med Ethics. 2011 Apr;37(4):249-53. Epub 2010 Dec 24.
  9. Don’t touch that blot. (nerd-alert.net/blog/weeklies/ : 2011/02/25)
  10. What_does_a_retracted_paper_mean? (scienceblogs.com/drugmonkey: 2011/06/03)
  11. So when is a retraction warranted? The long and winding road to publishing a failure to replicate (retractionwatch.wordpress.com : 2011/06/03/)
  12. Much Ado About ADHD-Research: Is there a Misrepresentation of ADHD in Scientific Journals? (laikaspoetnik.wordpress.com 2011-06-02)
  13. “Lies, Damned Lies, and Medical Science” (theatlantic.com :2010/11/)
  14. Ioannidis, J. (2005). Why Most Published Research Findings Are False. PLoS Medicine, 2 (8) DOI: 10.1371/journal.pmed.0020124
  15. Retractions: What are they good for? (biochembelle.wordpress.com : 2011/06/04/)
  16. Will Nano-Publications & Triplets Replace The Classic Journal Articles? (laikaspoetnik.wordpress.com 2011-06-02)


 





Science Asks to Retract the XMRV-CFS Paper, it Should Never Have Accepted in the First Place.

2 06 2011

Wow! Breaking!

As reported in the WSJ earlier this week [1], editors of the journal Science asked Mikovits and her co-authors to voluntarily retract their 2009 Science paper [2].

In this paper Mikovits and colleagues of the Whittemore Peterson Institute (WPI) and the Cleveland Clinic reported the presence of xenotropic murine leukemia virus–related virus (XMRV) in peripheral blood mononuclear cells (PBMC) of patients with chronic fatigue syndrome (CFS). They used the very contamination-prone nested PCR to detect XMRV. This two-round PCR enables detection of a rare target sequence by producing an unimaginably huge number of copies of that sequence.
XMRV was first demonstrated in cell lines and tissue samples of prostate cancer patients.
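To give a feel for those numbers, here is a back-of-the-envelope sketch (assuming perfect doubling in every cycle, which real reactions never achieve, and arbitrary, illustrative cycle counts):

```python
# Back-of-the-envelope illustration of why nested (two-round) PCR is so
# sensitive, and therefore so contamination-prone: assuming perfect doubling
# per cycle (real efficiency is lower), a single template molecule becomes
# an astronomical number of copies. Cycle counts are arbitrary examples.
for label, cycles in [("one round (35 cycles)", 35),
                      ("two nested rounds (35 + 30 cycles)", 65)]:
    print(f"{label}: ~{2 ** cycles:.1e} copies from one template")
```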

All the original authors, except for one [3], refused to retract the paper [4]. This prompted Science editor-in-chief Bruce Alberts to issue an Expression of Concern [5], which was published two days earlier than planned because of the early release of the news in the WSJ mentioned above [1] (see Retraction Watch [6]).

The expression of concern also follows the publication of two papers in the same journal.

In the first Science paper [7] Knox et al. found no murine-like gammaretroviruses in any of the 61 CFS patients previously identified as XMRV-positive, using the same PCR and culturing techniques as Lombardi et al. This paper made ERV (who consistently critiqued the Lombardi paper from the start) laugh out loud [8], because Knox also showed that human sera neutralize the virus in the blood, indicating it can hardly infect human cells in vivo. Knox also showed the WPI’s sequences to be similar to the XMRV plasmid VP62, which is known to often contaminate laboratory agents.*

Contamination as the most likely reason for the positive WPI results is also the message of the second Science paper. Here, Paprotka et al. [9] show that XMRV was not present in the original prostate tumor that gave rise to the XMRV-positive 22Rv1 cell line, but originated, as a laboratory artifact, from recombination of two viruses during passaging of the cell line in nude mice. For a further explanation see the Virology Blog [10].

Now that the Science editors have expressed their concern, the tweets, blog posts and health news articles are predominantly negative about the XMRV findings in CFS/ME, where they were previously positive or neutral. Tweets like “Mouse virus #XMRV doesn’t cause chronic fatigue #CFS http://t.co/Bekz9RG” (Reuters) or “Origins of XMRV deciphered, undermining claims for a role in human disease: Delineation of the origin of… http://bit.ly/klDFuu #cancer” (National Cancer Institute) are unprecedented.

So is the appeal by Science to retract the paper justified?

Well yes and no.

The timing is rather odd:

  • Why does Science only express concern after publication of these two latest Science papers? There are almost a dozen other studies that failed to reproduce the WPI-findings. Moreover, 4 earlier papers in Retrovirology already indicated that disease-associated XMRV sequences are consistent with laboratory contamination. (see an overview of all published articles at A Photon in the Darkness [11])
  • There are still (neutral) scientists who believe that genuine human infections with XMRV exist at a relatively low prevalence (van der Kuyl et al.: XMRV: Not a Mousy Virus [12]).
  • And why doesn’t Science await the results from the official confirmation studies meant to finally settle whether XMRV exists in our blood supply and/or CFS (by the Blood Working Group and the NIH-sponsored study by Lipkin et al.)?
  • Why (and this is the most important question) did Science ever decide to publish the piece in the first place, as the study had several flaws?
I do believe that new research that turns existing paradigms upside down deserves a chance. Also a chance to get disproved. Yes such papers might be published in prominent scientific journals like Science, provided they are technically and methodologically sound at the very least. The Lombardi paper wasn’t.

Here I repeat my concerns expressed in earlier posts [13 and 14]. (Please read those posts first if you are unfamiliar with PCR.)

Shortcomings in PCR-technique and study design**:

  • No positive control and no demonstration of the sensitivity of the PCR assay. Usually a known concentration or a serial dilution of a (weakly) positive sample is taken as a control; this allows the sensitivity of the assay to be determined.
  • Aspecific bands in negative samples (indicating suboptimal conditions)
  • Just one vial without added DNA per experiment as a negative control. (Negative controls are needed to exclude contamination).
  • CFS-positive and negative samples are on separate gels (this increases bias, because conditions and the chance of contamination are not the same for all samples; it also raises the question of whether the samples were processed differently)
  • Furthermore, only results obtained at the Cleveland Clinic are shown (were similar results not obtained at the WPI? See below).
Contamination not excluded as a possible explanation:
  • No variation in the XMRV-sequences detected (expected if the findings are real)
  • Although the PCR is near the detection limit, only single-round products are shown. These are much stronger than expected, even after two rounds. This is very confusing, because WPI later exclaimed that preculturing PBMC plus nested PCR (2 rounds) were absolutely required to get a positive result. But the legend of Fig. 1 in the original Science paper clearly says PCR after one round. Strong (homogeneous) bands after one round of PCR are highly suggestive of contamination.
  • No effort to exclude contamination of samples with mouse DNA (see below)
  • No determination of the viral DNA integration sites.

Mikovits also stressed that she never used the XMRV-positive cell lines in 2009. But what about the Cleveland Clinic, nota bene the institute that co-discovered XMRV and that had produced the strongly positive PCR-products (…after a single PCR-round…)?

On the other hand, the authors had other proof of the presence of a retrovirus: detection of (low levels of) antibodies to XMRV in patient sera, and transmissibility of XMRV. On request they later applied the mouse mitochondrial assay to successfully exclude the presence of mouse DNA in their samples. (But this doesn’t exclude all forms of contamination, and certainly not at the Cleveland Clinic.)

These shortcomings alone should have been sufficient for the reviewers, had they seen them and/or deemed them of sufficient importance, to halt publication and to ask for additional studies**.

I was once in a similar situation. I found a rare cancer-specific chromosomal translocation in normal cells, but I couldn’t exclude PCR contamination. The reviewers asked me to exclude contamination by sequencing the breakpoints, which only succeeded after two years of extra work. In retrospect I’m thankful to the reviewers for preventing me from publishing a possibly faulty paper which could have ruined my career (yeah, because contamination is a real problem in PCR). And my paper improved tremendously thanks to the additional experiments.

Yes, it is peer review that failed here, Science. You should have asked for extra confirmatory tests and a better design in the first place. That would have spared a lot of anguish and, had the findings been reproducible, would have yielded more convincing and better data.

There were a couple of incidents after the study was published that made me further doubt the robustness of WPI’s scientific data, and even (after a while) doubt whether WPI, and Judy Mikovits in particular, was adhering to good scientific (and ethical) practices.

  • WPI suddenly disclosed (Feb 18, 2010) that culturing PBMCs is necessary to obtain a positive PCR signal. As a matter of fact they maintain this in their recent protest letter to Science. They refer to the original Science paper, but this paper doesn’t mention the need for culturing at all!!
  • WPI suggests their researchers had detected XMRV in patient samples from both Dr. Kerr’s and Dr. van Kuppeveld’s ‘XMRV-negative’ CFS cohorts, thus in patient samples obtained without a culture-enrichment step… There can only be one truth: the main criticism of the negative studies was that improper CFS criteria were used. Thus either this CFS population is wrongly defined and DOESN’T contain XMRV (with any method), OR it fulfills the criteria of CFS and XMRV can be detected by applying the proper technique. It is so confusing!
  • Although Mikovits first reported that they found little to no virus variation, they later claimed to find a lot of variation.
  • WPI employees behave unprofessionally towards fellow scientists who failed to reproduce their findings.
Other questionable practices:
  • Mikovits also claims that people with autism harbor XMRV. One wonders which disease ISN’T associated with XMRV…
  • Despite the uncertainties about XMRV in CFS-patients, let alone the total LACK of demonstration of a CAUSAL RELATIONSHIP, Mikovits advocates the use of *not harmless* anti-retrovirals by CFS-patients.
  • At this stage of the controversy, the WPI XMRV test is sold as “a reliable diagnostic tool” by a firm (VIP Dx) with strong ties to WPI. Mikovits even tells patients in a mail: “First of all the current diagnostic testing will define with essentially 100% accuracy! XMRV infected patients”. WTF!?
  • This test is not endorsed in Belgium, and even Medicare only reimbursed 15% of the PCR-test.
  • The ties of WPI to RedLabs & VIP Dx are not clearly disclosed in the Science Paper. There is only a small Note (added in proof!)  that Lombardi is operations manager of VIP Dx, “in negotiations with the WPI to offer a diagnostic test for XMRV”.
Please see this earlier post [13] for broader coverage. Or read the post [16] of Keith Grimaldi, scientific director of Eurogene and an expert in personal genomics, whom I asked to comment on the “diagnostic” tests. In his post he very clearly describes “what is exactly wrong about selling an unregulated clinical test to a very vulnerable and exploitable group based on 1 paper on a small isolated sample”.

It is really surprising this wasn’t picked up by the media, by the government or by the scientific community. Will the new findings have any consequences for the XMRV diagnostic tests? I fear WPI will get away with it for the time being. I agree with Lipkin, who coordinates the NIH-sponsored multi-center CFS-XMRV study, that calls to retract the paper are premature at this point. Furthermore, as addressed by the WSJ [17], if the Science paper is retracted because the XMRV findings are called into question, what about the papers also reporting a link between XMRV(-like) viruses and CFS or prostate cancer?

WSJ reports, that Schekman, the editor-in chief of PNAS, has no direct plans to retract the paper of Alter et al reporting XMRV-like viruses in CFS [discussed in 18]. Schekman considers it “an unusual situation to retract a paper even if the original findings in a paper don’t hold up: it’s part of the scientific process for different groups to publish findings, for other groups to try to replicate them, and for researchers to debate conflicting results.”

I agree, this is a normal procedure, once the paper is accepted and published. Fraud is a reason to retract a paper, doubt is not.

Notes

* Samples, NOT patients, as I saw a patient’s erroneous interpretation: “if it is contamination in the lab how can I have it as a patient?” (tweet is now deleted). No, according to the contamination theory, XMRV contamination is not IN you, but in the processed samples or in the reaction mixtures used.

** The reviewers did ask for additional evidence, but not with respect to the PCR experiments, which are most prone to contamination and false results.

  1. Chronic-Fatigue Paper Is Questioned (online.wsj.com)
  2. Lombardi VC, Ruscetti FW, Das Gupta J, Pfost MA, Hagen KS, Peterson DL, Ruscetti SK, Bagni RK, Petrow-Sadowski C, Gold B, Dean M, Silverman RH, & Mikovits JA (2009). Detection of an infectious retrovirus, XMRV, in blood cells of patients with chronic fatigue syndrome. Science (New York, N.Y.), 326 (5952), 585-9 PMID: 19815723
  3. WPI Says No to Retraction / Levy Study Dashes Hopes /NCI Shuts the Door on XMR (phoenixrising.me)
  4. http://wpinstitute.org/news/docs/FinalreplytoScienceWPI.pdf
  5. Alberts B. Editorial Expression of Concern. Science. 2011 May 31.
  6. Science asks authors to retract XMRV-chronic fatigue paper; when they refuse, issue Expression of Concern. 2011/05/31/ (retractionwatch.wordpress.com)
  7. K. Knox, Carrigan D, Simmons G, Teque F, Zhou Y, Hackett Jr J, Qiu X, Luk K, Schochetman G, Knox A, Kogelnik AM & Levy JA. No Evidence of Murine-Like Gammaretroviruses in CFS Patients Previously Identified as XMRV-Infected. Science. 2011 May 31. (10.1126/science.1204963).
  8. XMRV and chronic fatigue syndrome: So long, and thanks for all the lulz, Part I [erv] (scienceblogs.com)
  9. Paprotka T, Delviks-Frankenberry KA, Cingoz O, Martinez A, Kung H-J, Tepper CG, Hu W-S , Fivash MJ, Coffin JM, & Pathak VK. Recombinant origin of the retrovirus XMRV. Science. 2011 May 31. (10.1126/science.1205292).
  10. XMRV is a recombinant virus from mice  (Virology Blog : 2011/05/31)
  11. Science asks XMRV authors to retract paper (photoninthedarkness.com : 2011/05/31)
  12. van der Kuyl AC, Berkhout B. XMRV: Not a Mousy Virus. J Formos Med Assoc. 2011 May;110(5):273-4. PDF
  13. Finally a Viral Cause of Chronic Fatigue Syndrome? Or Not? – How Results Can Vary and Depend on Multiple Factor (laikaspoetnik.wordpress.com: 2010/02/15/)
  14. Three Studies Now Refute the Presence of XMRV in Chronic Fatigue Syndrome (CFS) (laikaspoetnik.wordpress.com 2010/04/27)
  15. WPI Announces New, Refined XMRV Culture Test – Available Now Through VIP Dx in Reno (prohealth.com 2010/01/15)
  16. The murky side of physician prescribed LDTs (eurogene.blogspot.com : 2010/09/06)
  17. Given Doubt Cast on CFS-XMRV Link, What About Related Research? (blogs.wsj.com)
  18. Does the NHI/FDA Paper Confirm XMRV in CFS? Well, Ditch the MR and Scratch the X… and… you’ve got MLV. (laikaspoetnik.wordpress.com : 2010/08/30/)






How a Valentine’s Editorial about Chocolate & Semen Led to the Resignation of Top Surgeon Greenfield

27 04 2011
[Image via Wikipedia: “Children’s Valentine in somewhat questionable …”]

Dr. Lazar Greenfield recently won the election as the new President of the ACS (American College of Surgeons). This position would crown his achievements, for Greenfield was a truly pre-eminent surgeon. He is best known for his development of an intracaval filter bearing his name, a device that has probably saved many lives by preventing blood clots from reaching the lungs. He has been highly productive, having authored more than 360 scientific articles in peer-reviewed journals, 128 book chapters, as well as 2 textbooks.

Greenfield also happened to have a minor side job as the editor-in-chief of Elsevier’s Surgery News. Surgery News is not a peer-reviewed journal, but what Greenfield later defines as a monthly throw-away newspaper (of the kind Elsevier produces a lot).

As editor-in-chief, Greenfield wrote editorials (opinion pieces) for Surgery News. He found a very suitable theme for the February issue: Valentine’s Day.

Valentine’s Day is about love, and the editorial was about romantic gut feelings possibly having a physiological basis; in other words, the world of sexual chemical signals that give you that butterflies feeling. The editorial jumps from the mating preferences of fruit flies, stressed female rotifers turning into males, and the synchronization of the menstrual cycles of women who live together, to a study suggesting that “exposure” to semen makes female college students less depressed. All 4 topics are based on scientific research published in peer-reviewed papers.

Valentine’s Day asks for giving this “scientific” story a twist, so he concluded the editorial as follows:

“So there’s a deeper bond between men and women than St. Valentine would have suspected, and now we know there’s a better gift for that day than chocolates.”

Now, everybody knows that that conclusion ain’t supported by the data.
This would have required at least a double-blind randomized trial comparing the mood-enhancing effects of chocolate to ……. (yikes!).

Just joking, of course…, just as dear Lazar was trying to be funny…

No, the editorial wasn’t particularly funny.

And somehow it isn’t pleasant to think of a man’s love fluid wrapped in a ribbon and a box with hearts, when you were expecting some chocolates. Furthermore it suggests that sperm is something a man just gives/donates/injects, not the result of mutual love.

However this was the opposite of what Greenfield had in mind:

“The biochemical properties of semen that were reviewed have been documented in peer-reviewed journals and represent the remarkable way that Nature promotes bonding between men and women, not something demeaning.”

Thus the man just tried to “Amuse his readers” and highlight research on “some fascinating new findings related to semen.”

I would have appreciated a more subtle ending of the editorial, but I would take no offense.

….Unlike many of his fellow female surgeons. The Women in Surgery Committee and the Association of Women Surgeons considered his editorial “demeaning to women” (NY Times).

He offered his sincere apologies and resigned as Editor-in-Chief of the paper. The publication was retracted. As a matter of fact the entire February issue of Surgery News was taken off the ACS-website. Luckily, Retraction Watch published the editorial in its entirety.

Greenfield’s apologies weren’t enough: women surgeons brought the issue to the Board of Regents, which asked him to resign, which he eventually did.

A few weeks later he wrote a resentful letter. This is not a smart thing to do, but it is understandable for several reasons. First, he didn’t mean to be offensive and made his apologies. Second, he has had an exemplary career as a longtime mentor and advocate of women in surgery. Third, the true reason for his resignation wasn’t the implicit plea for unprotected sex, but rather that the editorial reflected “a macho culture in surgery that needed to change.” Fourth, his life is ruined over something trivial.

Why can’t one write a lighthearted opinion piece on Valentine’s Day without being forced to resign? Is it because admitting that the “bond between men and women” is natural and runs deep is one of those truths you cannot utter (Paul Rahe)?

Is this perhaps typically American?

Elmar Veerman (Dutch journalist, science editor at VPRO) comments at Retraction Watch:

(…) Frankly, I don’t see the problem. I find it rather funny and harmless. Perhaps because I’m from Europe, where most people have a more relaxed attitude towards sex. Something like ‘nipplegate’ could never happen here (a nipple on tv, so what).  (…) I have been wondering for years why so many Americans seem to think violence is fine and sex is scary.

Not only female surgeons object to the editorial. Well-known male (US) surgeons “fillet” the editorial at their blogs: Jeffrey Parks at Buckeye Surgeon (1 and 2), Orac Knows at Respectful Insolence (1 and 2) and Skeptical Scalpel (the latter quite mildly).

Jeffrey and Orac not only think the man is humorless and a sexist, but also that the science behind the mood-enhancing aspects of semen is crap.

Although Jeffrey merely regards “the ‘science’ a little suspect as per Orac”…. Because of course: “Orac knows.”

Orac exaggerates what Greenfield said in the “breathtakingly inappropriate and embarrassing article for Surgery News”, as he calls it [1]: the “mood-enhancing effects of semen” become, in Orac’s words, the cure for female depression, and “a woman needs a man to inject his seed into her in order to be truly happy“.
Of course, it is not fair to twist words this way.

Orac’s criticism of the science that supports Dr. Greenfield’s joke is as follows: the first two studies are not related to human biology, and the “semen study” is “about as lame a study as can be imagined. Not only is it a study in which causation is implied by correlation, but to me the evidence of correlation is not even that compelling.”

Orac is right about that. In his second post Orac continues (in response to the authors of the semen paper, who defend Greenfield and suggest they had obtained “more evidence”):

(..)so I was curious about where they had published their “replication.” PubMed has a wonderful feature in which it pops up “related citations” in the right sidebar of any citation you look up. I didn’t recall seeing any related citations presenting confirmatory data for Gallup et al’s study. I searched PubMed using the names of all three authors of the original “semen” study and found no publications regarding the antidepressant properties of semen since the original 2002 study cited by Dr. Greenfield. I found a lot of publications about yawning and mental states, but no followup study or replication of the infamous “semen” study. color me unimpressed” [2](..)

Again, I agree with Orac: the authors didn’t publish any confirmatory data.
But looking at related citations is not a good way to check whether follow-up articles have been published: PubMed creates this set by comparing words from the title, abstract, and MeSH terms using a word-weighted algorithm. Its goal is mainly to increase serendipity.
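For those curious what “word-weighted” means in practice, here is a toy sketch of that kind of similarity calculation. It is emphatically not PubMed’s actual algorithm (which uses a more sophisticated probabilistic weighting of title, abstract and MeSH terms), just an illustration of the general idea, with made-up example records:

```python
# Toy illustration of "word-weighted" similarity between citations, in the
# spirit of (but NOT identical to) PubMed's related-citations algorithm.
# Words are weighted by inverse document frequency; similarity is cosine.
# The example "records" are hypothetical titles, not real citations.
import math
from collections import Counter

def tokenize(text):
    return [w.lower().strip(".,;:()") for w in text.split()]

def weighted_vector(text, idf):
    counts = Counter(tokenize(text))
    return {w: counts[w] * idf.get(w, 1.0) for w in counts}

def cosine(v1, v2):
    dot = sum(v1[w] * v2.get(w, 0.0) for w in v1)
    norm = math.sqrt(sum(x * x for x in v1.values())) * math.sqrt(sum(x * x for x in v2.values()))
    return dot / norm if norm else 0.0

docs = {
    "A": "Semen exposure and depressive symptoms in college students",
    "B": "Condom use, sexual behaviour and mood in college students",
    "C": "Yawning as a thermoregulatory mechanism in mammals",
}

# Inverse document frequency over this tiny "corpus": rare words weigh more
df = Counter(w for text in docs.values() for w in set(tokenize(text)))
idf = {w: math.log(len(docs) / df[w]) + 1.0 for w in df}

vectors = {key: weighted_vector(text, idf) for key, text in docs.items()}
for other in ("B", "C"):
    print(f"similarity(A, {other}) = {cosine(vectors['A'], vectors[other]):.2f}")
```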

I didn’t have time to do a proper PubMed search, which should include all kinds of synonyms for sperm and mood/depression. I just checked the papers citing Gallup’s original article in Google Scholar and found 29 hits (indeed no Gallup papers), including various articles by Costa & Brody, e.g. the freely available letter (discussing their research): Greater Frequency of Penile–Vaginal Intercourse Without Condoms is Associated with Better Mental Health. This letter was a response to an opposite finding, by the way.

I didn’t look at the original articles and I don’t really expect much of them. However, it does show the Gallup study is not the only study linking semen to positive health effects.

Assuming Greenfield had more than a joke in mind and wanted to reflect on the state of the art regarding the health aspects of semen, it surprises me that he didn’t dig any further than this article from 2002.

Is it because he really based his editorial on a review in Scientific American from 2010, called “An ode to the many evolved virtues of human semen” [3,4], which describes Gallup’s study and, strikingly, also starts by discussing menstrual synchrony?

Greenfield could have discussed other, better documented, properties of semen, like its putative protection from pre-eclampsia (see references in Wikipedia)[5]

Or even better, he could have cited other sexual chemical signals that give you that butterflies feeling, like smell!

Instead of “Gut Feelings” the title could have been “In the nose of the beholder” or “The Smell of Love” [6].

And Greenfield could have concluded:

“So there’s more in the air than St. Valentine would have suspected, and now we know there’s a better gift for that day than chocolates: perfume.”

And no one would have been bothered, and everyone would have done with the paper what one usually does with throwaways.

Notes

  1. Coincidentally, while reading Orac’s post I saw a Research Blogging post mentioned in the side bar: masturbation-and-restless-leg-syndrome. …Admittedly, this was a friday-weird-science post and a thorough review of a case study.
  2. It would probably have been easier to check their website with an overview of publications
  3. Mentioned in a comment somewhere, but I can’t track it down.
  4. If Greenfield used Scientific American as a source he should have read it all to the end, where the author states: I bid adieu, please accept, in all sincerity, my humblest apologies for what is likely to be a flood of bad, off-color jokes—men saying, “I’m not a medical doctor, but my testicles are licensed pharmaceutical suppliers” and so on—tracing its origins back to this innocent little article. Ladies, forgive me for what I have done.”
  5. Elmar Veerman has written a review on this topic in 2000 at Kennislink: http://www.kennislink.nl/publicaties/sperma-als-natuurlijke-bescherming (Dutch)
  6. As a matter of fact these are actual titles of scientific papers.




Friday Foolery #39. Peer Review LOL, How to Write a Comment & The Best Rejection Letter Evvah!

15 04 2011

LOL? Peer review?! Comments?

Peer review is never funny, you think.
It is hard to review papers, especially when they are poorly written. From the author’s point of view, it is annoying and frustrating to see a paper rejected on the basis of comments from peer reviewers who either don’t understand the paper or thwart your attempts to get the paper published, for instance because you are a competitor in their field.

Still, from a (great) distance the peer review process can be funny… in some respects.

Read for instance a collection of memorable quotes from peer review critiques of the past year in Environmental Microbiology (EM does this each December). Here are some excerpts:

  • Done! Difficult task, I don’t wish to think about constipation and faecal flora during my holidays!
  • This paper is desperate. Please reject it completely and then block the author’s email ID so they can’t use the online system in future.
  • It is sad to see so much enthusiasm and effort go into analyzing a dataset that is just not big enough.
  • The abstract and results read much like a laundry list.
  • .. I would suggest that EM is setting up a fund that pays for the red wine reviewers may need to digest manuscripts like this one.
  • I have to admit that I would have liked to reject this paper because I found the tone in the Reply to the Reviewers so annoying.
  • I started to review this but could not get much past the abstract.
  • This paper is awfully written. There is no adequate objective and no reasonable conclusion. The literature is quoted at random and not in the context of argument…
  • Stating that the study is confirmative is not a good start for the Discussion.
  • I suppose that I should be happy that I don’t have to spend a lot of time reviewing this dreadful paper; however I am depressed that people are performing such bad science.
  • Preliminary and intriguing results that should be published elsewhere.
  • Reject – More holes than my grandad’s string vest!
  • The writing and data presentation are so bad that I had to leave work and go home early and then spend time to wonder what life is about.
  • Very much enjoyed reading this one, and do not have any significant comments. Wish I had thought of this one.
  • This is a long, but excellent report. [...] It hurts me a little to have so little criticism of a manuscript.

More seriously, the Top 20 Reasons (Negative Comments) Written by the Reviewers Recommending Rejection of 123 Medical Education Manuscripts can be found in Academic Medicine (vol. 76, no. 9, 2001). The top 5 are:

  1. Statistics: inappropriate, incomplete, or insufficiently described, etc. (11.2%)
  2. Overinterpretation of the results (8.7%)
  3. Inappropriate, suboptimal, insufficiently described instrument (7.3%)
  4. Sample too small or biased (5.6%)
  5. Text difficult to follow, to understand (3.9%)

Neuroskeptic describes 9 types of review decisions in The Wheel of Peer Review. Was your paper reviewed by “Bee-in-your-Bonnet” or by “Cite Me, Me, Me!”?

Rejections are timeless. Perhaps the best rejection letter ever was written by Sir David Brewster, editor of The Edinburgh Journal of Science, to Charles Babbage on July 3, 1821, as noted in James Gleick’s The Information: A History, a Theory, a Flood.

Excerpt at Marginal Revolution (HT @TwistedBacteria):

The subjects you propose for a series of Mathematical and Metaphysical Essays are so very profound, that there is perhaps not a single subscriber to our Journal who could follow them. 

Responses to a rejection are also timeless. See this video, anno 1945 (yes, this scene has been used tons of times for other purposes).

Need tips?

Read How to Publish a Scientific Comment in 1 2 3 Easy Steps (well literally 123 steps) by Prof. Rick Trebino. Based on real life. It is Hilarious!

PhD Comics made a paper review worksheet (you don’t even have to read the manuscript!) and gives you advice on how NOT to address reviewer comments. LOL.

And here is a Sample Cover Letter for Journal Manuscript Resubmissions. Ain’t that easy?

Yet if you are still unsuccessful and want a definitive decision rendered within hours of submission you can always send your paper to the Journal of Universal Rejection.





Internet Sources & Blog Posts in a Reference List? Yes or No?

13 02 2011

A Dutch librarian asked me to join a blog carnival of Dutch Librarians. This carnival differs from medical blog carnivals (like the Grand Rounds and “Medical Information Matters“) in its approach. There is one specific topic which is discussed at individual blogs and summarized by the host in his carnival post.

The current topic is “Can you use an internet source”?

The motive of the archivist Christian van der Ven for starting this discussion was the response to a post at his blog De Digitale Archivaris. In this post he wondered whether blog posts could be used by students writing a paper. It struck him that students rarely use internet sources and that most teachers didn’t encourage or allow their use.

Since I work as a medical information specialist I will adapt the question as follows:

“Can you refer to an internet source in a biomedical scientific article, paper, thesis or survey”?

I explicitly use “refer to” instead of “use”, because I would prefer to avoid discussing “plagiarism” and “copyright”. Obviously I would object to any form of uncritical copying of a large piece of text without checking its reliability and copyright issues (see below).

Previously, I have blogged about the trouble with Wikipedia as a source of information. In short, as Wikipedians say, Wikipedia is the best source to start with in your research, but it should never be the last one (quote from @berci in a Twitter interview). In reality, most students and doctors do consult Wikipedia and Dr. Google (see here and here). However, they may not (and mostly should not) use it as such in their writings. As I have indicated in the earlier post, it is not (yet) a trustworthy source for scientific purposes.

But the Internet is more than Wikipedia and random Googling. As a matter of fact, most biomedical information is now in digital form. The speed at which biomedical knowledge is advancing is tremendous, and books are soon out of date. Thus most library users confine themselves to articles in peer-reviewed scientific journals or to datasets (in the case of geneticists). Generally my patrons search the largest freely available database, PubMed, to access citations in mostly peer-reviewed (and digital) journals. These are generally considered (reliable) internet sources, but they do not essentially differ from their printed equivalents.

However, there are other internet sources that provide reliable or useful information. What about publications by the National Health Council, an evidence-based guideline by NICE, and/or published evidence tables? What about synopses (critical appraisals) such as those published by DARE, like this one? What about evidence summaries by Clinical Evidence, like this one? All are excellent, evidence-based, commendable online resources. Without doubt these can be used as a reference in a paper. Thus there is no clear-cut answer to the above question. Whether an internet source should be used as a reference in a paper depends on the following:

  1. Is the source relevant?
  2. Is the source reliable?
  3. What is the purpose of the paper and the topic?

Furthermore it depends on the function of the reference (not mutually exclusive):

  1. To give credit
  2. To add credibility
  3. For transparency and reproducibility
  4. To help readers find further information
  5. For illustration (as an example)

Let’s illustrate this with a few examples.

  • Students who write an overview on a medical topic can use any relevant reference, including narrative reviews, UpToDate and other internet sites, if appropriate.
  • Interns who have to prepare a CAT (critically appraised topic) should refer to 2-3 papers, providing the highest evidence (i.e. a systematic review and/or randomized controlled trial).
  • Authors writing systematic reviews only include high quality primary studies (except for the introduction perhaps). In addition they should (ideally) check congress abstracts, clinical trial registers (like clinicaltrials.gov), or actual raw data (i.e. produced by a pharmaceutical company).
  • Authors of narrative reviews may include all kinds of sources. That is also true for editorials, primary studies or theses. Reference lists should be as accurate and complete as possible (within the limits posed by for instance the journal).

Blogs, wikis, podcasts and tweets.
Papers can also refer to blog posts, wikis or even tweets (there is APA guidance on how to cite these). Such sources can also be referred to merely because they serve as an example (in articles about social media in medicine, for instance, like this recent paper in Am Pharm Assoc that analyzes pharmacy-centric blogs).

Blog posts are usually seen as lacking in factual reliability. However, there are many blogs, run by scientists, that are (or can be) a trustworthy source. As a matter of fact it would be inappropriate not to cite these sources, if the information was valuable, useful and actually used in the paper.
Some examples of excellent biomedical web 2.0 sources.

  • The Clinical Cases and Images Blog of Ves Dimov, MD (drVes at Twitter), a rich source of clinical cases. My colleague once found the only valuable information (a rare patient case) at Dr Ves’ blog, not in PubMed or other regular sources. Why not cite this blog post, if this patient case was to be published?
  • Researchblogging.org is an aggregator of expert blog posts about peer-reviewed research. There are many other high-quality scientific blogging platforms, like Scientopia, the PLOS blogs etc. These kinds of blogs critically analyse peer-reviewed papers. For instance, this blog post by Marya Zilberberg reveals how an RCT stopped early for efficacy can still be severely flawed, yet lead to a level-one recommendation: very useful information that you can find neither in the actual published study nor in the evidence-based guideline.
  • An example of an excellent and up-to-date wiki is the open HLWIKI (maintained by Dean Giustini, @giustini at Twitter), with entries about health librarianship, social media and current information technology topics, comprising over 565 pages of content since 2006! It has very rich content with extensive reference lists and could thus easily be used in papers on library topics.
  • Another concept is usefulchem.wikispaces.com (an initiative of Jean-Claude Bradley, discussed in a previous post). This is not only a wiki but also an open notebook, where actual primary scientific data can be found. Very impressive.
  • There is also WikiProteins (part of a conceptwiki), an open, collaborative wiki  focusing on proteins and their role in biology and medicine.

I would like to end my post with two thoughts.

First, the world is not static. In the future, scientific claims could be represented as formal RDF statements/triplets instead of, or next to, the journal publications as we know them (see the post on nanopublications). Such “statements” (already realized with regard to proteins and genes) are more easily linked and retrieved. After all, peer review doesn’t prevent fraud, misrepresentation or overstatements.
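To make the idea of such a “statement” concrete, here is a minimal sketch of a single claim expressed as an RDF triple, using the rdflib library (assuming it is installed; the URIs are made-up placeholders, not a real nanopublication vocabulary, and real nanopublications also attach provenance and publication info to the assertion):

```python
# Minimal sketch of a scientific claim as an RDF triple (subject-predicate-object),
# using rdflib (with rdflib >= 6, serialize() returns a string).
# The namespace and terms below are hypothetical placeholders.
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/claims/")

g = Graph()
g.bind("ex", EX)

# Example assertion (the disputed claim from the XMRV posts above):
# "XMRV is associated with chronic fatigue syndrome"
g.add((EX.XMRV, EX.isAssociatedWith, EX.ChronicFatigueSyndrome))

print(g.serialize(format="turtle"))
```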

The other side of the coin in this “blogs as an internet source” discussion is whether the citation is always appropriate and/or accurate.

Today a web page (cardio.nl/ACS/StudiesRichtlijnenProtocollen.html), evidently meant for the education of residents, linked to one of my posts. Almost the entire post was copied, including a figure, but the only link used was one of my tags, EBM (hidden in the text). Even worse, blog posts are sometimes cited to lend credibility to disputable content. I’ve mentioned the tactics of Organized Wisdom before. More recently a site called deathbyvaccination.com linked out of context to one of my blog posts. Given the recent revelation of fraudulent anti-vaccine papers, I’m not very happy with that kind of “attribution”.








