Why Publishing in the NEJM is not the Best Guarantee that Something is True: a Response to Katan

27 10 2012

In a previous post [1] I reviewed a recent Dutch study, published in the New England Journal of Medicine (NEJM) [2], about the effects of sugary drinks on the body mass index of school children.

The study was widely covered by the media. The NRC, for which the main author Martijn Katan works as a science columnist, spent two full (!) pages on the topic, without a single critical comment [3].
As if this wasn’t enough, Katan’s latest column again dealt with his own article (text freely available at mkatan.nl) [4].

I found Katan’s column “Col hors Catégorie” [4] quite arrogant, especially because he tried to belittle a journalist (a “know-it-all”, as he called him) who had criticized his work in a rival newspaper. This wasn’t fair, because the journalist had raised important points about the work [5, 1].

The piece focused on the long road to getting a paper published in a top journal like the NEJM.
Katan considers the NEJM the “Tour de France” among medical journals: it is a top achievement to publish in this journal.

Katan also states that “publishing in the NEJM is the best guarantee something is true”.

I think the latter statement is wrong for a number of reasons.*

  1. First, most published findings are false [6]. Thus journals can never “guarantee” that published research is true.
    Factors that make it less likely that research findings are true include a small effect size, a greater number and lesser preselection of tested relationships, selective outcome reporting, the “hotness” of the field, a small study size, great financial interest, and a low pre-study probability. All but the last apply more or less to Katan’s study; he also changed the primary outcomes during the trial [7].
  2. It is true that the NEJM has a very high impact factor, a measure of how often papers in that journal are cited by others. Of course researchers want to get their paper published in a high-impact journal. But journals with high impact factors often go for trendy topics and positive results. In other words, it is far more difficult to publish a good-quality study with negative results, certainly in an English-language high-impact journal. This is called publication bias (and language bias) [8]. Positive studies are also cited more frequently (citation bias) and are more likely to be published more than once (multiple publication bias); indeed, Katan et al. have already published about the trial [9], and have not presented all their data yet [1,7]. All these forms of bias distort the “truth”.
    (This is why the search for a (Cochrane) systematic review must be very sensitive [8] and must not be restricted to core clinical journals; it should even include unpublished studies, for these might be “true” but have failed to get published.)
  3. Indeed, the group of Ioannidis just published a large-scale statistical analysis [10] showing that medical studies reporting “very large effects” seldom stand up when other researchers try to replicate them. Studies with large effects often measure laboratory and/or surrogate markers (like BMI) instead of truly clinically relevant outcomes (diabetes, cardiovascular complications, death).
  4. More specifically, the NEJM regularly publishes studies about pseudoscience or bogus treatments. See for instance this blog post [11] at Science-Based Medicine on acupuncture pseudoscience in the New England Journal of Medicine (which, by the way, concerned just a review). A publication in the NEJM doesn’t guarantee it isn’t rubbish.
  5. Importantly, the NEJM has the highest proportion of trials (RCTs) with sole industry support (35%, compared to 7% in the BMJ) [12]. On several occasions I have discussed these conflicts of interest and their impact on study outcomes [13,14]; see also [15,16]. In their study, Gøtzsche and his colleagues from the Nordic Cochrane Centre [12] also showed that industry-supported trials were more frequently cited than trials with other types of support, and that omitting them from the impact factor calculation decreased journal impact factors. The decrease was as large as 15% for the NEJM (versus 1% for the BMJ in 2007)! For the journals that provided data, income from the sales of reprints contributed 3% and 41% of the total income of the BMJ and The Lancet, respectively.
    A recent study, co-authored by Ben Goldacre (MD & science writer) [17], confirms that funding by the pharmaceutical industry is associated with high numbers of reprint orders. Again, only the BMJ and The Lancet provided all the necessary data.
  6. Finally, and most relevant to the topic, a study [18], also discussed at Retraction Watch [19], showed that articles in journals with higher impact factors are more likely to be retracted, and, surprise surprise, the NEJM clearly stands on top. Although other factors, like higher readership and scrutiny, may also play a role [20], this conflicts with Katan’s idea that “publishing in the NEJM is the best guarantee something is true”.
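As an aside, the impact-factor arithmetic behind point 5 is easy to sketch. The numbers below are purely illustrative (they are not taken from Lundh et al. [12]); the point is only that removing a small set of heavily cited industry-supported trials from the calculation can noticeably lower a journal’s impact factor.

```python
def impact_factor(citations_to_recent_articles, citable_articles):
    """Two-year impact factor: citations received in year Y to items
    published in years Y-1 and Y-2, divided by the number of citable
    items published in those two years."""
    return citations_to_recent_articles / citable_articles

# Hypothetical journal: 500 citable articles over two years, 25,000 citations.
baseline = impact_factor(25_000, 500)  # 50.0

# Suppose 50 of those articles are industry-supported trials that together
# drew 5,000 citations. Omitting them from the calculation:
without_industry = impact_factor(25_000 - 5_000, 500 - 50)  # ~44.4

drop = (baseline - without_industry) / baseline
print(f"impact factor drops by {drop:.0%}")  # prints "impact factor drops by 11%"
```

Because industry trials tend to be cited more often than average, they pull the numerator up disproportionately, which is exactly why omitting them decreased the NEJM’s impact factor far more (15%) than the BMJ’s (1%) in the Lundh et al. analysis.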

I wasn’t aware of the latter study and would like to thank DrVes and Ivan Oransky for responding to my crowdsourcing on Twitter.


  1. Sugary Drinks as the Culprit in Childhood Obesity? a RCT among Primary School Children (laikaspoetnik.wordpress.com)
  2. de Ruyter JC, Olthof MR, Seidell JC, & Katan MB (2012). A trial of sugar-free or sugar-sweetened beverages and body weight in children. The New England journal of medicine, 367 (15), 1397-406 PMID: 22998340
  3. Wim Köhler. Eén kilo lichter. NRC, Saturday 22-09-2012 (http://archief.nrc.nl/)
  4. Martijn Katan. Col hors Catégorie [Dutch]. Published in the NRC, 20 October 2012 (www.mkatan.nl)
  5. Hans van Maanen. Suiker uit fris. De Volkskrant, 29 September 2012 (freely accessible at http://www.vanmaanen.org/)
  6. Ioannidis, J. (2005). Why Most Published Research Findings Are False PLoS Medicine, 2 (8) DOI: 10.1371/journal.pmed.0020124
  7. Changes to the protocol http://clinicaltrials.gov/archive/NCT00893529/2011_02_24/changes
  8. Publication Bias. The Cochrane Collaboration open learning material (www.cochrane-net.org)
  9. de Ruyter JC, Olthof MR, Kuijper LD, & Katan MB (2012). Effect of sugar-sweetened beverages on body weight in children: design and baseline characteristics of the Double-blind, Randomized INtervention study in Kids. Contemporary clinical trials, 33 (1), 247-57 PMID: 22056980
  10. Pereira, T., Horwitz, R.I., & Ioannidis, J.P.A. (2012). Empirical Evaluation of Very Large Treatment Effects of Medical Interventions. JAMA: The Journal of the American Medical Association, 308 (16) DOI: 10.1001/jama.2012.13444
  11. Acupuncture Pseudoscience in the New England Journal of Medicine (sciencebasedmedicine.org)
  12. Lundh, A., Barbateskovic, M., Hróbjartsson, A., & Gøtzsche, P. (2010). Conflicts of Interest at Medical Journals: The Influence of Industry-Supported Randomised Trials on Journal Impact Factors and Revenue – Cohort Study PLoS Medicine, 7 (10) DOI: 10.1371/journal.pmed.1000354
  13. One Third of the Clinical Cancer Studies Report Conflict of Interest (laikaspoetnik.wordpress.com)
  14. Merck’s Ghostwriters, Haunted Papers and Fake Elsevier Journals (laikaspoetnik.wordpress.com)
  15. Lexchin, J. (2003). Pharmaceutical industry sponsorship and research outcome and quality: systematic review BMJ, 326 (7400), 1167-1170 DOI: 10.1136/bmj.326.7400.1167
  16. Smith R (2005). Medical journals are an extension of the marketing arm of pharmaceutical companies. PLoS medicine, 2 (5) PMID: 15916457 (free full text at PLOS)
  17. Handel, A., Patel, S., Pakpoor, J., Ebers, G., Goldacre, B., & Ramagopalan, S. (2012). High reprint orders in medical journals and pharmaceutical industry funding: case-control study BMJ, 344 (jun28 1) DOI: 10.1136/bmj.e4212
  18. Fang, F., & Casadevall, A. (2011). Retracted Science and the Retraction Index Infection and Immunity, 79 (10), 3855-3859 DOI: 10.1128/IAI.05661-11
  19. Is it time for a Retraction Index? (retractionwatch.wordpress.com)
  20. Agrawal A, & Sharma A (2012). Likelihood of false-positive results in high-impact journals publishing groundbreaking research. Infection and immunity, 80 (3) PMID: 22338040


* Addendum: my (unpublished) letter to the NRC

Tour de France.
After the NRC had earlier devoted two pages of praise to Katan’s new study, Katan felt the need to do it all over again in his own column. Referring to your own work is allowed, even in a column, but then we as readers should learn something from it. What, then, is the message of this piece, “Col hors Catégorie”? It mainly describes the long road to getting a scientific study published in a top journal, in this case the New England Journal of Medicine (NEJM), “the Tour de France among medical journals”. The piece ends with a tackle on a journalist “who thought he knew better”. But who cares, when the whole world is cheering? Very unsportsmanlike, because that journalist (van Maanen, de Volkskrant) did score on several points. Katan’s central claim, that an NEJM publication “is the best guarantee that something is true”, is also open to serious doubt. The NEJM does indeed have a high impact factor, a measure of how often articles are cited. But the NEJM also has the highest “article retraction” index. In addition, the NEJM has the highest percentage of industry-sponsored clinical trials, which inflate the overall impact factor. Furthermore, top journals go primarily for “positive results” and “trendy topics”, which fosters publication bias. To extend the Tour de France comparison: completing this prestigious race does not guarantee that the participants used no banned substances. Despite the strict doping controls.

One Third of the Clinical Cancer Studies Report Conflict of Interest

16 05 2009

While many of us have just recovered from the news that Elsevier was paid to produce fake journals to promote pharmaceutical products, another news item has appeared about “conflicts of interest in scientific publications”.

This news is based on a new article by researchers from the University of Michigan’s Comprehensive Cancer Center in Ann Arbor, published in an early online edition of Cancer [1].

As mentioned in my previous post about the Elsevier “fake journals”, pharma-sponsored trials rarely produce results that are unfavorable to the companies’ products [e.g. see 3 for an overview, and many papers by Lisa Bero]. Concerned by these findings, the main medical journals now require researchers to disclose their potential conflicts of interest (COI).

The present study [1] analyzes the frequency of self-reported conflicts of interest (COI), the source of study funding, and their relationship with other characteristics in original clinical cancer research (thus no reviews or basic research) published in 8 medical journals in 2006. The 8 journals are high-impact clinical journals: 5 are oncology journals (Journal of Clinical Oncology, the Journal of the National Cancer Institute, Lancet Oncology, Clinical Cancer Research, Cancer) and 3 are core general medical journals (New England Journal of Medicine, JAMA (the Journal of the American Medical Association), and The Lancet).

In these journals 1534 original oncology studies were found. Twenty-nine percent of the articles reported a COI: 17% declared industrial funding, and the remaining 12% of the studies had authors who were employees of industry at the time of publication or were funded by industry.

The study was thoroughly done: 2 students independently coded the articles, and 2 other coders, blinded to the initial coding, assessed all randomized trials (within those 1534 papers) for the outcomes. They graded the authors’ subjective interpretations as positive (in favor of the intervention), neutral, or negative (in favor of the control arm). Overall survival was assessed quantitatively.

The main results:

  • Conflicts of interest varied by discipline (P<.001). Studies with a corresponding author from a medical oncology department or division were most likely to have conflicts (45%), and studies from diagnostic radiology were least likely (4%).
  • Likewise the cancer type mattered, especially with regard to the likelihood of industrial funding (P = .001). Studies on the male reproductive system and lung cancers scored highest, and studies on neurological cancers scored lowest. (There is some apparent contradiction, however, because gynecologic departments score high while gynecologic cancers score relatively low; cf. Figures 1 and 2.)
  • Continental origin was also an important variable (P<.001). COI were observed in 33% of the North American studies, 27% of the European studies, 5% of the Asian studies, and 40% of the studies from other locations.
  • COI was most likely in articles with male first and senior authors (P<.001).
  • Industry funded studies were more likely to focus on treatment (P<.001), and less on epidemiology, prevention, risk factors for incidence, screening, or diagnostic methods.
  • The randomized trials (n=124) that assessed survival were more likely to report positive survival outcomes when a COI was present (P=.04). (see below)

The paper has received a lot of media attention, initiated by the press release of the University of Michigan Health System itself. The data, however, are less shocking than they may seem. The main finding is that “conflicts of interest characterize a substantial minority of the clinically oriented cancer research published in high-impact medical journals”. This, and the characteristics of the papers with COI (see above), add to earlier papers reporting on the occurrence of COI in published articles, including papers in the field of clinical oncology.

Some outcomes are not very surprising, such as the finding that pharmaceutical industries and industry funding are mostly involved in intervention studies in medical oncology (not so much in radiology or diagnostics).

In itself, a COI does not mean that the results cannot be trusted or that they are plain wrong. Credibility becomes questionable if only positive results are published, or if the results are presented more positively than they really are.

Indeed, Jagsi et al. show that “randomized trials with a COI were more likely to report positive survival outcomes (P=.04)”. However, the likelihood that the authors’ interpretation was positive, or more positive than the objective effect on overall survival, wasn’t influenced by COI. And differences in industrial funding didn’t influence any of the blinded outcome assessments. Here too, the non-neutral findings are the ones being emphasized. ;)

On the other hand, the authors had to rely on the information given, i.e. not all conflicts of interest may have been reported. Another issue is that not all known COIs are disclosed to the public (see, for instance, medicalnewstoday).

The following conclusion by lead author Reshma Jagsi seems most relevant [2]:

“Given the frequency we observed for conflicts of interest and the fact that conflicts were associated with study outcomes, I would suggest that merely disclosing conflicts is probably not enough. It’s becoming increasingly clear that we need to look more at how we can disentangle cancer research from industries”


  1. Jagsi, R., Sheets, N., Jankovic, A., Motomura, A., Amarnath, S., & Ubel, P. (2009). Frequency, nature, effects, and correlates of conflicts of interest in published clinical cancer research Cancer DOI: 10.1002/cncr.24315
  2. University of Michigan Health System (2009, May 13). 29 Percent Of Cancer Studies Report Conflict Of Interest. ScienceDaily. Retrieved May 14, 2009, from http://www.sciencedaily.com/releases/2009/05/090511090846.htm
  3. Smith R. Medical Journals Are an Extension of the Marketing Arm of Pharmaceutical Companies. PLoS Med. 2005 May; 2(5): e138. Published online 2005 May 17. doi: 10.1371/journal.pmed.0020138.


Hattip: @sciencebase, Reinout Rietveld (via NRC-next)

