Why Publishing in the NEJM is not the Best Guarantee that Something is True: a Response to Katan

27 10 2012

In a previous post [1] I reviewed a recent Dutch study published in the New England Journal of Medicine (NEJM) [2] about the effects of sugary drinks on the body mass index of school children.

The study was widely covered by the media. The NRC, for which the main author Martijn Katan works as a science columnist, spent two full (!) pages on the topic, without a single critical comment [3].
As if this wasn’t enough, Katan’s latest column again dealt with his own article (text freely available at mkatan.nl) [4].

I found Katan’s column “Col hors Catégorie” [4] quite arrogant, especially because he tried to belittle a “know-it-all” journalist (as he called him) who had criticized his work in a rival newspaper. This wasn’t fair, because the journalist had raised important points about the work [5, 1].

The piece focused on the long road to getting a paper published in a top journal like the NEJM.
Katan considers the NEJM the “Tour de France” among medical journals: publishing there is a top achievement.

Katan also states that “publishing in the NEJM is the best guarantee something is true”.

I think the latter statement is wrong for a number of reasons.*

  1. First, most published findings are false [6]. Thus journals can never “guarantee” that published research is true.
    Factors that make it less likely that research findings are true include a small effect size, a greater number and lesser preselection of tested relationships, selective outcome reporting, and the “hotness” of the field (all applying more or less to Katan’s study; he also changed the primary outcomes during the trial [7]), as well as a small study size, great financial interest, and a low pre-study probability (the last not applicable here). How these factors combine is sketched in the short example after this list.
  2. It is true that the NEJM has a very high impact factor. This is a measure of how often papers in a journal are cited by others. Of course researchers want to get their papers published in a high-impact journal. But journals with high impact factors often go for trendy topics and positive results. In other words, it is far more difficult to publish a good-quality study with negative results, certainly in a high-impact English-language journal. This is called publication bias (and language bias) [8]. Positive studies will also be cited more frequently (citation bias) and are more likely to be published more than once (multiple publication bias); indeed, Katan et al. have already published about the trial [9], and have not yet presented all their data [1,7]. All these forms of bias distort the “truth”.
    (This is the reason why the search for a (Cochrane) systematic review must be very sensitive [8] and not be restricted to core clinical journals; it should even include unpublished studies, for these studies might be “true” but have failed to get published.)
  3. Indeed, the group of Ioannidis has just published a large-scale statistical analysis [10] showing that medical studies reporting “very large effects” seldom stand up when other researchers try to replicate them. Studies with large effects often measure laboratory and/or surrogate markers (like BMI) instead of truly clinically relevant outcomes (diabetes, cardiovascular complications, death).
  4. More specifically, the NEJM regularly publishes studies about pseudoscience or bogus treatments. See for instance this blog post [11] at Science-Based Medicine on Acupuncture Pseudoscience in the New England Journal of Medicine (which, by the way, is just a review). A publication in the NEJM doesn’t guarantee it isn’t rubbish.
  5. Importantly, the NEJM has the highest proportion of trials (RCTs) with sole industry support (35%, compared to 7% in the BMJ) [12]. On several occasions I have discussed these conflicts of interest and their impact on the outcome of studies [13,14]; see also [15,16]. In their study, Gøtzsche and his colleagues from the Nordic Cochrane Centre [12] also showed that industry-supported trials were cited more frequently than trials with other types of support, and that omitting them from the impact factor calculation decreased journal impact factors. For the NEJM the decrease was as much as 15% (versus 1% for the BMJ in 2007)! For the journals that provided data, income from the sale of reprints contributed 3% and 41% of total income for the BMJ and The Lancet, respectively.
    A recent study co-authored by Ben Goldacre (MD & science writer) [17] confirms that funding by the pharmaceutical industry is associated with high numbers of reprint orders. Again, only the BMJ and The Lancet provided all the necessary data.
  6. Finally and most relevant to the topic, a study [18], also discussed at Retraction Watch [19], showed that articles in journals with higher impact factors are more likely to be retracted, and, surprise surprise, the NEJM clearly stands on top. Although other factors, such as higher readership and scrutiny, may also play a role [20], this conflicts with Katan’s idea that “publishing in the NEJM is the best guarantee something is true”. (Both journal-level numbers involved, the impact factor and the retraction index, are sketched further below.)
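
To make point 1 concrete, here is a minimal sketch (in Python) of the model Ioannidis uses in [6] to compute the post-study probability that a claimed finding is true. The formula comes from that paper; the input values are illustrative assumptions of mine, not numbers from Katan’s trial.

    # Post-study probability (PPV) that a claimed research finding is true,
    # after Ioannidis [6]. R = pre-study odds that the relationship is real,
    # alpha = type I error rate, beta = type II error rate, u = bias factor.
    def ppv(R, alpha=0.05, beta=0.20, u=0.0):
        true_pos = (1 - beta) * R + u * beta * R  # true relationships reported as findings
        false_pos = alpha + u * (1 - alpha)       # null relationships reported as findings
        return true_pos / (true_pos + false_pos)

    # An adequately powered trial of a plausible (1:1 odds) hypothesis:
    print(round(ppv(R=1.0), 2))                   # 0.94
    # A small, biased study of a long shot in a "hot" field:
    print(round(ppv(R=0.1, beta=0.5, u=0.2), 2))  # 0.2: most such findings are false

The point of the sketch is that no journal, however selective, observes R, beta or u directly, so acceptance by the NEJM cannot “guarantee” a high PPV.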

I wasn’t aware of the latter study and would like to thank @DrVes and Ivan Oransky for responding to my crowdsourcing on Twitter.
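
For completeness, here is how the two journal-level numbers used above are defined: the standard two-year impact factor, and the “retraction index” of Fang and Casadevall [18]. The definitions are as published; all figures in the example are made-up placeholders, not real NEJM or BMJ data.

    # Two-year impact factor: citations received in year Y to items published
    # in years Y-1 and Y-2, divided by the citable items from those two years.
    def impact_factor(citations, citable_items):
        return citations / citable_items

    # Retraction index, after Fang & Casadevall [18]: retractions per 1,000
    # articles published in a given time window.
    def retraction_index(retractions, articles):
        return 1000 * retractions / articles

    # Hypothetical illustration of point 5: dropping a heavily cited,
    # industry-funded subset of trials [12] lowers the impact factor.
    print(impact_factor(25000, 500))          # 50.0
    print(impact_factor(25000 - 3750, 470))   # ~45.2, roughly a 10% drop
    print(retraction_index(10, 4000))         # 2.5 retractions per 1,000 articles

Both are journal-level averages computed after the fact; neither says anything about whether any particular article is true.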

References

  1. Sugary Drinks as the Culprit in Childhood Obesity? a RCT among Primary School Children (laikaspoetnik.wordpress.com)
  2. de Ruyter JC, Olthof MR, Seidell JC, & Katan MB (2012). A trial of sugar-free or sugar-sweetened beverages and body weight in children. The New England journal of medicine, 367 (15), 1397-406 PMID: 22998340
  3. Wim Köhler. Eén kilo lichter [Dutch]. NRC, Saturday 22-09-2012 (http://archief.nrc.nl/)
  4. Martijn Katan. Col hors Catégorie [Dutch]. NRC, 20 October 2012 (www.mkatan.nl)
  5. Hans van Maanen. Suiker uit fris [Dutch]. De Volkskrant, 29 September 2012 (freely accessible at http://www.vanmaanen.org/)
  6. Ioannidis, J. (2005). Why Most Published Research Findings Are False PLoS Medicine, 2 (8) DOI: 10.1371/journal.pmed.0020124
  7. Changes to the protocol http://clinicaltrials.gov/archive/NCT00893529/2011_02_24/changes
  8. Publication Bias. The Cochrane Collaboration open learning material (www.cochrane-net.org)
  9. de Ruyter JC, Olthof MR, Kuijper LD, & Katan MB (2012). Effect of sugar-sweetened beverages on body weight in children: design and baseline characteristics of the Double-blind, Randomized INtervention study in Kids. Contemporary clinical trials, 33 (1), 247-57 PMID: 22056980
  10. Pereira, T., Horwitz, R.I., & Ioannidis, J.P.A. (2012). Empirical Evaluation of Very Large Treatment Effects of Medical Interventions. JAMA: The Journal of the American Medical Association, 308 (16) DOI: 10.1001/jama.2012.13444
  11. Acupuncture Pseudoscience in the New England Journal of Medicine (sciencebasedmedicine.org)
  12. Lundh, A., Barbateskovic, M., Hróbjartsson, A., & Gøtzsche, P. (2010). Conflicts of Interest at Medical Journals: The Influence of Industry-Supported Randomised Trials on Journal Impact Factors and Revenue – Cohort Study PLoS Medicine, 7 (10) DOI: 10.1371/journal.pmed.1000354
  13. One Third of the Clinical Cancer Studies Report Conflict of Interest (laikaspoetnik.wordpress.com)
  14. Merck’s Ghostwriters, Haunted Papers and Fake Elsevier Journals (laikaspoetnik.wordpress.com)
  15. Lexchin, J. (2003). Pharmaceutical industry sponsorship and research outcome and quality: systematic review BMJ, 326 (7400), 1167-1170 DOI: 10.1136/bmj.326.7400.1167
  16. Smith R (2005). Medical journals are an extension of the marketing arm of pharmaceutical companies. PLoS medicine, 2 (5) PMID: 15916457 (free full text at PLOS)
  17. Handel, A., Patel, S., Pakpoor, J., Ebers, G., Goldacre, B., & Ramagopalan, S. (2012). High reprint orders in medical journals and pharmaceutical industry funding: case-control study BMJ, 344 (jun28 1) DOI: 10.1136/bmj.e4212
  18. Fang, F., & Casadevall, A. (2011). Retracted Science and the Retraction Index Infection and Immunity, 79 (10), 3855-3859 DOI: 10.1128/IAI.05661-11
  19. Is it time for a Retraction Index? (retractionwatch.wordpress.com)
  20. Agrawal A, & Sharma A (2012). Likelihood of false-positive results in high-impact journals publishing groundbreaking research. Infection and immunity, 80 (3) PMID: 22338040

——————————————–

* Addendum: my (unpublished) letter to the NRC, translated from Dutch

Tour de France.
After the NRC had earlier devoted two pages of praise to Katan’s new study, Katan felt the need to do it all over again in his own column. Referring to your own work is allowed, even in a column, but then we, the readers, should come away the wiser. So what is the message of this piece, “Col hors Catégorie”? It mainly describes the long road to getting a scientific study published in a top journal, in this case the New England Journal of Medicine (NEJM), “the Tour de France among medical journals”. The piece ends with a tackle on a journalist “who thought he knew better”. But ah, what does that matter when the whole world is cheering? Very unsporting, because that journalist (van Maanen, de Volkskrant) did in fact score on several points. Katan’s key claim, that an NEJM publication “is the best guarantee that something is true”, is also highly debatable. The NEJM does indeed have a high impact factor, a measure of how often articles are cited. But the NEJM also has the highest “article retraction” index. Furthermore, the NEJM has the highest percentage of industry-sponsored clinical trials, which inflate the overall impact factor. On top of that, top journals go mainly for “positive results” and “trendy topics”, which encourages publication bias. To extend the Tour de France comparison: completing this prestigious race is no guarantee that the participants have not used banned substances. Despite the strict doping controls.




Silly Sunday [33] Science, Journalists & Reporting

12 09 2010

On Friday I read a post by David Bradley at Sciscoop Science on six reasons why scientists should talk to reporters, which was based on an article in this week’s The Scientist magazine by Edyta Zielinska (registration required).

The main reasons why scientists should talk to reporters:

  • It’s your duty
  • It raises your profile with journal editors and funders
  • Your bosses will love it
  • You may pick up grant-writing tips
  • It gets the public excited about science
  • It’s better you than someone else

But the strongest part of the Zielinska article is the practical tips, which fall into three categories:

  • the medium matters (e.g. TV versus print)
  • getting the most out of a press call (KISS, significance metaphors)
  • common press pitfalls, and how to avoid them (avoid oversimplification, errors, jargon, misquotes, sensational stories)

The article concludes with a useful glossary. Read more: Why Trust A Reporter? – The Scientist

Alan Dangour has experienced what can happen when you report scientific evidence that is then covered by the news.

He and his group published systematic reviews that found no evidence of any important differences in the nutritional composition of foodstuffs grown using conventional and organic farming methods. There was also no evidence of nutrition-related health benefits from consuming organically produced foods.

The press quickly picked up on the story. The Times ran a front-page headline: “Organic food ‘has no extra health benefits’ ”, the Daily Express added “Official” while, in a wonderfully nuanced piece, the Daily Mail ran: “A cancerous conspiracy to poison your faith in organic food”.

Initially it was “tremendously exciting and flattering”, but their findings were contrary to beliefs held by many, and soon the hate mail started flooding in. That is why he concludes “Come on scientists, stand up and fight!” when not only the scientific evidence is called into question, but also your scientific skills and your personal and professional integrity. Quite appropriately, a Lancet editorial put it like this: “Eat the emotion but question the evidence”.

Journalists can also be the target of hate mail or aggressive comments. In the whole XMRV-CFS torrent, patients seem to almost “adore” positive journalists (e.g. Amy Dockser Marcus of the WSJ Health Blog), while harassing those who are a bit more critical, like @elmarveerman of Noorderlicht, author of “tiring viruses”. It has caused another journalist, who wrote about the same topic, to stop, because people hurled curses at her. A good discussion is fine, but unfounded criticism is not, she reasoned.

Last week, two other articles emphasized the need for science journalism to change.

One article, by Matthew Nisbet at Big Think, elaborated on what Alice Bell calls “upstream science journalism”. Her blog post is based on her talk at Science Online London, as part of a plenary panel with David Dobbs, Martin Robbins and Ed Yong on “rebooting” (aka the future of) science journalism (video, of poor quality, included).

Upstream, we have the early stages of communication about some area of science: meetings, literature reviews or general lab gossip. Gradually these ideas are worked through, and the communicative output flows downstream towards the peer-reviewed and published journal article and perhaps, via a press release and maybe even a press conference, some mass media reporting.

This is still pretty vague to me. I think fewer pushed press releases, copied by each and every news source, and more background stories, giving insight into how science comes about and what it represents, would be welcome. As long as it isn’t too much like the glorification of certain personalities. (More) gossip is also not what we’re waiting for.

Her examples, and the interesting discussion that follows, clarify that she mostly thinks of blogs and Twitter as tools propelling upstream science journalism.

One main objection (or rather limitation) is that “most science journalists/writers cover whatever they find interesting and what they believe their readers will find interesting” (Ian Sample in the comments).

David Ropeik:

Wonderful goal, to have journalism serve society in this, or any way, but, forgive me, it’s a naive hope, common among those who observe journalism but haven’t done it. (…)
Even those of us who feel journalism is a calling and serves an important civic role do not see ourselves principally as teachers or civil servants working in the name of some higher social cause, to educate the public about stuff we thought they should know. We want the lead story. We want our work to get attention. We want to have impact, sure, hopefully positive. But we don’t come into work everyday asking “what should the public know about?”

That’s reality. John Fleck (journalist) agrees that the need to “get a lot of attention” is a driving force in newsroom culture and decision-making, but stresses that the newspapers he worked for have always devoted a portion of their resources to things managers felt were important even if not attention-getting.

So is the truth somewhere in the middle?

Another blog post, at Jay Rosen’s Public Notebook, gives advice to the journalists “formerly known as the media”. Apart from advice such as “you need to be blogging” and “you need to ‘get’ mobile”, he wants the next generation of journalists to understand:

  1. Replace readers, viewers, listeners and consumers with the term “users.”
  2. Remember: the users know more than you do.
  3. There’s been a power shift; the mutualization of journalism is here. We bring important things to the table, and so do the users. Therefore we include them. “Seeing people as a public” means that.
  4. Describe the world in a way that helps people participate in it.  When people participate, they seek out information.
  5. Anyone can doesn’t mean everyone will. (…) It’s an emerging rule of thumb that suggests that if you get a group of 100 people online then one will create content, 10 will ‘interact’ with it (commenting or offering improvements) and the other 89 will just view it… So what’s the conclusion? Only that you shouldn’t expect too much online.
  6. The journalist is just a heightened case of an informed citizen, not a special class.
  7. Your authority starts with, “I’m there, you’re not, let me tell you about it.”
  8. Somehow, you need to listen to demand and give people what they have no way to demand (…) because they don’t know about it yet
  9. In your bid to be trusted, don’t take the View From Nowhere; instead, tell people where you’re coming from.
  10. Breathe deeply of what de Tocqueville said: “Newspapers make associations and associations make newspapers.”

I think those are useful and practical tips, some of which fit in with the idea of more upstream journalism.

O.k., that’s enough for now. We have been pretty serious on the topic, but this is a Friday Fun / Silly Sunday post, so bring on the comics.

These are self-explanatory, aren’t they?

(HT: David Bradley and a commenter on Facebook; I can’t find the comment anymore, as Facebook is hard to search.)

From SMBC comics: http://www.smbc-comics.com/index.php?db=comics&id=1623

