Much Ado About ADHD Research: Is There Misrepresentation of ADHD in Scientific Journals?

9 02 2011

The reliability of science is increasingly under fire. We all know that the media often give a distorted picture of scientific findings (e.g. Hot news: Curry, Curcumin, Cancer & cure). But there is also an ever-growing number of scientific misreports and even cases of fraud (see the BMJ editorial announcing the retraction of the Wakefield paper about a causal relation between MMR vaccination and autism). Apart from real scientific misconduct, there are also Ghost Marketing and “publication bias”, which makes (large) positive studies easier to find than studies with negative or non-significant results.
Then there are the ever-growing contradictions that make the public sigh: what IS true in science?

Indeed, according to doctor John Ioannidis, “much of what medical researchers conclude in their studies is misleading, exaggerated, or flat-out wrong” (see “Lies, Damned Lies, and Medical Science” in The Atlantic, 2010). In 2005 he wrote the famous PLoS article “Why Most Published Research Findings Are False” [2].

With Ioannidis as an editor, a new PLoS ONE paper on the topic has recently been published [1]. The authors, Gonon, Bezard and Boraud, state that there is often a huge gap between neurobiological facts and the firm conclusions stated by the media. They suggest that the misrepresentation often starts in the scientific papers themselves and is echoed by the media.

Although this article has already been reviewed by another researchblogger (Hadas Shema), I would like to give my own views on this paper.

Gonon et al. found three types of misrepresentation.*

1. Internal inconsistencies (between results and claimed conclusions).

In a (non-systematic) review of 360 ADHD articles, Gonon et al. [1] found two studies with “obvious” discrepancies between results and claimed conclusions. One paper claimed that dopamine is depressed in the brain of ADHD patients. Qualifications were only mentioned in the results section, and of course only the positive message was echoed by the media, without further questioning of any alternative explanation (in this case a high baseline dopamine tone). The other paper [3] claimed that treatment with stimulant medications was associated with more favorable long-term school outcomes. However, the average reading score and the school drop-out rate did not differ significantly between the treatment and control groups. The newspapers nevertheless trumpeted that “ADHD drugs help boost children’s grades”.

2. Fact Omission

To quantify fact omission in the scientific literature, Gonon et al. systematically searched for ADHD articles mentioning the D4 dopamine receptor (DRD4) gene. Among the 117 primary human studies with actual data (such as odds ratios), 74 stated in their summary that alleles of the DRD4 gene are significantly associated with ADHD, but only 19 summaries mentioned that the risk was small. Fact omission was even more prevalent in articles that only cite the DRD4 studies. Not surprisingly, 82% of the media articles didn’t report that DRD4 only confers a small risk either.
In accordance with Ioannidis’ findings [2], Gonon et al. found that the most robust effects were reported in the initial studies: odds ratios decreased from 2.4 in the oldest study (1996) to 1.27 in the most recent meta-analysis.
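To get a feel for how small an odds ratio of 1.27 really is, the sketch below computes an odds ratio from a hypothetical 2×2 table. All counts are invented for illustration; they are not data from any of the DRD4/ADHD studies discussed above.

```python
# Minimal sketch: what an odds ratio of ~1.27 means in absolute terms.
# All counts below are invented for illustration; they are NOT data
# from the DRD4/ADHD studies discussed in the text.

def odds_ratio(a: int, b: int, c: int, d: int) -> float:
    """Odds ratio from a 2x2 table: (a/b) / (c/d).

    a, b: cases and non-cases among carriers of the risk allele
    c, d: cases and non-cases among non-carriers
    """
    return (a / b) / (c / d)

# Hypothetical sample: 1000 carriers and 1000 non-carriers,
# with a 5% baseline rate of ADHD among non-carriers.
a, b = 62, 938   # carriers:      62 cases, 938 non-cases
c, d = 50, 950   # non-carriers:  50 cases, 950 non-cases

print(f"odds ratio    = {odds_ratio(a, b, c, d):.2f}")  # ~1.26
print(f"absolute risk = {a/(a+b):.1%} (carriers) vs {c/(c+d):.1%} (non-carriers)")
# An odds ratio of ~1.27 corresponds here to roughly one percentage
# point of extra absolute risk -- easy to overstate in an abstract.
```

In other words, even a statistically significant association can be almost negligible in absolute terms, which is exactly the nuance the summaries omitted.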

3. Extrapolating basic and pre-clinical findings to new therapeutic prospects

Animal ADHD models have their limitations, because investigations based on mouse behavior cannot capture the complexity of ADHD. An analysis of all ADHD-related studies in mice showed that 23% of the conclusions were overstated. The frequency of this overstatement was positively related to the impact factor of the journal.

Again, the positive message was copied by the press (see the figure in Gonon et al. [1]).

Discussion

The article by Gonon et al. is another example that “published research findings are false” [2], or at least not completely true. The authors show that the press isn’t culprit number one, but that it “just” copies the overstatements in the scientific abstracts.

The merit of Gonon et al. is that they have looked extensively at a large number of articles and at the press articles citing them.

The first type of misrepresentation wasn’t studied systematically, but types 2 and 3 were, by analyzing papers on a specific ADHD topic obtained through a systematic search.

One of the solutions the authors propose is that “journal editors collectively reject sensationalism and clearly condemn data misrepresentation”. I agree, and would like to add that reviewers should check that the summary actually reflects the data. Some journals already have strict criteria in this respect. It struck me that the few summaries I checked were very unstructured and short, unlike most summaries I see. Possibly, unstructured abstracts are more typical of journals about neuroscience and animal research.

The choice of the ADHD topics investigated doesn’t seem random. A previous review [4], written by Francois Gonon, deals entirely with “the need to reexamine the dopaminergic hypothesis of ADHD”. The type 1 misrepresentation data stem from this opinion piece.

The putative ADHD-DRD4 gene association and the animal studies, taken as examples of type 2 and type 3 misrepresentations respectively, can also be seen as topics of the “ADHD is a genetic disease” kind.

Gonon et al. clearly favor the hypothesis that ADHD is primarily caused by environmental factors. In his opinion piece [4], Gonon starts by saying:

This dopamine-deficit theory of ADHD is often based upon an overly simplistic dopaminergic theory of reward. Here, I question the relevance of this theory regarding ADHD. I underline the weaknesses of the neurochemical, genetic, neuropharmacological and imaging data put forward to support the dopamine-deficit hypothesis of ADHD. Therefore, this hypothesis should not be put forward to bias ADHD management towards psychostimulants.

I wonder whether it is fair of the authors to limit the study to ADHD topics they oppose, in order to (indirectly) confirm their “ADHD has a social origin” hypothesis. Indeed, in the paragraph “social and public health consequences”, Gonon et al. state:

Unfortunately, data misrepresentation biases the scientific evidence in favor of the first position stating that ADHD is primarily caused by biological factors.

I do not think this conclusion is justified by their findings, since similar data misrepresentation might also occur in papers investigating social causes or treatments, but this was not examined. (mmm, a misrepresentation of the third kind??)

I also wonder why impact factor data were only given for the animal studies.

Gonon et al. interpret a lot, also in their results section. For instance, they mention that 2 out of 360 articles show obvious discrepancies between results and claimed conclusions. This is not much. Then they reason:

Our observation that only two articles among 360 show obvious internal inconsistencies must be considered with caution however. First, our review of the ADHD literature was not a systematic one and was not aimed at pointing out internal inconsistencies. Second, generalization to other fields of the neuroscience literature would be unjustified

But this is exactly what they do. See the title:

“Misrepresentation of Neuroscience Data Might Give Rise to Misleading Conclusions in the Media”

Furthermore, they report selectively themselves. The Barbaresi paper [3], a large retrospective cohort study, did not find an effect on average reading scores and school drop-outs, but it did find significantly lowered grade retention, which is, after all, an important long-term school outcome.

Misrepresentation of type 2 (“omission”), I would say.*

References

  1. Gonon, F., Bezard, E., & Boraud, T. (2011). Misrepresentation of Neuroscience Data Might Give Rise to Misleading Conclusions in the Media: The Case of Attention Deficit Hyperactivity Disorder. PLoS ONE, 6(1). DOI: 10.1371/journal.pone.0014618
  2. Ioannidis, J. (2005). Why Most Published Research Findings Are False. PLoS Medicine, 2(8). DOI: 10.1371/journal.pmed.0020124
  3. Barbaresi, W., Katusic, S., Colligan, R., Weaver, A., & Jacobsen, S. (2007). Modifiers of Long-Term School Outcomes for Children with Attention-Deficit/Hyperactivity Disorder: Does Treatment with Stimulant Medication Make a Difference? Results from a Population-Based Study. Journal of Developmental & Behavioral Pediatrics, 28(4), 274-287. DOI: 10.1097/DBP.0b013e3180cabc28
  4. Gonon, F. (2009). The dopaminergic hypothesis of attention-deficit/hyperactivity disorder needs re-examining. Trends in Neurosciences, 32(1), 2-8. DOI: 10.1016/j.tins.2008.09.010


[*A short comment in the NRC Handelsblad (February 5th) comes to a similar conclusion.]



9 responses

9 02 2011
Hadas Shema

Excellent analysis. I think that the things we should take from the paper are:

A. The media really should stop taking scientists’ statements as gospel, even if said statements survived peer-review.

B. Scientists should be more careful about their statements.

Thank you for the trackback!
Hadas

10 02 2011
KAL

Some really excellent points as always. People need to be very careful when reading in the bio vs. psychosocial wars (Brandt et al 2000). Conflicting points of view don’t necessarily mean “wrong,” but ideological viewpoint colors how anyone views the world, no matter what the ideology or the position. This analysis is right on target in that regard.

And you are absolutely correct about abstracts that sell sizzle without steak. I run across them all the time. As well, LOCF (last observation carried forward) is another source of data contamination that may be a form of misrepresentation, regardless of whether the subject is, for example, drug efficacy (bio) or Cognitive Behavioral Therapy (psychosocial).

One final note as a journalist who writes about health and medicine. Read any news article. A professional journalist does not, nor should they, interject their personal opinion unless it is commentary – they use expert sources and they provide balance by approaching the subject from multiple viewpoints through their sources.

Don’t assume journalists have ESP – if a source has confirmation bias or an ideologically colored point of view, red flags don’t automatically emerge from the source’s forehead.

And if you, the reader, don’t like what was said, it does not automatically make it bad journalism – it means you disagree with the sources cited. Expecting journalists to somehow divine misconduct even before the scientific community comes to consensus is silly. Misconduct and differing viewpoints are not necessarily the same thing.

Covering health and medicine has much in common with covering politics IMHO.

14 02 2011
Tasha

Very interesting and a little scary! On a personal note, I can see the other side as well. When submitting papers for publication I sometimes find myself in a difficult situation when a reviewer asks me to put in the ‘clinical interpretation’ of my study’s results. Sometimes this is really tough, as it can mean making a pretty big leap to talk about the results in terms of clinical meaning when the study might just be evaluating step 1 of 4 (where step 4 allows you to make conclusions regarding clinical relevance).

14 02 2011
laikaspoetnik

Thanks all for your comments.

@Hadas I agree with your “take home messages”.
However, kindly “asking” this of authors and leaving it up to them is not enough. There should be standards that “force” authors to mention precise data, also in the abstract. For instance, they should state the exact odds ratio, the number needed to treat, or whatever the outcome measure is, in the abstract. Most clinical journals already demand this.
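As a minimal illustration of why such numbers matter, the sketch below computes the number needed to treat (NNT) from the absolute risk reduction (ARR). The event rates are invented for illustration and do not come from any study discussed here:

```python
# Minimal sketch with invented numbers: the number needed to treat (NNT)
# is the reciprocal of the absolute risk reduction (ARR).

control_event_rate = 0.20   # hypothetical: 20% of controls have the outcome
treated_event_rate = 0.15   # hypothetical: 15% of treated patients do

arr = control_event_rate - treated_event_rate   # 0.05
nnt = 1 / arr                                   # 20.0

print(f"ARR = {arr:.2f}, NNT = {nnt:.0f}")
# You would need to treat 20 patients to prevent one event -- a concrete
# figure that belongs in the abstract alongside any claim of benefit.
```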
Furthermore, I don’t think this article is the first to show misrepresentation of data. I’m also a little uncomfortable that the authors only looked at articles about the dopaminergic and genetic hypotheses of ADHD. They favor the opposite, social hypothesis of ADHD.

@kal Nice to hear the experience of a journalist (I didn’t know you were one). I agree with your points. Indirectly you show what I argued above: that these findings are not terribly new.

@tasha (sorry for not being able to respond at Facebook).
I have had similar experiences. My first articles were written in the pre-computer era, when publishers imposed few rules. They consisted of 25% data, 25% selling of the message and 50% discussion, because at that time I thought that discussion was what science was about. Later I realized that people only read part of the paper, and that the actual data were the most important. Still, you want to make your paper stand out. Otherwise it stands no chance of being accepted and published in a good journal.

(Similar to your experience with reviewers)
Once, at a press conference, a reporter asked me how I could translate the findings to the public. That is pretty hard for a mouse study (cancer prevention by certain nutrients), so I remained pretty vague. In other words, the media also try to squeeze speculations out of scientists. Dull data don’t sell.

16 02 2011
KAL

“In other words the media also try to squeeze out speculations from scientists.”

Not necessarily. Reporters who have a general demographic have to write at an eighth grade level in U.S. terms – that’s the grade for 13-year-olds. If you can’t make your key findings understandable at that level it is hard to get your work published in the lay press – dull or not. That is on you as a science communicator.

Next time a reporter asks you such a question, place yourself in the position of being a science teacher for 13-year-olds.

In your example, many people don’t understand what mice have to do with people, nor do many understand the importance of basic research in informing other types of research.

If you can put that in simple terms without being condescending you will become every reporter’s favorite. And you will inform the lay public at the same time.

As a journalist I had to learn not to speculate about people’s motives particularly when I’m not familiar with the protocols of their field. Just ask. Many times the answer will not be the one you assumed.😀

31 08 2011
laikaspoetnik

@kal (a bit of a late reply) This was the last question in a short telephone interview after a press conference. The intention of the interviewer really was to squeeze out that if people consumed the nutrient it would prevent prostate cancer. I gave a simple and clear answer, though: we can’t tell from mouse studies, but there is an RCT going on (secondary prevention). Meanwhile we advise a healthy lifestyle overall, including taking this nutrient in sufficient amounts. It is present in ….
I just meant they really want to hear the translation, the message to the public even if it isn’t there (yet).

31 08 2011
KAL

I agree that some reporters do try to juice up a story to make it sound more important. They shouldn’t, but it does happen.

You may have also run into a reporter who already had the article written in their head and was trying to get quotes to “fill in the blanks.” Sometimes people have a hard time switching gears when the story they thought was there is not what they envisioned and so they keep trying to make the facts fit their initial assumptions. Shouldn’t happen, but it does.

Educating health/science/medical reporters as to how to accurately report studies is an ongoing process. I may play devil’s advocate with you on occasion, but you do a great job.

7 06 2011
To Retract or Not to Retract… That’s the Question « Laika's MedLibLog

[…] researchers conclude in their studies is misleading, exaggerated, or flat-out wrong”. ( see previous post [12],  “Lies, Damned Lies, and Medical Science” [13])  and Ioannides’ crushing article […]

29 08 2011
Steve Evans

I am not so sure that journalists consistently report the positive. There was a discussion tonight on UK TV which gave an example of the opposite, with reference to mobile phones and cancer. Apparently, many people remain worried that their phones may be giving them cancer, yet a large study has been done which pretty convincingly confirms that the cancer risk, even for heavy users, is minimal. The press got a heavy drubbing for this, as it was felt that they had not reported the good news with anywhere near as high a profile as the earlier scare stories of a few years back.
