BAD Science or BAD Science Journalism? – A Response to Daniel Lakens

10 02 2013

Two weeks ago there was a hot debate among Dutch tweeps on "bad science, bad science journalism and bad science communication". This debate was started and fueled by several Dutch blog posts on the topic [1,4-6].

A controversial post, with both fierce proponents and fierce opponents, was the one by Daniel Lakens [1], an assistant professor in Applied Cognitive Psychology.

I was among the opponents. Not because I don't welcome a fresh point of view, but because Daniel's reasoning is flawed and because he continuously compares apples and oranges.

Since Twitter debates lack depth and structure, and since I cannot comment on his Google Sites blog, I will pursue the discussion here.

The title of Daniel's post is (freely translated, like the rest of his post):

"Is this what one calls good science?"

In his post he criticizes a Dutch science journalist, Hans van Maanen, and specifically his recent column [2], where Hans discusses a paper published in Pediatrics [3].

This longitudinal study tested the Music Marker theory among 309 Dutch kids. The researchers gathered information about the kids' favorite types of music and tracked incidents of "minor delinquency", such as shoplifting or vandalism, from the time they were 12 until they reached age 16 [4]. The researchers conclude that liking music that goes against the mainstream (rock, heavy metal, gothic, punk, African American music, and electronic dance music) at age 12 is a strong predictor of future minor delinquency at 16, in contrast to chart pop, classical music, and jazz.

The university press office sent out a press release [5], which was picked up by news media [4,6], and one of the Dutch authors of the study, Loes Keijsers, tweeted enthusiastically: "Want to know whether a 16-year-old will suffer from delinquency? Then look at his music taste at age 12!"

According to Hans, Loes could easily have broadcast (more) balanced tweets, like "Music preference doesn't predict shoplifting" or "12 year olds who like Bach keep quiet about shoplifting when 16". But even these, Hans argues, wouldn't have been scientifically underpinned.

In column style Hans explains why he thinks the study isn't methodologically strong: no absolute numbers are given; 7 out of 11 (!) music styles are positively associated with delinquency, but the correlations are not impressive: the strongest predictor (gothic music preference) explains no more than 9% of the variance in delinquent behaviour, which can include anything from shoplifting and vandalism to fighting, spraying graffiti and switching price tags. Furthermore, the risks of later "delinquent" behavior are small: on a scale from 1 (never) to 4 (4 times or more), the mean risk was 1.12. Hans also wonders whether it is a good idea to monitor kids with a certain music taste.
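To put that 9% in perspective: variance explained is simply the square of the correlation coefficient, so even the strongest "predictor" corresponds to a modest correlation. A quick back-calculation (my own arithmetic, not a figure from the paper):

$$r = \sqrt{r^2} = \sqrt{0.09} = 0.30$$

In other words, over 90% of the variance in delinquent behaviour is left unexplained by even the strongest music-preference predictor.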

Thus Hans concludes "this study isn't good science". Daniel, however, concludes that Hans' writing is not good science journalism.

First Daniel recalls that he and other PhD students took a course on how to peer review scientific papers. On the basis of their peer review of a (published) article, 90% of the students decided to reject it. The two main lessons Daniel learned were:

  • It is easy to criticize a scientific paper and grind it down. No single contribution to science (no single article) is perfect.
  • New scientific insights, although imperfect, are worth sharing, because they help to evolve science. *¹

According to Daniel, science journalists often make the same mistake as the peer-reviewing PhD students: criticizing individual studies without a "meta-view" on science.

Peer review and journalism, however, are different things (apples and oranges, if you like).

Peer review (with all its imperfections) serves to filter, check and improve the quality of individual scientific papers, usually before they are published [10]. My own papers generally made it through peer review. Of course there were negative reviewers, often the ignorant ones, and the naggers, but many reviewers offered critique that helped to improve my paper, sometimes substantially. As a peer reviewer myself I try to separate the wheat from the chaff and to enhance the quality of the papers that pass.

Science journalism also has a filter function: it filters already peer-reviewed scientific papers* for its readership, "the public", by selecting novel, relevant science and translating the jargon-laden scientific language into language readers can understand and appreciate. Of course science journalists should also put the publication into perspective (call it "meta").

Surely the PhD students' finger exercise resembles the normal peer review process about as much as peer review resembles science journalism.

I understand that pure nitpicking seldom serves a purpose, but nitpicking rarely occurs in science journalism. The opposite, however, is commonplace.

Daniel disapproves of Hans van Maanen's criticism, because Hans isn't "meta" enough. Daniel: "Arguing whether an effect size is small or mediocre is nonsense, because no individual study gives a good estimate of the effect size. You need to do more research and combine the results in a meta-analysis."

Apples and oranges again.

Being “meta” has little to do with meta-analysis. Being meta is … uh … pretty meta. You could think of it as seeing beyond (meta) the findings of one single study*.

A meta-analysis, however, is a statistical technique for combining the findings of independent but comparable (homogeneous) studies in order to estimate the true effect size more powerfully (pretty exact). This is an important but difficult methodological task for a scientist, not a journalist. If a meta-analysis on the topic exists, journalists should take it into account, of course (and so should the researchers). If not, they should put the single study in broader perspective (what does the study add to existing knowledge?) and show why this single study is or is not well done.
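For readers who want to see what that "combining" actually involves, here is a minimal sketch (in Python, with invented study numbers purely for illustration) of the simplest form of meta-analysis: fixed-effect, inverse-variance pooling. Real meta-analyses add heterogeneity checks, random-effects models and bias assessments on top of this.

```python
import math

def fixed_effect_meta(effects, variances):
    """Pool study effect sizes with inverse-variance (fixed-effect) weighting.

    effects   -- per-study effect estimates (e.g. Fisher z-transformed correlations)
    variances -- per-study sampling variances
    Returns the pooled effect and its 95% confidence interval.
    """
    weights = [1.0 / v for v in variances]  # more precise studies weigh more
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))      # standard error of the pooled effect
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Three hypothetical studies with small effects and modest precision:
pooled, ci = fixed_effect_meta([0.30, 0.15, 0.22], [0.010, 0.008, 0.012])
print(f"pooled effect = {pooled:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```

The point of the exercise: no single study's effect size is taken at face value; each is weighted by its precision, which is exactly why "arguing about one study's effect size" and "doing a meta-analysis" are different activities.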

Daniel takes this further by stating that "one study is no study" and that journalists who simply echo the press release of a study, as well as journalists who merely criticize a single publication (like Hans), are clueless about science.

Apples and oranges! How can one lump science communicators (“media releases”), echoing journalists (“the media”) and critical journalists together?

I see more value in a critical analysis than in blind rejoicing over hot air, as long as the criticism guides the reader in appraising the study.

And if there is just one single novel study that seems important enough to get media attention, shouldn't we judge the research on its own merits?

Then Daniel asks himself: "If I do criticize those journalists, shouldn't I criticize those scientists who published just a single study and wrote a press release about it?"

His conclusion? “No”.

Daniel explains: science never provides absolute certainty; at most the evidence is strong enough to state what is likely true. This can only be achieved by a lot of research by different investigators.

Therefore you should believe in your ideas and encourage other scientists to pursue your findings. It doesn't help to say that music preference doesn't predict shoplifting; it does help to use the media to draw attention to your research. Many researchers are now aware of the "Music Marker Theory", so the press release had its desired effect. By expressing a firm belief in their conclusions, the authors encourage other scientists to spend their scarce time on this topic. These scientists will try to replicate and falsify the study, an essential step in cumulative science. At a time when science is under pressure, scientists shouldn't stop writing enthusiastic press releases or tweets.

The latter paragraph is sheer nonsense!

A journalist's critical analysis of one study isn't what undermines public confidence in science. Rather, it's the media circus that blows the implications of scientific findings out of proportion.

As exemplified by the hilarious PhD comic below, research results are propagated by PR (science communication), picked up by the media, broadcast, and spread via the internet. At the end of the cycle, conclusions are reached that are not backed up by (sufficient) evidence.

PhD Comics – The News Cycle

Daniel is right about some things. First, one study is indeed no study, in the sense that concepts are continuously tested and corrected: falsification is a central property of science (Popper). He is also right that science doesn't offer absolute certainty (an aspect often not understood by the public). And yes, researchers should believe in their findings and encourage other scientists to check and repeat their experiments.

Though not primarily via the media, but via the normal scientific route. Good scientists keep track of new findings in their field anyway. Imagine if only findings trumpeted in the media were pursued by other scientists!

[Screenshot: media & science]

And authors shouldn't make overstatements. They shouldn't raise expectations to a level that cannot be met. The Dutch study only shows weak associations. It simply isn't true that the Dutch study allows us to "predict" at an individual level whether a 12-year-old will "act out" at 16.

This doesn't help laypeople to understand the findings and to appreciate science.

The idea that the media should just serve to spotlight a paper seems objectionable to me.

Going back to the meta-level: what about the role of science communicators, media, science journalists and researchers?

According to the journalist Maarten Keulemans, we should just get rid of all science communicators as a layer between scientists and journalists [7]. But Michel van Baal [9] and Roy Meijer [8] have a point when they say that journalists do a lot of PR too and should do better than rehash news releases.*²

Now what about Daniel's criticism of van Maanen? In my opinion, van Maanen is one of those rare critical journalists who serve as an antidote to uncritical media diarrhea (see the comic above). He is comparable to another lone voice in the media: Ben Goldacre. It didn't surprise me that Daniel didn't approve of him (or his book Bad Science) either [11].

Does this mean that I find Hans van Maanen a terrific science journalist? No, not really. I often agree with him (e.g. see this post [12]). He is one of those rare journalists with real expertise in research methodology. However, his columns don't seem to be written for a large audience: they seem too complex for most lay people. One thing I learned during a science journalism course is that one should explain all jargon to one's audience.

Personally, I find this critical Dutch blog post [13] about the Music Marker Theory far more balanced. After a clear description of the study, Linda Duits concludes that the results are pretty obvious, but that the mini-hype surrounding this research is caused by the positive tone of the press release. She stresses that prediction is not predetermination and that the musical genres themselves are not important: hip-hop doesn't lead to criminal activity, nor metal to vandalism.

And this critical piece in Jezebel [14] reaches far more people by talking in plain, colourful language, hilarious at times.

It also has a swell title: "Delinquents Have the Best Taste in Music". Now that is an apt conclusion!

———————-

*¹ Since Daniel refers neither to open (trial) data access nor to the imperfections of peer review itself, I ignore these aspects for the sake of the discussion.

*² Coincidence? Keulemans covered the music marker study quite uncritically (positively) [6].

Photo Credits

http://www.phdcomics.com/comics/archive.php?comicid=1174

References

  1. Daniel Lakens: Is dit nou goede Wetenschap? - Jan 24, 2013 (sites.google.com/site/lakens2/blog)
  2. Hans van Maanen: De smaak van boefjes in de dop. De Volkskrant, Jan 12, 2013 (vanmaanen.org/hans/columns/)
  3. ter Bogt, T., Keijsers, L., & Meeus, W. (2013). Early Adolescent Music Preferences and Minor Delinquency. Pediatrics. DOI: 10.1542/peds.2012-0708
  4. Lindsay Abrams: Kids Who Like ‘Unconventional Music’ More Likely to Become Delinquent, the Atlantic, Jan 18, 2013
  5. Muziekvoorkeur belangrijke voorspeller voor kleine criminaliteit. Jan 8, 2013 (pers.uu.nl)
  6. Maarten Keulemans: Muziek is goede graadmeter voor puberaal wangedrag – De Volkskrant, Jan 12, 2013 (volkskrant.nl)
  7. Maarten Keulemans: Als we nou eens alle wetenschapscommunicatie afschaffen? – Jan 23, 2013 (denieuwereporter.nl)
  8. Roy Meijer: Wetenschapscommunicatie afschaffen, en dan? – Jan 24, 2013 (denieuwereporter.nl)
  9. Michel van Baal. Wetenschapsjournalisten doen ook aan PR – Jan 25, 2013 (denieuwereporter.nl)
  10. What peer review means for science (guardian.co.uk)
  11. Daniel Lakens. Waarom raadde Maarten Keulemans me Bad Science van Goldacre aan? Oct 25, 2012
  12. Why Publishing in the NEJM is not the Best Guarantee that Something is True: a Response to Katan - Sept 27, 2012 (laikaspoetnik.wordpress.com)
  13. Linda Duits: Debunk: worden pubers crimineel van muziek? (dieponderzoek.nl)
  14. Lindy West: Science: "Delinquents Have the Best Taste in Music" (jezebel.com)




Why Publishing in the NEJM is not the Best Guarantee that Something is True: a Response to Katan

27 10 2012

In a previous post [1] I reviewed a recent Dutch study published in the New England Journal of Medicine (NEJM) [2] about the effects of sugary drinks on the body mass index of school children.

The study was widely covered by the media. The NRC, for which the main author Martijn Katan works as a science columnist, spent two full (!) pages on the topic, without a single critical comment [3].
As if this wasn't enough, Katan's latest column again dealt with his article (text freely available at mkatan.nl) [4].

I found Katan's column "Col hors Catégorie" [4] quite arrogant, especially because he tried to belittle a "know-it-all" journalist (as he called him) who had criticized his work in a rival newspaper. This wasn't fair, because the journalist had raised important points about the work [5,1].

The piece focussed on the long road of getting papers published in a top journal like the NEJM.
Katan considers the NEJM the "Tour de France" among medical journals: it is a top achievement to publish in this journal.

Katan also states that “publishing in the NEJM is the best guarantee something is true”.

I think the latter statement is wrong for a number of reasons.*

  1. First, most published findings are false [6]. Thus journals can never “guarantee”  that published research is true.
    Factors that make it less likely that research findings are true include a small effect size, a greater number and lesser preselection of tested relationships, selective outcome reporting, and the "hotness" of the field (all applying more or less to Katan's study; he also changed the primary outcomes during the trial [7]), as well as a small study size, a great financial interest, and a low pre-study probability (not applicable here).
  2. It is true that the NEJM has a very high impact factor. This is a measure of how often papers in a journal are cited by others (see the formula sketched after this list). Of course researchers want to get their paper published in a high-impact journal. But journals with high impact factors often go for trendy topics and positive results. In other words, it is far more difficult to publish a good-quality study with negative results, certainly in an English-language high-impact journal. This is called publication bias (and language bias) [8]. Positive studies will also be cited more frequently (citation bias) and are more likely to be published more than once (multiple publication bias) (indeed, Katan et al. had already published about the trial [9], and have not presented all their data yet [1,7]). All these forms of bias distort the "truth".
    (This is why the search for a (Cochrane) systematic review must be very sensitive [8] and not restricted to core clinical journals, and should even include unpublished studies: these studies might be "true" but have failed to get published.)
  3. Indeed, the group of Ioannidis just published a large-scale statistical analysis [10] showing that medical studies reporting "very large effects" seldom stand up when other researchers try to replicate them. Studies with large effects often measure laboratory and/or surrogate markers (like BMI) instead of truly clinically relevant outcomes (diabetes, cardiovascular complications, death).
  4. More specifically, the NEJM does regularly publish studies about pseudoscience or bogus treatments. See for instance this blog post [11] at Science-Based Medicine on Acupuncture Pseudoscience in the New England Journal of Medicine (which, by the way, is just a review). A publication in the NEJM doesn't guarantee it isn't rubbish.
  5. Importantly, the NEJM has the highest proportion of trials (RCTs) with sole industry support (35%, compared to 7% in the BMJ) [12]. On several occasions I have discussed these conflicts of interest and their impact on the outcome of studies [13,14; see also 15,16]. In their study, Gøtzsche and his colleagues from the Nordic Cochrane Centre [12] also showed that industry-supported trials were more frequently cited than trials with other types of support, and that omitting them from the impact factor calculation decreased journal impact factors. The decrease was even 15% for the NEJM (versus 1% for the BMJ in 2007)! For the journals that provided data, income from the sales of reprints contributed 3% and 41% of the total income for the BMJ and The Lancet, respectively.
    A recent study co-authored by Ben Goldacre (MD and science writer) [17] confirms that funding by the pharmaceutical industry is associated with high numbers of reprint orders. Again, only the BMJ and The Lancet provided all the necessary data.
  6. Finally, and most relevant to the topic, a study [18], also discussed at Retraction Watch [19], shows that articles in journals with higher impact factors are more likely to be retracted and, surprise surprise, the NEJM clearly stands on top. Although other factors, like higher readership and scrutiny, may also play a role [20], this conflicts with Katan's idea that "publishing in the NEJM is the best guarantee something is true".
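As an aside to point 2: the standard two-year impact factor is a simple ratio, which is exactly why a handful of heavily cited (e.g. industry-sponsored) trials can inflate it. For a journal in, say, 2012:

$$\mathrm{IF}_{2012} = \frac{\text{citations received in 2012 by items published in 2010 and 2011}}{\text{citable items published in 2010 and 2011}}$$

A few blockbuster trials in the numerator lift the whole journal's score, even though the median article is cited far less often.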

I wasn't aware of the latter study and would like to thank DrVes and Ivan Oransky for responding to my crowdsourcing on Twitter.

References

  1. Sugary Drinks as the Culprit in Childhood Obesity? a RCT among Primary School Children (laikaspoetnik.wordpress.com)
  2. de Ruyter JC, Olthof MR, Seidell JC, & Katan MB (2012). A trial of sugar-free or sugar-sweetened beverages and body weight in children. The New England journal of medicine, 367 (15), 1397-406 PMID: 22998340
  3. Wim Köhler. Eén kilo lichter. NRC, Sept 22, 2012 (archief.nrc.nl)
  4. Martijn Katan. Col hors Catégorie [Dutch]. NRC, Oct 20, 2012 (www.mkatan.nl)
  5. Hans van Maanen. Suiker uit fris. De Volkskrant, Sept 29, 2012 (freely accessible at http://www.vanmaanen.org/)
  6. Ioannidis, J. (2005). Why Most Published Research Findings Are False PLoS Medicine, 2 (8) DOI: 10.1371/journal.pmed.0020124
  7. Changes to the protocol http://clinicaltrials.gov/archive/NCT00893529/2011_02_24/changes
  8. Publication Bias. The Cochrane Collaboration open learning material (www.cochrane-net.org)
  9. de Ruyter JC, Olthof MR, Kuijper LD, & Katan MB (2012). Effect of sugar-sweetened beverages on body weight in children: design and baseline characteristics of the Double-blind, Randomized INtervention study in Kids. Contemporary clinical trials, 33 (1), 247-57 PMID: 22056980
  10. Pereira, T., Horwitz, R.I., & Ioannidis, J.P.A. (2012). Empirical Evaluation of Very Large Treatment Effects of Medical Interventions. JAMA: The Journal of the American Medical Association, 308 (16) DOI: 10.1001/jama.2012.13444
  11. Acupuncture Pseudoscience in the New England Journal of Medicine (sciencebasedmedicine.org)
  12. Lundh, A., Barbateskovic, M., Hróbjartsson, A., & Gøtzsche, P. (2010). Conflicts of Interest at Medical Journals: The Influence of Industry-Supported Randomised Trials on Journal Impact Factors and Revenue – Cohort Study PLoS Medicine, 7 (10) DOI: 10.1371/journal.pmed.1000354
  13. One Third of the Clinical Cancer Studies Report Conflict of Interest (laikaspoetnik.wordpress.com)
  14. Merck’s Ghostwriters, Haunted Papers and Fake Elsevier Journals (laikaspoetnik.wordpress.com)
  15. Lexchin, J. (2003). Pharmaceutical industry sponsorship and research outcome and quality: systematic review BMJ, 326 (7400), 1167-1170 DOI: 10.1136/bmj.326.7400.1167
  16. Smith R (2005). Medical journals are an extension of the marketing arm of pharmaceutical companies. PLoS medicine, 2 (5) PMID: 15916457 (free full text at PLOS)
  17. Handel, A., Patel, S., Pakpoor, J., Ebers, G., Goldacre, B., & Ramagopalan, S. (2012). High reprint orders in medical journals and pharmaceutical industry funding: case-control study BMJ, 344 (jun28 1) DOI: 10.1136/bmj.e4212
  18. Fang, F., & Casadevall, A. (2011). Retracted Science and the Retraction Index Infection and Immunity, 79 (10), 3855-3859 DOI: 10.1128/IAI.05661-11
  19. Is it time for a Retraction Index? (retractionwatch.wordpress.com)
  20. Agrawal A, & Sharma A (2012). Likelihood of false-positive results in high-impact journals publishing groundbreaking research. Infection and immunity, 80 (3) PMID: 22338040

——————————————–

* Addendum: my (unpublished) letter to the NRC

Tour de France.
After the NRC had earlier devoted two pages of praise to Katan's new study, Katan found it necessary to repeat the exercise in his own column. Referring to your own work is allowed, even in a column, but then we as readers should learn something from it. So what is the message of this piece, "Col hors Catégorie"? It mainly describes the long road to getting a scientific study published in a top journal, in this case the New England Journal of Medicine (NEJM), "the Tour de France among medical journals". The piece ends with a tackle on a journalist "who thought he knew better". But who cares, when the whole world is cheering? Quite unsportsmanlike, because that journalist (van Maanen, de Volkskrant) did score on several points. Katan's key claim, that an NEJM publication "is the best guarantee that something is true", is also very much open to dispute. The NEJM does indeed have a high impact factor, a measure of how often articles are cited. But the NEJM also has the highest 'article retraction' index. It also has the highest percentage of industry-sponsored clinical trials, which inflate the overall impact factor. Moreover, top journals go primarily for "positive results" and "trendy topics", which promotes publication bias. To extend the Tour de France comparison: completing this prestigious race is no guarantee that the participants have not used banned substances. Despite the strict doping controls.




Friday Foolery #46 Bad Science: The Psychology Behind Exaggerated & False Results

23 12 2011

A very up-to-date infographic about Bad Science: it includes (or was inspired by?) the recent fraud by Diederik Stapel, a well-known psychologist in the Netherlands (e.g. see NYTimes.com, Nov 3, 2011).

I am not sure, though, that I agree with the third solution for making research more honest: anonymous publication.

Bad Science
Created by: Clinical Psychology

Hat tip: @nutrigenomics, @Vansteenwinckel, @kitteybeth & @rawarrior via Twitter





To Retract or Not to Retract… That’s the Question

7 06 2011

In the previous post [1] I discussed how the editors of Science asked for the retraction of a paper linking the XMRV retrovirus to ME/CFS.

The editors' decision was based on the failure of at least 10 other studies to confirm the findings and on growing evidence that the results were caused by contamination. When the authors refused to retract their paper, Science issued an Expression of Concern [2].

In my opinion retraction is premature. Science should at least await the results of the two multi-center studies that were designed to confirm or disprove the results. These studies will continue anyway… the budget is already allocated.

Furthermore, I can't suppress the idea that Science asked for a retraction to exonerate itself for the poor peer review (the paper had serious flaws) and its eagerness to swiftly publish a possibly groundbreaking study.

And what about the other studies linking the XMRV to ME/CFS or other diseases: will these also be retracted?
And what happens in the improbable case that the multi-center studies confirm the 2009 paper? Would Science republish the retracted paper?

Thus, in my opinion, it is up to other scientists to confirm or disprove published findings. Remember that falsifiability was Karl Popper's basic scientific principle. My conclusion was that "fraud is a reason to retract a paper and doubt is not".

This is my opinion, but is this opinion shared by others?

When should editors retract a paper? Is fraud the only reason? When should editors issue a letter of concern? Are there guidelines?

Let me first say that even editors don't agree. Schekman, the editor-in-chief of PNAS, has no direct plans to retract another paper reporting XMRV-like viruses in CFS [3].

Schekman considers it “an unusual situation to retract a paper even if the original findings in a paper don’t hold up: it’s part of the scientific process for different groups to publish findings, for other groups to try to replicate them, and for researchers to debate conflicting results.”

Back at the Virology Blog [4] there was also a vivid discussion about the matter. Prof. Vincent Racaniello gave the following answer in response to a reader's question:

I don’t have any hard numbers on how often journals ask scientists to retract a paper, only my sense that it is very rare. Author retractions are more frequent, but I’m only aware of a handful of those in a year. I can recall a few other cases in which the authors were asked to retract a paper, but in those cases scientific fraud was involved. That’s not the case here. I don’t believe there is a standard policy that enumerates how such decisions are made; if they exist they are not public.

However, there is a Guideline for editors, the Guidance from the Committee on Publication Ethics (COPE) (PDF) [5]

Ivan Oransky, of the great blog Retraction Watch, linked to it when we discussed reasons for retraction.

With regard to retraction the COPE-guidelines state that journal editors should consider retracting a publication if:

  1. they have clear evidence that the findings are unreliable, either as a result of misconduct (e.g. data fabrication) or honest error (e.g. miscalculation or experimental error)
  2. the findings have previously been published elsewhere without proper cross-referencing, permission or justification (i.e. cases of redundant publication)
  3. it constitutes plagiarism
  4. it reports unethical research

According to the same guidelines journal editors should consider issuing an expression of concern if:

  1. they receive inconclusive evidence of research or publication misconduct by the authors 
  2. there is evidence that the findings are unreliable but the authors’ institution will not investigate the case 
  3. they believe that an investigation into alleged misconduct related to the publication either has not been, or would not be, fair and impartial or conclusive 
  4. an investigation is underway but a judgement will not be available for a considerable time

Thus in the case of the Science XMRV/CFS paper an expression of concern certainly applies (all 4 points), and one might even consider a retraction because the results seem unreliable (point 1). But it is not 100% established that the findings are false. There is only serious doubt…

The guidelines seem to leave room for separate decisions. Retracting a paper in a case of plain fraud is not under discussion. But when is an error sufficiently established and important enough to warrant retraction?

Apparently retractions are on the rise. Although still rare (0.02% of all publications by the late 2000s), there has been a tenfold increase in retractions compared to the early 1980s (see the review at Scholarly Kitchen [6] about two papers: [7] and [8]). However, it is unclear whether increasing retraction rates reflect more fraudulent or erroneous papers or greater diligence. The first paper [7] also highlights that, out of fear of litigation, editors are generally hesitant to retract an article without the author's permission.

The blog Nerd Alert gives a nice overview [9] (based on Retraction Watch, but summarized in one post ;) ). They clarify that papers are retracted for "less dastardly reasons than those cases that hit the national headlines and involve purposeful falsification of data", such as the fraudulent papers of Andrew Wakefield (autism caused by vaccination). Besides the mistaken publication of the same paper twice, data over-interpretation, plagiarism and the like, the reason can also be more trivial: ordering the wrong mice or using an incorrectly labeled bottle.

Still, scientists don't unanimously agree that such errors should lead to retraction.

Drug Monkey blogs about his discussion [10] with @ivanoransky over a recent post at Retraction Watch, which asks whether a failure to replicate a result justifies a retraction [11]. Ivan Oransky presents a case where a researcher (B) couldn't reproduce the findings of another lab (A) and demonstrated mutations in the published protein sequence that excluded the mechanism proposed in A's paper. The paper wasn't retracted, possibly because B didn't follow A's published experimental protocols in all details (this reminds me of the XMRV controversy).

Drugmonkey says (quote; cross-posted at Scientopia here – hmmpf, isn't that an example of redundant publication?):

“I don’t give a fig what any journals might wish to enact as a policy to overcompensate for their failures of the past.
In my view, a correction suffices” (provided that search engines like Google and PubMed make clear that the paper was in fact corrected).

Drug Monkey has a point there. A clear watermark should suffice.

However, we should note that most papers are retracted by authors, not by editors/journals, and that the majority of "retracted papers" remain available. Just 13.2% are deleted from the journal's website, and 31.8% are not clearly labelled as such.

Summary of how the naïve reader is alerted to paper retraction (from Table 2 in [7], see: Scholarly Kitchen [6])

  • Watermark on PDF (41.1%)
  • Journal website (33.4%)
  • Not noted anywhere (31.8%)
  • Note appended to PDF (17.3%)
  • PDF deleted from website (13.2%)

My conclusion?

Of course fraudulent papers should be retracted. Also papers with obvious errors that invalidate the conclusions.

However, we should be extremely hesitant to retract papers that can’t be reproduced, if there is no undisputed evidence of error.

Otherwise we would have to retract almost all published papers at one point or another, because if Professor Ioannidis is right (and he probably is), "much of what medical researchers conclude in their studies is misleading, exaggerated, or flat-out wrong" (see a previous post [12], "Lies, Damned Lies, and Medical Science" [13], and Ioannidis' crushing article "Why Most Published Research Findings Are False" [14]).

All retracted papers (and papers with major deficiencies and shortcomings) should be clearly labeled as such (as Drugmonkey proposed, not only at the PDF and at the Journal website, but also by search engines and biomedical databases).

Or let's hope, with Biochembelle [15], that the future of scientific publishing will make retractions for technical issues obsolete (whether in the form of nano-publications [16] or otherwise):

One day the scientific community will trade the static print-type approach of publishing for a dynamic, adaptive model of communication. Imagine a manuscript as a living document, one perhaps where all raw data would be available, others could post their attempts to reproduce data, authors could integrate corrections or addenda….

NOTE: Retraction Watch (@ivanoransky) and @laikas have voted in @drugmonkeyblog‘s poll about what a retracted paper means [here]. Have you?

References

  1. Science Asks to Retract the XMRV-CFS Paper, it Should Never Have Accepted in the First Place. (laikaspoetnik.wordpress.com 2011-06-02)
  2. Alberts B. Editorial Expression of Concern. Science. 2011-05-31.
  3. Given Doubt Cast on CFS-XMRV Link, What About Related Research? (blogs.wsj.com)
  4. XMRV is a recombinant virus from mice  (Virology Blog : 2011/05/31)
  5. Retractions: Guidance from the Committee on Publication Ethics (COPE) Elizabeth Wager, Virginia Barbour, Steven Yentis, Sabine Kleinert on behalf of COPE Council:
    http://www.publicationethics.org/files/u661/Retractions_COPE_gline_final_3_Sept_09__2_.pdf
  6. Retract This Paper! Trends in Retractions Don’t Reveal Clear Causes for Retractions (scholarlykitchen.sspnet.org)
  7. Wager E, Williams P. Why and how do journals retract articles? An analysis of Medline retractions 1988-2008. J Med Ethics. 2011 Apr 12. [Epub ahead of print] 
  8. Steen RG. Retractions in the scientific literature: is the incidence of research fraud increasing? J Med Ethics. 2011 Apr;37(4):249-53. Epub 2010 Dec 24.
  9. Don’t touch that blot. (nerd-alert.net/blog/weeklies/ : 2011/02/25)
  10. What_does_a_retracted_paper_mean? (scienceblogs.com/drugmonkey: 2011/06/03)
  11. So when is a retraction warranted? The long and winding road to publishing a failure to replicate (retractionwatch.wordpress.com : 2011/06/03/)
  12. Much Ado About ADHD-Research: Is there a Misrepresentation of ADHD in Scientific Journals? (laikaspoetnik.wordpress.com 2011-06-02)
  13. “Lies, Damned Lies, and Medical Science” (theatlantic.com :2010/11/)
  14. Ioannidis, J. (2005). Why Most Published Research Findings Are False. PLoS Medicine, 2 (8) DOI: 10.1371/journal.pmed.0020124
  15. Retractions: What are they good for? (biochembelle.wordpress.com : 2011/06/04/)
  16. Will Nano-Publications & Triplets Replace The Classic Journal Articles? (laikaspoetnik.wordpress.com 2011-06-02)






Science Asks to Retract the XMRV-CFS Paper, it Should Never Have Accepted in the First Place.

2 06 2011

Wow! Breaking!

As reported in the WSJ earlier this week [1], the editors of the journal Science asked Mikovits and her co-authors to voluntarily retract their 2009 Science paper [2].

In this paper Mikovits and colleagues of the Whittemore Peterson Institute (WPI) and the Cleveland Clinic reported the presence of xenotropic murine leukemia virus–related virus (XMRV) in peripheral blood mononuclear cells (PBMCs) of patients with chronic fatigue syndrome (CFS). They used the very contamination-prone nested PCR to detect XMRV. This two-round PCR enables detection of a rare target sequence by producing an unimaginably huge number of copies of that sequence (see the back-of-the-envelope sketch below).
XMRV was first demonstrated in cell lines and tissue samples of prostate cancer patients.
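To get a feel for why nested PCR is so contamination-prone: in the idealized case each cycle doubles the target, so two rounds can turn a single stray molecule into an astronomical number of copies. A minimal sketch (the cycle numbers are illustrative, not taken from the paper):

```python
def ideal_pcr_copies(start_copies: int, cycles: int) -> int:
    """Theoretical PCR yield assuming perfect doubling every cycle."""
    return start_copies * 2 ** cycles

# One contaminating molecule carried through two nested rounds of 35 cycles:
round1 = ideal_pcr_copies(1, 35)        # ~3.4e10 copies
round2 = ideal_pcr_copies(round1, 35)   # ~1.2e21 copies
print(f"after round 1: {round1:.2e}, after round 2: {round2:.2e}")
```

Real reactions plateau well below these theoretical numbers, but the point stands: a single contaminating molecule in a reagent can produce a convincing-looking band, which is why the negative controls discussed below matter so much.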

All the original authors except one [3] refused to retract the paper [4]. This prompted Science editor-in-chief Bruce Alberts to issue an Expression of Concern [5], which was published two days earlier than planned because of the early release of the news in the WSJ mentioned above [1] (see Retraction Watch [6]).

The expression of concern also follows the publication of two papers in the same journal.

In the first Science paper [7], Knox et al. found no murine-like gammaretroviruses in any of 61 CFS patients previously identified as XMRV-positive, using the same PCR and culturing techniques as Lombardi et al. This paper made ERV (who had consistently critiqued the Lombardi paper from the start) laugh out loud [8], because Knox also showed that human sera neutralize the virus in the blood, indicating it can hardly infect human cells in vivo. Knox also showed WPI's sequences to be similar to the XMRV plasmid VP62, known to often contaminate laboratory reagents.*

Contamination as the most likely explanation for the positive WPI results is also the message of the second Science paper. Here, Paprotka et al. [9] show that XMRV was not present in the original prostate tumor that gave rise to the XMRV-positive 22Rv1 cell line, but originated, as a laboratory artifact, from the recombination of two viruses during passaging of the cell line in nude mice. For a further explanation see the Virology Blog [10].

Now that Science editors have expressed their concern, tweets, blog posts and health news articles are predominantly negative about the XMRV findings in CFS/ME, where earlier they were positive or neutral. Tweets like "Mouse virus #XMRV doesn't cause chronic fatigue #CFS http://t.co/Bekz9RG" (Reuters) or "Origins of XMRV deciphered, undermining claims for a role in human disease: Delineation of the origin of… http://bit.ly/klDFuu #cancer" (National Cancer Institute) are unprecedented.

So is the appeal by Science to retract the paper justified?

Well yes and no.

The timing is rather odd:

  • Why does Science express concern only after publication of these two latest Science papers? There are almost a dozen other studies that failed to reproduce the WPI findings. Moreover, four earlier papers in Retrovirology already indicated that disease-associated XMRV sequences are consistent with laboratory contamination (see an overview of all published articles at A Photon in the Darkness [11]).
  • There are still (neutral) scientists who believe that genuine human XMRV infections exist at a relatively low prevalence (van der Kuyl et al.: XMRV: Not a Mousy Virus [12]).
  • And why doesn't Science await the results of the official confirmation studies meant to finally settle whether XMRV exists in our blood supply and/or CFS (by the Blood Working Group and the NIH-sponsored study by Lipkin et al.)?
  • Why (and this is the most important question) did Science ever decide to publish the piece in the first place, given that the study had several flaws?
I do believe that new research that turns existing paradigms upside down deserves a chance, including a chance to be disproved. Yes, such papers might be published in prominent scientific journals like Science, provided they are technically and methodologically sound at the very least. The Lombardi paper wasn't.

Here I repeat the concerns I expressed in earlier posts [13,14] (please read these posts first if you are unfamiliar with PCR).

Shortcomings in PCR-technique and study design**:

  • No positive control and no demonstration of the sensitivity of the PCR assay. Usually a known concentration or a serial dilution of a (weakly) positive sample is taken as a control; this allows one to determine the sensitivity of the assay.
  • Nonspecific bands in negative samples (indicating suboptimal conditions).
  • Just one vial without added DNA per experiment as a negative control. (Negative controls are needed to exclude contamination).
  • CFS-positive and negative samples are on separate gels (this increases bias, because conditions and the chance of contamination are not the same for all samples; it also raises the question whether the samples were processed differently).
  • Furthermore only results obtained at the Cleveland Clinic are shown. (were similar results not obtained at the WPI? see below)
Contamination not excluded as a possible explanation
  • No variation in the XMRV-sequences detected (expected if the findings are real)
  • Although the PCR is near the detection limit, only single-round products are shown. These are much stronger than expected, even after two rounds. This is very confusing, because WPI later claimed that preculturing PBMCs plus nested PCR (two rounds) were absolutely required to get a positive result. But the legend of Fig. 1 in the original Science paper clearly says PCR after one round. Strong (homogeneous) bands after one round of PCR are highly suggestive of contamination.
  • No effort to exclude contamination of samples with mouse DNA (see below)
  • No determination of the viral DNA integration sites.

Mikovits also stressed that she never used the XMRV-positive cell lines in 2009. But what about the Cleveland Clinic, nota bene the institute that co-discovered XMRV and that had produced the strongly positive PCR-products (…after a single PCR-round…)?

On the other hand, the authors had other proof of the presence of a retrovirus: detection of (low levels of) antibodies to XMRV in patient sera, and transmissibility of XMRV. On request they later applied the mouse mitochondrial assay to successfully exclude the presence of mouse DNA in their samples (but this doesn't exclude all forms of contamination, and certainly not at the Cleveland Clinic).

These shortcomings alone should have been sufficient for the reviewers, had they seen them and/or deemed them of sufficient importance, to halt publication and ask for additional studies**.

I was once in a similar situation. I found a rare cancer-specific chromosomal translocation in normal cells, but I couldn't exclude PCR contamination. The reviewers asked me to exclude contamination by sequencing the breakpoints, which succeeded only after two years of extra work. In retrospect I'm thankful to the reviewers for preventing me from publishing a possibly faulty paper that could have ruined my career (yeah, because contamination is a real problem in PCR). And my paper improved tremendously with the additional experiments.

Yes, it is peer review that failed here, Science. You should have asked for extra confirmatory tests and a better design in the first place. That would have spared a lot of anguish and, had the findings been reproducible, would have yielded more convincing data.

There were a couple of incidents after the study was published that made me further doubt the robustness of WPI's scientific data, and after a while I even began to doubt whether WPI, and Judy Mikovits in particular, was adhering to good scientific (and ethical) practice.

  • WPI suddenly disclosed (Feb 18, 2010) that culturing PBMCs is necessary to obtain a positive PCR signal. As a matter of fact, they maintain this in their recent protest letter to Science. They refer to the original Science paper, but this paper doesn't mention the need for culturing at all!!
  • WPI suggests their researchers had detected XMRV in patient samples from both Dr. Kerr's and Dr. van Kuppeveld's 'XMRV-negative' CFS cohorts, thus in patient samples obtained without a culture-enrichment step….. There can only be one truth: the main criticism of the negative studies was that improper CFS criteria were used. Thus either this CFS population is wrongly defined and DOESN'T contain XMRV (with any method), OR it fulfills the criteria of CFS and XMRV can be detected by applying the proper technique. It is so confusing!
  • Although Mikovits first reported finding little to no virus variation, they later claimed to find a lot of variation.
  • WPI employees behave unprofessionally towards fellow scientists who failed to reproduce their findings.
Other questionable practices 
  • Mikovits also claims that people with autism harbor XMRV. One wonders which disease ISN'T associated with XMRV….
  • Despite the uncertainties about XMRV in CFS-patients, let alone the total LACK of demonstration of a CAUSAL RELATIONSHIP, Mikovits advocates the use of *not harmless* anti-retrovirals by CFS-patients.
  • At this stage of controversy, the WPI-XMRV test is sold as "a reliable diagnostic tool" by a firm (VIP Dx) with strong ties to WPI. Mikovits even tells patients in a mail: "First of all the current diagnostic testing will define with essentially 100% accuracy! XMRV infected patients". WTF!?
  • This test is not endorsed in Belgium, and even Medicare only reimbursed 15% of the PCR-test.
  • The ties of WPI to RedLabs & VIP Dx are not clearly disclosed in the Science Paper. There is only a small Note (added in proof!)  that Lombardi is operations manager of VIP Dx, “in negotiations with the WPI to offer a diagnostic test for XMRV”.
Please see this earlier post [13] for broader coverage. Or read the post [16] by Keith Grimaldi, scientific director of Eurogene and expert in personal genomics, whom I asked to comment on the "diagnostic" tests. In his post he very clearly describes "what is exactly wrong about selling an unregulated clinical test to a very vulnerable and exploitable group based on 1 paper on a small isolated sample".

It is really surprising this wasn't picked up by the media, by the government or by the scientific community. Will the new findings have any consequences for the XMRV diagnostic tests? I fear WPI will get away with it for the time being. I agree with Lipkin, who coordinates the NIH-sponsored multi-center CFS-XMRV study, that calls to retract the paper are premature at this point. Furthermore, as addressed by the WSJ [17], if the Science paper is retracted because the XMRV findings are called into question, what about the other papers reporting a link between XMRV(-like) viruses and CFS or prostate cancer?

The WSJ reports that Schekman, the editor-in-chief of PNAS, has no direct plans to retract the paper of Alter et al. reporting XMRV-like viruses in CFS [discussed in 18]. Schekman considers it "an unusual situation to retract a paper even if the original findings in a paper don't hold up: it's part of the scientific process for different groups to publish findings, for other groups to try to replicate them, and for researchers to debate conflicting results."

I agree, this is a normal procedure, once the paper is accepted and published. Fraud is a reason to retract a paper, doubt is not.

Notes

* Samples, NOT patients, as I saw a patient's erroneous interpretation: "if it is contamination in the lab how can I have it as a patient?" (the tweet is now deleted). No, according to the contamination theory, the XMRV contamination is not IN you, but in the processed samples or in the reaction mixtures used.

** The reviewers did ask additional evidence, but not with respect to the PCR-experiments, which are most prone to contamination and false results.

  1. Chronic-Fatigue Paper Is Questioned (online.wsj.com)
  2. Lombardi VC, Ruscetti FW, Das Gupta J, Pfost MA, Hagen KS, Peterson DL, Ruscetti SK, Bagni RK, Petrow-Sadowski C, Gold B, Dean M, Silverman RH, & Mikovits JA (2009). Detection of an infectious retrovirus, XMRV, in blood cells of patients with chronic fatigue syndrome. Science (New York, N.Y.), 326 (5952), 585-9 PMID: 19815723
  3. WPI Says No to Retraction / Levy Study Dashes Hopes / NCI Shuts the Door on XMRV (phoenixrising.me)
  4. http://wpinstitute.org/news/docs/FinalreplytoScienceWPI.pdf
  5. Alberts B. Editorial Expression of Concern. Science. 2011 May 31.
  6. Science asks authors to retract XMRV-chronic fatigue paper; when they refuse, issue Expression of Concern. 2011/05/31/ (retractionwatch.wordpress.com)
  7. Knox K, Carrigan D, Simmons G, Teque F, Zhou Y, Hackett J Jr, Qiu X, Luk K, Schochetman G, Knox A, Kogelnik AM & Levy JA. No Evidence of Murine-Like Gammaretroviruses in CFS Patients Previously Identified as XMRV-Infected. Science. 2011 May 31. (10.1126/science.1204963)
  8. XMRV and chronic fatigue syndrome: So long, and thanks for all the lulz, Part I [erv] (scienceblogs.com)
  9. Paprotka T, Delviks-Frankenberry KA, Cingoz O, Martinez A, Kung H-J, Tepper CG, Hu W-S , Fivash MJ, Coffin JM, & Pathak VK. Recombinant origin of the retrovirus XMRV. Science. 2011 May 31. (10.1126/science.1205292).
  10. XMRV is a recombinant virus from mice  (Virology Blog : 2011/05/31)
  11. Science asks XMRV authors to retract paper (photoninthedarkness.com : 2011/05/31)
  12. van der Kuyl AC, Berkhout B. XMRV: Not a Mousy Virus. J Formos Med Assoc. 2011 May;110(5):273-4. PDF
  13. Finally a Viral Cause of Chronic Fatigue Syndrome? Or Not? – How Results Can Vary and Depend on Multiple Factor (laikaspoetnik.wordpress.com: 2010/02/15/)
  14. Three Studies Now Refute the Presence of XMRV in Chronic Fatigue Syndrome (CFS) (laikaspoetnik.wordpress.com 2010/04/27)
  15. WPI Announces New, Refined XMRV Culture Test – Available Now Through VIP Dx in Reno (prohealth.com 2010/01/15)
  16. The murky side of physician prescribed LDTs (eurogene.blogspot.com : 2010/09/06)
  17. Given Doubt Cast on CFS-XMRV Link, What About Related Research? (blogs.wsj.com)
  18. Does the NHI/FDA Paper Confirm XMRV in CFS? Well, Ditch the MR and Scratch the X… and… you’ve got MLV. (laikaspoetnik.wordpress.com : 2010/08/30/)






Friday Foolery #20 What is in an element’s name?

19 03 2010

You probably know the periodic table of elements. The table contains 118 confirmed elements, from 1 (H, hydrogen) to 118 (Uuo, ununoctium).

On Wikipedia there is a nice large periodic table with chemical symbols that link to the Wikipedia pages of the individual elements (left).

As a chemist, David Bradley at Sciencebase must have been bored with it, because he designed an unusual version of the periodic table in which the chemical symbols take you to his various online accounts rather than to information about a given chemical. Quite a few elements remained, and he invited other research bloggers to claim an element if their name or their blog's name fit in terms of initial letters. David started this morning, and within a few hours almost the entire table was filled.

I claimed Li (my surname), but that was already taken by David's LinkedIn account, so he suggested I take La for Laikas. La is lanthanum.

Of course this can be hilarious. I tweeted to Andrew Spong that As (arsenic), poisonous as you may know, would surely fit him, and he replied he would rather choose absinthe, which unfortunately isn't an element.

There are still a few elements left. Thus if you would like your site highlighted as an element, let David know via Twitter, give him the link to your blog and an appropriate element.

This is how the table looks. You can go to the table here (with real links).
The original post is here

And if you don’t particularly care about this table, perhaps the following adaptation suits you better. It is still available via Amazon (click on the Figure).

This table was also found on David's blog (see here).





Kaleidoscope 2009 wk 47

19 11 2009


Kaleidoscope is a new series, with a “kaleidoscope” of facts, findings, views and news gathered over the last 1-2 weeks.

Most items originate from Twitter, my Google Reader (RSS) and sometimes real articles (yeah!).

I read a lot, I bookmark a lot, but only some of those things end up in a post. Since tweets have a half-life of less than a week, I thought it would be nice to safeguard some of the tweets in a post. For me to keep, for you to read.

I don't have the time and the discipline to post daily about health news and social media as Ves Dimov does. It will look more like the compilations at the blogs of Dr Shock (see example), Dr Bates' shout-outs, the Health Highlights of Highlight HEALTH and Rachel Walden's Women's Health News Round-ups, but less focused on one subject and less structured. It will just be a mix of old and new, social media and science: a kaleidoscope, or a potpourri if you like.

I don't know if this kaleidoscope will live a long life. I already wrote 2 3 4 5 6 editions, but didn't have the time to finish them. Well, we will see; just enjoy this one.

Ooh, and the beautiful kaleidoscope is made by RevBean and is called Eyeballs divide like cells. It looks very much like the eyeball bubblewrap of a previous post, but that is pure coincidence. Here is the link (Flickr, CC).


Medical Grand Rounds

Louise Norris at Colorado Health Insurance Insider is this week's host of Grand Rounds (see here). There are many interesting posts again. As a mother of two teens I especially liked the insight Nancy Brown of Teen Health 411 gives us into what teens want when it comes to their relationships with their parents, and the "would you rather…?" story that Amy Tenderich of Diabetes Mine shares with us. The punch line is great. Her 9-year-old melts my heart.

At InsureBlog, Hank Stern brings us an article about a British hospital that will no longer admit expectant mothers with a BMI of more than 34, because the hospital's labor and delivery unit is not equipped to handle complicated births. Hank concludes: "Fear not, though, portly preggies have to travel but 20 miles to the next closest facility. Assuming, of course, that they can make it that far when contractions are minutes apart."

Dr Charles of The Examining Room wrote an in-depth article about a cheerleader who was supposedly stricken with dystonia following a seasonal flu vaccine in August. Dr Charles not only highlights why specialists think it is not dystonia, but also gives background information about the efficacy of vaccines.

Recent editions of the Grand Rounds were at CREGRL, flight nurse (link), NonClinicalJobs (link) and Codeblog, tales of a nurse (link). You can always find previous and upcoming hosts at the Grand Rounds Archive at Blogborygmi.

Breast cancer screening

The update of the 2002 USPSTF recommendation statement on screening for breast cancer in the general population, published in the November issue of the Annals of Internal Medicine, has led to heated discussions in the mainstream media (e.g. the New York Times and MedPage Today). Based on current evidence, partly from two other articles in the same journal (a comparison of screening schedules and a systematic review), the guidelines advise scaling back the screening. The USPSTF recommends:

  • against routine screening mammography in women aged 40 to 49 years
  • against routine screening mammography of women 75 years or older.
  • biennial (instead of annual) screening mammography for women between the ages of 50 and 74 years.
  • against teaching breast self-examination (BSE).
  • against either digital mammography or magnetic resonance imaging (MRI) instead of film mammography as screening modalities.

The two articles published in Ann Intern Med add to the evidence that the propagation of breast cancer self-exams doesn't save lives (see the Cochrane review discussed in a previous post) and that the benefits of routine mammography in the young (<50) or old (>75) do not outweigh the harms (also covered by a Cochrane review before). Indeed, as put forward by Gary Schwitzer at Schwitzer health news blog, this is NOT a new debate. He refers to Slate, which republished a five-year-old piece by Amanda Schaffer that does a good job of explaining the potential harms of screening. However, it is difficult for women (and some doctors) to understand that "when it comes to cancer screening, more isn't always better". Indeed, as Kevin Pho at KevinMD states, the question is whether "patients will accept the new, evidence-based, breast cancer screening guidelines".

In the Netherlands it is already practice to start biennial routine mammography at the age of 50. The official breast cancer screening site of the RIVM even states that the US is now going to follow the Dutch guidelines ;) (one of the guidelines assessed in one of the Ann Intern Med papers is Dutch). But people still find the long-established guidelines difficult to accept: coincidentally, I saw tweets today asking people to sign a petition to lower the starting age of screening 'because breast cancer is more and more frequently observed at a young age…(??)'. Young, well-educated women are very willing to sign…

No time to read the full articles but interested to know more? Then listen to the podcast of this Ann Intern Med edition:



Systematic Reviews, pharma-sponsored trials and other publishing news

Cochrane reviews are regarded as scientifically rigorous, yet a review’s time to publication can be affected by factors such as the statistical significance of the findings. A study published in Open Medicine examined the factors associated with the time to publication of Cochrane reviews. A change in authors and updated reviews were predictive factors, but the favorability of the results was not.

Roy Poses of the Health Care Renewal blog starts this blog post as follows: "Woe to those of us who have been advocates for evidence-based medicine". He mainly refers to a study published in the NEJM that identified selective outcome reporting in trials of off-label use of gabapentin: for 8 of the 12 published trials, there was a disagreement between the definition of the primary outcome in the protocol and that in the published report. This seriously threatens the validity of the evidence for the effectiveness of off-label interventions. Roy was surprised that the article didn't generate much media attention. The reason may be that we have been overwhelmed by manipulation of data, ghostwriting, and the fact that pharma-sponsored trials rarely produce results that are unfavorable to the companies' products (see previous posts about Ghostwriting (Merck/Elsevier), Conflict of Interest in Cancer Studies and David Tovey about Cochrane Reviews). At least two authors of the NEJM review (Bero and Dickersin) have repeatedly shown this to be the case [e.g. see here for an overview, and the papers of Lisa Bero]. It is some relief that at least 3 of the 4 NEJM authors are also members of the Cochrane Collaboration. Indirectly, better control of reporting, e.g. by clinical trial registries, can improve the reliability of pharma-sponsored trials and thus of the systematic reviews summarizing them. As a matter of fact, Cochrane authors always have to check these registries.

At Highlight Health Walter Jessen writes about Medical Journal Conflict of Interest Disclosure and Other Issues, which also discusses how money can taint objectivity in scientific publishing. Half of the post discusses the book The Trouble with Medical Journals, written in 2007 by Richard Smith, the former editor of the BMJ.
By the way, Walter just hosted MedLibs Round with the theme “Finding Credible Health Information Online”.

Good news in the Netherlands: right after international Open Access Week and the launch of the Dutch Open Access website (www.openaccess.nl), the Netherlands Organization for Scientific Research (NWO) announced that it is in favor of Open Access (via the PLoS Facebook page).

The open-access nature of PLoS itself is getting out of hand: they even peer-review T-shirts (according to Bora Zivkovic of A Blog Around the Clock, see here).


Other Health & Science News:

MedlinePlus summarizes an article in the Journal of Nutrition stating that selenium supplements may pose a heart risk.

Even folic acid and vitamin B12, when taken in large doses, have been reported to increase cancer risk (WebMD).

Luckily, WebMD also reports that dark chocolate seems to help against stress, that is, it reduced stress hormones in the blood. However, @evidencematters and @NHSChoices cast doubt on that: "Chocolate cuts stress, says newspaper. Does the study really say that? And who paid for the study?…"

Scientists made the unexpected discovery (published in Molecular Cell) that BRAF, which is linked to around 70 per cent of melanomas and seven per cent of all cancers, is in fact controlled by a gene from the same RAF family called CRAF, which has also been linked to the disease. For the first time it is shown "how two genes from the same 'family' can interact with each other to stop cancer in its tracks" (source: Cancer Research UK).

For the first time, scientists have successfully used exome sequencing to quickly discover a previously unknown gene responsible for Miller syndrome, a rare disorder. The finding demonstrates the usefulness of exome sequencing in studying rare genetic disorders. The exome is enriched for coding (thus functional) DNA: it makes up only 1% of the total DNA, but contains about 85% of disease-causing mutations (source: PhysOrg.com).


Web 2.0
For information regarding the FDA hearings on internet and social media see #FDASM: http://www.fdasm.com.

ReadWriteWeb summarizes new numbers released by the analytics firm PostRank, which indicate that reader engagement with blogs has changed dramatically over the last three years, primarily because of the rise of online social networks.

Twitter has begun to roll out the new retweet feature, although not without controversy. What do you think of the newest feature?

The Next Web gives an overview of which Twitter application is hot and which is not.

And finally: the Top 100 Tools for Learning, compiled by Jane Hart from the contributions of 278 learning professionals worldwide. You can see the lists here (HT: http://blogs.netedu.info/?p=1005).

The web 2.0 part is relatively short, but it is time to conclude this edition. Till next time!

  • MEDLIB’s ROUND 1.6 (laikaspoetnik.wordpress.com)
  • Tool Talk: quick links re Facebook, GReader and GWave (socialfish.org)






