Science Asks Authors to Retract the XMRV-CFS Paper It Should Never Have Accepted in the First Place.

2 06 2011

Wow! Breaking!

As reported in the WSJ earlier this week [1], the editors of the journal Science asked Mikovits and her co-authors to voluntarily retract their 2009 Science paper [2].

In this paper, Mikovits and colleagues of the Whittemore Peterson Institute (WPI) and the Cleveland Clinic reported the presence of xenotropic murine leukemia virus–related virus (XMRV) in peripheral blood mononuclear cells (PBMC) of patients with chronic fatigue syndrome (CFS). They used the very contamination-prone nested PCR to detect XMRV. This two-round PCR enables detection of a rare target sequence by producing an unimaginably huge number of copies of that sequence.
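To get a feel for the copy numbers involved, here is a minimal Python sketch of idealized PCR amplification. (The cycle counts of 35 + 40 are my own assumption for a typical nested protocol, not figures from the paper, and real per-cycle efficiency is below the theoretical doubling.)

```python
def pcr_copies(start_copies, cycles, efficiency=1.0):
    """Copies after `cycles` rounds of PCR, assuming a per-cycle
    efficiency between 0 (no amplification) and 1 (perfect doubling)."""
    return start_copies * (1 + efficiency) ** cycles

# One single target molecule, two nested rounds (assumed 35 + 40 cycles),
# perfect doubling: an astronomically large number of copies.
first_round = pcr_copies(1, 35)        # 2**35, about 3.4e10 copies
nested = pcr_copies(first_round, 40)   # 2**75, about 3.8e22 copies
print(f"after round 1: {first_round:.1e}, after nested round 2: {nested:.1e}")
```

This exponential arithmetic is also why nested PCR is so contamination-prone: a single stray molecule in a reagent is amplified just as efficiently as a genuine target.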
XMRV was first demonstrated in cell lines and tissue samples of prostate cancer patients.

All the original authors, except for one [3], refused to retract the paper [4]. This prompted Science editor-in-chief Bruce Alberts to issue an Expression of Concern [5], which was published two days earlier than planned because of the early release of the news in the WSJ mentioned above [1] (see Retraction Watch [6]).

The expression of concern also follows the publication of two papers in the same journal.

In the first Science paper [7], Knox et al. found no murine-like gammaretroviruses in any of the 61 CFS patients previously identified as XMRV-positive, using the same PCR and culturing techniques as Lombardi et al. This paper made ERV (who consistently critiqued the Lombardi paper from the start) laugh out loud [8], because Knox also showed that human sera neutralize the virus in the blood, indicating it can hardly infect human cells in vivo. Knox furthermore showed the WPI's sequences to be similar to the XMRV plasmid VP62, known to often contaminate laboratory reagents.*

Contamination as the most likely reason for the positive WPI results is also the message of the second Science paper. Here, Paprotka et al. [9] show that XMRV was not present in the original prostate tumor that gave rise to the XMRV-positive 22Rv1 cell line, but originated, as a laboratory artifact, by recombination of two viruses during passaging of the cell line in nude mice. For a further explanation see the Virology Blog [10].

Now that the Science editors have expressed their concern, the tweets, blog posts and health news articles are predominantly negative about the XMRV findings in CFS/ME, where they were previously positive or neutral. Tweets like "Mouse virus #XMRV doesn't cause chronic fatigue #CFS" (Reuters) or "Origins of XMRV deciphered, undermining claims for a role in human disease: Delineation of the origin of… #cancer" (National Cancer Institute) are unprecedented.

So is the appeal by Science to retract the paper justified?

Well yes and no.

The timing is rather odd:

  • Why does Science only express concern after publication of these two latest Science papers? There are almost a dozen other studies that failed to reproduce the WPI findings. Moreover, four earlier papers in Retrovirology already indicated that disease-associated XMRV sequences are consistent with laboratory contamination. (See an overview of all published articles at A Photon in the Darkness [11].)
  • There are still (neutral) scientists who believe that genuine human infections with XMRV exist at a relatively low prevalence. (van der Kuyl et al.: XMRV: Not a Mousy Virus [12])
  • And why doesn't Science await the results of the official confirmation studies meant to finally settle whether XMRV exists in our blood supply and/or CFS (by the Blood Working Group and the NIH-sponsored study by Lipkin et al.)?
  • Why (and this is the most important question) did Science ever decide to publish the piece in the first place, given that the study had several flaws?
I do believe that new research that turns existing paradigms upside down deserves a chance, including a chance to be disproved. Yes, such papers might be published in prominent scientific journals like Science, provided they are technically and methodologically sound at the very least. The Lombardi paper wasn't.

Here I repeat my concerns expressed in earlier posts [13 and 14]. (Please read these posts first if you are unfamiliar with PCR.)

Shortcomings in PCR-technique and study design**:

  • No positive control and no demonstration of the sensitivity of the PCR assay. Usually a known concentration or a serial dilution of a (weakly) positive sample is taken as a control; this allows one to determine the sensitivity of the assay.
  • Nonspecific bands in negative samples (indicating suboptimal PCR conditions).
  • Just one vial without added DNA per experiment as a negative control. (Negative controls are needed to exclude contamination).
  • CFS-positive and negative samples are on separate gels. (This increases bias, because conditions and chance of contamination are not the same for all samples; it also raises the question whether the samples were processed differently.)
  • Furthermore, only results obtained at the Cleveland Clinic are shown. (Were similar results not obtained at the WPI? See below.)
Contamination not excluded as a possible explanation
  • No variation in the XMRV sequences detected (variation would be expected if the findings were real).
  • Although the PCR is near the detection limit, only single-round products are shown. These are much stronger than expected even after two rounds. This is very confusing, because WPI later claimed that preculturing PBMC plus nested PCR (two rounds) were absolutely required to get a positive result. But the legend of Fig. 1 in the original Science paper clearly says PCR after one round. Strong (homogeneous) bands after one round of PCR are highly suggestive of contamination.
  • No effort to exclude contamination of samples with mouse DNA (see below)
  • No determination of the viral DNA integration sites.
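To illustrate the first point about controls: a serial dilution of a known positive sample pins down the assay's limit of detection. A toy Python sketch (the copy numbers and band calls below are invented for illustration; a real titration would run replicates per dilution step):

```python
# (input copies per reaction, band visible on gel?) -- hypothetical data
dilution_series = [
    (10_000, True),
    (1_000, True),
    (100, True),
    (10, False),
    (1, False),
]

# lowest input that still gives a band = approximate limit of detection
detected = [copies for copies, band in dilution_series if band]
limit_of_detection = min(detected)
print(f"limit of detection: ~{limit_of_detection} copies per reaction")
```

Without such a titration, a negative result is uninterpretable: the assay may simply be too insensitive to see the target.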

Mikovits also stressed that she never used the XMRV-positive cell lines in 2009. But what about the Cleveland Clinic, nota bene the institute that co-discovered XMRV and that had produced the strongly positive PCR-products (…after a single PCR-round…)?

On the other hand, the authors had other evidence for the presence of a retrovirus: detection of (low levels of) antibodies to XMRV in patient sera, and transmissibility of XMRV. On request they later applied the mouse mitochondrial assay to successfully exclude the presence of mouse DNA in their samples. (But this doesn't exclude all forms of contamination, and certainly not at the Cleveland Clinic.)

These shortcomings alone should have been sufficient for the reviewers, had they noticed them and/or deemed them of sufficient importance, to halt publication and ask for additional studies**.

I was once in a similar situation. I found a rare cancer-specific chromosomal translocation in normal cells, but I couldn't exclude PCR contamination. The reviewers asked me to exclude contamination by sequencing the breakpoints, which only succeeded after two years of extra work. In retrospect I'm thankful to the reviewers for preventing me from publishing a possibly faulty paper which could have ruined my career (yeah, because contamination is a real problem in PCR). And my paper improved tremendously thanks to the additional experiments.

Yes, it is peer review that failed here, Science. You should have asked for extra confirmatory tests and a better design in the first place. That would have spared a lot of anguish and, had the findings been reproducible, would have yielded more convincing data.

There were a couple of incidents after the study was published that made me further doubt the robustness of WPI's scientific data, and (after a while) I even began to doubt whether WPI, and Judy Mikovits in particular, was adhering to good scientific (and ethical) practices.

  • WPI suddenly disclosed (Feb 18, 2010) that culturing PBMCs is necessary to obtain a positive PCR signal. As a matter of fact, they maintain this in their recent protest letter to Science. They refer to the original Science paper, but this paper doesn't mention the need for culturing at all!
  • WPI suggests their researchers had detected XMRV in patient samples from both Dr. Kerr's and Dr. van Kuppeveld's 'XMRV-negative' CFS cohorts, thus in patient samples obtained without a culture-enrichment step. There can only be one truth. The main criticism of the negative studies was that improper CFS criteria were used. Thus either this CFS population is wrongly defined and DOESN'T contain XMRV (with any method), OR it fulfills the criteria of CFS and the XMRV can be detected by applying the proper technique. It is so confusing!
  • Although Mikovits first reported that they found little to no virus variation, they later claimed to find a lot of variation.
  • WPI employees behave unprofessionally towards fellow scientists who failed to reproduce their findings.
Other questionable practices 
  • Mikovits also claims that people with autism harbor XMRV. One wonders which disease ISN'T associated with XMRV…
  • Despite the uncertainties about XMRV in CFS-patients, let alone the total LACK of demonstration of a CAUSAL RELATIONSHIP, Mikovits advocates the use of *not harmless* anti-retrovirals by CFS-patients.
  • At this stage of controversy, the WPI-XMRV test is sold as “a reliable diagnostic tool“ by a firm (VIP Dx) with strong ties to WPI. Mikovits even tells patients in a mail: “First of all the current diagnostic testing will define with essentially 100% accuracy! XMRV infected patients”. WTF!? 
  • This test is not endorsed in Belgium, and even Medicare only reimbursed 15% of the PCR-test.
  • The ties of WPI to RedLabs & VIP Dx are not clearly disclosed in the Science Paper. There is only a small Note (added in proof!)  that Lombardi is operations manager of VIP Dx, “in negotiations with the WPI to offer a diagnostic test for XMRV”.
Please see this earlier post [13] for broader coverage. Or read the post [16] of Keith Grimaldi, scientific director of Eurogene and expert in personal genomics, whom I asked to comment on the "diagnostic" tests. In his post he very clearly describes "what is exactly wrong about selling an unregulated clinical test to a very vulnerable and exploitable group based on 1 paper on a small isolated sample".

It is really surprising this wasn't picked up by the media, the government or the scientific community. Will the new findings have any consequences for the XMRV diagnostic tests? I fear WPI will get away with it for the time being. I agree with Lipkin, coordinator of the NIH-sponsored multi-center CFS-XMRV study, that calls to retract the paper are premature at this point. Furthermore, as addressed by the WSJ [17], if the Science paper is retracted because the XMRV findings are called into question, what about the other papers also reporting a link between XMRV(-like) viruses and CFS or prostate cancer?

The WSJ reports that Schekman, editor-in-chief of PNAS, has no direct plans to retract the paper of Alter et al. reporting XMRV-like viruses in CFS [discussed in 18]. Schekman considers it "an unusual situation to retract a paper even if the original findings in a paper don't hold up: it's part of the scientific process for different groups to publish findings, for other groups to try to replicate them, and for researchers to debate conflicting results."

I agree, this is a normal procedure, once the paper is accepted and published. Fraud is a reason to retract a paper, doubt is not.


* Samples, NOT patients, as I saw a patient's erroneous interpretation: "if it is contamination in the lab how can I have it as a patient?" (tweet now deleted). No: according to the contamination theory, the XMRV contamination is not IN you, but in the processed samples or in the reaction mixtures used.

** The reviewers did ask for additional evidence, but not with respect to the PCR experiments, which are most prone to contamination and false results.

  1. Chronic-Fatigue Paper Is Questioned (
  2. Lombardi VC, Ruscetti FW, Das Gupta J, Pfost MA, Hagen KS, Peterson DL, Ruscetti SK, Bagni RK, Petrow-Sadowski C, Gold B, Dean M, Silverman RH, & Mikovits JA (2009). Detection of an infectious retrovirus, XMRV, in blood cells of patients with chronic fatigue syndrome. Science (New York, N.Y.), 326 (5952), 585-9 PMID: 19815723
  3. WPI Says No to Retraction / Levy Study Dashes Hopes /NCI Shuts the Door on XMR (
  5. Alberts B. Editorial Expression of Concern. Science. 2011 May 31.
  6. Science asks authors to retract XMRV-chronic fatigue paper; when they refuse, issue Expression of Concern. 2011/05/31/ (
  7. K. Knox, Carrigan D, Simmons G, Teque F, Zhou Y, Hackett Jr J, Qiu X, Luk K, Schochetman G, Knox A, Kogelnik AM & Levy JA. No Evidence of Murine-Like Gammaretroviruses in CFS Patients Previously Identified as XMRV-Infected. Science. 2011 May 31. (10.1126/science.1204963).
  8. XMRV and chronic fatigue syndrome: So long, and thanks for all the lulz, Part I [erv] (
  9. Paprotka T, Delviks-Frankenberry KA, Cingoz O, Martinez A, Kung H-J, Tepper CG, Hu W-S , Fivash MJ, Coffin JM, & Pathak VK. Recombinant origin of the retrovirus XMRV. Science. 2011 May 31. (10.1126/science.1205292).
  10. XMRV is a recombinant virus from mice  (Virology Blog : 2011/05/31)
  11. Science asks XMRV authors to retract paper ( : 2011/05/31)
  12. van der Kuyl AC, Berkhout B. XMRV: Not a Mousy Virus. J Formos Med Assoc. 2011 May;110(5):273-4. PDF
  13. Finally a Viral Cause of Chronic Fatigue Syndrome? Or Not? – How Results Can Vary and Depend on Multiple Factor ( 2010/02/15/)
  14. Three Studies Now Refute the Presence of XMRV in Chronic Fatigue Syndrome (CFS) ( 2010/04/27)
  15. WPI Announces New, Refined XMRV Culture Test – Available Now Through VIP Dx in Reno ( 2010/01/15)
  16. The murky side of physician prescribed LDTs ( : 2010/09/06)
  17. Given Doubt Cast on CFS-XMRV Link, What About Related Research? (
  18. Does the NHI/FDA Paper Confirm XMRV in CFS? Well, Ditch the MR and Scratch the X… and… you’ve got MLV. ( : 2010/08/30/)

Related articles


Does the NHI/FDA Paper Confirm XMRV in CFS? Well, Ditch the MR and Scratch the X… and… you’ve got MLV.

30 08 2010

The long-awaited paper that would 'solve' the controversies about the presence of xenotropic murine leukemia virus-related virus (XMRV) in patients with chronic fatigue syndrome (CFS) was finally published in PNAS last week [1]. The study, a joint effort of the NIH and the FDA, was withheld at the request of the authors [2], because it contradicted the results of another study performed by the CDC. Both studies were put on hold.

The CDC study was published in Retrovirology online on July 1 [3]. It was the fourth study in succession [4,5,6], and the first US study, that failed to demonstrate XMRV since researchers of the US Whittemore Peterson Institute (WPI) had published their controversial paper regarding the presence of XMRV in CFS [7].

The WPI study had several flaws, but so had the negative papers: these had tested a less rigorously defined CFS population and had used old and/or too few samples (discussed in two previous posts here and here).
In a way, negative studies, failing to reproduce a finding, are less convincing than positive studies. Thus everyone was eagerly looking forward to the release of the PNAS paper, especially because the grapevine whispered this study would confirm the original WPI findings.

Indeed, after publication both Harvey Alter, the team leader of the NIH/FDA study, and Judy Mikovits of the WPI emphasized that the PNAS paper essentially confirmed the presence of XMRV in CFS.

But that isn’t true. Not one single XMRV-sequence was found. Instead related MLV-sequences were detected.

Before I go into further details, please have a look at the previous posts if you are not familiar with the technical details, like the PCR technique. There (and in a separate spreadsheet) I also describe the experimental differences between the studies.

Now what did Lo et al exactly do? What were their findings? And in what respect do their findings agree or disagree with the WPI-paper?

Like WPI, Lo et al. used nested PCR to detect XMRV. Nested means that there are two rounds of amplification. Outer primers are used to amplify the DNA between the two primers (primers are short, very specific anchors, each fitting an approximately 20-basepair piece of DNA). Then a second round is performed with primers fitting a short sequence within the amplified sequence, or amplicon.

The first amplified gag product is ~730 basepairs long, the second ~410 or ~380 basepairs, depending on the primer sets used: Lo et al. used the same set of outer primers as WPI to amplify the gag gene, but the inner gag primers were either those of WPI (410 bp) or an in-house-designed primer set (380 bp).
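The two-stage logic can be sketched in a few lines of Python. The template and primer sequences below are made-up placeholders (the real WPI and Lo primers are not reproduced here); the point is only that the inner primers must find their binding sites within the first-round amplicon, yielding a shorter product:

```python
def find_amplicon(template, fwd, rev_site):
    """Return the stretch bounded by a forward primer site and a
    downstream reverse-primer binding site, or None if either is absent.
    (Exact matching only; real PCR tolerates some mismatches.)"""
    i = template.find(fwd)
    if i == -1:
        return None
    j = template.find(rev_site, i + len(fwd))
    if j == -1:
        return None
    return template[i : j + len(rev_site)]

# Hypothetical template; uppercase marks the four primer-binding sites.
template = ("ccgt" "AAGGTTCC" "aacgta" "TTGGAACC" "tgcatgcat"
            "CCAATTGG" "aaattt" "GGTTAACC" "tgca")

outer = find_amplicon(template, "AAGGTTCC", "GGTTAACC")  # round 1 (outer primers)
inner = find_amplicon(outer, "TTGGAACC", "CCAATTGG")     # round 2, nested in round 1
print(len(outer), len(inner))  # → 53 25: the nested product is shorter
```

Because round 2 starts from the already-amplified round 1 product, a rare target can be pulled out of an enormous background, which is exactly what makes the method both sensitive and contamination-prone.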

Using the nested PCR approach, Lo et al. found gag gene sequences in peripheral blood mononuclear cells (PBMC) in 86.5% of all tested CFS patients (32/37) and in 96% (!) of the rigorously evaluated CFS patients (24/25), compared with only 6.8% of the healthy volunteer blood donors (3/44). Half of the patients with gag detected in their PBMC also had detectable gag in their plasma (thus not in the cells). Vice versa, all but one patient with gag sequences in the plasma also had gag-positive PBMC. Thus these findings are consistent.
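These rates follow directly from the reported counts (numbers taken from the text above):

```python
# (positives, total) per group, as reported by Lo et al.
groups = {
    "all tested CFS patients": (32, 37),
    "rigorously evaluated CFS patients": (24, 25),
    "healthy volunteer blood donors": (3, 44),
}
for name, (pos, n) in groups.items():
    print(f"{name}: {pos}/{n} = {100 * pos / n:.1f}%")
# → 86.5%, 96.0% and 6.8% respectively
```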

The gels (Figs. 1 and 2) showing the PCR products in PBMC don't look pretty, because many nonspecific bands are amplified from human PBMC. These nonspecific bands are lacking when plasma (which contains no PBMC) is tested. To get an idea: the researchers are trying to amplify a 730 bp sequence, using primers that are 23-25 basepairs long, that need to find the needle in a haystack (only 1 in 1,000 to 10,000 PBMC may be infected, and 1 PBMC contains approximately 6×10^9 basepairs). Only the order of A, C, G and T varies! Thus there is a lot of competition from sequences that have a near-fit, but are far more abundant than the true gag sequences fitting the primers.
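The back-of-the-envelope version of that needle-in-a-haystack argument, using the figures just quoted (1 infected cell per 1,000 to 10,000 PBMC, ~6×10^9 bp per cell, one ~730 bp target per infected cell):

```python
GENOME_BP = 6e9   # approximate base pairs per human cell
TARGET_BP = 730   # length of the first-round gag amplicon

for cells_per_infected in (1_000, 10_000):
    # total template bp that carries exactly one 730 bp target
    template_bp = GENOME_BP * cells_per_infected
    ratio = template_bp / TARGET_BP
    print(f"1 infected per {cells_per_infected:>6} cells: "
          f"1 target bp per ~{ratio:.0e} template bp")
```

In other words, the target is on the order of one basepair in 10^10: plenty of opportunity for near-matching background sequences to compete.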

Therefore, detecting a band of the expected size does not suffice to demonstrate the presence of a particular viral sequence. Lo et al. verified whether these were true gag sequences by sequencing each band of the appropriate size. All the sequenced amplicons appeared to be true gag sequences. What makes their finding particularly strong is that the sequences were not always identical. This was one of the objections against the WPI findings: they only found the same sequence in all patients (apart from some sequencing errors).

Another convincing finding is that the viral sequences could be demonstrated in samples that were taken 2-15 years apart. The more recent sequences had evolved and gained one or more mutations, exactly what one would expect from a retrovirus. Such findings also make contamination unlikely. The lack of PCR-amplifiable mouse mitochondrial DNA also makes contamination a less likely event (although personally I would be more afraid of contamination by viral amplicons used as a positive control). The negative controls (samples without DNA) were also negative in all cases. The researchers also took all necessary physical precautions to prevent contamination (i.e. the blood samples were prepared at a different lab from the one that did the testing, and neither lab had sequenced similar sequences before).
(People often think of conspiracy whenever the possibility of contamination is mentioned, but this is a real pitfall when amplifying low-frequency targets. It took me two years to exclude contamination in my own experiments.)

To me the data look quite convincing, although we’re still far from concluding that the virus is integrated in the human genome and infectious. And, of course, mere presence of a viral sequence in CFS patients, does not demonstrate a causal relationship. The authors recognize this and try to tackle this in future experiments.

Although the study seems well done, it doesn’t alleviate the confusion raised.

The reason, as said, is that the NIH/FDA researchers didn’t find a single XMRV sequence  in any of the samples!

Instead a variety of related MLV retrovirus sequences were detected.

Sure, the two retroviruses belong to the same "family": the gag gene sequences share 96.6% homology.

However there are essential differences.

One is that XMRV is a xenotropic virus, hence the X: it can no longer enter mouse cells (MR = murine (mouse) related) but can infect cells of other species, including humans. (To be more precise, it has both xenotropic and polytropic characteristics.) According to the phylogenetic tree Lo et al. constructed, the viral sequences they found are more diverse and best match the so-called polytropic MLV viruses (able to infect both mouse and non-mouse cells). (See the PNAS commentary by Valerie Courgnaud et al. for an explanation.)

The main question this paper raises is why they didn't find XMRV, like WPI did.

Sure, Mikovits (who is "delighted" by the results) now hurries to say that in the meantime her group has found more diversity in the virus as well [8]. Or as a critical CFS patient writes on his blog:

In my opinion, the second study is neither a confirmation for, nor a replication of the first. The second study only confirms that WPI is on to something and that there might be an association between a type of retroviruses and ME/CFS.
For 10 months all we've heard was "it's XMRV". If you didn't find XMRV you were doing something wrong: wrong selection criteria, wrong methods, or wrong attitude. Now comes this new paper which doesn't find XMRV either and it's heralded as the long awaited replication and confirmation study. Well, it isn't! Nice piece of spin by Annette Whittemore and Judy Mikovits from the WPI as you can see in the videos below (… ). WPI may count their blessings that the NIH/FDA/Harvard team looked at other MLVs and found them or otherwise it could have been game over. Well, probably not, but how many negative studies can you take?

Assuming the NIH/FDA findings are true, the key question is not why most experiments were completely negative (there may be many reasons why; for one thing, they only tested for XMRV), but why Lo didn't find any XMRV among the positive CFS patients, and WPI didn't find any MLV in their positive patient samples.

Two US cohorts of CFS patients with mutually exclusive presence of either XMRV or MLV, whereas the rest of the world finds nothing? I don't believe it. One would at least expect overlap.

My guess is that it must be something in the conditions used. Perhaps the set of primers.

As said, Lo used the same external primers as WPI, but varied the internal primers. Sometimes they used those of WPI (GAG-I-F/GAG-I-R; F = forward, R = reverse), yielding a ~410 basepair product, sometimes their own primers (NP116/NP117), yielding a ~380 basepair product. In the Materials and Methods section, Lo et al. write: "The NP116/NP117 was an in-house-designed primer set based on the highly conserved sequences found in different MLV-like viruses and XMRVs".
In the supplement they are more specific:

…. (GAG-I-F/GAG-I-R (intended to be more XMRV specific) or the primer set NP116/NP117 (highly conserved sequences for XMRV and MLV).

Is it possible that the conditions that WPI used were not so suitable for finding MLV?

Let's look at Fig. S1 (partly depicted below), showing the multiple sequence alignment of 746 gag nucleotides (nt) amplified from 21 CFS patient samples (3 types) and one blood donor (BD22) [first 4 rows] and their comparison with known MLV (middle rows) and XMRV (last rows) sequences. There is nothing remarkable about the area of the reverse primer (not shown). The external forward primer (–>) fits all sequences (dots mean identical nucleotides). Just next to this primer is a 15 nt deletion specific for XMRV (—-), but that isn't a hurdle for the external primers. The internal primers (–>) overlap, but the WPI internal primer starts earlier, in a region with heterogeneity: here there are two mismatches between MLV- and XMRV-like viruses. In this region the CFS-type MLV (nt 196) starts with TTTCA, whereas the XMRV sequences all have TCTCG. And yes, the WPI primer starts as follows: TCTCG. Thus there is a complete match with XMRV, but a 2 bp mismatch with MLV. Such a mismatch might easily explain why WPI (not using optimal PCR conditions) didn't find any low-frequency MLV sequences. The specific inner primer designed by the group of Lo and Alter does fit both sequences, so differences in this region don't explain the failure of Lo et al. to detect XMRV. Perhaps MLV is more abundant and easier to detect?
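That 2 bp difference is easy to check programmatically. The five-nucleotide stretches below are only the ones quoted in the text; the full primer sequences are not reproduced here:

```python
def mismatches(a, b):
    """Count positional mismatches between two equal-length sequences."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

wpi_primer_start = "TCTCG"  # 5' start of the WPI inner forward primer
xmrv_region = "TCTCG"       # XMRV sequences at this position (nt 196)
mlv_region = "TTTCA"        # CFS-type MLV sequences at the same position

print("vs XMRV:", mismatches(wpi_primer_start, xmrv_region))  # → 0
print("vs MLV: ", mismatches(wpi_primer_start, mlv_region))   # → 2
```

Whether two mismatches actually kill amplification depends on their position and the annealing stringency, but for a target already near the detection limit it is a plausible reason to miss it.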

But wait a minute. BD22, a variant detected in normal donor blood, does have the XMRV variant sequence in this particular (very small) region. This sequence and the two other sequenced normal-donor MLV variants differ from the patient variants, although, according to Lo, both patient and healthy donor variants differ more from XMRV than from each other (Figs. 4 and S2). Using the eyeball test I do see many similarities between XMRV and BD22, though (not only in the above region).

The media pay no attention to these differences between patient and healthy-control viral sequences, or to the different primer sets used. Did no one actually read the paper?

Whether these differences are relevant depends on whether identical conditions were used for each type of sample. It worries me that Lo says he sometimes used the WPI inner primer set and sometimes the other, specific set. When is sometimes? It is striking that Fig. 1 shows the results from CFS patients obtained with the specific primers, and Fig. 2 the results from normal donor blood obtained with the WPI primers. Why? Is this the reason they picked up a sequence that fits the WPI primers (starting with TCTCG)?

I don't like it. I want to know how many tested samples were positive or negative with either primer set. I not only want to see the PCR results of CFS plasma (positive in half of the PBMC-positive cases), but also of the control plasma. And I want a mix of patient samples, normal samples, and positive and negative controls on one gel. Everyone doing PCR knows that the signal can differ per PCR and per gel. Furthermore, the second PCR round gives way too many nonspecific bands, whereas usually you get rid of those under optimal conditions.

Another confusing finding is a statement at the FDA site:

Additionally, the CDC laboratory provided 82 samples from their published negative study to FDA, who tested the samples blindly.  Initial analysis shows that the FDA test results are generally consistent with CDC, with no XMRV-positive results in the CFS samples CDC provided (34 samples were tested, 31 were negative, 3 were indeterminate).

What does this mean? Which inner primers did the FDA use? With the WPI inner primers, MLV sequences might just not be found (although there might be other reasons as well, such as the less stringent patient criteria).

And what to think of the earlier WPI findings? They did find “XMRV” sequences while no one else did.

I have always been skeptical (see here and here), because:

  • No mention of sensitivity in their paper.
  • No mention of a positive control. The negative controls were just vials without added DNA.
  • No variation in the sequences detected, a statement that they retracted after the present NIH/FDA publication. What a coincidence.
  • Although the PCR is near the detection limit, only first-round products are shown. These are stronger than you would expect them to be after one round.
  • The latter two points are suggestive of contamination. No extra tests were undertaken to exclude this.
  • Surprisingly, in an open letter/news item (Feb 18) they disclosed that culturing PBMCs is necessary to obtain a positive signal. They refer to the original Science paper, but this paper doesn't mention the need for culturing at all.
  • In another open letter*, Annette Whittemore, director of the WPI, writes to Dr. McClure, virologist on one of the negative papers, that WPI researchers had detected XMRV in patient samples from both Dr. Kerr's and Dr. van Kuppeveld's cohorts. So if we are to believe Annette, the negative samples weren't negative.
  • At this stage of controversy, the test is sold as “a reliable diagnostic tool“ by a firm with strong ties to WPI. In one such mail Mikovits says: “First of all the current diagnostic testing will define with essentially 100% accuracy! XMRV infected patients”.

Their PR machine, ever-changing "findings" and anti-scientific attitude are worrying. Read more about it at erv here.

What can we conclude from all this? I don't know. I presume that WPI did find "something", but wasn't cautious, critical and accurate enough in its drive to move forward (hence the often-changing statements). I presume that the four negative findings relate to the nature of the samples, or the use of the WPI inner primers, or both. I assume that the NIH/FDA findings are real, although the actual positive rates might vary depending on the conditions used (I would love to see all the actual data).

The virologist "erv" is less positive about the quality of the findings and their implications. In one of her comments (#17) she responds:

No. An exogenous mouse ERV in humans makes no sense. But thats what their tree says. Mouse ERV is even more incredible than XMRV. Might be able to figure this out more if they upload their sequences to genbank. I realize they tried very hard not to contaminate their samples with mouse cells. That doesnt mean mouse DNA isnt in any of their store-bought reagents. There are H2O lanes in the mitochondral gels, but not the MLV gels (Fig 1, Fig 2). Why? Positive and negative controls go on every gel, end of story. First lesson every rotating student in our lab learns.

Finding mere virus-like sequences in CFS-patients is not enough. We need more data, more carefully gathered and presented. Not only in CFS patients and controls, but in cohorts of patients with different diseases and controls under controlled conditions. This will tell something about the specificity of the finding for CFS. We also need more information about XMRV infectivity and serology.

We also need to find out what being normal, healthy and MLV-positive means.

The research on XMRV/MLV seems to progress with one step forward, two steps back.

For the sake of the CFS patients, I truly hope that we are walking in the right direction.


The title from this post was taken from:


  1. Lo SC, Pripuzova N, Li B, Komaroff AL, Hung GC, Wang R, & Alter HJ (2010). Detection of MLV-related virus gene sequences in blood of patients with chronic fatigue syndrome and healthy blood donors. Proceedings of the National Academy of Sciences of the United States of America PMID: 20798047
  2. Schekman R (2010). Patients, patience, and the publication process. Proceedings of the National Academy of Sciences of the United States of America PMID: 20798042
  3. Switzer WM, Jia H, Hohn O, Zheng H, Tang S, Shankar A, Bannert N, Simmons G, Hendry RM, Falkenberg VR, Reeves WC, & Heneine W (2010). Absence of evidence of xenotropic murine leukemia virus-related virus infection in persons with chronic fatigue syndrome and healthy controls in the United States. Retrovirology, 7 PMID: 20594299
  4. Erlwein, O., Kaye, S., McClure, M., Weber, J., Wills, G., Collier, D., Wessely, S., & Cleare, A. (2010). Failure to Detect the Novel Retrovirus XMRV in Chronic Fatigue Syndrome PLoS ONE, 5 (1) DOI: 10.1371/journal.pone.0008519
  5. Groom, H., Boucherit, V., Makinson, K., Randal, E., Baptista, S., Hagan, S., Gow, J., Mattes, F., Breuer, J., Kerr, J., Stoye, J., & Bishop, K. (2010). Absence of xenotropic murine leukaemia virus-related virus in UK patients with chronic fatigue syndrome Retrovirology, 7 (1) DOI: 10.1186/1742-4690-7-10
  6. van Kuppeveld, F., Jong, A., Lanke, K., Verhaegh, G., Melchers, W., Swanink, C., Bleijenberg, G., Netea, M., Galama, J., & van der Meer, J. (2010). Prevalence of xenotropic murine leukaemia virus-related virus in patients with chronic fatigue syndrome in the Netherlands: retrospective analysis of samples from an established cohort BMJ, 340 (feb25 1) DOI: 10.1136/bmj.c1018
  7. Lombardi VC, Ruscetti FW, Das Gupta J, Pfost MA, Hagen KS, Peterson DL, Ruscetti SK, Bagni RK, Petrow-Sadowski C, Gold B, Dean M, Silverman RH, & Mikovits JA (2009). Detection of an infectious retrovirus, XMRV, in blood cells of patients with chronic fatigue syndrome. Science (New York, N.Y.), 326 (5952), 585-9 PMID: 19815723
  8. Enserink M (2010). Chronic fatigue syndrome. New XMRV paper looks good, skeptics admit–yet doubts linger. Science (New York, N.Y.), 329 (5995) PMID: 20798285


Three Studies Now Refute the Presence of XMRV in Chronic Fatigue Syndrome (CFS)

27 04 2010

“Removing the doubt is part of the cure” (RedLabs)

Two months ago I wrote about two contradictory studies on the presence of the novel XMRV retrovirus in blood of patients with Chronic Fatigue Syndrome (CFS).

The first study, published in autumn last year by investigators of the Whittemore Peterson Institute (WPI) in the USA [1], claimed to find XMRV virus in peripheral blood mononuclear cells (PBMC) of patients with CFS. They used PCR and several other techniques.

A second study, performed in the UK [2] failed to show any XMRV-virus in peripheral blood of CFS patients.

Now there are two other negative studies, one from the UK [3] and one from the Netherlands [4].

Does this mean that XMRV is NOT present in CFS patients?

No; different results may still be due to different experimental conditions and patient characteristics.

The discrepancies between the studies discussed in the previous post remain, but there are new insights that I would like to share.*

1. Conflict of Interest, bias

Most CFS patients seem “to go for” WPI, because WPI, established by the family of a chronic fatigue patient, has a commitment to CFS. CFS patients feel that many psychiatrists, including authors of the negative papers [2-4], dismiss CFS as something “between the ears”. This explains the negative attitude toward these “psych-healers” on ME forums (the Belgian forum MECVS even has a section for “faulty/wrong” papers, e.g. about the “failure” of psychiatrists to demonstrate XMRV!).

Since a viral (biological) cause would not fit the philosophy of these psychiatrists, they might just not do their best to find the virus. Or even worse…

Dr. Mikovits, co-author of the first paper [1] and Director of Research at WPI, even responded to the first UK study as follows (ERV and Prohealth):

“You can’t claim to replicate a study if you don’t do a single thing that we did in our study,” …
“They skewed their experimental design in order to not find XMRV in the blood.” (emphasis mine)

Mikovits also suggested that insurance companies in the UK are behind attempts to sully their findings (ERV).

These kinds of personal attacks are “not done” in science. And certainly not via this route.

Furthermore, WPI has its own bias.

For one thing WPI is dependent on CFS and other neuro-immune patients for its existence.

WPI has generated numerous press releases and doesn't seem to use the normal scientific channels. Mikovits presented a one-hour Q&A session about XMRV and CFS (at a stage when nothing had been proven yet). She will also present data about XMRV at an autism meeting. There is a lot of PR going on.

Furthermore, there is an intimate link between WPI and VIP Dx, both housed in Reno. VIP Dx is licensed by WPI to provide the XMRV test; indeed, VIP Dx is the new name of the former RedLabs.

Interestingly, Lombardi (the first author of the paper) co-founded RedLabs USA Inc. and served as Director of Operations at RedLabs; Harvey Whittemore owns 100% of VIP Dx and was the company President until this year; and Mikovits is the Vice President of VIP Dx (ME-forum). They didn't disclose this in the Science paper.


VIP Dx offers a plethora of tests and is the only RedLabs branch that performs the WPI PCR test, now replaced by the “sensitive” culture test (see below). At this stage of controversy, the test is sold as “a reliable diagnostic tool” (according to Prohealth). Surely their motto “Removing the doubt is part of the cure” appeals to patients. But how can doubt be removed as long as the association of XMRV with CFS has not been confirmed, the diagnostic tests offered have not yet been truly validated (see below), a causal relationship between XMRV and CFS has not been proven, and XMRV does not even seem that specific for CFS: it has also been found in people with prostate cancer, autism, atypical multiple sclerosis, fibromyalgia and lymphoma (WSJ).

Meanwhile, CFS/ME websites are abuzz with queries about how to obtain the tests (also in Europe)… and antiretroviral drugs. Sites like Prohealth seem to advocate for WPI. There is even a commercial XMRV site (who runs it is unclear).

Project leader Mikovits, and the WPI as a whole, seem to have many contacts with CFS patients, also by mail. In one such mail she says (emphasis and [exclamations] mine):

“First of all the current diagnostic testing will define with essentially 100% accuracy! XMRV infected patients”. [Blimey!]….
We are testing the hypothesis that XMRV is to CFS as HIV is to AIDS. There are many people with HIV who don’t have AIDS (because they are getting treatment). But by definition if you have ME you must have XMRV. [doh?]
[….] There is so much that we don’t know about the virus. Recall that the first isolation of HIV was from a single AIDS patient published in late 1982 and it was not until 2 years later that it was associated with AIDS with the kind of evidence that we put into that first paper. Only a few short years later there were effective therapies. […]. Please don’t hesitate to email me directly if you or anyone in the group has questions/concerns. To be clear..I do think even if you tested negative now that you are likely still infected with XMRV or its closest cousin..

Kind regards, Judy

These tests cost patients money, because so far even Medicare will reimburse only 15% of the PCR test. VIP Dx does donate anything above cost to XMRV research, but isn't this an indirect way to support the WPI research? Why do patients have to pay for tests that have not been proven to be diagnostic? The test is only in the experimental phase.

I ask you: would such an attitude be tolerated from a regular pharmaceutical company?


Another discrepancy between the WPI study and the other studies is that only the WPI used the Fukuda and Canadian criteria to diagnose CFS patients. The Canadian criteria are much more rigid than those used in the European studies. This could explain why WPI found more positives than the other studies, but it can't fully explain why WPI reports 96% positives (their recent claim) against 0% in the other studies, for at least some of the European patients should fulfil the more rigid criteria.

Regional Differences

Patients of the positive and negative studies also differ with respect to the region they come from (US and Europe). Indeed, XMRV has previously been detected in prostate cancer cells from American patients, but not from German and Irish patients.

However, the latter two reasons may not be crucial if the statement in the open letter* from Annette Whittemore, director of the WPI, to Dr McClure**, the virologist of the second paper [2], is true:

We would also like to report that WPI researchers have previously detected XMRV in patient samples from both Dr. Kerr’s and Dr. van Kuppeveld’s cohorts prior to the completion of their own studies, as they requested. We have email communication that confirms both doctors were aware of these findings before publishing their negative papers.(……)
One might begin to suspect that the discrepancy between our findings of XMRV in our patient population and patients outside of the United States, from several separate laboratories, are in part due to technical aspects of the testing procedures.

Assuming that this is true, we will now concentrate on the differences in the PCR procedures and results.


All publications have used PCR to test the presence of XMRV in blood: XMRV is present in such low amounts that you can’t detect the RNA without amplifying it first.

PCR allows the detection of a single or a few copies of target DNA/RNA per milligram of DNA input, theoretically 1 target DNA copy in 10^5 to 10^6 cells (RNA is first reverse transcribed into DNA). If the target is not frequent, the amplified DNA is only visible after Southern blotting (hybridization with a radioactive probe that perfectly fits the amplified sequence) or after a second PCR round, so-called nested PCR. In this second round a set of primers is used internal to the first set of primers. Thus a weak signal is converted into a strong, visible one.
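To get a feel for why results at the detection limit are so erratic, consider the sampling statistics. The sketch below is my own illustration (not taken from any of the papers discussed): if a sample contains on average λ target copies per PCR input, Poisson statistics say the chance that a given reaction receives at least one template molecule is 1 − e^(−λ), so replicate reactions from the same weakly positive sample will come up positive or negative by chance alone.

```python
import math

def detection_probability(mean_copies_per_reaction: float) -> float:
    """P(a reaction receives >= 1 target copy), assuming Poisson sampling
    of template molecules into the PCR input."""
    return 1.0 - math.exp(-mean_copies_per_reaction)

# Near the detection limit, replicate reactions from the same sample
# fail or succeed by chance alone:
for lam in (0.1, 0.5, 1.0, 3.0):
    print(f"mean {lam:3.1f} copies/reaction -> "
          f"P(template present) = {detection_probability(lam):.2f}")
```

This is also why testing a patient once near the detection limit proves very little, in either direction.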

All groups have applied nested PCR. The last two studies have also used a sensitive real time PCR, which is more of a quantitative assay and less prone to contamination.

Twenty years ago I had experiences similar to WPI's. I saw very vague PCR bands that had all the characteristics of a tumor-specific sequence in normal individuals, which was contrary to prevailing beliefs and hard to prove. This had everything to do with a target frequency near the detection limit and with the high chance of contamination with positive controls. I had to enrich tonsils and purify B cells to get a signal, and to sequence the resulting PCR products to prove we had no contamination. The data were soon confirmed by others. By the way, our finding of a tumor-specific sequence in normal individuals didn't mean that everyone develops lymphoma (oh, the analogy…).

Now, if you want to prove you're right when you have discovered something new, you had better do it well.

Whether a PCR assay at or near the detection limit of PCR is successful depends on:

  • the sensitivity of the PCR
    • Every scientific paper should show the detection limit of the PCR: what can the PCR detect? Is 1 virus particle enough, or must there be 100 copies of the virus before it is detected? Preferably the positive control should be diluted in negative cells; this is called spiking. Testing a positive control diluted in water doesn't reflect the true sensitivity: it is much easier for primers to find one single small piece of target DNA in water than to find that piece of DNA swimming in a pool of DNA from 10^5 cells.
  • the specificity of the PCR.
    • You can get nonspecific bands if the primers recognize sequences other than the intended ones. Suppose you have one target sequence competing with a lot of similar sequences; then even a less-than-perfect match in the normal genome has every chance of being amplified. Therefore you should have a negative control of cells not containing the virus (e.g. placental DNA), not only water, because this resembles the PCR conditions of your test samples.
  • Contamination
    • this should be prevented by rigorous spatial separation of  sample preparation, PCR reaction assembly, PCR execution, and post-PCR analysis. There should be many negative controls. Control samples should be processed the same way as the experimental samples and should preferably be handled blinded.
  • The quality and properties of your sample.
    • If XMRV is mainly present in PBMC, separation of PBMC by Ficoll (from other cells and serum) could make the difference between a positive and a negative signal. Furthermore, whole blood and other body fluids often contain inhibitors that may lead to a much lower sensitivity. Purification steps are recommended, and the presence of inhibitors should be checked by spiking and by amplification of control sequences.
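An aside on the negative controls mentioned above: a single water blank rarely exposes low-level, sporadic contamination. A minimal sketch of the arithmetic (the 5% per-reaction contamination rate is a hypothetical number, chosen purely for illustration):

```python
def p_contamination_detected(per_reaction_rate: float, n_controls: int) -> float:
    """P(at least one of n negative controls shows a false-positive band),
    assuming contamination events are independent between reactions."""
    return 1.0 - (1.0 - per_reaction_rate) ** n_controls

# With a hypothetical 5% per-reaction contamination rate, one water blank
# usually misses the problem; a panel of 20 blanks almost always reveals it.
for n in (1, 5, 20):
    print(f"{n:2d} negative controls -> P(contamination seen) = "
          f"{p_contamination_detected(0.05, n):.2f}")
```

Hence the demand for many negative controls, processed exactly like the patient samples.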

Below are the results per article. I have also made an overview of the results in a Google spreadsheet.

The PCR conditions are badly reported in the WPI paper, published in Science [1]. As a matter of fact, I wonder how it ever got through review.

  • Unlike XMRV-positive prostate cancer cells, XMRV infection status did not correlate with the RNASEL genotype.
  • The sensitivity of the PCR is not shown (nor discussed).
  • No positive control is mentioned. The negative controls were just vials without added DNA.
  • Although the PCR is near the detection limit, only first-round products are shown (without confirmation of the identity of the product). The positive bands are really strong, whereas you would expect them to be weak (being near the detection limit even after two rounds). This is suggestive of contamination.
  • PBMC have been used as a source and that is fine, but one of WPI’s open letters/news items (Feb 18), in response to the first UK study, says the following:
    • point 7. Perhaps the most important issue to focus on is the low level of XMRV in the blood. XMRV is present in such a small percentage of white blood cells that it is highly unlikely that either UK study’s PCR method could detect it using the methods described. Careful reading of the Science paper shows that increasing the amount of the virus by growing the white blood cells is usually required rather than using white blood cells directly purified from the body. When using PCR alone, the Science authors found that four samples needed to be taken at different times from the same patient in order for XMRV to be detected by PCR in freshly isolated white blood cells.(emphasis mine)
  • But carefully reading the methods in the “supporting material”, I only find:
    • The PBMC (approximately 2 x 107 cells) were centrifuged at 500x g for 7 min and either stored as unactivated cells in 90% FBS and 10% DMSO at -80 ºC for further culture and analysis or resuspended in TRIzol (…) and stored at -80 ºC for DNA and RNA extraction and analysis. (emphasis mine)

    Either… or. It seems clear to me that the PBMC were not cultured for PCR, at least not in the experiments described in the Science paper.

    How can one accuse other scientists of not “duplicating” the results if the methods are so poorly described and the authors don't adhere to them themselves?

  • Strikingly, only the PCR reactions performed by the Cleveland Clinic (using one round) are shown, not the actual PCR data produced by WPI. That is really odd.
  • It is also not clear whether the results obtained by the various tests were consistent.
    Suzanne D. Vernon, PhD, Scientific Director of the CFIDS Association of America (a charitable organization dedicated to CFS), has dug deeper into the topic. This is what she wrote [9]:
    Of the 101 CFS subjects reported in the paper, results for the various assays are shown for only 32 CFS subjects. Of the 32 CFS subjects whose results for any of the tests are displayed, 12 CFS subjects were positive for XMRV on more than one assay. The other 20 CFS subjects were documented as positive by just one testing method. Using information from a public presentation at the federal CFS Advisory Committee, four of the 12 CFS subjects (WPI 1118, 1150, 1199 and 1125) included in the Science paper were also reported to have cancer – either lymphoma, mantle cell lymphoma or myelodysplasia. The presentation reported that 17 WPI repository CFS subjects with cancer had tested positive for XMRV. So how well are these CFS cases characterized, really?

The Erlwein study was published within 3 months of the first article. It is simpler in design and was reviewed in less than 3 days. They used whole blood instead of PBMC and performed nested PCR using another set of primers. This doesn't matter a lot, provided the PCR is sensitive. However, the sensitivity of the assay is not shown, and the PCR bands of the positive control look very weak, even after the second round (I think they made a mistake in the legend as well: lane 9 is not a positive control but a base-pair ladder, I presume). It also looked like they used a “molecular plasmid control in water”, but in the comments on the PLoS ONE paper one of the authors states that the positive control WAS spiked into patient DNA (Qetzel, commenting at Pipeline Corante). Using this PCR, none of the 186 CFS samples was positive.

Groom and van Kuppeveld studies
The two other studies use an excellent PCR approach [3,4]. Both used PBMC; van Kuppeveld used older cryopreserved PBMC. They first tried the primers of Lombardi in a similar nested PCR, but since the sensitivity was low they changed to a real-time PCR with other, optimized primers. They determined the sensitivity of the PCR by serially diluting a plasmid into PBMC DNA from a healthy donor. The limit of sensitivity equates to 16 and 10 XMRV gene copies in the UK and the Dutch study, respectively. They had appropriate negative controls and controls for the integrity of the material (GAPDH; spiking normal control cDNAs into negative DNA to exclude sample-mediated PCR inhibition [3]; phocine distemper virus [4]), thereby also excluding the possibility that cryopreserved PBMC were unsuitable for amplification.
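As a sanity check, such mass-based detection limits can be converted into molecule counts using the molar mass of double-stranded DNA (~660 g/mol per base pair). The sketch below is my own back-of-the-envelope calculation; the ~13 kb plasmid size is an assumption on my part, chosen because it is consistent with the reported figure of 16 molecules:

```python
AVOGADRO = 6.022e23      # molecules per mole
BP_WEIGHT = 660.0        # g/mol per base pair of double-stranded DNA

def copies_from_mass(mass_ng: float, plasmid_bp: int) -> float:
    """Number of plasmid molecules contained in a given mass of plasmid DNA."""
    mass_g = mass_ng * 1e-9
    molar_mass = plasmid_bp * BP_WEIGHT   # g/mol for the whole plasmid
    return mass_g / molar_mass * AVOGADRO

# The reported limit of 2.3e-7 ng equating to ~16 molecules is consistent
# with a plasmid of roughly 13 kb (assumed size, for illustration only):
print(round(copies_from_mass(2.3e-7, 13_100)))
```

Doing this kind of conversion is a quick way to check whether a claimed sensitivity is even physically plausible.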

The results look excellent, but none of the PCR samples was positive using these sensitive techniques. A limitation of the Dutch study is that the numbers of patients and controls were small (32 CFS patients, 43 controls).

Summary and Conclusion

In a recent publication in Science, Lombardi and co-authors from the WPI reported the detection of XMRV, a novel retrovirus that was first identified in prostate cancer samples.

Their main finding, the presence of XMRV in peripheral blood cells, could not be replicated by three other studies, even under sensitive PCR conditions.

The original Science study has severe flaws, discussed above. For one thing, WPI itself no longer seems to rely on the PCR test for XMRV.

It is still possible that XMRV is present in amounts at or near the detection limit. But it is equally possible that the finding is an artifact (the paper being so inaccurate and incomplete). And even if XMRV were reproducibly present in CFS patients, causality would still not be proven, and it is far too early to offer patients “diagnostic tests” and retroviral treatment.

Perhaps the most worrisome part of it all is the non-scientific attitude of WPI employees towards colleague scientists and their continuous communication via press releases. And the way they try to reach patients directly: patients who (I can't blame them) are fed up with people not taking them seriously and who are longing for a better diagnosis and, most of all, a better treatment. But this is not the way.


*Many thanks to Tate (a CFS patient) for alerting me to the latest Dutch publication, the Q&As of WPI and the findings of Mrs Vernon.


  1. Lombardi VC, Ruscetti FW, Das Gupta J, Pfost MA, Hagen KS, Peterson DL, Ruscetti SK, Bagni RK, Petrow-Sadowski C, Gold B, Dean M, Silverman RH, & Mikovits JA (2009). Detection of an infectious retrovirus, XMRV, in blood cells of patients with chronic fatigue syndrome. Science (New York, N.Y.), 326 (5952), 585-9 PMID: 19815723
  2. Erlwein, O., Kaye, S., McClure, M., Weber, J., Wills, G., Collier, D., Wessely, S., & Cleare, A. (2010). Failure to Detect the Novel Retrovirus XMRV in Chronic Fatigue Syndrome PLoS ONE, 5 (1) DOI: 10.1371/journal.pone.0008519
  3. Groom, H., Boucherit, V., Makinson, K., Randal, E., Baptista, S., Hagan, S., Gow, J., Mattes, F., Breuer, J., Kerr, J., Stoye, J., & Bishop, K. (2010). Absence of xenotropic murine leukaemia virus-related virus in UK patients with chronic fatigue syndrome Retrovirology, 7 (1) DOI: 10.1186/1742-4690-7-10
  4. van Kuppeveld, F., Jong, A., Lanke, K., Verhaegh, G., Melchers, W., Swanink, C., Bleijenberg, G., Netea, M., Galama, J., & van der Meer, J. (2010). Prevalence of xenotropic murine leukaemia virus-related virus in patients with chronic fatigue syndrome in the Netherlands: retrospective analysis of samples from an established cohort BMJ, 340 (feb25 1) DOI: 10.1136/bmj.c1018
  5. McClure, M., & Wessely, S. (2010). Chronic fatigue syndrome and human retrovirus XMRV BMJ, 340 (feb25 1) DOI: 10.1136/bmj.c1099
Figure legend (Groom et al. [3]): Sensitivity of PCR screening for XMRV in PBMC DNA. VP62 plasmid was serially diluted 1:10 into PBMC DNA from a healthy donor and tested by Taqman PCR with env 6173 primers and probe. The final amount of VP62 DNA in the reaction was A, 2.3 × 10^-2 ng; B, 2.3 × 10^-3 ng; C, 2.3 × 10^-4 ng; D, 2.3 × 10^-5 ng; E, 2.3 × 10^-6 ng; F, 2.3 × 10^-7 ng; or G, 2.3 × 10^-8 ng. The limit of sensitivity was 2.3 × 10^-7 ng (trace F), which equates to 16 molecules of the VP62 XMRV clone.

To Retract or Not to Retract… That’s the Question

7 06 2011

In the previous post [1] I discussed how the editors of Science asked for the retraction of a paper linking the XMRV retrovirus to ME/CFS.

The decision of the editors was based on the failure of at least 10 other studies to confirm these findings and on growing support that the results were caused by contamination. When the authors refused to retract their paper, Science issued an Expression of Concern [2].

In my opinion retraction is premature. Science should at least await the results of two multi-center studies that were designed to confirm or disprove the results. These studies will continue anyway: the budget is already allocated.

Furthermore, I can’t suppress the idea that Science asked for a retraction to exonerate themselves for the bad peer review (the paper had serious flaws) and their eagerness to swiftly publish the possibly groundbreaking study.

And what about the other studies linking the XMRV to ME/CFS or other diseases: will these also be retracted?
And what happens in the improbable case that the multi-center studies confirm the 2009 paper? Would Science republish the retracted paper?

Thus, in my opinion, it is up to other scientists to confirm or disprove published findings. Remember that falsifiability was Karl Popper's basic scientific principle. My conclusion was that “fraud is a reason to retract a paper and doubt is not”.

This is my opinion, but is this opinion shared by others?

When should editors retract a paper? Is fraud the only reason? When should editors issue a letter of concern? Are there guidelines?

Let me first say that even editors don't agree. Schekman, the editor-in-chief of PNAS, has no direct plans to retract another paper reporting XMRV-like viruses in CFS [3].

Schekman considers it “an unusual situation to retract a paper even if the original findings in a paper don’t hold up: it’s part of the scientific process for different groups to publish findings, for other groups to try to replicate them, and for researchers to debate conflicting results.”

Back at the Virology Blog [4] there was also a vivid discussion about the matter. Prof. Vincent Racaniello gave the following answer in response to a reader's question:

I don’t have any hard numbers on how often journals ask scientists to retract a paper, only my sense that it is very rare. Author retractions are more frequent, but I’m only aware of a handful of those in a year. I can recall a few other cases in which the authors were asked to retract a paper, but in those cases scientific fraud was involved. That’s not the case here. I don’t believe there is a standard policy that enumerates how such decisions are made; if they exist they are not public.

However, there is a guideline for editors: the Guidance from the Committee on Publication Ethics (COPE) (PDF) [5].

Ivan Oransky, of the great blog Retraction Watch, linked to it when we discussed reasons for retraction.

With regard to retraction the COPE-guidelines state that journal editors should consider retracting a publication if:

  1. they have clear evidence that the findings are unreliable, either as a result of misconduct (e.g. data fabrication) or honest error (e.g. miscalculation or experimental error)
  2. the findings have previously been published elsewhere without proper crossreferencing, permission or justification (i.e. cases of redundant publication)
  3. it constitutes plagiarism
  4. it reports unethical research

According to the same guidelines journal editors should consider issuing an expression of concern if:

  1. they receive inconclusive evidence of research or publication misconduct by the authors 
  2. there is evidence that the findings are unreliable but the authors’ institution will not investigate the case 
  3. they believe that an investigation into alleged misconduct related to the publication either has not been, or would not be, fair and impartial or conclusive 
  4. an investigation is underway but a judgement will not be available for a considerable time

Thus, in the case of the Science XMRV/CFS paper, an expression of concern certainly applies (all 4 points), and one might even consider a retraction because the results seem unreliable (point 1). But it is not 100% established that the findings are false. There is only serious doubt…

The guidelines seem to leave room for separate decisions. Retracting a paper in case of plain fraud is not under discussion. But when is an error sufficiently established and important enough to warrant retraction?

Apparently retractions are on the rise. Although still rare (0.02% of all publications by the late 2000s), there has been a tenfold increase in retractions compared to the early 1980s (see the review at Scholarly Kitchen [6] of two papers: [7] and [8]). However, it is unclear whether the increasing rate of retraction reflects more fraudulent or erroneous papers or greater diligence. The first paper [7] also highlights that, out of fear of litigation, editors are generally hesitant to retract an article without the author's permission.

At the blog Nerd Alert they give a nice overview [9] (based on Retraction Watch, but summarized in one post 😉 ). They clarify that papers are retracted for “less dastardly reasons than those cases that hit the national headlines and involve purposeful falsification of data”, such as the fraudulent papers of Andrew Wakefield (autism caused by vaccination). Besides the mistaken publication of the same paper twice, data over-interpretation, plagiarism and the like, the reason can also be more trivial: ordering the wrong mice or using an incorrectly labeled bottle.

Still, scientists don't unanimously agree that such errors should lead to retraction.

Drug Monkey blogs about his discussion [10] with @ivanoransky over a recent post at Retraction Watch, which asks whether a failure to replicate a result justifies a retraction [11]. Ivan Oransky presents a case where a researcher (B) couldn't reproduce the findings of another lab (A) and demonstrated mutations in the published protein sequence that excluded the mechanism proposed in A's paper. The paper wasn't retracted, possibly because B didn't follow A's published experimental protocols in every detail (which reminds me of the XMRV controversy).

Drugmonkey says (quote; cross-posted at Scientopia here — hmmpf, isn't that an example of redundant publication?):

“I don’t give a fig what any journals might wish to enact as a policy to overcompensate for their failures of the past.
In my view, a correction suffices” (provided that search engines like Google and PubMed make clear that the paper was in fact corrected).

Drug Monkey has a point there. A clear watermark should suffice.

However, we should note that most papers are retracted by the authors, not the editors/journals, and that the majority of “retracted papers” remain available: just 13.2% are deleted from the journal's website, and 31.8% are not clearly labelled as such.

Summary of how the naïve reader is alerted to paper retraction (from Table 2 in [7], see: Scholarly Kitchen [6])

  • Watermark on PDF (41.1%)
  • Journal website (33.4%)
  • Not noted anywhere (31.8%)
  • Note appended to PDF (17.3%)
  • PDF deleted from website (13.2%)

My conclusion?

Of course fraudulent papers should be retracted. Also papers with obvious errors that invalidate the conclusions.

However, we should be extremely hesitant to retract papers that can’t be reproduced, if there is no undisputed evidence of error.

Otherwise we would have to retract almost all published papers at one point or another. Because if Professor Ioannidis is right (and he probably is), “much of what medical researchers conclude in their studies is misleading, exaggerated, or flat-out wrong” (see a previous post [12], “Lies, Damned Lies, and Medical Science” [13], and Ioannidis' crushing article “Why Most Published Research Findings Are False” [14]).

All retracted papers (and papers with major deficiencies and shortcomings) should be clearly labeled as such (as Drugmonkey proposed, not only at the PDF and at the Journal website, but also by search engines and biomedical databases).

Or let's hope, with Biochembelle [15], that the future of scientific publishing will make retractions for technical issues obsolete (whether in the form of nano-publications [16] or otherwise):

One day the scientific community will trade the static print-type approach of publishing for a dynamic, adaptive model of communication. Imagine a manuscript as a living document, one perhaps where all raw data would be available, others could post their attempts to reproduce data, authors could integrate corrections or addenda….

NOTE: Retraction Watch (@ivanoransky) and @laikas have voted in @drugmonkeyblog‘s poll about what a retracted paper means [here]. Have you?


  1. Science Asks to Retract the XMRV-CFS Paper, it Should Never Have Accepted in the First Place. ( 2011-06-02)
  2. Alberts B. Editorial Expression of Concern. Science. 2011-05-31.
  3. Given Doubt Cast on CFS-XMRV Link, What About Related Research? (
  4. XMRV is a recombinant virus from mice  (Virology Blog : 2011/05/31)
  5. Retractions: Guidance from the Committee on Publication Ethics (COPE) Elizabeth Wager, Virginia Barbour, Steven Yentis, Sabine Kleinert on behalf of COPE Council:
  6. Retract This Paper! Trends in Retractions Don’t Reveal Clear Causes for Retractions (
  7. Wager E, Williams P. Why and how do journals retract articles? An analysis of Medline retractions 1988-2008. J Med Ethics. 2011 Apr 12. [Epub ahead of print] 
  8. Steen RG. Retractions in the scientific literature: is the incidence of research fraud increasing? J Med Ethics. 2011 Apr;37(4):249-53. Epub 2010 Dec 24.
  9. Don’t touch that blot. ( : 2011/02/25)
  10. What_does_a_retracted_paper_mean? ( 2011/06/03)
  11. So when is a retraction warranted? The long and winding road to publishing a failure to replicate ( : 2011/06/03/)
  12. Much Ado About ADHD-Research: Is there a Misrepresentation of ADHD in Scientific Journals? ( 2011-06-02)
  13. “Lies, Damned Lies, and Medical Science” ( :2010/11/)
  14. Ioannidis, J. (2005). Why Most Published Research Findings Are False. PLoS Medicine, 2 (8) DOI: 10.1371/journal.pmed.0020124
  15. Retractions: What are they good for? ( : 2011/06/04/)
  16. Will Nano-Publications & Triplets Replace The Classic Journal Articles? ( 2011-06-02)

NEW* (Added 2011-06-08):


A New Safe Blood Test to Diagnose Down Syndrome

14 03 2011
The established method to prenatally diagnose gross chromosomal abnormalities is to obtain fetal cells from the womb with a fine needle, either by amniocentesis (sampling the fluid surrounding the foetus in the womb) or by chorionic villus sampling (CVS, sampling the placenta via the vaginal route).
The procedures are not to be sneezed at. I've undergone both, so I speak from experience. It is kind of horrifying to see a needle entering the womb close to your baby, also because you realize that there is a (small) chance that the procedure will cause a miscarriage. Furthermore, in my case (rhesus negative) I also had to get an injection of human anti-D immunoglobulin as a precaution against rhesus disease after birth. Finally, it takes ages (OK, 2-3 weeks) to hear the results. At that point the fetus is already 14-18 weeks old and there is little time to intervene, if that is what is decided.

Image: karyotype of trisomy 21 (Down syndrome), via Wikipedia

Over the years many non-invasive alternatives have been sought to test for Down syndrome, the most common chromosomal abnormality, which affects chromosome 21. Instead of one pair of chromosomes 21, there is (usually) a third copy (hence trisomy 21) (see figure).

An older non-invasive test is a simple blood test that checks the levels of some proteins and hormones in the mother’s blood that are somehow related to Down syndrome. However, this test is not very accurate.

The same is true for another non-invasive method: the ultrasound scan of the neck of the fetus. An increased amount of translucent fluid behind the neck (‘nuchal translucency‘) is associated with Down syndrome and a few other chromosomal defects.

A combination of serum tests and nuchal translucency in the 11th week correctly identifies fetuses with Down syndrome 87% of the time, but misidentifies healthy fetuses as having Down syndrome in 5% of cases (a 5% false positive rate).

For this reason these non-invasive tests are usually used to “screen”, not to diagnose trisomy 21.
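To see why such a test screens rather than diagnoses, it helps to work out the positive predictive value. The sketch below is my own illustration (the ~1 in 700 prevalence is an assumed round number, not a figure from the cited studies): at the stated 87% detection rate and 5% false positive rate, only a few percent of positive screens would be true trisomy 21 cases.

```python
# Back-of-the-envelope PPV calculation (illustrative, not from the cited
# studies): with a low prevalence, even a decent test yields mostly
# false positives among the positive results.
def ppv(sensitivity, fp_rate, prevalence):
    """Positive predictive value via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = fp_rate * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Assuming a rough population prevalence of ~1 in 700 pregnancies:
print(round(ppv(0.87, 0.05, 1 / 700), 3))  # -> 0.024
```

In other words, at low prevalence most positive screens are false positives, which is acceptable for a screen that triggers follow-up testing, but not for a diagnosis.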

Ever since circulating fetal cells and free fetal DNA were found in maternal blood, researchers have tried to enrich for these fetal sources and to characterize chromosomal aberrations using very sensitive molecular diagnostic tools, like the polymerase chain reaction (PCR) (i.e. see this post). The first attempts were directed at detecting the Y chromosome of male babies in the blood of the mother [1].

This January, Chiu et al. published an article in the BMJ showing that Down syndrome can be detected with greater than 98% accuracy in maternal blood [2]. The group of Lo tested 753 pregnant women at high risk of carrying a fetus with trisomy 21 with this new blood test and compared the results with those obtained by karyotyping (analyzing the number and appearance of chromosomes). The new technique is called multiplexed massively parallel DNA sequencing. This is a high-throughput technique in which many DNA fragments are sequenced (i.e. their genetic code is read) in parallel. It is even possible to analyze 2 to 8 labeled maternal samples in parallel (2- and 8-plex reactions).

This parallel DNA sequencing method is “just” a counting method: the sequenced fragments are assigned to chromosomes and counted, and one looks for an overrepresentation of chromosome 21.
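As a rough sketch of this counting principle (all numbers below are illustrative; this is not the actual analysis pipeline of Chiu et al.), one can compare the fraction of reads mapping to chromosome 21 against a reference set of euploid pregnancies:

```python
# Toy sketch of the counting principle: flag a sample when its chr21 read
# fraction is overrepresented relative to euploid reference samples.
# All numbers are illustrative, not taken from the paper.
def chr21_zscore(sample_fraction, reference_mean, reference_sd):
    """Standard score of the chr21 read fraction against euploid references."""
    return (sample_fraction - reference_mean) / reference_sd

# Suppose euploid samples yield ~1.30% chr21 reads (SD 0.02 percentage
# points); a trisomy 21 fetus at ~10% fetal fraction shifts this up ~5%:
z = chr21_zscore(sample_fraction=0.01365, reference_mean=0.0130, reference_sd=0.0002)
print(z > 3)  # a common call threshold is z > 3; prints True
```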

With the superior 2-plex approach, 100% of the 86 known trisomy 21 fetuses were detected, at a 2.1% false positive rate. In other words, the duplex approach had 100% sensitivity (all known positives were detected) and 97.9% specificity (2 samples tested positive that were not trisomy 21 in reality).
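These two measures, as used above, can be restated in code (the counts are chosen to reproduce the reported figures and are otherwise illustrative):

```python
# Sensitivity and specificity from confusion-matrix counts. The counts
# below are illustrative, chosen to match the reported 100% sensitivity
# and 97.9% specificity.
def sensitivity(tp, fn):
    """Fraction of true positives correctly detected."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of true negatives correctly identified."""
    return tn / (tn + fp)

print(sensitivity(tp=86, fn=0))            # all 86 trisomy 21 fetuses found -> 1.0
print(round(specificity(tn=93, fp=2), 3))  # 2 false positives -> 0.979
```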

Thus it is a good, non-invasive technique to exclude Down syndrome in pregnant women known to be at high risk of Down syndrome. The approach might perform less well in a low-risk group. Furthermore, the study was not fully blinded. A practical disadvantage of this new test is that it is expensive, requiring machines not yet available in most hospitals (A Spoonful of Medicine).

Another approach, recently published in Nature Medicine, doesn’t have this disadvantage [3]. It involves the application of methylated DNA immunoprecipitation (MeDiP) and real-time quantitative PCR (rt-qPCR), which are accessible to all basic diagnostic laboratories. MeDiP is a technique to enrich for methylated DNA sequences, which are more prevalent in fetal than in maternal DNA. Next, rt-qPCR (amplification of DNA) is used to assess whether the fetus has an extra copy of the fetal-specific methylated regions compared to a normal fetus.

In an initial series of 20 known normal cases and 20 known trisomy 21 cases, the researchers tested several differentially methylated regions (DMRs).  The majority of the ratio values in normal cases were at or below a value of 1, whereas in trisomy 21 cases the ratio values were above a value of 1. A combination of 8 specific DMRs out of 12 enabled the correct diagnosis of all the cases.
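The decision rule described above can be sketched as follows (an assumed simplification; the paper’s actual classifier combines the 8 DMRs more formally than a simple vote):

```python
# Minimal sketch of the ratio rule: a methylation ratio above 1 for a
# differentially methylated region (DMR) suggests an extra fetal copy.
# The majority-vote combination is my own simplification.
def call_trisomy21(dmr_ratios, threshold=1.0):
    """Majority vote over the selected DMR ratio values."""
    votes = sum(1 for r in dmr_ratios if r > threshold)
    return votes > len(dmr_ratios) / 2

print(call_trisomy21([0.8, 0.9, 1.0, 0.95, 0.85, 0.9, 1.0, 0.88]))  # normal-like -> False
print(call_trisomy21([1.3, 1.2, 1.5, 1.1, 1.4, 1.2, 1.3, 1.6]))     # trisomy-like -> True
```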

Next the authors validated the technique by applying the above method to 40 new samples in a blinded fashion. These samples contained 26 normal cases and 14 trisomy 21 cases (as later defined by karyotyping). Normal and trisomy 21 cases were all correctly identified.

The authors conclude that they achieved 100% sensitivity and 100% specificity in 80 samples. However, the first 40 samples were used to calibrate the test, so the real validation was done in a small set of 40 samples containing only 14 trisomy cases. One can imagine that a larger sample could yield a few more false negatives or false positives. Indeed, small initial studies are likely to overestimate the true effect.
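A quick way to quantify this caveat: with 14 out of 14 trisomy cases detected, an exact one-sided 95% confidence bound still allows a considerably lower true sensitivity. (This uses the standard result that the lower bound for n successes in n trials is 0.05^(1/n).)

```python
# Why "14/14 correct" is less reassuring than "100% sensitivity" sounds:
# the exact one-sided 95% lower confidence bound for a proportion observed
# as n successes out of n trials is 0.05 ** (1 / n).
n = 14
lower_bound = 0.05 ** (1 / n)
print(round(lower_bound, 2))  # -> 0.81: the true sensitivity could be this low
```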

Furthermore, there was an overrepresentation of trisomy 21 cases (1/3 of the sample). Thus it is too soon to say that this trisomy 21 method “is to be potentially employed in the routine practice of all diagnostic laboratories and be applicable to all pregnancies”, as the authors did. To this end the method should be confirmed in larger studies and in low-risk pregnancies.

In conclusion, the relatively easy and cheap methylated DNA immunoprecipitation/real-time quantitative PCR combo test seems a promising approach to screen for Down syndrome in high-risk pregnancies. Larger studies are needed to confirm the extreme accuracy of 100% and must demonstrate the applicability to low-risk pregnancies. If confirmed, this blood test could eliminate the need for invasive procedures. Another positive aspect is that the test can be performed early, from the 11th week of gestation, and the results can be obtained within 4-5 days. Moreover, the researchers can easily adapt the current technique to demonstrate abnormal numbers (aneuploidy) of chromosomes 13, 18, X and Y.


  • Lo, Y., Corbetta, N., Chamberlain, P., Rai, V., Sargent, I., Redman, C., & Wainscoat, J. (1997). Presence of fetal DNA in maternal plasma and serum. The Lancet, 350 (9076), 485-487. DOI: 10.1016/S0140-6736(97)02174-0
  • Chiu, R., Akolekar, R., Zheng, Y., Leung, T., Sun, H., Chan, K., Lun, F., Go, A., Lau, E., To, W., Leung, W., Tang, R., Au-Yeung, S., Lam, H., Kung, Y., Zhang, X., van Vugt, J., Minekawa, R., Tang, M., Wang, J., Oudejans, C., Lau, T., Nicolaides, K., & Lo, Y. (2011). Non-invasive prenatal assessment of trisomy 21 by multiplexed maternal plasma DNA sequencing: large scale validity study BMJ, 342 (jan11 1) DOI: 10.1136/bmj.c7401
  • Papageorgiou, E., Karagrigoriou, A., Tsaliki, E., Velissariou, V., Carter, N., & Patsalis, P. (2011). Fetal-specific DNA methylation ratio permits noninvasive prenatal diagnosis of trisomy 21 Nature Medicine DOI: 10.1038/nm.2312


Stories [9]: A Healthy Volunteer

20 09 2010

The host of the next Grand Rounds (Pallimed) asked bloggers to submit a recent blog post from another blogger in addition to their own post.
I chose “Orthostatics – one more time” from DB’s Medical Rants and a post commenting on it from Musings of a Dinosaur.

Bob Centor’s (@medrants) post was about the value of orthostatic vital sign measurements (I won’t go into any details here), and about who should be doing them, nurses or doctors. In his post, he also mentioned briefly that students see this as scut work, similar to drawing your own bloods and carrying them to the lab.

That reminded me of something that happened when I was working in the lab as a PhD student, 20 years ago.

I was working on a chromosomal translocation between chromosomes 14 and 18 (see Fig).

The t(14;18) is THE hallmark of follicular lymphoma (a B-cell cancer of the lymph nodes).

This chromosomal translocation is caused by a faulty coupling of an immunoglobulin gene to the BCL-2 proto-oncogene during the normal rearrangement process of the immunoglobulin genes in pre-B cells.

This t(14;18) translocation can be detected by genetic techniques, such as PCR.

Using PCR, we found that the t(14;18) translocation was not only present in follicular lymphoma, but also in benign hyperplasia of tonsils and lymph nodes in otherwise healthy persons. Just one in 100,000 cells was positive. When I finally succeeded in sequencing the PCR-amplified breakpoints, we could show that each breakpoint was unique and not due to contamination by our positive control (read my posts on XMRV to see why this is important).

So we had a paper. Together with experiments in transgenic mice, our results hinted that the t(14;18) translocation is necessary but not sufficient for follicular lymphoma. Enhanced expression of BCL-2 might make the cells with the translocation “immortal”.

All fine, but hyperplastic tonsils might still form an exception, since they are not completely normal. We reasoned that if the t(14;18) was an accidental mistake in pre-B cells, it might sometimes be found in normal B cells in the blood too.

But then we needed normal blood from healthy individuals.

At the blood bank we could only get pooled blood at that time. But that wasn’t suitable, because a translocation present in one individual would be diluted by the blood of the others.

So, as was quite common then, we asked our colleagues to donate some blood.

The entire procedure was cumbersome: a technician first had to enrich for T and B cells, we had to separate the cells by FACS, and I would then PCR and sequence them.

The PCR and sequencing techniques had to be adapted, because the frequency of positive cells was lower than in the tonsils and approached the detection limit… at least in most people. But not in all. One of our colleagues had relatively prominent bands, and several breakpoints.

It was explained to him that this meant nothing, really, because we found similar translocations in every healthy person.

But still, I wouldn’t feel 100% sure if so many of my blood cells (one in 1,000 or 10,000) contained t(14;18) translocations.

He was one of the first volunteers we tested, but from then on it was decided to test only anonymous persons.


Silly Sunday [33] Science, Journalists & Reporting

12 09 2010

On Friday I read a post by David Bradley at Sciscoop Science on six reasons why scientists should talk to reporters, which was based on an article in this week’s The Scientist magazine by Edyta Zielinska (registration required).

The main reasons why scientists should talk to reporters:

  • It’s your duty
  • It raises your profile with journal editors and funders
  • Your bosses will love it
  • You may pick up grant-writing tips
  • It gets the public excited about science
  • It’s better you than someone else

But the strongest part of the Zielinska article is the practical tips, which fall into 3 categories:

  • the medium matters (i.e. tv versus print)
  • getting the most out of a press call (KISS, significance metaphors)
  • Common press pitfalls, and how to avoid them (avoid oversimplification, errors, jargon, misquotes, sensational stories)

The article concludes with a useful glossary. Read more: Why Trust A Reporter? – The Scientist

Alan Dangour has experienced what may happen when you report scientific evidence which is then covered by the news.

He and his group published systematic reviews that found no evidence of any important differences in the nutritional composition of foodstuffs grown using conventional and organic farming methods. There was also no evidence of nutrition-related health benefits from consuming organically produced foods.

The press quickly picked up on the story. The Times ran a front-page headline: “Organic food ‘has no extra health benefits’ ”, the Daily Express added “Official” while, in a wonderfully nuanced piece, the Daily Mail ran: “A cancerous conspiracy to poison your faith in organic food”.

Initially it was “tremendously exciting and flattering”, but their findings were contrary to beliefs held by many, and soon the hate mail started flooding in. That’s why he concludes: “Come on scientists, stand up and fight!” when not only the scientific evidence is called into question, but also your scientific skills and your personal and professional integrity. Quite appropriately, a Lancet editorial put it like this: “Eat the emotion but question the evidence”.

Journalists can also be the target of hate mail or aggressive comments. In the whole XMRV-CFS torrent, patients seem to almost “adore” positive journalists (i.e. Amy Dockser Marcus of the WSJ Health Blog), while harassing those who are a bit more critical, like @elmarveerman of Noorderlicht, author of “tiring viruses”. It has caused another journalist (who wrote about the same topic) to stop, because people hurled curses at her. A good discussion is fine, but unfounded criticism is not, she reasoned.

Last  week, 2 other articles emphasized the need for science journalism to change.

One article, by Matthew Nisbet at Big Think, elaborated on what Alice Bell calls “upstream science journalism.” Her blog post is based on her talk at Science Online London, part of a plenary panel with David Dobbs, Martin Robbins and Ed Yong on “Rebooting” (aka the future of) science journalism (video, of bad quality, included).

Upstream, we have the early stages of communication about some area of science: meetings, literature reviews or general lab gossip. Gradually these ideas are worked through, and the communicative output flows downstream towards the peer-reviewed and published journal article and perhaps, via a press release and maybe even a press conference, some mass media reporting.

This is still pretty vague to me. I think fewer pushed press releases copied by each and every news source, and more background stories giving insight into how science comes about and what it represents, would be welcome. As long as it isn’t too much like glorification of certain personalities. (More) gossip is also not what we’re waiting for.

Her examples and the interesting discussion that follows clarify that she thinks more of blogs and twitter as tools propelling upstream science journalism.

One main objection (or rather limitation) is that: “most science journalists/writers cover whatever they find interesting and what they believe their readers will find interesting (Ian Sample in comments).”

David Ropeik:

Wonderful goal, to have journalism serve society in this, or any way, but, forgive me, it’s a naive hope, common among those who observe journalism but haven’t done it.(…..)
Even those of us who feel journalism is a calling and serves an important civic role do not see ourselves principally as teachers or civil servants working in the name of some higher social cause, to educate the public about stuff we thought they should know. We want the lead story. We want our work to get attention. We want to have impact, sure, hopefully positive. But we don’t come into work everyday asking “what should the public know about?”

That’s reality. John Fleck (journalist) agrees that the need to “get a lot of attention” is a driving force in newsroom culture and decision-making, but stresses that the newspapers he worked for have always devoted a portion of their resources to things managers felt were important even if not attention-getting.

So is the truth somewhere in the middle?

Another blog post, at Jay Rosen’s Public Notebook, gives advice to the journalists “formerly known as the media”. Apart from advice such as “you need to be blogging” and “you need to ‘get’ mobile”, he wants the next generation of journalists to understand:

  1. Replace readers, viewers, listeners and consumers with the term “users.”
  2. Remember: the users know more than you do.
  3. There’s been a power shift; the mutualization of journalism is here. We bring important things to the table, and so do the users. Therefore we include them. “Seeing people as a public” means that.
  4. Describe the world in a way that helps people participate in it.  When people participate, they seek out information.
  5. Anyone can doesn’t mean everyone will. (…) It’s an emerging rule of thumb that suggests that if you get a group of 100 people online then one will create content, 10 will ‘interact’ with it (commenting or offering improvements) and the other 89 will just view it… So what’s the conclusion? Only that you shouldn’t expect too much online.
  6. The journalist is just a heightened case of an informed citizen, not a special class.
  7. Your authority starts with, “I’m there, you’re not, let me tell you about it.”
  8. Somehow, you need to listen to demand and give people what they have no way to demand (…) because they don’t know about it yet
  9. In your bid to be trusted, don’t take the View From Nowhere; instead, tell people where you’re coming from.
  10. Breathe deeply of what DeTocqueville said: “Newspapers make associations and associations make newspapers.”

I think those are useful and practical tips, some of which fit in with the idea of more upstream journalism.

O.k., that’s enough for now. We have been pretty serious on the topic. But this is a Friday Fun/Silly Sunday post. So bring in the comics.

These are self-explanatory, aren’t they?

(HT: David Bradley and commenter on Facebook. Can’t find it anymore. Facebook is hard to search)

From SMBC comics:




FDA to Regulate Genetic Testing by DTC-Companies Like 23andMe

14 06 2010

Direct-to-consumer (DTC) genetic testing refers to genetic tests that are marketed directly to consumers via television, print advertisements, or the Internet. This form of testing, which is also known as at-home genetic testing, provides access to a person’s genetic information without necessarily involving a doctor or insurance company in the process. [definition from NLM’s Genetic Home Reference Handbook]

Almost two years ago I wrote about 23andMe (23andMe: 23notMe, not yet), a well-known DTC company that offers a genetic scan (SNP genotyping) to the public ‘for research’, ‘for education’ and ‘for fun’:

“Formally 23andMe denies there is a diagnostic purpose (in part, surely, because the company doesn’t want to antagonize the FDA, which strictly regulates diagnostic testing for disease). However, 23andMe does give information on your risk profile for certain diseases, including Parkinson’s.”

In another post, Personalized Genetics: Too Soon, Too Little?, I summarized an editorial by Ioannidis on the topic. His (and my) conclusion was that “the promise of personalized genetic prediction may be exaggerated and premature”. The most important issue is that the predictive power to individualize risks is relatively weak. Ioannidis emphasized that despite the poor evidence, direct-to-consumer genetic testing has already begun and is here to stay. He proposed several safeguards, including transparent and thorough reporting, unbiased continuous synthesis and grading of the evidence, and alerting the public that most genetic tests have not yet been shown to be clinically useful.

And now these “precautionary measures” actually seem to be happening.
Last week the FDA sent 5 DTC companies, including 23andMe, a letter saying “their tests are medical devices that must receive regulatory approval before they can be marketed” (i.e. see NY Times article).

Alberto Gutierrez, who leads diagnostic test regulation at the FDA, wrote in the letters:

“Premarket review allows for an independent and unbiased assessment of a diagnostic test’s ability to generate test results that can reliably be used to support good health care decisions,”

“These letters are part of an initiative to better explain the FDA’s actions by providing information that supports clinical medicine, biomedical innovation, and public health.” (May 19 New England Journal of Medicine commentary; source: see AMED-news)

Although it doesn’t look like the tests will be taken off the market, 23andMe takes quite a rebellious attitude: one of its directors called the FDA “appallingly paternalistic.”

Many support this view: “people have the right to know their own genetic make-up”, so to say. Furthermore as discussed above, 23andMe denies that their genetic scans are meant for diagnosis.

In my view the latter is largely untrue. At the least, 23andMe suggests that a scan does tell you something about your risks for certain diseases.
However, the risks are often not that straightforward. You just can’t “measure” the risk of a multifactorial disease like diabetes by “scanning” a few weakly predisposing genes. Often the results are given as relative risks, which is highly confusing. In her TED talk, 23andMe director Anne Wojcicki said her husband Sergey Brin (Google) had a 50% chance of getting Parkinson’s, but his relative risk (RR, based on the LRRK2 mutation, which isn’t the most crucial gene for getting Parkinson’s) varies from 20% to 80%, which means that this mutation increases his absolute risk of getting Parkinson’s from 2-5% (the normal chance) to 4-10% at the most (see this post).
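The arithmetic behind that last step is simply that a relative risk multiplies the baseline absolute risk (illustrative numbers from the paragraph above, not an authoritative risk model):

```python
# Why a "doubled risk" can still be small: a relative risk multiplies
# the baseline absolute risk. Numbers are illustrative, taken from the
# rough 2-5% baseline mentioned in the post.
def absolute_risk(baseline, relative_risk):
    """Absolute risk after applying a relative risk to a baseline."""
    return baseline * relative_risk

# Baseline lifetime risk of ~2-5%, roughly doubled by the mutation:
print(absolute_risk(0.02, 2))  # -> 0.04, i.e. 4%
print(absolute_risk(0.05, 2))  # -> 0.1, i.e. 10%
```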

Furthermore, as reported by Venter in Nature (October 8, 2009): for seven diseases, 50% or less of the predictions of two companies agreed across five individuals (i.e. for one disease: 23andMe RR 4.02 vs. Navigenics RR 1.25). On the other hand, *fun* diagnoses could lead to serious concern, or to wrong/unnecessary decisions (removal of ovaries, changing drug doses) by patients.

There are also concerns with regard to their good-practice standards, as 23andMe recently flipped a 96-well plate of customer DNA (see Genetic Future for a balanced post), which upset a mother who noticed that her son didn’t have compatible genes. But let’s assume that proper precautions will prevent this from happening again.

There are also positive aspects: results of a preliminary study showed that people who find out they have a high genetic risk for cardiovascular disease are more likely to change their diet and exercise patterns than those who learn they have a high risk from family history (Technology Review: Genetic Testing Can Change Behavior).

Furthermore, people buy these tests themselves and, indeed, their genes are their own.

However, I agree with Dr. Gutierrez of the FDA, who said: “We really don’t have any issues with denying people information. We just want to make sure the information they are given is correct.” (NY Times). The FDA is putting the consumers first.

However, it will be very difficult to be consistent. What about total body scans in normal healthy people, detecting innocent incidentalomas? Or the controversial XMRV tests offered by the Whittemore Peterson Institute (WPI) directly to CFS patients? (see these posts) And one step further (although not in the diagnostic field): the ineffective CAM/homeopathic products sold over the counter?

I wouldn’t mind if these tests/products were held up to the light. Consumers should not be misled by the results of unproven or invalid tests, and where needed should be offered the guidance of a healthcare provider.

But if tests are valid and risk predictions correct, it is up to the “consumer” if he/she wants to purchase such a test.


“What Five FDA Letters Mean for the Future of DTC Genetic Testing” at Genomics Law Report is highly recommended, but couldn’t be accessed while writing this post.

[Added: 2010-06-14 13.10]

  • Problem accessing Genomics Law Report is resolved.
  • Also recommended: the post “FDA to regulate genetic tests as “devices”” at PHG Foundation. This post highlights that simply trying to classify the complete genomic testing service as “a device” is inadequate and will not address the difficult issues at hand. One of the biggest issues is that, while classifying DTC genetic tests as devices is certainly appropriate for assessing their analytical validity and direct safety, it does not and cannot provide an assessment of the service, and thus of the predictions and interpretations resulting from the genome scans. Although standard medical testing has traditionally been overseen by professional medical bodies, the current genomic risk-profiling tests are simply not good enough to be used by health care services. (see post)

Finally a Viral Cause of Chronic Fatigue Syndrome? Or Not? – How Results Can Vary and Depend on Multiple Factors

15 02 2010

Last week @F1000 (on Twitter) alerted me to an interesting discussion at F1000 about a paper in Science that linked chronic fatigue syndrome (CFS) to a newly discovered human virus, XMRV [1].

This finding was recently disputed by another study in PLoS ONE [2] that couldn’t reproduce the results. This was highlighted in an excellent post by Neuroskeptic: “Chronic Fatigue Syndrome in ‘not caused by single virus’ shock!”

Here is my take on the discrepancy.

Chronic fatigue syndrome (CFS) is a debilitating disorder with unknown etiology. CFS causes extreme fatigue of the kind that does not go  away after a rest. Symptoms of CFS include fatigue for 6 months or more and experiencing other problems such as muscle pain, memory problems, headaches, pain in multiple joints and  sleep problems. Since other illnesses can cause similar symptoms, CFS is hard to diagnose. (source: Medline Plus).

No one knows what causes CFS, but a viral cause has often been suspected, at least in a subset of CFS patients. Because the course of the disease often resembles a post-viral fatigue, CFS has also been referred to as post-viral fatigue syndrome (PVFS).

The article of Lombardi [1], published in October 2009 in Science, was a real breakthrough. The study showed that two-thirds of patients with CFS were infected with a novel gammaretrovirus, xenotropic murine leukaemia virus-related virus (XMRV). XMRV was previously linked to prostate cancer.

Lombardi et al. isolated DNA from white blood cells (peripheral blood mononuclear cells, or PBMCs) and assayed the samples for XMRV gag sequences by nested polymerase chain reaction (PCR).

PCR is a technique that allows the detection of a single copy or a few copies of target DNA by amplifying it across several orders of magnitude, generating thousands to millions of copies of a particular DNA sequence. Nested PCR amplifies the resulting amplicon several orders of magnitude further. In the first round, external primers are used (short DNA sequences that fit the outer ends of the piece of DNA to be amplified); an internal set of primers is used for the second round. Nested PCR is often used if the target DNA is not abundant, and to avoid contamination with products that are amplified as a spin-off of artifacts (other sites to which the primers happen to bind).
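The sensitivity (and contamination-proneness) of nested PCR comes from stacking two rounds of near-exponential amplification. A toy calculation (cycle counts and per-cycle efficiency are assumed for illustration, not taken from the Lombardi protocol):

```python
# Toy model of why nested PCR is so sensitive and so contamination-prone:
# two rounds of roughly exponential amplification turn a single stray
# molecule into an astronomical number of copies.
def pcr_copies(start_copies, cycles, efficiency=0.9):
    """Copies after one PCR round, assuming ~90% per-cycle efficiency."""
    return start_copies * (1 + efficiency) ** cycles

first_round = pcr_copies(1, 30)             # a single target molecule
second_round = pcr_copies(first_round, 30)  # re-amplify with internal primers
print(second_round > 1e15)  # prints True: >10^15 copies from one molecule
```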

[I used a similar approach 15-20 years ago to identify a lymphoma-characteristic translocation in tonsils and purified B cells of (otherwise) healthy individuals. By direct sequencing I could prove that each sequence was unique in its breakpoint, thereby excluding that the PCR products arose by contamination with an amplified positive control. All tumor cells carried the translocation, against one in 100,000 to 1,000,000 normal cells. To reach this detection limit in healthy individuals, the B cells first had to be purified by FACS.]

Lombardi et al. could detect XMRV gag DNA in 68 of 101 patients (67%), as compared to 8 of 218 (3.7%) healthy controls. Detection of both gag and env XMRV sequences was confirmed in 7 of 11 CFS samples at the Cleveland Clinic (remarkably, only these are shown in Fig 1A of the paper, not the original PCR results). Of the 11 healthy control DNA samples analyzed by PCR, only one was positive for gag and none for env. The XMRV gag and env sequences were more than 99% similar to those previously reported for prostate tumor-associated strains of XMRV. The authors see this as proof against contamination of samples with prostate cancer-associated XMRV DNA.
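For perspective on how large the reported difference in detection rates is, a back-of-the-envelope two-proportion z-test (my own calculation, not from the paper) can be run on the published counts:

```python
# Back-of-the-envelope two-proportion z-test comparing the XMRV detection
# rates reported by Lombardi et al.: 68/101 CFS patients vs 8/218 controls.
import math

def two_proportion_z(x1, n1, x2, n2):
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)            # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

z = two_proportion_z(68, 101, 8, 218)
print(f"z = {z:.1f}")   # around 12 -- far beyond any conventional threshold
```

A z-score of this magnitude means the difference cannot be sampling noise: either the association is real, or something systematic (such as contamination affecting patient samples preferentially) separates the two groups.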

PCR was not the only evidence. Using intracellular flow cytometry and Western blot assays, XMRV proteins were found to be expressed in PBMCs from CFS patients. CFS patients had anti-XMRV antibodies, and cell culture experiments revealed that patient-derived XMRV was infectious. These findings are consistent with, but do not prove, that XMRV is a contributing factor in the pathogenesis of CFS; XMRV might just be an innocent bystander. Notably, unlike in XMRV-positive prostate cancer cells, XMRV infection status did not correlate with the RNASEL genotype.

The Erlwein study [2] was published within 3 months of the first article. It is much simpler in design: DNA was extracted from whole blood (not purified white blood cells) and subjected to a nested PCR using a different set of primers. The positive control was an end-point dilution of the XMRV plasmid; water served as a negative control. None of the 186 CFS samples was positive.

The question then is: which study is right? (Although it should be stressed that the Science paper only shows a link between the virus and CFS, not a causal relationship.)

Regional Differences

Both findings could be “real” if there was a regional difference in occurrence of the virus. Indeed XMRV has previously been detected in prostate cancer cells from American patients, but not from German and Irish patients.

Conflict of Interest

Lombardi’s clinic [1] offers a $650 diagnostic test to detect XMRV, so it is of real advantage to the authors of the first paper if CFS samples test positive for the virus. On the other hand, Prof. Simon Wessely of the second paper has built his career on the hypothesis that CFS is a form of psychoneurosis that should be treated with cognitive behavior therapy. The presence of a viral (biological) cause would not fit this view.

Shortcomings of the Lombardi-article [1]

Both studies used nested PCR to detect XMRV. Because of its enormous amplification potential, PCR can easily lead to contamination (for instance with the positive control) and thus to false-positive results. Indeed, it is very easy to carry contamination from an undiluted positive sample into a weakly positive or negative sample.

Charles Chiu, who belongs to the group that detected XMRV in a specific kind of hereditary prostate cancer, puts it like this [5]:

In their Dissenting Opinion of this article, Moore and Shuda raise valid concerns regarding the potential for PCR contamination in this study. Some concerns include 1) the criteria for defining CFS/ME in the patients and in controls were not explicitly defined, 2) nested PCR was used and neither in a blinded nor randomized fashion, 3) the remarkable lack of diversity in the six fully sequenced XMRV genomes (<6 nucleotide average difference across genome) — with Fig. S1 even showing that for one fully sequenced isolate two of the single nucleotide differences were “N’s” — clearly the result of a sequencing error, 4) failure to use Southern blotting to confirm PCR results, and 5) primary nested PCR screening done in one lab as opposed to independent screening from start to finish in two different laboratories. Concerns have also been brought up with respect to the antigen testing

Shortcomings of the Erlwein-article [2]

Many people have objected that the populations of CFS patients are not the same in the two studies. Admittedly, CFS is difficult enough to diagnose (diagnosis is by exclusion), but according to many commenters on the PLoS study there was a clear bias towards more depressed patients, in whom a biological agent is less likely to be the cause of the disease. In contrast, the US patients had all kinds of physical constraints and immunological problems.

The review process was also far less stringent: 3 days versus several months.

The PLoS study might have suffered from the opposite of contamination: failure to amplify rare XMRV DNA in the CFS samples. This is not improbable. The Erlwein group did not purify the blood cells, used other primers, amplified another sequence, and did not test DNA of normal individuals. The positive control was diluted in water, not in human DNA, and the negative control was water.

Omitting cell purification can lead to a lower relative amount of XMRV DNA, or to inhibition of the PCR (as is often seen with unpurified samples). Furthermore, the gel results seem of poor quality (see Fig 2). The second round of the positive PCR sample results in an overloaded lane with too many nonspecific bands (lane 9), whereas the first round gives only a very vague low-molecular-weight band (lane 10). True, the CFS samples also ran two rounds, but why aren't the nonspecific bands seen there? It would have been better to use a tenfold titration of the positive control in human DNA (a more realistic imitation of the CFS samples: (possibly) a rare piece of XMRV DNA mixed with genomic DNA) and to use normal DNA, not water, as the negative control.

Another point is that the normal XMRV incidence of 1-3.7% in healthy controls is not reached in the PLoS study, although this could be a matter of chance (1 out of 100).
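That "matter of chance" remark can be checked with a simple binomial calculation (my own back-of-the-envelope, not from either paper): the probability of seeing zero positives among 186 samples at a background incidence p is (1 − p)^186.

```python
# Probability of finding ZERO XMRV positives among 186 samples if the true
# background incidence in healthy people is p (back-of-the-envelope check).
def prob_all_negative(p, n=186):
    return (1 - p) ** n

for p in (0.01, 0.037):
    print(f"incidence {p:.1%}: P(0/{186} positive) = {prob_all_negative(p):.4f}")
```

At a 1% background incidence this comes out around 15%, so an all-negative run is quite plausible by chance alone; at 3.7% it drops below 0.1%, which would make chance an unlikely explanation and point back at a methodological difference.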

Further Studies

Anyway, we can philosophize, but the answer must await further studies. There are several ongoing efforts.


  1. Lombardi VC, Ruscetti FW, Das Gupta J, Pfost MA, Hagen KS, Peterson DL, Ruscetti SK, Bagni RK, Petrow-Sadowski C, Gold B, Dean M, Silverman RH, & Mikovits JA (2009). Detection of an infectious retrovirus, XMRV, in blood cells of patients with chronic fatigue syndrome. Science (New York, N.Y.), 326 (5952), 585-9 PMID: 19815723
  2. Erlwein, O., Kaye, S., McClure, M., Weber, J., Wills, G., Collier, D., Wessely, S., & Cleare, A. (2010). Failure to Detect the Novel Retrovirus XMRV in Chronic Fatigue Syndrome. PLoS ONE, 5 (1) DOI: 10.1371/journal.pone.0008519
  5. Charles Chiu: Faculty of 1000 Biology, 19 Jan 2010
