Sugary Drinks as the Culprit in Childhood Obesity? An RCT among Primary School Children

24 09 2012

ResearchBlogging.org Childhood obesity is a growing health problem. Since 1980, the proportion of overweight children has almost tripled in the USA: nowadays approximately 17% of children and adolescents are obese. (Source: cdc.gov [6])

Common sense tells me that obesity is the result of too high a calorie intake without sufficient physical activity - which is just what the CDC states. I’m not surprised that the CDC also mentions the greater availability of high-energy-dense foods and sugary drinks at home and at school as main reasons for the increased intake of calories among children.

In my teens I already realized that the sugar in sodas was just “empty calories”, and I replaced tonic and cola with low-calorie Rivella (and omitted sugar from tea). When my children were young I urged the day care to refrain from routinely giving lemonade (often in vain).

I was therefore a bit surprised to notice all the fuss in the Dutch newspapers [NRC] [7] about a new Dutch study [1] showing that sugary drinks contributed to obesity. My first reaction was “Duhhh?!…. so what?”.

Also, it bothered me that the researchers had performed an RCT (randomized controlled trial) in kids, giving one half of them sugar-sweetened drinks and the other half sugar-free drinks. “Is it ethical to perform such a scientific “experiment” on healthy kids?”, I wondered, “giving more than 300 kids 14 kilos of sugar over 18 months, without them knowing it?”

But reading the newspaper and the actual paper[1], I found that the study was very well thought out. Also ethically.

It is true that the association between sodas and weight gain has been shown before. But these studies were either observational studies, where one cannot look at the effect of sodas in isolation (kids who drink a lot of sodas often eat more junk food and watch more television, so these other lifestyle aspects may be the real culprit), or inconclusive RCTs (e.g. because of small sample sizes). Weak studies and inconclusive evidence will not convince policy makers, organizations and beverage companies (nor schools) to take action.

As explained previously in The Best Study Design… For Dummies [8], the best way to test whether an intervention has a health effect is to do a double-blind RCT, where the intervention (in this case: sugary drinks) is compared to a control (drinks with artificial sweetener instead of sugar) and where the study participants and direct researchers do not know who receives the actual intervention and who the phony one.

The study of Katan and his group[1] was a large, double blinded RCT with a long follow-up (18 months). The researchers recruited 641 normal-weight schoolchildren from 8 primary schools.

Importantly, only children who normally drank sugared drinks at school were included in the study (see announcement in Dutch). Thus participation in the trial only meant that half of the children received less sugar during the study period. The researchers would have preferred drinking water as a control, but to ensure that the sugar-free and sugar-containing drinks tasted and looked essentially the same they used an artificial sweetener as a control.

The children drank 8 ounces (250 ml) of a 104-calorie sugar-sweetened or a no-calorie sugar-free fruit-flavoured drink every day for 18 months. Compliance was good: children who drank the artificially sweetened beverages had the expected level of urinary sucralose (sweetener).

At the end of the study the kids in the sugar-free group had gained a kilo less weight than their peers. They also had a significantly lower BMI increase and gained less body fat.

Thus, according to Katan in the Dutch newspaper NRC [7], “it is time to get rid of the beverage vending machines”.

But does this research really support that conclusion and does it, as some headlines state [9]: “powerfully strengthen the case against soda and other sugary drinks as culprits in the obesity epidemic?”

Rereading the paper, I wondered why this study was performed.

If the trial was meant to find out whether putting children on artificially sweetened beverages (instead of sugary drinks) would lead to less fat gain, then why didn’t the researchers do an intention-to-treat (ITT) analysis? In an ITT analysis, trial participants are compared, in terms of their final results, according to the groups to which they were initially randomized. This permits a pragmatic evaluation of the benefit of a treatment policy.
Suppose there were more dropouts in the intervention group; that might indicate that people had a reason not to adhere to the treatment. Indeed there were many dropouts overall: 26% of the children stopped consuming the drinks (29% in the sugar-free group versus 22% in the sugar group).
Interestingly, the majority of the children who stopped did so because they no longer liked the drink (68/94 versus 45/70 dropouts in the sugar-free versus the sugar group).
And the proportion of children who correctly guessed that their drinks were artificially sweetened was 21% higher than expected by chance (whereas correct identification was 3% lower in the sugar group).
Did some children stop using the non-sugary drinks because they found the taste less pleasant than usual, or artificial? Perhaps.

This might indicate that replacing sugary drinks with artificially sweetened drinks is less effective in “practice”.
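The contrast between an intention-to-treat comparison and a per-protocol (completers-only) comparison can be sketched in a few lines of Python. Everything here is a made-up simulation: the weight-gain means, SDs and group sizes are invented, and only the dropout rates echo the 29% and 22% mentioned above.

```python
import random

random.seed(1)

# Hypothetical trial: all means, SDs and group sizes below are invented;
# only the dropout rates (29% vs 22%) echo the figures reported above.
def simulate_arm(n, mean_gain_kg, dropout_rate):
    """Return (all_randomized, completers_only) weight gains for one arm."""
    randomized, completers = [], []
    for _ in range(n):
        gain = random.gauss(mean_gain_kg, 1.0)
        randomized.append(gain)          # ITT: everyone randomized counts
        if random.random() > dropout_rate:
            completers.append(gain)      # per-protocol: only adherers count
    return randomized, completers

def mean(xs):
    return sum(xs) / len(xs)

itt_sf, pp_sf = simulate_arm(100, 6.35, dropout_rate=0.29)  # sugar-free arm
itt_su, pp_su = simulate_arm(100, 7.35, dropout_rate=0.22)  # sugar arm

itt_effect = mean(itt_su) - mean(itt_sf)  # compares groups as randomized
pp_effect = mean(pp_su) - mean(pp_sf)     # ignores the informative dropout
```

When dropout is related to the treatment (as the taste data above suggest), the per-protocol estimate can drift away from what a real-world policy would achieve; the ITT estimate keeps the benefit of randomization.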

Indeed, most of the effect on the main outcome, the difference in BMI z score (the number of standard deviations by which a child differs from the mean in the Netherlands for his or her age and sex), was strongest after 6 months and faded after 12 months.
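For readers unfamiliar with the outcome measure: the BMI z score is just a standardisation of a child's BMI against age- and sex-specific reference values. A minimal sketch (the reference mean and SD are invented for illustration, not actual Dutch growth-chart figures):

```python
def bmi_z_score(bmi, ref_mean, ref_sd):
    """Number of standard deviations a child's BMI lies above or below
    the reference mean for his or her age and sex."""
    return (bmi - ref_mean) / ref_sd

# Hypothetical 8-year-old: BMI 17.2 against an assumed reference of 16.0 ± 1.5
z = bmi_z_score(17.2, ref_mean=16.0, ref_sd=1.5)  # 0.8 SD above the mean
```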

Mind you, the researchers neatly corrected for the missing data by multiple imputation. As long as the children participated in the study, their changes in body weight and fat paralleled those of children who finished the study. However, the positive effect of the earlier use of non-sugary drinks faded in children who went back to drinking sugary drinks. This is not unexpected, but it underlines the point I raised above: the effect may be less drastic in the “real world”.
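Multiple imputation, in a nutshell: fill each missing value several times with a plausible draw, analyse each completed dataset, and pool the estimates. A toy sketch with invented numbers (real multiple imputation draws from a fitted model and also pools the variances, following Rubin's rules):

```python
import random

random.seed(42)

# Toy data: BMI-z changes for 8 completers (invented); 3 children dropped out.
observed = [0.05, -0.10, 0.20, 0.00, 0.15, -0.05, 0.10, 0.05]
n_missing = 3
n_imputations = 20

# Build several completed datasets, filling each gap with a plausible draw,
# then pool the per-dataset estimates.
pooled_means = []
for _ in range(n_imputations):
    filled = observed + [random.choice(observed) for _ in range(n_missing)]
    pooled_means.append(sum(filled) / len(filled))

mi_estimate = sum(pooled_means) / len(pooled_means)
```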

Another (smaller) RCT, published in the same issue of the NEJM [2] (editorial in [4]), aimed to test the effect of an intervention to cut the intake of sugary drinks in obese adolescents. The intervention (home delivery of bottled water and diet drinks for one year) led to a significant reduction in mean BMI (body mass index), but not in percentage body fat, especially in Hispanic adolescents. However, at one-year follow-up (thus one year after the intervention had stopped) the differences between the groups had evaporated again.

But perhaps the trial was “just” meant as a biological-physiological experiment, as Hans van Maanen suggested in his critical response in de Volkskrant [10].

Indeed, the data actually show that sugar in drinks can lead to a greater increase in obesity-related parameters (and vice versa), avoiding the endless fructose-glucose debate [11].

In the media, Katan stresses the mechanistic aspects too. He claims that children who drank the sweetened drinks didn’t compensate for the lower intake of sugars by eating more. In the NY Times he is cited as follows [12]: “When you change the intake of liquid calories, you don’t get the effect that you get when you skip breakfast and then compensate with a larger lunch…”

This seems a logical explanation, but I can’t find any substantiation in the article.

Still, “food intake of the children at lunch time, shortly after the morning break when the children have consumed the study drinks” was a secondary outcome in the original protocol!! (See the nice comparison of the two most disparate descriptions of the trial design at clinicaltrials.gov [5], partly shown in the figure below.)

“Energy intake during lunchtime” was later replaced by a “sensory evaluation” (with questions like: “How satiated do you feel?”). The results, however, were not reported in the current paper. The same holds for a questionnaire about dental health.

Looking at the two protocol versions I saw other striking differences. In the 2009_05_28 version, the primary outcomes of the study are the children’s body weight (BMI z score), waist circumference (later replaced by waist-to-height ratio), skin folds and bioelectrical impedance.
The latter three became secondary outcomes in the final draft. Why?

Click to enlarge (source Clinicaltrials.gov [5])

It is funny that although the main outcome is the BMI z score, the authors mainly discuss the effects on body weight and body fat in the media (but perhaps these are better understood by the audience).

Furthermore, the effect on weight is less than expected: 1 kilo instead of 2.3 kilos. And only part of it is accounted for by loss of body fat: −0.55 kilo of fat as measured by electrical impedance and −0.35 kilo as measured by changes in skinfold thickness. The standard deviations are enormous.

Look for instance at the primary end point (BMI z score) at 0 and 18 months in both groups. The change in this period is what counts. The difference in change between the two groups from baseline is −0.13, with a P value of 0.001.

(data are based on the full cohort, with imputed data, taken from Table 2)

Sugar-free group : 0.06±1.00  [0 Mo]  –> 0.08±0.99 [18 Mo] : change = 0.02±0.41  

Sugar-group: 0.01±1.04  [0 Mo]  –> 0.15±1.06 [18 Mo] : change = 0.15±0.42 

Difference in change from baseline: −0.13 (−0.21 to −0.05) P = 0.001
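As a sanity check, the difference in change can be reproduced from these Table 2 figures. The confidence interval below is a naive two-sample approximation assuming roughly 320 children per group (my assumption), so it comes out a bit narrower than the paper's multiply-imputed −0.21 to −0.05:

```python
from math import sqrt

# Changes in BMI z score over 18 months (Table 2, full imputed cohort)
change_sugarfree, sd_sugarfree = 0.02, 0.41
change_sugar, sd_sugar = 0.15, 0.42
n_per_group = 320  # assumption: 641 children split roughly in half

diff = change_sugarfree - change_sugar  # -0.13, as reported

# Naive two-sample 95% CI; narrower than the paper's because multiple
# imputation adds between-imputation uncertainty to the standard error.
se = sqrt(sd_sugarfree**2 / n_per_group + sd_sugar**2 / n_per_group)
ci = (diff - 1.96 * se, diff + 1.96 * se)
```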

Looking at these data I’m impressed by the standard deviations (replaced by standard errors in the somewhat nicer-looking fig 3). What does a value of 0.01 ± 1.04 represent? There is a lot of variation (even though the BMI z score is corrected for age and sex). Although no statistical differences were found for the baseline values between the groups, the “eyeball test” tells me the sugar group has a slight “advantage”: they seem to start with slightly lower baseline values (overall, except for body weight).

Anyway, the changes are significant… But significance isn’t the same as relevance.

At a second look, the data are less impressive than the media reports suggest.

Another important point, raised by van Maanen [10], is that the children’s weight increased more in this study than in the normal Dutch population: 6-7 kilos instead of 3 kilos.

In conclusion, the study by the group of Katan et al. is a large, unique randomized trial that looked at the effects of replacing sugar in drinks consumed by healthy schoolchildren with artificial sweeteners. An effect was noticed on several “obesity-related parameters”, but the effects were not large and possibly don’t last after discontinuation of the trial.

It is important that a single factor, the sugar component in beverages, is tested in isolation. This shows that sugar itself “does matter”. However, the trial does not show that sugary drinks are the main obesity factor in childhood (as suggested in some media reports).

It is clear that the investigators feel very engaged, they really want to tackle the childhood obesity problem. But they should separate the scientific findings from common sense.

The cans fabricated for this trial were registered under the trade name Blikkie (Dutch for “little can”). This was to make sure that the drinks would never be sold by smart business guys using the slogan: “cans which have scientifically been proven to help keep your child lean and healthy”. [NRC]

Still, soft-drink stakeholders may well argue that low-calorie drinks are just fine and that curbing sodas is not the magic bullet.

But it is a good start, I think.

Photo credits Cola & Obesity: Melliegrunt Flickr [CC]

  1. de Ruyter JC, Olthof MR, Seidell JC, & Katan MB (2012). A Trial of Sugar-free or Sugar-Sweetened Beverages and Body Weight in Children. The New England journal of medicine PMID: 22998340
  2. Ebbeling CB, Feldman HA, Chomitz VR, Antonelli TA, Gortmaker SL, Osganian SK, & Ludwig DS (2012). A Randomized Trial of Sugar-Sweetened Beverages and Adolescent Body Weight. The New England journal of medicine PMID: 22998339
  3. Qi Q, Chu AY, Kang JH, Jensen MK, Curhan GC, Pasquale LR, Ridker PM, Hunter DJ, Willett WC, Rimm EB, Chasman DI, Hu FB, & Qi L (2012). Sugar-Sweetened Beverages and Genetic Risk of Obesity. The New England journal of medicine PMID: 22998338
  4. Caprio S (2012). Calories from Soft Drinks – Do They Matter? The New England journal of medicine PMID: 22998341
  5. Changes to the protocol http://clinicaltrials.gov/archive/NCT00893529/2011_02_24/changes
  6. Overweight and Obesity: Childhood obesity facts  and A growing problem (www.cdc.gov)
  7. Wim Köhler. Eén kilo lichter. NRC | Zaterdag 22-09-2012 (http://archief.nrc.nl/)
  8.  The Best Study Design… For Dummies (http://laikaspoetnik.wordpress.com)
  9. Studies point to sugary drinks as culprits in childhood obesity – CTV News (ctvnews.ca)
  10. Hans van Maanen. Suiker uit fris, De Volkskrant, 29 september 2012 (freely accessible at http://www.vanmaanen.org/)
  11. Sugar-Sweetened Beverages, Diet Coke & Health. Part I. (http://laikaspoetnik.wordpress.com)
  12. Roni Caryn Rabina. Avoiding Sugared Drinks Limits Weight Gain in Two Studies. New York Times, September 21, 2012




Three Studies Now Refute the Presence of XMRV in Chronic Fatigue Syndrome (CFS)

27 04 2010

ResearchBlogging.org “Removing the doubt is part of the cure” (RedLabs)

Two months ago I wrote about two contradictory studies on the presence of the novel XMRV retrovirus in blood of patients with Chronic Fatigue Syndrome (CFS).

The first study, published in autumn last year by investigators of the Whittemore Peterson Institute (WPI) in the USA [1], claimed to find XMRV virus in peripheral blood mononuclear cells (PBMC) of patients with CFS. They used PCR and several other techniques.

A second study, performed in the UK [2] failed to show any XMRV-virus in peripheral blood of CFS patients.

Now there are two other negative studies, one from the UK [3] and one from the Netherlands [4].

Does this mean that XMRV is NOT present in CFS patients?

No, different results may still be due to different experimental conditions and patient characteristics.

The discrepancies between the studies discussed in the previous post remain, but there are new insights, that I would like to share.*

1. Conflict of Interest, bias

Most CFS patients seem “to go for” WPI, because WPI, established by the family of a chronic fatigue patient, has a commitment to CFS. CFS patients feel that many psychiatrists, including authors of the negative papers [2-4], dismiss CFS as something “between the ears”. This explains the negative attitude against these “psych-healers” on ME forums (e.g. the Belgian forum MECVS.net and http://www.forums.aboutmecfs.org/). MECVS even has a section on “faulty/wrong” papers, e.g. about the “failure” of psychiatrists to demonstrate XMRV!

Since a viral (biological) cause would not fit in the philosophy of these psychiatrists, they might just not do their best to find the virus. Or even worse…

Dr. Mikovits, co-author of the first paper [1] and Director of Research at WPI, even responded to the first UK study as follows (ERV and Prohealth):

“You can’t claim to replicate a study if you don’t do a single thing that we did in our study,” …
“They skewed their experimental design in order to not find XMRV in the blood.” (emphasis mine)

Mikovits also suggested that insurance companies in the UK are behind attempts to sully their findings (ERV).

Such personal attacks are “not done” in science. And certainly not via this route.

Furthermore, WPI has its own bias.

For one thing WPI is dependent on CFS and other neuro-immune patients for its existence.

WPI has generated numerous press releases and doesn’t seem to use the normal scientific channels. Mikovits presented a 1-hour Q&A session about XMRV and CFS (at a stage when nothing had been proven yet). She will also present data about XMRV at an autism meeting. There is a lot of PR going on.

Furthermore, there is an intimate link between WPI and VIP Dx, both housed in Reno. VIP Dx is licensed by WPI to provide the XMRV test. Vipdx.com links to the same site as redlabsusa.com, for VIP Dx is the new name of the former RedLabs.

Interestingly, Lombardi (the first author of the paper) co-founded RedLabs USA Inc. and served as the Director of Operations at RedLabs; Harvey Whittemore owns 100% of VIP Dx and was the company President until this year; and Mikovits is the Vice President of VIP Dx (ME-forum). They didn’t disclose this in the Science paper.


VIP Dx offers a plethora of tests, and is the only RedLabs branch that performs the WPI PCR test, now replaced by the “sensitive” culture test (see below). At this stage of the controversy, the test is sold as “a reliable diagnostic tool” (according to Prohealth). Surely their motto “Removing the doubt is part of the cure” appeals to patients. But how can doubt be removed as long as the association of XMRV with CFS has not been confirmed, the diagnostic tests offered have not yet been truly validated (see below), a causal relationship between XMRV and CFS has not been proven, and XMRV does not even seem that specific for CFS (it has also been found in people with prostate cancer, autism, atypical multiple sclerosis, fibromyalgia and lymphoma) (WSJ)?

Meanwhile, CFS/ME websites are abuzz with queries about how to obtain tests (also in Europe)… and antiretroviral drugs. Sites like Prohealth seem to advocate for WPI. There is even a commercial XMRV site (who runs it is unclear).

Project leader Mikovits, and the WPI as a whole, seem to have many contacts with CFS patients, also by mail. In one such mail she says (emphasis and [exclamations] mine):

“First of all the current diagnostic testing will define with essentially 100% accuracy! XMRV infected patients”. [Blimey!]….
We are testing the hypothesis that XMRV is to CFS as HIV is to AIDS. There are many people with HIV who don’t have AIDS (because they are getting treatment). But by definition if you have ME you must have XMRV. [doh?]
[....] There is so much that we don’t know about the virus. Recall that the first isolation of HIV was from a single AIDS patient published in late 1982 and it was not until 2 years later that it was associated with AIDS with the kind of evidence that we put into that first paper. Only a few short years later there were effective therapies. [...]. Please don’t hesitate to email me directly if you or anyone in the group has questions/concerns. To be clear..I do think even if you tested negative now that you are likely still infected with XMRV or its closest cousin..

Kind regards, Judy

These tests cost patients money, because so far even Medicare will reimburse only 15% of the PCR test. VIP Dx does donate anything above cost to XMRV research, but isn’t this an indirect way to support the WPI research? Why do patients have to pay for tests that have not been proven to be diagnostic? The test is only in the experimental phase.

I ask you: would such an attitude be tolerated from a regular pharmaceutical company?

Patients

Another discrepancy between the WPI study and the other studies is that only the WPI used the Fukuda and Canadian criteria to diagnose CFS patients. The Canadian criteria are much more rigid than those used in the European studies. This could explain why WPI found more positives than the other studies, but it can’t fully explain why WPI reports 96% positives (their recent claim) against 0% in the other studies, for at least some of the European patients should fulfill the more rigid criteria.

Regional Differences

Patients of the positive and negative studies also differ with respect to the region they come from (US and Europe). Indeed, XMRV has previously been detected in prostate cancer cells from American patients, but not from German and Irish patients.

However, the latter two reasons may not be crucial if the statement in the open letter* from Annette Whittemore, director of the WPI, to Dr McClure**, the virologist of the second paper [2], is true:

We would also like to report that WPI researchers have previously detected XMRV in patient samples from both Dr. Kerr’s and Dr. van Kuppeveld’s cohorts prior to the completion of their own studies, as they requested. We have email communication that confirms both doctors were aware of these findings before publishing their negative papers.(……)
One might begin to suspect that the discrepancy between our findings of XMRV in our patient population and patients outside of the United States, from several separate laboratories, are in part due to technical aspects of the testing procedures.

Assuming that this is true, we will now concentrate on the differences in the PCR procedures and results.

PCR

All publications have used PCR to test for the presence of XMRV in blood: XMRV is present in such low amounts that you can’t detect the RNA without amplifying it first.

PCR allows the detection of a single copy or a few copies of target DNA/RNA per milligram of DNA input, theoretically 1 target DNA copy in 10⁵ to 10⁶ cells (RNA is first reverse-transcribed to DNA). If the target is not frequent, the amplified DNA is only visible after Southern blotting (hybridisation with a radioactive probe “with a perfect fit to” the amplified sequence) or after a second PCR round (so-called nested PCR). In this second round a set of primers is used internal to the first set of primers, so a weak signal is converted into a strong, visible one.
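The arithmetic behind nested PCR's sensitivity boost is simple exponential growth. The cycle numbers, carry-over fraction and 90% per-cycle efficiency below are typical illustrative values, not figures from any of the four studies:

```python
def amplify(copies, cycles, efficiency=0.9):
    """Template copies after `cycles` PCR cycles; each cycle multiplies
    the template by (1 + efficiency), i.e. perfect doubling at 1.0."""
    return copies * (1 + efficiency) ** cycles

# One target molecule after a 30-cycle first round:
first_round = amplify(1, 30)                    # a few hundred million copies

# A nested second round restarts from a small aliquot of the first-round
# product, so even a faint first-round signal becomes overwhelmingly strong:
second_round = amplify(first_round * 1e-4, 30)  # carry over 0.01% as template
```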

All groups have applied nested PCR. The last two studies have also used a sensitive real-time PCR, which is a more quantitative assay and less prone to contamination.

Twenty years ago, I had experiences similar to the WPI’s. I saw very vague PCR bands that had all the characteristics of a tumor-specific sequence in normal individuals, which was contrary to prevailing beliefs and hard to prove. This all had to do with a target frequency near the detection limit and with the high chance of contamination with positive controls. I had to enrich tonsils and purify B cells to get a signal, and sequence the PCR products we found to prove we had no contamination. The data were soon confirmed by others. By the way, our finding of a tumor-specific sequence in normal individuals didn’t mean that everyone develops lymphoma (oh, the analogy).

Now, if you want to prove you’re right when you’ve discovered something new, you’d better do it well.

Whether a PCR assay at or near the detection limit of PCR is successful depends on:

  • the sensitivity of the PCR
    • Every scientific paper should show the detection limit of the PCR: what can the PCR detect? Is 1 virus particle enough, or do there need to be 100 copies of the virus before it is detected? Preferably the positive control should be diluted in negative cells. This is called spiking. Testing a positive control diluted in water doesn’t reflect the true sensitivity: it is much easier for primers to find one single small piece of target DNA in water than to find that piece of DNA swimming in a pool of DNA from 10⁵ cells.
  • the specificity of the PCR.
    • You can get aspecific bands if the primers recognize sequences other than the intended ones. Suppose you have one target sequence competing with a lot of similar sequences; then even a less perfect match in the normal genome has every chance of getting amplified. Therefore you should have a negative control of cells not containing the virus (e.g. placental DNA), not only water. This resembles the PCR conditions of your test samples.
  • Contamination
    • this should be prevented by rigorous spatial separation of  sample preparation, PCR reaction assembly, PCR execution, and post-PCR analysis. There should be many negative controls. Control samples should be processed the same way as the experimental samples and should preferably be handled blinded.
  • The quality and properties of your sample.
    • If XMRV is mainly present in PBMC, separation of PBMC by Ficoll (from other cells and serum) could make the difference between a positive and a negative signal. Furthermore, whole blood and other body fluids often contain inhibitors that may lead to a much lower sensitivity. Purification steps are recommended, and the presence of inhibitors should be checked by spiking and by amplification of control sequences.

Below are the results per article. I have also made an overview of the results in a Google spreadsheet.

WPI
The PCR conditions are badly reported in the WPI paper, published in Science [1]. As a matter of fact, I wonder how it ever came through review.

  • Unlike XMRV-positive prostate cancer cells, XMRV infection status did not correlate with the RNASEL genotype.
  • The sensitivity of the PCR is not shown (nor discussed).
  • No positive control is mentioned. The negative controls were just vials without added DNA.
  • Although the PCR is near the detection limit, only first-round products are shown (without confirmation of the identity of the product). The positive bands are really strong, whereas you would expect them to be weak (near the detection limit after two rounds). This is suggestive of contamination.
  • PBMC have been used as a source and that is fine, but one of WPI’s open letters/news items (Feb 18), in response to the first UK study, says the following:
    • point 7. Perhaps the most important issue to focus on is the low level of XMRV in the blood. XMRV is present in such a small percentage of white blood cells that it is highly unlikely that either UK study’s PCR method could detect it using the methods described. Careful reading of the Science paper shows that increasing the amount of the virus by growing the white blood cells is usually required rather than using white blood cells directly purified from the body. When using PCR alone, the Science authors found that four samples needed to be taken at different times from the same patient in order for XMRV to be detected by PCR in freshly isolated white blood cells.(emphasis mine)
  • But carefully reading the methods mentioned in the “supporting material”, I only read:
    • The PBMC (approximately 2 x 107 cells) were centrifuged at 500x g for 7 min and either stored as unactivated cells in 90% FBS and 10% DMSO at -80 ºC for further culture and analysis or resuspended in TRIzol (…) and stored at -80 ºC for DNA and RNA extraction and analysis. (emphasis mine)

    Either… or. It seems clear to me that the PBMC were not cultured for PCR, at least not in the experiments described in the Science paper.

    How can one accuse other scientists of not “duplicating” the results if the methods are so poorly described and the authors don’t adhere to them themselves?

  • Strikingly, only the PCR reactions performed by the Cleveland Clinic (using one round) are shown, not the actual PCR data produced by WPI itself. That is really odd.
  • It is also not clear whether the results obtained by the various tests were consistent.
    Suzanne D. Vernon, PhD, Scientific Director of the CFIDS Association of America (a charitable organization dedicated to CFS), has dug deeper into the topic. This is what she wrote [9]:
    Of the 101 CFS subjects reported in the paper, results for the various assays are shown for only 32 CFS subjects. Of the 32 CFS subjects whose results for any of the tests are displayed, 12 CFS subjects were positive for XMRV on more than one assay. The other 20 CFS subjects were documented as positive by just one testing method. Using information from a public presentation at the federal CFS Advisory Committee, four of the 12 CFS subjects (WPI 1118, 1150, 1199 and 1125) included in the Science paper were also reported to have cancer – either lymphoma, mantle cell lymphoma or myelodysplasia. The presentation reported that 17 WPI repository CFS subjects with cancer had tested positive for XMRV. So how well are these CFS cases characterized, really?

Erlwein study
The Erlwein study [2] was published within 3 months of the first article. It is simpler in design and was reviewed in less than 3 days. They used whole blood instead of PBMC and performed nested PCR using another set of primers. This doesn’t matter a lot if the PCR is sensitive. However, the sensitivity of the assay is not shown, and the PCR bands of the positive control look very weak, even after the second round (I think they made a mistake in the legend as well: lane 9 is not a positive control but a base-pair ladder, I presume). It also looked like they used a molecular plasmid control in water, but in the comments on the PLoS ONE paper, one of the authors states that the positive control WAS spiked into patient DNA (Qetzel, commenting at Pipeline Corante). Using this PCR, none of the 186 CFS samples was positive.

Groom and van Kuppeveld studies
The two other studies used an excellent PCR approach [3,4]. Both used PBMC; van Kuppeveld used older cryopreserved PBMC. They first tried the primers of Lombardi in a similar nested PCR, but since the sensitivity was low they changed to a real-time PCR with other, optimized primers. They determined the sensitivity of the PCR by serially diluting a plasmid into PBMC DNA from a healthy donor. The limit of sensitivity equates to 16 and 10 XMRV gene copies in the UK and the Dutch study, respectively. They have appropriate negative controls and controls for the integrity of the material (GAPDH; spiking normal control cDNAs into negative DNA to exclude sample-mediated PCR inhibition [3]; phocine distemper virus [4]), thereby also excluding the possibility that the cryopreserved PBMC were unsuitable for amplification.
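The conversion from a diluted plasmid mass to a copy number is straightforward. Using the UK study's reported detection limit of 2.3 × 10⁻⁷ ng and an assumed VP62 plasmid size of roughly 13 kb (my back-calculation, not a figure from the paper), one indeed arrives at about 16 molecules:

```python
AVOGADRO = 6.022e23
DALTONS_PER_BP = 650  # average molecular weight of one base pair of dsDNA

def plasmid_copies(nanograms, plasmid_bp):
    """Number of plasmid molecules contained in `nanograms` of DNA."""
    grams = nanograms * 1e-9
    grams_per_molecule = plasmid_bp * DALTONS_PER_BP / AVOGADRO
    return grams / grams_per_molecule

# 2.3e-7 ng of a ~13.3 kb plasmid (assumed size) comes to roughly 16 molecules
copies = plasmid_copies(2.3e-7, 13_300)
```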

The results look excellent, but none of the PCR samples were positive using these sensitive techniques. A limitation of the Dutch study is that the numbers of patients and controls were small (32 CFS patients, 43 controls).

Summary and Conclusion

In a recent publication in Science, Lombardi and co-authors from the WPI reported the detection of XMRV, a novel retrovirus that was first identified in prostate cancer samples.

Their main finding, the presence of XMRV in peripheral blood cells, could not be replicated by 3 other studies, even under sensitive PCR conditions.

The original Science study has severe flaws, discussed above. For one thing, WPI doesn’t seem to adhere to its own PCR test for XMRV any longer.

It is still possible that XMRV is present in amounts at or near the detection limit. But it is equally possible that the finding is an artifact (the paper being so inaccurate and incomplete). And even if XMRV were reproducibly present in CFS patients, causality would still not be proven, and it is far too early to offer patients “diagnostic tests” and retroviral treatment.

Perhaps the most worrisome part of it all is the non-scientific attitude of WPI employees towards colleague scientists and their continuous communication via press releases. And the way they try to reach patients directly, who (I can’t blame them) are fed up with people not taking them seriously and who are longing for a better diagnosis and, most of all, a better treatment. But this is not the way.

Credits

*Many thanks to Tate (CFS patient) for alerting me to the latest Dutch publication, the Q&A’s of WPI and the findings of Mrs Vernon.
- Ficoll blood separation. Photo [CC] http://www.flickr.com/photos/42299655@N00/3013136882/
- Nested PCR: ivpresearch.org

References

  1. Lombardi VC, Ruscetti FW, Das Gupta J, Pfost MA, Hagen KS, Peterson DL, Ruscetti SK, Bagni RK, Petrow-Sadowski C, Gold B, Dean M, Silverman RH, & Mikovits JA (2009). Detection of an infectious retrovirus, XMRV, in blood cells of patients with chronic fatigue syndrome. Science (New York, N.Y.), 326 (5952), 585-9 PMID: 19815723
  2. Erlwein, O., Kaye, S., McClure, M., Weber, J., Wills, G., Collier, D., Wessely, S., & Cleare, A. (2010). Failure to Detect the Novel Retrovirus XMRV in Chronic Fatigue Syndrome PLoS ONE, 5 (1) DOI: 10.1371/journal.pone.0008519
  3. Groom, H., Boucherit, V., Makinson, K., Randal, E., Baptista, S., Hagan, S., Gow, J., Mattes, F., Breuer, J., Kerr, J., Stoye, J., & Bishop, K. (2010). Absence of xenotropic murine leukaemia virus-related virus in UK patients with chronic fatigue syndrome Retrovirology, 7 (1) DOI: 10.1186/1742-4690-7-10
  4. van Kuppeveld, F., Jong, A., Lanke, K., Verhaegh, G., Melchers, W., Swanink, C., Bleijenberg, G., Netea, M., Galama, J., & van der Meer, J. (2010). Prevalence of xenotropic murine leukaemia virus-related virus in patients with chronic fatigue syndrome in the Netherlands: retrospective analysis of samples from an established cohort BMJ, 340 (feb25 1) DOI: 10.1136/bmj.c1018
  5. McClure, M., & Wessely, S. (2010). Chronic fatigue syndrome and human retrovirus XMRV BMJ, 340 (feb25 1) DOI: 10.1136/bmj.c1099
  6. http://scienceblogs.com/erv/2010/01/xmrv_and_chronic_fatigue_syndr_5.php
  7. http://scienceblogs.com/erv/2010/01/xmrv_and_chronic_fatigue_syndr_6.php
  8. http://scienceblogs.com/erv/2010/03/xmrv_and_chronic_fatigue_syndr_11.php
  9. http://www.cfids.org/xmrv/022510study.asp
Figure legend (Groom et al. [3]): Sensitivity of PCR screening for XMRV in PBMC DNA. VP62 plasmid was serially diluted 1:10 into PBMC DNA from a healthy donor and tested by Taqman PCR with env 6173 primers and probe. The final amount of VP62 DNA in the reaction ranged from 2.3 × 10⁻² ng (A) down to 2.3 × 10⁻⁸ ng (G). The limit of sensitivity was 2.3 × 10⁻⁷ ng (trace F), which equates to 16 molecules of the VP62 XMRV clone.






