A New Safe Blood Test to Diagnose Down Syndrome

14 03 2011

ResearchBlogging.org
The established method to prenatally diagnose gross chromosomal abnormalities is to obtain fetal cells from the womb with a fine needle, either by amniocentesis (a sample of the fluid surrounding the fetus in the womb) or by chorionic villus sampling (CVS, a sample of the placenta taken via the vaginal route).
The procedures are not to be sneezed at. I’ve undergone both, so I speak from experience. It is rather horrifying to see a needle entering the womb close to your baby, not least because you realize there is a (small) chance that the procedure will cause a miscarriage. Furthermore, in my case (rhesus negative) I also had to get an injection of human anti-D immunoglobulin as a precaution against rhesus disease after birth. Finally, it takes ages (OK, 2-3 weeks) to hear the results. By then the fetus is already 14-18 weeks old and there is little time to intervene, if that is what is decided.

Karyotype of trisomy 21 (Down syndrome)

Image via Wikipedia

Over the years many non-invasive alternatives have been sought to test for Down syndrome, the most common chromosomal abnormality, which affects chromosome 21. Instead of one pair of chromosomes 21, there are (usually) three copies of chromosome 21, hence trisomy 21 (see Fig.).

An older non-invasive test is a simple blood test checking the levels of certain proteins and hormones in the mother’s blood that are somehow related to Down syndrome. However, this test is not very accurate.

The same is true for another non-invasive method: the ultrasound scan of the neck of the fetus. An increased amount of translucent fluid behind the neck (‘nuchal translucency‘) is associated with Down syndrome and a few other chromosomal defects.

A combination of serum tests and nuchal translucency in the 11th week correctly identifies fetuses with Down syndrome 87% of the time, whereas it misidentifies healthy fetuses as having Down syndrome in 5% of cases (a 5% false positive rate).

For this reason these non-invasive tests are usually used to “screen”, not to diagnose trisomy 21.
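Why does a 5% false positive rate relegate the combined test to screening? A quick Bayes-style calculation makes it concrete (the 1-in-700 trisomy 21 prevalence is an illustrative assumption, not a figure from the studies discussed here):

```python
# Hypothetical illustration: positive predictive value of the combined
# first-trimester screen (87% sensitivity, 5% false positive rate),
# assuming an illustrative trisomy 21 prevalence of 1 in 700 pregnancies.
sensitivity = 0.87
false_positive_rate = 0.05
prevalence = 1 / 700

true_pos = sensitivity * prevalence                # screen-positive, affected
false_pos = false_positive_rate * (1 - prevalence)  # screen-positive, unaffected
ppv = true_pos / (true_pos + false_pos)

print(f"PPV: {ppv:.1%}")  # roughly 2%: most screen-positives are false alarms
```

With such a low positive predictive value, a positive screen can only justify a follow-up diagnostic test, never a diagnosis by itself.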

Ever since circulating fetal cells and cell-free fetal DNA were found in maternal blood, researchers have tried to enrich for these fetal sources and to characterize chromosomal aberrations using very sensitive molecular diagnostic tools such as the polymerase chain reaction (PCR; see this post). The first attempts were directed at detecting the Y chromosome of male babies in the blood of the mother [1].

This January, Chiu et al. published an article in the BMJ showing that Down syndrome can be detected with greater than 98% accuracy in maternal blood [2]. The group of Lo tested 753 pregnant women at high risk of carrying a fetus with trisomy 21 with this new blood test and compared the results with those obtained by karyotyping (analyzing the number and appearance of chromosomes). The new technique is called multiplexed massively parallel DNA sequencing: a high-throughput technique in which many DNA fragments are sequenced (i.e. their genetic code is read) in parallel. It is even possible to analyze 2 to 8 labeled maternal samples in parallel (2-plex and 8-plex reactions).

This parallel DNA sequencing method is “just” a counting method, in which the overall number of chromosomes is counted and one looks at an overrepresentation of chromosome 21.
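The counting principle can be sketched in a few lines (a toy illustration with made-up read fractions, not the authors’ actual pipeline): tally the fraction of reads mapping to chromosome 21 and flag samples that deviate strongly from known euploid reference samples.

```python
from statistics import mean, stdev

def chr21_zscore(sample_frac, reference_fracs):
    """z-score of a sample's chromosome-21 read fraction against
    fractions measured in known euploid reference samples."""
    mu, sigma = mean(reference_fracs), stdev(reference_fracs)
    return (sample_frac - mu) / sigma

# Toy numbers: euploid pregnancies place ~1.35% of reads on chromosome 21;
# a trisomy 21 fetus contributing ~10% of the plasma DNA shifts that
# fraction upward by about 5% of its value.
euploid = [0.01349, 0.01352, 0.01348, 0.01351, 0.01350, 0.01347]

z = chr21_zscore(0.01418, euploid)  # sample with an elevated chr21 fraction
print(z > 3)  # flagged as likely trisomy 21
```

The actual test is of course far more involved (GC correction, multiplexing), but the core of the call really is this kind of overrepresentation statistic.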

With the superior 2-plex approach, 100% of the 86 known trisomy 21 fetuses were detected at a 2.1% false positive rate. In other words, the 2-plex approach had 100% sensitivity (all known positives were detected) and 97.9% specificity (2.1% of the unaffected fetuses tested positive even though they did not have trisomy 21).

Thus it is a good non-invasive technique to exclude Down syndrome in pregnant women known to be at high risk of Down syndrome. The approach might perform less well in a low-risk group. Furthermore, the study was not fully blinded. A practical disadvantage of this new test is that it is expensive and requires machines not yet available in most hospitals (A spoonful of Medicine).

Another approach, recently published in Nature Medicine, doesn’t have this disadvantage [3]. It involves the application of methylated DNA immunoprecipitation (MeDiP) and real-time quantitative PCR (rt-qPCR), both accessible to all basic diagnostic laboratories. MeDiP is a technique to enrich for methylated DNA sequences, which are more prevalent in fetal DNA. Next, rt-qPCR (amplification of DNA) is used to assess whether the fetus has an extra copy of the fetal-specific methylated regions compared with a normal fetus.

In an initial series of 20 known normal cases and 20 known trisomy 21 cases, the researchers tested several differentially methylated regions (DMRs). The majority of the ratio values in normal cases were at or below 1, whereas in trisomy 21 cases the ratio values were above 1. A combination of 8 specific DMRs out of the 12 enabled the correct diagnosis of all cases.
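The decision rule can be sketched as follows (hypothetical ratio values and a simple majority rule of my own; the published test combines the 8 DMRs more formally):

```python
def call_trisomy21(dmr_ratios, threshold=1.0):
    """Call trisomy 21 if the majority of DMR ratios exceed the threshold.
    (Hypothetical decision rule for illustration; the published test
    combines 8 selected DMRs in a more formal analysis.)"""
    above = sum(r > threshold for r in dmr_ratios)
    return above > len(dmr_ratios) / 2

# Invented ratio values for one normal and one trisomy 21 case:
normal_case  = [0.82, 0.95, 0.88, 1.02, 0.91, 0.97, 0.85, 0.99]
trisomy_case = [1.31, 1.48, 1.22, 1.55, 1.18, 1.40, 1.27, 1.36]

print(call_trisomy21(normal_case))   # False
print(call_trisomy21(trisomy_case))  # True
```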

Next the authors validated the technique by applying the above method to 40 new samples in a blinded fashion. These samples contained 26 normal cases and 14 trisomy 21 cases (as later defined by karyotyping). Normal and trisomy 21 cases were all correctly identified.

The authors conclude that they achieved 100% sensitivity and 100% specificity in 80 samples. However, the first 40 samples were used to calibrate the test, thus the real validation was done in a small set of 40 samples, containing only 14 trisomy cases. One can imagine that a larger sample could yield a few more false negatives or false positives. Indeed, small initial studies are likely to overestimate the true effect.

Furthermore, there was an overrepresentation of trisomy 21 cases (one third of the sample). Thus it is too soon to say that this trisomy 21 method can “be potentially employed in the routine practice of all diagnostic laboratories and be applicable to all pregnancies”, as the authors did. To this end the method should be confirmed in larger studies and in low-risk pregnancies.

In conclusion, the relatively easy and cheap methylated DNA immunoprecipitation/real-time quantitative PCR combo test seems a promising approach to screen for Down syndrome in high-risk pregnancies. Larger studies are needed to confirm the reported 100% accuracy and to demonstrate the applicability to low-risk pregnancies. If confirmed, this blood test could eliminate the need for invasive procedures. Another positive aspect is that the test can be performed early, from the 11th week of gestation, and the results can be obtained within 4-5 days. Moreover, the researchers can easily adapt the current technique to detect abnormal numbers (aneuploidy) of chromosomes 13, 18, X and Y.

References

  • Lo, Y., Corbetta, N., Chamberlain, P., Rai, V., Sargent, I., Redman, C., & Wainscoat, J. (1997). Presence of fetal DNA in maternal plasma and serum. The Lancet, 350 (9076), 485-487. DOI: 10.1016/S0140-6736(97)02174-0
  • Chiu, R., Akolekar, R., Zheng, Y., Leung, T., Sun, H., Chan, K., Lun, F., Go, A., Lau, E., To, W., Leung, W., Tang, R., Au-Yeung, S., Lam, H., Kung, Y., Zhang, X., van Vugt, J., Minekawa, R., Tang, M., Wang, J., Oudejans, C., Lau, T., Nicolaides, K., & Lo, Y. (2011). Non-invasive prenatal assessment of trisomy 21 by multiplexed maternal plasma DNA sequencing: large scale validity study BMJ, 342 (jan11 1) DOI: 10.1136/bmj.c7401
  • Papageorgiou, E., Karagrigoriou, A., Tsaliki, E., Velissariou, V., Carter, N., & Patsalis, P. (2011). Fetal-specific DNA methylation ratio permits noninvasive prenatal diagnosis of trisomy 21 Nature Medicine DOI: 10.1038/nm.2312


Much Ado About ADHD-Research: Is there a Misrepresentation of ADHD in Scientific Journals?

9 02 2011

ResearchBlogging.org
The reliability of science is increasingly under fire. We all know that the media often give a distorted picture of scientific findings (i.e. Hot news: Curry, Curcumin, Cancer & cure). But there is also an ever-growing number of scientific misreports or even fraud (see the BMJ editorial announcing the retraction of the Wakefield paper about a causal relation between MMR vaccination and autism). Apart from real scientific misconduct there are ghost marketing and “publication bias”, which makes (large) positive studies easier to find than those with negative or non-significant results.
Then there are also the ever-growing contradictions, which make the public sigh: what IS true in science?

Indeed, according to Dr. John Ioannidis, “Much of what medical researchers conclude in their studies is misleading, exaggerated, or flat-out wrong” (see “Lies, Damned Lies, and Medical Science” in the Atlantic, 2010). In 2005 he wrote the famous PLoS article “Why most published research findings are false” [2].

With Ioannidis as an editor, a new PLoS ONE paper has recently been published on the topic [1]. The authors, Gonon, Bezard and Boraud, state that there is often a huge gap between neurobiological facts and the firm conclusions stated in the media. They suggest that the misrepresentation often starts in the scientific papers themselves and is echoed by the media.

Although this article has already been reviewed by another researchblogger (Hadas Shema), I would like to give my own views on this paper.

Gonon et al found 3 types of misrepresentation.*

1. Internal inconsistencies (between results and claimed conclusions).

In a (non-systematic) review of 360 ADHD articles, Gonon et al. [1] found two studies with “obvious” discrepancies between results and claimed conclusions. One paper claimed that dopamine is depressed in the brain of ADHD patients. Mitigations were only mentioned in the results section, and of course only the positive message was echoed by the media, without further questioning of any alternative explanation (in this case a high baseline dopamine tone). The other paper [3] claimed that treatment with stimulant medications was associated with more favorable long-term school outcomes. However, the average reading score and the school drop-out rate did not differ significantly between the treatment and control groups. The newspapers nevertheless trumpeted that “ADHD drugs help boost children’s grades”.

2. Fact Omission

To quantify fact omission in the scientific literature, Gonon et al. systematically searched for ADHD articles mentioning the D4 dopamine receptor (DRD4) gene. Among the 117 primary human studies with actual data (like odds ratios), 74 articles stated in their summary that alleles of the DRD4 gene are significantly associated with ADHD, but only 19 summaries mentioned that the risk was small. Fact omission was even more prevalent in articles that only cited studies about DRD4. Not surprisingly, 82% of the media articles didn’t report that DRD4 only confers a small risk either.
In accordance with Ioannidis’ findings [2], Gonon et al. found that the most robust effects were reported in initial studies: odds ratios decreased from 2.4 in the oldest study in 1996 to 1.27 in the most recent meta-analysis.
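To put such numbers in perspective: an odds ratio compares the odds of carrying the risk allele between cases and controls. A quick computation with hypothetical counts (not the actual study data) shows how modest an OR in the 1.2-1.3 range is:

```python
def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """Odds ratio from a 2x2 table: (cases with allele / cases without)
    divided by (controls with allele / controls without)."""
    return (exposed_cases / unexposed_cases) / (exposed_controls / unexposed_controls)

# Hypothetical counts for the DRD4 risk allele in ADHD cases vs controls:
# 60 of 300 cases carry it versus 50 of 300 controls.
or_small = odds_ratio(60, 240, 50, 250)
print(round(or_small, 2))  # 1.25: the allele barely shifts the odds
```

An OR of 1.25 means carriers have odds only a quarter higher than non-carriers, which is exactly the “small risk” the 74 summaries failed to mention.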

3. Extrapolating basic and pre-clinical findings to new therapeutic prospects

Animal ADHD models have their limitations, because investigations based on mouse behavior cannot capture the complexity of ADHD. Analysis of all ADHD-related studies in mice showed that 23% of the conclusions were overstated. The frequency of this overstatement was positively related to the impact factor of the journal.

Again, the positive message was copied by the press. (see Figure below)

Discussion

 

The article by Gonon et al. is another example that “published research findings are false” [2], or at least not completely true. The authors show that the press isn’t culprit number one, but that it “just” copies the overstatements in the scientific abstracts.

The merit of Gonon et al. is that they have extensively looked at a great number of articles and at the press articles citing them.

The first type of misrepresentation wasn’t systematically studied, but types 2 and 3 misrepresentations were studied by analyzing papers on a specific ADHD topic obtained by a systematic search.

One of the solutions the authors propose is that “journal editors collectively reject sensationalism and clearly condemn data misrepresentation”. I agree, and would like to add that reviewers should check that the summary actually reflects the data. Some journals already have strict criteria in this respect. It struck me that the few summaries I checked were very unstructured and short, unlike most summaries I see. Possibly, unstructured abstracts are more typical of journals about neuroscience and animal research.

The choice of the ADHD topics investigated doesn’t seem random. A previous review [4], written by Francois Gonon, deals entirely with “the need to reexamine the dopaminergic hypothesis of ADHD”. The type 1 misrepresentation data stem from this opinion piece.

The putative ADHD-DRD4 gene association and the animal studies, taken as examples for type 2 and type 3 misrepresentations respectively, can also be seen as topics of the “ADHD is a genetic disease” -kind.

Gonon et al. clearly favor the hypothesis that ADHD is primarily caused by environmental factors. In his opinion piece, Gonon starts by saying:

This dopamine-deficit theory of ADHD is often based upon an overly simplistic dopaminergic theory of reward. Here, I question the relevance of this theory regarding ADHD. I underline the weaknesses of the neurochemical, genetic, neuropharmacological and imaging data put forward to support the dopamine-deficit hypothesis of ADHD. Therefore, this hypothesis should not be put forward to bias ADHD management towards psychostimulants.

I wonder whether it is fair of the authors to limit the study to ADHD topics they oppose, in order to (indirectly) confirm their “ADHD has a social origin” hypothesis. Indeed, in the paragraph “social and public health consequences” Gonon et al. state:

Unfortunately, data misrepresentation biases the scientific evidence in favor of the first position stating that ADHD is primarily caused by biological factors.

I do not think that this conclusion is justified by their findings, since similar data misrepresentation might also occur in papers investigating social causes or treatments, but this was not investigated. (mmm, a misrepresentation of the third kind??)

I also wonder why impact factor data were only given for the animal studies.

Gonon et al. interpret a lot, also in their results section. For instance, they mention that 2 out of 360 articles show obvious discrepancies between results and claimed conclusions. This is not much. Then they reason:

Our observation that only two articles among 360 show obvious internal inconsistencies must be considered with caution however. First, our review of the ADHD literature was not a systematic one and was not aimed at pointing out internal inconsistencies. Second, generalization to other fields of the neuroscience literature would be unjustified

But this is what they do. See title:

” Misrepresentation of Neuroscience Data Might Give Rise to Misleading Conclusions in the Media.”

Furthermore, they selectively report themselves. The Barbaresi paper [3], a large retrospective cohort study, did not find an effect on average reading scores and school drop-out, but it did find significantly lowered grade retention, which is, after all, an important long-term school outcome.

Misrepresentation type 2 (“omission”), I would say.*

References

  1. Gonon, F., Bezard, E., & Boraud, T. (2011). Misrepresentation of Neuroscience Data Might Give Rise to Misleading Conclusions in the Media: The Case of Attention Deficit Hyperactivity Disorder PLoS ONE, 6 (1) DOI: 10.1371/journal.pone.0014618
  2. Ioannidis, J. (2005). Why Most Published Research Findings Are False PLoS Medicine, 2 (8) DOI: 10.1371/journal.pmed.0020124
  3. Barbaresi, W., Katusic, S., Colligan, R., Weaver, A., & Jacobsen, S. (2007). Modifiers of Long-Term School Outcomes for Children with Attention-Deficit/Hyperactivity Disorder: Does Treatment with Stimulant Medication Make a Difference? Results from a Population-Based Study Journal of Developmental & Behavioral Pediatrics, 28 (4), 274-287 DOI: 10.1097/DBP.0b013e3180cabc28
  4. Gonon, F. (2009). The dopaminergic hypothesis of attention-deficit/hyperactivity disorder needs re-examining. Trends in Neurosciences, 32 (1), 2-8. DOI: 10.1016/j.tins.2008.09.010


[*A short comment in the NRC Handelsblad (Febr 5th) comes to a similar conclusion]





Does the NIH/FDA Paper Confirm XMRV in CFS? Well, Ditch the MR and Scratch the X… and… you’ve got MLV.

30 08 2010

ResearchBlogging.org
The long-awaited paper that would ‘solve’ the controversy about the presence of xenotropic murine leukemia virus-related virus (XMRV) in patients with chronic fatigue syndrome (CFS) was finally published in PNAS last week [1]. The study, a joint effort of the NIH and the FDA, was withheld at the request of the authors [2], because it contradicted the results of another study performed by the CDC. Both studies were put on hold.

The CDC study was published online in Retrovirology on July 1 [3]. It was the fourth study in succession [4,5,6], and the first US study, that failed to demonstrate XMRV since researchers of the US Whittemore Peterson Institute (WPI) had published their controversial paper on the presence of XMRV in CFS [7].

The WPI study had several flaws, but so had the negative papers: these had tested less rigorously defined CFS populations and had used old and/or too few samples (discussed in two previous posts here and here).
In a way, negative studies failing to reproduce a finding are less convincing than positive studies. Thus everyone was eagerly looking forward to the release of the PNAS paper, especially because the grapevine whispered that this study would confirm the original WPI findings.

Indeed, after publication both Harvey Alter, the team leader of the NIH/FDA study, and Judy Mikovits of the WPI emphasized that the PNAS paper essentially confirmed the presence of XMRV in CFS.

But that isn’t true. Not one single XMRV sequence was found. Instead, related MLV sequences were detected.

Before I go into further detail, please have a look at the previous posts if you are not familiar with the technical details, like the PCR technique. There (and in a separate spreadsheet) I also describe the experimental differences between the studies.

Now what did Lo et al exactly do? What were their findings? And in what respect do their findings agree or disagree with the WPI-paper?

Like WPI, Lo et al. used nested PCR to detect XMRV. Nested means that there are two rounds of amplification. Outer primers are used to amplify the DNA between the two primers used (primers are very specific anchors fitting a small, approximately 20-basepair-long piece of DNA). Then a second round is performed with primers fitting a short sequence within the amplified sequence, or amplicon.

The first amplified gag product is ~730 basepairs long, the second ~410 or ~380 basepairs, depending on the primer sets used: Lo et al. used the same set of outer primers as WPI to amplify the gag gene, but the inner gag primers were either those of WPI (410 bp) or an in-house-designed primer set (380 bp).
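The two-round logic can be illustrated with a toy in-silico sketch (invented sequences and primer strings, not the actual gag primers; for simplicity only one strand is searched and primers must match exactly):

```python
def amplify(template, fwd, rev_rc):
    """Return the amplicon spanning a forward primer site and the
    (reverse-complemented) reverse primer site, or None if either
    site is absent from the template."""
    start = template.find(fwd)
    if start == -1:
        return None
    end = template.find(rev_rc, start + len(fwd))
    if end == -1:
        return None
    return template[start:end + len(rev_rc)]

# Toy template with nested primer sites embedded (uppercase = primer sites).
template = ("ccgt" + "AAGTCG" + "tt" + "CCATGA" +
            "gagcacgtac" + "TGGTAC" + "aa" + "GATCCA" + "ttag")

outer = amplify(template, "AAGTCG", "GATCCA")                   # first round
inner = amplify(outer, "CCATGA", "TGGTAC") if outer else None   # second round

print(inner)  # CCATGAgagcacgtacTGGTAC
```

The second round only succeeds on material already amplified in the first, which is what makes nested PCR so sensitive, and also so prone to contamination artifacts.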

Using the nested PCR approach, Lo et al. found gag gene sequences in peripheral blood mononuclear cells (PBMC) of 86.5% of all tested CFS patients (32/37) and of 96% (!) of the rigorously evaluated CFS patients (24/25), compared with only 6.8% of the healthy volunteer blood donors (3/44). Half of the patients with gag detected in their PBMC also had detectable gag in their plasma (thus not in the cells). Vice versa, all but one patient with gag sequences in the plasma also had gag-positive PBMC. Thus these findings are consistent.

The gels (Figs 1 and 2) showing the PCR products in PBMC don’t look pretty, because many aspecific bands are amplified from human PBMC. These aspecific bands are lacking when plasma (which contains no PBMC) is tested. To get an idea: the researchers are trying to amplify a 730-bp-long sequence using primers that are 23-25 basepairs long and need to find the needle in the haystack (only 1 in 1,000 to 10,000 PBMC may be infected, and 1 PBMC contains approximately 6*10^9 basepairs). Only the order of A, C, G and T varies! Thus there is a lot of competition from sequences that are a near fit but are more abundant than the true gag sequences fitting the primers.

Therefore, detecting a band of the expected size does not suffice to demonstrate the presence of a particular viral sequence. Lo et al. verified whether these were true gag sequences by sequencing each band of the appropriate size. All the sequenced amplicons turned out to be true gag sequences. What makes their finding particularly strong is that the sequences were not always identical. This was one of the objections against the WPI findings: they found the same sequence in all patients (apart from some sequencing errors).

Another convincing finding is that the viral sequences could be demonstrated in samples taken 2-15 years apart. The more recent sequences had evolved and gained one or more mutations, exactly what one would expect from a retrovirus. Such findings also make contamination unlikely. The lack of PCR-amplifiable mouse mitochondrial DNA also makes contamination a less likely event (although personally I would be more afraid of contamination by the viral amplicons used as a positive control). The negative controls (samples without DNA) were also negative in all cases. The researchers also took all necessary physical precautions to prevent contamination (i.e. the blood samples were prepared at a different lab from the one that did the testing, and neither lab had ever sequenced similar sequences before).
(People often suspect conspiracy whenever the possibility of contamination is mentioned, but this is a real pitfall when amplifying low-frequency targets. It took me two years to exclude contamination in my own experiments.)

To me the data look quite convincing, although we’re still far from concluding that the virus is integrated in the human genome and infectious. And, of course, mere presence of a viral sequence in CFS patients, does not demonstrate a causal relationship. The authors recognize this and try to tackle this in future experiments.

Although the study seems well done, it doesn’t alleviate the confusion raised.

The reason, as said, is that the NIH/FDA researchers didn’t find a single XMRV sequence  in any of the samples!

Instead a variety of related MLV retrovirus sequences were detected.

Sure, the two retroviruses belong to the same “family”: the gag gene sequences share 96.6% homology.

However there are essential differences.

One is that XMRV is a xenotropic virus, hence the X: it can no longer enter mouse cells (MR = murine (mouse) related) but can infect cells of other species, including humans (to be more precise, it has both xenotropic and polytropic characteristics). According to the phylogenetic tree Lo et al. constructed, the viral sequences they found are more diverse and best match the so-called polytropic MLV viruses (able to infect both mouse and non-mouse cells). (See the PNAS commentary by Valerie Courgnaud et al. for an explanation.)

The main question this paper raises is why they didn’t find XMRV, like WPI did.

Sure, Mikovits, who is “delighted” by the results, now hurries to say that in the meantime her group has found more diversity in the virus as well [8]. Or, as a critical CFS patient writes on his blog:

In my opinion, the second study is neither a confirmation for, nor a replication of the first. The second study only confirms that WPI is on to something and that there might be an association between a type of retroviruses and ME/CFS.
For 10 months all we’ve heard was “it’s XMRV”. If you didn’t find
XMRV you were doing something wrong: wrong selection criteria, wrong methods, or wrong attitude. Now comes this new paper which doesn’t find XMRV either and it’s heralded as the long awaited replication and confirmation study. Well, it isn’t! Nice piece of spin by Annette Whittemore and Judy Mikovits from the WPI as you can see in the videos below (… ). WPI may count their blessings that the NIH/FDA/Harvard team looked at other MLVs and found them or otherwise it could have been game over. Well, probably not, but how many negative studies can you take?

Assuming the NIH/FDA findings are true, the key question is not why most experiments were completely negative (there may be many reasons why; for one thing, they only tested for XMRV), but why Lo didn’t find any XMRV amongst the positive CFS patients, and why WPI didn’t find any MLV in their positive patient samples.

Two US cohorts of CFS patients with mutually exclusive presence of either XMRV or MLV, whereas the rest of the world finds nothing?? I don’t believe it. One would at least expect overlap.

My guess is that it must be something in the conditions used. Perhaps the set of primers.

As said, Lo used the same external primers as WPI but varied the internal primers. Sometimes they used those of WPI (GAG-I-F/GAG-I-R; F = forward, R = reverse), yielding a ~410 basepair product, and sometimes their own primers (NP116/NP117), yielding a ~380 basepair product. In the Materials and Methods section Lo et al. write: “The NP116/NP117 was an in-house-designed primer set based on the highly conserved sequences found in different MLV-like viruses and XMRVs”.
In the supplement they are more specific:

…. (GAG-I-F/GAG-I-R (intended to be more XMRV specific) or the primer set NP116/NP117 (highly conserved sequences for XMRV and MLV).

Is it possible that the conditions that WPI used were not so suitable for finding MLV?

Let’s look at Fig. S1 (partly depicted below), showing the multiple sequence alignment of 746 gag nucleotides (nt) amplified from 21 CFS patient samples (3 types) and one blood donor (BD22) [first 4 rows] and their comparison with known MLV (middle rows) and XMRV (last rows) sequences. There is nothing remarkable about the area of the reverse primer (not shown). The external forward primer (–>) fits all sequences (dots mean identical nucleotides). Just next to this primer are 15-nt deletions specific for XMRV (—-), but that isn’t a hurdle for the external primers. The internal primers (–>) overlap, but the WPI internal primer starts earlier, in a region with heterogeneity: here there are two mismatches between MLV- and XMRV-like viruses. In this region the CFS-type MLV (nt 196) starts with TTTCA, whereas XMRV sequences all have TCTCG. And yes, the WPI primer starts as follows: TCTCG. Thus there is a complete match with XMRV, but a 2-bp mismatch with MLV. Such a mismatch might easily explain why WPI (not using optimal PCR conditions) didn’t find any low-frequency MLV sequences. The specific inner primers designed by the group of Lo and Alter do fit both sequences, so differences in this region don’t explain the failure of Lo et al. to detect XMRV. Perhaps MLV is more abundant and easier to detect?
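The mismatch argument can be checked mechanically. A small sketch using only the 5-nt stretches quoted above (the alignment itself is of course much longer; this is just the heterogeneous region at the primer’s 5′ end):

```python
def mismatches(primer_region, template_region):
    """Count position-by-position mismatches between a stretch of a
    primer and the corresponding template region (equal lengths assumed)."""
    return sum(p != t for p, t in zip(primer_region, template_region))

wpi_primer_start = "TCTCG"   # start of the WPI inner primer, as quoted
xmrv_region      = "TCTCG"   # XMRV sequences at nt 196
cfs_mlv_region   = "TTTCA"   # CFS-type MLV sequences at nt 196

print(mismatches(wpi_primer_start, xmrv_region))     # 0: perfect match
print(mismatches(wpi_primer_start, cfs_mlv_region))  # 2: primer may fail on MLV
```

Two mismatches within a 25-nt primer may sound trivial, but near the detection limit of a nested PCR they can be the difference between a band and a blank lane.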

But wait a minute. BD22, a variant detected in normal donor blood, does have the XMRV variant sequence in this particular (very small) region. This sequence and the two other sequenced normal donor MLV variants differ from the patient variants, although, according to Lo, both patient and healthy donor variants differ more from XMRV than from each other (Figs 4 and S2). Using the eyeball test, I do see many similarities between XMRV and BD22, though (not only in the above region).

The media pay no attention to these differences between patient and healthy control viral sequences, and the different primer sets used. Did no one actually read the paper?

Whether these differences are relevant depends on whether identical conditions were used for each type of sample. It worries me that Lo says he sometimes used the WPI inner primer set and sometimes the other, specific set. When is sometimes? It is striking that Fig 1 shows the results from CFS patients obtained with the specific primers, and Fig 2 the results from normal donor blood obtained with the WPI primers. Why? Is this the reason they picked up a sequence that fits the WPI primers (starting with TCTCG)?

I don’t like it. I want to know how many times the tested samples were positive or negative with either primer set. I not only want to see the PCR results of the CFS plasma (positive in half of the PBMC-positive cases), but also those of the control plasma. And I want a mix of the patient samples, normal samples, and positive and negative controls on one gel. Everyone doing PCR knows that the signal can differ per PCR and per gel. Furthermore, the second PCR round gives way too many aspecific bands, whereas under optimal conditions you usually get rid of those.

Another confusing finding is a statement at the FDA site:

Additionally, the CDC laboratory provided 82 samples from their published negative study to FDA, who tested the samples blindly.  Initial analysis shows that the FDA test results are generally consistent with CDC, with no XMRV-positive results in the CFS samples CDC provided (34 samples were tested, 31 were negative, 3 were indeterminate).

What does this mean? Which inner primers did the FDA use? With the WPI inner primers, MLV sequences might just not be found (although there might be other reasons as well, such as the less stringent patient criteria).

And what to think of the earlier WPI findings? They did find “XMRV” sequences while no one else did.

I have always been skeptical (see here and here), because:

  • there was no mention of sensitivity in their paper;
  • there was no mention of a positive control; the negative controls were just vials without added DNA;
  • there was no variation in the sequences detected, a statement that they retracted after the present NIH/FDA publication. What a coincidence;
  • although the PCR is near the detection limit, only first-round products are shown, and these are stronger than you would expect them to be after one round;
  • the latter two points are suggestive of contamination, yet no extra tests were undertaken to exclude it;
  • surprisingly, in an open letter/news item (Feb 18) they disclose that culturing PBMCs is necessary to obtain a positive signal. They refer to the original Science paper, but that paper doesn’t mention the need for culturing at all;
  • in another open letter*, Annette Whittemore, director of the WPI, writes to Dr McClure, virologist of one of the negative papers, that WPI researchers had detected XMRV in patient samples from both Dr. Kerr’s and Dr. van Kuppeveld’s cohorts. So if we must believe Annette, the negative samples weren’t negative;
  • at this stage of the controversy, the test is sold as “a reliable diagnostic tool” by a firm with strong ties to WPI. In one such mail Mikovits says: “First of all the current diagnostic testing will define with essentially 100% accuracy! XMRV infected patients”.

Their PR machine, ever-changing “findings”, and anti-scientific attitude are worrying. Read more about it at erv here.

What can we conclude from all this? I don’t know. I presume that WPI did find “something”, but wasn’t cautious, critical, and accurate enough in its drive to move forward (hence the often-changing statements). I presume that the four negative findings relate to the nature of the samples, or the use of the WPI inner primers, or both. I assume that the NIH/FDA findings are real, although the actual positive rates might vary depending on the conditions used (I would love to see all the actual data).

Virologist “erv” is less positive about the quality of the findings and their implications. In one of her comments (17) she responds:

No. An exogenous mouse ERV in humans makes no sense. But thats what their tree says. Mouse ERV is even more incredible than XMRV. Might be able to figure this out more if they upload their sequences to genbank. I realize they tried very hard not to contaminate their samples with mouse cells. That doesnt mean mouse DNA isnt in any of their store-bought reagents. There are H2O lanes in the mitochondral gels, but not the MLV gels (Fig 1, Fig 2). Why? Positive and negative controls go on every gel, end of story. First lesson every rotating student in our lab learns.

Finding mere virus-like sequences in CFS-patients is not enough. We need more data, more carefully gathered and presented. Not only in CFS patients and controls, but in cohorts of patients with different diseases and controls under controlled conditions. This will tell something about the specificity of the finding for CFS. We also need more information about XMRV infectivity and serology.

We also need to find out what being normal, healthy and MLV+ means.

The research on XMRV/MLV seems to progress with one step forward, two steps back.

For the sake of the CFS patients, I truly hope that we are walking in the right direction.

Note

The title of this post was taken from: http://www.veteranstoday.com/2010/08/20/xmrv-renamed-to-hgrv/

References

  1. Lo SC, Pripuzova N, Li B, Komaroff AL, Hung GC, Wang R, & Alter HJ (2010). Detection of MLV-related virus gene sequences in blood of patients with chronic fatigue syndrome and healthy blood donors. Proceedings of the National Academy of Sciences of the United States of America PMID: 20798047
  2. Schekman R (2010). Patients, patience, and the publication process. Proceedings of the National Academy of Sciences of the United States of America PMID: 20798042
  3. Switzer WM, Jia H, Hohn O, Zheng H, Tang S, Shankar A, Bannert N, Simmons G, Hendry RM, Falkenberg VR, Reeves WC, & Heneine W (2010). Absence of evidence of xenotropic murine leukemia virus-related virus infection in persons with chronic fatigue syndrome and healthy controls in the United States. Retrovirology, 7 PMID: 20594299
  4. Erlwein, O., Kaye, S., McClure, M., Weber, J., Wills, G., Collier, D., Wessely, S., & Cleare, A. (2010). Failure to Detect the Novel Retrovirus XMRV in Chronic Fatigue Syndrome PLoS ONE, 5 (1) DOI: 10.1371/journal.pone.0008519
  5. Groom, H., Boucherit, V., Makinson, K., Randal, E., Baptista, S., Hagan, S., Gow, J., Mattes, F., Breuer, J., Kerr, J., Stoye, J., & Bishop, K. (2010). Absence of xenotropic murine leukaemia virus-related virus in UK patients with chronic fatigue syndrome Retrovirology, 7 (1) DOI: 10.1186/1742-4690-7-10
  6. van Kuppeveld, F., Jong, A., Lanke, K., Verhaegh, G., Melchers, W., Swanink, C., Bleijenberg, G., Netea, M., Galama, J., & van der Meer, J. (2010). Prevalence of xenotropic murine leukaemia virus-related virus in patients with chronic fatigue syndrome in the Netherlands: retrospective analysis of samples from an established cohort BMJ, 340 (feb25 1) DOI: 10.1136/bmj.c1018
  7. Lombardi VC, Ruscetti FW, Das Gupta J, Pfost MA, Hagen KS, Peterson DL, Ruscetti SK, Bagni RK, Petrow-Sadowski C, Gold B, Dean M, Silverman RH, & Mikovits JA (2009). Detection of an infectious retrovirus, XMRV, in blood cells of patients with chronic fatigue syndrome. Science (New York, N.Y.), 326 (5952), 585-9 PMID: 19815723
  8. Enserink M (2010). Chronic fatigue syndrome. New XMRV paper looks good, skeptics admit–yet doubts linger. Science (New York, N.Y.), 329 (5995) PMID: 20798285

———

Collaborating and Delivering Literature Search Results to Clinical Teams Using Web 2.0 Tools

8 08 2010

There seem to be two camps in the library world, the medical world and many others: those who embrace Web 2.0 because they consider it useful for their practice, and those who are unaware of Web 2.0 or think it is just a fad. There are only a few ways the Web 2.0-critical can be convinced: by arguments (hardly), by studies showing evidence of its usefulness, and by examples of what works and what doesn’t.

The paper by Shamsha Damani and Stephanie Fulton published in the latest Medical Reference Services Quarterly [1] falls in the latter category. Perhaps the name Shamsha Damani rings a bell: she is a prominent twitterer and has written guest posts on this blog on several occasions (here, here, here and here).

As clinical librarians at The University of Texas MD Anderson Cancer Center, Shamsha and Stephanie are immersed in clinical teams and provide evidence-based literature for various institutional clinical algorithms designed for patient care.

These were some of the problems the clinical librarians encountered when sharing the results of their searches with the teams by classic methods (email):

First, team members were from different departments and were dispersed across the sprawling hospital campus. Since the teams did not meet in person very often, it was difficult for the librarians to receive timely feedback on the results of each literature search. Second, results sent from multiple database vendors were either not received or were overlooked by team members. Third, even if users received the bibliography, they still had to manually search for and locate the full text of articles. The librarians also experimented with e-mailing EndNote libraries; however, many users were not familiar with EndNote and did not have the time to learn how to use it. E-mails in general tended to get lost in the shuffle, and librarians often found themselves re-sending e-mails with attachments. Lastly, it was difficult to update the results of a literature search in a consistent manner and obtain meaningful feedback from the entire team.

Therefore, they tried several Web 2.0 tools for sharing search results with their clinical teams.
In their article, the librarians share their experience with the various applications they explored that allowed centralization of the search results, provided easy online access, and enabled collaboration within the group.

Online reference management tools were the librarians’ first choice, since these are specifically designed to help users gather and store references from multiple databases and to allow sharing of results. Of the available tools, RefWorks was eventually not tested, because it required two sets of usernames and passwords. In contrast, EndNote Web can be accessed from any computer with a username and password. EndNote Web is suitable for downloading and managing references from multiple databases, for retrieving full-text papers and for online collaboration. In theory, that is. In practice, the team members experienced several difficulties: trouble remembering the usernames and passwords, and difficulty using the link resolver and navigating to the full text of each article and back to the EndNote Web homepage. Furthermore, accessing the full text of each article was considered too laborious a process.

Next, free social bookmarking sites were tested, which allow users to bookmark web sites and articles, to share the bookmarks and to access them from any computer. However, most team members didn’t create an account and could therefore not make use of the collaborative features. The bookmarking sites were deemed ‘‘user-unfriendly’’ because (1) the overall layout and the presentation of results, with their many links, were experienced as confusing, (2) the sorting options were not suitable for this purpose, and (3) it was impossible to search within the abstracts, which were not part of the bookmarked records. This was true for both Delicious and Connotea, even though the latter is more apt for science and medicine, includes bibliographic information and allows import and export of references from other systems. Another drawback was that the librarians needed to bookmark and comment on each individual article.

Wikis (PBWorks and SharePoint) appeared most user-friendly, because they were intuitive and easy to use: the librarians had created a shared username and password for the entire team, the wiki was behind the hospital’s firewall (preferred by the team) and the users could access the articles with one click. For the librarians it was labor-intensive, as they annotated the bibliographies, published them on the wiki and added persistent links to each article. It is not clear from the article how final reference lists were created by the team afterwards. Probably by cut & paste, because wikis are not suitable as a word processor, nor for importing and exporting references.

Some Remarks

It is informative to read the pros and cons of the various Web 2.0 tools for collaborating and delivering search results. For me, it was even more valuable to read how the research was done. As the authors note (quote):

There is no ‘‘one-size-fits-all’’ approach. Each platform must be tested and evaluated to see how and where it fits within the user’s workflow. When evaluating various Web 2.0 technologies, librarians should try to keep users at the forefront and seek feedback frequently in order to provide better service. Only after months of exploration did the librarians at MD Anderson Cancer Center learn that their users preferred wikis and 1-click access to full-text articles. Librarians were surprised to learn that users did not like the library’s link resolvers and wanted a more direct way to access information.

Indeed, there is no ‘‘one-size-fits-all’’ approach. For that reason too, the results obtained may only apply in certain settings.

I was impressed by the level of involvement of the clinical librarians and the time they put not only into searching, but also into presenting the data, ranking the references according to study design, publication type and date, and annotating the references. I hope they prune the results as well, because applying this procedure to 1,000 or more references is no joke. And although it may be ideal for the library users, not all librarians work like this. I know of no Dutch librarian who does. Because of the workload, such a ready-made wiki may not be feasible for many librarians.

The librarians’ starting point was to find an easy and intuitive web-based tool that allowed collaboration on and sharing of references.
The emphasis seems to be more on the sharing, since end-users did not seem to collaborate via the wikis themselves. I also wonder whether the simpler and free Google Docs wouldn’t fulfill most of the needs. In addition, some of the tools might have been perceived as more useful if users had received some training beforehand.
The training we offer in Reference Manager is usually sufficient to learn to work efficiently with this quite complex reference management tool. Of course, desktop software is not suitable for online collaboration (although references can easily be exported to a simpler system), but a short training session may take away most of the barriers people feel when using a new tool (with the advantage that they can use the tool for other purposes as well).

In short,

Of the Web 2.0 tools tested, wikis were the most intuitive and easiest to use for collaborating with clinical teams and for delivering literature search results. Although wikis are easy for end-users, the approach seems very time-consuming for the librarians, who make ready-to-use lists with annotations.

Clinical teams of MD Anderson must be very lucky with their clinical librarians.

Reference
Damani S, & Fulton S (2010). Collaborating and delivering literature search results to clinical teams using web 2.0 tools. Medical reference services quarterly, 29 (3), 207-17 PMID: 20677061


———————————

Added: August 9th 2010, 21:30

On the basis of the comments below (Annemarie Cunningham) and on Twitter (@Dymphie, here and here (Dutch)), I think it is a good idea to include a figure of one of the published wiki lists.

It looks beautiful, but, as said, where is the collaborative aspect? Like Dymphie, I have the impression that these lists are no different from “normal” reference lists. Or am I missing something? I also agree with Dymphie that instructing people in Reference Manager may be much more efficient for this purpose.

It is interesting to read Christina Pikas’ view of this paper. On her blog Christina’s LIS Rant (which just moved to the new Scientopia platform), Christina first describes how she delivers her search results to her customers and which platforms she uses for this. Then she shares some thoughts about the paper, like:

  • they (the authors) ruled out RefWorks because it required two sets of logins/passwords – hmm, why not RefWorks with RefShare? Why two sets of passwords?
  • SharePoint wikis suck. I would probably use some other type of web part – even a discussion board entry for each article.
  • they really didn’t use the 2.0 aspects of the 2.0 tools – particularly in the case of the wiki. The most valued aspects were access without a lot of logins and then access to the full text without a lot of clicks.

Like Christina,  I would be interested in hearing other approaches – particularly using newer tools.






Kaleidoscope 2: 2010 wk 31

8 08 2010

Almost a year ago I started a new series, Kaleidoscope, with a “kaleidoscope” of facts, findings, views and news gathered over the preceding 1-2 weeks.
It never got beyond the first edition. Perhaps the introduction of that Kaleidoscope was too overwhelming & dazzling: let’s say it was very rich in content. Or, as
Andrew Spong tweeted: “Part cornucopia, part cabinet of wonders, it’s @laikas Kaleidoscope 2009 wk 47″

This is a reprise in a (somewhat) “shorter” format. Let’s see how it turns out.

This edition will concentrate on social media (blogging, Twitter, Google Wave). I fear that I won’t keep my promise if I deal with more topics.

Medical Grand Rounds and News from the Blogosphere

Life in the Fast Lane is the host of this week’s Grand Rounds. This edition is truly terrific, if not terrifying. Not only does it contain “killer posts”, each medblogger has also been paired with his or her preferred deadly Aussie critter.
Want to know how a full-time ER doctor/educator/textbook author/blogger/editor/health search engine director manages to complete work-related tasks …when the kids are either at school or asleep(!)? Then read this recent interview with Mike Cadogan, the founder of Life in the Fast Lane.

Don’t forget to submit your medical blog post to next week’s Grand Rounds over at Dispatch From Second Base. Instructions and theme details can be found in the post “You are invited to Grand Rounds!“ (update here).

And certainly don’t forget to submit your post related to medical information to the MedLibs Round here. More details can be found at Laika’s MedLibLog and at Highlight Health, the host of the upcoming edition.
(Sorry, writing this post took longer than I thought: you have one day left for submission.)

Dr Shock of the blog with the same name advises us to submit good quality, easy-to-understand posts dealing with science, environment or medicine to Scientia Pro Publica via the blog carnival submission form.

There is a new online science blogging community, Scientopia, until now mostly consisting of bloggers who left ScienceBlogs after (but not because of) Pepsigate. New members can only be added to the collective by invitation (?). Obviously, Pepsi researchers will not be invited, but it remains to be seen who will…  Hopefully it doesn’t become an elitist club.
Virginia Heffernan (NY-Times) has an outspoken opinion about the (ex-) sciencebloggers, illustrated by this one-liner

“ScienceBlogs has become Fox News for the religion-baiting, peak-oil crowd.”

Although I don’t appreciate the ranting style of some of the blogs myself (the sub-“South Park” blasphemy style of PZ Myers, as Virginia puts it), I don’t think most ScienceBlogs deserve to be labelled as “preoccupied with trivia, name-calling and saber rattling”.
See balanced responses at Neurodojo, Neuron Culture & Neuroanthropology (anything with neuro- makes sense, I guess).
Want to understand more about ScienceBlogs and why it was such a terrific community? Then read Bora Z’s (rather long) ScienceBlogs farewell post.

Oh… and there is yet another new science blogging platform: http://www.labspaces.net/, which has evolved from a science news aggregator. It looks slick.

Social Media

Speaking of Twitter: did you know that Twitter reached its 20 billionth tweet over the weekend, a milestone that came just a few months after it hit the 10 billion tweet mark? (Read more in the Guardian.)

Well and if you have no idea WHAT THE FUCK IS MY SOCIAL MEDIA “STRATEGY”? you might click the link to get some (new) ideas. You probably need to refresh the site a couple of times to find the right answer.

First-year medical school and master’s of medicine students at Stanford University will receive an iPad at the start of the year. The extremely tech-savvy students do appreciate the gift:

“Especially in medicine, we’re using so many different resources, including all the syllabuses and slides. I’m able to pull them up and search them whenever I need to. It’s a fantastic idea.”

Good news for Facebook friends: VoIP giant Vonage has just introduced a new iPhone, iPod touch and Android app that allows users to call their Facebook friends for free (Mashable).

It was a shock – or was it? – that Google pulled the plug on Google Wave (RWW) after it had been available to the general public for only 78 days. The unparalleled tool that “could change the web” was too complex to be understood. Here are some thoughts on why Google Wave failed. Since much of the code is open source, ambitious developers may pick up where Google left off.

Votes down for the social media site Digg.com: an undercover investigation has exposed that a group of influential conservative members was involved in censorship, deliberately trying to ban progressives by “burying” them (voting them down), which effectively means these progressives’ stories don’t get enough “diggs” to reach the front page, where most users spend their time.

Votes up for Healthcare Social Media Europe (#HCSMEU), which just celebrated its first birthday.

Miscellaneous

A very strange move: a journal has changed a previously stated conclusion of a previously published paper after a Reuters Health story about serious shortcomings in the report. Read more about it at Gary Schwitzer’s HealthNewsReview Blog.

Finally, for the EBM addicts among us: the Centre for Evidence-Based Medicine has released a new (downloadable) Levels of Evidence table. On the CEBM blog they stress that hierarchies of evidence have been somewhat inflexibly used, but are essentially a heuristic, or shortcut, for finding the likely best evidence. At first sight the new table looks simpler and easier to use.






Will Nano-Publications & Triplets Replace The Classic Journal Articles?

23 06 2010

“Libraries and journal articles as we know them will cease to exist,” said Barend Mons at the symposium in honor of our library’s 25th anniversary (June 3rd). “Possibly we will have another kind of party in another 25 years”…. he continued, grinning.

What he had to say the next half hour intrigued me. And although I had no pen with me (it was our party, remember), I thought it was interesting enough to devote a post to it.

I’m basing this post not only on my memory (we had a lot of Italian wine at the buffet), but also on an article Mons referred to [1], a Dutch newspaper article [2], other articles [3-6] and PowerPoints [7-9] on the topic.

This is a field I know little about, so I will try to keep it simple (also for my sake).

Mons started by touching on a problem that is very familiar to doctors, scientists and librarians: information overload by a growing web of linked data.  He showed a picture that looked like the one at the right (though I’m sure those are Twitter Networks).

As he said elsewhere [3]:

(..) the feeling that we are drowning in information is widespread (..) we often feel that we have no satisfactory mechanisms in place to make sense of the data generated at such a daunting speed. Some pharmaceutical companies are apparently seriously considering refraining from performing any further genome-wide association studies (… whole genome association –…) as the world is likely to produce many more data than these companies will ever be able to analyze with currently available methods .

With the current search engines we have to do a lot of digging to get the answers [8]. Computers are central to this digging, because there is no way people can stay updated, even in their own field.

However, computers can’t deal with the current web and with scientific information as produced in classic articles (even the electronic versions), for the following reasons:

  1. Homonyms. Words that sound the same or are spelled the same but have a different meaning. Acronyms are notorious in this respect. Barend gave PSA as an example but, without realizing it, he used a better one: PPI. This means Proton Pump Inhibitor to me, but apparently Protein-Protein Interaction to him.
  2. Redundancy. To keep journal articles readable we often use different words to denote the same thing. These do not add to the real new findings in a paper. In fact, the majority of digital information is duplicated repeatedly. For example, “mosquitoes transfer malaria” is a factual statement repeated in many consecutive papers on the subject.
  3. The connection between words is not immediately clear (to a computer). For instance, anti-TNF agents can be used to treat skin disorders, but the same drugs can also cause them.
  4. Data are not structured beforehand.
  5. Weight: some “facts” are “harder” than others.
  6. Not all data are available or accessible. Many data are either not published (e.g. negative studies), not freely available, or not easy to find. Some portals (GoPubMed, NCBI) provide structured information (fields, including keywords), but do not enable searching the full text.
  7. Data are spread. Data are kept in “data silos” not meant for sharing [8](ppt2). One would like to simultaneously query 1000 databases, but this would require semantic web standards for publishing, sharing and querying knowledge from diverse sources…..

In a nutshell, the problem is as Barend put it: “Why bury data first and then mine it again?” [9]

Homonyms, redundancy and connection can be tackled, at least in the field Barend is working in (bioinformatics).

Different terms denoting the same concept (i.e. synonyms) can be mapped to a single concept identifier (i.e. a list of synonyms), whereas identical terms used to indicate different concepts (i.e. homonyms) can be resolved by a disambiguation algorithm.
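This mapping and disambiguation step can be sketched in a few lines of Python. This is purely my own illustration of the idea, not the actual software used in bioinformatics; all concept IDs, synonyms and context words are invented:

```python
# Hypothetical concept dictionary: each concept ID lists its synonyms.
CONCEPTS = {
    "C001": {"estrogen receptor 1", "ESR1", "ERalpha"},
    "C002": {"proton pump inhibitor", "PPI"},
    "C003": {"protein-protein interaction", "PPI"},
}

# Context words that hint at one reading of an ambiguous term (homonym).
CONTEXT_HINTS = {
    "C002": {"stomach", "acid", "omeprazole"},
    "C003": {"protein", "binding", "interactome"},
}

def resolve(term, context_words):
    """Map a term to concept IDs; disambiguate homonyms via context overlap."""
    term = term.lower()
    candidates = [cid for cid, syns in CONCEPTS.items()
                  if term in {s.lower() for s in syns}]
    if len(candidates) <= 1:
        return candidates
    # Homonym: keep the candidate whose context hints overlap most.
    return [max(candidates,
                key=lambda cid: len(CONTEXT_HINTS.get(cid, set()) & context_words))]

print(resolve("ERalpha", set()))               # → ['C001'] (synonym mapping)
print(resolve("PPI", {"protein", "binding"}))  # → ['C003'] (homonym resolved)
```

Real concept-recognition software of course uses far larger dictionaries and smarter heuristics, but the principle is the same: normalize synonyms to one identifier, and let the surrounding text decide between homonyms.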

The shortest meaningful sentence is a triplet: a combination of subject, predicate and object. A triplet indicates the connection and its direction. “Mosquitoes transfer malaria” is such a triplet, where mosquitoes and malaria are concepts. In the field of proteins, “UniProt 05067 is a protein” is a triplet (where UniProt 05067 and protein are concepts), as are “UniProt 05067 is located in the membrane” and “UniProt 05067 interacts with UniProt 05067″ [8]. Since these triplets (statements) derive from different databases, consistent naming and availability of information are crucial to finding them. Barend and colleagues are the people behind WikiProteins, an open, collaborative wiki focusing on proteins and their role in biology and medicine [4-6].
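For a machine, such triplets are just (subject, predicate, object) records that can be matched against a pattern. The Python sketch below is my own illustration of that idea (the identifiers are taken from the examples above, not from a real triple store):

```python
# Triplets as (subject, predicate, object) tuples over concept identifiers.
triples = [
    ("mosquito", "transfers", "malaria"),
    ("UniProt:05067", "is_a", "protein"),
    ("UniProt:05067", "located_in", "membrane"),
]

def query(triples, subject=None, predicate=None, obj=None):
    """Return all triples matching the given (possibly partial) pattern."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

# "What do we know about UniProt:05067?"
print(query(triples, subject="UniProt:05067"))
# → [('UniProt:05067', 'is_a', 'protein'), ('UniProt:05067', 'located_in', 'membrane')]
```

This is exactly why consistent concept identifiers matter: the pattern match only works if every database calls the protein by the same name.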

Concepts and triplets are widely accepted in the world of bioinformatics. To get an idea of what this means for searching, see the search engine Quertle, which allows semantic search of PubMed & full-text biomedical literature and automatic extraction of key concepts. Searching for ESR1 $BiologicalProcess will find abstracts mentioning all kinds of processes in which ESR1 (aka ERα, ERalpha, Estrogen Receptor 1) is involved. The search can be refined by choosing ‘narrower terms’ like “proliferation” or “transcription”.

The new aspect is that Mons wants to turn those triplets into (what he calls) nano-publications. Because not every statement is equally ‘hard’, nano-publications are weighted with a number from 0 (uncertain) to 1 (very certain). The nano-publication “mosquitoes transfer malaria” will get a number approaching 1.
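A nano-publication could then be modelled as a triplet plus a certainty weight and a citation of the original finding. The sketch below is purely my own illustration of that idea; the field names are not from Mons’ proposal:

```python
from dataclasses import dataclass

@dataclass
class NanoPublication:
    subject: str
    predicate: str
    obj: str
    certainty: float   # 0 = uncertain, 1 = very certain
    source: str        # citation of the original finding/publication

    def __post_init__(self):
        # The weight must be a valid certainty score.
        if not 0.0 <= self.certainty <= 1.0:
            raise ValueError("certainty must lie in [0, 1]")

np1 = NanoPublication("mosquito", "transfers", "malaria",
                      certainty=0.99, source="(illustrative citation)")
print(np1.certainty)  # → 0.99
```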

Such nano-publications offer little room for shading, interpretation and discussion. Mons does not propose to entirely replace traditional articles with nano-publications. Quote [3]:

While arguing that research results should be available in the form of nano-publications, are emphatically not saying that traditional, classical papers should not be published any longer. But their role is now chiefly for the official record, the “minutes of science” , and not so much as the principle medium for the exchange of scientific results. That exchange, which increasingly needs the assistance of computers to be done properly and comprehensively, is best done with machine-readable, semantically consistent nano-publications.

According to Mons, authors and their funders should start requesting and expecting the papers that they have written and funded to be semantically coded when published, preferably by the publisher and otherwise by libraries: the technology exists to provide Web browsers with the functionality for users to identify nano-publications, and annotate them.

Like the wikiprotein-wiki, nano-publications will be entirely open access. It will suffice to properly cite the original finding/publication.

In addition, there is a new kind of “peer review”. An expert network is set up to immediately assess a tweeted nano-publication when it comes out, so that the publication is assessed by perhaps 1000 experts instead of 2 or 3 reviewers.

On a small scale this is already happening. Nano-publications are sent as tweets to people like Gert-Jan van Ommen (past president of HUGO and co-author of 5 of my publications (or v.v.)), who then gives a red light (don’t believe) or a green light (believe) with one click on his BlackBerry.

As  Mons put it, it looks like a subjective event, quite similar to “dislike” and “like” in social media platforms like Facebook.

Barend often referred to a PLoS ONE paper by van Haagen et al. [1], showing the superiority of the concept-profile-based approach, not only in detecting explicitly described PPIs but also in inferring new PPIs.

[You can skip the part below if you're not interested in details of this paper]

Van Haagen et al. first established a set of 61,807 known human PPIs and a set of many more probable non-interacting protein pairs (NIPPs) from online human-curated databases (NIPPs also from the IntAct database).

For the concept-based approach they used the concept-recognition software Peregrine, which includes synonyms and spelling variations  of concepts and uses simple heuristics to resolve homonyms.

This concept-profile based approach was compared with several other approaches, all depending on co-occurrence (of words or concepts):

  • Word-based direct relation. This approach uses direct PubMed queries (words) to detect if proteins co-occur in the same abstract (thus the names of two proteins are combined with the boolean ‘AND’). This is the simplest approach and represents how biologists might use PubMed to search for information.
  • Concept-based direct relation (CDR). This approach uses concept-recognition software to find PPIs, taking synonyms into account, and resolving homonyms. Here two concepts (h.l. two proteins) are detected if they co-occur in the same abstract.
  • STRING. The STRING database contains a text mining score which is based on direct co-occurrences in literature.

The results show that, using concept profiles, 43% of the known PPIs were detected at a specificity of 99%, and 66% of all known PPIs at a specificity of 95%. In contrast, the direct-relation methods and STRING show much lower scores:

                             Word-based   CDR    Concept profiles   STRING
Sensitivity at spec = 99%       28%       37%          43%            39%
Sensitivity at spec = 95%       33%       41%          66%            41%
Area under Curve                0.62      0.69         0.90           0.69
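For readers less familiar with these measures, the toy Python sketch below (with invented scores, not data from the paper) shows how “sensitivity at a fixed specificity” can be read off ranked similarity scores:

```python
import math

def sensitivity_at_specificity(pos_scores, neg_scores, target_spec):
    """Sensitivity when the decision threshold is set so that at least
    target_spec of the negative pairs are (correctly) called negative.
    A pair is predicted 'interacting' when its score exceeds the threshold."""
    neg_sorted = sorted(neg_scores)
    k = math.ceil(target_spec * len(neg_sorted))  # negatives that must score <= threshold
    threshold = neg_sorted[k - 1]
    tp = sum(s > threshold for s in pos_scores)
    return tp / len(pos_scores)

positives = [0.9, 0.8, 0.75, 0.6, 0.3]   # similarity scores of known PPIs
negatives = [0.1, 0.2, 0.25, 0.4, 0.7]   # scores of non-interacting pairs

# At 95% specificity, 3 of the 5 known pairs are recovered.
print(sensitivity_at_specificity(positives, negatives, target_spec=0.95))  # → 0.6
```

Raising the specificity requirement pushes the threshold up and so lowers the sensitivity, which is why the 99% column in the table above shows smaller numbers than the 95% column.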

These findings suggest that not all proteins with high similarity scores are known to interact; they may be related in another way, e.g. they could be involved in the same pathway or be part of the same protein complex without physically interacting. Indeed, concept-based profiling was superior in predicting relationships between proteins potentially present in the same complex or pathway (thus A-C inferred from co-occurring protein pairs A-B and B-C).

Since there is often a substantial time lag between the first publication of a finding and the time the PPI is entered into a database, a retrospective study was performed to examine how many of the PPIs that would have been predicted by the different methods in 2005 were confirmed in 2007. Indeed, using concept profiles, PPIs could be efficiently predicted before they entered PPI databases and before their interactions were explicitly described in the literature.

The practical value of the method for the discovery of novel PPIs is illustrated by the experimental confirmation of the inferred physical interaction between CAPN3 and PARVB, which was based on the frequent co-occurrence of both proteins with concepts like Z-disc, dysferlin and alpha-actinin. The predicted relationships between proteins are broader than PPIs and include proteins in the same complex or pathway. Depending on the type of relationships deemed useful, the precision of the method can be as high as 90%.

In line with their open access policy, they have made the full set of predicted interactions available in a downloadable matrix and through the webtool Nermal, which lists the most likely interaction partners for a given protein.

According to Mons, this framework will be a very rich source for new discoveries, as it will enable scientists to prioritize potential interaction partners for further testing.

Barend Mons started with the statement that nano-publications will replace classic articles (and the need for libraries). However, things are never as black as they seem.
Mons showed that a nano-publication is basically a “peer-reviewed, openly available” triplet. Triplets can be effectively retrieved and inferred from available databases/papers using a concept-based approach.
Nevertheless, their effectiveness needs to be enhanced by semantically coding triplets when they are published.

What will this mean for clinical medicine? Bioinformatics is quite another discipline, with better-structured and more straightforward data (interaction, identity, place). Interestingly, Mons and van Haagen plan further studies, in which they will evaluate whether concept profiles can also be applied to the prediction of other types of relations, for instance between drugs or genes and diseases. The future will tell whether the above-mentioned approach is also useful in clinical medicine.

Implementation of the following (implicit) recommendations would be advisable, independent of the possible success of nano-publications:

  • Less emphasis on “publish or perish” (thus more on the data themselves, whether positive, negative, trendy or not)
  • Better structured data, partly by structuring articles. This has already improved over the years by introducing structured abstracts, availability of extra material (appendices, data) online and by guidelines, such as STARD (The Standards for Reporting of Diagnostic Accuracy)
  • Open Access
  • Availability of full text
  • Availability of raw data

One might argue that disclosing data is unlikely when pharma is involved. It is very hopeful, therefore, that a group of major pharmaceutical companies has announced that they will share pooled data from failed clinical trials in an attempt to figure out what is going wrong in the studies and what can be done to improve drug development [10].

Unfortunately, I don’t have Mons’ presentation at my disposal. Therefore, here are two other presentations about triplets, concepts and the semantic web.


References

  1. van Haagen HH, ‘t Hoen PA, Botelho Bovo A, de Morrée A, van Mulligen EM, Chichester C, Kors JA, den Dunnen JT, van Ommen GJ, van der Maarel SM, Kern VM, Mons B, & Schuemie MJ (2009). Novel protein-protein interactions inferred from literature context. PloS one, 4 (11) PMID: 19924298
  2. Twitteren voor de wetenschap (Twittering for Science), Maartje Bakker, de Volkskrant (2010-06-05)
  3. Barend Mons and Jan Velterop (?) Nano-Publication in the e-science era (Concept Web Alliance, Netherlands BioInformatics Centre, Leiden University Medical Center.) http://www.nbic.nl/uploads/media/Nano-Publication_BarendMons-JanVelterop.pdf, accessed June 20th, 2010.
  4. Mons, B., Ashburner, M., Chichester, C., van Mulligen, E., Weeber, M., den Dunnen, J., van Ommen, G., Musen, M., Cockerill, M., Hermjakob, H., Mons, A., Packer, A., Pacheco, R., Lewis, S., Berkeley, A., Melton, W., Barris, N., Wales, J., Meijssen, G., Moeller, E., Roes, P., Borner, K., & Bairoch, A. (2008). Calling on a million minds for community annotation in WikiProteins Genome Biology, 9 (5) DOI: 10.1186/gb-2008-9-5-r89
  5. Science Daily (2008/05/08) Large-Scale Community Protein Annotation — WikiProteins
  6. Boing Boing: (2008/05/28) WikiProteins: a collaborative space for biologists to annotate proteins
  7. (ppt1) SWAT4LS 2009: Semantic Web Applications and Tools for Life Sciences, http://www.swat4ls.org/, Amsterdam, Science Park, Friday, November 20th, 2009
  8. (ppt2) Michel Dumontier: Triples for the People: Scientists Liberating Biological Knowledge with the Semantic Web
  9. (ppt3, only slide shown): Bibliography 2.0: A citeulike case study from the Wellcome Trust Genome Campus – by Duncan Hill (EMBL-EBI)
  10. WSJ (2010/06/11) Drug Makers Will Share Data From Failed Alzheimer’s Trials




What One Short Night’s Sleep does to your Glucose Metabolism

11 05 2010

ResearchBlogging.org As a blogger I regularly sleep 3-5 hours just to finish a post. I know that this has its effects on how I feel the next day. I also know short nights don’t promote my clear-headedness, and I also recognize short-term effects on memory, cognitive functions, reaction time and mood (irritability), as depicted in the picture below. But I had no idea of any effect on heart disease, obesity and the risk of type 2 diabetes.

Indeed, short sleep duration is consistently associated with the development of obesity and diabetes in observational studies (see several recent systematic reviews, 3-5). However, as explained before, an observational design cannot establish causality. For instance, diabetes type 2 may be the consequence of other lifestyle aspects of people who spend little time sleeping, or sleep problems might be a consequence rather than a cause of diabetogenic changes.

Diabetes is basically a condition characterized by difficulties processing carbohydrates (sugars, glucose). Type 2 diabetes has a slow onset. First there is a gradual defect in the body’s ability to use insulin. This is called insulin resistance. Insulin is a pancreatic hormone that increases glucose utilization in skeletal muscle and fat tissue and suppresses glucose production by the liver, thereby lowering blood glucose levels.  Over time, damage may occur to the insulin-producing cells in the pancreas (type 2 diabetes),  which may ultimately progress to the point where the pancreas doesn’t make enough insulin and injections are needed. (source: about.com).

Since it is such a slow process, one would not expect insulin resistance to change overnight. And certainly not after just one night of partial sleep deprivation, with only 4-5 hours of sleep.

Still, this is the outcome of a study performed by PhD student Esther Donga. Esther belongs to the group of Romijn, which also performed the previously summarized study on the effects of prior cortisol excess on cognitive functions in Cushing’s disease.

Donga et al. have studied the effects of one night of sleep restriction on insulin sensitivity in 9 healthy lean individuals [1] and in 7 patients with type 1 diabetes [2]. The outcomes were practically the same, but since the results in healthy individuals (having no problems with glucose metabolism, weight or sleep) are most remarkable, I will confine myself to the study in healthy people.

The study design is relatively simple. Five men and four healthy women (mean age 45 years) with a lean body weight and normal  sleep pattern participated in the study. They were not using medication affecting sleep or glucose metabolism and were asked to adhere to their normal lifestyle pattern during the study.

There were 3 study days, separated by intervals of at least 3 weeks. The volunteers were admitted to the clinical research center the night before each study day, to become accustomed to sleeping there. They fasted throughout these nights and spent 8.5 h in bed. The subjects were randomly assigned to sleep deprivation on either the second or third occasion. On that night they were only allowed to sleep from 1 am to 5 am, to achieve an equal compression of both non-REM and REM sleep stages.

(skip blue paragraphs if you are not interested in the details)

Effects on insulin sensitivity were determined on the day after the second and third night (one normal and one short night of sleep) by the gold standard for quantifying insulin resistance: the hyperinsulinemic euglycemic clamp method. This method uses catheters to infuse insulin and glucose into the bloodstream. Insulin is infused to reach a steady-state insulin level in the blood, and insulin sensitivity is determined by measuring the amount of glucose necessary to compensate for the increased insulin level without causing hypoglycemia (low blood sugar). (See the Figure below, and a more elaborate description at Diabetesmanager (pbworks).)
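To make the clamp read-out concrete: at steady state, the glucose infusion rate approximates whole-body glucose disposal, so a lower infusion rate at the same insulin level means lower insulin sensitivity. A minimal sketch (my own illustration; the function name and all numbers are hypothetical, not taken from the paper):

```python
# Toy illustration of the clamp read-out (not the authors' calculation).
# At steady state, the rate of infused glucose approximates whole-body
# glucose disposal (often expressed as an "M value" in mg/kg/min).

def glucose_disposal(infusion_ml_per_min, glucose_mg_per_ml, weight_kg):
    """Glucose disposal in mg per kg body weight per minute."""
    return infusion_ml_per_min * glucose_mg_per_ml / weight_kg

# Hypothetical numbers: a 20% drop in infusion rate after a short night,
# mirroring the ~20% lower rate of glucose disposal reported in the study.
normal_sleep = glucose_disposal(2.0, 200, 70)  # 20% glucose = 200 mg/ml
short_sleep = glucose_disposal(1.6, 200, 70)
reduction = (1 - short_sleep / normal_sleep) * 100
print(f"insulin sensitivity reduced by {reduction:.0f}%")
```

The point of the sketch is only that the clamp turns "insulin sensitivity" into a single measurable rate, which is what allows a 20-25% change after one short night to be quantified at all.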

Prior to beginning the hyperinsulinemic period, basal blood samples were taken and labeled [6,6-2H2]glucose was infused  for assessment of glucose kinetics in the basal state. At different time-points concentrations of glucose, insulin, and plasma nonesterified fatty acids (NEFA) were measured.

The sleep stages were affected differently by the curtailed sleep duration: the proportion of stage III sleep was greater (P < 0.007) and the proportion of stage II sleep smaller (P < 0.006) during the sleep-deprived night.

Partial sleep deprivation did not alter basal levels of glucose, nonesterified fatty acids (NEFA), insulin, glucagon, or cortisol measured the following morning, nor did it affect basal endogenous glucose production.

However, during the clamp procedure there were significant alterations in the following parameters:

  • Endogenous glucose production – increased by approximately 22% (p < 0.017), indicating hepatic insulin resistance.
  • Rate of glucose disposal – decreased by approximately 20% (p < 0.009), indicating decreased peripheral insulin sensitivity.
  • Glucose infusion rate – approximately 25% lower after the night of reduced sleep duration (p < 0.001). This is in agreement with the above findings: less extra glucose was needed to maintain plasma glucose levels.
  • NEFA – increased by 19% (p < 0.005), indicating decreased insulin sensitivity of lipolysis (the breakdown of triglyceride lipids into free fatty acids).

The main novelty of the present study is the finding that one single night of shortened sleep is sufficient to reduce insulin sensitivity (of different metabolic pathways) in healthy men and women.

This is in agreement with the evidence of observational studies showing an association between sleep deprivation and obesity/insulin resistance/diabetes (3-5). It also extends the results of previous experimental studies (summarized in the paper), which documented effects on glucose tolerance after multiple nights of sleep reduction (to 4 h) or after total sleep deprivation.

The authors speculate that the negative effects of multiple nights of partial sleep restriction on glucose tolerance can be reproduced, at least in part, by only a single night of sleep deprivation.

And the media conclude:

  • just one night of short sleep duration can induce insulin resistance, a component of type 2 diabetes (Science Daily)
  • healthy people who had just one night of short sleep can show signs of insulin resistance, a condition that often precedes Type 2 diabetes. (Medical News Today)
  • even a single of night of sleep deprivation can cause the body to show signs of insulin resistance, a warning sign of diabetes (CBS-news)
  • And this was of course the message that caught my eye in the first place: “Gee, one night of bad sleep can already disturb your glucose metabolism in such a way that you arrive at the first stage of diabetes: insulin resistance!…Help!”

    First, “insulin resistance” evokes a different association than “partial insulin resistance” or “a somewhat lower insulin sensitivity” (which is what this study demonstrated). We interpret insulin resistance as a disorder that will eventually lead to diabetes, but perhaps adaptations in insulin sensitivity are just a normal phenomenon, a way to cope with normal fluctuations in exercise, diet and sleep. Or a consequence of other adaptive processes, like changes in the activity of the autonomous nervous system in response to a short sleep duration.

    Just as blood lipids will be high after a lavish dinner, or even after a piece of chocolate. And just as blood cortisol will rise in case of exercise, inflammation or stress. That is normal homeostasis. In this way the body adapts to changing conditions.

    Similarly -and it is a mere coincidence that I saw the post of Neuroskeptic about this study today- an increase of blood cortisol levels in children when ‘dropped’ at daycare, doesn’t mean that this small increase in cortisol is bad for them. And it certainly doesn’t mean that you should avoid putting toddlers in daycare as Oliver James concludes, because “high cortisol has been shown many times to be a correlate of all manner of problems”. As neuroskeptic explains:

    Our bodies release cortisol to mobilize us for pretty much any kind of action. Physical exercise, which of course is good for you in pretty much every possible way, cause cortisol release. This is why cortisol spikes every day when you wake up: it helps give you the energy to get out of bed and brush your teeth. Maybe the kids in daycare were just more likely to be doing stuff than before they enrolled.

    Extremely high levels of cortisol over a long period certainly do cause plenty of symptoms including memory and mood problems, probably linked to changes in the hippocampus. And moderately elevated levels are correlated with depression etc, although it’s not clear that they cause it. But a rise from 0.3 to 0.4 is much lower than the kind of values we’re talking about there.

    So the same may be true for a small temporary decrease in glucose sensitivity. Of course insulin resistance can be a bad thing, if blood sugars stay elevated. And it is conceivable that bad sleep habits contribute to this (certainly when combined with the use of much alcohol and eating junk food).

    What is remarkable (and not discussed by the authors) is that the changes in sensitivity were only “obvious” (by eyeballing) in 3-4 volunteers in all 4 tests. Was the insulin resistance unaffected in the same persons in all 4 tests or was the variation just randomly distributed? This could mean that not all persons are equally sensitive.

    It should be noted that the authors themselves remain rather reserved about the consequences of their findings for normal individuals. They conclude “This physiological observation may be of relevance for variations in glucoregulation in patients with type 1 and type 2 diabetes” and suggest that  “interventions aimed at optimization of sleep duration may be beneficial in stabilizing glucose levels in patients with diabetes.”
    Of course, their second article, in diabetic persons [2], rather warrants this conclusion. Their specific advice is not directly relevant to healthy individuals.


    References

    1. Donga E, van Dijk M, van Dijk JG, Biermasz NR, Lammers GJ, van Kralingen KW, Corssmit EP, & Romijn JA (2010). A Single Night of Partial Sleep Deprivation Induces Insulin Resistance in Multiple Metabolic Pathways in Healthy Subjects. The Journal of clinical endocrinology and metabolism PMID: 20371664
    2. Donga E, van Dijk M, van Dijk JG, Biermasz NR, Lammers GJ, van Kralingen K, Hoogma RP, Corssmit EP, & Romijn JA (2010). Partial sleep restriction decreases insulin sensitivity in type 1 diabetes. Diabetes care PMID: 2035738
    3. Nielsen LS, Danielsen KV, & Sørensen TI (2010). Short sleep duration as a possible cause of obesity: critical analysis of the epidemiological evidence. Obesity reviews : an official journal of the International Association for the Study of Obesity PMID: 20345429
    4. Monasta L, Batty GD, Cattaneo A, Lutje V, Ronfani L, van Lenthe FJ, & Brug J (2010). Early-life determinants of overweight and obesity: a review of systematic reviews. Obesity reviews : an official journal of the International Association for the Study of Obesity PMID: 20331509
    5. Cappuccio FP, D’Elia L, Strazzullo P, & Miller MA (2010). Quantity and quality of sleep and incidence of type 2 diabetes: a systematic review and meta-analysis. Diabetes care, 33 (2), 414-20 PMID: 19910503
    From the methods section of the Donga paper [1]:

    “The subjects were studied on 3 d, separated by intervals of at least 3 wk. Subjects kept a detailed diary of their diet and physical activity for 3 d before each study day and were asked to maintain a standardized schedule of bedtimes and mealtimes in accordance with their usual habits. They were admitted to our clinical research center the night before each study day, and spent 8.5 h in bed from 2300 to 0730 h on all three occasions. Subjects fasted throughout these nights from 2200 h. The first study day was included to let the subjects become accustomed to sleeping in our clinical research center. Subjects were randomly assigned to sleep deprivation on either the second (n = 4) or third (n = 5) occasion. During the night of sleep restriction, subjects spent 8.5 h in bed but were only allowed to sleep from 0100 to 0500 h. They were allowed to read or watch movies in an upward position during the awake hours, and their wakefulness was monitored and assured if necessary.

    The rationale for essentially broken sleep deprivation from 2300 to 0100 h and from 0500 to 0730 h, as opposed to sleep deprivation from 2300 to 0300 h or from 0300 to 0730 h, was that in both conditions the time in bed was centered at the same time, i.e. approximately 0300 h. Slow-wave sleep (i.e. stage III of non-REM sleep) is thought to play the most important role in metabolic, hormonal, and neurophysiological changes during sleep. Slow-wave sleep mainly occurs during the first part of the night, whereas REM sleep predominantly occurs during the latter part of the night (12). We used broken sleep deprivation to achieve a more equal compression of both non-REM and REM sleep stages. Moreover, we used the same experimental conditions for partial sleep deprivation as previously used in other studies (7, 13) to enable comparison of the results.”




    Three Studies Now Refute the Presence of XMRV in Chronic Fatigue Syndrome (CFS)

    27 04 2010

    ResearchBlogging.org “Removing the doubt is part of the cure” (RedLabs)

    Two months ago I wrote about two contradictory studies on the presence of the novel XMRV retrovirus in blood of patients with Chronic Fatigue Syndrome (CFS).

    The first study, published in autumn last year by investigators of the Whittemore Peterson Institute (WPI) in the USA [1], claimed to find XMRV virus in peripheral blood mononuclear cells (PBMC) of patients with CFS. They used PCR and several other techniques.

    A second study, performed in the UK [2], failed to show any XMRV virus in peripheral blood of CFS patients.

    Now there are two other negative studies, one from the UK [3] and one from the Netherlands [4].

    Does this mean that XMRV is NOT present in CFS patients?

    No, different results may still be due to different experimental conditions and patient characteristics.

    The discrepancies between the studies discussed in the previous post remain, but there are new insights that I would like to share.*

    1. Conflict of Interest, bias

    Most CFS patients seem “to go for” WPI, because WPI, established by the family of a chronic fatigue patient, has a commitment to CFS. CFS patients feel that many psychiatrists, including authors of the negative papers [2-4], dismiss CFS as something “between the ears”. This explains the negative attitude towards these “psych-healers” on ME forums (e.g. the Belgian forum MECVS.net and http://www.forums.aboutmecfs.org/). MECVS even has a section on “faulty/wrong” papers, e.g. about the “failure” of psychiatrists to demonstrate XMRV!

    Since a viral (biological) cause would not fit in the philosophy of these psychiatrists, they might just not do their best to find the virus. Or even worse…

    Dr. Mikovits, co-author of the first paper [1] and Director of Research at WPI, even responded to the first UK study as follows (ERV and Prohealth):

    “You can’t claim to replicate a study if you don’t do a single thing that we did in our study,” …
    “They skewed their experimental design in order to not find XMRV in the blood.” (emphasis mine)

    Mikovits also suggested that insurance companies in the UK are behind attempts to sully their findings (ERV).

    Such personal attacks are “not done” in science. And certainly not via this route.

    Furthermore, WPI has its own bias.

    For one thing WPI is dependent on CFS and other neuro-immune patients for its existence.

    WPI has generated numerous press releases and doesn’t seem to use the normal scientific channels. Mikovits presented a 1-hour Q&A session about XMRV and CFS (at a stage when nothing had been proven yet). She will also present data about XMRV at an autism meeting. There is a lot of PR going on.

    Furthermore, there is an intimate link between WPI and VIP Dx, both housed in Reno. VIP Dx is licensed by WPI to provide the XMRV test. Vipdx.com links to the same site as redlabsusa.com, for VIP Dx is the new name of the former RedLabs.

    Interestingly, Lombardi (the first author of the paper) co-founded RedLabs USA Inc. and served as its Director of Operations, Harvey Whittemore owns 100% of VIP Dx and was the company President until this year, and Mikovits is the Vice President of VIP Dx (ME forum). They didn’t disclose this in the Science paper.


    VIP Dx offers a plethora of tests and is the only RedLabs branch that performs the WPI PCR test, now replaced by the “sensitive” culture test (see below). At this stage of controversy, the test is sold as “a reliable diagnostic tool” (according to Prohealth). Surely their motto “Removing the doubt is part of the cure” appeals to patients. But how can doubt be removed as long as the association of XMRV with CFS has not been confirmed, the diagnostic tests offered have not yet been truly validated (see below), a causal relationship between XMRV and CFS has not been proven, and XMRV does not even seem that specific for CFS: it has also been found in people with prostate cancer, autism, atypical multiple sclerosis, fibromyalgia and lymphoma (WSJ).

    Meanwhile, CFS/ME websites are abuzz with queries about how to obtain the tests (also in Europe)… and antiretroviral drugs. Sites like Prohealth seem to advocate for WPI. There is even a commercial XMRV site (who runs it is unclear).

    Project leader Mikovits, and the WPI as a whole, seem to have many contacts with CFS patients, also by mail. In one such mail she says (emphasis and [exclamations] mine):

    “First of all the current diagnostic testing will define with essentially 100% accuracy! XMRV infected patients”. [Blimey!]….
    We are testing the hypothesis that XMRV is to CFS as HIV is to AIDS. There are many people with HIV who don’t have AIDS (because they are getting treatment). But by definition if you have ME you must have XMRV. [doh?]
    [....] There is so much that we don’t know about the virus. Recall that the first isolation of HIV was from a single AIDS patient published in late 1982 and it was not until 2 years later that it was associated with AIDS with the kind of evidence that we put into that first paper. Only a few short years later there were effective therapies. [...]. Please don’t hesitate to email me directly if you or anyone in the group has questions/concerns. To be clear..I do think even if you tested negative now that you are likely still infected with XMRV or its closest cousin..

    Kind regards, Judy

    These tests cost patients money, because so far even Medicare reimburses only 15% of the PCR test. VIP Dx does donate anything above costs to XMRV research, but isn’t this an indirect way of supporting the WPI research? Why do patients have to pay for tests that have not been proven diagnostic? The test is only in the experimental phase.

    I ask you: would such an attitude be tolerated from a regular pharmaceutical company?

    Patients

    Another discrepancy between the WPI study and the other studies is that only the WPI used the Fukuda and Canadian criteria to diagnose CFS patients. The Canadian criteria are much more rigid than those used in the European studies. This could explain why WPI found more positives than the other studies, but it can’t fully explain their recently claimed 96% positives against 0% in the other studies, for at least some of the European patients should fulfill the more rigid criteria.

    Regional Differences

    Patients of the positive and negative studies also differ with respect to the region they come from (US and Europe). Indeed, XMRV has previously been detected in prostate cancer cells from American patients, but not from German and Irish patients.

    However, the latter two reasons may not be crucial if the statement in the open letter* from Annette Whittemore, director of the WPI, to Dr McClure**, the virologist of the second paper [2], is true:

    We would also like to report that WPI researchers have previously detected XMRV in patient samples from both Dr. Kerr’s and Dr. van Kuppeveld’s cohorts prior to the completion of their own studies, as they requested. We have email communication that confirms both doctors were aware of these findings before publishing their negative papers.(……)
    One might begin to suspect that the discrepancy between our findings of XMRV in our patient population and patients outside of the United States, from several separate laboratories, are in part due to technical aspects of the testing procedures.

    Assuming that this is true, we will now concentrate on the differences in the PCR procedures and results.

    PCR

    All publications have used PCR to test the presence of XMRV in blood: XMRV is present in such low amounts that you can’t detect the RNA without amplifying it first.

    PCR allows the detection of a single copy or a few copies of target DNA/RNA per milligram of DNA input, theoretically 1 target DNA copy in 10^5 to 10^6 cells. (RNA is first reverse-transcribed to DNA.) If the target is not frequent, the amplified DNA is only visible after Southern blotting (with a radioactive probe “with a perfect fit to” the amplified sequence) or after a second PCR round (so-called nested PCR). In this second round a set of primers internal to the first set is used. Thus a weak signal is converted into a strong and visible one.
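A toy model (my own illustration, not from any of the papers) shows why assays at this limit behave stochastically: if target molecules are Poisson-distributed over reaction aliquots, even a technically perfect PCR will miss true positives whenever a given aliquot happens to contain zero copies.

```python
import math

# Assumption: target copies are Poisson-distributed over PCR aliquots.
# With an average of lam amplifiable copies per reaction, the probability
# that a reaction contains at least one copy (and so can come up positive)
# is 1 - exp(-lam).
def p_detect(lam):
    return 1 - math.exp(-lam)

for lam in (0.1, 0.5, 1, 3):
    print(f"{lam} copies/reaction -> detection chance {p_detect(lam):.2f}")
```

With one copy per reaction, over a third of reactions are expected to be negative before assay sensitivity is even considered, which is one reason repeated sampling can be needed near the detection limit.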

    All groups applied nested PCR. The last two studies also used a sensitive real-time PCR, which is more quantitative and less prone to contamination.

    Twenty years ago I had similar experiences to those of the WPI. I saw very vague PCR bands, with all the characteristics of a tumor-specific sequence, in normal individuals, which was contrary to prevailing beliefs and hard to prove. This had everything to do with a target frequency near the detection limit and with the high chance of contamination with positive controls. I had to enrich tonsils and purified B cells to get a signal, and sequence the PCR products found to prove we had no contamination. The data were soon confirmed by others. By the way, our finding of a tumor-specific sequence in normal individuals didn’t mean that everyone develops lymphoma (oh, analogy).

    Now, if you want to prove you’re right when you’ve discovered something new, you’d better do it well.

    Whether a PCR assay at or near the detection limit is successful depends on:

    • the sensitivity of the PCR
      • Every scientific paper should show the detection limit of the PCR: what can the PCR detect? Is 1 virus particle enough, or must there be 100 copies of the virus before it is detected? Preferably the positive control should be diluted in negative cells. This is called spiking. Testing a positive control diluted in water doesn’t reflect the true sensitivity. It is much easier for primers to find one single small piece of target DNA in water than to find that piece of DNA swimming in a pool of DNA from 10^5 cells.
    • the specificity of the PCR.
      • You can get nonspecific bands if the primers recognize sequences other than the intended ones. Suppose you have one target sequence competing with a lot of similar sequences; then even a less perfect match in the normal genome has every chance of being amplified. Therefore you should have a negative control of cells not containing the virus (e.g. placental DNA), not only water. This resembles the PCR conditions of your test samples.
    • Contamination
      • This should be prevented by rigorous spatial separation of sample preparation, PCR reaction assembly, PCR execution, and post-PCR analysis. There should be many negative controls. Control samples should be processed the same way as the experimental samples and should preferably be handled blinded.
    • The quality and properties of your sample.
      • If XMRV is mainly present in PBMC, separation of PBMC by Ficoll separation (from other cells and serum) could make the difference between a positive and a negative signal. Furthermore,  whole blood and other body fluids often contain inhibitors, that may lead to a much lower sensitivity. Purification steps are recommended and presence of inhibitors should be checked by spiking and amplification of control sequences.
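The spiking recommendation above can be sketched as a small validation routine. All data here are hypothetical, purely to show the idea of deriving an empirical detection limit from a serial dilution of a positive control spiked into negative-cell DNA (not the assay of any of the cited studies):

```python
# Hypothetical spiking experiment: plasmid copies spiked into DNA from
# virus-negative cells, several PCR replicates per dilution level.
dilution_series = {      # copies per reaction -> replicate outcomes
    1000: [True, True, True],
    100: [True, True, True],
    10: [True, True, False],
    1: [False, False, False],
}

def detection_limit(series):
    """Lowest copy number at which all replicates were still positive.

    A deliberately simple criterion for illustration; real assay
    validation uses many more replicates and probit-style analysis.
    """
    consistently_positive = [c for c, reps in series.items() if all(reps)]
    return min(consistently_positive) if consistently_positive else None

print(detection_limit(dilution_series))  # -> 100
```

Reporting a number like this (as Groom and van Kuppeveld did, with limits of 16 and 10 copies) is exactly what lets readers judge whether a negative result is meaningful.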

    Below the results per article. I have also made an overview of the results in a Google spreadsheet.

    WPI
    The PCR conditions are poorly reported in the WPI paper, published in Science [1]. As a matter of fact, I wonder how it ever got through review.

    • Unlike XMRV-positive prostate cancer cells, XMRV infection status did not correlate with the RNASEL genotype.
    • The sensitivity of the PCR is not shown (nor discussed).
    • No positive control is mentioned. The negative controls were just vials without added DNA.
    • Although the PCR is near the detection limit, only first round products are shown (without confirmation of the identity of the product). The positive bands are really strong, whereas you expect them to be weak (near the detection limit after two rounds). This is suggestive of contamination.
    • PBMC have been used as a source and that is fine, but one of WPI’s open letters/news items (Feb 18), in response to the first UK study, says the following:
      • point 7. Perhaps the most important issue to focus on is the low level of XMRV in the blood. XMRV is present in such a small percentage of white blood cells that it is highly unlikely that either UK study’s PCR method could detect it using the methods described. Careful reading of the Science paper shows that increasing the amount of the virus by growing the white blood cells is usually required rather than using white blood cells directly purified from the body. When using PCR alone, the Science authors found that four samples needed to be taken at different times from the same patient in order for XMRV to be detected by PCR in freshly isolated white blood cells.(emphasis mine)
    • But carefully reading the methods,  mentioned in the “supporting material” I only read:
      • The PBMC (approximately 2 x 107 cells) were centrifuged at 500x g for 7 min and either stored as unactivated cells in 90% FBS and 10% DMSO at -80 ºC for further culture and analysis or resuspended in TRIzol (…) and stored at -80 ºC for DNA and RNA extraction and analysis. (emphasis mine)

      Either …. or. Seems clear to me that the PBMC were not cultured for PCR, at least not in the experiments described in the science paper.

      How can one accuse other scientists of not “duplicating” the results if the methods are so poorly described and the authors don’t adhere to it themselves??

    • Strikingly, only the PCR reactions performed by the Cleveland Clinic (using one round) are shown, not the actual PCR data produced by WPI. That is really odd.
    • It is also not clear whether the results obtained by the various tests were consistent.
      Suzanne D. Vernon, PhD, Scientific Director of the CFIDS Association of America (a charitable organization dedicated to CFS), has dug deeper into the topic. This is what she wrote [9]:
      Of the 101 CFS subjects reported in the paper, results for the various assays are shown for only 32 CFS subjects. Of the 32 CFS subjects whose results for any of the tests are displayed, 12 CFS subjects were positive for XMRV on more than one assay. The other 20 CFS subjects were documented as positive by just one testing method. Using information from a public presentation at the federal CFS Advisory Committee, four of the 12 CFS subjects (WPI 1118, 1150, 1199 and 1125) included in the Science paper were also reported to have cancer – either lymphoma, mantle cell lymphoma or myelodysplasia. The presentation reported that 17 WPI repository CFS subjects with cancer had tested positive for XMRV. So how well are these CFS cases characterized, really?

    The Erlwein study was published within 3 months of the first article. It is simpler in design and was reviewed in less than 3 days. They used whole blood instead of PBMC and performed nested PCR with a different set of primers. This doesn’t matter much, provided the PCR is sensitive. However, the sensitivity of the assay is not shown and the PCR bands of the positive control look very weak, even after the second round (I think they made a mistake in the legend as well: lane 9 is not a positive control but a base-pair ladder, I presume). It also looked as if they used a molecular plasmid control in water, but in the comments on the PLoS ONE paper one of the authors states that the positive control WAS spiked into patient DNA (Qetzel, commenting at Pipeline Corante). Using this PCR, none of the 186 CFS samples was positive.

    Groom and van Kuppeveld studies
    The two other studies use an excellent PCR approach [3,4]. Both used PBMC; van Kuppeveld used older cryopreserved PBMC. They first tried the primers of Lombardi in a similar nested PCR, but since the sensitivity was low they changed to a real-time PCR with other, optimized primers. They determined the sensitivity of the PCR by serially diluting a plasmid into PBMC DNA from a healthy donor. The limit of sensitivity equates to 16 and 10 XMRV gene copies in the UK and the Dutch study, respectively. They included appropriate negative controls and controls for the integrity of the material (GAPDH; spiking normal control cDNAs into negative DNA to exclude sample-mediated PCR inhibition [3]; phocine distemper virus [4]), thereby also excluding the possibility that the cryopreserved PBMC were unsuitable for amplification.

    The results look excellent, but none of the samples was positive using these sensitive techniques. A limitation of the Dutch study is that the numbers of patients and controls were small (32 CFS patients, 43 controls).

    Summary and Conclusion

    In a recent publication in Science, Lombardi and co-authors from the WPI reported the detection of XMRV, a novel retrovirus that was first identified in prostate cancer samples.

    Their main finding, the presence of XMRV in peripheral blood cells, could not be replicated by 3 other studies, even under sensitive PCR conditions.

    The original Science study has severe flaws, discussed above. For one thing, the WPI itself no longer seems to rely on this PCR to test for XMRV.

    It is still possible that XMRV is present in amounts at or near the detection limit. But it is equally possible that the finding is an artifact (the paper being so inaccurate and incomplete). And even if XMRV were reproducibly present in CFS patients, causality would still not be proven, and it is far too early to offer patients “diagnostic tests” and retroviral treatment.

    Perhaps the most worrisome part of it all is the non-scientific attitude of WPI employees towards fellow scientists and their continuous communication via press releases. And the way they try to reach patients directly, patients who (I can’t blame them) are fed up with people not taking them seriously and who are longing for a better diagnosis and, most of all, a better treatment. But this is not the way.

    Credits

    *Many thanks to Tate (CFS patient) for alerting me to the last Dutch publication, the Q&As of WPI and the findings of Mrs Vernon.
    - Ficoll blood separation. Photo [CC] http://www.flickr.com/photos/42299655@N00/3013136882/
    - Nested PCR: ivpresearch.org

    References

    1. Lombardi VC, Ruscetti FW, Das Gupta J, Pfost MA, Hagen KS, Peterson DL, Ruscetti SK, Bagni RK, Petrow-Sadowski C, Gold B, Dean M, Silverman RH, & Mikovits JA (2009). Detection of an infectious retrovirus, XMRV, in blood cells of patients with chronic fatigue syndrome. Science (New York, N.Y.), 326 (5952), 585-9 PMID: 19815723
    2. Erlwein, O., Kaye, S., McClure, M., Weber, J., Wills, G., Collier, D., Wessely, S., & Cleare, A. (2010). Failure to Detect the Novel Retrovirus XMRV in Chronic Fatigue Syndrome PLoS ONE, 5 (1) DOI: 10.1371/journal.pone.0008519
    3. Groom, H., Boucherit, V., Makinson, K., Randal, E., Baptista, S., Hagan, S., Gow, J., Mattes, F., Breuer, J., Kerr, J., Stoye, J., & Bishop, K. (2010). Absence of xenotropic murine leukaemia virus-related virus in UK patients with chronic fatigue syndrome Retrovirology, 7 (1) DOI: 10.1186/1742-4690-7-10
    4. van Kuppeveld, F., Jong, A., Lanke, K., Verhaegh, G., Melchers, W., Swanink, C., Bleijenberg, G., Netea, M., Galama, J., & van der Meer, J. (2010). Prevalence of xenotropic murine leukaemia virus-related virus in patients with chronic fatigue syndrome in the Netherlands: retrospective analysis of samples from an established cohort BMJ, 340 (feb25 1) DOI: 10.1136/bmj.c1018
    5. McClure, M., & Wessely, S. (2010). Chronic fatigue syndrome and human retrovirus XMRV BMJ, 340 (feb25 1) DOI: 10.1136/bmj.c1099
    6. http://scienceblogs.com/erv/2010/01/xmrv_and_chronic_fatigue_syndr_5.php
    7. http://scienceblogs.com/erv/2010/01/xmrv_and_chronic_fatigue_syndr_6.php
    8. http://scienceblogs.com/erv/2010/03/xmrv_and_chronic_fatigue_syndr_11.php
    9. http://www.cfids.org/xmrv/022510study.asp
    Figure legend: Sensitivity of PCR screening for XMRV in PBMC DNA. VP62 plasmid was serially diluted 1:10 into PBMC DNA from a healthy donor and tested by Taqman PCR with env 6173 primers and probe. The final amount of VP62 DNA in the reaction was A, 2.3 × 10^-2 ng; B, 2.3 × 10^-3 ng; C, 2.3 × 10^-4 ng; D, 2.3 × 10^-5 ng; E, 2.3 × 10^-6 ng; F, 2.3 × 10^-7 ng; or G, 2.3 × 10^-8 ng. The limit of sensitivity was 2.3 × 10^-7 ng (shown by trace F), which equates to 16 molecules of the VP62 XMRV clone.
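    As a sanity check on the legend’s arithmetic: converting a mass of plasmid DNA to a molecule count is standard, mass / (size in bp × ~650 g/mol per bp) × Avogadro’s number. A small sketch; the plasmid size (~13.3 kb, XMRV genome plus vector backbone) is my assumption, chosen to be consistent with the quoted 16 molecules:

```python
AVOGADRO = 6.022e23   # molecules per mole
MW_PER_BP = 650       # average g/mol per base pair of double-stranded DNA

def copies_from_mass(mass_ng, plasmid_bp):
    """Convert a mass of plasmid DNA (in ng) to an approximate molecule count."""
    grams = mass_ng * 1e-9
    return grams / (plasmid_bp * MW_PER_BP) * AVOGADRO

# Assumed VP62 clone size of ~13.3 kb (NOT stated in the papers discussed here):
print(round(copies_from_mass(2.3e-7, 13_300)))  # ≈ 16 molecules
```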




    Irreversible Effects of Previous Cortisol Excess on Cognitive Functions in Cushing’s Disease

    10 04 2010

    ResearchBlogging.org
    April 8th is Cushing’s Awareness Day. This day was chosen as a day of awareness because it is the birthday of Dr. Harvey Cushing, the neurosurgeon who first described this illness.

    Cushing’s disease is a rare hormone disease caused by prolonged exposure to high levels of the stress hormone cortisol in the blood, whereas Addison’s disease is caused by the opposite: a lack of cortisol. For more background information on both, see this previous post. Ramona Bates MD, of Suture for a Living, has written an excellent review (in plain language) about Cushing’s disease on the occasion of Cushing’s Awareness Day at EmaxHealth.

    From this you can learn that Cushing’s disease can be due to the patient taking cortisol-like glucocorticoids, such as prednisone for asthma (exogenous cause), but can also arise because the body makes too much cortisol itself. This may be due to a tumor on the pituitary gland, the adrenal gland, or elsewhere in the body.

    Symptoms of Cushing’s disease are related to the effects of high levels of cortisol or other glucocorticoids on the immune system, the metabolism and the brain. They include rapid weight gain, particularly of the trunk and face (central obesity, “moon face” and “buffalo hump”), thinning of the skin and easy bruising, excessive hair growth, opportunistic infections, osteoporosis and high blood pressure.

    Less emphasized than these clinical features are the often very disabling cognitive deficits and emotional symptoms that accompany Cushing’s disease. Cushing patients may suffer from various psychological disturbances, like insomnia, mood swings, depression and manic depression, and from cognitive decline. Several studies have shown that these glucocorticoid-induced changes are accompanied by atrophy of the brain, in particular of the hippocampal region, leading to hippocampal volume loss and a profound loss of synapses [2]. This hippocampal loss seems reversible [2], but are the neurological and psychological deficits also restored? That is far more important to the patient than anatomical changes.

    If we listen to Cushing patients who are “cured” and have traded Cushing’s disease for Addison’s disease, we notice that they feel better after their high levels of cortisol have normalized, but are not fully cured (see two examples of ex-Cushing patients with long-lasting if not irreversible health problems in my previous post here). [added 2010-04-17]
    To realize how this affects daily life, I recommend reading the photo-blog 365 days with Cushing by Robin (also author of Survive the Journey). Quite a few of her posts deal with continuous weakness (tag muscle atrophy), tiredness (tag fatigue), problems with (short-term) memory (see tag memory) or both (like here and here).

    Scientifically, the question is to what extent ex-Cushing patients score worse than healthy individuals or chronically ill people and, if so, whether this can be attributed to the previous high levels of glucocorticoids.

    A recent study by endocrinologists (and one neurologist) from the Leiden University Medical Center assessed the cognitive functioning of patients after long-term cure of their Cushing’s disease (caused by an ACTH-producing pituitary adenoma that induces overproduction of cortisol (hypercortisolism) by the adrenals) [1]. Previous studies had contradictory outcomes and/or were too small to draw conclusions.

    The authors first compared a group of 74 Cushing patients (with a previous pituitary tumor) with matched healthy controls (selected by the patients themselves). Matched means that these controls had the same characteristics as the Cushing patients with respect to gender (male/female: 13/61), age (52 yr) and education.
    Cushing patients had been in remission for an average of 13 years and were followed for another 3 years (16 yr of follow-up in total). Cushing’s disease had been established by clinical signs and symptoms and by appropriate biochemical tests. All patients were treated by transsphenoidal surgery (surgery via the nostrils), if necessary followed by repeat surgery and/or radiotherapy (27%). Cure of Cushing’s disease was defined by normal overnight suppression of plasma cortisol levels after administration of dexamethasone and normal 24-h urinary excretion rates of cortisol. 58% of the patients had at least one form of hypopituitarism (deficiency of one or more pituitary hormones) and half of the patients needed hydrocortisone replacement therapy.

    Long after their cure, 62% of the Cushing patients reported memory problems, and 47% reported problems in executive functioning. The Hospital Anxiety and Depression Scale (HADS) score (10.5) indicated no clinical depression or anxiety. Patients with long-term cure of Cushing’s disease did not perform worse on measures of global cognitive functioning. However, they did show several other cognitive impairments, mainly in the memory domain.
    Only a single test result (FAS, which measures verbal mental flexibility and fluency) differed significantly between patients with short- and long-term remission.

    From a direct comparison with healthy controls alone, it is not clear what causes these cognitive alterations in Cushing patients.

    Therefore, the cognitive function of Cushing patients was compared to that of patients previously treated for non-functioning pituitary macroadenomas (NFMA).
    NFMA patients were chosen because they have undergone similar treatments (transsphenoidal surgery (100%), with repeat surgery and/or radiotherapy (44%)) as the Cushing patients. They also shared hypopituitarism and the need for hydrocortisone substitution in half of the cases. NFMA patients, however, have never been exposed to prolonged excess of cortisol.

    Cushing patients could not be directly compared to NFMA patients, because these patient groups differed with regard to age and gender.

    Thus Cushing patients were compared to matched healthy controls, and NFMA patients to another set of healthy controls matched to them (male/female: 30/24, mean age: 61 yr).

    To compare Cushing patients with NFMA patients, Z-scores* were calculated for each patient group relative to its appropriate control group. A general linear model was used to compare the Z-scores.

    Overall, Cushing patients performed worse than NFMA patients. In the memory domain, patients cured of Cushing’s disease had a significantly lower memory quotient (MQ) on the Wechsler Memory Scale than NFMA patients, notably on the subscales concentration and visual memory. On the Verbal Learning Test of Rey, they recalled fewer words in the imprinting, immediate and delayed recall trials. Furthermore, on the Rey Complex Figure they scored worse on both trials. In tests measuring executive function, patients cured of Cushing’s disease made fewer correct substitutions on the Letter-Digit Substitution Test and produced fewer correct patterns on the Figure Fluency Test compared with treated NFMA patients.

    These impairments are not merely related to pituitary disease in general and/or its treatment, because patients with long-term cure of Cushing’s disease showed subtle impairments in cognitive function even compared with patients previously treated for NFMA. They are most likely caused by irreversible effects of the previous glucocorticoid excess on the central nervous system, because that is the main difference between the two groups.

    Sub-analysis indicated that hypopituitarism was associated with mildly impaired executive functioning**; hydrocortisone dependency** and additional radiotherapy were negatively associated with memory and executive functioning, whereas the duration of remission positively influenced both.

    The main point of criticism, apparently raised during the review process and discussed by the authors, is the presentation of the data without adjustments for multiple comparisons. When more than one test is used, the chance of finding at least one test statistically significant due to chance increases. As the authors point out, however, the positive significant results were not randomly distributed among the different variables. Furthermore, the findings are plausible given the irreversible effects of cortisol excess on the central nervous system in experimental animal and clinical studies.

    Although not addressed in this study, similar cognitive impairments would be expected in patients with continuous overexposure to exogenous glucocorticoids, like prednisone.

    * Z-scores: the z score for an item indicates how far, and in what direction, that item deviates from its distribution’s mean, expressed in units of that distribution’s standard deviation. The z score transformation is especially useful when seeking to compare the relative standings of items from distributions with different means and/or different standard deviations (see: http://sysurvey.com/tips/statistics/zscore.htm).
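    A minimal sketch of that calculation, along the lines of how the study expressed each patient group relative to its own control group (the test scores below are made up for illustration):

```python
from statistics import mean, stdev

def z_scores(patient_values, control_values):
    """Express each patient's score in units of the control group's SD."""
    mu, sd = mean(control_values), stdev(control_values)
    return [(x - mu) / sd for x in patient_values]

# Hypothetical memory-test scores, for illustration only:
controls = [100, 95, 105, 110, 90]   # mean 100
patients = [85, 92, 100]
print([round(z, 2) for z in z_scores(patients, controls)])
```

    Because the two patient groups each get Z-scores against their own matched controls, the scores become comparable across groups despite the age and gender differences.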

    ** This makes me wonder whether Addison patients with panhypopituitarism have lower cognitive functions compared to healthy controls as well.

    Hattip: Hersenschade door stresshormoon lijkt onomkeerbaar [“Brain damage from stress hormone appears irreversible”] (2010/04/08) (medicalfacts.nl)

    References

    1. Tiemensma J, Kokshoorn NE, Biermasz NR, Keijser BJ, Wassenaar MJ, Middelkoop HA, Pereira AM, & Romijn JA (2010). Subtle Cognitive Impairments in Patients with Long-Term Cure of Cushing’s Disease. The Journal of clinical endocrinology and metabolism PMID: 20371667
    2. Patil CG, Lad SP, Katznelson L, & Laws ER Jr (2007). Brain atrophy and cognitive deficits in Cushing’s disease. Neurosurgical focus, 23 (3) PMID: 17961025 Freely available PDF, also published at Medscape




    More about the Research Blogging Awards

    24 03 2010

    In my previous post I mentioned that the winners of the very first edition of the Research Blogging Awards are now known.

    In Beyond the book* you can hear the First Research Blogging Awards announced (see post).
    Here are the podcast and the  transcript of the live interview with the Award organizers Dave Munger of ResearchBlogging.org and Joy Moore of Seed Media.

    Dave and Joy talk about blogs in the research space and the reasons behind some of the winners, which include Not Exactly Rocket Science, Epiphenom, BPS Research Digest and Culturing Science.

    In the interview Dave and Joy not only talk about the winners but also discuss why it is important that science bloggers write about peer review and form a community. It is also meant “to give people the broader picture about the state of research blogging today online and how all of this is helping to promote science and science literacy and culture throughout the world.”

    Two excerpts from the transcript by Moore, which highlight why research blogging is important:

    (…..) and what we’re seeing, and it’s quite exciting, is that bloggers, scientist bloggers around the world are putting a lot of very, very thoughtful effort into spontaneously writing about peer reviewed research in a way that is very similar to what you’ll see in say the news and views sections of some of the top science journals. And so what we’re able to see is not only a broader spectrum of coverage of peer reviewed research and interpretation, but we’re also seeing the immediate accessibility to that interpretation through the blogs and it’s open and it’s free and so it’s really opening up the accessibility to views and interpretations of research in a way that we’ve never seen before.

    (…..)  One of the most critical aspects of being not only a scientist, but also a blogger is ensuring that you get your work out there and you have recognition and attribution for it and therefore, to continue to encourage the Research Blogging activity, we feel that we can help play a role by ensuring that the bloggers are recognized for their work.

    *Beyond the Book is an educational presentation of the not for profit Copyright Clearance Center, with conferences and seminars featuring leading authors and editors, publishing analysts, and information technology specialists.




    Researchblogging Awards. Beaten by a (Former) Rat.

    23 03 2010

    The winners of the Researchblogging contest have been selected.

    I was rudely confronted with the harsh reality that I lost to a fellow Philosophy, Research, or Scholarship blogger, Richard Grant of Confessions of a (former) Lab Rat (and previously of Life of a Lab Rat).

    Very subtly, Richard just left a note: “Sorry”.

    “Thanks” Richard! And congratulations from the bottom of my heart… (no kidding, I really mean congrats!)
    But in one respect you were wrong. You said: “We don’t have the sort of blogs that win awards” Well at least you were half wrong. ;)

    Ed Yong of Not Exactly Rocket Science deserves a special mention, because he won in 3 (!) categories: Research Blog of the Year, Blog Post of the Year and Best Lay-Level Blog. So if you don’t know this blogger, it may well be worthwhile to take a look at his blog.

    Of course this is also true for all other winners (depicted below).
    You can visit their blogs and/or see their Research Blogging (RB) Page.

    Congrats to all winners! And chin up, all other finalists: you’re winners too!





    Sugar-Sweetened Beverages, Diet Coke & Health. Part I.

    14 03 2010

    At Medical and Technology of Joseph Kim, the upcoming Grand Rounds host, I saw the blog post “Need your help on Facebook to get Diet Coke to Donate $50,000 to the Foundation for NIH”.

    The National Heart, Lung, and Blood Institute has started a national campaign in the US, The Heart Truth®. In support of heart health, they issued a challenge to raise awareness of the fact that heart disease is the #1 killer of women, and to get women to identify risk factors and take action to lower them. Diet Coke is one of their corporate partners, helping to spread the word through visibility on 6.7 billion packages of Diet Coke featuring The Heart Truth and Red Dress symbol. It has also started a Facebook cause: Diet Coke will donate $0.50 for every person that joins the cause and $1.00 for every person that donates $1, for a total donation of up to $50,000!

    O.k., a donation is fine, the NIH is fine, but Coca-Cola as a main sponsor to raise awareness of heart disease?? It almost feels like a tobacco company raising awareness of lung cancer. It is as odd as McDonald’s, Lego & Mars preaching online-advertising awareness to kids...

    You could object that any money to raise awareness is a welcome bonus, and that Diet Coke, unlike normal Coke, doesn’t contain any calories. But then you could ask whether Diet Coke is really healthy... Plus, Coca-Cola does sell a lot of beverages loaded with sugar, with a possible adverse effect on health, including cardiovascular disease (see below). It looks a lot like hypocrisy to me, meant only to improve the BRAND.

    Well, I was planning to write about sweetened beverages anyway, since I came across several interesting news items in the last few weeks.

    Sugar-Sweetened Beverages Have Major Effects on Diabetes and Cardiovascular Health

    During the joint EPI/NPAM Conference (Cardiovascular Disease Epidemiology and Prevention & Nutrition, Physical Activity and Metabolism), Mar 2-5, 2010 (link), Litsa Lambrakos presented a poster, “Sugar-Sweetened Beverage Consumption and the Attributable Burden to Diabetes and Coronary Heart Disease”, that was covered in a press release and in the media (Elsevier Global Medical News; All Headline News).

    Based on data from several large observational studies demonstrating a link between higher rates of sugar-sweetened beverages (SSB) consumption and subsequent risk of incident diabetes, Lambrakos and colleagues assumed that daily consumption of SSBs is associated with an increased risk of incident diabetes (RR 1.32 for those with daily consumption compared with adults consuming less than one sugar-sweetened beverage per month).  Next they estimated that the increased consumption of sugar-sweetened beverages (including sugar-sweetened soda, sport and fruit drinks) between 1990 and 2000 contributed to 130,000 new cases of diabetes, 14,000 new cases of coronary heart disease (CHD), and 50,000 additional life-years burdened by coronary heart disease over the past decade. They derived these data from the 1990-2000 National Health and Nutrition Examination Survey (NHANES) on consumption of sugar-sweetened beverages, combined  with the CHD Policy Model, a computer simulation of heart disease in U. S. adults aged 35-84 years.
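    As a rough illustration of how a relative risk like 1.32 is turned into an attributable burden, one can use Levin’s population attributable fraction. Note that the exposure prevalence and case numbers below are made-up placeholders, not figures from the poster (which used the NHANES data and the CHD Policy Model):

```python
def attributable_fraction(prevalence, rr):
    """Levin's population attributable fraction: the share of cases that
    would not occur if the exposure were removed from the population."""
    return prevalence * (rr - 1) / (prevalence * (rr - 1) + 1)

# RR 1.32 as in the poster; the 25% daily-SSB prevalence and the case count
# are hypothetical, chosen only to show the shape of the calculation:
paf = attributable_fraction(prevalence=0.25, rr=1.32)
hypothetical_new_cases = 1_000_000
print(f"PAF = {paf:.1%}; attributable cases = {paf * hypothetical_new_cases:,.0f}")
```

    Even a modest relative risk, applied to an exposure as common as daily soda consumption, translates into a sizeable absolute number of cases, which is the core logic behind the poster’s estimates.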

    Through the model, the researchers also estimated that the additional disease caused by the drinks has increased coronary heart disease healthcare costs by 300-550 million U.S. dollars between 2000 and 2010. This is probably an underestimate, because it does not account for the increased costs associated with the treatment and care of patients with diabetes alone.

    How does this ($300-550 million) compare to the $50,000 (max) that Coca-Cola is willing to contribute to The Heart Truth?

    Admittedly, the comparison is not entirely fair. There are far more soft drinks on the market than Coca-Cola’s sodas. More importantly, the reliability of the figures is highly dependent on the accuracy of the assumptions. Furthermore, it is hard to review a study that has not yet been published.

    Other studies on possible harm of SSB consumption. 1. Effects on BMI, overweight & obesity.

    To get an idea about the evidence on the ‘harm’ of SSB I did a quick search in PubMed (see PubMed tips).

    First I searched for secondary (aggregated) sources.

    ((Dietary Sucrose AND beverages) OR soft drink* OR sugar-sweetened beverag* OR soda*[tiab]) AND “systematic”[Filter]

    This yielded 27 hits.

    Five Publications centered on the effect of beverages on weight, obesity or BMI.

    The effect on overweight seems the most obvious adverse effect of SSBs. First, the increase in obesity over time has been paralleled by an increase in soft drink consumption. Second, daily sweetener consumption in the United States increased by 83 kcal per person, of which 54 kcal/d came from soda. If these calories are added to the normal diet without reducing intake from other sources, 1 soda/d could lead to a weight gain of 6.75 kg in 1 year. [refs in 2]
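    The back-of-the-envelope arithmetic behind that 6.75 kg figure is the static energy-balance rule of thumb of roughly 7,700 kcal per kg of body fat. A minimal sketch (real physiology partly compensates for extra intake, so this is an upper-bound estimate, not a prediction; the 140 kcal per soda is my assumed serving size):

```python
KCAL_PER_KG_FAT = 7_700  # commonly used approximation

def yearly_weight_gain_kg(extra_kcal_per_day):
    """Naive static estimate: every surplus kcal is stored as body fat."""
    return extra_kcal_per_day * 365 / KCAL_PER_KG_FAT

# One ~140-kcal soda per day, added on top of an otherwise unchanged diet:
print(round(yearly_weight_gain_kg(140), 1))  # ≈ 6.6 kg/year
```

    The quoted 6.75 kg corresponds to a slightly larger daily surplus (~142 kcal), so the figures are consistent.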

    Still the evidence is not that clear.

    Malik et al. [2], and an almost completely overlapping systematic review [3], conclude that large cross-sectional studies, well-powered prospective cohort studies with long follow-up, and short-term experimental studies (including 2 RCTs) show a positive association between greater intake of SSBs and weight gain and obesity in both children and adults, and that this yields sufficient evidence for public health strategies to discourage consumption of sugary drinks as part of a healthy lifestyle.

    Two later reviews [4,5] point out that Malik et al. had erroneously concluded that the evidence was ‘strong’, because “several studies were reported as positive when only a selected sub-group had a positive result, or classified as ‘positive non-significant’ where coefficients are near zero and P values in excess of 0·2. Furthermore, the results of two studies were confounded by the inclusion of diet soft drinks.”[4]

    In contrast, Forshee et al. [4] conclude that the association between SSB consumption and BMI was near zero. Interestingly, their funnel plot analysis was consistent with publication bias against studies that do not report statistically significant findings!

    Gibson [5] concludes that the effect of SSBs on body weight is small except in susceptible individuals or at high levels of intake. She also points out that the totality of evidence is dominated by American studies (including the positive NHANES study), “that may be less applicable to the European context where consumption is substantially lower and composition or formulation may differ (high-fructose corn syrup v. sucrose, proportion of diet v. non-diet, etc).”
    Indeed, in a systematic review primarily including European studies [6], overweight was not associated with intake of soft drinks, but with lower physical activity and more TV watching time.

    Thus, based on the current body of evidence, the effect of SSBs (alone) on BMI and overweight is inconclusive.

    It cannot be excluded, though, that a high intake of SSBs alone, or regular consumption of SSBs in combination with other unhealthy lifestyle factors (saturated fat, lower physical activity), contributes to obesity.

    Since lack of sleep is also unhealthy (and possibly obesogenic), I will leave it here.

    Next time I will discuss the cardiovascular and other possibly harmful effects of sugar-sweetened beverages and diet sodas.

    Meanwhile, enjoy the sugar and Coca-Cola video below.

    Whatever the evidence, daily consumption of SSBs, with many calories and no nutritional value, doesn’t seem healthy to me. I won’t allow my kids to drink soda as a habit.

    ResearchBlogging.org

    References

    1. Litsa K Lambrakos, Pamela Coxson, Lee Goldman, Kirsten Bibbins-Domingo (2010). Sugar-Sweetened Beverage Consumption and the Attributable Burden to Diabetes and Coronary Heart Disease, poster  365, Joint Cardiovascular Disease Epidemiology and Prevention &- Nutrition, Physical Activity and Metabolism – Conference Mar 2-5, 2010.
    2. Malik VS, Schulze MB, & Hu FB (2006). Intake of sugar-sweetened beverages and weight gain: a systematic review. The American journal of clinical nutrition, 84 (2), 274-88 PMID: 16895873
    3. Wolff E, & Dansinger ML (2008). Soft drinks and weight gain: how strong is the link? Medscape journal of medicine, 10 (8) PMID: 18924641
    4. Forshee RA, Anderson PA, & Storey ML (2008). Sugar-sweetened beverages and body mass index in children and adolescents: a meta-analysis. The American journal of clinical nutrition, 87 (6), 1662-71 PMID: 18541554
    5. Gibson S (2008). Sugar-sweetened soft drinks and obesity: a systematic review of the evidence from observational studies and interventions. Nutrition research reviews, 21 (2), 134-47 PMID: 19087367
    6. Janssen I, Katzmarzyk PT, Boyce WF, Vereecken C, Mulvihill C, Roberts C, Currie C, Pickett W, & Health Behaviour in School-Aged Children Obesity Working Group (2005). Comparison of overweight and obesity prevalence in school-aged youth from 34 countries and their relationships with physical activity and dietary patterns. Obesity reviews : an official journal of the International Association for the Study of Obesity, 6 (2), 123-32 PMID: 15836463

    Photo Credits

    1. Diet Coke: http://en.wikipedia.org/wiki/File:Diet_Coke_can_US_1982.jpg
    2. Sugar in Coca Cola: http://www.sugarstacks.com/




    Research Blogging Awards 2010

    5 03 2010

    It is now possible to vote for the winners of the 2010 Research Blogging Awards.

    Yet another blog contest, I can hear you say.

    Yes, another blog contest, but a very special one. It is a contest among outstanding bloggers who discuss peer-reviewed research.

    There are over 1,000 blogs registered at ResearchBlogging.org, responsible for 9,500 posts about peer-reviewed journal articles.

    By February 11, 2010, readers had made over 400 nominations. Then, according to researchblogging.org, “the expert panel of judges painstakingly assessed the nominees to select 5 to 10 finalists in each of 20 categories”.

    The categories include:

    • Research Blog of the Year  with some excellent blogs like Neuroskeptic (RB page) and Science-Based Medicine (RB page)
    • Blog Post of the Year
    • Research Twitterer of the Year including David Bradley, Dr. Shock and Bora Zivkovic
    • Best New Blog (launched in 2009)
    • Best Expert-Level blog 
    • Best Lay-Level blog 
    • Funniest Blog 
    • Blogs in other languages, like German and Chinese
    • Blogs according to specialty like Biology, Health, Clinical Research, NeuroScience, Psychology etc

    I was surprised and honored to note that Laika’s MedLiblog is a finalist in the section Philosophy, Research, or Scholarship. Another librarian, Anne Welsh of First Person Narrative, is also a finalist in this section.

    1. First Person Narrative (RB page)
    2. Christopher Leo (RB page)
    3. The Scientist (RB page)
    4. Laika’s MedLibLog (RB page)
    5. Good, Bad, and Bogus (RB page)

    It is now up to you, research bloggers, to vote for your favorite blogs. You don’t need to vote in all categories. That is simply too much, and in the case of, say, the Chinese blogs it wouldn’t make much sense either.

    You can only cast your vote if you are registered with ResearchBlogging.org.
    If you’re not registered (and you blog about peer-reviewed research), you still have time to register. See here for more information. This way you can vote and, most importantly, contribute to ResearchBlogging.org with your reviews of peer-reviewed scientific articles.

    Voting closes on March 14, and awards will be announced on ResearchBlogging.org on March 23, 2010.





    Finally a Viral Cause of Chronic Fatigue Syndrome? Or Not? – How Results Can Vary and Depend on Multiple Factors

    15 02 2010

    ResearchBlogging.org
    Last week @F1000 (on Twitter) alerted me to an interesting discussion at F1000 on a paper in Science that linked chronic fatigue syndrome (CFS) to a newly discovered human virus, XMRV [1].

    This finding was recently disputed by another study in PLoS ONE [2] that couldn’t reproduce the results. This was highlighted in an excellent post by Neuroskeptic: “Chronic Fatigue Syndrome ‘not caused by single virus’ shock!”

    Here is my take on the discrepancy.

    Chronic fatigue syndrome (CFS) is a debilitating disorder of unknown etiology. CFS causes extreme fatigue of the kind that does not go away after rest. Symptoms of CFS include fatigue for 6 months or more and other problems such as muscle pain, memory problems, headaches, pain in multiple joints and sleep problems. Since other illnesses can cause similar symptoms, CFS is hard to diagnose (source: MedlinePlus).

    No one knows what causes CFS, but a viral cause has often been suspected, at least in part of the CFS patients. Because the course of the disease often resembles a post-viral fatigue, CFS has also been referred to as post-viral fatigue syndrome (PVFS).

    The article by Lombardi et al. [1], published in October 2009 in Science, was a real breakthrough. The study showed that two thirds of patients with CFS were infected with a novel gammaretrovirus, xenotropic murine leukaemia virus-related virus (XMRV). XMRV had previously been linked to prostate cancer.

    Lombardi et al  isolated DNA from white blood cells (Peripheral Blood Mononuclear Cells or PBMCs) and assayed the samples for XMRV gag sequences by nested polymerase chain reaction (PCR).

    PCR is a technique that allows the detection of a single copy or a few copies of target DNA by amplifying it across several orders of magnitude, generating thousands to millions of copies of a particular DNA sequence. Nested PCR amplifies the resulting amplicon several orders of magnitude further. In the first round, external primers are used (short DNA sequences that fit the outer ends of the piece of DNA to be amplified); an internal set of primers is used for the second round. Nested PCR is often used when the target DNA is not abundant, and it reduces contamination with non-specific spin-off products amplified from artifact sites to which the outer primers also happen to bind.
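    The two-round logic can be illustrated with a toy in-silico model. Everything below is invented for illustration: the sequence and the “primers” are made-up strings, not the actual XMRV primers, and real primer binding involves complementary strands, not simple substring matching:

```python
def pcr(template, fwd, rev):
    """Toy amplification: return the stretch between a forward and a reverse
    primer site (primers given as exact sense-strand substrings), or None."""
    if template is None:
        return None
    i = template.find(fwd)
    j = template.find(rev, i + len(fwd)) if i != -1 else -1
    return template[i:j + len(rev)] if i != -1 and j != -1 else None

# Hypothetical target and primer sequences, for illustration only:
target = "AAGCCTTAGGCATCGATCGGATTACAGCTTGACCGT"
outer = ("AAGCCTT", "GACCGT")   # round 1: external primers
inner = ("GGCATCG", "CAGCTTG")  # round 2: internal primers

round1 = pcr(target, *outer)    # first-round amplicon
round2 = pcr(round1, *inner)    # nested amplicon, lies within round1
print(round2)
```

    A second round only yields a product if the inner primer sites lie within the first-round amplicon, which is exactly why nesting suppresses non-specific first-round products.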

    [I used a similar approach 15-20 years ago to identify a lymphoma-characteristic translocation in tonsils and purified B cells of (otherwise) healthy individuals. By direct sequencing I could show that each sequence was unique in its breakpoint, thereby excluding that the PCR products arose by contamination with an amplified positive control. All tumor cells had the translocation, against one in 100,000 or 1,000,000 normal cells. To be able to detect the oncogene in B cells, the B cells had to be purified by FACS; otherwise the detection limit could not be reached.]

    Lombardi et al. detected XMRV gag DNA in 68 of 101 patients (67%), as compared to 8 of 218 (3.7%) healthy controls. Detection of both gag and env XMRV sequences was confirmed in 7 of 11 CFS samples at the Cleveland Clinic (remarkably, only these are shown in Fig 1A of the paper, not the original PCR results).
    Of the 11 healthy control DNA samples analyzed by PCR, only one was positive for gag and none for env. The XMRV gag and env sequences were more than 99% similar to those previously reported for prostate tumor–associated strains of XMRV. The authors see this as proof against contamination of the samples with prostate cancer associated XMRV DNA.
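How different are 67% and 3.7% statistically? A minimal sketch, using a pooled two-proportion z-test (my own back-of-the-envelope calculation with the counts reported above, not the statistics used in the paper):

```python
# Sanity check on the reported detection rates: 68/101 CFS patients
# vs 8/218 healthy controls, compared with a pooled two-proportion
# z-test (normal approximation; purely illustrative).
import math

cfs_pos, cfs_n = 68, 101    # Lombardi et al.: CFS patients
ctl_pos, ctl_n = 8, 218     # healthy controls

p1, p2 = cfs_pos / cfs_n, ctl_pos / ctl_n
print(f"CFS: {p1:.1%}  controls: {p2:.1%}")

pooled = (cfs_pos + ctl_pos) / (cfs_n + ctl_n)
se = math.sqrt(pooled * (1 - pooled) * (1 / cfs_n + 1 / ctl_n))
z = (p1 - p2) / se
p_value = math.erfc(abs(z) / math.sqrt(2))   # two-sided p-value
print(f"z = {z:.1f}, p = {p_value:.1e}")
```

Under these assumptions the difference is many standard errors wide, so the debate is not about statistical noise in the Science data — it is about whether those positives are real or artifactual.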

    PCR was not the only line of evidence. Using intracellular flow cytometry and Western blot assays, XMRV proteins were found to be expressed in PBMCs from CFS patients. CFS patients had anti-XMRV antibodies, and cell culture experiments revealed that patient-derived XMRV was infectious. These findings are consistent with, but do not prove, a role for XMRV as a contributing factor in the pathogenesis of CFS; XMRV might just be an innocent bystander. Moreover, unlike in XMRV-positive prostate cancer cells, XMRV infection status did not correlate with the RNASEL genotype.

    The Erlwein study [2] was published within 3 months of the first article. It is much simpler in design. DNA was extracted from whole blood (not from purified white blood cells) and subjected to a nested PCR using a different set of primers. The positive control was an end-point dilution of the plasmid; water served as a negative control. None of the 186 CFS samples was positive.

    The question then is: which study is right? (Although it should be stressed that the Science paper only shows an association between the virus and CFS, not a causal relationship.)

    Regional Differences

    Both findings could be “real” if there was a regional difference in occurrence of the virus. Indeed XMRV has previously been detected in prostate cancer cells from American patients, but not from German and Irish patients.

    Conflict of Interest

    Lombardi’s clinic [1] offers a $650 diagnostic test to detect XMRV, so it is of real advantage to the authors of the first paper that the CFS samples are positive for the virus. On the other hand, Prof. Simon Wessely of the second paper has built his career on the hypothesis that CFS is a form of psychoneurosis that should be treated with cognitive behavior therapy. A viral (biological) cause would not fit that picture.

    Shortcomings of the Lombardi-article [1]

    Both studies have used nested PCR to detect XMRV. Because of the enormous amplification potential, PCR can easily lead to contamination (with the positive control) and thus false positive results. Indeed it is very easy to get contamination from an undiluted positive into a weakly positive or negative sample.

    Charles Chiu, who belongs to the group that detected XMRV in a specific kind of hereditary prostate cancer, puts it like this [5]:

    In their Dissenting Opinion of this article, Moore and Shuda raise valid concerns regarding the potential for PCR contamination in this study. Some concerns include 1) the criteria for defining CFS/ME in the patients and in controls were not explicitly defined, 2) nested PCR was used and neither in a blinded nor randomized fashion, 3) the remarkable lack of diversity in the six fully sequenced XMRV genomes (<6 nucleotide average difference across genome) — with Fig. S1 even showing that for one fully sequenced isolate two of the single nucleotide differences were “N’s” — clearly the result of a sequencing error, 4) failure to use Southern blotting to confirm PCR results, and 5) primary nested PCR screening done in one lab as opposed to independent screening from start to finish in two different laboratories. Concerns have also been brought up with respect to the antigen testing

    Shortcomings of the Erlwein-article [2]

    Many people have objected that the population of CFS patients is not the same in the two studies. Admittedly, CFS is difficult enough to diagnose (it is a diagnosis of exclusion), but according to many commenters on the PLoS study there was a clear bias towards more depressed patients, in whom a biological agent is less likely to be the cause of the disease. In contrast, the US patients had all kinds of physical constraints and immunological problems.

    The review process of the PLoS paper was also far less stringent: 3 days versus several months.

    The PLoS study might have suffered from the opposite of contamination: failure to amplify rare XMRV DNA. This is not improbable. The Erlwein group did not purify the blood cells, used other primers, amplified a different sequence, and did not test DNA from normal individuals. The positive control was diluted in water, not in human DNA, and the negative control was water.

    Omitting cell purification can lead to a lower relative amount of XMRV DNA, or to inhibition of the PCR (often seen with unpurified samples). Furthermore, the gel results seem of poor quality (see Fig 2). The second round of the positive PCR sample yields an overloaded lane with many aspecific bands (lane 9), whereas the first round yields only a very vague low-molecular-weight band (lane 10). True, the CFS samples also went through two rounds, but why are the aspecific bands not seen there? It would have been better to use a tenfold titration of the positive control in human DNA (a more realistic imitation of the CFS samples: (possibly) a rare piece of XMRV DNA mixed with genomic DNA) and to use normal human DNA, not water, as the negative control.

    Another point is that the normal XMRV incidence of 1% to 3.7% seen in healthy controls was not reached in the PLoS study, although this could be a matter of chance (1 out of 100).
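How surprising is a complete blank? A rough binomial sketch (my own back-of-the-envelope, assuming independent samples and a perfectly sensitive assay — not a calculation from either paper):

```python
# Probability of finding 0 positives among 186 samples if XMRV were
# truly present at a given rate (binomial, perfect-assay assumption).
# The rates are the 3.7% healthy-control and 67% CFS-patient figures
# reported by Lombardi et al.; everything else is illustrative.
n = 186

p_zero_background = (1 - 0.037) ** n   # at the healthy-control rate
p_zero_cfs = (1 - 0.67) ** n           # at the reported CFS rate

print(f"P(0/{n}) at 3.7% prevalence: {p_zero_background:.1e}")
print(f"P(0/{n}) at 67% prevalence:  {p_zero_cfs:.1e}")
```

Under these assumptions, even the low background rate would produce an all-negative run well under 1% of the time, so a blank result points either at a genuine population or regional difference, or at a technical failure to amplify.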

    Further Studies

    We can philosophize, but the answer must await further studies. Several efforts are ongoing.

    References

    1. Lombardi VC, Ruscetti FW, Das Gupta J, Pfost MA, Hagen KS, Peterson DL, Ruscetti SK, Bagni RK, Petrow-Sadowski C, Gold B, Dean M, Silverman RH, & Mikovits JA (2009). Detection of an infectious retrovirus, XMRV, in blood cells of patients with chronic fatigue syndrome. Science, 326 (5952), 585-589. PMID: 19815723
    2. Erlwein O, Kaye S, McClure M, Weber J, Wills G, Collier D, Wessely S, & Cleare A (2010). Failure to detect the novel retrovirus XMRV in chronic fatigue syndrome. PLoS ONE, 5 (1). DOI: 10.1371/journal.pone.0008519
    3. http://f1000biology.com/article/yxfr5q9qnc967kn/id/1166366/evaluation/sections
    4. http://neuroskeptic.blogspot.com/2010/01/chronic-fatigue-syndrome-in-not-caused.html
    5. Charles Chiu: Faculty of 1000 Biology, 19 Jan 2010. http://f1000biology.com/article/id/1166366/evaluation

    Photo Credits

    Nested PCR ivpresearch.org






