Of Mice and Men Again: New Genomic Study Helps Explain why Mouse Models of Acute Inflammation do not Work in Men

25 02 2013

ResearchBlogging.org

This post was updated after a discussion on Twitter with @animalevidence, who pointed me to a great blog post at Speaking of Research ([19], a repost of [20]) highlighting the shortcomings of the current study, which used just one single inbred strain of mice (C57Bl6) [2013-02-26]. Main changes are in blue.

A recent paper published in PNAS [1] caused quite a stir both inside and outside the scientific community. The study challenges the validity of using mouse models to test what works as a treatment in humans. At least this is what many online news sources seem to conclude: “drug testing may be a waste of time”[2], “we are not mice” [3, 4], or a bit more to the point: mouse models of inflammation are worthless [5, 6, 7].

But the current study looks only at one specific area: the inflammatory responses that occur in critically ill patients after severe trauma and burns (SIRS, systemic inflammatory response syndrome). In these patients a storm of events may eventually lead to organ failure and death. It is similar to what may occur in sepsis (where the cause is a systemic infection).

Furthermore, the study uses only one single approach: it compares the gene response patterns in serious human injuries (burns, trauma) and in a human model partially mimicking these inflammatory diseases (healthy human volunteers receiving a low dose of endotoxin) with the three corresponding mouse models (burns, trauma, endotoxin).

And, as highlighted by Bill Barrington of “Understand Nutrition” [8], the researchers tested the gene profiles in only one single strain of mice: C57Bl6 (B6 for short). If B6 were the only model used in practice this would be less of a problem. But according to Mark Wanner of the Jackson Laboratory [19, 20]:

 It is now well known that some inbred mouse strains, such as the C57BL/6J (B6 for short) strain used, are resistant to septic shock. Other strains, such as BALB and A/J, are much more susceptible, however. So use of a single strain will not provide representative results.

The results themselves are very clear. The figures show at a glance that there is no correlation whatsoever between the human and B6 mouse expression data.

Seok and 36 other researchers from across the USA looked at approximately 5,500 human genes and their mouse analogs. In humans, burns and traumatic injuries (and to a certain extent the human endotoxin model) triggered the activation of a vast number of genes that were not activated in the present C57Bl6 mouse models. In addition, the genomic response was longer lasting in the human injuries. Furthermore, the top 5 most activated and most suppressed pathways in human burns and trauma had no correlates in mice. Finally, analysis of existing data in the Gene Expression Omnibus (GEO) database showed that the lack of correlation between mouse and human studies also held for other acute inflammatory responses, like sepsis and acute infection.

This is a high quality study with interesting results. However, the results are not as groundbreaking as some media suggest.

As discussed by the authors [1], mice are known to be far more resilient to inflammatory challenge than humans: a million-fold higher dose of endotoxin than the dose causing shock in humans is lethal to mice. This, and the fact that “none of the 150 candidate agents that progressed to human trials has proved successful in critically ill patients”, already indicates that the current approach fails.

[This is not entirely correct: the endotoxin/LPS dose in mice is 1,000–10,000 times the dose required to induce severe disease with shock in humans [20], and mice that are resilient to endotoxin may still be susceptible to infection. It may well be that the endotoxin response is not a good model for the late effects of sepsis.]

The disappointing trial results have forced many researchers to question not only the usefulness of the current mouse models for acute inflammation [9, 10; refs from 11], but also to rethink key aspects of the human response itself and the way these clinical trials are performed [12, 13, 14]. For instance, the emphasis has always been on the exuberant inflammatory reaction, but the subsequent immunosuppression may also be a major contributor to the disease. There is also substantial heterogeneity among patients [13, 14], which may explain why some patients have a good prognosis and others don’t. And some of the initially positive results in human trials have not been reproduced in later studies either (the benefit of intensive glucose control and corticosteroid treatment) [12]. Thus, is it fair to blame only the mouse studies?


The coverage by some media is grist to the mill of people who think animal studies are worthless anyway. But one cannot extrapolate these findings to other diseases. Furthermore, as referred to above, the researchers tested the gene profiles in only one single strain of mice, C57Bl6, meaning that “The findings of Seok et al. are solely applicable to the B6 strain of mice in the three models of inflammation they tested. They unduly generalize these findings to mouse models of inflammation in general.” [8]

It is true that animal studies, including rodent studies, have their limitations. But what are the alternatives? In vitro studies are often even more artificial, and direct clinical testing of new compounds in humans is not ethical.

Obviously, the final proof of effectiveness and safety of new treatments can only be established in human trials. No one will question that.

A lot can be said about why animal studies often fail to translate directly to the clinic [15]. Clinical disparities between the animal models and the clinical trials testing the treatment (as in sepsis) are one reason. Other important reasons may be methodological flaws in animal studies (e.g. no randomization, inappropriate statistics) and publication bias: non-publication of “negative” results appears to be prevalent in laboratory animal research [15, 16]. Despite their shortcomings, animal studies and in vitro studies offer a way to examine certain aspects of a process, disease or treatment.

In summary, this study confirms that the existing (C57Bl6) mouse model doesn’t resemble the human situation in the systemic response following acute traumatic injury or sepsis: the genomic response is entirely different, in magnitude, duration and types of changes in expression.

The findings are not new: the shortcomings of the mouse model(s) have long been known. It remains enigmatic why the researchers chose only one inbred strain of mice, and of all mice the B6 strain, which is less sensitive to endotoxin and develops acute kidney injury (part of organ failure) only at old age (young mice were used) [21]. In this paper from 2009 (!) various reasons are given why the animal models didn’t properly mimic the human disease and how this could be improved. The authors stress that:

“the genetically heterogeneous human population should be more accurately represented by outbred mice, reducing the bias found in inbred strains that might contain or lack recessive disease susceptibility loci, depending on selective pressures.”

Both Bill Barrington [8] and Mark Wanner [18, 19] propose the use of “diversity outbred cross or collaborative cross mice that provide additional diversity”. Indeed, “replicating genetic heterogeneity and critical clinical risk factors such as advanced age and comorbid conditions (..) led to improved models of sepsis and sepsis-induced AKI (acute kidney injury)”.

The authors of the PNAS paper suggest that genomic analysis can aid further in revealing which genes play a role in the perturbed immune response in acute inflammation, but it remains to be seen whether this will ultimately lead to effective treatments of sepsis and other forms of acute inflammation.

It also remains to be seen whether comprehensive genomic characterization will be useful in other disease models. The authors suggest, for instance, that genetic profiling may serve as a guide to develop animal models. A shotgun analysis of the expression of thousands of genes was useful in the present situation, because “the severe inflammatory stress produced a genomic storm affecting all major cellular functions and pathways in humans which led to sufficient perturbations to allow comparisons between the genes in the human conditions and their analogs in the murine models”. But a rough analysis of overall expression profiles may give little insight into the usefulness of other animal models, where genetic responses are more subtle.

And predicting what will happen is far less easy than confirming what is already known…

NOTE: as said, the coverage in news and blogs is again quite biased. The conclusion of a generally good Dutch science news site (the headline and lead suggested that animal models of immune diseases are crap [6]) was adapted after a critical discussion on Twitter (see here and here), and a link was added to this blog post. I wish this occurred more often…
In my opinion the most balanced summaries can be found at the science-based blogs Science-Based Medicine [11] and the NIH Director’s Blog [17], whereas “Understand Nutrition” [8] has an original point of view, which is further elaborated by Mark Wanner at Speaking of Research [19] and the Genetics and Your Health blog [20].

References

  1. Seok, J., Warren, H., Cuenca, A., Mindrinos, M., Baker, H., Xu, W., Richards, D., McDonald-Smith, G., Gao, H., Hennessy, L., Finnerty, C., Lopez, C., Honari, S., Moore, E., Minei, J., Cuschieri, J., Bankey, P., Johnson, J., Sperry, J., Nathens, A., Billiar, T., West, M., Jeschke, M., Klein, M., Gamelli, R., Gibran, N., Brownstein, B., Miller-Graziano, C., Calvano, S., Mason, P., Cobb, J., Rahme, L., Lowry, S., Maier, R., Moldawer, L., Herndon, D., Davis, R., Xiao, W., Tompkins, R., Abouhamze, A., Balis, U., Camp, D., De, A., Harbrecht, B., Hayden, D., Kaushal, A., O’Keefe, G., Kotz, K., Qian, W., Schoenfeld, D., Shapiro, M., Silver, G., Smith, R., Storey, J., Tibshirani, R., Toner, M., Wilhelmy, J., Wispelwey, B., & Wong, W. (2013). Genomic responses in mouse models poorly mimic human inflammatory diseases. Proceedings of the National Academy of Sciences. DOI: 10.1073/pnas.1222878110
  2. Drug Testing In Mice May Be a Waste of Time, Researchers Warn 2013-02-12 (science.slashdot.org)
  3. Susan M Love We are not mice 2013-02-14 (Huffingtonpost.com)
  4. Elbert Chu  This Is Why It’s A Mistake To Cure Mice Instead Of Humans 2012-12-20(richarddawkins.net)
  5. Derek Lowe. Mouse Models of Inflammation Are Basically Worthless. Now We Know. 2013-02-12 (pipeline.corante.com)
  6. Elmar Veerman. Waardeloos onderzoek. Proeven met muizen zeggen vrijwel niets over ontstekingen bij mensen. 2013-02-12 (wetenschap24.nl)
  7. Gina Kolata. Mice Fall Short as Test Subjects for Humans’ Deadly Ills. 2013-02-12 (nytimes.com)

  8. Bill Barrington. Are Mice Reliable Models for Human Disease Studies? 2013-02-14 (understandnutrition.com)
  9. Raven, K. (2012). Rodent models of sepsis found shockingly lacking Nature Medicine, 18 (7), 998-998 DOI: 10.1038/nm0712-998a
  10. Nemzek JA, Hugunin KM, & Opp MR (2008). Modeling sepsis in the laboratory: merging sound science with animal well-being. Comparative medicine, 58 (2), 120-8 PMID: 18524169
  11. Steven Novella. Mouse Model of Sepsis Challenged 2013-02-13 (http://www.sciencebasedmedicine.org/index.php/mouse-model-of-sepsis-challenged/)
  12. Wiersinga WJ (2011). Current insights in sepsis: from pathogenesis to new treatment targets. Current opinion in critical care, 17 (5), 480-6 PMID: 21900767
  13. Khamsi R (2012). Execution of sepsis trials needs an overhaul, experts say. Nature medicine, 18 (7), 998-9 PMID: 22772540
  14. Hotchkiss RS, Coopersmith CM, McDunn JE, & Ferguson TA (2009). The sepsis seesaw: tilting toward immunosuppression. Nature medicine, 15 (5), 496-7 PMID: 19424209
  15. van der Worp, H., Howells, D., Sena, E., Porritt, M., Rewell, S., O’Collins, V., & Macleod, M. (2010). Can Animal Models of Disease Reliably Inform Human Studies? PLoS Medicine, 7 (3) DOI: 10.1371/journal.pmed.1000245
  16. ter Riet, G., Korevaar, D., Leenaars, M., Sterk, P., Van Noorden, C., Bouter, L., Lutter, R., Elferink, R., & Hooft, L. (2012). Publication Bias in Laboratory Animal Research: A Survey on Magnitude, Drivers, Consequences and Potential Solutions PLoS ONE, 7 (9) DOI: 10.1371/journal.pone.0043404
  17. Dr. Francis Collins. Of Mice, Men and Medicine 2013-02-19 (directorsblog.nih.gov)
  18. Mark Wanner. Why mice may succeed in research when a single mouse falls short. 2013-02-15 (speakingofresearch.com) [repost, with introduction]
  19. Mark Wanner. Why mice may succeed in research when a single mouse falls short. 2013-02-13 (http://community.jax.org) [original post]
  20. Warren, H. (2009). Editorial: Mouse models to study sepsis syndrome in humans Journal of Leukocyte Biology, 86 (2), 199-201 DOI: 10.1189/jlb.0309210
  21. Doi, K., Leelahavanichkul, A., Yuen, P., & Star, R. (2009). Animal models of sepsis and sepsis-induced kidney injury Journal of Clinical Investigation, 119 (10), 2868-2878 DOI: 10.1172/JCI39421




HOT TOPIC: Does Soy Relieve Hot Flashes?

20 06 2011

The theme of the upcoming Grand Rounds, held on June 21st (1st day of the summer) at Shrink Rap, is “hot”. A bit far-fetched, but aah, you know… shrinks. Of course they assume that we will express Weiner-like exhibitionism at our blogs, or go into spicy details of hot sexpectations or other Penis Friday NCBI-ROFL posts. But no, not me, scientist and librarian to my bone marrow. I will stick to boring, solid science and will do a thorough search to find the evidence. Here I will discuss whether soy really helps to relieve hot flashes (also called hot flushes).

…As illustrated by this HOT picture, which I should post as well (CC from Katy Tresedder, Flickr):

Yes, many menopausal women plagued by hot flashes seek relief in soy or other phytoestrogens (estrogen-like chemicals derived from plants). I know, because I happen to have many menopausal women in my circle of friends who prefer taking soy over estrogen. They would rather not take normal hormone replacement therapy, because it can have adverse effects if taken for a longer time. Soy, on the other hand, is considered a “natural remedy” and harmless. Probably physiological doses of soy (food) are harmless, and therefore a better choice than the similarly “natural” black cohosh, which is suspected of causing liver injury and other adverse effects.

But is soy effective?

I did a quick search in PubMed and found a Cochrane Systematic Review from 2007 that was recently edited with no change to the conclusions.

This review looked at several phytoestrogens that were offered in several forms: dietary soy (9x) (powder, cereals, drinks, muffins), soy extracts (9x), red clover extracts (7x, including Promensil (5x)), genistein extract, flaxseed, hop extract and a Chinese medicinal herb.

Thirty randomized controlled trials with a total of 2730 participants met the inclusion criteria: the participants were women in or just before their menopause complaining of vasomotor symptoms (thus having hot flashes) for at least 12 weeks. The intervention was a food or supplement with high levels of phytoestrogens (not any other herbal treatment) and this was compared with placebo, no treatment or hormone replacement therapy.

Only 5 trials, using the red clover extract Promensil, were homogeneous enough to combine in a meta-analysis. The effect on one outcome (incidence of hot flashes) is shown below. As can be seen at a glance, Promensil had no significant effect, whether given in a low (40 mg/day) or a higher (80 mg/day) dose. This was also true for the other outcomes.

The other phytoestrogen interventions were very heterogeneous with respect to dose, composition and type. This was especially true for the dietary soy treatment. Although some of the trials showed a positive effect of phytoestrogens on hot flashes and night sweats, overall, phytoestrogens were no better than the comparisons.

Most trials were small, of short duration and/or of poor quality. Fewer than half of the studies (n=12) indicated that allocation had been concealed from the trial investigators.

One striking finding was that there was a strong placebo effect in most trials, with reductions in the frequency of hot flashes ranging from 1% to 59%.

I also found another systematic review in PubMed, by Bolaños R et al., that limited itself to soy. Other differences from the Cochrane systematic review (besides the much simpler search ;) ) were: inclusion of more recently published clinical trials, no inclusion of unpublished studies, and less strict exclusion on the basis of low methodological quality. Furthermore, genistein was (rightly) considered a soy product.

The group of studies that used a soy dietary supplement showed the highest heterogeneity. Overall, the results “showed a significant tendency” (?) in favor of soy. Nevertheless the authors conclude (similar to the Cochrane authors) that it is still difficult to establish conclusive results, given the high heterogeneity found in the studies (but apparently the data could still be pooled?).

References

  • Lethaby A, Marjoribanks J, Kronenberg F, Roberts H, Eden J, & Brown J. (2007). Phytoestrogens for vasomotor menopausal symptoms. Cochrane Database of Systematic Reviews (4). DOI: 10.1002/14651858.CD001395.pub3
  • Bolaños R, Del Castillo A, & Francia J (2010). Soy isoflavones versus placebo in the treatment of climacteric vasomotor symptoms: systematic review and meta-analysis. Menopause (New York, N.Y.), 17 (3), 660-6 PMID: 20464785




A Filter for Finding “All Studies on Animal Experimentation in PubMed”

29 09 2010

For an introduction to search filters you can first read this post.

Most people searching PubMed try to get rid of publications about animals. But basic scientists and lab animal technicians just want to find those animal studies.

PubMed has built-in filters for that: the limits. There is a limit for “humans” and a limit for “animals”. But that is not good enough to find each and every article about humans or animals, respectively. The limits are MeSH (Medical Subject Headings, or index terms), and these are by definition not added to new articles that haven’t been indexed yet, to name the main disadvantage…
Thus, to find all papers, one should at least also search for other relevant MeSH and text words (words in title and abstract).

A recent paper published in Laboratory Animals describes a filter for finding “all studies on animal experimentation in PubMed”, to facilitate “writing a systematic review (SR) of animal research”.

As the authors rightly emphasize, SRs are not common practice in the field of animal research. Knowing what has already been done can prevent unnecessary duplication of animal experiments and thus unnecessary animal use. The authors have interesting ideas, like registration of animal studies (similar to clinical trial registers).

In this article they describe the design of an animal filter for PubMed. The authors describe their filter as follows:

“By using our effective search filter in PubMed, all available literature concerning a specific topic can be found and read, which will help in making better evidence-based decisions and result in optimal experimental conditions for both science and animal welfare.”

Is this conclusion justified?

Design of the filter

Their filter is subjectively derived: the terms are “logically” chosen.

[1] The first part of the animal filter consists of only MeSH-terms.

You can’t use animals[mh] (mh = MeSH) as a search term, because MeSH terms are automatically exploded in PubMed. This means that narrower terms (lower in the tree) are also searched. If “Animals” were allowed to explode, the search would include the MeSH “Humans”, which is at the end of one tree (primates etc., see Fig. below).

Therefore the MeSH part of their search consists of:

  1. animals[mh:noexp]: this finds only articles that are indexed with “animals” but not with its narrower terms. Notably, this is identical to the PubMed Limit “animals”.
  2. Exploded animal-specific MeSH terms not having humans as a narrower term, e.g. “fishes”[MeSH Terms].
  3. Non-exploded MeSH in those cases where humans occur in the same branch, like “primates”[MeSH Terms:noexp].
  4. In addition, two other MeSH terms are used: “animal experimentation”[MeSH Terms] and “models, animal”[MeSH Terms].

[2] The second part of the search filter consists of terms in the title and abstract (command: [tiab]).

The terms are taken from relevant MeSH, two reports about animal experimentation in the Netherlands and in Europe, and the experience of the authors, who are experts in the field.

The authors use this string for non-indexed records only (command: NOT medline[sb]). Thus this part is only meant to find records that haven’t (yet) been indexed, but in which (specific) animals are mentioned by the author in the title or abstract. Synonyms and spelling variants have been taken into account.
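To make the two-part structure concrete, it can be sketched in a few lines of Python. The term lists below are illustrative stand-ins of my own, not the authors’ full strings (their actual filter is about a page long):

```python
# Sketch of the filter's two-part structure (illustrative terms only).
# Part 1: MeSH terms, applied to indexed records.
mesh_part = ('animals[mh:noexp] OR "animal experimentation"[mh] '
             'OR "models, animal"[mh] OR fishes[mh] OR primates[mh:noexp]')

# Part 2: title/abstract words, restricted to records not (yet) indexed
# for MEDLINE via "NOT medline[sb]".
tiab_part = ('(mouse[tiab] OR mice[tiab] OR rat[tiab] OR rats[tiab] '
             'OR pig*[tiab] OR dog*[tiab]) NOT medline[sb]')

animal_filter = f"({mesh_part}) OR ({tiab_part})"
print(animal_filter)
```

Combined with a topic search, the whole query then simply becomes `(topic) AND (animal_filter)`.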

Apparently the authors have chosen NOT to search for these text words in indexed records. Presumably searching indexed records for mentions of animals gives too much noise, but the authors do not discuss why this restriction was necessary.

This search string is extremely long. Partly because truncation isn’t used for the longer words: e.g. nematod*[tiab] instead of nematoda[tiab] OR nematode[tiab] OR nematodes[tiab]. Partly because they aim for completeness. However, the usefulness of the terms as such hasn’t been verified (see below).

Search strategies can be freely accessed here.

Validation

The filter is mainly validated against the PubMed Limit “Animals”.

The authors assume that the PubMed Limits are “the most easily available and most obvious method”. However, I know few librarians or authors of systematic reviews who would solely apply this so-called ‘regular method’. In the past I have used exactly the same MeSH terms (1) and the main text words (2) as included in their filter.

Considering that the filter includes the PubMed Limit “animals” [1.1], it does not come as a surprise that the sensitivity of the filter exceeds that of the PubMed Limit “animals”…

Still, the sensitivity (106%) is not really dramatic: 6% more records are found, with the PubMed Limit “animals” set at 100%.

Apparently records are very well indexed with the MeSH “animals”. Few true animal records are missed, because “animals” is a check tag. A check tag is a MeSH that is looked for routinely by indexers in every journal article. It is added to the record even if it isn’t the main (or major) point of an article.

Is an increased sensitivity of approximately 6% sufficient to conclude that this filter “performs much better than the current alternative in PubMed”?

No. It is not only important that MORE is found, but also to what degree the extra hits are relevant. Surprisingly, the authors ONLY determined SENSITIVITY, not specificity or precision.
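For readers less familiar with these measures: sensitivity is the fraction of all relevant records a filter retrieves, while precision is the fraction of retrieved records that are actually relevant. A tiny Python sketch with made-up numbers (not taken from the paper) shows why reporting sensitivity alone is not enough:

```python
def sensitivity(relevant_retrieved, relevant_total):
    # Fraction of all relevant records that the filter retrieves.
    return relevant_retrieved / relevant_total

def precision(relevant_retrieved, total_retrieved):
    # Fraction of retrieved records that are actually relevant.
    return relevant_retrieved / total_retrieved

# Hypothetical gold standard of 100 relevant records.
# A filter that retrieves a huge chunk of the database is perfectly
# sensitive, yet nearly useless in terms of precision:
print(sensitivity(100, 100))   # 1.0
print(precision(100, 10_000))  # 0.01
```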

There are many irrelevant hits, partly caused by the inclusion of animal population groups[mesh], which has some narrower terms that are often not used for experimentation, e.g. endangered species.

Thus even after omission of animal population groups[mesh], the filter still gives hits that are evidently NOT laboratory animal experiments, mainly caused by the inclusion of invertebrates like plankton.

Most other MeSH terms are not extremely useful either. Even terms such as animal experimentation[mh] and models, animal[mh] are seldom assigned to experimental studies that lack animals as a MeSH.

According to the authors, the MeSH “Animals” will not retrieve studies solely indexed with the MeSH term Mice. However, the first records missed with mice[mesh] NOT animals[mh:noexp] are from 1965, when “animals” apparently wasn’t yet used as a check tag in addition to the specific ‘animal’ MeSH.

Thus presumably the MeSH filter can be much shorter, and need only contain specific animal MeSH (rats[mh], mice[mh] etc.) when publications older than 1965 are also required.

[Figure: The types of vertebrate animals used in lab research. Image via Wikipedia]

Their text word string (2) is also extremely long. Apart from the lack of truncation, most animal terms are not relevant for most searches: two-thirds of the experiments are done with rodents (see Fig.). The other animals are often used for specific experiments (zebrafish, Drosophila) or appear in another context, not related to animal experiments, such as:

swine flu, avian flu, milk production by cows, allergy to milk products or mites, stings by insects and bites by dogs, and of course fish, birds, cattle and poultry as food, fetal calf serum in culture medium, but also vaccination with “mouse products” in humans. Thus most of the terms produce noise for most topics. An example below (found by birds[mesh] :-)

On the other hand, strains of mice and rats are missing from the search string, e.g. BALB/c and Wistar.

Extremely long search strings (one page) are also annoying to use. However, the main issue is whether the extra noise matters, because the filter is meant to find all experimental animal studies.

As Carlijn Hooijmans notes correctly, the filters are never used on their own, only in combination with topic search terms.

Hooijmans et al. have therefore “validated” their filter with two searches: “validated” between quotation marks, because they have only compared the numbers of hits, and thus the increase in sensitivity.

Their first topic is the use of probiotics in experimental pancreatitis (see appendix).

Their filter (combined with the topic search) retrieved 37 items against 33 items with the so-called “regular method”: an increase in sensitivity of 12.1%.

After updating the search I got 38 vs 35 hits. Two of the 3 extra hits obtained with the broad filter are relevant and are missed with the PubMed Limit “animals”, because the records haven’t been indexed. They could also have been found with the text words pig*[tiab] or dog*[tiab]. Thus the filter is OK for this purpose, but unnecessarily long. The MeSH part of the filter had NO added value compared with animals[mh:noexp].

Since there are only 148 hits without the use of any filters, researchers could also simply screen all hits. Alternatively, there is a trick to safely exclude human studies:

NOT (humans[mh] NOT animals[mh:noexp])

With this double negation you exclude PubMed records that are indexed with humans[mh], as long as these records aren’t indexed with animals[mh:noexp] too. It is far “safer” than limiting to “animals”[mesh:noexp] only. We use a similar approach to “exclude” animals when we search for human studies.
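As a sketch, this trick can be wrapped in a tiny query-building helper (the function name is mine; the field tags follow standard PubMed syntax):

```python
def exclude_pure_human(topic_query):
    # Keep records indexed with animals[mh:noexp] (and all non-indexed
    # records); drop only records indexed with humans[mh] alone.
    return f"({topic_query}) NOT (humans[mh] NOT animals[mh:noexp])"

print(exclude_pure_human("probiotics AND pancreatitis"))
# (probiotics AND pancreatitis) NOT (humans[mh] NOT animals[mh:noexp])
```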

This extremely simple filter yields 48 hits, finding all hits found with the large animal filter (plus 10 irrelevant hits).

Such a simple filter can easily be used for searches with relatively few hits, but gives too many irrelevant hits in case of a high yield.

The second topic is food restriction. 9,280 records were obtained with the Limit “animals”, whereas this strategy combined with the complete filter retrieved 9,650 items. The sensitivity of this search strategy was therefore 104%: 4% extra hits were obtained.

The MeSH part added little to the search: only 21 extra hits. The relevant hits were (again) only from before 1965.

The text word part of the search finds relevant new articles, although there are quite some irrelevant findings too, e.g. about dieting or obtaining proteins from chicken.

4% isn’t a lot extra, but the aim of the researchers is to find all there is.

However, it is questionable whether researchers want to find every single experiment or observation done in the animal kingdom. If I were to plan an experiment on whether food restriction lowers the risk of prostate cancer in transgenic mice, do I need to know the effects of food restriction on Drosophila, nematodes, salmon or even chicken, on whatever outcome? Would I like to screen 10,000 hits?

Probably most researchers would like separate filters for rodents and other laboratory animals (primates, dogs) and for work on Drosophila or fish. In some fields there might also be a need to filter clinical trials and reviews out.

Furthermore, it is not only important to have a good filter but also a good search.

The topic searches in the current paper are not ideal: they contain overlapping terms (food restriction is also found by food AND restriction) and miss important MeSH (Food Deprivation, Fasting and the broader term of Caloric Restriction, Energy Intake, are assigned more often to records about food deprivation than Caloric Restriction).

Their search:

(“food restriction”[tiab] OR (“food”[tiab] AND “restriction”[tiab]) OR “feed restriction”[tiab] OR (“feed”[tiab] AND “restriction”[tiab]) OR “restricted feeding”[tiab] OR (“feeding”[tiab] AND “restricted”[tiab]) OR “energy restriction”[tiab] OR (“energy”[tiab] AND “restriction”[tiab]) OR “dietary restriction”[tiab] OR (“dietary”[tiab] AND “restriction”[tiab]) OR “caloric restriction”[MeSH Terms] OR (“caloric”[tiab] AND “restriction”[tiab]) OR “caloric restriction”[tiab])
might for instance be changed to:

Energy Intake[mh] OR Food deprivation[mh] OR Fasting[mh] OR food restrict*[tiab] OR feed restrict*[tiab] OR restricted feed*[tiab] OR energy restrict*[tiab] OR dietary restrict*[tiab] OR caloric restrict*[tiab] OR calorie restrict*[tiab] OR diet restrict*[tiab]

You do not expect such incomplete strategies from people who repeatedly stress that “most scientists do not know how to use PubMed effectively” and that “many researchers do not use ‘Medical Subject Headings’ (MeSH terms), even though they work with PubMed every day”…

Combining this modified search with their animal filter yields 21,920 hits instead of the 10,335 found with their “food deprivation” search and their animal filter. A sensitivity of 212%!!! Now we are talking! ;) (And yes, there are many new relevant hits found.)

Summary

The paper describes the performance of a subjective search filter meant to find all experimental studies performed with laboratory animals. The authors have merely “validated” this filter against the PubMed Limit “animals”. In addition, they only determined sensitivity: on average 7% more hits were obtained with the new animal filter than with the PubMed Limit alone.

The authors have not determined the specificity or precision of the filter, not even for the two topics to which they applied it. A quick look at the results shows that the MeSH terms other than the PubMed Limit “animals” contributed little to the enhanced sensitivity. The text word part of the filter yields more relevant hits. Still, depending on the topic, many irrelevant records are found, because it is difficult to separate animals as food, allergens etc. from laboratory animals used in experiments, and because the filter is developed to find every single animal in the animal kingdom, including poultry, fish, nematodes, flies, endangered species and plankton. Another (hard to avoid) “contamination” comes from in vitro experiments with animal cells, animal products used in clinical trials, and narrative reviews.

In practice, only parts of the search filter seem useful for most systematic reviews, especially if these reviews are not meant to give an overview of all findings in the universe, but are needed to check whether a similar experiment hasn’t already been done. It seems impractical if researchers have to make a systematic review, checking, summarizing and appraising 10,000 records, each time they start a new experiment.

Perhaps I’m somewhat too critical, but the cheering and triumphant tone of the paper, in combination with a too-simple design and a lack of proper testing of the filter, asked for a critical response.

Credits

Thanks to Gerben ter Riet for alerting me to the paper. He also gave the tip that the paper can be obtained here for free.

References

  1. Hooijmans CR, Tillema A, Leenaars M, & Ritskes-Hoitinga M (2010). Enhancing search efficiency by means of a search filter for finding all studies on animal experimentation in PubMed. Laboratory animals, 44 (3), 170-5 PMID: 20551243

———————-





Collaborating and Delivering Literature Search Results to Clinical Teams Using Web 2.0 Tools

8 08 2010

ResearchBlogging.org
There seem to be two camps in the library, medical and many other worlds: those who embrace Web 2.0, because they consider it useful for their practice, and those who are unaware of Web 2.0 or think it is just a fad. There are only a few ways the Web 2.0-critical people can be convinced: by arguments (hardly), by studies that show evidence of its usefulness, and by examples of what works and what doesn’t.

The paper by Shamsha Damani and Stephanie Fulton, published in the latest Medical Reference Services Quarterly [1], falls in the latter category. Perhaps the name Shamsha Damani rings a bell: she is a prominent twitterer and has written guest posts at this blog on several occasions (here, here, here and here).

As clinical librarians at The University of Texas MD Anderson Cancer Center, Shamsha and Stephanie are immersed in clinical teams and provide evidence-based literature for various institutional clinical algorithms designed for patient care.

These were some of the problems the clinical librarians encountered when sharing the results of their searches with the teams by classic methods (email):

First, team members were from different departments and were dispersed across the sprawling hospital campus. Since the teams did not meet in person very often, it was difficult for the librarians to receive timely feedback on the results of each literature search. Second, results sent from multiple database vendors were either not received or were overlooked by team members. Third, even if users received the bibliography, they still had to manually search for and locate the full text of articles. The librarians also experimented with e-mailing EndNote libraries; however, many users were not familiar with EndNote and did not have the time to learn how to use it. E-mails in general tended to get lost in the shuffle, and librarians often found themselves re-sending e-mails with attachments. Lastly, it was difficult to update the results of a literature search in a consistent manner and obtain meaningful feedback from the entire team.

Therefore, they tried several Web 2.0 tools for sharing search results with their clinical teams.
In their article, the librarians share their experience with the various applications they explored that allowed centralization of the search results, provided easy online access, and enabled collaboration within the group.

Online reference management tools were the librarians’ first choice, since these are specifically designed to help users gather and store references from multiple databases and to share the results. Of the available tools, RefWorks was not tested, because it required two sets of usernames and passwords. EndNote Web, in contrast, can be accessed from any computer with a single username and password. EndNote Web is suitable for downloading and managing references from multiple databases, for retrieving full-text papers and for online collaboration. In theory, that is. In practice, the team members experienced several difficulties: trouble remembering their usernames and passwords, and problems using the link resolver and navigating to the full text of each article and back to the EndNote Web homepage. Furthermore, accessing the full text of each article was considered too laborious.

Next, free social bookmarking sites were tested, which allow users to bookmark web sites and articles, to share the bookmarks, and to access them from any computer. However, most team members didn’t create an account and could therefore not use the collaborative features. The bookmarking sites were deemed “user-unfriendly”, because (1) the overall layout and the presentation of results, with their many links, were experienced as confusing, (2) the sorting options were not suitable for this purpose, and (3) it was impossible to search within the abstracts, which were not part of the bookmarked records. This was true both for Delicious and for Connotea, even though the latter is more apt for science and medicine, includes bibliographic information, and allows import and export of references from other systems. Another drawback was that the librarians needed to bookmark and comment on each individual article.

Wikis (PBworks and SharePoint) appeared most user-friendly, because they were intuitive and easy to use: the librarians had created a shared username and password for the entire team, the wiki was behind the hospital’s firewall (preferred by the team), and users could access the articles with one click. For the librarians it was labor-intensive, as they annotated the bibliographies, published them on the wiki, and added persistent links to each article. It is not clear from the article how final reference lists were created by the team afterwards. Probably by cut & paste, because wikis are not suitable as a word processor, nor for the import and export of references.

Some Remarks

It is informative to read the pros and cons of the various Web 2.0 tools for collaborating and delivering search results. For me, it was even more valuable to read how the research was done. As the authors note (quote):

There is no ‘‘one-size-fits-all’’ approach. Each platform must be tested and evaluated to see how and where it fits within the user’s workflow. When evaluating various Web 2.0 technologies, librarians should try to keep users at the forefront and seek feedback frequently in order to provide better service. Only after months of exploration did the librarians at MD Anderson Cancer Center learn that their users preferred wikis and 1-click access to full-text articles. Librarians were surprised to learn that users did not like the library’s link resolvers and wanted a more direct way to access information.

Indeed, there is no ‘‘one-size-fits-all’’ approach. For that reason too, the results obtained may only apply in certain settings.

I was impressed by the level of involvement of the clinical librarians and the time they put not only into searching, but also into presenting the data, ranking the references according to study design, publication type and date, and annotating the references. I hope they prune the results as well, because applying this procedure to 1,000 or more references is no joke. And although it may be ideal for library users, not all librarians work like this. I know of no Dutch librarian who does. Because of the workload, such a ready-made wiki may not be feasible for many librarians.

The librarians’ starting point was to find an easy and intuitive web-based tool that allowed collaboration and sharing of references.
The emphasis seems to be more on the sharing, since end-users did not seem to collaborate via the wikis themselves. I also wonder whether the simpler and free Google Docs wouldn’t fulfill most of the needs. In addition, some of the tools might have been perceived as more useful if users had received some training beforehand.
The training we offer in Reference Manager is usually sufficient to learn to work efficiently with this quite complex reference management tool. Of course, desktop software is not suitable for online collaboration (although references can easily be exported to a simpler system), but a short training session may take away most of the barriers people feel when using a new tool (with the advantage that they can use the tool for other purposes as well).

In short,

Of the Web 2.0 tools tested, wikis were the most intuitive and easy-to-use tools for collaborating with clinical teams and for delivering literature search results. Although easy for end-users, this approach seems very time-consuming for the librarians, who prepare ready-to-use, annotated lists.

The clinical teams of MD Anderson must count themselves very lucky with their clinical librarians.

Reference
Damani S, & Fulton S (2010). Collaborating and delivering literature search results to clinical teams using web 2.0 tools. Medical reference services quarterly, 29 (3), 207-17 PMID: 20677061


———————————

Added: August 9th 2010, 21:30

On the basis of the comments below (Annemarie Cunningham) and on Twitter (@Dymphie – here and here (Dutch)), I think it is a good idea to include a figure of one of the published wiki lists.

It looks beautiful, but -as said- where is the collaborative aspect? Like Dymphie I have the impression that these lists are no different from the “normal” reference lists. Or am I missing something? I also agree with Dymphie that instructing people in Reference Manager may be much more efficient for this purpose.

It is interesting to read Christina Pikas’ view of this paper. At her blog Christina’s LIS Rant (which has just moved to the new Scientopia platform), Christina first describes how she delivers her search results to her customers and which platforms she uses for this. Then she shares some thoughts about the paper, like:

  • they (the authors) ruled out RefWorks because it required two sets of logins/passwords – hmm, why not RefWorks with RefShare? Why two sets of passwords?
  • SharePoint wikis suck. I would probably use some other type of web part – even a discussion board entry for each article.
  • they really didn’t use the 2.0 aspects of the 2.0 tools – particularly in the case of the wiki. The most valued aspects were access without a lot of logins and then access to the full text without a lot of clicks.

Like Christina,  I would be interested in hearing other approaches – particularly using newer tools.






What One Short Night’s Sleep does to your Glucose Metabolism

11 05 2010

ResearchBlogging.org
As a blogger I regularly sleep 3-5 hours just to finish a post. I know this affects how I feel the next day. I also know short nights don’t promote my clear-headedness, and I recognize the short-term effects on memory, cognitive function, reaction time and mood (irritability) depicted in the picture below. But I had no idea of any effect on heart disease, obesity and the risk of type 2 diabetes.

Indeed, short sleep duration is consistently associated with the development of obesity and diabetes in observational studies (see several recent systematic reviews, 3-5). However, as explained before, an observational design cannot establish causality. For instance, diabetes type 2 may be the consequence of other lifestyle aspects of people who spend little time sleeping, or sleep problems might be a consequence rather than a cause of diabetogenic changes.

Diabetes is basically a condition characterized by difficulties processing carbohydrates (sugars, glucose). Type 2 diabetes has a slow onset. First there is a gradual defect in the body’s ability to use insulin, called insulin resistance. Insulin is a pancreatic hormone that increases glucose utilization in skeletal muscle and fat tissue and suppresses glucose production by the liver, thereby lowering blood glucose levels. Over time, damage may occur to the insulin-producing cells in the pancreas, which may ultimately progress to the point where the pancreas doesn’t make enough insulin and injections are needed (source: about.com).

Since it is such a slow process one would not expect insulin resistance to change overnight. And certainly not by just partial sleep deprivation of 4-5 hrs of sleep.

Still, this is the outcome of a study performed by PhD student Esther Donga. Esther belongs to the research group of Romijn, which also studied the effects of previous cortisol excess on cognitive function in Cushing’s disease, summarized here earlier.

Donga et al. have studied the effects of one night of sleep restriction on insulin sensitivity in 9 healthy lean individuals [1] and in 7 patients with type 1 diabetes [2]. The outcomes were practically the same, but since the results in healthy individuals (having no problems with glucose metabolism, weight or sleep) are most remarkable, I will confine myself to the study in healthy people.

The study design is relatively simple. Five men and four healthy women (mean age 45 years) with a lean body weight and normal  sleep pattern participated in the study. They were not using medication affecting sleep or glucose metabolism and were asked to adhere to their normal lifestyle pattern during the study.

There were three study days, separated by intervals of at least three weeks. The volunteers were admitted to the clinical research center the night before each study day to become accustomed to sleeping there. They fasted throughout these nights and spent 8.5 h in bed. The subjects were randomly assigned to sleep deprivation on either the second or the third occasion; during that night they were only allowed to sleep from 1 am to 5 am, to ensure equal compression of both non-REM and REM sleep stages.

(skip blue paragraphs if you are not interested in the details)

Effects on insulin sensitivity were determined on the day after the second and third nights (one normal and one short night of sleep) by the gold standard for quantifying insulin resistance: the hyperinsulinemic euglycemic clamp. This method uses catheters to infuse insulin and glucose into the bloodstream. Insulin is infused to reach a steady-state insulin level in the blood, and insulin sensitivity is determined by measuring the amount of glucose necessary to compensate for the increased insulin level without causing hypoglycemia (low blood sugar). (See the figure below, and a more elaborate description at Diabetesmanager (PBworks).)
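As a rough illustration (the numbers below are invented, not taken from the paper): at steady state, the glucose infusion rate needed to keep blood glucose constant is itself the read-out of whole-body insulin sensitivity, so a lower infusion rate after the short night means reduced sensitivity:

```python
# Invented example values in mg glucose per kg body weight per minute.
gir_normal_sleep = 8.0   # hypothetical infusion rate after a normal night
gir_short_sleep = 6.0    # hypothetical rate after the 4-h night (~25% lower)

reduction = (1 - gir_short_sleep / gir_normal_sleep) * 100
print(f"insulin sensitivity reduced by ~{reduction:.0f}%")  # prints "~25%"
```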

Prior to beginning the hyperinsulinemic period, basal blood samples were taken and labeled [6,6-2H2]glucose was infused  for assessment of glucose kinetics in the basal state. At different time-points concentrations of glucose, insulin, and plasma nonesterified fatty acids (NEFA) were measured.

The sleep stages were differently affected by the curtailed sleep duration: the proportion of stage III sleep was greater (P < 0.007) and that of stage II sleep smaller (P < 0.006) during the sleep-deprived night.

Partial sleep deprivation did not alter basal levels of glucose, nonesterified fatty acids (NEFA), insulin, glucagon, or cortisol measured the following morning, nor did it affect basal endogenous glucose production.

However, during the CLAMP-procedure there were significant alterations on the following parameters:

  • Endogenous glucose production – increased by approximately 22% (p < 0.017), indicating hepatic insulin resistance.
  • Rate of glucose disposal – decreased by approximately 20% (p < 0.009), indicating decreased peripheral insulin sensitivity.
  • Glucose infusion rate – approximately 25% lower after the night of reduced sleep duration (p < 0.001). This is in agreement with the above findings: less extra glucose was needed to maintain plasma glucose levels.
  • NEFA – increased by 19% (p < 0.005), indicating decreased insulin sensitivity of lipolysis (the breakdown of triglycerides into free fatty acids).

The main novelty of the present study is the finding that one single night of shortened sleep is sufficient to reduce insulin sensitivity (of different metabolic pathways) in healthy men and women.

This is in agreement with observational studies showing an association between sleep deprivation and obesity/insulin resistance/diabetes (3-5). It also extends results from previous experimental studies (summarized in the paper) that documented effects on glucose tolerance after multiple nights of sleep reduction (to 4 h) or after total sleep deprivation.

The authors speculate that the negative effects of multiple nights of partial sleep restriction on glucose tolerance can be reproduced, at least in part, by only a single night of sleep deprivation.

And the media conclude:

  • just one night of short sleep duration can induce insulin resistance, a component of type 2 diabetes (Science Daily)
  • healthy people who had just one night of short sleep can show signs of insulin resistance, a condition that often precedes Type 2 diabetes. (Medical News Today)
  • even a single of night of sleep deprivation can cause the body to show signs of insulin resistance, a warning sign of diabetes (CBS-news)
  • And this was of course the message that caught my eye in the first place: “Gee, one night of bad sleep can already disturb your glucose metabolism in such a way that you arrive at the first stage of diabetes: insulin resistance!…Help!”

    First, “insulin resistance” evokes a different association than “partial insulin resistance” or “somewhat lower insulin sensitivity” (which is what this study demonstrated). We interpret insulin resistance as a disorder that will eventually lead to diabetes, but perhaps adaptations in insulin sensitivity are just a normal phenomenon, a way to cope with normal fluctuations in exercise, diet and sleep. Or a consequence of other adaptive processes, like changes in the activity of the autonomic nervous system in response to a short sleep duration.

    Just as blood lipids will be high after a lavish dinner, or even after a piece of chocolate. And just as blood cortisol will rise with exercise, inflammation or stress. That is normal homeostasis. In this way the body adapts to changing conditions.

    Similarly – and it is mere coincidence that I saw Neuroskeptic’s post about this study today – an increase in children’s blood cortisol levels when ‘dropped off’ at daycare doesn’t mean that this small increase in cortisol is bad for them. And it certainly doesn’t mean that you should avoid putting toddlers in daycare, as Oliver James concludes, because “high cortisol has been shown many times to be a correlate of all manner of problems”. As Neuroskeptic explains:

    Our bodies release cortisol to mobilize us for pretty much any kind of action. Physical exercise, which of course is good for you in pretty much every possible way, cause cortisol release. This is why cortisol spikes every day when you wake up: it helps give you the energy to get out of bed and brush your teeth. Maybe the kids in daycare were just more likely to be doing stuff than before they enrolled.

    Extremely high levels of cortisol over a long period certainly do cause plenty of symptoms including memory and mood problems, probably linked to changes in the hippocampus. And moderately elevated levels are correlated with depression etc, although it’s not clear that they cause it. But a rise from 0.3 to 0.4 is much lower than the kind of values we’re talking about there.

    So the same may be true for a small, temporary decrease in insulin sensitivity. Of course insulin resistance can be a bad thing if blood sugar stays elevated. And it is conceivable that bad sleep habits contribute to this (certainly when combined with heavy alcohol use and junk food).

    What is remarkable (and not discussed by the authors) is that the changes in sensitivity were only “obvious” (by eyeballing) in 3-4 volunteers in all 4 tests. Was the insulin resistance unaffected in the same persons in all 4 tests or was the variation just randomly distributed? This could mean that not all persons are equally sensitive.

    It should be noted that the authors themselves remain rather reserved about the consequences of their findings for normal individuals. They conclude that “This physiological observation may be of relevance for variations in glucoregulation in patients with type 1 and type 2 diabetes” and suggest that “interventions aimed at optimization of sleep duration may be beneficial in stabilizing glucose levels in patients with diabetes.”
    Of course, their second article, in diabetic persons [2], better warrants this conclusion. Their specific advice is not directly relevant to healthy individuals.

    References

    1. Donga E, van Dijk M, van Dijk JG, Biermasz NR, Lammers GJ, van Kralingen KW, Corssmit EP, & Romijn JA (2010). A Single Night of Partial Sleep Deprivation Induces Insulin Resistance in Multiple Metabolic Pathways in Healthy Subjects. The Journal of clinical endocrinology and metabolism PMID: 20371664
    2. Donga E, van Dijk M, van Dijk JG, Biermasz NR, Lammers GJ, van Kralingen KW, Hoogma RP, Corssmit EP, & Romijn JA (2010). Partial sleep restriction decreases insulin sensitivity in type 1 diabetes. Diabetes care PMID: 20357381
    3. Nielsen LS, Danielsen KV, & Sørensen TI (2010). Short sleep duration as a possible cause of obesity: critical analysis of the epidemiological evidence. Obesity reviews : an official journal of the International Association for the Study of Obesity PMID: 20345429
    4. Monasta L, Batty GD, Cattaneo A, Lutje V, Ronfani L, van Lenthe FJ, & Brug J (2010). Early-life determinants of overweight and obesity: a review of systematic reviews. Obesity reviews : an official journal of the International Association for the Study of Obesity PMID: 20331509
    5. Cappuccio FP, D’Elia L, Strazzullo P, & Miller MA (2010). Quantity and quality of sleep and incidence of type 2 diabetes: a systematic review and meta-analysis. Diabetes care, 33 (2), 414-20 PMID: 19910503
    From the methods section of the paper [1]:

    The subjects were studied on 3 d, separated by intervals of at least 3 wk. Subjects kept a detailed diary of their diet and physical activity for 3 d before each study day and were asked to maintain a standardized schedule of bedtimes and mealtimes in accordance with their usual habits. They were admitted to our clinical research center the night before each study day, and spent 8.5 h in bed from 2300 to 0730 h on all three occasions. Subjects fasted throughout these nights from 2200 h. The first study day was included to let the subjects become accustomed to sleeping in our clinical research center. Subjects were randomly assigned to sleep deprivation on either the second (n = 4) or third (n = 5) occasion. During the night of sleep restriction, subjects spent 8.5 h in bed but were only allowed to sleep from 0100 to 0500 h. They were allowed to read or watch movies in an upward position during the awake hours, and their wakefulness was monitored and assured if necessary.

    The rationale for essentially broken sleep deprivation from 2300 to 0100 h and from 0500 to 0730 h, as opposed to sleep deprivation from 2300 to 0300 h or from 0300 to 0730 h, was that in both conditions the time in bed was centered at the same time, i.e. approximately 0300 h. Slow-wave sleep (i.e. stage III of non-REM sleep) is thought to play the most important role in metabolic, hormonal, and neurophysiological changes during sleep. Slow-wave sleep mainly occurs during the first part of the night, whereas REM sleep predominantly occurs during the latter part of the night (12). We used broken sleep deprivation to achieve a more equal compression of both non-REM and REM sleep stages. Moreover, we used the same experimental conditions for partial sleep deprivation as previously used in other studies (7, 13) to enable comparison of the results.




    More about the Research Blogging Awards

    24 03 2010

    In my previous post I mentioned that the winners of the very first edition of the Research Blogging Awards are now known.

    In Beyond the book* you can hear the First Research Blogging Awards announced (see post).
    Here are the podcast and the transcript of the live interview with the award organizers Dave Munger of ResearchBlogging.org and Joy Moore of Seed Media.

    Dave and Joy talk about blogs in the research space and the reasons behind some of the winners, which include Not Exactly Rocket Science, Epiphenom, BPS Research Digest and Culturing Science.

    In the interview Dave and Joy not only talk about the winners but also discuss why it is important that science bloggers write about peer review and form a community. It is also meant “to give people the broader picture about the state of research blogging today online and how all of this is helping to promote science and science literacy and culture throughout the world.”

    Two excerpts from the transcript, both from Moore, which highlight why research blogging is important:

    (…..) and what we’re seeing, and it’s quite exciting, is that bloggers, scientist bloggers around the world are putting a lot of very, very thoughtful effort into spontaneously writing about peer reviewed research in a way that is very similar to what you’ll see in say the news and views sections of some of the top science journals. And so what we’re able to see is not only a broader spectrum of coverage of peer reviewed research and interpretation, but we’re also seeing the immediate accessibility to that interpretation through the blogs and it’s open and it’s free and so it’s really opening up the accessibility to views and interpretations of research in a way that we’ve never seen before.

    (…..)  One of the most critical aspects of being not only a scientist, but also a blogger is ensuring that you get your work out there and you have recognition and attribution for it and therefore, to continue to encourage the Research Blogging activity, we feel that we can help play a role by ensuring that the bloggers are recognized for their work.

    *Beyond the Book is an educational presentation of the not for profit Copyright Clearance Center, with conferences and seminars featuring leading authors and editors, publishing analysts, and information technology specialists.




    Researchblogging Awards. Beaten by a (Former) Rat.

    23 03 2010

    The winners of the Researchblogging contest have been selected.

    I was rudely confronted with the harsh reality that I lost to a fellow Philosophy, Research, or Scholarship blogger: Richard Grant of Confessions of a (former) Lab Rat (previously Life of a Lab Rat).

    Very subtly, Richard just left a note: “Sorry”.

    “Thanks” Richard! And congratulations from the bottom of my heart… (no kidding, I really mean congrats!)
    But in one respect you were wrong. You said: “We don’t have the sort of blogs that win awards” Well at least you were half wrong. ;)

    Ed Yong of Not Exactly Rocket Science deserves a special mention, because he won in three (!) categories: Research Blog of the Year, Blog Post of the Year and Best Lay-Level Blog. So if you don’t know this blogger, it may well be worthwhile to take a look at his blog.

    Of course this is also true for all other winners (depicted below).
    You can visit their blogs and/or see their Research Blogging (RB) Page.

    Congrats to all winners! And heads up to all other finalists. You’re winners too!







