Health Experts & Patient Advocates Beware: 10 Reasons Why You Shouldn’t Be a Curator at Organized Wisdom!! #OrganizedWisdom

11 05 2011

Last year I aired my concern about Organized Wisdom in a post called Expert Curators, WisdomCards & The True Wisdom of @organizedwisdom.

Organized Wisdom shares the health links of health experts and advocates who (according to OW’s FAQ) either requested a profile or were recommended by OW’s Medical Review Board. I was one of those so-called Expert Curators. However, I had never requested a profile, and I seriously doubt whether anyone from the Medical Review Board had actually read any of my tweets or blog posts.

This was one of many issues with Organized Wisdom, but the main issue was its lack of credibility and transparency. I vented my complaints, removed my profile from OW, stopped following its updates on Twitter and informed some fellow curators.

I had almost forgotten about it, until Simon Sikorski, MD, commented on my blog, informing me that my complaints hadn’t been fully addressed and convincing me that things were even worse than I thought.

He has started a campaign to do something about this Unethical Health Information Content Farming by Organized Wisdom (OW).

While discussing this affair with a few health experts and patient advocates I was disappointed by the reluctant reactions of a few people: “Well, our profiles are everywhere”, “Thanks I will keep an eye open”, “cannot say much yet”. How much evidence does one need?

Of course there were also people – well-known MDs and researchers – who immediately removed their profiles and compared OW’s approach with that of Wellsphere, which scammed the Health Blogosphere. Yes, OW also scrapes and steals your intellectual property (blog and/or tweet content), but the difference is: OW doesn’t ask you to join, it just puts up your profile and shares it with the world.

As a medical librarian and e-patient I find the quality, reliability and objectivity of health information of utmost importance. I believe in the emancipation of patients (“Patient is not a third person word”, e-patient Dave), but it can only work if patients are truly well informed. This is difficult enough, because of the information overload and the conflicting data. We don’t need any further misinformation and non-transparency.

I believe that Organized Wisdom puts the reputation of its “curators” at stake and that it is neither a trustworthy nor a useful resource for health information, for the following reasons (x: see also Simon’s blog post and slides; his emphasis is more on content theft):

1. Profiles of Expert Curators are set up without their knowledge and consent
Most curators I asked didn’t know they were expert curators. Simon has spoken with 151 of the 5700 expert curators, and not one of them knew he or she was listed on OW. (x)

2. The name Expert Curator suggests that you (can) curate information, but you cannot.
The information is automatically produced and is shown unfiltered (and often shown in duplicate, because many different people can link to the same source). It is not possible to edit the cards.
Ideally, curating should even be more than filtering (see this nice post about Social Media Content Curators, where curation is defined as the act of synthesizing and interpreting in order to present a complete record of a concept).

3. OW calls your profile address: “A vanity URL¹”.

Is that how they see you? Well, it must be said, they try to win you over with pure flattery. And they often succeed….

¹Quote OW: “We credit, honor, and promote our Health Experts, including offering: A vanity URL to promote so visitors can easily share your Health Profile with others, e.g. my.organizedwisdom.com/ePatientDave.”
Note: this too is quite similar to Wellsphere’s approach (read more at e-patients.net).

4. Bots tap into your tweets and/or scrape the content off your website
(x: see healthcare content farms monetizing scheme)

5. Scraping your content can affect your search rankings (x)
This probably affects starting/small blogs the most. I checked two posts from well-known blogs, and their own websites still came up first.

6. The site is funded/sponsored by pharmaceutical companies.
“Tailored” ads show up next to the so-called Wisdom Cards dealing with the same topic. If no pharmaceutical business has responded, Google ads show up instead.
See the form where they actually invite pharma companies to select a target condition for advertising. Note that the target conditions fit the OW topics.

7. The Wisdom Cards are no more than links to your tweets or posts. They have no added value.

8. Worse, tweets and links are shown out of context.
I provided various examples in my previous post (mainly in the comment section).

A Cancer and Homeopathy WisdomCard™ shows Expert Curator Liz Ditz sharing a link about cancer and homeopathy. The link she shares is a dangerous article by a doctor working at a homeopathic general hospital in India, “reporting” several cases of miraculous cures by Conium 1M, Thuja 50M and other watery dilutions. I’m sure that Liz Ditz didn’t say anything positive about the “article”. Still, it seems she “backs it up”. Perhaps she tweeted: “Look, what dangerous crap.”
When I informed her, Liz said: “AIEEEE…. didn’t sign up with Organized Wisdom that I know of”. She felt she was being used as credulous support for homeopathy & naturopathy.

Note: Liz’s card has disappeared (because she opted out), but I was surprised to find that the link (http://organizedwisdom.com/Cancer-and-Homeopathy/wt/med) still works and links to other “evidence” on the same topic.


9. There is no quality control. Not of the wisdom cards and not of the expert curators.
Many curators are not what I would call true experts, and I’m not alone: @holly comments at a Techcrunch post: “I am glad you brought up the ‘written by people who do not have a clue, let alone ANY medical training [of any kind] at all.’ I have no experience with any kind of medical education, knowledge or even the slightest clue of a tenth of the topics covered on OW, yet for some reason they tried to recruit me to review cards there!?!”

The emphasis is also on alternative treatments: prevention of cancer, asthma or ADHD by herbs, etc. In addition to “Health Centers”, there are also Wellness Centers (Aging, Diet, Fitness, etc.) and Living Centers (Beauty, Cooking, Environment). A single card can share information from 2 or 3 centers (diabetes and multivitamins, for example).

And as said, all links of expert curators are placed unfiltered, even when you make a joke or mention you’re on vacation. Whether you’re a “Top health expert or advocate” (there is a regular shout-out) just depends on the number of links you share, thus NOT on quality. For this reason the real experts often end up in lower positions.

Some cards are just link bait.

 

10.  Organized Wisdom is heavily promoting its site.
Last year it launched ActivityDigest, automatic digests meant to stimulate “engagement” of expert curators. It tries to connect with top health experts, pharma people and patient advocates, hoping they will support OW. This leads to uncritical interviews, such as those at Pixels and Pills, at Health Interview (“Reader’s Digest + Organized Wisdom = Wiser Patients”) and at Xconomy.com (“OrganizedWisdom recruits experts to filter health information on the web”).

What can you do?

  • Check whether you have a profile at Organized Wisdom here.
  • Take a good look at Organized Wisdom and what it offers. It isn’t difficult and it doesn’t take much time to see through the facade.
  • If you don’t agree with what it represents, please consider opting out.
  • You can email info@organizedwisdom.com to have your profile as expert curator removed.
  • If you agree that what OW does is not good practice, you could do the following (most are suggestions from Simon):
  • spread the word and inform others
  • join the conversation on Twitter #EndToFarms
  • join the tweetup on what you can do about this scandal and how to protect yourself from liability (more details will be offered by Simon at his regularly updated blog post)
  • If you don’t agree this Content Farm deserves HONcode certification, notify HON at  https://www.healthonnet.org/HONcode/Conduct.html?HONConduct444558
Please don’t sit back and think that being a wisdom curator does not matter. Don’t show off with an Organized Wisdom badge, widget or link on your blog or website. Resist the flattery of being called an expert curator, because it doesn’t mean anything in this context. And by being part of Organized Wisdom, you indirectly support their practice. This may seriously affect your own reputation, and indirectly you may contribute to misinformation.

Or as Heidi commented on my previous post:

I am flabbergasted that people’s reputation are being used to endorse content without their say so.
Even more so that they cannot delete their profile and withdraw their support.*

For me those two things on their own signal big red flags:

The damage to a health professional’s reputation as a result could be great.
Misleading the general public with poor (yes, dangerous) information is another.

Altogether unethical.

*This was difficult at that time.

Update May 10, 2011: News from Simon: 165 individuals & 5 hospitals have now spoken up about the unfolding scandal and are doing something about it (Tuesday).

Update May 12, 2011: If I failed to convince you, please read the post by Ramona Bates MD (@rlbates on Twitter, plastic surgeon, blogger at Suture for a Living), called “More Organized Wisdom Un-Fair Play”. (Ramona asked for her profile to be removed from OW half a year ago.) Recommended pages at her blog seem to be written by other people.
She concludes:

“Once again, I encourage my fellow healthcare bloggers (doctors, nurses, patient advocates, etc) to remove yourself from any association with Organized Wisdom and other sites like them”






Kaleidoscope #3: 2011 Wk 12

23 03 2011

It has been a long time since I posted a Kaleidoscope post with a “kaleidoscope” of facts, findings, views and news gathered over the last 1-2 weeks. There have been only 2 editions: Kaleidoscope 1 (2009 wk 47) and 2 (2010 wk 31).

Here is some recommended reading from the previous two weeks.

Benlysta (belimumab) approved by FDA for treatment of lupus

Belimumab is the first new lupus drug approved in 56 years! Thus, potentially good news for patients suffering from the serious autoimmune disease SLE (systemic lupus erythematosus). Belimumab needs to be administered once monthly via the intravenous route. It is a fully human monoclonal antibody specifically designed to inhibit B-lymphocyte stimulator (BLyS™), thereby reducing the number of circulating B cells and of the anti-dsDNA antibodies they produce (which are characteristic of lupus).
Two clinical trials showed that more patients experienced less disease activity when treated with belimumab compared to placebo. Data suggested that some patients had less severe flares, and some reduced their steroid doses (not an impressive difference by “eyeballing”). Patients were selected with signs of B-cell hyperactivity and with fairly stable, but active disease. Belimumab was ineffective in Black patients, who are hit hardest by the disease. The most serious side effects were infections: 3 deaths in the belimumab groups were due to infections.
Thus, overall the efficacy seems limited. Belimumab only benefits 35% of those patients who do not have the worst form of the disease. But for these patients it is a step forward.

  1. Press Announcement (fda.gov).
  2. Navarra SV, Guzmán RM, Gallacher AE, Hall S, Levy RA, Jimenez RE, Li EK, Thomas M, Kim HY, León MG, Tanasescu C, Nasonov E, Lan JL, Pineda L, Zhong ZJ, Freimuth W, Petri MA; BLISS-52 Study Group. Efficacy and safety of belimumab in patients with active systemic lupus erythematosus: a randomised, placebo-controlled, phase 3 trial. Lancet. 2011 Feb 26;377(9767):721-31. Epub 2011 Feb 4. PubMed PMID: 21296403.
  3. Belimumab: Anti-BLyS Monoclonal Antibody; Benlysta(TM); BmAb; LymphoStat-B. Drugs in R & D (Open Access): 28 May 2010 – Volume 10 – Issue 1 – pp 55-65 doi: 10.2165/11538300-000000000-00000 Adis R&D Profiles (adisonline.com)

Sleep-deprived subjects make risky gambling decisions.

Recent research has shown that a single night of sleep deprivation alters decision making independently of a shift in attention: most volunteers moved from seeking to minimize the effect of the worst loss to seeking increased reward. This change towards risky decision making was correlated with increased activity in brain regions that assess positive outcomes (ventromedial prefrontal activation) and a simultaneous decreased activation in the brain areas that process negative outcomes (anterior insula), as assessed by functional MRI.

One co-author (Chee) noted that “casinos often take steps to encourage risk-seeking behavior — providing free alcohol, flashy lights and sounds, and converting money into abstractions like chips or electronic credits”

Interestingly, Chee also linked their findings to empirical evidence that long work hours for medical residents increased the number of accidents. Is a similar mechanism involved?

  1. Venkatraman V, Huettel SA, Chuah LY, Payne JW, Chee MW. Sleep deprivation biases the neural mechanisms underlying economic preferences.  J Neurosci. 2011 Mar 9;31(10):3712-8 (free full text)
  2. Sleep deprived people make risky decisions based on too much optimism (Duke Health press release)

Grand Rounds

Grand Rounds is up at Better Health. Volume 7, Number 26 is an “Emotional Edition” where posts are organized into emotion categories. My post about the hysteria and misinformation surrounding the recent Japanese earthquake is placed under Outrage.

There are many terrific posts included. A few posts I want to mention briefly.

First, a post by a woman who diagnosed her own and her sons’ disease after numerous tests. Her sons’ pediatrician only tried to reassure, so it seems (“don’t worry…”).

I was also moved by the South African surgeon, Bongi, who tells the tragic story of a missed diagnosis that still haunts him. “For every surgeon has a graveyard hidden away somewhere in the dark recesses of his mind…”

Bongi’s blog Other Things Amanzi is one of my favorites. Another blog that has become one of my favs is 33 Charts by Dr. Bryan Vartabedian. Included in this Grand Rounds is “And a child will lead them”. It is a beautiful post about the loss of a young patient:

….”And facing Cooper’s parents for the first time after his passing was strangely difficult for me.  When he was alive I always had a plan.  Every sign, symptom, and problem had a systematic approach.  But when faced with the most inconceivable process, I found myself awkwardly at odds with how to handle the dialog”….

Other Medical Blogs

Another of my recent fav blogs is that of cardiologist Dr. Wes. There are two recent posts I would especially like to recommend.

The first asks a seemingly simple question: “So which set of guidelines should doctors use?” The answer, however, may surprise you.

In another post Dr. Wes describes the retraction of an online-before-print case report entitled “Spontaneous explosion of implantable cardioverter-defibrillator”, with dramatic pictures of an “exploded ICD” (here is the PDF of the cache). This retraction took place after Dr. Wes reported the case at his blog. Strangely enough, the article was republished this February with another title, “Case report of out-of-hospital heat dissipation of an implantable cardioverter-defibrillator” (no explosion anymore), and without the shocking photos. Food for thought…. The main conclusion of Dr. Wes? Independent scientific peer-reviewed journals might not be so independent after all.

Library Matters

Sorry, but I had to laugh about David Rothman’s Common Sense Librarianship: An Ordered List Manifesto. As Kathryn Greenhill put it so well at her blog Librarians Matter: “It is a hot knife of reason through the butterpat of weighty bullshit that often presses down as soon as we open our mouths to talk about our profession.”

Oh, and a big congrats to Aaron Tay for his Library Journal Movers & Shakers award. Please read why he deserves this award. What impresses me the most is the way he involves users and converts unhappy users “into strong supporters of the library”. I would recommend all librarians to follow him on Twitter (@aarontay) and to regularly read his blog Musings about Librarianship.

Web 2.0

The iPad 2 is here. A very positive review can be found at Techcrunch. The iPad 2 has a camera, is thinner and lighter, and has a much more powerful dual-core chip. Still, many people on Twitter complain about the reflective screen. Furthermore, the cover is stylish but not very protective, as this blogger noticed 2 days after purchase.
Want to read further? You might like “iPad 2: Thoughts from a first time tablet user” (via @drVes).

It has been five years since Twitter was launched, when one of its founders, Jack Dorsey, tweeted “just setting up my twttr”. Twitter now has nearly 200 million users, who post more than a billion tweets every week (see Twitter Blog).

Just the other week, Twitter told developers to stop building apps. It is not exactly clear what this will mean. According to The Next Web, it is to prevent consumers being confused by third-party Twitter clients, and because of privacy issues. According to i-programmer, the decision is mainly driven by Twitter’s desire to be in control of its API and the data that its users create (so as to maximize its -future- revenue). I hope it won’t affect Twitter clients like Tweetdeck and Seesmic, which perform much better (in my view) than Twitter.com.





Frantic Friday #37. The Aftermath of the Japanese Earthquake & Tsunami. With Emphasis on (Mis)information

19 03 2011

The Frantic Friday belongs to the same series as the Silly Saturday, Funny Friday etc. posts, which are not directly related to science or library matters. Often these posts are about humorous things, but not in this case; therefore the name of the series was adapted. It took me a week to write it all down, so it reflects what happened over the entire period (and insights did change).

 

Aerial of Sendai, Japan, following earthquake.

Last week was overshadowed by the terrible earthquake in northeast Japan, and the subsequent tsunami which swept away many villages in this part of the country. Some people see this as a sign of the world coming to an end, especially since the dates of the Twin Tower attack (9-11-01) and the Tsunami in Japan (3-10-11) add up to 12/21/12, the predicted date of the end of the world. Whether you believe in this omen or not (I don’t), the pictures and videos of this event sure do show the unprecedented power of nature, which is devastating beyond imagination. The Al Jazeera video below was shown on Dutch TV the entire morning: people, cars and boats have no time to escape as a large tsunami engulfs various cities, devouring everything in its path.

Another impressive video shows how a small stream grows into a wild, turbulent flood and sweeps away cars and even houses. Sadly, many commenters on this video see the disaster as a punishment for “those that have turned there backs on HIM etc”. Videos like these can now be found everywhere, for instance at BBC News Asia.

Here are photos from before and after the tsunami, and here are some photos showing not only the violent streams but also their consequences. I was especially moved by this photo of what appear to be a mother and child. For after all, this natural disaster is mainly a human tragedy. Let’s hope many loved ones (human and animal) have found or will find each other in good health again, like this reunion of a dog owner and his dog.

As if that wasn’t enough, there was also a volcanic eruption last Sunday, and the initially small problems with the nuclear plants near the tsunami area now seem to be getting out of hand (see below).

Indirectly, there are some library, web 2.0/social media & science aspects to this natural disaster. I will concentrate on (medical and scientific) information.

Google immediately reacted to the Japanese tsunami with a Person Finder tool (Engadget). As in the Haiti earthquake (see earlier post), Cochrane has made Evidence Aid resources available.

Immediately after the earthquake we could learn some scientific facts about earthquakes and tsunamis. One thing I learned is that the more superficial the earthquake, the more devastating the effects in the surrounding area. I also learned that a tsunami can have a speed of 800 km per hour, i.e. it “flies” with the speed of an airplane, and that a wave can be 1 km long and have an incredible force. Science writers further explain why Japan’s tsunami triggered an enormous whirlpool.

These are facts, but with the nuclear effects we are unsure as to what is happening and “how bad it will be”. I’m a scientist, but surely no expert in this field, and I find the information confusing, contradictory and sometimes misleading.

Let’s start with the misleading information. Of course there are people who see the hand of God in all this, but that is so obviously without any foundation (“plucked out of thin air”, as we say in Dutch) that I won’t discuss it further.

First, this nuclear fallout map (it is a lie!).

I saw it on Facebook and took it seriously. Others received it by mail, with an explanation that 550-750 rads means “nausea within a couple of hours and no survivors.” Clearly that is nonsense (fallout killing all people on the US East Coast). Also disturbingly, the makers of this map “borrowed” the logo of the Australian Radiation Services (ARS). (see Snopes.com; thanks to David Bradley of Sciencebase.com, who mentioned it on Facebook)

But the pro-nuclear people come with equal misinformation. There is a strange link on Facebook leading to a post: “MIT scientist says no problems”. The post was blogged by an Australian teacher in Japan, who wrote up the words of a friend, family member and MIT scientist, Josef Oehmen (@josefoehmen on Twitter)… But the post really seems to be a repost from something called The Energy Collective, written by Brooks, a strong proponent of nuclear power. The site is powered by Siemens AG, which recently became an “industry partner” of MIT/LAI (and the circle is complete). Read about this and more at Genius Now, in The Strange Case of Josef Oehmen (access the cache if the site can’t be reached). The German translation of the official piece is here. The comments (which are permitted) are revealing….

Another misleading claim is that of attorney Ann Coulter in a column and in the O’Reilly show:

“With the terrible earthquake and resulting tsunami that have devastated Japan, the only good news is that anyone exposed to excess radiation from the nuclear power plants is now probably much less likely to get cancer.” We shouldn’t worry about the damaged Japanese reactors, because “they’ll make the locals healthier”.

She refers to the hormesis effect, the effect that under certain conditions low doses of certain chemicals/drugs can have opposite effects to high doses in certain experimental models. See PZ Myers at Pharyngula for an excellent dissection of this nonsense.

And -help!- here is a post by a CAM doctor who advises people in the US to immediately take the following (because the “Japanese Nuclear Radiation Plume Has Reached the United States”):

Ample amounts of necessary minerals such as magnesium, iodine, selenium, zinc, and others; saunas, both infrared and far-infrared; raising core energy levels with botanical formulas; supporting and improving individual capacities to mobilize and eliminate toxins; therapeutic clays to remove positively charged particles; Solum uliginosum products from Uriel Pharmacy – also available directly from us; etcetera.

Thus various examples of misinformation by seemingly well-informed scientists, experts & doctors.

Perhaps this is the downside of social media. Twitter and Facebook are very well suited to spreading news fast, but they can also easily spread false information or hoaxes fast, via “your friends”. It is important to check where the news actually comes from (which can be hard if someone misuses logos and institution names) and whether the writer has any bias pro or con nuclear power. Another disadvantage of social media is that we hurry through it by speed-reading.

Besides real lies there is also something called bias.

I have to admit that I have a bias against nuclear power. I was a teenager when I learned of the Club of Rome; I was in my twenties when the Dutch held large peace marches with “Ban the bomb” placards; I was in my thirties when Dutch cattle had to be kept in stables and we couldn’t eat spinach because of the Chernobyl nuclear fallout. At university, my physics professor spent one or two lectures talking about the danger of nuclear power and its connection with poverty and the arms race, instead of teaching the regular stuff. During environmental studies I learned about the pitfalls of other energy sources as well. My conclusion was that we had to use our energy sources well, and I decided to use my feet instead of driving a car (a decision I sometimes regret).

The opinion piece by David Ropeik, “Beware the fear of nuclear….FEAR!”, in Scientific American seems a little biased in the opposite direction. This guest post, written soon after the trouble at the Fukushima Daiichi plant, mainly stresses that:

“… the world is facing the risk of getting the risk of nuclear power wrong, and raising the overall risk to public and environmental health far more in the process.”

As if nuke-o-noia is the most worrying thing at the moment. He also stresses that, in addition to being actually physically hazardous, nuclear power has some psychological characteristics (odorless, man-made) that make it particularly frightening. It is all in the mind, isn’t it?

I do get his point though and agree as to the quiet danger of fossil fuels and the risk of being too dependent upon other countries for energy. But as a commenter said: two wrongs don’t make a right. And isn’t there something like renewable resources and energy saving?

Furthermore, the nuclear problems in Japan do show what happens if a country is reliant on nuclear power. The lack of electricity causes great problems with infrastructure. This not only affects Tokyo commuters; the lack of fuel, electricity and food, and the cold weather, also hamper the aid efforts. There might also be insufficient fuel to evacuate refugees from the exclusion area, a problem that will grow if the government has to widen the evacuation zone around the plant again (Nature News). Not the most important thing, but the Japanese quake will also likely affect our supply of gadgets and other industries, like the auto industry.

So we now have polarized discussions between pro- and anti-nuclear movements. And it has become an irrational political issue. China has suspended approval of all new nuclear power stations; Germany’s government has announced a safety review of its 17 nuclear power plants and is temporarily shutting down the seven oldest; and the Dutch government will take the Japanese experience into account when deciding on the Dutch nuclear power program.

It is surprising that minds have changed overnight: all the (potential) risks of nuclear plants have long been known.

Regarding misinformation, TEPCO, the utility that runs the Fukushima Daiichi nuclear power plant and supplies power to Tokyo, has a long history of disinformation: there were many incidents (200) which were covered up (Dutch: see NRC Handelsblad, Thursday 2011-03-17; non-official forum here).

There are also signals that the Japanese government, and even the IAEA (according to a Russian nuclear accident specialist), aren’t or weren’t as transparent as one would like them to be. The government seems to downplay the risks and is late with information. The actions are not consistent with what is said: everything was said to be under control while people were being evacuated, etc. Also, the American analysis of the severity of the nuclear accident was much graver than that of the Japanese government. Where the Japanese advise keeping a distance of 30 km, the United States and the British Foreign and Commonwealth Office recommend that citizens stay at least 80 km from the nuclear plant (discussed in the NY Times and The Great Beyond (Nature)).
Over the last days the Japanese government has become more open. The Japanese science ministry, MEXT, now publishes the radiation levels throughout the region and gives more background info about the health risks (source: The Great Beyond). Today, it also raised the warning level from 4 to 5 on the 7-level international scale. Outside experts have said for days that this disaster was worse than that at Three Mile Island — which was rated a 5 but released far less radiation outside the plant than Fukushima Daiichi already has. Level 4 means only “local effects”.
The Prime Minister’s Office of Japan now also has an official English account on Twitter: @JPN_PMO.

But now, what about reliable information? Where can we get it? What about the health risks? Again, I’m no expert in this field, but the following information at least helped me get an idea of the situation and the actual danger.

  1. It looks like the situation at the Fukushima Daiichi nuclear power plant is getting out of control (Nature News, March 16th).
  2. It is possible that the problems will not be confined to leaks of radioactivity and explosions, but that a nuclear meltdown may occur.
  3. A nuclear meltdown or nuclear reactor explosion is a grave event, but it is NOT a nuclear explosion. As explained at Sciencebase: “There is a risk of particles of radioactive material entering the atmosphere or the ocean, but this does not amount to the impact of an actual nuclear explosion.” Thus even in a worst-case scenario the effects are not as severe as those of a nuclear explosion.
  4. One major difference with Chernobyl is that radioactivity at Fukushima remains largely contained within the reactor, and that we have known about the problems from the start (we are not surprised by fallout).
  5. Still, radioactive fumes leak from the power plant. On March 16th there was “an alarmingly high dose rate of 0.08 millisieverts (mSv) per hour, 25 kilometres away from the plant” (Nature News). On March 17th it was 17 mSv/hr, 30 kilometres northwest of the reactor. There are also reports of 0.012 mSv/hr in Fukushima City, 60 km away from the plant (The Great Beyond). Sanjay Gupta found that the radiation levels he monitored quadrupled, even in Tokyo (see CNN video).
  6. The time of exposure is as important as the dose. Thus exposure to 4 to 10 times higher radiation than normal during a couple of days poses little extra health risk. But if you received 4 to 10 times more radiation than usual during months or years, it could pose a health risk (cumulative effect). On the other hand, peak doses recorded at Fukushima of 400 mSv per hour are enough to induce radiation sickness in about two hours’ time (The Great Beyond). (See the worked example below this list.)
  7. Radiation sickness is a (more or less) acute effect of irradiation. It can occur in the immediate surroundings of the radioactive leak. A single dose of 1000 mSv causes radiation sickness and nausea, but not death. But 6000 mSv (Chernobyl workers) kills people within a month (see the picture in The Guardian).
  8. Over the long term, exposure to radiation may increase the risk of developing cancer. An exposure rate of 100 mSv/yr is considered the threshold at which cancer rates begin to increase.
  9. To put this into perspective: we are all exposed to 2 mSv of natural irradiation per year, one full-body CT scan gives 10 mSv, and a flight on the New York – Tokyo polar route gives 9 mSv.
  10. The most worrisome of the reported releases of radioactive material in Japan are radioactive cesium-137 (a gamma emitter: high-energy radiation, penetrating deeply) and iodine-131, a beta emitter (easily shielded, but dangerous when ingested or inhaled).
  11. Iodine-131 has a short half life of 8 days, but is dangerous when it is absorbed, i.e. through contaminated food and milk. It will accumulate in the thyroid and can cause (mostly non-lethal) thyroid cancer. An easy form of protection is potassium iodide (KI), but this should only be taken by people in the emergency zone, because it can cause serious adverse effects and should not be taken unnecessarily. (For more info see CDC).
  12. Over the long term, the big threat to human health is cesium-137, which has a half-life of 30 years. It is cesium-137 that still contaminates much land in Ukraine around the Chernobyl reactor. Again it can enter the body via food, notably milk.
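To make the dose arithmetic above concrete, here is a small back-of-the-envelope calculation of my own, using only the figures cited in this list (an illustration, not official guidance):

0.08 mSv/hr × 24 hr ≈ 1.9 mSv: at 25 km from the plant, that is roughly a full year’s worth of natural background radiation (2 mSv) received in a single day.
1000 mSv ÷ 400 mSv/hr = 2.5 hr: at the recorded peak rate, the threshold for radiation sickness is indeed reached within a few hours.
After n half-lives, a fraction (1/2)^n of a radioactive substance remains: for iodine-131 (half-life 8 days), after 80 days only (1/2)^10 ≈ 0.1% is left; for cesium-137 (half-life 30 years), more than 99% still remains after those same 80 days.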

Note: this is a short summary of what I’ve read. Please go to official sites to get more detailed scientific and medical information.

There are several informative charts or FAQ:






Medical Information Matters 2.10 is up at The Search Principle Blog

16 11 2010

In case you missed it: the new edition of Medical Information Matters (edition 2.10) – formerly MedLibs Round – is up at the well-known blog The Search Principle of the equally well-known Dean Giustini: a knowledgeable, helpful and friendly Canadian medical librarian, one of the first bloggers, a web 2.0 pioneer, author of many papers (like this one in the BMJ), main contributor to the UBC Health Library Wiki, educator and expert in EBM. Need I say more?

With a wink to the name of the blog carnival, Dean gave his post the title: Medical Blogging Matters: A Carnival of Ideas, November 2010

And indeed, his post is a real ode to medical bloggers and medical blogging

Dean:

With the rise of Twitter, and the emphasis placed on ‘real time’ idea-sharing and here-I-am visibility on the social web, I often wonder where blogging (all kinds) will be in five years. Perhaps it’s a dying art form.

However, this month, the ‘art of blogging’ seems to be in ample evidence throughout the medical blogosphere and the array of postings illustrates a vast diversity of approaches and opinions. In the posts mentioned, you’ll recognize many of the top names in medical blogging – these dedicated, talented professionals continue to work hard at updating their blogs regularly while carrying on with their work as medical librarians, informaticists and physicians.

Dean started his post by saying

“It’s my great honour to be this month’s host for Medical Information Matters — the official name for the medical blog carnival (formerly MedLibs Round) where the “best blog posts in the field of medical information” are shared by prominent bloggers. I am very proud to consider many of these bloggers to be my colleagues and friends.”

But the honor is all mine! I’m glad I finally “dared” to ask him to host this blog carnival and that he accepted it without hesitation. And I, too, consider many of these bloggers, including Dean, to be my colleagues and friends. (Micro)blogging has made the world smaller…

Here are a few tweets mentioning this edition of the blog carnival, showing that it is widely appreciated (see more here):

  1. Dean Giustini
    giustini: Here comes “Medical Blogging Matters: A Carnival of Ideas, November 2010″ http://bit.ly/aDzkLT [did I miss anyone? let me know]
  2. Francisco J Grajales
  3. westr
    westr: Some big names in there! RT @pfanderson Medical blogging MATTERS http://bit.ly/aDzkLT
  4. Ves Dimov, M.D.
    DrVes: Medical Information Matters: the weekly best of related blog posts http://goo.gl/sBgw2
  5. Kevin Clauson

Next month Medical Information Matters will be hosted by another well-known blogger: Martin Fenner of Gobbledygook. Martin’s blog belonged to the Nature Network, but it recently moved to the PLOS blog network.

According to the about section:

Martin Fenner works as a medical doctor and cancer researcher at the Hannover Medical School Cancer Center in Germany. He is writing about how the internet is changing scholarly communication. Martin can be found on Twitter as @mfenner.

So it seems that Martin combines 3 professions: doctor, researcher and medical information specialist. This promises another wonderful round.

The deadline for submission is Saturday December 4th (or perhaps even Sunday 5th).

The theme, if any, is not known yet. However, you can ALWAYS submit the URL/permalink of a recent, good quality post at:

http://blogcarnival.com/bc/submit_6092.html

(keep in touch, because we will write a call for submissions post later)

Finally a request to you all:

For 2011, I’m looking for new hosts, be they scientists, researchers, librarians, physicians or other health care workers; people who have hosted this blog carnival before, or not; people with a longstanding reputation as bloggers as well as people who have just started blogging. It doesn’t matter, as long as you have a blog and you like hosting this blog carnival.

Please comment below or mail me at laika dot spoetnik at gmail dot com





Medical Information Matters: Call for Submissions

6 11 2010

I would like to remind you that it is almost the first Saturday of the month, and thus submission time for Medical Information Matters, the former MedLibs Round.

Medical Information Matters is a monthly compilation of the “best blog posts in the field of medical information”, hosted by a different blogger each time. The blogger hosting the upcoming edition is Dean Giustini.

I am sure that every librarian, and many doctors, know Dean. When I started out as a blogging librarian, I knew 2 international librarian bloggers: Dean Giustini and the Krafty Librarian (make that 3, I forgot to mention David Rothman*). I looked up to them and they did (and do) inspire me.
It is nice that blogging and Social Media can make distances shorter, literally and figuratively…

As far as I know, Dean has no theme for this round. But you can always submit any good quality post about medical information to the blog carnival, whether you are a librarian, a doctor, a nurse, a patient and/or a scientist, and whether your post is on searching, reference management, reliability of information, gaps in information, evidence, social media or education (to name a few).
You can submit your own post or a good post of someone else, as long as it is in English.

Now, if that isn’t easy….

Please submit the URL/permalink of your post at:
http://blogcarnival.com/bc/submit_6092.html

If everything goes according to plan, you can read the Medical Information Matters 2.9 at the blog of Dean Giustini next Tuesday.

 

* Thanks to @DrVes via Twitter. Social Media is sooo powerful!





Breast Cancer is not a Pink Ribbon.

20 10 2010

I have always had mixed feelings about big happenings like marches, ribbon activities and cancer months. September is the ovarian cancer month (and also a US prostate cancer month and a childhood cancer month), and October the breast cancer month…. We have only 12 months in a year!

Please, don’t misunderstand me! Awareness is very important, also in the case of breast cancer: awareness so as to recognize breast cancer at an early stage, awareness of preventive measures against cancer, awareness of what women with breast cancer go through, awareness that breast cancer can often be cured, awareness that research is needed, and thus money.

But I also feel that the attention is overdone and often hypocritical, with fancy pink ribbons and “pink” everywhere. This feeling is strengthened by some recent articles, for instance this article at Health.Change.org, called Pink Ribbon Hypocrisy: Boozing It Up for Breast Cancer, discussing how fast food and alcohol companies use breast cancer as a marketing ploy (whereas these very products have quite a reputation when it comes to -certain types of- cancer). You can sign a petition against it here.

There is even a book, Pink Ribbon Blues – How Breast Cancer Culture Undermines Women’s Health, written by Gayle A. Sulik, which offers a “thought-provoking and probing argument against the industry of awareness-raising”.

From the description:

Pink ribbon paraphernalia saturate shopping malls, billboards, magazines, television, and other venues, all in the name of breast cancer awareness. (…) Gayle Sulik shows that though this “pink ribbon culture” has brought breast cancer advocacy much attention, it has not had the desired effect of improving women’s health. It may, in fact, have done the opposite. Based on eight years of research, analysis of advertisements and breast cancer awareness campaigns, and hundreds of interviews with those affected by the disease, Pink Ribbon Blues highlights the hidden costs of the pink ribbon as an industry, one in which breast cancer has become merely a brand name with a pink logo.

The following quote from a woman who had lost her mother to breast cancer illustrates the feeling of many (see comments):

As the years went by, life provided me with more reasons to hate pink. Frustration over society-defined gender roles piled on as did annoyance at the image of ultimate feminine woman. And then came the big one.

Breast cancer.

My mom passed away after a six-year long battle with breast cancer at the age of 45.

When pink later became symbolic of breast cancer awareness, I wanted to punch some pink piggies. I know that some people choose to wear pink to honor or remember or show support for a loved one. That is not what I get my panties in a bunch about–it’s the way corporate America has grabbed that pink flag and waved it to and fro for their own profit that makes me furious.

I remember once standing in the grocery store and staring at a bag of pink ribbon-adorned M&Ms, my blood boiling harder with every passing second.

She ends her post with:

Everyone has a story. Some have seen the scars of a mastectomy. Some have witnessed the toll that chemotherapy takes on a body. Some have lived the pain. We all know it’s bad.

I, for one, don’t need pink to remind me.

The same is true for me. I’ve seen my mother battling breast cancer -she is a survivor- and I have seen the scars of mastectomy, and these are nowhere near a pink ribbon.

“Breast Cancer is not a Pink Ribbon”, tweeted Gilles Frydman yesterday. He meant a great photo exhibition that lasted 3 days, showing portraits of young, topless breast cancer survivors shot by fashion photographer David Jay.

At first I found it mainly confronting: this is the reality of breast cancer! As described elsewhere (Jezebel):

Seeing scarred and reconstructed mammary glands is not just shocking because of the way breasts are fetishized in our society, but because it speaks to how much we hide, gloss over and tidy up disease. Breasts are one of the defining physical attributes for identifying a woman. Breast cancer eats away at flesh meant to nourish. Surgery is a brutal procedure from which to recover and heal. But cute, clean, pink ribbons have come to symbolize all that.

But at a second and third look, I mainly saw the beauty of the photos, the fierceness of the women and their beautiful eyes.

Exactly as put into words at the website of the SCAR project:

Although Jay began shooting The SCAR Project primarily as an awareness raising campaign, he was not prepared for something much more immediate . . . and beautiful: “For these young women, having their portrait taken seems to represent their personal victory over this terrifying disease.”

SCAR, by the way, stands for “Surviving Cancer. Absolute Reality.”

David Jay was inspired to act when a dear friend was diagnosed with breast cancer at the age of 32.

The SCAR Project is “dedicated to the more than 10,000 women under the age of 40 who will be diagnosed this year alone. The SCAR Project is an exercise in awareness, hope, reflection and healing. The mission is three-fold: Raise public consciousness of early-onset breast cancer, raise funds for breast cancer research/outreach programs and help young survivors see their scars, faces, figures and experiences through a new, honest and ultimately empowering lens.”

The exhibition was last week in New York, but you can still see the photographs at the website of the SCAR-project.

Furthermore, you can participate in the project and/or buy the (signed) SCAR project book ($55).






Search OVID EMBASE and Get MEDLINE for Free…. without knowing it

19 10 2010

I have the impression that OVIDSP listens more to librarians than the NLM does; the NLM considers the end users of databases like PubMed more important, mainly because there are more of them. On the other hand, the NLM communicates PubMed’s changes better (NLM Technical Bulletin) and has easier-to-find tutorials & FAQs, namely at the PubMed homepage.

I gather that the new changes to the OVIDSP interface are the reason why two older OVID posts are currently the number 2 and 3 hits on my blog. My guess is that people are looking for specific information on OVID’s interface changes that they can’t easily find otherwise.

But this post won’t address the technical changes; I will write about those later.

I just want to mention a few changes to the OVIDSP databases MEDLINE and EMBASE, some of them temporary, that could have been easily missed.

[1] First, somewhere in August, OVID MEDLINE contained only indexed PubMed articles. I know that OVID MEDLINE misses some papers PubMed already has -namely the “as supplied by publisher” subset-, but this time the difference was dramatic: “in data review” and “in process” papers weren’t found either. I almost panicked, because if I missed that much in OVID MEDLINE, I would have to search PubMed as well, and adapt the search strategy…. and, since I had already lost hours because of OVID’s extreme slowness at that time, I wasn’t looking forward to this.

According to an OVID representative this change was not new, but had already been in place for (many) months. Had I been blind? I checked the printed search results of a search I performed in June. It was clear that the newer update found fewer records, meaning that some records were missed in the current (August) update. Furthermore, the old Reference Manager database contained non-indexed records. So there had been no problem then.

But to make a long story short: don’t worry, this change disappeared as quickly as it came.
I would have doubted my own eyes, if my colleague hadn’t seen it too.

If you have done a MEDLINE OVID search in the second half of August you might like to check the results.

[2] Simultaneously there was another change. A change that is still there.

Did you know that OVID EMBASE contains MEDLINE records as well? I knew that you could search EMBASE.com for MEDLINE and EMBASE records using the “highly praised EMTREE“, but not that OVID EMBASE recently added these records too.

These records are automatically found by text-word searches, and by EMTREE searches as well, because EMTREE already includes all of MeSH.

Should I be happy that I get these records for free?

No, I am not.

I always start with a MEDLINE search, which is optimized for MEDLINE (with regard to the MeSH).

Since indexing by  EMTREE is deep, I usually have (much) more noise (irrelevant hits) in EMBASE.

I do not want an extra, uncontrolled number of MEDLINE records mixed in.

I can imagine though, that it would be worthwhile in case of a quick search in EMBASE alone: that could save time.
In my case, doing extensive searches for systematic reviews I want to be in control. I also want to show the number of articles from MEDLINE and the number of extra hits from EMBASE.

(Later I realized that a figure shown by the OVID representative wasn’t fair: they showed, in Venn diagrams, the hits obtained when searching EMBASE, MEDLINE and other databases. MEDLINE offered little extra beyond EMBASE, which is self-evident, considering that EMBASE now includes almost all MEDLINE records.)

It is no problem if you want to include these MEDLINE records; luckily, it is also easy to exclude them.

You can limit to MEDLINE or to EMBASE records.

Suppose your last search set is 26.

Click Limits > Additional Limits > EMBASE (or MEDLINE)

Alternatively, type: limit 26 to embase (or limit 26 to medline, respectively). Added together, the two limited sets make up 100% of the original set.
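A minimal sketch of what this looks like in the OvidSP search history (the search statement, set numbers and hit counts are hypothetical, purely for illustration):

26. exp spironolactone/ or spironolactone.tw. (10000)
27. limit 26 to medline (8000)
28. limit 26 to embase (2000)

Sets 27 and 28 add up exactly to set 26, so you can report the number of MEDLINE records and the number of extra EMBASE records separately.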

If only they had told us….


[3] EMBASE OVID now also adds conference abstracts.

A good thing if you do an exhaustive search and want to include unpublished material as well (50% of conference abstracts never get published in full).

You can still exclude them if you like (via the publication type limits).

Here is what is written at EMBASE.com

Embase now contains almost 800 conferences and more than 260,000 conference abstracts, primarily from journals and journal supplements published in 2009 and 2010. Currently, conference abstracts are being added to Embase at the rate of 1,000 records per working day, each indexed with Emtree.
Conference information is not available from PubMed, and is significantly greater than BIOSIS conference coverage. (…)

[4] And did you know that OVID has eliminated stopwords from MEDLINE and EMBASE? For a few years now, you have been able to search for words or phrases like is there hope.tw. This is a very good thing, because it broadens the possibility to search for certain word strings. However, it isn’t generally known.

OVID changed it after complaints from many, including me and a few Cochrane colleagues. I thought I had written a post about it before, but apparently I haven’t ;).

Credits

Thanks to Joost Daams who always has the latest news on OVID.






Problems with Disappearing Set Numbers in PubMed’s Clinical Queries

18 10 2010

In some upcoming posts I will address various problems related to the changing interfaces of bibliographic databases.

We, librarians and end users, are overwhelmed by a flood of so-called upgrades, which often fail to bring the improvements that were promised….. or which go hand-in-hand with temporary glitches.

Christina of Christina’s LIS Rant even made a rundown of last summer’s new interfaces. Although she didn’t include OVID MEDLINE/EMBASE, the Cochrane Library and Reference Manager in her list, the total number of changed interfaces reached 22!

As a matter of fact, the Cochrane Library suffered some outages yesterday in order to repair some bugs. So I will postpone my coverage of the Cochrane bugs a little.

And OVID sent out a notice last week: “This week Ovid will be deploying a software release of the OvidSP platform that will add new functionality and address improvements to some existing functionality.”

In this post I will confine myself to the PubMed Clinical Queries. According to Christina, the PubMed changes “were a bit ago”, but PubMed continuously tweaks its interface, often without paying much attention to the effects.

Back in July, I already covered that the redesign of the PubMed Clinical Queries was no improvement for people who wanted to do more than a quick and dirty search.

It was no longer possible to enter a set number in the Clinical Queries search bar. Thus it wasn’t possible to set up a search in PubMed first and to then enter the final set number in the Clinical Queries. This bug was repaired promptly.

From then on, the set number could be entered again in the clinical queries.

However, one bug was replaced by another: next, search numbers were disappearing from the search history.

I will use the example I used before: I want to know if spironolactone reduces hirsutism in women with PCOS, and if it works better than cyproterone acetate.

Since little is published about this topic, I only search for hirsutism and spironolactone. These terms map correctly to MeSH terms. In the MeSH database I also see (under “see also”) that spironolactone belongs to the aldosterone antagonists, so I broaden spironolactone (#2) with “Aldosterone Antagonists”[Pharmacological Action] using “OR” (set #7). My last set (#8) consists of #1 (hirsutism) AND #7 (#2 OR #6); see the reconstructed history below.
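Reconstructed, the relevant part of the search history looks roughly like this (sets #3 to #5 were performed in the MeSH database and therefore don’t show up; more on that below):

#1 "Hirsutism"[Mesh]
#2 "Spironolactone"[Mesh]
#6 "Aldosterone Antagonists"[Pharmacological Action]
#7 #2 OR #6
#8 #1 AND #7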

Next I go to the Clinical Queries in the Advanced Search and enter #8 (which is now possible again).

I change the Therapy Filter from “broad”  to “narrow”, because the broad filter gives too much noise.

In the clinical queries you see only the first five results.

Apparently even the clinical queries are now designed for taking just a quick look at the most recent results; but of course that is NOT what we are trying to achieve when we search for (the best) evidence.

To see all results for the narrow therapy filter, I have to go back to the Clinical Queries again and click on “See all (27)”.

A bit of a long way around. But it gets longer…


The 27 hits that result from combining the narrow therapy filter with my search #8 appear. This is set #9.
Note that it has a lower set number than set #11 (search + systematic reviews filter).

Meanwhile set #9 has disappeared from my history.

This is a nuisance if I want to use this set further, or if I want to give an overview of my search, e.g. for a presentation.

There are several tricks by which this flaw can be overcome. But they are all cumbersome.

1. Just add the set number (#11 in this case, which is the last search (#8) + 3 more) to the search history (you have to remember that set number, though).

This is the set number remembered by the system. As you see in the history, you “miss” certain sets. Sets #3 to #5, for instance, are searches you performed in the MeSH database; they show up in the history of the MeSH database, but not in PubMed’s history.

The clinical query set numbers are still there, but they don’t show either. Apparently the 3 clinical query subsets each get a separate set number, whether the search is truly performed or not. In this case: #11 for (#8) AND systematic[sb], #9 for (#8) AND (Therapy/Narrow[filter]), and #10 for (#8) AND the medical genetics filter.

In this way you have all results in your history. It isn’t immediately clear, however, what these sets represent.

2. Use the commands rather than going to the clinical queries.

Thus type in the search bar: #8 AND systematic[sb]

And then: #8 AND (Therapy/Narrow[filter])

It is easiest to keep all filters in Word/Notepad and copy/paste them each time you need a filter.
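For reference: at the time of writing, the narrow therapy filter corresponds to the Haynes search string below, so the second command above amounts to pasting the following (do check NLM’s clinical queries filter table for the current version):

#8 AND (randomized controlled trial[Publication Type] OR (randomized[Title/Abstract] AND controlled[Title/Abstract] AND trial[Title/Abstract]))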

3. Add clinical queries as filters to your personal NCBI account so that the filters show up each time you do a search in PubMed. This post describes how to do it.

Anyway, these remain just tricks to try to make right something that is wrong.

Furthermore it makes it more difficult to explain the usefulness of the clinical queries to doctors and medical students. Explaining option 3 takes too long in a short course, option 1 seems illogical and 2 is hard to remember.

Thus we want to keep the set numbers in the history, at least.

A while ago Dieuwke Brand notified the NLM of this problem.

Only recently she received an answer saying that:

we are aware of the continuing problem.  The problem remains on our programmers’ list of items to investigate.  Unfortunately, because this problem appears to be limited to very few users, it has been listed as a low priority.

Only after a second Dutch medical librarian confirmed the problem to the NLM, saying it affects not just one or two librarians, but all the students we teach (~1000-2000 students per university yearly), did they realize that it was more widespread than Dieuwke Brand’s personal problem. Now the problem has a higher priority.

Whatever happened to the days when a problem was taken for what it was? As another librarian sighed: apparently something is only a problem if many people complain about it.

Now that I know this (I had regarded Dieuwke as a delegate of all Dutch clinical librarians), I realize that I have to “complain” myself each time I and/or my colleagues encounter a problem.






Medical Information Matters 2.8 is up!

15 10 2010

The new edition of Medical Information Matters (formerly Medlibs round) is up at Danielhooker.com.

The main theme is “Programs in libraries or medical education”.
Besides two posts from this blog (“A Filter for Finding Animal Studies in PubMed” and, more on the theme, “An Educator by Chance”), the following topics are included: a new MeSH (inclusion under mild librarian pressure), PubMed in your pocket, embedding Google Gadgets in normal webpages, and experiences with introducing social bookmarking to medical students.
If you find this description too cryptic (and I bet you do), then I invite you to read the entire post here. I found it a very pleasant read.

Since we are already midway through October, I would like to invite you to start submitting here (blog carnival submission form).

Our next host is Dean Giustini of The Search Principle blog. The deadline is in about 3 weeks (November 6th).






How will we ever keep up with 75 Trials and 11 Systematic Reviews a Day?

6 10 2010

An interesting paper was published in PLOS Medicine [1]. As an information specialist who also works part time for the Cochrane Collaboration* (see below), I hold this topic close to my heart.

The paper is written by Hilda Bastian and two of my favorite EBM devotees ànd critics, Paul Glasziou and Iain Chalmers.

Their article gives a good overview of the rise in the number of trials, of systematic reviews (SR’s) of interventions, and of medical papers in general. The paper (published under the head “Policy Forum”) raises some important issues, but the message is not as sharp and clear as usual.

Take the title for instance.

Seventy-Five Trials and Eleven Systematic Reviews a Day:
How Will We Ever Keep Up?

What do you consider its most important message?

  1. That doctors suffer from an information overload that is only going to get worse, which is how I read it, and probably in part also @kevinclauson, who tweeted about it to medical librarians
  2. That the solution to this information overload consists of Cochrane systematic reviews (because they aggregate the evidence from individual trials), as @doctorblogs twittered
  3. That it is just about “too many systematic reviews (SR’s)?”, the title of the PLOS press release (so the other way around)
  4. That it is about too much of everything and about the not always good quality of SR’s: @kevinclauson and @pfanderson discussed that they both use the same “#Cochrane Disaster” (see Kevin’s blog) in their teaching
  5. That Archie Cochrane’s* dream is unachievable and ought perhaps to be replaced by something less Utopian (comment by Richard Smith, former editor of the BMJ: points 1, 3, 4 and 5 together, plus a new aspect: SR’s should not only include randomized controlled trials (RCT’s))

The paper reads easily, but matters of importance are often only touched upon. Even after reading it twice, I wondered: a lot is being said, but what is really their main point, and what are their answers/suggestions?

But let’s look at their arguments and pieces of evidence. (Black text is from their paper, blue text contains my remarks.)

The landscape

I often start my presentations on “searching for evidence” by showing the figure to the right, which is from an older PLOS article. It illustrates the information overload. Sometimes I also show another slide, with data that are 5-10 years older, saying that there are 55 trials a day, 1400 new records added per day to MEDLINE and 5000 biomedical articles a day. I also add that specialists have to read 17-22 articles a day to keep up to date with the literature; GP’s have to read even more, because they are generalists. So those 75 trials, and the information overload they cause, are not really a shock to me.
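Those per-day figures are easy to sanity-check with a bit of arithmetic of my own (the inputs are the numbers quoted above and in the paper’s title):

```python
# Back-of-envelope arithmetic on the publication figures quoted above.
trials_per_day, srs_per_day = 75, 11  # figures from the paper's title
print(trials_per_day * 365)           # ~27,000 new trial reports a year
print(srs_per_day * 365)              # ~4,000 new systematic reviews a year

# A specialist reading ~20 articles a day covers only a sliver of the output:
articles_per_day = 5000               # older estimate quoted above
print(20 / articles_per_day * 100)    # ~0.4% of a single day's output
```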

Indeed, the authors start by saying that “Keeping up with information in health care has never been easy.” They give an interesting overview of the driving forces behind the increase in trials and behind the initiation of SR’s and critical appraisals, which synthesize the evidence from all individual trials to overcome the information overload (SR’s and other forms of aggregate evidence decrease the number needed to read).

In box 1 they give an overview of the earliest systematic reviews. These SR’s often had a great impact on medical practice (see for instance an earlier discussion on the role of the Crash trial and of the first Cochrane review).
They also touch upon the institution of the Cochrane Collaboration. The Cochrane Collaboration is named after Archie Cochrane, who reproached the medical profession for not having managed to organise a “critical summary, by speciality or subspecialty, adapted periodically, of all relevant randomised controlled trials”. He inspired the establishment of the international Oxford Database of Perinatal Trials and he encouraged the use of systematic reviews of randomized controlled trials (RCT’s).

A timeline with some of the key events is shown in Figure 1.

Where are we now?

The second paragraph shows many interesting graphs (Figs. 2-4).

Annoyingly, PLOS only allows one-sentence legends. The details are in the (Word) supplement, without proper referral to the actual figure numbers. Grrrr..! This is completely unnecessary in reviews/editorials/policy forums. And, as said, annoying, because you have to read a Word file to understand where the data actually come from.

Bastian et al. have used MEDLINE’s publication types (i.e. case reports[pt], review[pt], controlled clinical trial[pt]) and search filters (the Montori SR filter and the Haynes narrow therapy filter, which is built into PubMed’s Clinical Queries) to estimate the yearly rise in the number of each study type. The total numbers of clinical trials in CENTRAL (the largest database of controlled clinical trials, abbreviated as CCTR in the article) and of reviews in the Cochrane Database of Systematic Reviews (CDSR) are easy to retrieve, because the numbers are published quarterly (now monthly) by the Cochrane Library. By definition, CDSR only contains SR’s, and CENTRAL (as I prefer to call it) contains almost exclusively controlled clinical trials.
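Their counting approach is easy to approximate yourself with the E-utilities. The sketch below is my own reconstruction, not the authors’ code; it also substitutes PubMed’s built-in systematic[sb] subset for the Montori filter, whose full search string I won’t reproduce here:

```python
# Sketch: tally PubMed records per publication year for a publication type
# or search filter (my reconstruction of the counting idea, not the
# authors' actual method).
from Bio import Entrez

Entrez.email = "you@example.com"  # NCBI asks for a contact address

def yearly_count(query, year):
    """Count PubMed records matching `query` with publication date `year`."""
    handle = Entrez.esearch(db="pubmed", term=f"({query}) AND {year}[dp]",
                            rettype="count")
    return int(Entrez.read(handle)["Count"])

for year in range(2000, 2008):
    trials = yearly_count("randomized controlled trial[pt]", year)
    reviews = yearly_count("systematic[sb]", year)
    print(year, trials, reviews)
```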

In short, these are the conclusions from their three figures:

  • Fig 2: The number of published trials has risen sharply from 1950 to 2010
  • Fig 3: The number of systematic reviews and meta-analyses has risen tremendously as well
  • Fig 4: But systematic reviews and clinical trials are still far outnumbered by narrative reviews and case reports.

O.k., that’s clear, and they raise a good point: an “astonishing growth has occurred in the number of reports of clinical trials since the middle of the 20th century, and in reports of systematic reviews since the 1980s—and a plateau in growth has not yet been reached”.
Plus, indirectly: the increase in systematic reviews didn’t lower the number of trials and narrative reviews. Thus the information overload is still increasing.
But instead of discussing these findings they go into an endless discussion of the actual data and of the fact that we “still do not know exactly how many trials have been done”, only to end the discussion by saying that “even though these figures must be seen as more illustrative than precise…”. And then you think: so what? I don’t really get the point of this part of their article.

Fig. 2: The number of published trials, 1950 to 2007.

With regard to Figure 2 they say for instance:

The differences between the numbers of trial records in MEDLINE and CCTR (CENTRAL) (see Figure 2) have multiple causes. Both CCTR and MEDLINE often contain more than one record from a single study, and there are lags in adding new records to both databases. The NLM filters are probably not as efficient at excluding non-trials as are the methods used to compile CCTR. Furthermore, MEDLINE has more language restrictions than CCTR. In brief, there is still no single repository reliably showing the true number of randomised trials. Similar difficulties apply to trying to estimate the number of systematic reviews and health technology assessments (HTAs).

Sorry, but although some of these points may be true, Bastian et al. don’t go into the main reason for the difference between the two graphs, i.e. the higher number of trial records in CCTR (CENTRAL) than in MEDLINE: that difference is simply explained by the fact that CENTRAL contains records from MEDLINE as well as from many other electronic databases and from hand-searched materials (see this post).
With respect to the other details: I don’t know which NLM filter they refer to, but if they mean the narrow therapy filter, this filter is specifically meant to find randomized controlled trials and is far more specific and less sensitive than the Cochrane methodological filters for retrieving controlled clinical trials. In addition, MEDLINE does not have more language restrictions per se: it just covers an (extensive) selection of journals. (Plus, people more easily use language limits in MEDLINE, but that is beside the point.)

Elsewhere the authors say:

In Figures 2 and 3 we use a variety of data sources to estimate the numbers of trials and systematic reviews published from 1950 to the end of 2007 (see Text S1). The number of trials continues to rise: although the data from CCTR suggest some fluctuation in trial numbers in recent years, this may be misleading because the Cochrane Collaboration virtually halted additions to CCTR as it undertook a review and internal restructuring that lasted a couple of years.

As I recall it, the situation is like this: till 2005 the Cochrane Collaboration ran the so-called “retag project”, in which they searched for controlled clinical trials in MEDLINE and EMBASE (with a very broad methodological filter). All controlled trial articles were loaded into CENTRAL, and the NLM retagged the controlled clinical trials that weren’t tagged with the appropriate publication type in MEDLINE. The Cochrane Collaboration stopped the laborious retag project in 2005, but still continues the (now) monthly electronic search updates performed by the various Cochrane groups (for their own topics only), and they still continue handsearching. So they didn’t (virtually?!) halt additions to CENTRAL, although it seems likely that stopping the retag project caused the plateau. Again the authors’ main points are dwarfed by not very accurate details.

Some interesting points in this paragraph:

  • We still do not know exactly how many trials have been done.
  • For a variety of reasons, a large proportion of trials have remained unpublished (negative publication bias!). (Note: Cochrane reviews try to lower this kind of bias by applying no language limits and by including unpublished data, i.e. conference proceedings, too.)
  • Many trials have been published in journals without being electronically indexed as trials, which makes them difficult to find. (Note: this has improved tremendously since the CONSORT statement, an evidence-based minimum set of recommendations for reporting RCT’s, and since the Cochrane retag project discussed above.)
  • Astonishing growth has occurred in the number of reports of clinical trials since the middle of the 20th century, and in reports of systematic reviews since the 1980s—and a plateau in growth has not yet been reached.
  • Trials are now registered in prospective trial registers at inception, theoretically enabling an overview of all published and unpublished trials. (Note: this will also make it easier to find out the reasons for not publishing data, or alterations of primary outcomes.)
  • Once the International Committee of Medical Journal Editors announced that their journals would no longer publish trials that had not been prospectively registered, far more ongoing trials were registered per week (200 instead of 30). In 2007, the US Congress made detailed prospective trial registration legally mandatory.

The authors do not discuss that better reporting of trials and the retag project might have facilitated the indexing and retrieval of trials.

How Close Are We to Archie Cochrane’s Goal?

According to the authors there are various reasons why Archie Cochrane’s goal will not be achieved without some serious changes in course:

  • The increase in systematic reviews didn’t displace other, less reliable forms of information (Figs 3 and 4)
  • Only a minority of trials have been assessed in systematic reviews
  • The workload involved in producing reviews is increasing
  • The bulk of systematic reviews are now many years out of date.

Where to Now?

In this paragraph the authors discuss what should be changed:

  • Prioritize trials
  • Wider adoption of the concept that trials will not be supported unless a SR has shown the trial to be necessary.
  • Prioritize SR’s: reviews should address questions that are relevant to patients, clinicians and policymakers.
  • Choose between elaborate reviews that answer a part of the relevant questions and “leaner” reviews of most of what we want to know. Apparently the authors have already chosen the latter: they prefer:
    • shorter and less elaborate reviews
    • faster production ànd updating of SR’s
    • no inclusion of study types other than randomized trials (unless the question concerns less common adverse effects)
  • More international collaboration, and thereby a better use of resources for SR’s and HTAs. As an example of a good initiative they mention “KEEP Up,” which will aim to harmonise updating standards and aggregate updating results, initiated and coordinated by the German Institute for Quality and Efficiency in Health Care (IQWiG) and involving key systematic reviewing and guidelines organisations such as the Cochrane Collaboration, Duodecim, the Scottish Intercollegiate Guidelines Network (SIGN), and the National Institute for Health and Clinical Excellence (NICE).

Summary and comments

The main aim of this paper is to discuss to what extent the medical profession has managed to make “critical summaries, by speciality or subspeciality, adapted periodically, of all relevant randomized controlled trials”, as proposed 30 years ago by Archie Cochrane.

The emphasis of the paper is mostly on the number of trials and systematic reviews, not on qualitative aspects, and there is too much emphasis on the methods used to determine those numbers.

The main conclusion of the authors is that an astonishing growth has occurred in the number of reports of clinical trials as well as in the number of SR’s, but that these systematic pieces of evidence shrink into insignificance compared with the unsystematic narrative reviews and case reports being published. That is an important, but not an unexpected, conclusion.

Bastian et al. don’t address whether systematic reviews have made the growing number of trials easier to access or digest. Neither do they go into the developments that have facilitated the retrieval of clinical trials and aggregate evidence from databases like PubMed: the Cochrane retag project, the CONSORT statement, and the existence of publication types and search filters (which they themselves use to filter out trials and systematic reviews). They also skip sources other than systematic reviews that make it easier to find the evidence: databases with evidence-based guidelines, the TRIP database, Clinical Evidence.
As Clay Shirky said: “It’s Not Information Overload. It’s Filter Failure.”

It is also good to note that case reports and narrative reviews serve other aims. For medical practitioners, rare case reports can be very useful in clinical practice, and good narrative reviews can be valuable for getting an overview of the field or for keeping up to date. You just have to know when to look for what.

Bastian et al. have several suggestions for improvement, but these suggestions are not always substantiated. For instance, they propose access to all systematic reviews and trials. Perfect. But how can this be attained? We could stimulate authors to publish their trials as open access papers. For Cochrane reviews this would be desirable but difficult: we cannot ask authors, who already work for months for free on an SR, to pay the publication fees themselves, and the Cochrane Collaboration is an international organization that does not receive subsidies for this. So how could this be achieved?

In my opinion, we can expect the most important benefits from the prioritizing of trials ànd SR’s, faster production ànd updating of SR’s, more international collaboration and less duplication. It is a pity the authors do not mention projects other than “KEEP Up”. As discussed in previous posts, the Cochrane Collaboration also recognizes the many issues raised in this paper, and aims to speed up the updates and to produce evidence on priority topics (see here and here). Evidence Aid is an example of a successful effort. But this is only the Cochrane Collaboration; there are many more non-Cochrane systematic reviews produced.

And then we arrive at the next issue: not all systematic reviews are created equal. There are a lot of so-called “systematic reviews” that aren’t the conscientiously, explicitly and judiciously created syntheses of evidence they ought to be.

Therefore, I do not think that the proposal that each single trial should be preceded by a systematic review is a very good idea.
In the Netherlands writing an SR is already required for NWO grants. In practice, people just approach me, as a searcher, in the days before Christmas, with the idea of submitting the grant proposal (including the SR) early in January. That is evidently a fast procedure, but it doesn’t result in a high-standard SR upon which others can rely.

Another point is that such simple and fast production of SR’s will only lead to a larger increase in the number of SR’s, exactly the effect the authors wanted to prevent.

Of course it is necessary to get a (reliable) picture of what has already been done and to prevent unnecessary duplication of trials and systematic reviews. The best solution would be a triplet (nano-publication)-like repository of the trials and systematic reviews already done.

Ideally, researchers and doctors should first check such a database for existing systematic reviews. Only if no recent SR is present should they continue to write one themselves. Perhaps it sometimes suffices to search for trials and write a short synthesis.

There is another point I do not agree with. I do not think that SR’s of interventions should only include RCT’s. We should include those study types that are relevant. If RCT’s furnish clear proof, then RCT’s are all we need. But sometimes, or for some topics/specialties, RCT’s are not available. Including other study designs and rating them with GRADE (proposed by Guyatt) gives a better overall picture. (Also see the post #notsofunny: ridiculing RCT’s and EBM.)

The authors strive for simplicity. However, the real world isn’t that simple. In this paper they have limited themselves to evidence of the effects of health care interventions. Finding and assessing prognostic, etiological and diagnostic studies is methodologically even more difficult. Still, many clinicians have these kinds of questions. Therefore systematic reviews of other study designs (diagnostic accuracy or observational studies) are also of great importance.

In conclusion, while I do not agree with all the points raised, this paper touches upon a lot of important issues and achieves what can be expected from a discussion paper: a thorough shake-up and a lot of discussion.

References

  1. Bastian, H., Glasziou, P., & Chalmers, I. (2010). Seventy-Five Trials and Eleven Systematic Reviews a Day: How Will We Ever Keep Up? PLoS Medicine, 7 (9) DOI: 10.1371/journal.pmed.1000326


May I Introduce to you: a New Name for the MedLibs Round….

30 09 2010

A couple of weeks or even months ago I asked you to vote for a new name for the MedLibs Round, a blog carnival about medical information.

The decision was clear.

Hurray!

And the winner is……

Drumroll….

Medical Information Matters!

…………………

I’m very pleased with the results because the name reflects that the blog carnival is about medical information and is not purely a carnival for medical librarians.

I hope that Robin of Survive the Journey is still willing and able to make the logo for Medical Information Matters.

Well, it will not be long before Medical Information Matters is “inaugurated”.
We won’t restart the counting, so it will be Medical Information Matters 2.8.

There are only a few days left for submitting.
Daniel Hooker at Danielhooker.com: Health libraries, Medicine and the Web is eagerly awaiting your submissions.

You can submit the URL of your post HERE at the Blog Carnival.

Daniel, in his call for submissions post:

I’d love to see posts on new things you’re trying out this year: new projects, teaching sessions, innovative services. Maybe it’s something tried and true that you’d like to reflect on. And this goes for anyone starting out fresh this term, not just librarians! We should all be brimming with enthusiasm; the doldrums of winter have yet to set in. If you can find the time to reflect and even just write up your busy workday, I’ll do my best to weave them all together. I, for one, hope to describe some of the projects that I’m involved with at my new workplace. But remember, this “theme” is only a suggestion, we’d be happy to see any contributions that you think would be of interest.

Educators, librarians, doctors or scientists, please remember: your submission matters…. There is no interesting blog carnival without your contribution. I’m looking forward to the next MedLibs Round, the first Medical Information Matters edition (it is a mouthful, isn’t it?).


Stories [9]: A Healthy Volunteer

20 09 2010

The host of the next Grand Rounds (Pallimed) asked us to submit a recent blog post from another blogger in addition to our own post.
I chose “Orthostatics – one more time” from DB’s Medical Rants and a post commenting on that from Musings of a Dinosaur.

Bob Centor’s (@medrants) post was about the value of orthostatic vital sign measurements (I won’t go into any details here) and about who should be doing them, nurses or doctors. In his post, Bob Centor also mentioned briefly that students see this as scut work, similar to drawing bloods yourself and carrying them to the lab.

That reminded me of something that happened when I was working in the lab as a PhD student, 20 years ago.

I was working on a chromosomal translocation between chromosome 14 and 18. (see Fig)

The t(14;18) is THE hallmark of follicular lymphoma (lymphoma is a B cell cancer of the lymph nodes).

This chromosomal translocation is caused by a faulty coupling of an immunoglobulin chain to the BCL-2 proto-oncogene during the normal rearrangement process of the immunoglobulins in the pre-B-cells.

This t(14;18) translocation can be detected by genetic techniques, such as PCR.

Using PCR, we found that the t(14;18) translocation was not only present in follicular lymphoma, but also in benign hyperplasia of tonsils and lymph nodes in otherwise healthy persons. Just one in 100,000 cells was positive. When I finally succeeded in sequencing the PCR-amplified breakpoints, we could show that each breakpoint was unique and not due to contamination of our positive control (read my posts on XMRV to see why this is important).

So we had a paper. Together with experiments in transgenic mice, our results hinted that the t(14;18) translocation is necessary but not sufficient for follicular lymphoma. Enhanced expression of BCL-2 might make the cells with the translocation “immortal”.

All fine, but hyperplastic tonsils might still form an exception, since they are not completely normal. We reasoned that if the t(14;18) was an accidental mistake in pre B cells it might sometimes be found in normal B cells in the blood too.

But then we needed normal blood from healthy individuals.

At the blood bank we could only get pooled blood at that time. But that wasn’t suitable, because if a translocation was present in one individual it would be diluted with the blood of the others.

So, as was quite common then, we asked our colleagues to donate some blood.

The entire procedure was cumbersome: a technician first had to enrich for T and B cells, we had to separate the cells by FACS, and I would then PCR and sequence them.

The PCR and sequencing techniques had to be adapted, because the frequency of positive cells was lower than in the tonsils and approached the detection limit… That is, in most people. But not in all. One of our colleagues had relatively prominent bands, and several breakpoints.

It was explained to him that this meant nothing, really, because we found similar translocations in every healthy person.

But still, I wouldn’t feel 100% sure if so many of my blood cells (one out of 1,000 or 10,000) contained t(14;18) translocations.

He was one of the first volunteers we tested, but from then on it was decided to test only anonymous persons.


Does the NIH/FDA Paper Confirm XMRV in CFS? Well, Ditch the MR and Scratch the X… and… you’ve got MLV.

30 08 2010

The long-awaited paper that would ‘solve’ the controversies about the presence of xenotropic murine leukemia virus-related virus (XMRV) in patients with chronic fatigue syndrome (CFS) was finally published in PNAS last week [1]. The study, a joint effort of the NIH and the FDA, had been withheld at the request of the authors [2], because it contradicted the results of another study performed by the CDC. Both studies were put on hold.

The CDC study was published online in Retrovirology on July 1 [3]. It was the fourth study in succession [4,5,6], and the first US study, that failed to demonstrate XMRV since researchers of the US Whittemore Peterson Institute (WPI) published their controversial paper on the presence of XMRV in CFS [7].

The WPI study had several flaws, but so had the negative papers: they had tested less rigorously defined CFS populations and had used old and/or too few samples (discussed in two previous posts here and here).
In a way, negative studies, failing to reproduce a finding, are less convincing than positive studies. Thus everyone was eagerly looking forward to the release of the PNAS paper, especially because the grapevine whispered that this study would confirm the original WPI findings.

Indeed, after publication both Harvey Alter, the team leader of the NIH/FDA study, and Judy Mikovits of the WPI emphasized that the PNAS paper essentially confirmed the presence of XMRV in CFS.

But that isn’t true. Not one single XMRV-sequence was found. Instead related MLV-sequences were detected.

Before I go into further details: please have a look at the previous posts if you are not familiar with the technical details, like the PCR technique. Here (and in a separate spreadsheet) I also describe the experimental differences between the studies.

Now what exactly did Lo et al. do? What were their findings? And in what respects do their findings agree or disagree with the WPI paper?

Like WPI, Lo et al. used nested PCR to detect XMRV. Nested means that there are two rounds of amplification. Outer primers are used to amplify the DNA lying between them (primers are a kind of very specific anchor, fitting a small, approximately 20 basepair long piece of DNA). Then a second round is performed with primers fitting a short sequence within the amplified sequence, or amplicon.

The first amplified gag product is ~730 basepairs long, the second ~410 or ~380 basepairs, depending on the primer sets used: Lo et al. used the same set of outer primers as WPI to amplify the gag gene, but the inner gag primers were either those of WPI (410 bp) or an in-house-designed primer set (380 bp).
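For readers who have never run a nested PCR, the logic of the two rounds can be mimicked in a toy script. The sequences below are made up for illustration (real primers are 20-25 bp long and the real amplicons ~730 and ~380-410 bp); the point is only the principle of a second search restricted to the first-round amplicon:

```python
# Toy illustration of nested PCR: round 1 "amplifies" between the outer
# primers; round 2 searches only within that amplicon with the inner primers.
def amplify(template, fwd_primer, rev_site):
    """Return the stretch from the forward primer through the reverse
    primer's binding site, or None if either site is missing."""
    if template is None:
        return None
    start = template.find(fwd_primer)
    if start == -1:
        return None
    end = template.find(rev_site, start + len(fwd_primer))
    if end == -1:
        return None
    return template[start:end + len(rev_site)]

# Made-up template: the outer primer sites flank the inner ones.
template = "GGCC" + "ATGCGT" + "AAATTT" + "CCCGGG" + "TTGACA" + "CCGG"

round1 = amplify(template, "ATGCGT", "TTGACA")  # outer primer pair
round2 = amplify(round1, "AAATTT", "CCCGGG")    # inner (nested) pair
print(round1)  # ATGCGTAAATTTCCCGGGTTGACA
print(round2)  # AAATTTCCCGGG
```

The nesting is what buys specificity: an aspecific first-round product is unlikely to also contain both inner primer sites in the right order.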

Using the nested PCR approach, Lo et al. found gag gene sequences in peripheral blood mononuclear cells (PBMC) of 86.5% of all tested CFS patients (32/37) and of 96% (!) of the rigorously evaluated CFS patients (24/25), compared with only 6.8% of the healthy volunteer blood donors (3/44). Half of the patients with gag detected in their PBMC also had detectable gag in their plasma (thus not in the cells). Vice versa, all but one patient with gag sequences in the plasma also had gag-positive PBMC. Thus these findings are consistent.

The gels (Figs 1 and 2) showing the PCR products from PBMC don’t look pretty, because many aspecific bands are amplified from human PBMC. These aspecific bands are lacking when plasma (which contains no PBMC) is tested. To get the idea: the researchers are trying to amplify a ~730 bp sequence using primers that are 23-25 basepairs long and that need to find the needle in the haystack (only 1 in 1,000 to 10,000 PBMC may be infected, and a single PBMC contains approximately 6×10^9 basepairs; only the order of A, C, G and T varies!). Thus there is a lot of competition from sequences that are a near-fit for the primers but far more abundant than the true gag sequences.

Therefore, detecting a band of the expected size does not suffice to demonstrate the presence of a particular viral sequence. Lo et al. verified that these were true gag sequences by sequencing each band of the appropriate size. All the sequenced amplicons appeared to be true gag sequences. What makes their finding particularly strong is that the sequences were not all identical. This was one of the objections against the WPI findings: they found only the same sequence in all patients (apart from some sequencing errors).

Another convincing finding is that the viral sequences could be demonstrated in samples taken 2-15 years apart. The more recent sequences had evolved and gained one or more mutations, exactly what one would expect from a retrovirus. Such findings also make contamination unlikely. The lack of PCR-amplifiable mouse mitochondrial DNA also makes contamination a less likely event (although personally I would be more afraid of contamination by the viral amplicons used as a positive control). The negative controls (samples without DNA) were negative in all cases. The researchers also took all necessary physical precautions to prevent contamination (i.e. the blood samples were prepared at a different lab from the one that did the testing, and neither lab had ever sequenced similar sequences before).
(People often suspect conspiracy whenever the possibility of contamination is mentioned, but this is a real pitfall when amplifying low-frequency targets. It took me two years to exclude contamination in my own experiments.)

To me the data look quite convincing, although we’re still far from concluding that the virus is integrated in the human genome and infectious. And, of course, the mere presence of a viral sequence in CFS patients does not demonstrate a causal relationship. The authors recognize this and will try to tackle it in future experiments.

Although the study seems well done, it doesn’t alleviate the confusion raised.

The reason, as said, is that the NIH/FDA researchers didn’t find a single XMRV sequence  in any of the samples!

Instead a variety of related MLV retrovirus sequences were detected.

Sure, the two retroviruses belong to the same “family”: the gag gene sequences share 96.6% homology.

However there are essential differences.

One is that XMRV is a xenotropic virus, hence the X: it can no longer enter mouse cells (MR = murine (mouse) related), but it can infect cells of other species, including humans (to be more precise, it has both xenotropic and polytropic characteristics). According to the phylogenetic tree Lo et al. constructed, the viral sequences they found are more diverse and best match the so-called polytropic MLV viruses, which are able to infect both mouse and non-mouse cells (see the PNAS commentary by Valerie Courgnaud et al. for an explanation).

The main question this paper raises is why they didn’t find XMRV, like WPI did.

Sure, Mikovits, who is “delighted” by the results, now hurries to say that in the meantime her group has found more diversity in the virus as well [8]. Or, as a critical CFS patient writes on his blog:

In my opinion, the second study is neither a confirmation for, nor a replication of the first. The second study only confirms that WPI is on to something and that there might be an association between a type of retroviruses and ME/CFS.
For 10 months all we’ve heard was “it’s XMRV”. If you didn’t find XMRV you were doing something wrong: wrong selection criteria, wrong methods, or wrong attitude. Now comes this new paper which doesn’t find XMRV either and it’s heralded as the long awaited replication and confirmation study. Well, it isn’t! Nice piece of spin by Annette Whittemore and Judy Mikovits from the WPI as you can see in the videos below (…). WPI may count their blessings that the NIH/FDA/Harvard team looked at other MLVs and found them or otherwise it could have been game over. Well, probably not, but how many negative studies can you take?

Assuming the NIH/FDA findings are true, the key question is not why most experiments were completely negative (there may be many reasons why; for one thing, they only tested for XMRV), but why Lo didn’t find any XMRV among the positive CFS patients, and why WPI didn’t find any MLV in their positive patient samples.

Two US cohorts of CFS patients with mutually exclusive presence of either XMRV or MLV, whereas the rest of the world finds nothing?? I don’t believe it. One would at least expect overlap.

My guess is that it must be something in the conditions used. Perhaps the set of primers.

As said, Lo used the same external primers as WPI, but varied the internal primers: sometimes they used those of WPI (GAG-I-F/GAG-I-R; F = forward, R = reverse), yielding a ~410 basepair product, and sometimes their own primers (NP116/NP117), yielding a ~380 basepair product. In the Materials and Methods section Lo et al. write: “The NP116/NP117 was an in-house–designed primer set based on the highly conserved sequences found in different MLV-like viruses and XMRVs”.
In the supplement they are more specific:

…. (GAG-I-F/GAG-I-R (intended to be more XMRV specific) or the primer set NP116/NP117 (highly conserved sequences for XMRV and MLV).

Is it possible that the conditions that WPI used were not so suitable for finding MLV?

Let’s look at Fig. S1 (partly depicted below), showing the multiple sequence alignment of 746 gag nucleotides (nt) amplified from 21 CFS patient samples (3 types) and one blood donor (BD22) [first 4 rows], and their comparison with known MLV (middle rows) and XMRV (last rows) sequences. There is nothing remarkable about the area of the reverse primer (not shown). The external forward primer (–>) fits all sequences (dots mean identical nucleotides). Just next to this primer is a 15 nt deletion specific for XMRV (—-), but that isn’t a hurdle for the external primers. The internal primers (–>) overlap, but the WPI internal primer starts earlier, in a region with heterogeneity: here there are two mismatches between MLV- and XMRV-like viruses. In this region the CFS-type MLV (nt 196) starts with TTTCA, whereas the XMRV sequences all have TCTCG. And yes, the WPI primer starts as follows: TCTCG. Thus there is a complete match with XMRV, but a 2 bp mismatch with MLV. Such a mismatch might easily explain why WPI (not using very optimal PCR conditions) didn’t find any low-frequency MLV sequences. The specific inner primers designed by the group of Lo and Alter do fit both sequences, so differences in this region don’t explain the failure of Lo et al. to detect XMRV. Perhaps MLV is more abundant and easier to detect?
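To make the mismatch argument concrete, here is a trivial check of the 5′ ends quoted above (just the five nucleotides visible in the alignment; the actual primers are of course much longer):

```python
# Count mismatches between the quoted start of the WPI inner primer and
# the two target variants from the Fig. S1 alignment (only 5 nt shown).
def mismatches(primer, target):
    return sum(p != t for p, t in zip(primer, target))

wpi_inner_start = "TCTCG"  # start of the WPI inner primer
xmrv_variant    = "TCTCG"  # XMRV sequences at nt ~196
cfs_mlv_variant = "TTTCA"  # CFS-type MLV sequences at the same position

print(mismatches(wpi_inner_start, xmrv_variant))     # 0: perfect match
print(mismatches(wpi_inner_start, cfs_mlv_variant))  # 2 mismatches
```

A 2 bp mismatch within a primer’s binding site can be enough to tip an already borderline, low-copy amplification into failure.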

But wait a minute: BD22, a variant detected in normal donor blood, does have the XMRV-variant sequence in this particular (very small) region. This sequence and the two other sequenced normal-donor MLV variants differ from the patient variants, although, according to Lo, both the patient and the healthy-donor variants differ more from XMRV than from each other (Figs 4 and S2). Using the eyeball test I do see many similarities between XMRV and BD22, though (and not only in the above region).

The media pay no attention to these differences between patient and healthy control viral sequences, and the different primer sets used. Did no one actually read the paper?

Whether these differences are relevant depends on whether identical conditions were used for each type of sample. It worries me that Lo says he sometimes used the WPI inner primer set and sometimes their own specific set. When is sometimes? It is striking that Fig 1 shows the results from CFS patients, done with the specific primers, while Fig 2 shows the results from normal donor blood, done with the WPI primers. Why? Is this the reason they picked up a sequence that fits the WPI primers (starting with TCTCG)?

I don’t like it. I want to know how many of the tested samples were positive or negative with each primer set. I want to see the PCR results not only of the CFS plasma (positive in half of the PBMC-positive cases), but also of the control plasma. And I want a mix of patient samples, normal samples, and positive and negative controls on one gel. Everyone doing PCR knows that the signal can differ per PCR run and per gel. Furthermore, the second PCR round gives way too many aspecific bands, whereas under optimal conditions you usually get rid of those.

Another confusing finding is a statement at the FDA site:

Additionally, the CDC laboratory provided 82 samples from their published negative study to FDA, who tested the samples blindly.  Initial analysis shows that the FDA test results are generally consistent with CDC, with no XMRV-positive results in the CFS samples CDC provided (34 samples were tested, 31 were negative, 3 were indeterminate).

What does this mean? Which inner primers did the FDA use? With the WPI inner primers, MLV sequences might simply not be found (although there might be other reasons as well, such as the less stringent patient criteria).

And what to think of the earlier WPI findings? They did find “XMRV” sequences while no one else did.

I have always been skeptical (see here and here), because:

  • There is no mention of sensitivity in their paper.
  • There is no mention of a positive control; the negative controls were just vials without added DNA.
  • There was no variation in the sequences detected, a statement that they retracted after the present NIH/FDA publication. What a coincidence.
  • Although the PCR is near the detection limit, only first-round products are shown. These are stronger than you would expect them to be after one round.
  • The latter two points are suggestive of contamination. No extra tests were undertaken to exclude this.
  • Surprisingly, in an open letter/news item (Feb 18), they disclose that culturing PBMC’s is necessary to obtain a positive signal. They refer to the original Science paper, but that paper doesn’t mention the need for culturing at all.
  • In another open letter*, Annette Whittemore, director of the WPI, writes to Dr McClure, virologist of one of the negative papers, that WPI researchers had detected XMRV in patient samples from both Dr. Kerr’s and Dr. van Kuppeveld’s cohorts. So if we must believe Annette, the negative samples weren’t negative.
  • At this stage of the controversy, the test is sold as “a reliable diagnostic tool” by a firm with strong ties to WPI. In one such mail, Mikovits says: “First of all the current diagnostic testing will define with essentially 100% accuracy! XMRV infected patients”.

Their PR machine, their ever-changing “findings” and their anti-scientific attitude are worrying. Read more about it at erv, here.

What can we conclude from all this? I don’t know. I presume that WPI did find “something”, but wasn’t cautious, critical and accurate enough in its drive to move forward (hence the often-changing statements). I presume that the four negative findings relate to the nature of the samples, or to the use of the WPI inner primers, or both. I assume that the NIH/FDA findings are real, although the actual positive rates might vary depending on the conditions used (I would love to see all the actual data).

Virologist “erv” is less positive about the quality of the findings and their implications. In one of her comments (#17) she responds:

No. An exogenous mouse ERV in humans makes no sense. But thats what their tree says. Mouse ERV is even more incredible than XMRV. Might be able to figure this out more if they upload their sequences to genbank. I realize they tried very hard not to contaminate their samples with mouse cells. That doesnt mean mouse DNA isnt in any of their store-bought reagents. There are H2O lanes in the mitochondral gels, but not the MLV gels (Fig 1, Fig 2). Why? Positive and negative controls go on every gel, end of story. First lesson every rotating student in our lab learns.

Finding mere virus-like sequences in CFS patients is not enough. We need more data, more carefully gathered and presented: not only in CFS patients and controls, but in cohorts of patients with different diseases and controls, under controlled conditions. This will tell us something about the specificity of the finding for CFS. We also need more information about XMRV infectivity and serology.

We also need to find out what it means to be normal, healthy and MLV-positive.

The research on XMRV/MLV seems to progress with one step forward, two steps back.

For the sake of the CFS patients, I truly hope that we are walking in the right direction.

Note

The title of this post was taken from: http://www.veteranstoday.com/2010/08/20/xmrv-renamed-to-hgrv/

References

  1. Lo SC, Pripuzova N, Li B, Komaroff AL, Hung GC, Wang R, & Alter HJ (2010). Detection of MLV-related virus gene sequences in blood of patients with chronic fatigue syndrome and healthy blood donors. Proceedings of the National Academy of Sciences of the United States of America PMID: 20798047
  2. Schekman R (2010). Patients, patience, and the publication process. Proceedings of the National Academy of Sciences of the United States of America PMID: 20798042
  3. Switzer WM, Jia H, Hohn O, Zheng H, Tang S, Shankar A, Bannert N, Simmons G, Hendry RM, Falkenberg VR, Reeves WC, & Heneine W (2010). Absence of evidence of xenotropic murine leukemia virus-related virus infection in persons with chronic fatigue syndrome and healthy controls in the United States. Retrovirology, 7 PMID: 20594299
  4. Erlwein, O., Kaye, S., McClure, M., Weber, J., Wills, G., Collier, D., Wessely, S., & Cleare, A. (2010). Failure to Detect the Novel Retrovirus XMRV in Chronic Fatigue Syndrome PLoS ONE, 5 (1) DOI: 10.1371/journal.pone.0008519
  5. Groom, H., Boucherit, V., Makinson, K., Randal, E., Baptista, S., Hagan, S., Gow, J., Mattes, F., Breuer, J., Kerr, J., Stoye, J., & Bishop, K. (2010). Absence of xenotropic murine leukaemia virus-related virus in UK patients with chronic fatigue syndrome Retrovirology, 7 (1) DOI: 10.1186/1742-4690-7-10
  6. van Kuppeveld, F., Jong, A., Lanke, K., Verhaegh, G., Melchers, W., Swanink, C., Bleijenberg, G., Netea, M., Galama, J., & van der Meer, J. (2010). Prevalence of xenotropic murine leukaemia virus-related virus in patients with chronic fatigue syndrome in the Netherlands: retrospective analysis of samples from an established cohort BMJ, 340 (feb25 1) DOI: 10.1136/bmj.c1018
  7. Lombardi VC, Ruscetti FW, Das Gupta J, Pfost MA, Hagen KS, Peterson DL, Ruscetti SK, Bagni RK, Petrow-Sadowski C, Gold B, Dean M, Silverman RH, & Mikovits JA (2009). Detection of an infectious retrovirus, XMRV, in blood cells of patients with chronic fatigue syndrome. Science (New York, N.Y.), 326 (5952), 585-9 PMID: 19815723
  8. Enserink M (2010). Chronic fatigue syndrome. New XMRV paper looks good, skeptics admit–yet doubts linger. Science (New York, N.Y.), 329 (5995) PMID: 20798285


MedLibs Round: Update & Call for Submissions June 2010

4 06 2010

In the past months we have had some excellent hosts of the round, truly “la crème de la crème” of the medical information/library blogosphere:

2010 was heralded by Dr Shock MD PhD, followed by Emerging Technologies Librarian (@pfanderson), The Krafty Librarian (@krafty) and @Eagledawg (Nikki Dettmar).

Nikki hosted the round for a second time, but now on her new blog, Eagledawg.net. The title: E(Patients)-I(Pad)-O(pportunities): Medlibs Round.

Last month the round was hosted by Danni (Danni4info) at The Health Informaticist, my favorite English EBM library blog. It is a great round again, “dealing with PubMed trending analysis, liability in information provision, the ‘splinternet’, a search engine optimisation (SEO) teaser from CILIP’s fresh-off-the-presses Update magazine, and more”. Missed it? You can read it here.

And now we have a few days left to submit our posts for the Next MedLibs Round, hosted by yet another excellent EBM/librarian blogger: @creaky at EBM and Clinical Support Librarians@UCHC.

She would like posts about “Reference Questions (or People) I Won’t Forget” (thus “memorable” encounters that took place in a public service/reference desk setting over your career) or about how the library or a librarian has helped you.
But, as always, other relevant and good quality posts related to medical information and medical librarianship will also be considered.

For more details see the (2nd!) Call for submissions post at EBM and Clinical Support Librarians@UCHC

I am sure you all have a story to tell. So please share it with @creaky and us!

As always, you can submit the permalink (URL) (of your post(s) on your blog) here.

************

I would also like to take the opportunity to ask if there are any med- or medlib-bloggers out there who would like to host the MedLibs Round in August, September or October.

The MedLibs Round is still called the MedLibs Round because I got too little response (6 votes, including mine) to the poll with other name suggestions. Neither did I get any suggestions for the design of the MedLibs logo, which Robin of Survive the Journey has offered to make [for details see the request here]. I hope you will take the time to fill in the poll below, and to think about suggestions for a logo. Thanks!

@ links to the Twitter accounts







