P C R : Publish, Cite Cyagen and get a Reward of $100 or more !?

15 08 2015

Thursday I got an odd email from Cyagen.

Cyagen offered me $100 or more in reward for citing their animal model services in an article. Even more surprisingly, the reward depended on the impact factor of the journal I was publishing in.

Cyagen gives the following example:

If you published a paper in Science (IF = 30) and cite Cyagen Biosciences, you will be entitled to a voucher with a face value of $3,000 upon notification of the publication (PMID).

Thus the higher the impact factor the higher the reward.
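Taken at face value, the example implies a linear scale of roughly $100 per impact-factor point. Below is a minimal sketch of that implied arithmetic; the linear rule is my own reading of the Science example, not a formula Cyagen actually publishes.

    # Sketch of the reward scale implied by Cyagen's example (IF 30 -> $3,000),
    # i.e. roughly $100 per impact-factor point. This linear rule is an assumption
    # inferred from the example, not Cyagen's published terms.

    def implied_voucher(impact_factor: float, rate_per_if_point: float = 100.0) -> float:
        """Estimate the voucher value for a journal with the given impact factor."""
        return rate_per_if_point * impact_factor

    for journal, impact_factor in [("a journal with IF 2.5", 2.5), ("Science (per the example)", 30.0)]:
        print(f"{journal}: ${implied_voucher(impact_factor):,.0f}")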

[Screenshot of the Cyagen email]

At first I thought this was link bait, because the email was sent from vresp.com (a domain with a spam reputation) and not from cyagen.com.

But tweets from @doctorsdilemma and @PhytophthoraLab pointed me to this advertorial at Cyagen’s website.

It was real.

[Screenshot of the advertorial on Cyagen’s website]

This doesn’t feel right, because it “smells like a conflict of interest”.

In clinical research we all know where conflicts of interest can lead: biased results. Or, according to Corollary 5 in the famous Ioannidis paper “Why Most Published Research Findings Are False”:

“The greater the financial and other interests and prejudices in a scientific field, the less likely the research findings are to be true.”

We expressed our concerns on Twitter, but it took a while for Cyagen to respond (see the Facebook Q&A *).

Meanwhile Ben Goldacre (on his Bad Science blog) and Retraction Watch each wrote a post about Cyagen’s “pay-for-citation program”.

Ben Goldacre emphasizes the COI-problem:

I would imagine that this is something journal editors will be interested in, and concerned by. We worry about “conflict of interest” a lot in science, and especially medicine: if someone has shares in a company, or receives money from it, and their publication discusses that company’s products, then this needs to be declared in the paper. If you get funding for a study then, again, you must declare it. If you have received payment for making an academic citation, then in my view this is clearly a source of funds that should be declared.

The most problematic point, according to Goldacre, is that (some of) the researchers “have received funds from Cyagen, in exchange for an academic citation.” He keeps an archive of the 164 papers that cite Cyagen, so one can always check whether “any one of them has declared receiving money from the company” (because, in his opinion, they should).

Some of the commenters on Goldacre’s post go even further and state:

“It’s one thing to fund a study. It’s another thing entirely to pay someone to write what you want — that’s not science, it’s PR. You’re not being puritanical at all, this is corruption, plain and simple.”

and

“This is an old and very well researched problem. TV makers get paid to have the popular detective drive a certain make of car, scoff a certain brand of drink and so on. That this kind of advertising is not innocent and that it works is beyond doubt. The media have established firm rules about this, science journals need just copy them.”

Although Retraction Watch lets both sides be heard, the title of its post “Researchers, need $100? Just mention Cyagen in your paper!” suggests researchers are being bribed by Cyagen to mention its product anywhere in the paper.

I agree that the Cyagen incentive is a (very curious) marketing trick, but it has nothing to do with ghostwriting or corruption, and IMHO the authors of the 164 papers are not to blame and need not declare anything.

Let me explain why.

  • First, as everybody familiar with benchwork knows, authors are required to mention the brand and source of the reagents and tools they used (including strains of mice). This is for transparency and reproducibility. There can be huge differences between products, and it is important to know which enzyme, primer or mouse strain someone has used.
  • Authors only MENTION the products, reagents etc. that they have used. Mentioning a brand is not an endorsement, although you probably wouldn’t choose a product if you thought it worked suboptimally.
    Indeed, the first two papers on the Cyagen citations list both mention Cyagen only ONCE, in the materials and methods section, where it belongs. [Screenshot: Cyagen mentioned in the materials and methods]
  • The product itself is not being tested; it is just being used to test something else. (It would be different if the discount were for, say, a dietary supplement that was itself being tested.)
  • The authors don’t receive money, they receive a discount on their next purchase. This is common practice in the lab when you are a regular customer.
  • You don’t write a COI statement for getting (normal) discounts.

Still, I do think that this incentive is rather inappropriate and not a very wise thing to do (even from a marketing point of view).

It is not the discount that is problematic, neither is the mention of the product’s name and brand in the articles (as this is required).

It is the coupling of the discount (erroneously called a “reward”) to “a mention” (erroneously called a “citation”) that is improper, and even more so the coupling of the size of the discount to the impact factor of the journal.

The idea behind it is that Cyagen not only wants to thank its customers; it also wants to “increase the number of publications featuring Cyagen as a product or service provider to add strength to our company’s reputation and visible experience”.

Of course, Cyagen’s reputation does not depend linearly on the number of publications, and certainly not on the citation score. Reputation depends on other things, like the quality of your product, client satisfaction and … your reputation.

It now seems that the entire marketing campaign will just have the opposite effect: it will harm Cyagen’s reputation (and perhaps also that of the researchers).

My advice to Cyagen is to remove the incentive from their website right away and to stop the unsolicited mailing.

And please, Cyagen, stop defending your P(C)R tactics; just admit it was the wrong approach and learn from it.

* Later I found out that the Q&A on Facebook had been extracted from an interview that a blogger was still in the process of conducting with a product manager at Cyagen. Apparently social media etiquette is another thing Cyagen doesn’t master very well.

Update: 2015-08-16





FUTON Bias. Or Why Limiting to Free Full Text Might not Always be a Good Idea.

8 09 2011

A few weeks ago I was discussing possible relevant papers for the Twitter Journal Club (hashtag #TwitJC), a successful initiative on Twitter that I have discussed previously here and here [7,8].

I proposed an article that appeared behind a paywall. Annemarie Cunningham (@amcunningham) immediately shot the idea down, stressing that open access (OA) is a prerequisite for the #TwitJC journal club.

One of the #TwitJC organizers, Fi Douglas (@fidouglas on Twitter), argued that using paid-for journals would defeat the objective that #TwitJC is open to everyone. I can imagine that fee-based articles could set too high a threshold for many doctors. In addition, I sympathize with promoting OA.

However, I disagree with Annemarie that an OA (or rather free) paper is a prerequisite if you really want to talk about what might impact on practice. On the contrary, limiting to free full text (FFT) papers in PubMed might lead to bias: picking “low hanging fruit of convenience” might mean that the paper isn’t representative and/or doesn’t reflect the current best evidence.

But is there evidence for my theory that selecting FFT papers might lead to bias?

Let’s first look at the extent of the problem. What percentage of papers do we miss by limiting to free-access papers?

A survey in PLoS ONE by Björk et al. [1] found that one in five peer-reviewed research papers published in 2008 were freely available on the internet. Overall, 8.5% of the articles published in 2008 (and 13.9% in Medicine) were freely available at the publishers’ sites (gold OA). For an additional 11.9%, free manuscript versions could be found via the green route, i.e. copies in repositories and on websites (7.8% in Medicine).
As a commenter rightly stated, the lag time is also important, as we would like to have immediate access to recently published research, yet some publishers (37%) impose an access embargo of 6-12 months or more. (These papers were largely missed, as the 2008 OA status was assessed in late 2009.)

[Figure from the PLoS ONE survey by Björk et al.]

The strength of the paper is that it measures OA prevalence on an article basis, not by calculating the share of journals that are OA: an OA journal generally contains a lower number of articles.
The authors randomly sampled from 1.2 million articles using the advanced search facility of Scopus. They measured what share of OA copies the average researcher would find using Google.

Another paper, published in J Med Libr Assoc (2009) [2] and using methods similar to the PLoS survey, examined the state of open access (OA) specifically in the biomedical field. Because of its broad coverage and popularity in the biomedical field, PubMed was chosen to collect the target sample of 4,667 articles. Matsubayashi et al. used four different databases and search engines to identify full-text copies. The authors reported an OA percentage of 26.3 for peer-reviewed articles (70% of all articles), which is comparable to the results of Björk et al. More than 70% of the OA articles were provided through journal websites. The percentages of green OA articles from the websites of authors or in institutional repositories were quite low (5.9% and 4.8%, respectively).

In their discussion of the findings of Matsubayashi et al., Björk et al. [1] quickly assessed the OA status in PubMed by using the new “link to Free Full Text” search facility. First they searched for all “journal articles” published in 2005 and then repeated this with the further restriction of “link to FFT”. The PubMed OA percentages obtained this way were 23.1 for 2005 and 23.3 for 2008.
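As an aside, a similar quick-and-dirty comparison can be run today against PubMed’s E-utilities. The sketch below is my own illustration, not the method Björk et al. used; it assumes the standard “free full text[sb]” PubMed filter and simply compares yearly counts with and without that restriction.

    # Quick-and-dirty estimate of the share of PubMed records with free full text,
    # via NCBI's E-utilities (esearch). Illustrative only; this is not the exact
    # method Björk et al. used, and it assumes the "free full text[sb]" filter.
    import json
    import urllib.parse
    import urllib.request

    EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

    def pubmed_count(term: str) -> int:
        """Return the number of PubMed records matching a search term."""
        query = urllib.parse.urlencode({"db": "pubmed", "term": term, "retmode": "json"})
        with urllib.request.urlopen(f"{EUTILS}?{query}") as response:
            return int(json.load(response)["esearchresult"]["count"])

    year = 2008
    total = pubmed_count(f"journal article[pt] AND {year}[dp]")
    free = pubmed_count(f"journal article[pt] AND {year}[dp] AND free full text[sb]")
    print(f"{year}: {free}/{total} = {free / total:.1%} with free full text")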

This proportion of biomedical OA papers is gradually increasing. A chart in Nature’s News Blog [9] shows that the proportion of freely available papers indexed in PubMed each year has increased from 23% in 2005 to above 28% in 2009.
(Methods are not shown, though. The 2008 data are higher than those of Björk et al., who noticed little difference with 2005. The data for this chart, however, are from David Lipman, NCBI director and driving force behind the digital OA archive PubMed Central.)
Again, because of the embargo periods, not all literature is immediately available at the time that it is published.

In summary, we would miss about 70% of biomedical papers by limiting to FFT papers. Because of the embargo periods, the proportion missed would be even larger among recently published papers.

Of course, the key question is whether ignoring relevant studies not available in full text really matters.

Reinhard Wentz of the Imperial College Library and Information Service already argued in a visionary 2002 Lancet letter[3] that the availability of full-text articles on the internet might have created a new form of bias: FUTON bias (Full Text On the Net bias).

Wentz reasoned that FUTON bias will not affect researchers who are used to comprehensive searches of published medical studies, but that it will affect staff and students with limited experience in doing searches and that it might have the same effect in daily clinical practice as publication bias or language bias when doing systematic reviews of published studies.

Wentz also hypothesized that FUTON bias (together with “no abstract available” (NAA) bias) will affect the visibility and the impact factor of OA journals. He makes a reasonable case that the NAA bias will affect publications on new, peripheral, and under-discussion subjects more than established topics covered in substantive reports.

The study of Murali et al. [4], published in Mayo Clinic Proceedings in 2004, confirms that the availability of journals on MEDLINE as FUTON or NAA affects their impact factor.

Of the 324 journals screened by Murali et al., 38.3% were FUTON, 19.1% NAA and 42.6% had abstracts only. The mean impact factor was 3.24 (±0.32), 1.64 (±0.30), and 0.14 (±0.45), respectively! The authors confirmed this finding by showing a difference in impact factors for journals available in both the pre- and the post-Internet era (n=159).

Murali et al. informally questioned many physicians and residents at multiple national and international meetings in 2003. These doctors uniformly admitted relying on FUTON articles on the web to answer a sizable proportion of their questions. A study by Carney et al. (2004) [5] showed that 98% of US primary care physicians used the internet as a resource for clinical information at least once a week and mostly used FUTON articles to aid decisions about patient care, patient education, and medical student or resident instruction.

Murali et al. therefore conclude that failure to consider FUTON bias may not only affect a journal’s impact factor, but could also limit consideration of the medical literature by ignoring relevant for-fee articles, and thereby influence medical education, akin to publication or language bias.

This proposed effect of the FFT limit on citation retrieval for clinical questions was examined in a more recent study (2008), published in J Med Libr Assoc [6].

Across all 4 questions based on a research agenda for physical therapy, the FFT limit reduced the number of citations to 11.1% of the total number of citations retrieved without the FFT limit in PubMed.

Even more important, high-quality evidence such as systematic reviews and randomized controlled trials were missed when the FFT limit was used.

For example, searching without the FFT limit retrieved 10 systematic reviews of RCTs, against only one when the FFT limit was used. Likewise, 28 RCTs were retrieved without the FFT limit, and only one with it.
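For readers who want to see the effect on their own clinical questions, a small variation of the same E-utilities sketch can compare how many RCTs and systematic reviews survive the free-full-text restriction. The question below is a made-up example, not one of the four searches Krieger et al. actually tested.

    # Compare how many RCTs and systematic reviews PubMed returns for a clinical
    # question with and without the free-full-text filter. The question is a
    # made-up example, not one of the searches Krieger et al. tested.
    import json
    import urllib.parse
    import urllib.request

    EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

    def pubmed_count(term: str) -> int:
        """Return the number of PubMed records matching a search term."""
        query = urllib.parse.urlencode({"db": "pubmed", "term": term, "retmode": "json"})
        with urllib.request.urlopen(f"{EUTILS}?{query}") as response:
            return int(json.load(response)["esearchresult"]["count"])

    question = "exercise therapy AND low back pain"
    for label, ptype in [("RCTs", "randomized controlled trial[pt]"),
                         ("systematic reviews", "systematic review[pt]")]:
        unrestricted = pubmed_count(f"({question}) AND {ptype}")
        restricted = pubmed_count(f"({question}) AND {ptype} AND free full text[sb]")
        print(f"{label}: {restricted} of {unrestricted} remain with the FFT limit")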

The proportion of missed studies (approximately 90%) is higher than in the studies mentioned above. Possibly this is because real searches were tested and only relevant clinical studies were considered.

The authors rightly conclude that consistently missing high-quality evidence when searching clinical questions is problematic because it undermines the process of Evidence-Based Practice. Krieger et al. finally conclude:

“Librarians can educate health care consumers, scientists, and clinicians about the effects that the FFT limit may have on their information retrieval and the ways it ultimately may affect their health care and clinical decision making.”

It is the hope of this librarian that she did a little education in this respect and clarified the point that limiting to free full text might not always be a good idea, especially if the aim is to critically appraise a topic, to educate, or to discuss current best medical practice.

References

  1. Björk, B., Welling, P., Laakso, M., Majlender, P., Hedlund, T., & Guðnason, G. (2010). Open Access to the Scientific Journal Literature: Situation 2009 PLoS ONE, 5 (6) DOI: 10.1371/journal.pone.0011273
  2. Matsubayashi, M., Kurata, K., Sakai, Y., Morioka, T., Kato, S., Mine, S., & Ueda, S. (2009). Status of open access in the biomedical field in 2005 Journal of the Medical Library Association : JMLA, 97 (1), 4-11 DOI: 10.3163/1536-5050.97.1.002
  3. Wentz, R. (2002). Visibility of research: FUTON bias The Lancet, 360 (9341), 1256 DOI: 10.1016/S0140-6736(02)11264-5
  4. Murali NS, Murali HR, Auethavekiat P, Erwin PJ, Mandrekar JN, Manek NJ, & Ghosh AK (2004). Impact of FUTON and NAA bias on visibility of research. Mayo Clinic proceedings. Mayo Clinic, 79 (8), 1001-6 PMID: 15301326
  5. Carney PA, Poor DA, Schifferdecker KE, Gephart DS, Brooks WB, & Nierenberg DW (2004). Computer use among community-based primary care physician preceptors. Academic medicine : journal of the Association of American Medical Colleges, 79 (6), 580-90 PMID: 15165980
  6. Krieger, M., Richter, R., & Austin, T. (2008). An exploratory analysis of PubMed’s free full-text limit on citation retrieval for clinical questions Journal of the Medical Library Association : JMLA, 96 (4), 351-355 DOI: 10.3163/1536-5050.96.4.010
  7. The #TwitJC Twitter Journal Club, a new Initiative on Twitter. Some Initial Thoughts. (laikaspoetnik.wordpress.com)
  8. The Second #TwitJC Twitter Journal Club (laikaspoetnik.wordpress.com)
  9. How many research papers are freely available? (blogs.nature.com)




Cochrane Library underused in the US??

10 02 2008

While browsing around I have already come across a few interesting blogs. One of them is by the Krafty Librarian. Everything she has to report in my field of interest is well worth knowing. I have even already managed to get an RSS feed of her blog. So I am a day ahead of the course (I accidentally clicked the RSS feed button and was automatically offered a choice of umpteen RSS readers, of which I just randomly downloaded Google’s, and it worked, for a single feed).

What I have selected can be read under “Laika’s selecties” in the right-hand column, an option that Google offers.

The Krafty Librarian pointed, among other things, to a recent article in Obstetrics & Gynecology. I want to highlight it here because it ties in so nicely with the request for financial support to make the Cochrane Library accessible to everyone in Europe. Methodologically it is not very strong, but it does put the finger on the sore spot: 1. not all so-called evidence-based reviews are evidence based; 2. how do you make sure that the evidence (neatly lined up in a systematic review) is actually found and used?

Do clinical experts rely on the Cochrane Library? Obstet Gynecol. 2008 Feb;111(2):420-2. Grimes DA, Hou MY, Lopez LM, Nanda K.

Abstract:
In part because of limited public access, Cochrane reviews are underused in the United States compared with other developed nations. To assess use of these reviews by opinion leaders, we examined citation of Cochrane reviews in the Clinical Expert Series of Obstetrics & Gynecology from inception through June of 2007. We reviewed all 54 articles for mention of Cochrane reviews, then searched for potentially relevant Cochrane reviews that the authors could have cited. Thirty-six of 54 Clinical Expert Series articles had one or more relevant Cochrane reviews published at least two calendar quarters before the Clinical Expert Series article. Of these 36 articles, 19 (53%) cited one or more Cochrane reviews. We identified 187 instances of relevant Cochrane reviews, of which 40 (21%) were cited in the Clinical Expert Series articles. No temporal trends were evident in citation of Cochrane reviews. Although about one half of Clinical Expert Series articles cited relevant Cochrane reviews, most eligible reviews were not referenced. Wider use of Cochrane reviews could strengthen the scientific basis of this popular series.

In short: the Clinical Expert Series in Obstetrics & Gynecology is a popular series devoted to practical evidence-based overviews in this field, in which the evidence is weighed against the clinical expertise of the author (an opinion leader). The authors examined, over a certain time period, how many articles cited a relevant Cochrane review and how many wrongly did not. Only 21% of the relevant reviews were cited, which is remarkable, since Cochrane reviews are considered very high-level evidence.

How could this be explained? According to Grimes et al.:

  • Some authors are not aware of the Cochrane Library. This seems unlikely, because Cochrane abstracts have been included in PubMed since 2000.
  • Although authors are asked to base their manuscripts on evidence, they are not explicitly asked to search the Cochrane Library for relevant reviews.
  • Limited access to the full text of Cochrane reviews. The authors, however, work at medical schools, which generally do have access to the Cochrane Library.
  • A preference for citing the primary sources (specific RCTs, randomized controlled trials) rather than the systematic reviews that summarize these RCTs.
  • Cochrane reviews were found but not considered relevant.
  • Cochrane reviews are not user-friendly (format, emphasis on methodology and inaccessible wording). [9]

The following passage illustrates the consequence when important evidence is not found:

Despite their clinical usefulness, Cochrane systematic reviews of randomized controlled trials are underused in the United States. For example, a Cochrane review documenting that magnesium sulfate is ineffective as a tocolytic agent received little attention in the United States and Canada, where this treatment has dominated practice for several decades.[10] This therapy had been abandoned in other industrialized nations, where access to the Cochrane Library is easier.[11] Citizens of many countries have free online access to the Cochrane Library through governmental or other funding. In the United States, only Wyoming residents have public access through libraries, thanks to funding by its State Legislature.

9. Rowe BH, Wyer PC, Cordell WH. Evidence-based emergency medicine. Improving the dissemination of systematic reviews in emergency medicine. Ann Emerg Med 2002;39:293-5.
10. Crowther CA, Hiller JE, Doyle LW. Magnesium sulphate for preventing preterm birth in threatened preterm labour. Cochrane Database Syst Rev 2002;(4):CD001060.

11. Grimshaw J. So what has the Cochrane Collaboration ever done for us? A report card on the first 10 years. CMAJ 2004;171:747-9.