Between the Lines. Finding the Truth in Medical Literature [Book Review]

19 07 2013

In the 1970s a study was conducted among 60 physicians and physicians-in-training. They had to solve a simple problem:

“If a test to detect a disease whose prevalence is 1/1000 has a false positive rate of 5 %, what is the chance that a person found to have a positive result actually has the disease, assuming that you know nothing about the person’s symptoms or signs?” 

Half of the “medical experts” thought the answer was 95%.
Only 18% of the doctors arrived at the right answer: 2%.

If you are a medical expert who comes to the same faulty conclusion (or need a refresher on how to arrive at the right answer) you might benefit from the book written by Marya Zilberberg: “Between the Lines. Finding the Truth in Medical Literature”.

The same is true for a patient whose doctor thinks he/she is among the 95% to benefit from such a test…
Or for journalists who translate medical news to the public…
Or for peer reviewers or editors who have to assess biomedical papers…

In other words, this book is useful for everyone who wants to be able to read “between the lines”. For everyone who needs to examine medical literature critically from time to time and doesn’t want to rely solely on the interpretation of others.

I hope that I didn’t scare you off with the above example. Between the Lines surely is NOT a complicated epidemiology textbook, nor a dull study book where you have to struggle through lots of definitions, difficult tables and statistical formulas, and where each chapter is followed by a set of review questions that test what you learned.

This example is presented halfway through the book, at the end of Part I. By then you have enough tools to solve the question yourself. But even if you don’t feel like doing the exact calculation at that point, you have a solid basis for understanding the bottom line: the enormous gap between the perceived (95%) and the actual (2%) chance that a person with a positive test truly has the disease serves as the pool for overdiagnosis and overtreatment.
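
If you do want the arithmetic, here is a minimal sketch of the calculation (assuming, as the puzzle implicitly does, a test that picks up every true case):

    # Positive predictive value (PPV) of the test in the example above
    prevalence = 1 / 1000       # 1 in 1000 people has the disease
    sensitivity = 1.0           # assumption: the test misses no true cases
    false_positive_rate = 0.05  # 5% of healthy people nevertheless test positive

    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * false_positive_rate

    ppv = true_pos / (true_pos + false_pos)
    print(f"P(disease | positive test) = {ppv:.3f}")  # ~0.020, i.e. the right answer of 2%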

In the previous chapters of Part I (“Context”), you have learned about the scientific methods in clinical research, uncertainty as the only certain feature of science, the importance of denominators, outcomes that matter and outcomes that don’t, Bayesian probability, evidence hierarchies, heterogeneous treatment effects (does the evidence apply to this particular patient?) and all kinds of biases.

Most reviewers prefer Part I of the book. Personally I find Part II (“Evaluation”) just as interesting.

Part II deals with the study question and study design, the pros and cons of observational and interventional studies, validity, hypothesis testing and statistics.

Perhaps Part II is somewhat less narrative, and it deals with tougher topics like statistics. But I find it very valuable for being able to critically appraise a study. I have never seen a better description of odds: somehow odds are easier to grasp if you substitute “horse A” and “horse B” for “treatment A” and “treatment B”, and “losing the race” for “death”.
I knew the basic differences between cohort studies, case-control studies and so on, but I never quite realized before that the odds ratio is the only measure of association available in a case-control study, and that case-control studies cannot estimate incidence or prevalence (as shown in a nice overview in Table 4).
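
To make the odds ratio concrete, here is a small worked example with made-up counts (the numbers are mine, not from the book):

    # Odds ratio from a hypothetical 2x2 case-control table
    #                                   cases  controls
    exposed_cases, exposed_controls     = 40,   20
    unexposed_cases, unexposed_controls = 60,   80

    odds_cases = exposed_cases / unexposed_cases           # exposure odds among cases: 40/60
    odds_controls = exposed_controls / unexposed_controls  # exposure odds among controls: 20/80
    odds_ratio = odds_cases / odds_controls
    print(f"OR = {odds_ratio:.2f}")  # 2.67

Note that nothing in such a table tells you how common the disease is in the population, which is exactly why a case-control study cannot estimate incidence or prevalence.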

Unlike many other books about “the art of reading medical articles”, “study designs” or “Evidence Based Medicine”, Marya’s book is easy to read. It is written in a conversational tone and statements are illustrated with current, appealing examples, like the overestimation of the risk of death from the H1N1 virus, breast cancer screening and hormone replacement therapy.

Although I had printed the book in the wrong order (page 136 next to page 13, etc.), I was able to read (and understand) a third of it (the more difficult Part II) during a two-hour car trip…

Because this book is comprehensive, yet accessible, I recommend it highly to everyone, including fellow librarians.

Marya even mentions medical librarians as a separate target audience:

Medical librarians may find this book particularly helpful: Being at the forefront of evidence dissemination, they can lead the charge of separating credible science from rubbish.

(thanks Marya!)

In addition, this book may be indirectly useful to librarians, as it may help them choose appropriate methodological filters and search terms for certain EBM questions. For etiology questions, words like “cohort”, “case-control”, “odds”, “risk” and “regression” might help to find the “right” studies.
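
As a hypothetical illustration of my own (not from the book), such terms could be combined into a PubMed query along these lines:

    # Sketch of a PubMed query for an etiology question, mixing topic terms
    # with methodological terms that flag observational designs
    query = (
        '(smoking AND "lung neoplasms"[MeSH]) AND '
        '("cohort studies"[MeSH] OR "case-control studies"[MeSH] '
        'OR odds[tiab] OR risk[tiab] OR regression[tiab])'
    )
    print(query)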

By the way, Marya Zilberberg is @murzee on Twitter, and she writes at her blog Healthcare etc.

p.s. 1 I want to apologize to Marya for writing this review more than a year after the book was published. For personal reasons I found little time to read and blog. Luckily the book lost none of its topicality.

p.s. 2 Patients who are not very familiar with critical reading of medical papers might benefit from reading “Your Medical Mind” first [1].







BAD Science or BAD Science Journalism? – A Response to Daniel Lakens

10 02 2013

Two weeks ago there was a hot debate among Dutch tweeps on “bad science, bad science journalism and bad science communication”. This debate was started and fueled by different Dutch blog posts on the topic [1,4-6].

A controversial post, with both fierce proponents and fierce opponents, was that of Daniel Lakens [1], an assistant professor in Applied Cognitive Psychology.

I was among the opponents. Not because I don’t like a fresh point of view, but because Daniel’s reasoning is flawed and because he continuously compares apples and oranges.

Since Twitter debates can’t go in depth and lack structure, and since I cannot comment on his Google Sites blog, I pursue the discussion here.

The title of Daniel’s post is (freely translated, like the rest of his post):

“Is this what one calls good science?”

In his post he criticizes a Dutch science journalist, Hans van Maanen, and specifically his recent column [2], where Hans discusses a paper published in Pediatrics [3].

This longitudinal study tested the Music Marker theory among 309 Dutch kids. The researchers gathered information about the kids’ favorite types of music and tracked incidents of “minor delinquency”, such as shoplifting or vandalism, from the time they were 12 until they reached age 16 [4]. The researchers conclude that liking music that goes against the mainstream (rock, heavy metal, gothic, punk, African American music and electronic dance music) at age 12 is a strong predictor of future minor delinquency at 16, in contrast to liking chart pop, classical music or jazz.

The university press office sent out a press release [5], which was picked up by news media [4,6], and one of the Dutch authors of this study, Loes Keijsers, tweeted enthusiastically: “Want to know whether a 16-year-old will suffer from delinquency? Then look at his music taste at age 12!”

According to Hans, Loes could easily have broadcast (more) balanced tweets, like “Music preference doesn’t predict shoplifting” or “12-year-olds who like Bach keep quiet about shoplifting when 16”. But even then, Hans argues, the tweets wouldn’t have been scientifically underpinned either.

In column style Hans explains why he thinks the study isn’t methodologically strong: no absolute numbers are given; 7 out of 11 (!) music styles are positively associated with delinquency, but these correlations are not impressive: the strongest predictor (Gothic music preference) explains no more than 9% of the variance in delinquent behaviour, which can include anything from shoplifting and vandalism to fighting, graffiti spraying and switching price tags. Furthermore, the risks of later “delinquent” behavior are small: on a scale from 1 (never) to 4 (4 times or more) the mean risk was 1.12. Hans also wonders whether it is a good idea to monitor kids with a certain music taste.
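
To put that “9% of the variance” in perspective: if the figure corresponds to a squared simple correlation (my assumption, purely for illustration; the paper itself reports regression coefficients), the underlying correlation is only about 0.3:

    # Explained variance and correlation are linked by r**2
    # (assumption: the 9% corresponds to a squared simple correlation)
    r = 0.30
    print(f"r = {r:.2f} -> r^2 = {r**2:.2f}")  # 0.09, i.e. 9% of the variance
    # Put differently: 91% of the variance in delinquency is left unexplained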

Thus Hans concludes: “this study isn’t good science”. Daniel, however, concludes that Hans’ writing is not good science journalism.

First Daniel recalls that he and other PhD students took a course on how to peer review scientific papers. On the basis of their peer review of a (published) article, 90% of the students decided to reject it. The two main lessons Daniel learned were:

  • It is easy to criticize a scientific paper and grind it down. No single contribution to science (no single article) is perfect.
  • New scientific insights, although imperfect, are worth sharing, because they help to evolve science. *¹

According to Daniel, science journalists often make the same mistakes as the peer-reviewing PhD students: criticizing individual studies without a “meta-view” on science.

Peer review and journalism however are different things (apples and oranges if you like).

Peer review (with all its imperfections) serves to filter, check and improve the quality of individual scientific papers, (usually) before they are published [10]. My own papers that went through peer review were generally accepted. Of course there were negative reviewers, often the ignorant ones, and the naggers, but many reviewers had critiques that helped to improve my paper, sometimes substantially. As a peer reviewer myself I too try to separate the wheat from the chaff and to enhance the quality of the papers that pass.

Science journalism also has a filter function: it filters already peer-reviewed scientific papers* for its readership, “the public”, by selecting novel, relevant science and translating the scientific, jargon-laden language into language readers can understand and appreciate. Of course science journalists should put the publication into perspective (call it “meta”).

Surely the PhD students’ finger exercise resembles the normal peer review process about as much as peer review resembles science journalism.

I understand that pure nitpicking seldom serves a goal, but this rarely occurs in science journalism. The opposite, however, is commonplace.

Daniel disapproves of Hans van Maanen’s criticism, because Hans isn’t “meta” enough. Daniel: “Arguing whether an effect size is small or mediocre is nonsense, because no individual study gives a good estimate of the effect size. You need to do more research and combine the results in a meta-analysis.”

Apples and oranges again.

Being “meta” has little to do with meta-analysis. Being meta is … uh … pretty meta. You could think of it as seeing beyond (meta) the findings of one single study*.

A meta-analysis, however, is a statistical technique for combining the findings of independent but comparable (homogeneous) studies in order to estimate the true effect size more powerfully (pretty exact). This is an important but difficult methodological task for a scientist, not a journalist. If a meta-analysis on the topic exists, journalists should take it into account, of course (and so should the researchers). If not, they should put the single study in a broader perspective (what does the study add to existing knowledge?) and show why this single study is or is not well done.
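
For what it’s worth, the statistical core of the simplest (fixed-effect) meta-analysis is just an inverse-variance weighted average; a minimal sketch with made-up numbers:

    # Fixed-effect (inverse-variance) meta-analysis of three hypothetical studies
    effects = [0.30, 0.10, 0.22]    # effect estimates (made-up)
    variances = [0.02, 0.01, 0.03]  # their sampling variances (made-up)

    weights = [1 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = (1 / sum(weights)) ** 0.5
    print(f"pooled effect = {pooled:.2f}, 95% CI ({pooled - 1.96*se:.2f} to {pooled + 1.96*se:.2f})")

The hard part, of course, is not this arithmetic, but judging whether the studies are comparable enough to pool at all.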

Daniel takes this further by stating that “one study is no study” and that journalists who simply echo the press release of a study and journalists who amply criticize a single publication (like Hans) are both clueless about science.

Apples and oranges! How can one lump together science communicators (“media releases”), echoing journalists (“the media”) and critical journalists?

I see more value in a critical analysis than in blind rejoicing over hot air. As long as the criticism guides the reader in appraising the study.

And if there is just one single novel study, that seems important enough to get media attention, shouldn’t we judge the research on its own merits?

Then Daniel asks himself: “If I do criticize those journalists, shouldn’t I criticize those scientists who published just a single study and wrote a press release about it? “

His conclusion? “No”.

Daniel explains: science never provides absolute certainty; at most the evidence is strong enough to state what is likely true. This can only be achieved by a lot of research by different investigators.

Therefore you should believe in your ideas and encourage other scientists to pursue your findings. It doesn’t help when you say that music preference doesn’t predict shoplifting. It does help when you use the media to draw attention to your research. Many researchers are now aware of the “Music Marker Theory”; thus the press release had its desired effect. By expressing a firm belief in their conclusions, researchers encourage other scientists to spend their sparse time on the topic. These scientists will try to repeat and falsify the study, an essential step in Cumulative Science. At a time when science is under pressure, scientists shouldn’t stop writing enthusiastic press releases or tweets.

The latter paragraph is sheer nonsense!

Critical analysis of one study by a journalist isn’t what undermines public confidence in science. Rather it’s the media circus that blows the implications of scientific findings out of proportion.

As exemplified by the hilarious PhD Comic below, research results are propagated by PR (science communication), picked up by the media, broadcast and spread via the internet. At the end of the cycle, conclusions are reached that are not backed up by (sufficient) evidence.

PhD Comics – The news Cycle

Daniel is right about some things. First, one study is indeed no study, in the sense that concepts are continuously tested and corrected: falsification is a central property of science (Popper). He is also right that science doesn’t offer absolute certainty (an aspect that is often not understood by the public). And yes, researchers should believe in their findings and encourage other scientists to check and repeat their experiments.

Though not primarily via the media, but via the normal scientific route. Good scientists will keep track of new findings in their field anyway. Imagine if only findings trumpeted in the media were pursued by other scientists!

[Screenshot: media & science]

And authors shouldn’t make overstatements. They shouldn’t raise expectations to a level that cannot be met. The Dutch study only shows weak associations. It simply isn’t true that the study allows us to “predict” at the individual level whether a 12-year-old will “act out” at 16.

This doesn’t help laypeople to understand the findings and to appreciate science.

The idea that the media should just serve to spotlight a paper seems objectionable to me.

Going back to the meta-level: what about the role of science communicators, media, science journalists and researchers?

According to the journalist Maarten Keulemans, we should just get rid of all science communicators as a layer between scientists and journalists [7]. But Michel van Baal [9] and Roy Meijer [8] have a point when they say that journalists do a lot of PR too and should do better than rehash news releases.*²

Now what about Daniel’s criticism of van Maanen? In my opinion, van Maanen is one of those rare critical journalists who serve as an antidote to uncritical media diarrhea (see the comic above). He is comparable to another lone voice in the media: Ben Goldacre. It didn’t surprise me that Daniel didn’t approve of him (and his book Bad Science) either [11].

Does this mean that I find Hans van Maanen a terrific science journalist? No, not really. I often agree with him (see for instance this post [12]), and he is one of those rare journalists with real expertise in research methodology. However, his columns don’t seem to be written for a large audience: they seem too complex for most laypeople. One thing I learned during a science journalism course is that one should explain all jargon to one’s audience.

Personally I find this critical Dutch blog post [13] about the Music Marker Theory far more balanced. After a clear description of the study, Linda Duits concludes that the results are pretty obvious and that the mini-hype surrounding the research was caused by the positive tone of the press release. She stresses that prediction is not predetermination and that the musical genres themselves are not important: hip-hop doesn’t lead to criminal activity, nor metal to vandalism.

And this critical piece in Jezebel [14] reaches far more people by talking in plain, colourful language, hilarious at times.

It also has a swell title: “Delinquents Have the Best Taste in Music”. Now that is an apt conclusion!

———————-

*¹ Since Daniel refers neither to open (trial) data access nor to the ways peer review may fail, I ignore these aspects for the sake of the discussion.

*² Coincidence? Keulemans has covered the music marker study quite uncritically (positively).

Photo Credits

http://www.phdcomics.com/comics/archive.php?comicid=1174

References

  1. Daniel Lakens: Is dit nou goede Wetenschap? - Jan 24, 2013 (sites.google.com/site/lakens2/blog)
  2. Hans van Maanen: De smaak van boefjes in de dop, De Volkskrant, Jan 12, 2013 (vanmaanen.org/hans/columns/)
  3. ter Bogt, T., Keijsers, L., & Meeus, W. (2013). Early Adolescent Music Preferences and Minor Delinquency. Pediatrics. DOI: 10.1542/peds.2012-0708
  4. Lindsay Abrams: Kids Who Like ‘Unconventional Music’ More Likely to Become Delinquent, the Atlantic, Jan 18, 2013
  5. Muziekvoorkeur belangrijke voorspeller voor kleine criminaliteit. Jan 8, 2013 (pers.uu.nl)
  6. Maarten Keulemans: Muziek is goede graadmeter voor puberaal wangedrag – De Volkskrant, Jan 12, 2013 (volkskrant.nl)
  7. Maarten Keulemans: Als we nou eens alle wetenschapscommunicatie afschaffen? – Jan 23, 2013 (denieuwereporter.nl)
  8. Roy Meijer: Wetenschapscommunicatie afschaffen, en dan? – Jan 24, 2013 (denieuwereporter.nl)
  9. Michel van Baal: Wetenschapsjournalisten doen ook aan PR – Jan 25, 2013 (denieuwereporter.nl)
  10. What peer review means for science (guardian.co.uk)
  11. Daniel Lakens. Waarom raadde Maarten Keulemans me Bad Science van Goldacre aan? Oct 25, 2012
  12. Why Publishing in the NEJM is not the Best Guarantee that Something is True: a Response to Katan - Sept 27, 2012 (laikaspoetnik.wordpress.com)
  13. Linda Duits: Debunk: worden pubers crimineel van muziek? (dieponderzoek.nl)
  14. Lindy West: Science: “Delinquents Have the Best Taste in Music” (jezebel.com)




Jeffrey Beall’s List of Predatory, Open-Access Publishers, 2012 Edition

19 12 2011

Perhaps you remember that I previously wrote [1] about non-existent and/or low-quality scammy open access journals. I specifically wrote about the Medical Science Journals of the http://www.sciencejournals.cc/ series, which comprises 45 titles, none of which has published any article yet.

Another blogger, David M [2], also had negative experiences with fake peer review invitations from sciencejournals. He even noticed plagiarism.

Later I occasionally found other posts about open access spam, like the posts of Per Ola Kristensson [3] (specifically about the Bentham, Hindawi and InTech OA publishers), of Peter Murray-Rust [4], a chemist interested in OA (about spam journals and conferences, specifically Scientific Research Publishing), and of Alan Dove PhD [5] (specifically about The Journal of Computational Biology and Bioinformatics Research (JCBBR), published by Academic Journals).

But now it appears that there is an entire list of “Predatory, Open-Access Publishers”. This list was created by Jeffrey Beall, an academic librarian at the University of Colorado Denver. He just updated the list for 2012 here (PDF format).

According to Jeffrey, predatory open-access publishers

are those that unprofessionally exploit the author-pays model of open-access publishing (Gold OA) for their own profit. Typically, these publishers spam professional email lists, broadly soliciting article submissions for the clear purpose of gaining additional income. Operating essentially as vanity presses, these publishers typically have a low article acceptance threshold, with a false-front or non-existent peer review process. Unlike professional publishing operations, whether subscription-based or ethically-sound open access, these predatory publishers add little value to scholarship, pay little attention to digital preservation, and operate using fly-by-night, unsustainable business models.

Jeffrey recommends not doing business with the following (illegitimate) publishers, including submitting article manuscripts, serving on editorial boards, buying advertising, etc. According to Jeffrey, “there are numerous traditional, legitimate journals that will publish your quality work for free, including many legitimate, open-access publishers”.

(For the sake of conciseness, I only describe the main characteristics, not always using the same wording; please see the entire list for the full descriptions.)

Watchlist: publishers that may show some characteristics of a predatory open-access publisher
  • Hindawi: way too many journals to be properly handled by one publisher
  • MedKnow Publications: vague business model; it charges for the PDF version
  • PAGEPress: many dead links, a prominent link to PayPal
  • Versita Open: paid subscription for the print form; unclear business model

An asterisk (*) indicates that the publisher is appearing on this list for the first time.

How complete and reliable is this list?

Clearly, this list is quite exhaustive. Jeffrey did a great job listing many dodgy OA journals. We should treat (many) of these OA publishers with caution. Another good thing is that the list is updated annually.

(http://www.sciencejournals.cc/, described in my previous post, is not (yet) on the list ;) but I will inform Jeffrey.)

Personally, I would have preferred a distinction between truly bogus or spammy journals and journals that merely have “too many journals to properly handle” or that ask (too much) money from subscribers or authors. The scientific content may still be good (enough).

Furthermore, I would rather see a neutral description of what exactly is wrong with a journal, especially because “Beall’s list” is a list and not a blog post (or is it?). Sometimes the description doesn’t convince me that the journal is really bogus or predatory.

Examples of subjective portrayals:

  • Dove Press:  This New Zealand-based medical publisher boasts high-quality appearing journals and articles, yet it demands a very high author fee for publishing articles. Its fleet of journals is large, bringing into question how it can properly fulfill its promise to quickly deliver an acceptance decision on submitted articles.
  • Libertas Academica: “The tag line under the name on this publisher’s page is “Freedom to research.” It might better say “Freedom to be ripped off.”
  • Hindawi: “This publisher has way too many journals than can be properly handled by one publisher, I think (…)”

I do like funny posts, but only if it is clear that the post is intended to be funny. Like the one by Alan Dove PhD about JCBBR.

JCBBR is dedicated to increasing the depth of research across all areas of this subject.

Translation: we’re launching a new journal for research that can’t get published anyplace else.

The journal welcomes the submission of manuscripts that meet the general criteria of significance and scientific excellence in this subject area.

We’ll take pretty much any crap you excrete.

Hat tip: Catherine Arnott Smith, PhD, at the MedLib-L list.

  1. I Got the Wrong Request from the Wrong Journal to Review the Wrong Piece. The Wrong kind of Open Access Apparently, Something Wrong with this Inherently… (laikaspoetnik.wordpress.com)
  2. A peer-review phishing scam (blog.pita.si)
  3. Academic Spam and Open Access Publishing (blog.pokristensson.com)
  4. What’s wrong with Scholarly Publishing? New Journal Spam and “Open Access” (blogs.ch.cam.ac.uk)
  5. From the Inbox: Journal Spam (alandove.com)
  6. Beall’s List of Predatory, Open-Access Publishers. 2012 Edition (http://metadata.posterous.com)
  7. Silly Sunday #42 Open Access Week around the Globe (laikaspoetnik.wordpress.com)




I Got the Wrong Request from the Wrong Journal to Review the Wrong Piece. The Wrong kind of Open Access Apparently, Something Wrong with this Inherently…

27 08 2011

Meanwhile you might want to listen to “Wrong” (Depeche Mode)

Yesterday I screened my spam folder. Between all the male enhancement and lottery winner announcements and phishing mails for my bank account, there was an invitation to peer review a paper in the “SCIENCE JOURNAL OF PATHOLOGY”.

Such an invitation doesn’t belong in the spam folder, does it? So I had a closer look and quickly screened the letter.

I don’t know what alarmed me first. The odd hard returns, the journal using a Gmail address, an invitation for a topic (autism) I knew nothing about, an abstract that didn’t make sense and had nothing to do with pathology, the odd style of the letter: the informal but impersonal introduction (How are you? I am sure you are busy with many activities right now) combined with a turgid style (the paper addresses issues of value to our broad-based audience, and that it cuts through the thick layers of theory and verbosity for them and makes sense of it all in a clean, cohesive manner) and some misspellings. And I had never before received an invitation from an editor starting with the impersonal “Colleagues”…

But still it was odd. Why would someone take the trouble to write such an invitation letter? For what purpose? And apparently the person did know that I was a scientist who does (or is able to) peer review medical scientific papers. Since the mail was sent to my Laika Gmail account, the most likely source for my contact info must have been my pseudonymous blog. I seldom use this mail account for scientific purposes.

What triggered my caution flag the most was the topic: autism. I immediately linked this to the anti-vaccination quackery movement, which tries to give skeptic bloggers a hard time and fights a personal, not a scientific, battle. I also linked it to #epigate, which was exposed at Liz Ditz’s I Speak of Dreams, a blog with autism as a niche topic.

#Epigate is the story of René Najera, aka @EpiRen, a popular epidemiologist blogger who was asked by his employers to stop engaging in social media after a series of complaints by a Mr. X, who also threatened other pseudonymous commenters/bloggers criticizing his actions. According to Mr. X no one will be safe, because “all i have to do is file a john doe – or hire a cyber investigator. these courses of action cost less than $10,000 each; which means every person who is afraid of the light can be exposed”. In another comment at Liz Ditz’s he actually says he will go after a specific individual: “Anarchic Teapot”.

Ok, I admit that the two issues might be totally coincidental, and they probably are, but I’m hypersensitive to people trying to silence me via my employers (because that did happen to me in the past). Anyway, asking a pseudonymous blogger to peer review might be a way to uncover the real identity of such a blogger. Perhaps far-fetched, I know.

But what would the “editor” do if I replied and said “yes”?

I became curious. Does The Science Journal of Pathology even exist?

Not in PubMed!!

But the journal “Science Journal of Pathology” does exist on the Internet… and John Morrison is the editor. But he is the only one; as a matter of fact, he is the entire staff… There are “search”, “current” and “archives” tabs, but the latter two are EMPTY.

So I would have the dubious honor of reviewing the first paper for this journal?… ;)

But why would anyone run such a scheme? David (the blogger behind “A peer-review phishing scam” at blog.pita.si, who received similar invitations) and I came up with the following possible explanations:

  1. (First assumption – David) High school kids are looking for someone to peer review (and thus improve) their essays to get better grades.
    (me: “school kids” could also be replaced by “non-successful or starting scientists”)
  2. (Second assumption – David) Perhaps they are only looking to fill out their sucker lists. If you’ve done a bad review, they may blackmail you in order to keep it quiet.
  3. (me) The journal site might be a cover-up for anything (still no clue what).
  4. (me) The site might get a touch of credibility if the (upcoming) articles are stamped with: “peer-reviewed by…”
  5. (David & me) The scammers target PhDs or people who the “editors” think have little experience in peer reviewing and/or consider it an honor to do so.
  6. (David & me) It is a phishing scam. You have to register on the journal’s website in order to review or submit, so they get your credentials. My intuition was that they might just try to track down the real name, address and department of a pseudonymous blogger, but I think that David’s assumption is more plausible. David thinks that a couple of people in Nigeria are just after your password for your mail, Amazon, PayPal, etc., because “the vast majority of people uses the same password for all logins, which is terribly bad practice, but they don’t want to forget it.”

With David, I would like to warn you about this “very interesting phishing scheme”, which aims at academics and especially PhDs. We have no clue as to their real intentions, but it looks scammy.

Besides the fact that the scam may affect you personally, such non-existent and/or low-quality open access journals do a disservice to the existing, high-quality open access journals.

There should be ways to remove such scam websites from the net.

Notes

A comment elsewhere mentions receipt of a similar autism invitation: “Academic scams – my wife just received a version of this for an Autism article. PhD/DPhil/Masters students beware.”




Friday Foolery #39. Peer Review LOL, How to Write a Comment & The Best Rejection Letter Evvah!

15 04 2011

LOL? Peer review?! Comments?

Peer review is never funny, you think.
It is hard to review papers, especially when they are poorly written. And from the author’s point of view, it is annoying and frustrating to see a paper rejected on the basis of comments from peer reviewers who either don’t understand the paper or thwart your attempts to get it published, for instance because they are competitors in the field.

Still, from a (great) distance the peer review process can be funny… in some respects.

Read for instance a collection of memorable quotes from peer review critiques of the past year in Environmental Microbiology (EM does this each December). Here are some excerpts:

  • Done! Difficult task, I don’t wish to think about constipation and faecal flora during my holidays!
  • This paper is desperate. Please reject it completely and then block the author’s email ID so they can’t use the online system in future.
  • It is sad to see so much enthusiasm and effort go into analyzing a dataset that is just not big enough.
  • The abstract and results read much like a laundry list.
  • .. I would suggest that EM is setting up a fund that pays for the red wine reviewers may need to digest manuscripts like this one.
  • I have to admit that I would have liked to reject this paper because I found the tone in the Reply to the Reviewers so annoying.
  • I started to review this but could not get much past the abstract.
  • This paper is awfully written. There is no adequate objective and no reasonable conclusion. The literature is quoted at random and not in the context of argument…
  • Stating that the study is confirmative is not a good start for the Discussion.
  • I suppose that I should be happy that I don’t have to spend a lot of time reviewing this dreadful paper; however I am depressed that people are performing such bad science.
  • Preliminary and intriguing results that should be published elsewhere.
  • Reject – More holes than my grandad’s string vest!
  • The writing and data presentation are so bad that I had to leave work and go home early and then spend time to wonder what life is about.
  • Very much enjoyed reading this one, and do not have any significant comments. Wish I had thought of this one.
  • This is a long, but excellent report. [...] It hurts me a little to have so little criticism of a manuscript.

More seriously, the Top 20 Reasons (Negative Comments) Written by the Reviewers Recommending Rejection of 123 Medical Education Manuscripts can be found in Academic Medicine (vol. 76, no. 9, 2001). The top 5:

  1. Statistics: inappropriate, incomplete, or insufficiently described, etc. (11.2%)
  2. Overinterpretation of the results (8.7%)
  3. Inappropriate, suboptimal, insufficiently described instrument (7.3%)
  4. Sample too small or biased (5.6%)
  5. Text difficult to follow, to understand (3.9%)

Neuroskeptic describes 9 types of review decisions in The Wheel of Peer Review. Was your paper reviewed by “Bee-in-your-Bonnet” or by “Cite Me, Me, Me!”?

Rejections are of all times. Perhaps the best rejection letter ever was written by Sir David Brewster, editor of The Edinburgh Journal of Science, to Charles Babbage on July 3, 1821. It is noted in James Gleick’s The Information: A History, a Theory, a Flood.

Excerpt at Marginal Revolution (HT @TwistedBacteria):

The subjects you propose for a series of Mathematical and Metaphysical Essays are so very profound, that there is perhaps not a single subscriber to our Journal who could follow them. 

Responses to a rejection are also of all times. See this video anno 1945 (yes, this scene has been used tons of times for other purposes).

Need tips?

Read How to Publish a Scientific Comment in 1 2 3 Easy Steps (well, literally 123 steps) by Prof. Rick Trebino. Based on real life. It is hilarious!

PhD Comics made a paper review worksheet (you don’t even have to read the manuscript!) and gives you advice on how NOT to address reviewer comments. LOL.

And here is a Sample Cover Letter for Journal Manuscript Resubmissions. Ain’t that easy?

Yet if you are still unsuccessful and want a definitive decision rendered within hours of submission you can always send your paper to the Journal of Universal Rejection.





MedLibs Round 2.6

11 07 2010

Welcome to this month’s edition of MedLibs Round, a blog carnival of “excellent blog posts in the field of medical information”.

This round is a little belated, because of late submissions and my absence earlier this week.
But let’s wait no longer…!

Peer Review, Impact Factors & Conflict of Interest

Walter Jessen at Highlight HEALTH writes about the NIH peer review process. Included is an interesting video that provides an inside look at how scientists from across the US review NIH grant applications for scientific and technical merit. These scientists do seem to take their job seriously.

But what about peer review of scientific papers? Richard Smith, doctor, former editor of the BMJ and a proponent of open access publishing, wrote a controversial post at the BMJ Group Blogs called “Scrap peer review and beware of ‘top journals’”. Indeed, the “top journals” publish the sexy stuff, whereas evidence comprises both the glamorous and the unglamorous. But is prepublication peer review really that bad, and should we only filter afterwards?

In a thoughtful post at his Nature blog Confessions of a (former) Lab Rat, another Richard (Grant) argues that although peer review suffers terribly from several shortcomings, it is still required. Richard Grant also clears up one misconception:

Peer review, done properly, might guarantee that work is done correctly and to the best of our ability and best intentions, but it will not tell you if a particular finding is right–that’s the job of other experimenters everywhere; to repeat the experiments and to build on them.

At Scholarly Kitchen (about what is hot and cooking in scholarly publishing) they don’t think peer review is a clear concept, since the list of ingredients differs per journal and article. Read their critical analysis and suggestions for improving the standard recipe here.

The science blogosphere was buzzing in outrage about the addition of a corporate nutrition blog sponsored by PepsiCo to ScienceBlogs (see for instance this post at the Guardian Science Blog). ScienceBlogs is the platform of eminent science bloggers, like Orac, Pharyngula and Molecule of the Day. After some bloggers left ScienceBlogs and others threatened to do so, the PepsiCo blog was retracted.

An interesting view is presented by David Crotty at Scholarly Kitchen. He states that it is “hypocritical for ScienceBlog’s bloggers to have objected so strenuously: ScienceBlogs has never been a temple of purity, free of bias or agenda.” Furthermore, the bloggers enjoy more traffic and a fee for being a ScienceBlogger, and promote their “own business” too. David finds it particularly ironic that these complaints come from the science blogosphere, which has regularly been a bastion of support for the post-publication review philosophy. Read more here.

Indeed, according to a note from ScienceBlogs at the disappeared blog, their intention was “to engage industry in pursuit of science-driven social change”, although this was clearly not the right way.

The partiality of business, including pharma, makes its presence in and use of social media somewhat tricky. Still, it is important for pharma to get involved in web 2.0. Interested in a discussion on this topic? Then follow the hashtags #HCSM (HealthCare Social Media) and #HCSMEU (Europe) on Twitter.
Andrew Spong has launched an open wiki, where you can read all about #HCSMEU.

The value of journal impact factors is also debatable. In the third part of the series “Show me the evidence”, Kathleen Crea at EBM and Clinical Support Librarians @ UCHC starts with an excerpt from an article with the intriguing title “The Top-Ten in Journal Impact Factor Manipulation”:

The assumption that Impact Factor (IF) is a number absolutely proportional to science quality has led to misuses beyond the index’s original scope, even in the opinion of its devisor.”

The post itself (Teaching & Learning in Medicine, Research Methodology, Biostatistics: Show Me the Evidence (Part 3)) is not so much about evidence, but offers a wealth of information about journal impact factors, comparisons of sites for citation analysis, and some educational materials for teaching others about citation analysis. Not only are Journal Citation Reports and SCOPUS discussed, but also the Eigenfactor, the h-index and JANE.

Perhaps we need another system of publishing and peer review? Will the future be to publish triplets and have them peer reviewed via Twitter by as many reviewers as possible? Read about this proposal by Barend Mons (of the same group that created JANE) at this blog. Here you can also find a critical review of an article comparing Google Scholar and PubMed for retrieving evidence.

Social Media, Blogs & Web 2.0 tools

There are several tools to manage scientific articles, like CiteULike and Mendeley. At his blog Gobbledygook, Martin Fenner discusses the pros and cons of a new web-based tool specifically for discussing papers in journal clubs: JournalFire.

At The Health Informaticists they found an interesting new feature of Skype: screen sharing. Here you can read all about it.

Andrew Spong explains at his blog STweM how to create a PDF archive of hashtagged tweets using whatthehashtag?! and Google Docs, Scribd or Slideshare. A tweet archive is very useful in the case of live-tweeted sessions at conferences (each tweet is then labeled with a hashtag (#), but tweets are lost after a few days if not archived).

At Cool Toy of the Day, Patricia Anderson posts a lot about healthcare tools. She submitted “Cool Toys Pic of the day – Eyewriter”, a tool that allows persons with ALS and paralysis to draw artwork with their eyes. But you will find many more posts worth reading at this blog and at her main blog Emerging Technologies Librarian.

Heidi Allen at Heidi Allen Digital Strategy started a discussion on the meaning of social medicine for physicians. The link to the original submission doesn’t work right now, but if you follow this link you will see several posts on social medicine, including “Physicians in Social Media”, where three well-known physicians give their view on the meaning of social medicine.

Dr Shock at Dr Shock MD PhD wonders whether “the information on postpartum depression in popular lay magazines corresponds to scientific knowledge”. Would it surprise you that this is not the case for many articles on this topic?

The post by Guus van den Brekel at DigiCMB with the inspiring title “Discovering new seas of knowledge” is partly about the seas of knowledge gained at the EAHIL2010 (European Association for Health Information and Libraries) meeting, with an overview of many sessions and materials where possible. And I should stress “where possible”, because the other part of the post is about the difficulty of obtaining access to this sea of knowledge. Guus wonders:

In this age of Open Access, web 2.0 and the expectancy of the “users” -being us librarians (…) one would assume that much (if not all) is freely available via Conferences websites and/or social media. Why then do I find it hard to find the extra info about those events, including papers and slides and possibly even webcasts? Are we still not into the share-mode and overprotective to one’s own achievements(….)

Guus makes a good point, especially in this era, when not all of us are able to go and visit faraway places. Luckily we have Guus, who did a good job of compiling as much material as possible.

Wondering about the evidence for the usefulness of web 2.0? Then have a look at this excellent wiki by Dean Giustini: http://hlwiki.slais.ubc.ca/index.php/Evidence-based_web_2.0.
The Health Librarianship Wiki Canada (the mother wiki) has a great new design and is a very rich source of information for medical librarians.

Another good source of recent peer-reviewed papers about using social media in medicine and healthcare is a new series by Bertalan Mesko at Science Roll. First it was called Evidence Based Social Media News; now it is the Social media journal club.

EHR and the clinical librarian.

Nikki Dettmar presents two posts on electronic health records (EHRs) at Eagledawg.net, inspired by a recent Medical Library Association meeting that included a lot about EHRs. In the first part, “Electronic Health Records: Not All About the Machine”, she mentions the launch of an OpenNotes study that “evaluates the impact on both patients and physicians of sharing, through online medical record portals, the comments and observations made by physicians after each patient encounter.” The second post is entitled “a snapshot of ephemeral chaos”. And yes, the title says it all.

Bertalan Mesko at Science Roll describes a try-out of Google Wave by a cardiology resident and research fellow, to see whether that platform is suitable for creating a database of the electronic records of a virtual patient. The database looks fine at first glance, but is it safe?

Alisha764’s Blog celebrated its one-year anniversary in February. Alisha Miles’ aim for the next year is not only to post more, but to focus on hospital libraries, including her experience as a hospital librarian. Excellent idea, Alisha! I liked the post “Rounding: A solo medical librarian’s perspective”, with several practical tips for joining rounds as a librarian. I hope you can find time to write more like this, Alisha!

Our next host is Walter Jessen at Highlight HEALTH. You can already start submitting the link to a (relevant) post you have written here.

See the MedLibs Archive for more information.






Ten Years of PubMed Central: a Good Thing that’s Only Going to Get Better.

26 05 2010

PubMed Central (PMC) is a free digital archive of biomedical and life sciences journal literature at the U.S. National Institutes of Health (NIH), developed and managed by NIH’s National Center for Biotechnology Information (NCBI) in the National Library of Medicine (NLM) (see the PMC overview).
PMC is a central repository for biomedical peer-reviewed literature, in the same way as NCBI’s GenBank is the public archive of DNA sequences. The idea behind it is “that giving all users free access to the material in PubMed Central is the best way to ensure the durability and utility of the electronic archive as technology changes over time and to integrate the literature with other information resources at NLM”.
Many journals are already involved, although most of them adhere to restrictions (e.g. availability only after 1 year). For the list see http://www.ncbi.nlm.nih.gov/pmc/journals/

PMC, the brainchild of Harold Varmus, former Director of the National Institutes of Health, celebrated its 10-year anniversary earlier this year.

For this occasion Dr. Lipman, Director of the NCBI, gave an overview of past and future plans for the NIH’s archive of biomedical research articles. See the videotape from Columbia University Libraries below:

[Video: Ten Years of PubMed Central]

The main points raised by David Lipman (approximate times are given if you want to learn more; the text below is not a transcription, but a summary in my own words):

PAST/PRESENT

  • >7:00 BioMed Central (taken over by Springer) and PLoS ONE show that Open Access can be a sustainable way of publishing science.
  • 13:23 The publisher keeps the copyright. It may stop depositing, but the content already deposited remains in PMC.
  • 13:50 PMC is also an obligatory repository for author manuscripts under various funding agencies’ mandates, like those of the NIH and the UK Wellcome Trust.
  • 14:31 One of the ideas from the beginning was to crosslink the literature with the underlying molecular and other databases. For instance, NCBI is capable of mining the information in the archived text and connecting it to the compound and protein structure databases.
  • 16:50 There is back-issue digitization for participating journals, enabling you to find research that you wouldn’t easily have found otherwise.
  • PMC has become international (not restricted to USA)
  • The PMC archive becomes more useful if it becomes more comprehensive
  • Before PMC you could do a Google Scholar search and find a paper in PubMed that appeared to be funded by the NIH, but then you had to pay $30 in order to get it. That’s hard to explain to taxpayers (Lipman had a hard time explaining it to his dad, who was looking for medical information online). This was the impetus for making the results of NIH-sponsored research freely available.

PRESENT/FUTURE

  • 23:00 Discovery initiative: the use of tracking tools to find out which changes to the website work for users and which don’t; modifications should lead to alterations in user behavior (statistics are easy with millions of users). The discovery initiative led to the development and improvement of sensors, like sensors for disease names, drug names, genes and citations. What is measured is whether people click through (if it isn’t interesting, they usually don’t) and how quickly they find results. Motto: train the machine, not the users.
  • 30:37 We changed the look of PMC and are planning a better presentation on the iPhone and on wide monitors.
  • 31:40 There are almost 2 million articles in PubMed Central; 585 journals fully participate in PMC.
  • 32:30 It takes very long to publish a paper, even in Open Access journals. Therefore a lot of people are not publishing small discoveries that are not important enough to put a lot of time into. Publishing should be almost as easy as writing a blog, but with peer review. This requires a new type of journal, with peer review, but with instant feedback from readers and reviewers and rapid responses to comments. The Google Knol authoring system offers a fast and simple way for authors (with a Google profile) to collaborate and compose an article on the server. Uploading documents and figures is easy, article updates are simple and fast, and there is a simple workflow for moderators. After the paper is accepted you press a button, the paper is immediately available, and the next day PMC automatically gets the XML content. There is also a simple reference manager included, to paste citations.
  • Principle: this is how you can start a journal with this system (see figure). So far there are 60 articles in PLoS Currents Influenza. There are also plans for other journals: the CDC is announcing a systematic reviews journal, for instance.

QUESTIONS (>39:30):

  • By what process is a “Knol journal” considered for inclusion in NLM?
    • Decide whether it is in scope, check the implicit policy (that peer review is being done) and who the people involved are, and look at a dozen articles.
  • As the content in PMC increases, will it become possible to search in the full text, just like in Google Scholar?
    • Actually, the full text is already searchable in PMC, as opposed to PubMed, but we are not that happy with full-text retrieval. Even with a really good approach, searching full text works just a little bit better than searching PubMed.
      We are incorporating more of the information in PMC into PubMed, and are working on a separate image database with all the figures from books and articles in PMC (with other search possibilities). Subsets of book(chapter)s (like practice guidelines) will get PubMed abstracts and become searchable in PubMed as well.
  • Are there ways to track a full list of our institution’s OA articles in PMC (without picking up everything in PubMed)?
    • Likely the NIH will be contacting the offices responsible for research to let them know which articles are out of compliance, and get their assistance in making sure that those get in.
    • Authors can easily update the electronic My Bibliography (in My NCBI in PubMed).
    • The Author ID project involves computational disambiguation, where you are asked whether you are the author of a paper you didn’t include. It may also become possible to have automatic reporting to the institutions.
  • What did it take politically to get the appropriation bill (the PMC initiative) passed?
    • Congress always pushed for more open access, because it was already spending money on the research. Most of the initiative came from librarians (e.g. small libraries not having sufficient access) and government, rather than from the NIH.
  • Is there a way to narrow results down to NIH-funded, free full-text papers from PMC?
    • In PubMed, you can filter free full-text articles in general via the Limits (see the example after this list).
  • Are all the articles deposited in PMC the final manuscripts?
    • Generally, yes.
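
As a footnote to the filtering question above: the same restriction can also be expressed directly in the query with PubMed’s “free full text[sb]” subset filter; a small sketch (the topic term is a made-up example of mine):

    # Restrict a PubMed search to free full-text articles with the
    # "free full text[sb]" subset filter (topic term is a made-up example)
    query = 'postpartum depression AND free full text[sb]'
    print(query)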

HT: @bentoth on Twitter







