There seem to be two camps in the library world, the medical world, and many others: those who embrace Web 2.0 because they consider it useful for their practice, and those who are unaware of Web 2.0 or think it is just a fad. There are only a few ways the Web 2.0-critical can be convinced: by arguments (hardly), by studies that show evidence of its usefulness, and by examples of what works and what doesn’t.
The paper by Shamsha Damani and Stephanie Fulton, published in the latest Medical Reference Services Quarterly, falls in the latter category. Perhaps the name Shamsha Damani rings a bell: she is a prominent twitterer and has written guest posts at this blog on several occasions (here, here, here and here).
As clinical librarians at The University of Texas MD Anderson Cancer Center, Shamsha and Stephanie are immersed in clinical teams and provide evidence-based literature for various institutional clinical algorithms designed for patient care.
These were some of the problems the clinical librarians encountered when sharing the results of their searches with the teams by classic methods (email):
- First, team members were from different departments and were dispersed across the sprawling hospital campus. Since the teams did not meet in person very often, it was difficult for the librarians to receive timely feedback on the results of each literature search.
- Second, results sent from multiple database vendors were either not received or were overlooked by team members.
- Third, even if users received the bibliography, they still had to manually search for and locate the full text of articles.
- The librarians also experimented with e-mailing EndNote libraries; however, many users were not familiar with EndNote and did not have the time to learn how to use it. E-mails in general tended to get lost in the shuffle, and librarians often found themselves re-sending e-mails with attachments.
- Lastly, it was difficult to update the results of a literature search in a consistent manner and obtain meaningful feedback from the entire team.
Therefore, they tried several Web 2.0 tools for sharing search results with their clinical teams.
In their article, the librarians share their experience with the various applications they explored that allowed centralization of the search results, provided easy online access, and enabled collaboration within the group.
Online reference management tools were the librarians’ first choice, since these are specifically designed to help users gather and store references from multiple databases and allow sharing of results. Of the available tools, RefWorks was eventually not tested, because it required two sets of usernames and passwords. In contrast, EndNote Web can be accessed from any computer with a single username and password. EndNote Web is suitable for downloading and managing references from multiple databases, for retrieving full-text papers, and for online collaboration. In theory, that is. In practice, the team members experienced several difficulties: trouble remembering the usernames and passwords, and difficulties using the link resolver and navigating to the full text of each article and back to the EndNote Web homepage. Furthermore, accessing the full text of each article was considered too laborious a process.
Next, free social bookmarking sites were tested, which allow users to bookmark Web sites and articles, to share the bookmarks, and to access them from any computer. However, most team members didn’t create an account and could therefore not make use of the collaborative features. The bookmarking sites were deemed ‘‘user-unfriendly’’, because (1) the overall layout and the presentation of results, with their many links, were experienced as confusing, (2) the sorting possibilities were not suitable for this purpose, and (3) it was impossible to search within the abstracts, which were not part of the bookmarked records. This was true both for Delicious and Connotea, even though the latter is more apt for science and medicine, includes bibliographic information, and allows import and export of references from other systems. Another drawback was that the librarians needed to bookmark and comment on each individual article.
Wikis (PBWorks and SharePoint) appeared most user-friendly, because they were intuitive and easy to use: the librarians had created a shared username and password for the entire team, the wiki was behind the hospital’s firewall (preferred by the team), and users could access the articles with one click. For the librarians it was labor-intensive, as they annotated the bibliographies, published them on the wiki, and added persistent links to each article. It is not clear from the article how final reference lists were created by the team afterwards. Probably by cut and paste, because wikis are not really suitable as a word processor, nor for import and export of references.
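As an aside, the "persistent link" step the librarians describe can be sketched in a few lines. The function below is my own illustration using the standard public DOI and PubMed resolvers; it is not the institution's link-resolver setup, and the field names are assumptions:

```python
def persistent_link(pmid=None, doi=None):
    """Return a stable, one-click URL for an article.

    Prefers a DOI (publisher-side resolution via doi.org); falls back
    to the PubMed record page if only a PMID is available.
    """
    if doi:
        return f"https://doi.org/{doi}"
    if pmid:
        return f"https://pubmed.ncbi.nlm.nih.gov/{pmid}/"
    raise ValueError("need a PMID or a DOI")

# Example: the PMID of the paper discussed in this post
print(persistent_link(pmid="20677061"))
# https://pubmed.ncbi.nlm.nih.gov/20677061/
```

Links built this way bypass the library's link resolver entirely, which matches the finding below that users wanted a more direct route to the full text (though, unlike resolver links, they will not route through institutional subscriptions).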
It is informative to read the pros and cons of the various Web 2.0 tools for collaborating and delivering search results. For me, it was even more valuable to read how the research was done. As the authors note (quote):
There is no ‘‘one-size-fits-all’’ approach. Each platform must be tested and evaluated to see how and where it fits within the user’s workflow. When evaluating various Web 2.0 technologies, librarians should try to keep users at the forefront and seek feedback frequently in order to provide better service. Only after months of exploration did the librarians at MD Anderson Cancer Center learn that their users preferred wikis and 1-click access to full-text articles. Librarians were surprised to learn that users did not like the library’s link resolvers and wanted a more direct way to access information.
Indeed, there is no ‘‘one-size-fits-all’’ approach. For that reason too, the results obtained may only apply in certain settings.
I was impressed by the level of involvement of the clinical librarians and the time they put not only into searching, but also into presenting the data, ranking the references according to study design, publication type, and date, and annotating the references. I hope they prune the results as well, because applying this procedure to 1,000 or more references is no small task. And, although it may be ideal for library users, not all librarians work like this. I know of no Dutch librarian who does. Because of the workload, such a ready-made wiki may not be feasible for many librarians.
The librarians’ starting point was to find an easy and intuitive Web-based tool that allowed collaboration and sharing of references.
The emphasis seems to be more on the sharing, since end-users did not seem to collaborate via the wikis themselves. I also wonder whether the simpler and free Google Docs wouldn’t fulfill most of the needs. In addition, some of the tools might have been perceived as more useful if users had received some training beforehand.
The training we offer in Reference Manager is usually sufficient to learn to work efficiently with this quite complex reference management tool. Of course, desktop software is not suitable for online collaboration (although libraries can always be easily exported to a simpler system), but a short training session may take away most of the barriers people feel when using a new tool (with the added advantage that they can use the tool for other purposes as well).
Of the Web 2.0 tools tested, wikis were the most intuitive and easy-to-use tools for collaborating with clinical teams and for delivering literature search results. Although wikis are easy for end-users to use, they seem very time-consuming for the librarians, who prepare ready-to-use annotated lists.
The clinical teams at MD Anderson must count themselves lucky to have such clinical librarians.
Damani S, & Fulton S (2010). Collaborating and delivering literature search results to clinical teams using web 2.0 tools. Medical reference services quarterly, 29 (3), 207-17 PMID: 20677061
Added: August 9th 2010, 21:30
It looks beautiful, but, as said, where is the collaborative aspect? Like Dymphie, I have the impression that these lists are no different from the “normal” reference lists. Or am I missing something? I also agree with Dymphie that instructing people in Reference Manager may be much more efficient for this purpose.
It is interesting to read Christina Pikas’ view of this paper. At her blog Christina’s LIS Rant (which just moved to the new Scientopia platform), Christina first describes how she delivers her search results to her customers and which platforms she uses for this. Then she shares some thoughts about the paper, such as:
- they (the authors) ruled out RefWorks because it required two sets of logins/passwords – hmm, why not RefWorks with RefShare? Why two sets of passwords?
- SharePoint wikis suck. I would probably use some other type of web part – even a discussion board entry for each article.
- they really didn’t use the 2.0 aspects of the 2.0 tools – particularly in the case of the wiki. The most valued aspects were access without a lot of logins and then access to the full text without a lot of clicks.
Like Christina, I would be interested in hearing other approaches – particularly using newer tools.