Friday Foolery #48 Brilliant Library Notices

13 01 2012

Today’s Friday Foolery post is handed to me on a silver platter by my Australian friend Mike Cadogan @sandnsurf from Life in the Fast Lane.

Yes, aren’t these brilliant librarian notices from the Milwaukee Public Library?!

Note:

@Bitethedust, also from Australia, rightly noticed: there’s no better place to stick @sandnsurf than in Friday Foolery.

Indeed, at Life in the Fast Lane they have fun posts amidst the serious (mostly ER) topics. Want more Friday fun? Then have a look at the Funtabulously Frivolous Friday Five posts.





National Library Week

12 04 2011

It is National Library Week! Did you know that?

To be honest I didn’t.

Today, Tuesday, is even National Library Workers Day — a time to thank librarians and the rest of the library staff (LA-Times).

I didn’t know that either, until I received a tweet from @doc_emer which was retweeted by doctor_V (see Fig).

Now I know.

Thank you Dr. Emer and Bryan Vartabedian (Doctor V). You made my day!

*********************************

Added:

 

@amcunningham (AnneMarie Cunningham) tweeted:
Since it’s national library week, thought I’d say thanks to all the great librarians on this list:) http://bit.ly/gkzKZm

 

 





Internet Sources & Blog Posts in a Reference List? Yes or No?

13 02 2011

A Dutch librarian asked me to join a blog carnival of Dutch Librarians. This carnival differs from medical blog carnivals (like the Grand Rounds and “Medical Information Matters“) in its approach. There is one specific topic which is discussed at individual blogs and summarized by the host in his carnival post.

The current topic is “Can you use an internet source”?

The motive of the archivist Christian van der Ven for starting this discussion was the response to a post at his blog De Digitale Archivaris. In this post he wondered whether blog posts could be used by students writing a paper. It struck him that students rarely use internet sources and that most teachers didn’t encourage, or even allow, their use.

Since I work as a medical information specialist I will adapt the question as follows:

“Can you refer to an internet source in a biomedical scientific article, paper, thesis or survey”?

I explicitly use “refer to” instead of “use”, because I would prefer to avoid discussing plagiarism and copyright. Obviously I would object to any form of uncritical copying of a large piece of text without checking its reliability and copyright issues (see below).


Previously, I have blogged about the trouble with Wikipedia as a source of information. In short, as Wikipedians say, Wikipedia is the best source to start with in your research, but it should never be the last one (quote from @berci in a Twitter interview). In reality, most students and doctors do consult Wikipedia and Dr. Google (see here and here). However, they may not (and mostly should not) use it as such in their writings. As I have indicated in the earlier post, it is not (yet) a trustworthy source for scientific purposes.

But the internet is more than Wikipedia and random googling. As a matter of fact, most biomedical information is now in digital form. The speed at which biomedical knowledge is advancing is tremendous, and books are soon out of date. Thus most library users confine themselves to articles in peer-reviewed scientific journals or to datasets (geneticists). Generally my patrons search the largest freely available database, PubMed, to access citations in mostly peer-reviewed (and digital) journals. These are generally considered reliable internet sources, but they do not essentially differ from their printed equivalents.

However, there are other internet sources that provide reliable or useful information. What about publications by the National Health Council, an evidence-based guideline by NICE and/or published evidence tables? What about synopses (critical appraisals) such as those published by DARE, like this one? What about evidence summaries by Clinical Evidence, like this one? All are excellent, evidence-based, commendable online resources. Without doubt these can be used as a reference in a paper. Thus there is no clear-cut answer to the above-mentioned question. Whether an internet source should be used as a reference in a paper depends on the following:

  1. Is the source relevant?
  2. Is the source reliable?
  3. What is the purpose of the paper and the topic?

Furthermore it depends on the function of the reference (not mutually exclusive):

  1. To give credit
  2. To add credibility
  3. For transparency and reproducibility
  4. To help readers find further information
  5. For illustration (as an example)

Let’s illustrate this with a few examples.

  • Students who write an overview on a medical topic can use any relevant reference, including narrative reviews, UpToDate and other internet sites if appropriate.
  • Interns who have to prepare a CAT (critically appraised topic) should refer to 2-3 papers providing the highest evidence (i.e. a systematic review and/or randomized controlled trial).
  • Authors writing systematic reviews only include high-quality primary studies (except for the introduction perhaps). In addition, they should (ideally) check congress abstracts, clinical trial registers (like ClinicalTrials.gov), or actual raw data (e.g. produced by a pharmaceutical company).
  • Authors of narrative reviews may include all kinds of sources. That is also true for editorials, primary studies or theses. Reference lists should be as accurate and complete as possible (within the limits posed by, for instance, the journal).

Blogs, wikis, podcasts and tweets.
Papers can also refer to blog posts, wikis or even tweets (there is APA guidance on how to cite these). Such sources may be cited merely because they serve as an example (in articles about social media in medicine, for instance, like this recent paper in Am Pharm Assoc that analyzes pharmacy-centric blogs).

Blog posts are usually seen as lacking in factual reliability. However, there are many blogs, run by scientists, that are (or can be) a trustworthy source. As a matter of fact, it would be inappropriate not to cite these sources if the information was valuable, useful and actually used in the paper.
Some examples of excellent biomedical web 2.0 sources:

  • The Clinical Cases and Images Blog of Ves Dimov, MD (@DrVes on Twitter), a rich source of clinical cases. My colleague once found the only valuable information (a rare patient case) at Dr Ves’ blog, not in PubMed or other regular sources. Why not cite this blog post if the patient case were to be published?
  • Researchblogging.org is an aggregator of expert blog posts about peer-reviewed research. There are many other high-quality scientific blogging platforms, like Scientopia, the PLoS blogs etc. These kinds of blogs critically analyse peer-reviewed papers. For instance, this blog post by Marya Zilberberg reveals how an RCT stopped early due to efficacy can still be severely flawed, yet lead to a level-one recommendation. Very useful information that you can find neither in the actual published study nor in the evidence-based guideline.
  • An example of an excellent and up-to-date wiki is the open HLWIKI (maintained by Dean Giustini, @giustini on Twitter), with entries about health librarianship, social media and current information technology topics, and over 565 pages of content since 2006! Its very rich content with extensive reference lists could easily be used in papers on library topics.
  • Another concept is usefulchem.wikispaces.com (an initiative of Jean-Claude Bradley, discussed in a previous post). This is not only a wiki but also an open notebook, where actual primary scientific data can be found. Very impressive.
  • There is also WikiProteins (part of the ConceptWiki), an open, collaborative wiki focusing on proteins and their role in biology and medicine.

I would like to end my post with two thoughts.

First, the world is not static. In the future, scientific claims could be represented as formal RDF statements/triples instead of, or next to, the journal publications as we know them (see my post on nanopublications). Such “statements” (already realized with regard to proteins and genes) are more easily linked and retrieved. Besides, peer review doesn’t prevent fraud, misrepresentation or overstatements.
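
To make such machine-readable statements a bit more concrete, here is a minimal sketch (my own illustration, not taken from the nanopublication post) of a single claim expressed as an RDF triple with Python’s rdflib. The URIs are invented for the example; real nanopublications also attach provenance and attribution to each claim.

```python
# Minimal sketch (illustration only): a scientific claim as one RDF triple.
# The namespace and terms below are made up for the example.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDFS

EX = Namespace("http://example.org/claims/")

g = Graph()
g.bind("ex", EX)

# The claim itself: one machine-readable statement (subject, predicate, object)
g.add((EX.malaria, EX.isTransmittedBy, EX.AnophelesMosquito))
# A human-readable label, so the triple can still be read as a sentence
g.add((EX.malaria, RDFS.label, Literal("Malaria")))

# Serialize to Turtle; such statements can be linked and queried across databases
# (rdflib >= 6 returns a string here; older versions return bytes)
print(g.serialize(format="turtle"))
```

Triples like this are trivially merged with other datasets and queried with SPARQL, which is what makes the nanopublication idea attractive for linking claims across papers and databases.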

The other side of the coin in this “blogs as an internet source” discussion is whether the citation is always appropriate and/or accurate.

Today a web page (cardio.nl/ACS/StudiesRichtlijnenProtocollen.html), evidently meant for the education of residents, linked to one of my posts. Almost the entire post was copied, including a figure, but the only link used was to one of my tags, EBM (hidden in the text). Even worse, blog posts are sometimes cited to lend credibility to disputable content. I’ve mentioned the tactics of Organized Wisdom before. More recently a site called deathbyvaccination.com links out of context to one of my blog posts. Given the recent revelation of fraudulent anti-vaccine papers, I’m not very happy with that kind of “attribution”.






Medical Information Matters: Call for Submissions

6 11 2010

I would like to remind you that it is almost the first Saturday of the month and thus submission time for Medical Information Matters, the former MedLibs Round.

Medical Information Matters is a monthly compilation of the “best blog post in the field of medical information”, hosted by a different blogger each time. The blogger who will host the upcoming edition is Dean Giustini.

I am sure that every librarian, and many doctors, know Dean. As a starting blogging librarian, I knew two international librarian bloggers: Dean Giustini and the Krafty Librarian (make that three, I forgot to mention David Rothman*). I looked up to them and they did (and do) inspire me.
It is nice that blogging and Social Media can make distances shorter, literally and figuratively…

As far as I know, Dean has no theme for this round. But you can always submit any good-quality post about medical information to the blog carnival, whether you are a librarian, a doctor, a nurse, a patient and/or a scientist, and whether your post is on searching, reference management, reliability of information, gaps in information, evidence, social media or education (to name a few).
You can submit your own post or a good post by someone else, as long as it is in English.

So if that isn’t easy….

Please submit the URL/permalink of your post at:
http://blogcarnival.com/bc/submit_6092.html

If everything goes according to plan, you can read Medical Information Matters 2.9 at the blog of Dean Giustini next Tuesday.

 

* Thanks to @DrVes via Twitter. Social Media is sooo powerful!





How will we ever keep up with 75 Trials and 11 Systematic Reviews a Day?

6 10 2010

An interesting paper was published in PLoS Medicine [1]. Since I am an information specialist and work part-time for the Cochrane Collaboration* (see below), this topic is close to my heart.

The paper is written by Hilda Bastian and two of my favorite EBM devotees and critics, Paul Glasziou and Iain Chalmers.

Their article gives a good overview of the rise in the number of trials, systematic reviews (SR’s) of interventions, and medical papers in general. The paper (published as a Policy Forum piece) raises some important issues, but the message is not as sharp and clear as usual.

Take the title for instance.

Seventy-Five Trials and Eleven Systematic Reviews a Day:
How Will We Ever Keep Up?

What do you consider its most important message?

  1. That doctors suffer from an information overload that is only going to get worse; this is how I read it, and probably, in part, also @kevinclauson, who tweeted about it to medical librarians
  2. That the solution to this information overload consists of Cochrane systematic reviews (because they aggregate the evidence from individual trials), as @doctorblogs twittered
  3. That it is just about “too many systematic reviews (SR’s)?”, the title of the PLoS press release (so the other way around)
  4. That it is about too much of everything, and about the not always good quality of SR’s: @kevinclauson and @pfanderson discussed that they both use the same “#Cochrane Disaster” (see Kevin’s blog) in their teaching
  5. That Archie Cochrane’s* dream is unachievable and ought perhaps to be replaced by something less utopian (comment by Richard Smith, former editor of the BMJ): points 1, 3, 4 and 5 together, plus a new aspect: SR’s should not only include randomized controlled trials (RCT’s)

The paper reads easily, but matters of importance are often only touched upon. Even after reading it twice, I wondered: a lot is being said, but what really is their main point, and what are their answers/suggestions?

But let’s look at their arguments and pieces of evidence. (Black is from their paper, blue are my remarks.)

The landscape

I often start my presentations on “searching for evidence” by showing the figure to the right, which is from an older PLoS article. It illustrates the information overload. Sometimes I also show another slide, with 5-10 year older data, saying that there are 55 trials a day, 1,400 new records added per day to MEDLINE and 5,000 biomedical articles a day. I also add that specialists have to read 17-22 articles a day to keep up to date with the literature; GP’s have to read even more, because they are generalists. So those 75 trials and the subsequent information overload are not really a shock to me.

Indeed, the authors start by saying that “Keeping up with information in health care has never been easy.” They give an interesting overview of the driving forces behind the increase in trials and behind the initiation of SR’s and critical appraisals, which synthesize the evidence from all individual trials to overcome the information overload (SR’s and other forms of aggregate evidence decrease the number needed to read).

In Box 1 they give an overview of the earliest systematic reviews. These SR’s often had a great impact on medical practice (see for instance an earlier discussion on the role of the CRASH trial and of the first Cochrane review).
They also touch upon the founding of the Cochrane Collaboration. The Cochrane Collaboration is named after Archie Cochrane, who “reproached the medical profession for not having managed to organise a ‘critical summary, by speciality or subspecialty, adapted periodically, of all relevant randomised controlled trials’”. He inspired the establishment of the international Oxford Database of Perinatal Trials and encouraged the use of systematic reviews of randomized controlled trials (RCT’s).

A timeline with some of the key events is shown in Figure 1.

Where are we now?

The second paragraph shows many interesting graphs (Figs. 2-4).

Annoyingly, PLoS only allows one-sentence legends. The details are in the (Word) supplement, without proper reference to the actual figure numbers. Grrrr! This is completely unnecessary in reviews/editorials/policy forums. And, as said, annoying, because you have to read a Word file to understand where the data actually come from.

Bastian et al. have used MEDLINE’s publication types (i.e. case reports [pt], reviews [pt], controlled clinical trial [pt]) and search filters (the Montori SR filter and the Haynes narrow therapy filter, which is built into PubMed’s Clinical Queries) to estimate the yearly rise in the number of each study type. The total numbers of clinical trials in CENTRAL (the largest database of controlled clinical trials, abbreviated as CCTR in the article) and of reviews in the Cochrane Database of Systematic Reviews (CDSR) are easy to retrieve, because the numbers are published quarterly (now monthly) by the Cochrane Library. By definition, CDSR only contains SR’s, and CENTRAL (as I prefer to call it) almost exclusively contains controlled clinical trials.
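
As an aside, such yearly counts are easy to reproduce yourself. Below is a minimal sketch, not the authors’ exact method, that queries PubMed per publication year via Biopython’s Entrez module. The search strings (a publication type and PubMed’s systematic reviews subset) and the year range are illustrative stand-ins for the Montori and Haynes filters mentioned above.

```python
# Minimal sketch (not the authors' exact method): count PubMed records per year
# for a given publication type or filter, using Biopython's Entrez E-utilities.
from Bio import Entrez

Entrez.email = "you@example.org"  # NCBI asks for a contact address (placeholder)

# Illustrative queries; the paper used publication types and validated filters
QUERIES = {
    "randomized controlled trials": "randomized controlled trial[pt]",
    "systematic reviews (PubMed subset)": "systematic[sb]",
}

def yearly_count(query: str, year: int) -> int:
    """Return the number of PubMed records matching `query` published in `year`."""
    handle = Entrez.esearch(
        db="pubmed",
        term=query,
        mindate=str(year),
        maxdate=str(year),
        datetype="pdat",   # restrict by publication date
        retmax=0,          # we only need the count, not the record IDs
    )
    record = Entrez.read(handle)
    handle.close()
    return int(record["Count"])

if __name__ == "__main__":
    for label, query in QUERIES.items():
        for year in range(2000, 2008):
            print(f"{label}\t{year}\t{yearly_count(query, year)}")
```

Dividing such a yearly count by 365 gives the “trials per day” figure of the paper’s title, keeping in mind that indexing lags and filter performance make these numbers estimates rather than exact counts.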

In short, these are the conclusions from their three figures:

  • Fig 2: The number of published trials has risen sharply from 1950 till 2010
  • Fig 3: The number of systematic reviews and meta-analyses has risen tremendously as well
  • Fig 4: But systematic reviews and clinical trials are still far outnumbered by narrative reviews and case reports.

O.k., that’s clear, and they raise a good point: an “astonishing growth has occurred in the number of reports of clinical trials since the middle of the 20th century, and in reports of systematic reviews since the 1980s—and a plateau in growth has not yet been reached.”
Plus, indirectly: the increase in systematic reviews didn’t lead to a lower number of trials and narrative reviews. Thus the information overload is still increasing.
But instead of discussing these findings, they go into an endless discussion of the actual data and the fact that we “still do not know exactly how many trials have been done”, only to end the discussion by saying that “Even though these figures must be seen as more illustrative than precise…” And then you think: so what? Furthermore, I don’t really get the point of this part of their article.

 

Fig. 2: The number of published trials, 1950 to 2007.

 

 

With regard to Figure 2 they say for instance:

The differences between the numbers of trial records in MEDLINE and CCTR (CENTRAL) (see Figure 2) have multiple causes. Both CCTR and MEDLINE often contain more than one record from a single study, and there are lags in adding new records to both databases. The NLM filters are probably not as efficient at excluding non-trials as are the methods used to compile CCTR. Furthermore, MEDLINE has more language restrictions than CCTR. In brief, there is still no single repository reliably showing the true number of randomised trials. Similar difficulties apply to trying to estimate the number of systematic reviews and health technology assessments (HTAs).

Sorry, but although some of these points may be true, Bastian et al. don’t go into the main reason for the difference between both graphs, namely the higher number of trial records in CCTR (CENTRAL) than in MEDLINE: the difference can simply be explained by the fact that CENTRAL contains records from MEDLINE as well as from many other electronic databases and from hand-searched materials (see this post).
With respect to other details: I don’t know which NLM filter they refer to, but if they mean the narrow therapy filter, this filter is specifically meant to find randomized controlled trials and is far more specific and less sensitive than the Cochrane methodological filters for retrieving controlled clinical trials. In addition, MEDLINE does not have more language restrictions per se: it just contains an (extensive) selection of journals. (Plus, people more easily use language limits in MEDLINE, but that is beside the point.)

Elsewhere the authors say:

In Figures 2 and 3 we use a variety of data sources to estimate the numbers of trials and systematic reviews published from 1950 to the end of 2007 (see Text S1). The number of trials continues to rise: although the data from CCTR suggest some fluctuation in trial numbers in recent years, this may be misleading because the Cochrane Collaboration virtually halted additions to CCTR as it undertook a review and internal restructuring that lasted a couple of years.

As I recall it, the situation is like this: till 2005 the Cochrane Collaboration ran the so-called “retag project”, in which they searched for controlled clinical trials in MEDLINE and EMBASE (with a very broad methodological filter). All controlled trial articles were loaded into CENTRAL, and the NLM retagged the controlled clinical trials that weren’t tagged with the appropriate publication type in MEDLINE. The Cochrane Collaboration stopped the laborious retag project in 2005, but still continues the (now) monthly electronic search updates performed by the various Cochrane groups (for their topics only), as well as handsearching. So they didn’t (virtually?!) halt additions to CENTRAL, although it seems likely that stopping the retag project caused the plateau. Again, the authors’ main points are dwarfed by not very accurate details.

Some interesting points in this paragraph:

  • We still do not know exactly how many trials have been done.
  • For a variety of reasons, a large proportion of trials have remained unpublished (negative publication bias!) (note: Cochrane reviews try to lower this kind of bias by applying no language limits and by including unpublished data, e.g. conference proceedings, too)
  • Many trials have been published in journals without being electronically indexed as trials, which makes them difficult to find. (note: this has improved tremendously since the CONSORT statement, an evidence-based, minimum set of recommendations for reporting RCTs, and through the Cochrane retag project, discussed above)
  • Astonishing growth has occurred in the number of reports of clinical trials since the middle of the 20th century, and in reports of systematic reviews since the 1980s—and a plateau in growth has not yet been reached.
  • Trials are now registered in prospective trial registers at inception, theoretically enabling an overview of all published and unpublished trials (note: this will also facilitate to find out reasons for not publishing data, or alteration of primary outcomes)
  • Once the International Committee of Medical Journal Editors announced that their journals would no longer publish trials that had not been prospectively registered, far more ongoing trials were being registered per week (200 instead of 30). In 2007, the US Congress made detailed prospective trial registration legally mandatory.

The authors do not discuss that better reporting of trials and the retag project might have facilitated the indexing and retrieval of trials.

How Close Are We to Archie Cochrane’s Goal?

According to the authors there are various reasons why Archie Cochrane’s goal will not be achieved without some serious changes in course:

  • The increase in systematic reviews didn’t displace other less reliable forms of information (Figs 3 and 4)
  • Only a minority of trials have been assessed in systematic reviews
  • The workload involved in producing reviews is increasing
  • The bulk of systematic reviews are now many years out of date.

Where to Now?

In this paragraph the authors discuss what should be changed:

  • Prioritize trials
  • Wider adoption of the concept that trials will not be supported unless an SR has shown the trial to be necessary.
  • Prioritizing SR’s: reviews should address questions that are relevant to patients, clinicians and policymakers.
  • Choose between elaborate reviews that answer a part of the relevant questions or “leaner” reviews of most of what we want to know. Apparently the authors have already chosen the latter; they prefer:
    • shorter and less elaborate reviews
    • faster production and updating of SR’s
    • no unnecessary inclusion of study types other than randomized trials (unless it is about less common adverse effects)
  • More international collaboration and thereby better use of resources for SR’s and HTAs. As an example of a good initiative they mention “KEEP Up”, which will aim to harmonise updating standards and aggregate updating results; it is initiated and coordinated by the German Institute for Quality and Efficiency in Health Care (IQWiG) and involves key systematic reviewing and guidelines organisations such as the Cochrane Collaboration, Duodecim, the Scottish Intercollegiate Guidelines Network (SIGN), and the National Institute for Health and Clinical Excellence (NICE).

Summary and comments

The main aim of this paper is to discuss to what extent the medical profession has managed to make “critical summaries, by speciality or subspeciality, adapted periodically, of all relevant randomized controlled trials”, as proposed 30 years ago by Archie Cochrane.

The emphasis of the paper is mostly on the number of trials and systematic reviews, not on qualitative aspects. Furthermore, there is too much emphasis on the methods for determining the number of trials and reviews.

The main conclusion of the authors is that an astonishing growth has occurred in the number of reports of clinical trials as well as in the number of SR’s, but that these systematic pieces of evidence shrink into insignificance compared to the non-systematic narrative reviews and case reports published. That is an important, but not an unexpected, conclusion.

Bastian et al. don’t address whether systematic reviews have made the growing number of trials easier to access or digest. Neither do they go into developments that have facilitated the retrieval of clinical trials and aggregate evidence from databases like PubMed: the Cochrane retag project, the CONSORT statement, and the existence of publication types and search filters (which they themselves use to filter out trials and systematic reviews). They also skip sources other than systematic reviews that make it easier to find the evidence: databases with evidence-based guidelines, the TRIP database, and Clinical Evidence.
As Clay Shirky said: “It’s Not Information Overload. It’s Filter Failure.”

It is also good to note that case reports and narrative reviews serve other aims. For medical practitioners, rare case reports can be very useful in clinical practice, and good narrative reviews can be valuable for getting an overview of a field or for keeping up to date. You just have to know when to look for what.

Bastian et al. have several suggestions for improvement, but these suggestions are not always underpinned. For instance, they propose access to all systematic reviews and trials. Perfect. But how can this be attained? We could stimulate authors to publish their trials as open access papers. For Cochrane reviews this would be desirable but difficult, as we cannot demand that authors who work for months, for free, on writing an SR also pay for the publication themselves. The Cochrane Collaboration is an international organization that does not receive subsidies for this. So how could this be achieved?

In my opinion, we can expect the most important benefits from prioritizing trials and SR’s, faster production and updating of SR’s, more international collaboration, and less duplication. It is a pity the authors do not mention projects other than “KEEP Up”. As discussed in previous posts, the Cochrane Collaboration also recognizes many of the issues raised in this paper, and aims to speed up the updates and to produce evidence on priority topics (see here and here). Evidence Aid is an example of a successful effort. But this is only the Cochrane Collaboration; many more non-Cochrane systematic reviews are produced.

And then we arrive at the next issue: not all systematic reviews are created equal. There are a lot of so-called “systematic reviews” that aren’t the conscientiously, explicitly and judiciously created syntheses of evidence they ought to be.

Therefore, I do not think that the proposal that each single trial should be preceded by a systematic review is a very good idea.
In the Netherlands, writing an SR is already required for NWO grants. In practice, people just approach me, as a searcher, in the days before Christmas, with the idea of submitting the grant proposal (including the SR) early in January. This evidently is a fast procedure, but it doesn’t result in a high-standard SR upon which others can rely.

Another point is that this simple and fast production of SR’s will only lead to a larger increase in the number of SR’s, an effect that the authors wanted to prevent.

Of course it is necessary to get a (reliable) picture of what has already been done and to prevent unnecessary duplication of trials and systematic reviews. The best solution would be a triple (nanopublication)-like repository of the trials and systematic reviews that have been done.

Ideally, researchers and doctors should first check such a database for existing systematic reviews. Only if no recent SR is available should they continue to write an SR themselves. Perhaps it sometimes suffices to search for trials and write a short synthesis.

There is another point I do not agree with: I do not think that SR’s of interventions should only include RCT’s. We should include those study types that are relevant. If RCT’s furnish clear proof, then RCT’s are all we need. But sometimes, or in some topics/specialties, RCT’s are not available. Including other study designs and rating them with GRADE (proposed by Guyatt) gives a better overall picture (also see the post: #notsofunny: ridiculing RCT’s and EBM).

The authors strive for simplicity. However, the real world isn’t that simple. In this paper they have limited themselves to evidence of the effects of health care interventions. Finding and assessing prognostic, etiological and diagnostic studies is methodologically even more difficult. Still many clinicians have these kinds of questions. Therefore systematic reviews of other study designs (diagnostic accuracy or observational studies) are also of great importance.

In conclusion, while I do not agree with all points raised, this paper touches upon a lot of important issues and achieves what can be expected from a discussion paper: a thorough shake-up and a lot of discussion.

References

  1. Bastian, H., Glasziou, P., & Chalmers, I. (2010). Seventy-Five Trials and Eleven Systematic Reviews a Day: How Will We Ever Keep Up? PLoS Medicine, 7(9), e1000326. doi:10.1371/journal.pmed.1000326






May I Introduce to you: a New Name for the MedLibs Round….

30 09 2010

A couple of weeks or even months ago I asked you to vote for a new name for the MedLibs Round, a blog carnival about medical information.

The decision was clear.

Hurray!

And the winner is……

Drumroll….

Medical Information Matters!

…………………

I’m very pleased with the results because the name reflects that the blog carnival is about medical information and is not purely a carnival for medical librarians.

I hope that Robin of Survive the Journey is still willing and able to make the logo for Medical Information Matters.

Well, it will not be long before Medical Information Matters will be “inaugurated”.
We won’t restart the counting, so it will be Medical Information Matters 2.8.

There are only a few days left for submitting.
Daniel Hooker at Danielhooker.com: Health libraries, Medicine and the Web is eagerly awaiting your submissions.

You can submit the URL of your post HERE at the Blog Carnival.

Daniel writes in his call for submissions post:

I’d love to see posts on new things you’re trying out this year: new projects, teaching sessions, innovative services. Maybe it’s something tried and true that you’d like to reflect on. And this goes for anyone starting out fresh this term, not just librarians! We should all be brimming with enthusiasm; the doldrums of winter have yet to set in. If you can find the time to reflect and even just write up your busy workday, I’ll do my best to weave them all together. I, for one, hope to describe some of the projects that I’m involved with at my new workplace. But remember, this “theme” is only a suggestion, we’d be happy to see any contributions that you think would be of interest.

Educators, librarians, doctors and scientists, please remember: your submission matters…. There is no interesting blog carnival without your contribution. I’m looking forward to the next MedLibs round, the first Medical Information Matters edition (it is a mouthful, isn’t it?).






The University Library (UBA) goes Mobile.

4 04 2010
Our Medical Library at the AMC hospital is one of the main (autonomous) libraries of the UBA, the University Library of the University of Amsterdam.

The UBA developed the Spoetnik course (a “23 things”-like library course) that inspired the start of this blog, has a library coach with a chat function and a library blog (UBA-e), and is now on Twitter as @bibliotheekuva.
Plus, as I just learned, a small team of the UBA recently launched a mobile version of the library website.

I like their approach. The team, consisting of Driek Heesakkers (project leader), Lukas Koster, Gre Ootjers, Roxana Popistasu and Alice Doek, realized this “perpetual beta version” in no more than 7 weeks (from first meeting till launch on April 1st). Their aim was not to strive for perfection, but to develop a version first and to learn from their mistakes and from the feedback of the users. Thus highly interactive.

Another excellent principle was that they designed ONE mobile app for all smartphones.

This is what UBA mobile offers right now:

  • The library catalog (searching; reserve items; renew loans)
  • Opening hours and addresses of library locations
  • Locations (on a map)
  • Contact phone numbers
  • Questions, feedback
  • News via @bibliotheekuva-tweets

The most important feature, full access to the digital library (with links to all subscriptions), is not yet realized.

I hope our medical library will follow this shining example. Many medical students and doctors use smartphones, and I’m sure a mobile version of our medical library website would be appreciated by our clients.

Mobile is the future. What do you think?

Below is a short and clear presentation by Lukas Koster at UGUL (UGame ULearn) 2010.

The web address of the mobile site is: http://cf.uba.uva.nl/mobiel.

A short notice about UBA mobile can be found in the news section of the UBA.

Janneke Staaks (librarian for Psychology, Cultural Anthropology, and Pedagogical and Educational Sciences) has dealt with this subject in more depth; see this post at her (Dutch) blog, FMG Library.