Archive for the 'libraries' Category

New user experience person working in (digital) libraries

A couple of days ago I found another person doing work pretty similar to mine, only she blogs about her practical work a lot more than I do.  Lorraine is posting a series of usability analyses of digital library sites and software, and her insights are very interesting.  You can read her blog, and you can follow her on Twitter (lorraine_p).  I’ve also added her to my blogroll.


Why users like federated search (even though they shouldn’t)

‘Federated search’ is a library term: it refers to search engines that search a variety of library databases (things that contain journal articles, conference papers and the like) and combine the results in some way for presentation to the user.

Federated searching is a somewhat fraught topic in libraries; many librarians don’t like federated searching and are hesitant to recommend it to library users.  This reluctance is not without good reason–federated search is inferior in many ways to using native database search interfaces, with problems including poor relevance ranking, a false appearance of comprehensiveness, and the inadequate de-duplication many systems offer.  On the other side, federated search offers the holy grail of library searching: a single search box (well, almost–federated search usually doesn’t include the local catalogue, though sometimes it does, as in this example at UNSW).  The single search box is seen as being “like Google” in offering users a lot of different content from one search–and it even has a slight edge over Google Scholar in that search results will usually reflect more closely which results a searcher can actually access.

Federated search also has some issues that would normally be pretty big problems from a user perspective:

  • The relevance ranking doesn’t really work. Because federated search pulls in material from a range of sources, each of which uses a different approach to relevance ranking and a different metric to express a rank, any combination of these results is likely to produce flawed relevance ranking.  This means that often, the most relevant results will not be in the magical first couple of pages.
  • Federated search is very, very slow.  Because it searches a number of remote databases and then applies some metric to combine results before they are presented, response times are long.  Typically users are unhappy with slow response times, so this should be a real problem for them.
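The merging problem described above is easy to see in miniature. The sketch below is a hypothetical federated search layer: the source names, records, and scores are invented for illustration (real connectors speak Z39.50, SRU, or vendor APIs and are far messier), but it shows why combining incompatible native relevance scores produces flawed ranking.

```python
# Hypothetical sketch of a federated search merge step. Source data and
# scores are invented; the point is that each database reports relevance
# on its own scale, so any cross-source normalization is a guess.

def normalize(results):
    """Min-max normalize one source's native scores onto 0..1.

    This squashes each source independently, so a strong second result
    from a good database can end up ranked below a weak result from a
    poor one -- one reason merged relevance ranking is unreliable.
    """
    scores = [r["score"] for r in results]
    lo, hi = min(scores), max(scores)
    span = (hi - lo) or 1.0
    return [dict(r, score=(r["score"] - lo) / span) for r in results]

def merge(sources):
    """Combine per-source result lists, de-duplicating naively by title."""
    merged, seen = [], set()
    for results in sources:
        for r in normalize(results):
            key = r["title"].lower().strip()
            if key not in seen:  # crude de-duplication, title match only
                seen.add(key)
                merged.append(r)
    return sorted(merged, key=lambda r: r["score"], reverse=True)

# Two invented sources with incompatible native score scales.
source_a = [{"title": "Usability of OPACs", "score": 92.0},
            {"title": "Federated search", "score": 40.0}]
source_b = [{"title": "Federated Search ", "score": 0.9},
            {"title": "Relevance ranking", "score": 0.2}]

ranked = merge([source_a, source_b])
```

Note that the item scoring 40.0 in the first source gets normalized to 0.0, just like the weakest result in the second source: information about absolute quality is lost, which is exactly the ranking flaw described above. The slowness is similar: the merge cannot start until the slowest remote database has responded.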

So, we know librarians are often hesitant about recommending federated search, and that users have every reason not to like it…and yet study after study shows that users do like and use federated search.  So why is federated search so popular?

  • One stop shopping: Federated search offers users a one-stop shop, and even though they know it isn’t as good, they will often use it anyway.
  • Time saving: Despite the long load time for search results, users know they will save themselves time (and likely frustration) by visiting only a single site.
  • Search syntax: Search syntax varies slightly from site to site, and federated search allows users to forgo learning the variations in syntax required by individual databases.  Given that we know Boolean searching is hard (sorry, paywall), it is easy to surmise that learning less about it is considered a good thing by users.
  • Low user expectations: Users expect library systems to be slow and clunky, so their expectations of federated search are lower than they would be for other web-based services.

Users’ willingness to use a system we don’t expect them to like is an object lesson in how usability principles are not entirely universal: occasionally users will choose a less usable system over a more usable one because the overall result is still a faster and easier user experience.

So, does users’ willingness to put up with the limitations of federated search mean we should stop striving for anything better? I don’t think so.  I think that as web technology improves, users will have less tolerance for slow and clunky systems.  We’ve already seen this at Swinburne with the library catalogue–while it hasn’t changed, our user surveys show increasing levels of dissatisfaction, a result of user expectations raised by interactions with other systems.  I don’t believe that users are going to be any more willing to visit library databases individually in the future than they are now; even Google is meshing different kinds of data in its search results.  I believe there is real benefit to be had for librarians and library users alike in making headway in one-stop searching, and I’m very much looking forward to seeing Primo Central and Summon (the next generation of federated search, where metadata is locally indexed, making search faster and relevance ranking better) in action.  In the meantime, though?  Users still like federated search, even though it is slow and awkward.

VuFind: An interesting case of open source usability

We all know that library users are consistently frustrated with library systems, and cannot find what they want, particularly since the advent of Google (PDF). Some academics berate and despair of their students’ information seeking practices, and claim that Google is ruining young minds. In my opinion, as I have stated before, berating students (and Google) is going after the wrong target. It is human nature to maximise benefits while minimising effort, and for many students the time they would spend searching a number of interfaces for relevant resources–particularly when those interfaces are confusing, archaic, and unhelpful–is simply better spent reading the resources they find on Google, and writing their assignments. The only way to change this “satisficing” approach and reveal the vast range of library resources available to our students is to make those resources findable through interfaces that do not confuse or humiliate users, and do not require a librarian to operate. While libraries can’t expect to compete with Google while they are buying information from a multitude of vendors without standardised search results or formats, library search interfaces can offer some additional features (such as metadata-based faceting and primary browsing) that Google doesn’t offer–and if the information is better, or gets better results (like higher grades), that will also prove an incentive to use library interfaces.

Typically I expect library catalogues to be ugly and cantankerous; I see that as the price I pay for finding the books I want (and don’t even get me started on finding journal articles–usually I start with Google Scholar). This is why, when I looked at VuFind on the National Library web site, I was so impressed with it: it is clean, attractive, and very usable:

  • It searches more than one type of holding; my search results included books, online resources, and microfilm. This is much closer to the “one stop shop” expectations that users have than any library system I have used in the past.
  • I can choose between my search results based on metadata facets–that is, I can choose books, or works by a certain author, or items from a specific subject. This means that single-term searches are much more likely to be successful, as I can easily disambiguate my search and bring the results most relevant to me to the top.
  • Results are relevance ranked (don’t laugh, some library systems don’t do this). This is the feature that gave Google its search engine market dominance; excellent relevance ranking meant that people found what they were looking for in the one to two pages of results they typically view.
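The faceting described in the list above can be sketched in a few lines. This is a toy illustration, not VuFind’s implementation (VuFind computes facets inside its search index); the records and field names are invented:

```python
# Minimal sketch of metadata faceting: count how results distribute
# across a metadata field, then narrow to one value, as clicking a
# facet in a discovery interface would. Records are invented examples.
from collections import Counter

records = [
    {"title": "Voss", "author": "Patrick White", "format": "Book"},
    {"title": "Voss (microfilm)", "author": "Patrick White", "format": "Microfilm"},
    {"title": "White on White", "author": "J. Smith", "format": "Online resource"},
]

def facet_counts(results, field):
    """Count how many results fall under each value of a metadata field."""
    return Counter(r[field] for r in results)

def apply_facet(results, field, value):
    """Narrow the result set to a single facet value."""
    return [r for r in results if r[field] == value]

formats = facet_counts(records, "format")           # facet sidebar counts
books_only = apply_facet(records, "format", "Book")  # user clicks "Book"
```

The value for single-term searches is visible even at this scale: a vague query can return everything, and the facet counts then let the user disambiguate by format, author, or subject with one click instead of a reformulated query.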

These are just a few of the features that make VuFind feel like a breath of fresh air. Another thing that is unusual about VuFind, though, and one that makes it especially exciting to me, is the fact that it is open source. This basically means that you can get the software for free (though if you want support you will generally pay for it), and that if you want to change something about it, all you need is a willing programmer.

Open source software provides large scope for improving the usability of software locally, because unusable features can be altered. Generally speaking, however, open source software is not as usable as its “closed source” or commercial counterparts (a problem that is recognised, but not well handled, in the open source community). Dave Nichols and Mike Twidale, colleagues of mine, have long been interested in usability in open source software (and indeed in how to open source usability bug reporting). In a 2003 paper they published (which anyone interested in open source or usability should read), they suggested several reasons why open source software might have usability problems:

  • Open source communities, famous for comments like “RTFM” (read the **&%@& manual), are not generally welcoming to experts from other backgrounds–which usability experts often are
  • Design for usability generally has to start before design for coding
  • Open source communities are populated by programmers, who generally cannot see the problems that users with a lesser understanding of computers might have
  • Open source software programming is often done to meet a need of the programmer, and as mentioned above, programmers have very different user interface needs to other users
  • Design by committee and software bloat are not usually good for usability, and open source software is prone to both

In another paper on open source usability, Dave and Mike noted that it can be hard to report usability bugs in the same way as technical bugs, and that open source interfaces may be prevented from innovating by playing “catch up” with their commercial counterparts.

So VuFind is positively fascinating for its usability, both among library systems (though some of the newer commercial systems look interesting), and among open source projects (Koha is similarly fascinatingly usable and open source). Why is it that VuFind is such an exception to the rules?

  • It was created by a library, under one umbrella, and not in a typical open source community. Being under a single umbrella demonstrably helps open source projects’ usability (Dave and Mike again, there), largely by ameliorating design by committee and imposing some order on the process. This will also have meant that the community was different — VuFind’s website comments that it was developed “by libraries”, and thus not just by programmers, meaning that feedback from other disciplines was likely welcome
  • Typical library system websites (though again, I can’t speak for some of the newer ones) are not effective for users, so VuFind didn’t have to play interface “catch up”
  • VuFind was developed “for libraries” not “for programmers”
  • It looks suspiciously (to me) like VuFind might have had a formal usability process, though I can’t find any evidence for this one way or another

In the end, whatever the specific differences are, VuFind is not just exciting in terms of its user experience, but fascinating, and an exemplar of how to do usability in an open source project. I don’t know if it is the way we will go with our discovery layer (and not having seen many of the other possibilities, I can’t comment on whether it is the way we should go either), but it certainly is a fascinating project, and I will be watching it further.

The ‘Google effect’: A trend toward mediocrity, or away from it?

Today, there is a special section of the Guardian on digital academic libraries. It covers a wide range of perspectives, and is probably worth a read if you’re interested in academic libraries, digitization, digital preservation, or student habits.

I have to take issue, though, with ‘Academia’s big guns fight the ‘Google Effect”’. The definition of ‘Google effect’ given in this article, and apparently coined by one Tara Brabazon, is ‘a tendency towards mediocrity’. The article goes on to accuse students of information illiteracy, and point out that they like to use Google for everything, which gives them less-than-academic results. Attempts to provide good academic-resource search engines are touched upon, as is Google Scholar (which is ‘acceptable’, but ‘too broad’ according to Professor Brabazon).

There is actually an excellent study (see ‘British library and JISC’ on this page) about information literacy skills of the current generation of university students which is the basis for much of another article in the series. That study found that undergraduates are not necessarily as information literate as they are perceived to be, and that they use ‘shallow’ searching and don’t really read online (but neither, necessarily, do their older counterparts).

I’m not arguing with the results of that study — it seems pretty sound to me. I suspect, however, that what has changed with the ‘Google generation’ is not actually their information literacy, but their ability to access information without strong information literacy skills and/or the help of a librarian. Google, with its very simple user interface and great results ranking, has made it easy for the average person to find answers to their questions on the internet. It has also shown users that it isn’t necessary to jump through hoops, understand Boolean search, or wade through pages of results to find information.

The mediocrity Professor Brabazon has termed ‘the Google effect’ arguably does not apply so much to her students, who I suspect are much the same as always, but to the information interfaces they are forced to use to locate scholarly materials. It is understandable, I think, that students prefer to spend time on their assignments reading and writing, and now they have tools which to them appear to let them bypass the cumbersome, splintered interfaces of academic journals. There is an information literacy problem here, but it is far from “whippersnappers these days not knowing how to use our journal databases”; it is the twofold problem of the proliferation of self-published non-authoritative easily accessible material that is the internet, and the vastly superior search technologies available to sift through that material.

If Professor Brabazon and her colleagues want to encourage young people to use scholarly resources, the answer is not to lambast them for being mediocre (when likely they are no different to those who have come before them), nor to throw up their hands in disgust; the answer is to improve search interfaces and online access to academic materials so they can compete with Google, or (in my opinion the more likely solution) to encourage widespread use of Google Scholar.

The ‘Google effect’ as I see it is not ‘a tendency toward mediocrity’ in students; it is an exposure of the dire mediocrity of the interfaces and search process for academic material. Google has democratized information searching, and made it possible for the average untrained adult to find information — academic publishers and other information providers need to catch up by providing seamless, well-ranked searches (again most likely through Google Scholar), and, at least for those who are subscribers to their resources (either individually or through their institution)*, make the results available with a single click. The alternative to this will not be improved information literacy skills; people are not going to learn something more difficult if they believe the tools they have will do an adequate job. I hope the end result of the Google effect will be a trend away from mediocrity–the mediocrity of academic information interfaces–and toward usable information search interfaces for all kinds of materials.

*Arguably, these results should be more widely available than that, but this post is not about the merits of open access, and academic publishers are not likely to change their access model so radically any time soon.

eBooks: Neither e-anything, nor really books.

Gordon gave me the idea for this post, while venting his frustrations about eBooks (someone needs to tell me whether that capitalization ought to be there — I never really know). His specific irritation was that he could not print more than four pages, thus meaning that the e-version of a real book one of his lecturers has set as required reading does not do the same job as a physical copy would (and the physical copy is on back order). What, asks Gordon, is the point of these things?

To me, it seems that eBooks are a bit like Wikipedia (only more authoritative): they’re good for getting short, sharp bursts of information while you’re already online. My library’s subscription to Safari Techbooks saved me no small amount of time during the tail end of my masters; instead of having a recall war with someone over the only book our library had on the (then relatively new) DHTML, I was able to read about it, with code examples, online and just in time. eBooks are probably good for all sorts of things like that, from physics equations to Shakespeare’s 116th sonnet. If you want to read ‘King Lear’ or ‘A Brief History of Time’, though, forget it. Buy the book, if you can’t get it from your library.

The (apparent) reason why eBooks are so awful seems to me to be a triumph of copyright over common sense. Copyright concerns seem to be the reason why eBooks are neither fish nor fowl, neither electronic nor book. eBooks, at least the ones I have seen at Swinburne, are presented in a PDF-like format, making for worse on-screen reading, longer load times, and a distinct lack of the rich hyperlinking that adds value to online reference content. I can only think of three reasons why PDFs are being used here instead of natively online formats:

  • Because you can lock a PDF and prevent someone copying the text
  • Because the books are natively created in PDF-like form and the publisher sees no need to convert them
  • To present the Greek symbols so often found in mathematics textbooks.

The only one of those reasons that is really good enough is the last, and we can only hope that text-presentation technologies catch up with need soon enough that we won’t be dependent on preformatted text for too much longer (yes, theoretically Unicode can handle it, but too often web browsers fail to interpret Unicode adequately, resulting in either garbled nonsense or that little square box thing). Not only do eBooks fail at being electronic, though, they fail at being books. They can’t be read without a web connection; the amount you can print is dictated by an online publisher and embedded in the technology, rather than reflecting copyright law; and all the wonderful affordances of a regular book — annotating, falling open at a frequently used page, coming back to where you left off and prolonged comfortable reading — are not available.

Despite the poor usability and poor readability of online books, though, I think it is important that we continue to make them available. Our web statistics show that eBooks are quite heavily used, and a recent survey of our students has demonstrated that they like and expect to be able to access their textbooks online. Are our eBooks popular because the next generation is different? Possibly. My guess, though, is that most students are using eBooks for reference, to avoid purchasing (or carrying about) hard-copy textbooks. As for me? I’ll read more eBooks when the usability of the electronic interface improves and publishers become willing to publish in an online-readable format.

Addendum 18-2-2008: My colleague Tony has made some excellent points in the comments on this post that need raising here: eBooks have a significant advantage over traditional books in storage space and price, and are a hugely valuable resource for distance students.  Not only that, but our eBook provider (EBL) is very generous in terms of the printing users are allowed — 20%, considerably more than copyright law in Australia requires; it seems Swinburne users have been facing some technical hitches at our end in this regard.  Tony’s most important point, though, was one I missed because I am used to libraries (and I should have caught this): it is not necessarily easy to find a book on the shelves of a library, or to find the right information in that book, so eBooks may have the advantage in this regard.  All these points are reasons to continue to purchase eBooks, but also to manage expectations about what they are, so their users get the most out of them and not the least.

The angry librarian: A great example of the human side of bad user experience

I was tipped off to the angry librarian when it went around the office; if you haven’t seen it please watch it below and then read the rest of this post.

I hope that was an especially painful 5 minutes and 10 seconds — I know I found it painful, and not, as many of the commenters on YouTube did, because “that spacey girl is so dumb”. This is an excellent (if spoofed) example of a bad user experience in an unusable system that involves a human being. The girl’s task is relatively straightforward: she wants to print a picture in colour for a university assignment. When she tries (and fails) to complete the task on her own, she asks the librarian on duty for assistance.

From this point, the librarian completely fails to offer a good user experience; he doesn’t provide enough information at any stage in the proceedings for the girl to know that what she wants to do is impossible, and during their conversation, the girl (a library user, the person on the customer end of the equation) makes the only attempts that are made toward solving the problem — only to have each one rebuffed in a ruder and ruder manner.

Rebuffing the girl’s attempts to print a document in colour takes five minutes, time that is wasted for the librarian and wasted and frustrating for her. There are ways to deal with this that would have taken much less time, and would have been a much better experience for both parties:

  • The obvious: Make colour printing available to students.
  • If colour printing is not available for students, then make this fact obvious, and provide an alternative, for example “I’m sorry, we can’t do colour printing for students, but the copy shop next door can and is open 9am to 9pm 7 days a week”.

The bad user experience in this case was caused by an interaction between an obstinate person (the librarian) and a set of rules that would be incomprehensible to the average user (and aren’t readily available for users to read). While I am sure that this scenario is not in the least bit library-specific, this video is an excellent incentive to assess how our rules and our customer service may make our users’ lives difficult.

Library 2.0: Library 1.0++

I have to say, I am a little uncomfortable commenting on library 2.0. I’m not a librarian, and I have neither the academic background nor the practical experience to know what Library 1.0 delivery really means, nor what the rationale is (was?) for doing things in a library 1.0 way.

There seems to be a lot of confusion over what library 2.0 actually means, which is no doubt adding to my discomfort in posting about it. The general consensus seems to me to be that the difference between library 2.0 and library 1.0 is that library 2.0 is user centric and user driven, and a lot of it seems to be driven by new technologies (though it doesn’t have to be). Now, I’m all for a great user experience, and often that is something that will involve a certain amount of user centrism, but I’m decidedly ambivalent about what it means for libraries.

To go any further with this post, I have to define what I think libraries are (or should be), and this will no doubt get me in a world of trouble with my librarian co-workers: I think libraries are free access points to information of many kinds, with value added by the spaces in which to get that information, and by librarians themselves. I think the defining feature of libraries is actually librarians: they select targeted, authoritative collections, and can help unsure users sort the wheat from the chaff online.

Back to library 2.0, though. Some library blogs refer to library 2.0 in terms of teen gaming nights and library blogs, others talk about user control of information.  I question what any of these things have to do with librarianship — the difference between a library and the internet, as I expounded in my masters thesis, is that a library is a carefully collected information set (and the internet is not).  The internet is always going to have more choices than the library (some of which would never make it in to a library) and users are also going to be far more in control of the likes of Google than they are of EBSCO (unless EBSCO buys PageRank from Google).  Library blogs are notoriously silent, and I can’t really understand what teen gaming has to do with libraries at all.  If these things are the best library 2.0 can offer us, I’m with the Annoyed Librarian. Not only do these things not gel with what I want in a library (and after all, I am a library user too), they seem to dilute what it even means to be a library.

Kathryn Greenhill, however, has a post that makes many aspects of library 2.0 something I could get behind.  It paints library 2.0 as a move away from the purported days-gone-by librarian shusher model (did anyone ever really get shushed?  I never did, and I’m not a particularly quiet soul) and toward an era where librarians have control of their catalogue software (thus creating scope for things like user tagging, which are long overdue), library spaces accommodate collaborative and individual work, librarians seek feedback and listen to their users, and library services are available on the internet. Library usability, particularly in terms of online services, has a big part to play in this version of library 2.0 — and I am all for it — and apparently so is Swinburne, because we are doing many of these things already.

The big risk of library 2.0 is throwing the baby out with the bathwater; trying so hard to be everything to everyone that libraries are no longer libraries.  The big opportunity is providing increasingly relevant, increasingly user-friendly and increasingly useful spaces and services.  I think the way forward is to get off the bandwagon — the term library 2.0 is so overused as to be meaningless — and at least in Swinburne’s case, to keep doing what we are doing — listening to our users, and providing the best responses we can in a library context.



Some rights reserved.

Comment moderation

If it is your first time posting, your comment will automatically be held for my moderation -- I try to get to these as soon as possible. After that, your comments will appear automatically. If your comment is on-topic and isn't abusing me or anyone else who comments, chances are I'll leave it alone. That said, I reserve the right to delete (or indefinitely moderate) any comments that are abusive, spammy or otherwise irrelevant.