Archive for the 'Google' Category

Captcha and accessibility

I’ve written before about the problems with anti-spam devices, but today I read some wonderful blog posts on this, and since I’m neither a user with a disability that prevents me from using CAPTCHA, nor an expert on accessibility for users with visual impairments, I will let the posts speak for themselves:

  • One user’s experience trying to sign up for a Gmail account, which failed because CAPTCHA has accessibility problems.
  • A study showing that this is the majority experience of CAPTCHA (73% of users were unsuccessful using the ‘accessible’ version of CAPTCHA).
  • A discussion of the issue at Feminists With Disabilities, noting that to provide Google with feedback you have to get through CAPTCHA first, and how this further disadvantages an already disadvantaged user population.
  • A link to the Google accessibility reporting function–please use this liberally if you notice any other problems with Google’s interfaces (and you have been able to sign up for an account).

As this article on anti-spam devices points out, it’s not just users with visual impairments who suffer when presented with CAPTCHA; users with reading difficulties struggle too, and even users without disabilities suffer some inconvenience.

It is telling that one of the best-cited posts on CAPTCHA effectiveness (which finds CAPTCHA to be very effective) refers only to the ability of CAPTCHA to prevent spam. The “false positives”, where CAPTCHA fails to allow a human being to access a website, are dismissed with a single line (“these are eminently human-solvable, in my opinion”), even as the post points out that CAPTCHA is used on most interactive internet sites.

Spam is a usability and accessibility problem, but the way we solve it should not prevent users with disabilities from accessing internet content. Not only is CAPTCHA as an approach inaccessible and unusable, but its widespread implementation could end up costing sites which use it a lot of money.

Search isn’t king anymore: Google recognises browsing

Earlier this week, I was doing some Googling and I noticed something weird: Google now has facets that are visible all the time:

Google search results showing a range of left-hand facets and an updated interface, for example a new button shape.

Google with facets

You might also notice that the interface appears more modern–the shape and appearance of the button have changed, for example.  You can read more about that at the Google blog, but it’s notable that a lot of what they have done is good for users; the new logo is more readable and will likely be faster to download, for example.

The thing that really excites me is that Google has recognised that search is no longer king: by including always-visible facets on the Google results page, they have recognised that browsing, refining, and manipulating result sets are part of the natural human information seeking process.

Larry Page (one of Google’s founders) once said that “the ultimate search engine would…always give you the right thing. And we’re a long….way from that”. I don’t think he’s right, and the reason is that it is not always readily apparent, even to the information seeker themselves, what they want. Sometimes it’s easy to figure out what information will answer our questions; when we want to know the formula to convert degrees Fahrenheit to degrees Celsius, for example, information seeking in the information age is straightforward and requires only a simple search (‘how to convert from deg c to deg f’ will get a perfectly serviceable answer, and in fact if all you want to do is convert a temperature you can use Google’s ‘in’ operator by typing ‘16 C in F’).

Sometimes, though, you don’t know exactly what you want; “a good present for my brother” or “a good book” or “how users search the library shelves” are information needs that can’t be met by typing a simple phrase into Google; they require a process that includes searching, browsing, and refining.  Take the “good book” example: you might feel like a mystery or modern literature, and once you’ve decided on that you might like a certain author or subgenre, but who or what that might be may also require some digging around to discover; and once you’ve decided what you want to read, you have to figure out how to get it–as an ebook? from a library? from a bookstore, either online or physical? This example shows how we search out there in the real world when there isn’t a straight answer (and sometimes not even a straight question), and how important it is to have the option to browse; taking the book example again, browsing might also show you other books you might enjoy.
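As an aside, the conversion in the ‘easy’ example above really is a single line of arithmetic. Here is a minimal, purely illustrative Python sketch matching the ‘16 C in F’ query (the function is mine, not anything Google exposes):

```python
def celsius_to_fahrenheit(celsius):
    """Convert a temperature in degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

print(celsius_to_fahrenheit(16))  # 60.8 -- the same answer '16 C in F' gives
```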

This isn’t the first time I’ve talked about Google and browsing–I’ve discussed before what a great thing it is that Google is incorporating browsing (and you can read more about how important browsing is in that post), and how their choice of facet location has influenced where we put the facets in our library search (and I’m really glad we went with Google on this one now). This is the most exciting time I’ve talked about it, though; Google’s results pages now reflect a truly natural information seeking process (without destroying the interface for “quick searches”), and thus represent a much better user experience than they have in the past.  Not only that, but this development will have a feedback effect: because Google has them, facets are more likely to be used in other information seeking interfaces (because users are used to them), and thus the experience of many of these interfaces will be improved as well.

Why users like federated search (even though they shouldn’t)

‘Federated search’ is a library term: it refers to search engines that search a variety of library databases (things that contain journal articles, conference papers and the like) and combine the results in some way to be presented to the user.

Federated searching is a somewhat fraught topic in libraries; many librarians don’t like federated searching and are hesitant to recommend it to library users.  This reluctance is not without good reason–federated search is inferior in many ways to using native database search interfaces, including problems with relevance ranking, the false appearance of comprehensiveness, and the inadequate de-duplication that many offer.  On the other side of this, federated search offers the holy grail of library searching: a single search box (well, almost–federated search usually doesn’t include the local catalogue, though sometimes it does, as in this example at UNSW).  The single search box is seen as being “like Google” in offering users a lot of different content from one search–and it even has a slight edge over Google Scholar in that its results will usually reflect more closely which results a searcher can actually access.

Federated search has some issues that would normally be pretty big problems from a user perspective too:

  • The relevance ranking doesn’t really work. Federated search pulls in material from a range of sources, each of which uses a different approach to relevance ranking and a different metric to express a rank, and any combination of these results is likely to produce flawed relevance ranking (there’s a small sketch of this after the list).  This means that often, the most relevant results will not be in the magical first couple of pages.
  • Federated search is very, very slow.  Because federated search queries a number of remote databases and then applies some metric to combine the results before they are presented to the user, response times are long. Typically users are unhappy with slow response times, so this should be a real problem for users.
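To make the ranking problem concrete, here is a minimal, purely illustrative Python sketch (not any vendor’s actual merging algorithm, and the scores are invented) of what happens when raw relevance scores from two sources are simply pooled and sorted:

```python
# Two hypothetical databases, each scoring documents on its own scale:
# source A uses a 0-1 relevance score, source B uses arbitrary "points".
source_a = [("A1", 0.92), ("A2", 0.40)]
source_b = [("B1", 850.0), ("B2", 120.0)]

def naive_merge(*sources):
    """Pool raw scores from every source and sort, ignoring that the scales differ."""
    pooled = [doc for source in sources for doc in source]
    return sorted(pooled, key=lambda doc: doc[1], reverse=True)

print(naive_merge(source_a, source_b))
# [('B1', 850.0), ('B2', 120.0), ('A1', 0.92), ('A2', 0.4)]
# Every result from source B outranks every result from source A, however
# relevant A's results actually are -- the combined "relevance" order is an
# artefact of whichever source happens to use the bigger numbers.
```

Real products do more than this (normalising or re-ranking where they can), but without a shared index to score against, the combined ordering can only ever be approximate.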

So, we know librarians are often hesitant about recommending federated search, and that users have every reason not to like it…and yet study after study shows that users do like and use federated search.  So why is federated search so popular?

  • One stop shopping: Federated search offers users a one-stop shop, and even though they know it isn’t as good, they will often use it anyway.
  • Time saving: Despite the long load time for search results, users know they will save themselves time (and likely frustration) by visiting only a single site.
  • Search syntax: Search syntax varies slightly from site to site, and federated search allows users to forgo learning the variations in syntax required by individual databases (there’s a small sketch of this after the list).  Given that we know boolean searching is hard (sorry, paywall), it is easy to surmise that having to learn less about it is considered a good thing by users.
  • Low user expectations: Users expect library systems to be slow and clunky, so their expectations of federated search are lower than they would be for other web-based services.
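To illustrate the syntax point, here is a small sketch of the kind of translation layer a federated search tool provides; the backend names and per-database syntaxes below are hypothetical, invented only to show the sort of variation users would otherwise have to learn themselves:

```python
def to_backend_syntax(title_terms, backend):
    """Turn one canonical title search into a backend-specific query string.

    The backend names and syntaxes here are made up for illustration.
    """
    joined = " AND ".join(title_terms)
    if backend == "database_a":
        return f"TI=({joined})"
    if backend == "database_b":
        return f"title:({joined})"
    if backend == "database_c":
        return f"(ti '{joined}')"
    raise ValueError(f"unknown backend: {backend}")

for backend in ("database_a", "database_b", "database_c"):
    print(to_backend_syntax(["boolean", "searching"], backend))
# TI=(boolean AND searching)
# title:(boolean AND searching)
# (ti 'boolean AND searching')
```

From the user’s point of view, typing a query once and letting the federated search layer worry about each database’s dialect is exactly the ‘learning less about it’ described above.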

Users’ willingness to use a system we don’t expect them to like is an object lesson in how usability principles are not entirely universal: occasionally users will choose a less usable system over a more usable one because the end result is still a faster and easier overall experience.

So, does users’ willingness to put up with the limitations of federated search mean we should stop striving for anything better? I don’t think so.  I think that as web technology improves, users will have less tolerance for slow and clunky systems.  We’ve already seen this at Swinburne with the library catalogue–while it hasn’t changed, our user surveys show increasing levels of dissatisfaction, a result of user expectations that have been raised by their interactions with other systems.  I don’t believe that users are going to be willing to individually visit library databases in the future any more than they are now; even Google is meshing different kinds of data in its search results.  I believe there is real benefit to be had for librarians and library users alike in making headway in one-stop searching, and I’m very much looking forward to seeing Primo Central and Summon (the next generation of federated search, where metadata is locally indexed, making search faster and relevance ranking better) in action.  In the meantime, though?  Users still like federated search, even though it is slow and awkward.

Apologising: Google is doing it right

As some of you will know, gmail went down for 100 minutes early this morning.  I did notice it, but assumed it was my internet connection acting weird again–and I didn’t really need to read email at 7AM anyway.  For people elsewhere, however (for example in the US, where this was anything from midday to close of business), and even people in New Zealand, where the workday was just beginning, this could have been a real problem, especially for those using gmail for business purposes.

Given how reliable Google usually is, this sudden and lengthy failure will understandably shake confidence in the service, and may even make people more righteously angry than service failures by unreliable companies (consider my eyerolling acceptance above, when I thought the problem was my ISP).

Generally speaking, users can react in one of three ways when things go wrong (and let’s face it, things do go wrong sometimes with any product or service):

  • That the product or service is unreliable and therefore they have lost faith in the product or service and the parent company
  • That something went wrong, but that the company did what they could about it and the solution was acceptable so they will continue to use the product or service
  • That the resolution to the problem was not satisfactory, but that they have no option but to use the company next time anyway (for example when the company has a monopoly–if this is the case though, as soon as the company no longer has a monopoly they can expect customers to jump ship).

Google probably has a lot of people in the second category after today, because they did two things right: They updated people, and they wrote a fabulous and public apology.  The apology was probably even more effective than one normally would be because a large company apologised for an outage in a free service, but there are a few other things Google did right:

  • They apologised unreservedly, and with an understanding of their users.  There was no “we’re really sorry but it wasn’t our fault” or “we’re really sorry but you shouldn’t be so mad”–they understood why people might be annoyed, and they said sorry.
  • They explained the cause of the problem.  Not everyone is going to care about this, but when writing for a public audience it is good practice to explain for those who do.
  • They described what they are doing to make sure it doesn’t happen again.
  • They subtly reminded users why they chose gmail in the first place, not by saying “we are the most reliable”, but “we’re trying to keep failures rare”.
  • The apology was public (right up there on Google’s gmail blog), but not forced on those who didn’t notice the failure.

This is probably the work of Google’s PR people, but dealing with the failures that inevitably happen in life is a really important part of good user experience, and (I swear I don’t work for Google) this is one that Google have done really well.

How to deal with ‘too much information’: where should we put search refinement facets?

Swinburne Library is in the process of making some changes; we’re replacing our library system with a fancy new one, and as the user-experience-person-in-situ it is up to me to make suggestions for the search and discovery interface our users will see.  Some of those decisions I will blog about here, and search facet placement is one of them.

Search facets are one of the search tools that I think will be most instrumental in making stuff easier to find (and the OCLC report (PDF) on user expectations vs. librarian expectations suggests library users feel the same way). Facets are the little categories you see on search interfaces that let you narrow down your search results to things that are more relevant to you; they started out in tools with well-defined metadata (like eBay and Amazon, and even some of the newer library systems) and they are slowly working their way into searches with less-well-defined metadata, like Google.

With anything new like this, though, you have to figure out where in the search interface to put it.  So far I have seen facets placed to the left of search results:

facets to the left of search results

to the right of search results:

Facets to the right

and below search results:

Facets below search results

At Swinburne, we talked a bit about facet placement, and in all likelihood ours will be on the left.

So, what are the arguments for and against each position?

  • Facets below search results: When facets are below search results, they don’t distract the user when they are viewing search results, which is a good thing.  However, given that the vast majority of users don’t scroll all the way down, and only look at the first couple of pages of search results (and they look more at the first results on these pages), placing facets below search results is pretty likely to mean that users don’t see them or use them.  This likelihood is reinforced by the fact that this is an uncommon location for facets, so users won’t think to look for them here.
  • Facets to the right of search results: From a user-centred-design purist standpoint, in my opinion the right-hand position for facets is probably the best in an interface where the language is read from left to right.  This position means that users see search results first, and then facets if the search results don’t contain anything immediately useful.  Given the number of commonly used interfaces that put facets on the left, however, this could be a risky proposal.
  • Facets to the left of search results: This is what Google have gone with (possibly because their advertising is on the right).  It is also common in other commonly-used information seeking interfaces, such as eBay, Dymocks (in Australia) and Amazon (for the US and the UK). Use of these interfaces will train users to look to the left for facets; and it would seem that at least a small sample of users have already developed this preference for left-hand search facets.

Swinburne has a real opportunity with this project to provide a search interface for our users that is not “slow motion search, typical library“; however, to do this we must pay as much attention as we can to our users.  Putting the search facets to the left is just one of the decisions we will make with the users in mind, and I hope to blog about more of them in the future.

Google search isn’t just search anymore

I know I’m more than a bit late to the table with this, but Google search isn’t restricted to just searching anymore!  They’ve introduced some browsing tools as well (see the video below for more):

Now, it’s easy to figure out that I am very pro-browsing, and therefore I think it’s great that Google has incorporated these things into their search experience, but I’d like to unpack just why I think browsing is such a good thing (and make a couple of suggestions for extensions of what Google is doing) along the way.

Google has been very pro-search as an information organisation and finding strategy for a long time, their search-don’t-sort approach to gmail being one obvious example of this.  It’s completely understandable that this has been Google’s whole approach for so long; after all, search is what they do (and they do it very well).

Search isn’t always the answer though (and if you watch this video of a Google user experience researcher talking about the search options, it is evident that Google knows that).  For one thing, humans employ more than just search in their information seeking strategies: the research (PDF) shows that information seeking is generally an iterative process that includes searching, browsing, and refinement.  Not only is search not the only approach we use for finding information, but sometimes search isn’t enough on its own: with all the information on the web, it can be hard to know when someone types ‘Placebo’ into a search box whether they want to know about the psychological effects of sugar pills, or whether they’re interested in the British rock band (and this ambiguity applies to any number of terms). Similarly, information seekers may want a particular type of information (for example reviews, or places where a product can be bought), or information from a particular geographic location, time, author, or general subject field.  Also, even with known-item searches (those where the searcher knows exactly what they are looking for, and that it exists somewhere, because they have found a pointer to it or seen it), if the searcher doesn’t remember the exact words that occur in the document, they might not find what they are looking for.

Google’s ‘more search options’ are beginning to deal with this problem.  They allow people to find three specific types of content (reviews, forums and video), they provide suggested search terms, and they allow the user to look at results from a specific time and also see how a search term’s popularity has changed over time.  I’m not entirely sure what value the ‘wonder wheel’ (see below) adds, given that the related search terms provide all the wonder wheel terms and more, but I suppose some people may find the visual presentation useful.

Google's wonder wheel, a visual display of related search terms

It certainly is heartening, for someone as invested in browsing as I am, to see Google incorporating browsing into their search.  All I want now is to see it expanded: I want to filter news by topic and country (and standard search results, for that matter); when I use Scholar, I want to be able to browse by author or year.  What Google has provided is an excellent start, and I look forward to seeing where this goes in the future.

The ‘Google effect’: A trend toward mediocrity, or away from it?

Today, there is a special section of the Guardian on digital academic libraries. It covers a wide range of perspectives, and is probably worth a read if you’re interested in academic libraries, digitization, digital preservation, or student habits.

I have to take issue, though, with ‘Academia’s big guns fight the ‘Google Effect”’. The definition of ‘Google effect’ given in this article, and apparently coined by one Tara Brabazon, is ‘a tendency towards mediocrity’. The article goes on to accuse students of information illiteracy, and point out that they like to use Google for everything, which gives them less-than-academic results. Attempts to provide good academic-resource search engines are touched upon, as is Google Scholar (which is ‘acceptable’, but ‘too broad’ according to Professor Brabazon).

There is actually an excellent study (see ‘British library and JISC’ on this page) about information literacy skills of the current generation of university students which is the basis for much of another article in the series. That study found that undergraduates are not necessarily as information literate as they are perceived to be, and that they use ‘shallow’ searching and don’t really read online (but neither, necessarily, do their older counterparts).

I’m not arguing with the results of that study — it seems pretty sound to me. I suspect, however, that the thing that has changed with the ‘Google generation’ is not actually their information literacy, but their ability to access information without strong information literacy skills and/or the help of a librarian. Google, having a very simple user interface and great results ranking, has made it easy for the average person to find answers to their questions on the internet. It has also shown users that it isn’t necessary to jump through hoops, understand boolean search, or wade through pages of results to find information.

The mediocrity Professor Brabazon has termed ‘the Google effect’ arguably does not apply so much to her students, who I suspect are much the same as always, but to the information interfaces they are forced to use to locate scholarly materials. It is understandable, I think, that students prefer to spend time on their assignments reading and writing, and now they have tools which to them appear to let them bypass the cumbersome, splintered interfaces of academic journals. There is an information literacy problem here, but it is far from “whippersnappers these days not knowing how to use our journal databases”; it is the twofold problem of the proliferation of self-published non-authoritative easily accessible material that is the internet, and the vastly superior search technologies available to sift through that material.

If Professor Brabazon and her colleagues want to encourage young people to use scholarly resources, the answer is not to lambast them for being mediocre (when likely they are no different to those who have come before them), nor to throw up their hands in disgust; the answer is to improve search interfaces and online access to academic materials so they can compete with Google, or (in my opinion the more likely solution) to encourage widespread use of Google Scholar.

The ‘Google effect’ as I see it is not ‘a tendency toward mediocrity’ in students; it is an exposure of the dire mediocrity of the interfaces and search process for academic material. Google has democratized information searching, and made it possible for the average untrained adult to find information — academic publishers and other information providers need to catch up by providing seamless, well-ranked searches (again most likely through Google Scholar) and, at least for those who subscribe to their resources (either individually or through their institution)*, making the results available with a single click. The alternative to this will not be improved information literacy skills: people are not going to learn something more difficult if they believe the tools they have will do an adequate job. I hope the end result of the Google effect will be a trend away from mediocrity–the mediocrity of academic information interfaces–and toward usable information search interfaces for all kinds of materials.

*Arguably, these results should be more widely available than that, but this post is not about the merits of open access, and academic publishers are not likely to change their access model so radically any time soon.

