Archive for the 'web' Category

Travel website usability by a travel writer

I’ve written before about how much airline websites annoy me with their lack of usability, but it turns out I’m not the only one: check out this article by The Age travel writer Clive Dorman. He might not be talking about things in quite the same way as I have, but he is far more eloquent:

…[I] had a dream about a super-fast airline website that performed each action so fast and seamlessly it was truly joyous (OK, so I’m part-nerd). I was truly disappointed this morning when I realised it was still a mental pie in the sky. (Read more)

Clearly Clive has experienced the same problems that I (and no doubt countless others) have, and he is giving them voice on a large platform. The airlines need to listen to this kind of feedback; the first airline to get its website right is likely to gain some business, even if it is a little more expensive than the alternatives.


CAPTCHA and accessibility

I’ve written before about the problems with anti-spam devices, but today I read some wonderful blog posts on this, and since I’m neither a user with a disability that prevents me from using CAPTCHA, nor an expert on accessibility for users with visual impairments, I will let the posts speak for themselves:

  • One user’s experience trying to sign up for a Gmail account, which failed because CAPTCHA has accessibility problems.
  • A study showing that this is the majority experience with CAPTCHA (73% of users were unsuccessful using the ‘accessible’ version of CAPTCHA)
  • A discussion of the issue at Feminists With Disabilities, noting that to provide Google with feedback you have to get through CAPTCHA first, and how this further disadvantages an already disadvantaged user population.
  • A link to the Google accessibility reporting function; please use this liberally if you notice any other problems with Google’s interfaces (and you have been able to sign up for an account).

As this article on anti-spam devices points out, it is not just users with visual impairments who suffer when presented with a CAPTCHA; users with reading difficulties are also affected, and even users without disabilities suffer some inconvenience.

It is telling that one of the most-cited posts on CAPTCHA effectiveness (which finds CAPTCHA to be very effective) refers only to the ability of CAPTCHA to prevent spam. The “false positives”, where CAPTCHA fails to allow a human being to access a website, are dismissed with the single line “these are eminently human-solvable, in my opinion”, even while the post points out that CAPTCHA is used on most interactive internet sites.

Spam is a usability and accessibility problem, but the way we solve it should not prevent users with disabilities from accessing internet content. Not only is CAPTCHA as an approach inaccessible and unusable, but its widespread implementation could end up costing the sites that use it a lot of money.
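
For what it is worth, accessible alternatives do exist. One commonly suggested approach (not something from the posts above; just a sketch of the general idea, with invented field names and form-handling code) is a hidden “honeypot” form field that human visitors never see but that automated spam scripts tend to fill in:

    # A minimal, hypothetical sketch of the "honeypot" anti-spam idea: the comment form
    # includes an extra field hidden from sighted users with CSS and labelled "leave this
    # blank" for screen-reader users; bots that fill in every field give themselves away.
    # The field name and the surrounding form handling are invented for illustration only.
    def looks_like_spam(form_data: dict) -> bool:
        """Reject submissions where the hidden 'website_url' honeypot field was filled in."""
        return bool(form_data.get("website_url", "").strip())

    print(looks_like_spam({"name": "Ana", "comment": "Thanks!", "website_url": ""}))                      # False
    print(looks_like_spam({"name": "bot", "comment": "buy now", "website_url": "http://spam.example"}))   # True

It is no silver bullet, of course, but it shifts the burden from the user to the software, which is where it belongs.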

Names and logos: The awkward case of Cuil

I’m a bit late on this, but I wanted to briefly mention Cuil. Cuil is a search engine developed by ex-Google employees that deviates from Google’s strict “search is the answer. What was the question?” strategy to offer faceted search results. Faceted search results are demonstrably useful, particularly to typical users who enter few words and then need to drill down to more useful results. Faceted search is a point of difference between Google and Cuil, and a trick Google has missed, in my opinion. Cuil also claims to index significantly more content, and to offer more behind-the-scenes analysis of search results than its competitors. Cuil’s search results interface leaves a bit to be desired (in particular the lack of clear ranking), but other than that it is an interesting tool and I will be keeping an eye on it.

Cuil has an unfortunate problem, though:

Cuil logo

I understand that the company posits (possibly incorrectly) that ‘cuil’ means knowledge in Irish Gaelic, and is pronounced “cool”. I understand that this is very Web 2.0, and that the ‘i’ reflects the word ‘iPod’ and all those other iThings, as well as ‘information’. However, the first thing I saw when I looked at this was a French colloquialism that is less than polite, and is spelled c-u-l. For what it is worth, I am hardly the first person to notice this; apparently other misspellings also have unfortunate meanings, though none that are so clearly suggested by the logo.

This is a wonderful example of why it is important to test branding in all major markets when you are taking a brand international, but particularly on the web: without asking locals (or consulting localisation experts), you can’t know whether you are inadvertently giving offence (or making people laugh at you). Has anyone else noticed any other unfortunate product names, other than the famous Pajero?

Human meaning in machine encoding? Thoughts on the semantic web

Tim Berners-Lee, the inventor of the world wide web, outlines his goals for the semantic web in the book he wrote about the development of the web.  I love his dream, that one day we would be able to ask “find out where a baseball game was played today and it was also 22C”.  I just don’t believe it is very likely to happen, for two reasons:

  • Effort
  • Natural language

The effort question is a really interesting one. Somewhere along the line, someone has to expend the effort to encode human semantic concepts in a machine-readable way, or, alternatively, to answer their own questions. For some people, a certain level of machine encoding of the semantics they personally attach to an object (usually in the form of tags) is useful, either for some purpose of their own (information retrieval, for example) or for some social-capital reason (see a more detailed explanation of this here). However, when a person has only a small amount of information to organise, they are considerably less likely to add semantic information to it.
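
To make the difference in effort concrete, here is a tiny, made-up example of the lightweight machine encoding a person might actually be willing to add (tags), next to the much richer description the full semantic web vision implies; every name and value here is invented for illustration:

    # Invented example: the lightweight semantics a person will often add (tags)
    # versus the fuller machine-readable description the semantic web vision implies.
    photo_with_tags = {
        "file": "img_2041.jpg",
        "tags": ["tulips", "spring", "garden"],  # cheap to add, loosely defined
    }

    photo_with_richer_semantics = {
        "file": "img_2041.jpg",
        "depicts": {"kind": "flower", "species": "Tulipa gesneriana"},  # precise, but far more effort
        "season": "spring",
        "location": {"city": "Dunedin", "country": "New Zealand"},
        "taken_on": "2008-10-04",
    }

    print(photo_with_tags["tags"])
    print(sorted(photo_with_richer_semantics.keys()))

The first kind of description gets added because it costs almost nothing; the second rarely gets added at all, and that gap is exactly the effort problem.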

If there is no human being willing to expend the effort to add semantic information, there may be a human being willing to write computer programs to extract such information. How successful this is depends on the kind of information to be extracted and what it is to be extracted from: pulling dates or author names out of regularly structured pages is far easier than pulling the topic or mood out of free text, for example.

This is less effort than tagging, because it can be done once and used many times, but it is still effort that someone has to expend.
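
As a rough illustration of what that effort looks like (the text and pattern here are invented, and this is not any particular system), a small script that pulls dates out of free text shows why success depends so heavily on what you are extracting and from what:

    # A toy extractor (invented example, not any real system): pulling dates out of free
    # text works well when the information is regular, and quietly fails when it is not.
    import re

    DATE_PATTERN = re.compile(r"\b(\d{1,2})\s+(January|February|March|April|May|June|"
                              r"July|August|September|October|November|December)\s+(\d{4})\b")

    text = ("The game was played on 28 July 2008 at the local park. "
            "A rematch is planned for early next spring.")

    for day, month, year in DATE_PATTERN.findall(text):
        print(day, month, year)  # finds "28 July 2008"
    # "early next spring" carries date-like meaning too, but a simple pattern never sees it.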

One further approach is, as in this paper (sorry, paywall), to leverage human-created tags to allow machines to do things that look like they understand the semantic web. In the paper, for example, the author wrote a program that used the way people had combined tags on Flickr to understand which concrete things (for example, tulips) were associated with abstract concepts (for example, spring).
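
The underlying idea, stripped right down, is tag co-occurrence. Here is a minimal sketch with invented data (this is not the paper’s actual method, nor Flickr’s API):

    # A minimal sketch of tag co-occurrence (invented data; not the paper's method or
    # Flickr's API): concrete tags that frequently appear alongside an abstract tag
    # like "spring" are taken as being associated with it.
    from collections import Counter
    from itertools import chain

    tagged_photos = [
        {"tulips", "spring", "garden"},
        {"spring", "lambs", "farm"},
        {"tulips", "spring", "macro"},
        {"snow", "winter", "mountain"},
    ]

    abstract_tag = "spring"
    co_occurring = Counter(chain.from_iterable(
        tags - {abstract_tag} for tags in tagged_photos if abstract_tag in tags
    ))
    print(co_occurring.most_common(3))  # e.g. [('tulips', 2), ('garden', 1), ('lambs', 1)]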

In any of the three cases, human effort is required to generate the information needed for machines to do the kind of processing Berners-Lee suggests the semantic web ought to be able to do for us. Actually getting people to expend this effort requires them to have a special interest in it, either at a personal level (as with tagging) or a research interest (as with automatic extraction programs). I think this effort is a major impediment to more widespread “semantic web” applications and uses.

The natural language question is also a barrier, and a much more usability-centred one. Even if we could get everything tagged up, either by human hands or automatically, how people would then ask this semantic web to answer their questions remains open. glenn, an acquaintance of mine who works in the field (and likes his name spelt with a lower-case ‘g’), thinks that we need query languages, and I am inclined to agree. If natural language searching on the free-text internet fails (paywall again, sorry), it will surely fail in any kind of structured environment. Unfortunately, users are known to do poorly with Boolean search, and it is reasonable to expect that other query languages would produce similarly bad results, so even if the web were tagged up, it may still be fairly difficult for the average user to ask the question Berners-Lee posed in his book.
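
To make that concrete, here is a toy illustration (entirely invented data and field names, not any real query language or dataset) of the Boolean-style structure a user would somehow have to express to answer Berners-Lee’s baseball question, compared with simply asking it in English:

    # Natural language: "find out where a baseball game was played today and it was also 22C".
    # Below is the kind of structured, Boolean-style query that question implies; the events,
    # field names, and dates are all made up for illustration.
    from datetime import date

    events = [
        {"type": "baseball_game", "venue": "Wrigley Field", "date": date(2008, 7, 28), "temp_c": 22},
        {"type": "concert",       "venue": "Town Hall",     "date": date(2008, 7, 28), "temp_c": 18},
    ]

    answers = [e["venue"] for e in events
               if e["type"] == "baseball_game"
               and e["date"] == date(2008, 7, 28)
               and e["temp_c"] == 22]
    print(answers)  # ['Wrigley Field']

Expecting the average user to translate their question into that kind of structure is, I suspect, expecting too much.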

I think tagging is great, because it imbues objects with personal meaning and allows people to find things more easily. I have yet to see evidence of a truly workable (and by implication usable) semantic web, though, and as such I don’t believe people will be able to answer questions about baseball games at 22C for some time to come. I also believe that even when it is possible to answer these sorts of questions, it will be not because of advanced tagging of web pages, but because of advanced text processing by search engines; and that isn’t the semantic web, it’s search engine companies prioritising user experience.

Voyage: A road to nowhere

Voyage is a novel feed reader that displays content in a 3D-appearing space, and despite my well-documented reservations about 3D interfaces, I tried to give Voyage a go. I have to assume that Voyage is not actually a production-level RSS service, but rather a demonstration system, because it is lacking some fundamental features of RSS readers, including:

  • Personalisation: You can’t create your own account on Voyage, which means you have to re-add your feeds every time you visit the site.
  • RSS search: Voyage forces you to know the RSS URL of the feed you want to access; not the name of the site or the site URL, but the RSS URL. This is a big ask of the average user.
  • Reading: To actually read any interesting RSS feeds you have to leave Voyage and go to the original site, even in cases where the feed contains the full text (rather than just a summary).
  • Pictures: The site does not display pictures. This is a bit of a problem for picture-oriented blogs like I Can Has Cheezburger.

Given these limitations, this display feels more like a discovery service for new blogs (along the lines of the liveplasma music and movie discovery service), but it does not have the back-end database of recommendations.  Either way, there are considerable usability problems with this interface:

  • The text is not clear or readable
  • The 3D-ness of the interface doesn’t add anything (the only dimension that appears to have any meaning at all is the forward-and-back one), and it does make things harder to find (indeed, included in the 23 Things task is the “add a feed and try to find it” puzzle). Given that 3D interfaces perform demonstrably (PDF) worse in information-organisation tasks, and this interface does not have to be 3D, this is a serious usability concern
  • The feeds area looks as though you ought to be able to click on the feeds to go to them. Instead, clicking on them deletes them, which, given that you need to know the feed URL of a site to add it, is a high-cost error for a simple action
  • It simply isn’t clear what many of the interface elements (space, colour, the horizontal line) mean, making the interface difficult to learn
  • It is difficult to navigate back “out” once you have selected something, meaning that navigation is difficult and actions cannot be easily undone

Each of these concerns contravenes at least one of this excellent list of usability first principles, meaning that, basically, Voyage is hard to use. Not only is it difficult to use, but it doesn’t offer either a decent feed reader or an interesting discovery service, so there is nothing in the user experience compelling enough to entice users back. Maybe in a couple of years this concept will be more fully fleshed out, but in the meantime I am going to stick with Google Reader, which does reading and recommendations very well indeed.

VuFind: An interesting case of open source usability

We all know that library users are consistently frustrated with library systems, and cannot find what they want, particularly since the advent of Google (PDF). Some academics berate and despair of their students’ information-seeking practices, and claim that Google is ruining young minds. In my opinion, as I have stated before, berating students (and Google) is going after the wrong target. It is human nature to maximise benefits while minimising effort, and for many students the time they will spend searching a number of interfaces for relevant resources, particularly when the interfaces are confusing, archaic, and unhelpful, is simply better spent reading the resources they find on Google and writing their assignments.

The only way to change this “satisficing” approach and reveal the vast range of library resources available to our students is to make them findable through interfaces that do not confuse or humiliate users, and do not require a librarian to operate. While libraries can’t expect to compete with Google while they are buying information from a multitude of vendors that do not have standardised search results or formats, library search interfaces can offer some additional features (such as metadata-based faceting and primary browsing) that Google doesn’t offer; and if the information is better, or gets better results (like higher grades), that will also prove an incentive to use library interfaces.

Typically I expect library catalogues to be ugly and cantankerous; I see that as the price I pay for finding the books I want (and don’t even get me started on finding journal articles; usually I start with Google Scholar). This is why, when I looked at VuFind on the National Library web site, I was so impressed with it: it is clean, attractive, and very usable:

  • It searches more than one type of holding; my search results included books, online resources, and microfilm. This is much closer to the “one stop shop” expectations that users have than any library system I have used in the past.
  • I can choose between my search results based on metadata facets; that is, I can choose books, or works by a certain author, or items from a specific subject (see the small faceting sketch after this list). This means that single-term searches are much more likely to be successful, as I can easily disambiguate my search and bring the results that are most relevant to me to the top.
  • Results are relevance-ranked (don’t laugh, some library systems don’t do this). This feature is the one that gave Google its search-engine market dominance; excellent relevance ranking meant that people found what they were looking for in the one to two pages of results they typically view.
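
As promised above, here is a small sketch of what metadata faceting amounts to behind the scenes. The records and field names are invented for illustration; this is not VuFind’s actual schema or code:

    # A toy illustration of metadata faceting (hypothetical records, not VuFind's schema):
    # group a result set by a metadata field and count, so a single-term search can be
    # narrowed by format, author, or subject without retyping the query.
    from collections import Counter

    results = [
        {"title": "Bird Life of Coastal Otago", "format": "Book",      "subject": "Birds"},
        {"title": "New Zealand Birds Online",   "format": "Online",    "subject": "Birds"},
        {"title": "Otago Witness, 1898",        "format": "Microfilm", "subject": "Newspapers"},
    ]

    def facet(results, field):
        """Return (value, count) pairs for one metadata field, most common first."""
        return Counter(r[field] for r in results).most_common()

    print(facet(results, "format"))   # [('Book', 1), ('Online', 1), ('Microfilm', 1)]
    print(facet(results, "subject"))  # [('Birds', 2), ('Newspapers', 1)]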

These are just a few of the features that make VuFind feel like a breath of fresh air. Another thing that is unusual about VuFind, though, and one that makes it especially exciting to me, is the fact that it is open source. This basically means that you can get the software for free (though if you want support you will generally pay for it), and that if you want to change something about it, all you need is a willing programmer.

Open source software provides large scope for improving the usability of software locally, because unusable features can be altered; however, generally speaking, open source software is not as usable as its “closed source” or commercial counterparts (a problem that is recognised, but not well handled, in the open source community). Dave Nichols and Mike Twidale, colleagues of mine, have long been interested in usability in open source software (and indeed in how to open source usability bug reporting). In a 2003 paper they published (which anyone interested in open source or usability should read), they suggested several reasons why open source software might have usability problems:

  • Open source communities, famous for comments like “RTFM” (read the **&%@& manual), are not generally welcoming to experts from other backgrounds, which usability experts often are
  • Design for usability generally has to start before design for coding
  • Open source communities are populated by programmers, who generally cannot see the problems that users with a lesser understanding of computers might have
  • Open source software programming is often done to meet a need of the programmer, and as mentioned above, programmers have very different user interface needs to other users
  • Design by committee and software bloat are not usually good for usability, and open source software is prone to both

In another paper on open source usability, Dave and Mike noted that it can be hard to report usability bugs in the same way as technical bugs, and that open source interfaces may be prevented from innovating by playing “catch up” with their commercial counterparts.

So VuFind is positively fascinating for its usability, both among library systems (though some of the newer commercial systems look interesting), and among open source projects (Koha is similarly fascinatingly usable and open source). Why is it that VuFind is such an exception to the rules?

  • It was created by a library, under one umbrella, and not in a typical open source community. Being under a single umbrella demonstrably helps open source projects’ usability (Dave and Mike again, there), largely by ameliorating design by committee and imposing some order on the process. It also means the community was different: VuFind’s website comments that it was developed “by libraries”, and thus not just by programmers, so feedback from other disciplines was likely welcome
  • Typical library system websites (though again, I can’t speak for some of the newer ones) are not effective for users, so VuFind didn’t have to play interface “catch up”
  • VuFind was developed “for libraries” not “for programmers”
  • It looks suspiciously (to me) like VuFind might have had a formal usability process, though I can’t find any evidence for this one way or another

In the end, whatever the specific differences are, VuFind is not just exciting in terms of its user experience, but fascinating, and an exemplar of how to do usability in an open source project. I don’t know if it is the way we will go with our discovery layer (and not having seen many of the other possibilities, I can’t comment on whether it is the way we should go either), but it certainly is a fascinating project, and I will be watching it further.

Social usability, acquaintances, and spam

Despite my many years of internet use, I have only rarely had those moments where I stumbled across something I really wasn’t looking for and didn’t want (usually because I typed something foolish into Google Images without safe search turned on). Invariably, what I have seen has been thumbnail-sized and relatively inoffensive, insofar as any adult content you weren’t looking for can be inoffensive (as for what people are looking for… that is neither for me to comment on, nor a topic for this blog).

Like Sara, though, my first experience of the true “Can. not. un. see” moment has come as a result of the 23 Things. I was checking my blog over the weekend, and saw I had a comment stuck in moderation. It was on a post I wrote early in the 23 Things, about anonymity online, and said merely “thanks”. Normally, I would delete such a post as spam outright, but given that I know many people are freshly beginning the 23 Things, and I didn’t want to discourage a new user, I thought I had better make sure that it wasn’t from a 23 Things fellow traveller. I didn’t recognise the email address, but that isn’t anything new, and the link wasn’t obviously spammy, so I clicked on it to see the person’s blog. Bad idea. What I saw was a large, outright obscene image, and I couldn’t close the browser tab fast enough.

So here we have a very specific set of social circumstances that led me to an unlikely behaviour, and had decidedly unpleasant results; it is easy to see how spammers, scammers, and phishers do their nefarious work. Trust and identity are important features of online social media, but they are hard to negotiate, and breaking this trust (as my commenter did over the weekend) has severely negative consequences. These include the personal negative response like the one I had yesterday, the time many of us (including me) spend moderating our blogs so that other people don’t have to be offended and so that such material is not linked from a professional platform, and the bandwidth cost associated with viewing unwanted images or other media.

What is the solution to these antisocial behaviours leading to bad user experience? One possibility is never to click on or approve anything from anyone we don’t know for certain, but to me this denies one of the more interesting possibilities of the web: meeting new people and ideas. Alternatively, we could decide not to moderate, and risk unsavoury links being added to our social spaces without our permission; however, this gives the spammers even more advertising (and I’m glad I am the only person who had to see what I saw). Being careful seems a happy medium, with a low rate of failure, but it is not always effective, and it would be nice if some of it could be automated. Since it isn’t, though, I urge all my readers to be careful out there, because once something is seen, you can’t unsee it. Does anyone have any better suggestions for dealing with this problem?



Comment moderation

If it is your first time posting, your comment will automatically be held for my moderation; I try to get to these as soon as possible. After that, your comments will appear automatically. If your comment is on-topic and isn't abusing me or anyone else who comments, chances are I'll leave it alone. That said, I reserve the right to delete (or infinitely moderate) any comments that are abusive, spammy, or otherwise irrelevant.