Usable usability assessment

I was going to write today about the vagaries of public transportation, and air travel in particular, but I am planning at least two more round trips to New Zealand in the near-ish future, so I shall wait to confirm my newly formed opinions (and hopefully simmer down some) before holding forth on the poor user experiences involved in that particular endeavour. Instead, I want to talk about something that underpins good usability (and, to a certain extent, user experience): usability assessment.

So far on this blog I have talked endlessly about user experiences (and, to a certain extent, usability) with little reference to how we know the things we know about the users of any given system. The way we know anything about users is assessment: either previous assessments that have contributed to a body of knowledge allowing us to make generalisations about “the user” (indeed, about “the user of any system”), or new assessments that answer specific questions about specific user groups and systems.

There are numerous ways of assessing usability (knowing these methods, and being able to apply them appropriately alongside the background knowledge about “the user” mentioned above, is a large part of what makes a usability professional), but discussing each of them is well beyond the scope of this post. What I want to talk about here is good usability assessment, and because a lot of my recent work has been with surveys, I’m going to use surveys as a reference point.

Given that usability assessment informs design and development, our understanding of our users, and (sometimes) the body of general knowledge about users, it’s a good idea to get assessment results as right as possible. This imperative is compounded by the fact that usability needs to do more than just make users happier; it also needs to be cost-effective (though, to be fair, the bar for this can be quite low: a representative of a large firm I once did some consulting for told me that every time a user phoned that company’s helpline it cost the company a minimum of $10, so it only takes a few dozen users who no longer need to call to pay off a few hundred dollars’ worth of usability consulting). There are basically three steps to making sure usability assessment results are useful:

  • Doing the right tests: This seems obvious, but it is worth mentioning all the same. Just as a chest x-ray can’t tell you whether you have a cracked kneecap, a lab-based usability study can’t tell you how software (or any system) gets used in the real world (similarly, usage studies can’t tell you why people do the things they do, and observational studies can’t tell you whether you should make that button blue or purple). Which test is right depends on what you’re trying to find out, how much money and time you have, what stage of development you’re at, and who your users are. The other part of doing the right test is knowing what to investigate: it’s all very well to assess the usability of your homepage (for example), but if 90% of your customers access your service via the telephone, it is the usability of your phone system you should be testing.
  • Testing the right users: This is more subtle than it seems. Testing on members of the development team is clearly not going to be effective, but there is more to it than that. Let’s examine how survey participants are chosen:
    • Where you advertise will affect the makeup of your participant population; for example, if you advertise a library survey only in the physical library, only those who visit the library in person are likely to see the ad.
    • Participants in a public survey are, to a certain extent, self-selecting: those who feel they have something to say on a topic are more likely to start a survey, and more likely to complete it. Offering rewards, and using broadly inclusive language in the advertising and survey wording, can ameliorate this to a certain extent, but it is still important to recognise the bias.
    • Survey timing is important. Running a survey during exam time at a university may attract a disproportionate number of procrastinators, for example, while running it during the summer term can only give reliable information about summer school attendees, not the population at large.
    • How you collect your surveys is important. Paper-based surveys have a much lower response rate than online surveys (and skew the results toward highly motivated participants, usually those who hold strong opinions), while collecting results online in a population that includes participants who are less tech-savvy (as older adults often are) will skew the results toward more technically able users. Decisions have to be made with your whole user population in mind.

    While it is almost certainly impossible to test all users of any given system, and in any heterogeneous population it is difficult to get even a truly representative sample, it is important to try to minimise sample bias, and to understand and acknowledge it where it happens (one simple way to quantify, and partially correct for, a known bias is sketched just after this list).

  • Making the test usable: This is where it is easy to make mistakes, especially with surveys. I recently saw a survey in which the participant was given a list of statements and asked, for each one, first how important it was and then how well they felt the system met that need. Given that a survey participant’s goal is usually to give their opinion first and foremost, I would bet a lot of participants fill this out wrongly. Using language the users of your system don’t understand will also reduce the reliability of your results: instead of asking how happy users are with their ISP, you might be better advised to ask how happy they are with their internet service. One final (and insidious) example of poor survey usability is bias, that is, letting the phrasing of a question influence the answer (I made this mistake myself recently, asking what users would call a service they used to make contact with the library, and repeating the word “contact” in one of the options given).
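The sample-bias sketch promised above: one common way to quantify, and partially correct for, a known bias is post-stratification weighting, where each group of respondents is weighted by the ratio of its share of the real user population to its share of the sample. The minimal Python sketch below only illustrates the idea; the group names, population shares and satisfaction scores are entirely hypothetical, and weighting is a supplement to (not a substitute for) acknowledging the bias.

    # Minimal sketch of post-stratification weighting (all numbers hypothetical).
    # Idea: if a group makes up 40% of the real user population but over half of
    # the respondents, each of its responses should count for a little less when
    # results are summarised, and vice versa for under-represented groups.

    # Known (or estimated) share of each group in the real user population.
    population_share = {"on-campus": 0.40, "distance": 0.35, "staff": 0.25}

    # Survey responses per group (say, satisfaction on a 1-5 scale).
    responses = {
        "on-campus": [4, 5, 3, 4, 4, 5, 4, 3],   # over-represented in the sample
        "distance":  [2, 3, 2],                  # under-represented in the sample
        "staff":     [4, 4, 5],
    }

    total = sum(len(scores) for scores in responses.values())

    # Weight for each group = population share / sample share.
    weights = {
        group: population_share[group] / (len(scores) / total)
        for group, scores in responses.items()
    }

    # Compare the naive (unweighted) mean with the weighted mean.
    all_scores = [s for scores in responses.values() for s in scores]
    unweighted_mean = sum(all_scores) / len(all_scores)

    weighted_sum = sum(weights[g] * s for g, scores in responses.items() for s in scores)
    weighted_count = sum(weights[g] * len(scores) for g, scores in responses.items())
    weighted_mean = weighted_sum / weighted_count

    print(f"Unweighted mean satisfaction: {unweighted_mean:.2f}")
    print(f"Weighted mean satisfaction:   {weighted_mean:.2f}")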

Usability assessment is a tool that can help make your users happier, and possibly reduce your costs. Like anything, though, it only works if you get it right.
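As a rough illustration of the cost point, here is a minimal break-even sketch in the same spirit; the $10-per-call figure comes from the helpline anecdote above, while the consulting fee is a hypothetical placeholder.

    # Hypothetical break-even calculation for a small usability engagement.
    cost_per_support_call = 10   # minimum cost per helpline call, per the anecdote above
    consulting_fee = 500         # hypothetical cost of a modest usability consultation

    # Number of support calls that must be avoided before the work pays for itself.
    calls_to_break_even = consulting_fee / cost_per_support_call
    print(f"Avoided calls needed to break even: {calls_to_break_even:.0f}")   # 50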


2 Responses to “Usable usability assessment”


  1. libodyssey, Friday, March 21, 2008 at 7:25 pm

    I agree that a lot of people complete surveys now on the basis of what’s being offered as a prize. I’m not the slightest bit tinny and I never win anything, so it’s lucky that just having someone listen to my opinion is reward enough for me!
    We have just started using Google Analytics to measure who’s visiting Swinburne Research Bank and what they do while they’re there. It’s interesting from a repository manager’s point of view, because it will confirm/deny our assumptions about who finds the service useful. It’s also a particularly good time as we redesign the website because it will have a significant impact on the choices we make. I have always assumed that users of the repository act differently from other website users, but maybe I am wrong. As you say, it’s difficult to know how to predict users’ behaviour when the website’s audience is completely unknown. We are not really trying to sell anyone a product, but the user studies will certainly affect the way we build and promote our services.

  2. Sara Jervis, Thursday, March 27, 2008 at 10:58 am

    I have just purchased a new (used) car, a 2005 model. I have been surveyed several times about my purchase experience BUT not about the car. The car (a BMW) has super-duper technology features that could, at a pinch, take me to the moon, but you cannot fit an esky in the boot (the 1997 model could), and the locking of the non-driver doors as a safety feature is not efficient, unlike in my previous model. I gather that I might have to use the book to see if I can lock the doors with the engine on and me in my seat, after a passenger has alighted; not good enough for this user, and that is if I can at all. I have been advised to let BMW know about these irritations with “their car of the year”, and I say: as if.
    Market forces dictate what is listened to, and I leave it to the market to decide whether a roomier boot is something to be compromised on for better whiz-bang safety. I shall buy a smaller esky.

    The survey I (almost) participated in was about the salespeople who sold me the car. I got into a rage with the messenger. If I did not give the salesperson I dealt with 100% excellent, he would be marked down. I yelled, yes, yelled at the person undertaking the survey and demanded to speak to the survey designer. I spoke firmly about the impossibility of a perfect score, except for a Romanian gymnast. After I raged and raved, the survey person asked me if I would say 95%!!

    I refused to participate.

    I then found out the name of the Sales Director and wrote a letter about how the salespeople were excellent, and she wrote back to thank me and advised that she had passed my compliments on to the salesperson.

    Call me old-fashioned, but I generally, as a rule, refuse to participate in surveys, because they have not been designed by people like you.

