Archive for the 'usability testing' Category

Women in tech, inclusive design, and the lesson Apple learned today

Women are clearly a minority in tech fields, both in education and in the workforce.  There are a number of reasons why this might be the case, including cultural attitudes, lack of mentorship, and outright hostility toward women in tech. This post isn’t about the causes of having so few women in tech, though–it’s about the results. People tend to design for themselves, particularly in tech. That’s perfectly expected, but it means that with the paucity of women in tech fields, design work is not often done with women in mind. Testing on people not like the designers–people who use or might use the thing you design–is pretty much the core of usability.  Given that women do purchase and use technology, it’s worth including some of us on any design team where possible, and it’s always worth including women in the testing phase of any product they might use, because they might just see it differently. (The same principle applies to products that might be used by children, the elderly, or anyone else in a target market who is not represented on the design team.)  Back to women, though: today Apple learned about including women the hard way.

Today Apple announced their much-awaited new toy.  As many people predicted, it is a tablet, and they have called it the iPad.  The name has problems, including the phonemic similarity to iPod (which one of my workmates pointed out), but more embarrassingly for Apple, the connotations that immediately led to not Apple, nor iPad, but iTampon becoming a trending topic on Twitter, and some pretty vicious skewering on sites like AdFreak:

Apple's iPad spoof advertisement showing feminine hygiene product

This isn’t the first time Apple has forgotten women in its design process; I’ve already blogged about the direction of the clip on the iPod shuffle. Despite the free publicity, though, this is the one they might learn from–being ridiculed all over the internet probably wasn’t what they hoped for with this announcement. Apple may well have had women in their design process (there is a strange kind of groupthink that goes on in team-based design, where people miss things that would be seen by anyone outside the team), but they clearly didn’t test on a diversity of women.

The name of this product shows it wasn’t designed with me in mind, and makes me a little less likely to buy it as a result–this design, like the clip on the shuffle, isn’t inclusive.  Obviously not enough people complained about the shuffle, and Apple didn’t understand the need to include women in design and testing.  I bet they will next time, though, and I hope other companies have seen Apple’s mistake and learned something too.


Google search isn’t just search anymore

I know I’m a bit late to the table with this, but Google search isn’t restricted to just searching anymore!  They’ve introduced some browsing tools as well (see the video below for more):

Now, it’s easy to figure out that I am very pro-browsing, and therefore I think it’s great that Google has incorporated these features into their search experience, but I’d like to unpack just why I think browsing is such a good thing (and make a couple of suggestions for extending what Google is doing) along the way.

Google has been very pro-search as an information organisation and finding strategy for a long time; their search-don’t-sort approach to Gmail is one obvious example of this.  It’s completely understandable that this has been Google’s whole approach for so long–after all, search is what they do (and they do it very well).

Search isn’t always the answer, though (and if you watch this video of a Google user experience researcher talking about the search options, it is evident that Google knows that).  For one thing, humans employ more than just search in their information seeking strategies: the research (PDF) shows that information seeking is generally an iterative process that includes searching, browsing, and refinement.  Not only is search not the only approach we use for finding information, but sometimes search isn’t enough on its own: with all the information on the web, when someone types ‘Placebo’ into a search box it can be hard to know whether they want the psychological effects of sugar pills or the British-based rock band (this ambiguity applies to any number of terms). Similarly, information seekers may want a particular type of information (for example reviews, or places where a product can be bought), or information from a particular geographic location, time period, author, or subject field.  And even with known-item searches (those where the searcher knows exactly what they are looking for, and that it exists somewhere, because they have found a pointer to it or seen it before), if the searcher doesn’t remember the exact words that occur in the document, they might not find what they are looking for.

Google’s ‘more search options’ are beginning to deal with this problem.  They allow people to find three specific types of content (reviews, forums, and video), they provide suggested search terms, and they allow the user to look at results from a specific time and see how a search term’s popularity has changed over time.  I’m not entirely sure what value the ‘wonder wheel’ (see below) adds, given that the related search terms provide all the wonder wheel terms and more, but I suppose some people may find the visual presentation useful.

Google's wonder wheel, a visual display of related search terms

It certainly is heartening, for someone as invested in browsing as I am, to see Google incorporating browsing into their search.  All I want now is to see it expanded: I want to filter news by topic and country (and standard search results, for that matter); when I use Scholar, I want to be able to browse by author or year.  What Google has provided is an excellent start, and I look forward to seeing where this goes in the future.

The new Facebook: Not yet unfriended by users, but close

Facebook recently made a change to their interface that was the subject of outrage for many of their users, inspiring more than 1.7 million to sign a petition to reject it.  Facebook has made some changes to accommodate some of the things users said were problems, but many of the changes (including the slower-to-render rounded corners on pictures) appear to be here to stay.

Initially I was mildly irritated by the new interface, but I put it down to my own change aversion (users near-universally hate change, which is why if you’re making major changes, they had better help users out substantially).  However, as time has gone on, I have become more irritated with the new interface, not less.  As I see it, there are a few problems with it:

  • The proliferation of nonsense in my news feed, without an option to show status updates only.  Yes, I can turn the rubbish from every application off, if I want to, but this requires effort on my part, and will happen every time a new crop of applications becomes popular.  It’s also fairly irritating that I had to go to a help guide to even find out how to do this much, because the mechanism for operating these options is hidden unless you happen to look in the right place at the right time.
  • Another side of the same coin: having to edit applications not to publish my life story immediately upon adding them.  I don’t particularly want to bombard my friends with nonsense every time I play a turn in Lexulous.  This means I have to be particularly proactive in editing the settings for my applications so that they don’t bombard people, and the function for editing this is reasonably difficult to find.
  • The lack of automatic updating.  I know the old interface didn’t have it, but the trade-off for change was supposed to be that we got automatic updating. This change has had no benefit for me, so I resent the fact that the one useful thing that was supposed to happen didn’t.

Do I think no interface should ever change its look and feel?  Absolutely not.  Do I think that Facebook should have done some usability testing before launching this design?  For sure.  Do I think they did?  Dubious at best.  The Facebook approach, which is one that will always generate negative publicity, is to test their designs on real live users.

According to this blog post, the best way to plan change requires four steps: knowing your customers, listening to them, communicating with them, and responding to them. I think that sounds pretty good–pretty much like doing good user experience, in fact.  And Facebook didn’t do too badly, on a points system–they did warn users (albeit not in a way that most users would notice), and they did respond to some of the complaints users had (albeit not in a way that is really that satisfying).  Unfortunately, you can’t pick and choose which things you want out of that list–good user experience requires all of them.

Nonetheless, I think many (if not most) Facebook users will suck up the changes, even though they don’t like them, because for now, Facebook offers them more than the changes have taken away.  Having said that, though, like I said in my earlier post about Facebook and MySpace, people have personal purposes for using social networking tools.  If Facebook continues to change in a way that breaks that purpose (as the first iteration of these changes did), they will find that users (and thus their advertising dollars) drift away.

What product or service have you used that has slowly worn away at your loyalty until you couldn’t stand it any more?

Names and logos: The awkward case of Cuil

I’m a bit late on this, but I wanted to briefly mention Cuil.  Cuil is a search engine developed by ex-Google employees that deviates from Google’s strict “search is the answer.  What was the question?” strategy to offer faceted search results. Faceted search results are demonstrably useful, particularly to typical users who enter few words and then need to drill down to more useful results. Faceted search is a point of difference between Cuil and Google, and a trick Google has missed, in my opinion.  Cuil also claims to index significantly more content, and to offer more behind-the-scenes analysis of search results, than its competitors.  Cuil’s search results interface leaves a bit to be desired (in particular the lack of clear ranking), but other than that it is an interesting tool and I will be keeping an eye on it.

Cuil has an unfortunate problem, though:

Cuil logo


I understand that the company posits (possibly incorrectly) that ‘cuil’ means knowledge in Irish Gaelic, and is pronounced “cool”.  I understand that this is very Web 2.0, and that the ‘i’ reflects the word ‘iPod’ and all those other iThings, as well as ‘information’.  However, the first thing I saw when I looked at this was a French colloquialism that is less than polite, and is spelled c-u-l. For what it is worth, I am hardly the first person to notice this–apparently other misspellings also have unfortunate meanings, though none that are so clearly suggested by the logo.

This is a wonderful example of why it is important to test branding in all major markets when you’re selling a brand internationally, but particularly on the web–without asking locals (or consulting localization experts), you can’t know whether you are inadvertently giving offense (or making people laugh at you).  Has anyone else noticed any other unfortunate product names, other than the famous Pajero?

Usable usability assessment

I was going to write about the vagaries of public transportation, and in particular air travel, today, but I am planning at least two further round trips to New Zealand in the near-ish future, and so I shall wait to confirm my newly formed opinions (and hopefully simmer down some) before launching myself on the poor user experiences involved in that particular endeavour. Instead, I want to talk about something underpinning good usability (and to a certain extent, user experience): Usability assessment.

So far on this blog, I have talked endlessly about user experiences (and to a certain extent, usability) with little reference to how we know the things we know about users of any given system. The way we know anything about users is assessment, either previous assessment that has contributed to a body of knowledge that allows us to make generalisations about “the user” (indeed “the user of any system”), or new assessments that answer specific questions about specific user groups and systems.

There are numerous ways of assessing usability (combined with that background knowledge about “the user” mentioned above, knowing these methods and being able to apply them appropriately is what makes a usability professional), but to discuss each type is well beyond the scope of this post. What I want to talk about here is good usability assessment — and because a lot of the work I have done recently has been with surveys, I’m going to use those as a reference point.

Given that usability assessment informs design and development, our understanding of our users, and (sometimes) the body of general knowledge about users, it’s a good idea to get assessment results as right as possible. This imperative is compounded by the fact that usability needs to do more than just make users happier; it also needs to be cost effective (though to be fair, the barrier for this can be quite low — a representative of a large firm I once did some consulting for told me that every time a user phoned that company’s helpline, it cost the company a minimum of $10 — at that rate it doesn’t take many avoided calls to pay off a few hundred dollars worth of usability consulting). There are basically three steps to making sure usability assessment results are useful:
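The break-even arithmetic behind that $10-per-call anecdote is easy to sketch. The consulting fee below is a hypothetical round figure, not one from the anecdote:

```python
# Illustrative break-even estimate for usability consulting,
# using the $10-minimum-per-call figure mentioned above.
cost_per_support_call = 10.00  # minimum cost to the company per helpline call
consulting_cost = 500.00       # hypothetical usability consulting fee

# Number of support calls that must be avoided before the
# consulting has paid for itself.
calls_to_break_even = consulting_cost / cost_per_support_call
print(calls_to_break_even)  # 50.0
```

In other words, if a few hundred dollars of testing stops even a few dozen confused users from picking up the phone, it has already paid its way.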

  • Doing the right tests: This seems obvious, but it is worth mentioning all the same. Just like a chest x-ray can’t tell you if you have a cracked kneecap, a lab-based usability study can’t tell you how software (or any system) gets used in the real world (similarly, usage studies can’t tell you why people do the things they do, and observational studies can’t tell you whether you should make that button blue or purple). Which test is right depends on what you’re trying to find out, how much money and time you have, what stage of development you’re at, and who your users are. The other part of doing the right test is knowing what things to investigate; it’s all very well to assess the usability of your homepage (for example), but if 90% of your customers access your service via the telephone it is the usability of your phone system you should be testing.
  • Testing the right users: This is more subtle than it seems. Testing on members of the development team is clearly not going to be effective, but there is more to it than that. Let’s examine how survey participants are chosen:
    • Where you advertise will affect the makeup of your participant population; for example if you advertise a library survey only in the physical library, only those who come to the library in person are likely to see the ad.
    • Participants of a public survey are, to a certain extent, self-selecting. Those who feel they have something to say on a topic will be more likely to start a survey, and more likely to complete it. These effects can be ameliorated to a certain extent by offering rewards, and using broadly inclusive language in the advertising and survey wording can help, but it is important to still recognise this bias.
    • Survey timing is important. Running a survey during exam time at a university may attract a disproportionate number of procrastinators, for example, while running it during summer term can only give reliable information about summer school attendees, not the population at large.
    • How you collect your surveys is important. Paper-based surveys have a much lower response rate than online surveys (and skew the results toward highly motivated participants — usually those who hold strong opinions). Collecting results online in a population that includes less tech-savvy participants (as older adults often are), however, will skew the results toward more technically able users. Decisions have to be made with your whole user population in mind.

    While it is almost certainly impossible to test all users for any given system, and in any heterogeneous population it is difficult to even get a truly representative sample, it is important to try to minimise sample bias (and understand and acknowledge it, where it happens).

  • Making the test usable: This is where it is easy to make mistakes, especially with surveys. I recently saw a survey where the participant was given a list of statements and asked first how important each item was, and then how well they felt the system met their needs. Given that the goal of a survey participant is usually to give their opinion first and foremost, I bet a lot of participants will fill this out wrong. Using language users of your system don’t understand will also reduce the reliability of your results — instead of asking how happy users are with their ISP, you might be better advised to ask how happy they are with their internet service. One final (and insidious) example of poor survey usability is bias — letting the phrasing of a question influence the answer (I’ve made this mistake recently myself, asking what users would call a service they used to make contact with the library, and repeating the word contact in one of the options given).
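One common way to reduce the sample-bias problems described in the list above is stratified sampling: invite participants from each known subgroup of your users in proportion to that subgroup's share of the whole population. A minimal sketch (the group names and sizes here are hypothetical, standing in for whatever usage data you actually have):

```python
import random

# Hypothetical user population, grouped by how people access the service.
population = {
    "in-person": ["p%d" % i for i in range(600)],
    "online":    ["o%d" % i for i in range(300)],
    "phone":     ["t%d" % i for i in range(100)],
}

def stratified_sample(groups, total):
    """Draw from each group in proportion to its size, so that no
    single channel dominates the survey sample."""
    overall = sum(len(members) for members in groups.values())
    sample = []
    for name, members in groups.items():
        share = round(total * len(members) / overall)
        sample.extend(random.sample(members, share))
    return sample

survey_invitees = stratified_sample(population, total=50)
print(len(survey_invitees))  # 50: split 30/15/5 across the three groups
```

This only controls for the biases you know about (here, access channel); self-selection and timing effects still need to be acknowledged separately, as noted above.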

Usability assessment is a tool that can help make your users happier, and possibly reduce your costs. Like anything, though, it only works if you get it right.



Some rights reserved.

Comment moderation

If it is your first time posting, your comment will automatically be held for my moderation -- I try to get to these as soon as possible. After that, your comments will appear automatically. If your comment is on-topic and isn't abusing me or anyone else who comments, chances are I'll leave it alone. That said, I reserve the right to delete (or infinitely moderate) any comments that are abusive, spammy or otherwise irrelevant.