Archive for the 'trust' Category

Password masking, and the difference between usability and user experience

Recently, Jakob Nielsen recommended removing the little black dots that come up when you type in a password, and having your password appear in clear text instead.  He had some pretty good reasons for recommending this, including:

  • Increased password security
  • Mobile usability
  • Error prevention

However, Nielsen also recognises that there are some situations and some passwords (for example banking passwords) where security outweighs usability. You can read more in his article on the matter here.

Responses to his idea ran the gamut from wholehearted agreement (by a security expert no less)  through tentative disagreement to pretty strong disagreement.

There has been some comment on the sociotechnical aspects of password masking, including using masking as a reminder to users that they ought to keep their passwords secure, and a discussion about the reasons why many people are uncomfortable with masking.

Other responses suggested solutions to the problem, including displaying only the most recently typed character (like on the iPhone), and giving the user the option to unmask (rather than mask) a password.
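The last-character-visible approach can be described as a tiny masking rule. The sketch below is my own illustration, not anyone's actual implementation; a real device also re-masks the last character after a short delay, which would need a timer in the UI layer and is omitted here:

```typescript
// Sketch of iPhone-style password masking: every character is replaced
// by a dot except the most recently typed one. The function name is
// hypothetical; the re-masking delay on a real device is not modelled.
function maskPassword(typed: string): string {
  if (typed.length === 0) {
    return "";
  }
  // Mask all but the final character.
  return "•".repeat(typed.length - 1) + typed[typed.length - 1];
}
```

For example, `maskPassword("hunter2")` yields `••••••2`, which is exactly why, on a shared screen, an observer watching you type still sees your whole password one character at a time.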

I completely understand the usability reasons for unmasking passwords, and I agree with what Jakob Nielsen is saying, up to a point.  My preferred option out of all those suggested, though, is the last one, where a user can choose to unmask a password.  My reason is a common context of password use, which I will illustrate with an example:

I’ve just finished a large group project to launch a new library catalogue; we did a lot of collaborative work, and spent a lot of time using computers that projected onto a large screen.  We frequently read email to remember discussions we’d had about the system, manage links, manage to-do lists and generally remind ourselves what was going on (this is a really common way (PDF) of storing “stuff” in one’s headspace), and the system had components we had to log into.  We were logging into and out of systems left and right, and always on a big screen. I work at an institution with single sign-on: this means your password for the HR system where you manage your payroll and salary, your library password, your email password, they’re all the same thing (bear in mind that single sign-on is good for security; users are less likely to use bad passwords if they only have to remember a few of them). Even more frequently than we had these meetings, two or more of us would be clustered around a desk testing some aspect of the new library system that required a log in or out.

I can’t imagine that either of these scenarios is uncommon in the workplace, meaning that in Nielsen’s world users would all share their passwords.  Similarly, I imagine it is fairly common in social contexts, particularly with shared houses and computers. Sharing passwords is undesirable at best, and I don’t need to describe how much damage one bad apple in a workplace could do under such circumstances; it would also be extremely difficult to track down who that person was, and what they had done, when a large group of individuals all knew each others’ passwords.  Not only that, with single sign-on, passwords provide access to confidential and sensitive information, including (at my institution) email, leave details, salary details and library details.

Just like Nielsen’s solution, having characters disappear one at a time essentially clear-texts your password to anyone who happens to be present, leaving the check-box options as the only way to balance security and usability.  Nielsen suggests that the checkbox be “hide”, but I disagree.  The social implication of a “hide” box is that you have to make the decision to hide your password from your colleagues or loved ones or friends in front of them, which sets up the potential for interesting dynamics around trust in professional and personal social interactions.  My preference would be an “unhide” box, which implies it is simply natural to keep one’s password hidden, thereby avoiding any issues of trust in situations where passwords might otherwise be shared.

The problem with Nielsen’s approach is that it is a purist usability approach.  If all we cared about was making systems more usable, it would absolutely be right to expose everyone’s password, with the option to hide it occasionally as necessary.  This could lead to extremely uncomfortable social situations in both the work and personal spheres, though, and as such is poor design of user experience, which takes the context of use into account.  I can’t recommend something that would so frequently lead to bad user experience.

So, what do you think?  Should we all show each other our passwords?


Apologising: Google is doing it right

As some of you will know, gmail went down for 100 minutes early this morning.  I did notice it, but assumed it was my internet connection acting weird again, and I didn’t really need to read email at 7AM anyway.  For people elsewhere, however (for example in the US, where this was anything from midday to close of business), and even people in New Zealand, where the workday was just beginning, this could have been a real problem, especially for those using gmail for business purposes.

Given how reliable Google usually is, this sudden and lengthy failure will understandably shake confidence in the service, and may even make people more righteously angry than service failures by unreliable companies do (consider my eyerolling acceptance above, when I thought the problem was my ISP).

Generally speaking, users can think in one of three ways when things go wrong (and let’s face it, things do go wrong sometimes with any product or service):

  • That the product or service is unreliable and therefore they have lost faith in the product or service and the parent company
  • That something went wrong, but that the company did what they could about it and the solution was acceptable so they will continue to use the product or service
  • That the resolution to the problem was not satisfactory, but that they have no option but to use the company next time anyway (for example when the company has a monopoly; if this is the case, though, as soon as the company no longer has a monopoly it can expect customers to jump ship).

Google probably has a lot of people in the second category after today, because they did two things right: They updated people, and they wrote a fabulous and public apology.  The apology was probably even more effective than one normally would be because a large company apologised for an outage in a free service, but there are a few other things Google did right:

  • They apologised unreservedly, and with an understanding of their users.  There was no “we’re really sorry but it wasn’t our fault” or “we’re really sorry but you shouldn’t be so mad”–they understood why people might be annoyed, and they said sorry.
  • They explained the cause of the problem.  Not everyone is going to care about this, but when writing for a public audience it is good practice to explain for those who do.
  • They described what they are doing to make sure it doesn’t happen again.
  • They subtly reminded users why they chose gmail in the first place, not by saying “we are the most reliable”, but “we’re trying to keep failures rare”.
  • The apology was public (right up there on Google’s gmail blog), but not forced on those who didn’t notice the failure.

This is probably the work of Google’s PR people, but dealing with the failures that inevitably happen in life is a really important part of good user experience, and (I swear I don’t work for Google) this is one that Google have done really well.

The new Facebook: Not yet unfriended by users, but close

Facebook recently made a change to their interface that was the subject of outrage for many of their users, inspiring more than 1.7 million to sign a petition to reject it.  Facebook has made some changes to accommodate some of the things users said were problems, but many of the changes (including the slower-to-render rounded corners on pictures) appear to be here to stay.

Initially I was mildly irritated by the new interface, but I put it down to my change aversion (users near-universally hate change, which is why if you’re making major changes, they better help users out substantially).  However, as time has gone on, I have become more irritated with the new interface, not less.  As I see it, there are a few problems with the new interface:

  • The proliferation of nonsense in my news feed, without an option to show status updates only.  Yes, I can turn the rubbish from every application off, if I want to, but this requires effort on my part, and will happen every time a new crop of applications becomes popular.  It’s also fairly irritating that I had to go to a help guide to even find out how to do this much, because the mechanism for operating these options is hidden unless you happen to look in the right place at the right time.
  • Another side of the same coin: having to edit applications so that they don’t publish my life story immediately upon adding them.  I don’t particularly want to bombard my friends with nonsense every time I play a turn in Lexulous.  This means I have to be particularly proactive in editing the settings for my applications so that they don’t bombard people, and the function for editing this is reasonably difficult to find.
  • The lack of automatic updating.  I know the old interface didn’t have it, but the trade off for change was supposed to be that we got automatic updating. This change has had no benefit for me, so I resent the fact that the one useful thing that was supposed to happen didn’t.

Do I think no interface should ever change its look and feel?  Absolutely not.  Do I think that Facebook should have done some usability testing before launching this design?  For sure.  Do I think they did?  Dubious at best.  The Facebook approach, which is one that will always generate negative publicity, is to test their designs on real live users.

According to this blog post, the best way to plan change requires four steps: knowing your customers, listening to them, communicating with them, and responding to them. I think that sounds pretty good–pretty much like doing good user experience, in fact.  And Facebook didn’t do too badly, on a points system–they did warn users (albeit not in a way that most users would notice), and they did respond to some of the complaints users had (albeit not in a way that is really that satisfying).  Unfortunately, you can’t pick and choose which things you want out of that list–good user experience requires all of them.

Nonetheless, I think many (if not most) Facebook users will suck up the changes, even though they don’t like them, because for now, Facebook offers them more than the changes have taken away.  Having said that, though, like I said in my earlier post about Facebook and MySpace, people have personal purposes for using social networking tools.  If Facebook continues to change in a way that breaks that purpose (as the first iteration of these changes did), they will find that users (and thus their advertising dollars) drift away.

What product or service have you used that has slowly worn away at your loyalty until you couldn’t stand it any more?

When things go wrong, communicate

In three separate instances recently, I have been frustrated by poor communication on the part of service industries I deal with.

In the first instance I was drastically affected by an airline schedule change, and it was not made at all clear to me what my options were.  When I worked it out and tried to take advantage of the best option for me, the airline tried to charge me for it, claiming I had “already agreed to the schedule change”.  To be fair, I did eventually get what I needed with no additional fees to pay, and I was thrilled, but it seems a bit sad to be thrilled by an airline doing the right thing.

In the second, I found out that my favourite class at my local recreation centre was being cancelled from feedback the centre posted publicly to another class, saying they would be moving that class into the room we had previously occupied.

In the third case, I was phoned the day before a booked appointment to say that I would not be able to keep my appointment (and offered two less convenient times as alternatives) because the professional I was to see was “not in”.  When I pressed to try and see the person with whom I had an existing relationship, I was told they had left the business.  This from a business that would charge me a 50% cancellation fee if I were to cancel within 24 hours of an appointment.

In all three of these cases, the disappointing thing that happened was inevitable, and I am not blaming the companies concerned for what happened.  What I am blaming them for, and what really made me angry, was their inability to communicate with me properly and in a timely fashion about the issues which affected me, and the paucity of alternatives I was offered (at least in the first and third cases).

Things go wrong in life, particularly in those industries where a product and a service are sold together.  In most cases users will be pretty forgiving if they understand what has gone wrong, and you communicate with them and explain what their options are from the outset.  In the instances where something goes wrong, communication is the key to keeping a user as happy as it is humanly possible to do, and keeping them using your service rather than anyone else’s.

Has anyone else had an experience where communication made the difference between grudging satisfaction and outright annoyance?

Social usability, acquaintances, and spam

Despite my many years of internet use, I have only rarely had those moments where I stumbled across something I really wasn’t looking for and didn’t want (and usually because I typed something foolish into Google Images without the safe search turned on). Invariably, what I have seen has been thumbnails and relatively inoffensive–insofar as any adult content you weren’t looking for can be inoffensive (as for what people are looking for…that is neither for me to comment on, nor a topic for this blog).

Like Sara, though, my first experience of the true “Can. not. un. see” moment has come as a result of the 23 Things. I was checking my blog over the weekend, and saw I had a comment stuck in moderation. It was on a post I wrote early in the 23 Things, about anonymity online, and said merely “thanks”. Normally, I would delete such a post as spam outright, but given that I know many people are freshly beginning 23 Things, and I didn’t want to discourage a new user, I thought I had better make sure that it wasn’t a 23 Things fellow traveller. I didn’t recognise the email address, but that isn’t anything new, and the link wasn’t obviously spammy, so I clicked on it to see the person’s blog. Bad idea. What I saw was a large, outright obscene image and I couldn’t close the browser tab fast enough.

So here we have a very specific set of social circumstances that led me to an unlikely behaviour, and had decidedly unpleasant results; it is easy to see how spammers, scammers, and phishers do their nefarious work. Trust and identity are important features of online social media, but they are hard things to negotiate, and breaking this trust (as my commenter did over the weekend) has severely negative consequences. These include the personal negative responses like the one I had, the time many of us (including me) spend moderating our blogs so that other people don’t have to be offended and so that such material is not linked from a professional platform, and the bandwidth cost associated with viewing unwanted images or other media.

What is the solution to these antisocial behaviours leading to bad user experience? One possibility is to never click on or approve anything from anyone we don’t know for certain, but to me this denies one of the more interesting possibilities on the web: meeting new people and ideas. Alternatively we could decide not to moderate, and risk unsavoury links being added to our social spaces without our permission, however this gives the spammers even more advertising (and I’m glad I am the only person who had to see what I saw). Being careful seems a happy medium, with a low rate of failure, but it is not always effective, and it would be nice if some of it could be automated. Since it isn’t, though, I urge all my readers to be careful out there, because once something is seen, you can’t unsee it. Does anyone have any better suggestions for dealing with this problem?



Some rights reserved.

Comment moderation

If it is your first time posting, your comment will automatically be held for my moderation -- I try to get to these as soon as possible. After that, your comments will appear automatically. If your comment is on-topic and isn't abusing me or anyone else who comments, chances are I'll leave it alone. That said, I reserve the right to delete (or infinitely moderate) any comments that are abusive, spammy or otherwise irrelevant.