David Travis

UX Consultant and Trainer

Personal Homepage
Userfocus (http://www.userfocus.co.uk)

David Travis is the founder of Userfocus. He holds a BSc (Hons) degree and a PhD in Psychology, and he is a Chartered Psychologist. His professional affiliations include membership of the Experimental Psychology Society, the Information Architecture Institute and the Usability Professionals Association, and he is an Associate Fellow of the British Psychological Society. His career spans three decades as a researcher, author and consultant. David has carried out usability consulting for a number of clients and has delivered over 200 usability seminars for a range of private and public sector organisations. He has written two books on usability (Effective Color Displays: Theory and Practice and E-Commerce Usability) and co-edited a third (Information Superhighways: Multimedia Users and Futures). You can follow David on Twitter: @userfocus.


22.13 Commentary by David Travis

William Hudson writes knowledgeably and expertly about card sorting, as you would expect from someone who has been practising the technique for well over a decade. William’s chapter in the encyclopaedia will be a great help to people new to card sorting who need a step-by-step tutorial in the technique.

For people who already have some experience with card sorting, I want to add a few words about two problems that come up when you run open and closed card sorts in practice. First: with an open card sort, how do you deal with a very large web site where you may have hundreds of items to sort? Second: with a closed card sort, how can you present the results back to clients so that they understand the complex, quantitative data you have collected?

22.13.1 An open card sort with a very large web site

A few years ago, I worked with an auction web site to help them revise their online help system. The site had a large number of help pages (over 850), which had grown in an ad hoc manner. To ensure the new help system achieved its expected business benefits, the client needed to structure and organise the content before it was integrated into the new interface. However, even the most dedicated user won’t be happy sorting 850 cards of content, so we first had to make the task manageable.

We began with a content inventory of the online help system. This was an important first step in describing the relationships between the different pages since it allowed us to answer questions like ‘Which help pages are most commonly accessed?’, ‘What search terms are most common?’ and ‘How many help pages does the typical user view in a session?’ Answers to these questions helped us classify the content into ‘critical’ and ‘secondary’ content. We also weeded out the ‘ROT’: content that was Redundant, Outdated or Trivial. These steps reduced the sheer amount of content to a more manageable size.
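To give a flavour of the kind of analysis a content inventory supports, here is a minimal sketch in Python (pandas). The log file name, its columns and the 90th-percentile cut-off for ‘critical’ content are all assumptions invented for the example; they are not the client’s actual data or method.

```python
import pandas as pd

# Hypothetical access log for the help system, one row per page view.
# Assumed columns: session_id, page_id, search_term (blank if the view
# did not come from a search), timestamp.
log = pd.read_csv("help_access_log.csv")

# 'Which help pages are most commonly accessed?'
page_views = log["page_id"].value_counts()

# 'What search terms are most common?'
top_searches = log["search_term"].dropna().value_counts().head(20)

# 'How many help pages does the typical user view in a session?'
pages_per_session = log.groupby("session_id")["page_id"].nunique().median()

# Classify content: treat the most-viewed pages as 'critical' and the
# rest as 'secondary'. The 90th-percentile cut-off is an arbitrary
# choice for this illustration.
threshold = page_views.quantile(0.90)
critical = page_views[page_views >= threshold].index.tolist()
secondary = page_views[page_views < threshold].index.tolist()

print(f"Critical pages: {len(critical)}, secondary pages: {len(secondary)}")
print(f"Median pages viewed per session: {pages_per_session}")
```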

Our next step was to examine the content and see if there were any obvious, stand-out topics or groups. At this point, we did in fact subject a couple of people (I was one) to the entire inventory sort to see if we could spot any obvious categories. With this approach we were able to find clusters of cards that we thought most people would place together. For example, imagine a corporate intranet that has dozens of HR policies (travel policy, environment policy, maternity policy, etc.). It’s self-evident that most people will place these policies in the same group, so there is little to be gained by asking people to sort every policy when you can instead use a small handful of exemplars from each group in the card sort.
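As a rough sketch of this exemplar approach (the card names, the single cluster and the choice of two exemplars per cluster are all invented for illustration):

```python
import random

# Hypothetical hand-identified clusters of 'self-evident' cards. Cards
# not assigned to any cluster go into the sort individually, as usual.
clusters = {
    "HR policies": ["Travel policy", "Environment policy", "Maternity policy",
                    "Sickness policy", "Overtime policy"],
}
unclustered = ["Reset your password", "Dispute a charge", "Leave feedback"]

def pick_exemplars(clusters, unclustered, per_cluster=2, seed=1):
    """Replace each obvious cluster with a few exemplar cards."""
    rng = random.Random(seed)
    deck = list(unclustered)
    for cards in clusters.values():
        deck.extend(rng.sample(cards, min(per_cluster, len(cards))))
    return deck

print(pick_exemplars(clusters, unclustered))
```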

These two techniques helped us reduce the number of items to around 100, an acceptable number for a card sort.

As a result of our work, the new information architecture reduced the number of support enquiries from users who were unable to find or understand content. Users were now able to solve issues themselves, which indirectly increased the number of listings, sales and registrations.

22.13.2 Presenting the data from a closed card sort

Last year, I worked with the intranet design team at the Royal Bank of Scotland. The bank has over 150,000 employees and the design team had embarked on a major overhaul of the intranet, which contained around half a million pages. The design team wanted to check whether staff could find important content in the new structure, which had close to 1000 nodes.

We carried out a closed card sort much along the lines that William describes in his chapter. However, we wanted to make sure that we canvassed opinions from employees in several countries, including the US, the UK and India. Because of this, we decided to use a remote, unmoderated closed card sort. We asked a representative sample of bank employees to visit a web site that contained the intranet’s top-level navigation terms arranged in a tree structure (this helped us focus on navigation without the distractions of aesthetics). Participants then had to choose the right link for various tasks, such as “Find an expenses claim form”. Over 200 participants took part in the study.
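To make the stimulus concrete, here is a sketch of how such a tree and task might be represented. The labels are invented (the real tree had close to 1000 nodes), and this is just one plausible representation, not the structure the tool we used actually stores.

```python
# A tiny fragment of an intranet navigation tree as nested dicts.
tree = {
    "HR": {
        "Expenses": {"Expenses claim form": {}},
        "Leave": {"Holiday request form": {}},
    },
    "IT": {
        "Support": {"Reset your password": {}},
    },
}

# A task pairs the instruction shown to participants with the path the
# design team judged correct.
task = {
    "instruction": "Find an expenses claim form",
    "correct_path": ["HR", "Expenses", "Expenses claim form"],
}

def path_exists(tree, path):
    """Check that a task's correct path actually exists in the tree."""
    node = tree
    for label in path:
        if label not in node:
            return False
        node = node[label]
    return True

assert path_exists(tree, task["correct_path"])
```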

The challenge with a study like this is presenting the results back to the design team in such a way that they can make informed decisions based on the data. There are some obvious statistics to use, such as the number of participants who succeeded in the task, but equally useful for design is an understanding of the incorrect paths chosen by participants.
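As an illustration of how those statistics fall out of the raw data, here is a minimal sketch. The click paths below are invented; in the real study they came from the export of the tool we used.

```python
from collections import Counter

# Hypothetical click paths for one task, one tuple per participant.
correct_destination = "Expenses claim form"
paths = [
    ("HR", "Expenses", "Expenses claim form"),
    ("Finance", "Forms", "Purchase order form"),
    ("Finance", "Forms", "Purchase order form"),
    ("HR", "Leave", "Holiday request form"),
]

# Success rate: the share of participants whose final choice was correct.
successes = sum(1 for p in paths if p[-1] == correct_destination)
print(f"Success rate: {successes / len(paths):.0%}")

# The most commonly taken incorrect paths: these become the red lines on
# the tube-map diagram, and the first wrong turn marks a problem node.
wrong_paths = Counter(p for p in paths if p[-1] != correct_destination)
for path, n in wrong_paths.most_common(3):
    print(f"{n} participant(s): {' > '.join(path)}")
```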

Figure 22.1: An example (for one task) of the way we chose to present the results.

Figure 22.1 shows an example (for one task) of the way we chose to present the results. Note the following features of the graphic:

  • The ‘tube map’ diagram shows the main paths participants took to find the answer. The green line shows the correct path and the red lines show commonly taken incorrect paths. A red circle indicates a node where people chose the wrong path.
  • ‘Success rate’ shows the percentage of participants who found the correct answer. The error bars show the 95% confidence interval (there is a worked sketch of these calculations after this list).
  • ‘Success rate — detailed breakdown’ provides more background on the success rate measure, showing how many participants needed to backtrack to find the answer (“indirect success”).
  • ‘Directness’ is the percentage of participants who didn’t backtrack up the tree at any point during the task. The higher this score, the more confident we can be that participants were sure of their answers (even if the answer was wrong). The error bars show the 95% confidence interval.
  • ‘Time taken’ shows the median time taken by participants. The error bar shows the upper quartile. You can think of time taken as a measure of hesitation when completing the task.
  • We also included a qualitative judgement on how the design performed on this task based on the measured success rate (“Very poor” through to “Excellent”) and a section that interprets the findings and provides suggestions for improvement.
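For readers who want to reproduce these statistics themselves, here is a minimal sketch. The counts and timings are invented, and the Wilson score interval shown is one standard way to compute a 95% confidence interval for a proportion; I am not claiming it is the exact method the tool uses.

```python
import math
import statistics

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Hypothetical results for one task.
n = 200             # participants
successes = 124     # chose the correct answer
no_backtrack = 150  # never moved back up the tree ('directness')
times = [12.0, 15.5, 18.2, 21.0, 34.7]  # seconds; truncated for the example

low, high = wilson_interval(successes, n)
print(f"Success rate: {successes / n:.0%} (95% CI {low:.0%} to {high:.0%})")
print(f"Directness:   {no_backtrack / n:.0%}")

# Median time taken, with the upper quartile as the error bar.
median_time = statistics.median(times)
upper_quartile = statistics.quantiles(times, n=4)[2]
print(f"Median time:  {median_time:.1f}s (upper quartile {upper_quartile:.1f}s)")
```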

Other than the tube map visualisation, we were able to extract most of these metrics from the online tool we used to collect the data (Treejack). This made the analysis and presentation relatively straightforward. (Many thanks to Rebecca Shipp, RBS Group, for permission to describe this case study.)