Publication statistics

Pub. period: 2004-2013
Pub. count: 17
Number of co-authors: 5



Co-authors

Number of publications with favourite co-authors:

James R. Lewis: 5
Ray Matsil: 1

Productive colleagues

Jeff Sauro's most productive colleagues in number of publications:

James R. Lewis: 22
Joseph S. Dumas: 14



Jeff Sauro

Picture of Jeff Sauro.
Personal Homepage: http://www.MeasuringUsability.com

Jeff is a Six Sigma-trained statistical analyst and a pioneer in quantifying the user experience. He is the founding principal of Measuring Usability LLC, a quantitative user research firm based in Denver, CO. He is the author of four books, including Quantifying the User Experience: Practical Statistics for User Research. He has worked for GE, Intuit, PeopleSoft and Oracle, and has consulted with dozens of Fortune 500 companies, including Walmart, PayPal, Autodesk and McGraw Hill. Jeff received his master's degree from Stanford University and maintains the website MeasuringUsability.com. You can follow him on Twitter: @MsrUsability.


Publications by Jeff Sauro (bibliography)

2013
 

Sauro, Jeff (2013). Measuring Usability. Retrieved 9 October 2013 from http://www.measuringusability.com/blog/measure-findability.php

How to measure usability and findability.

© All rights reserved Sauro and/or his/her publisher

 

Sauro, Jeff (2013). What UX Methods to Use and When to Use Them. Retrieved 9 February 2014 from Measuring Usability: http://www.measuringusability.com/blog/method-when.php

2012

Sauro, Jeff (2012). Commentary on 'Card Sorting' by William Hudson.

 

Sauro, Jeff and Lewis, James R. (2012): Quantifying the User Experience: Practical Statistics for User Research. Morgan Kaufmann

You're being asked to quantify usability improvements with statistics. But even with a background in statistics, you may be hesitant to analyze the data, unsure which statistical tests to use and how to defend the small sample sizes associated with usability studies. This book is a practical guide to using statistics to solve the common quantitative problems that arise in user research. It addresses questions you face every day, such as: Is the current product more usable than our competition? Can we be sure at least 70% of users can complete the task on the first attempt? How long will it take users to purchase products on the website? The book shows you which test to use and provides a foundation for both the statistical theory and best practices in applying it. The authors draw on decades of statistical literature from human factors, industrial engineering and psychology, as well as their own published research, to provide the best solutions. They offer concrete solutions (Excel formulas, links to their own web calculators) along with an engaging discussion of the statistical reasons why the tests work and how to communicate the results effectively. The book:

Provides practical guidance on solving usability testing problems with statistics for any project, including those using Six Sigma practices
Shows practitioners which test to use, why it works, and best practices in application, along with easy-to-use Excel formulas and web calculators for analyzing data
Recommends ways for practitioners to communicate results to stakeholders in plain English

© All rights reserved Sauro and Lewis and/or Morgan Kaufmann
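
The book's benchmark questions (for example, "can we be sure at least 70% of users can complete the task on the first attempt?") reduce to one-sided tests of a completion rate against a target. As a minimal sketch of that idea, not the book's own code, here is an exact binomial test using only Python's standard library; the 14-of-15 data are invented:

```python
from math import comb

def binom_pvalue_greater(successes: int, n: int, p0: float) -> float:
    """Exact one-sided binomial test: P(X >= successes | X ~ Binomial(n, p0))."""
    return sum(comb(n, k) * p0**k * (1 - p0)**(n - k)
               for k in range(successes, n + 1))

# Hypothetical data: 14 of 15 users completed the task on the first attempt.
n, x = 15, 14
p = binom_pvalue_greater(x, n, p0=0.70)
print(f"observed rate {x/n:.0%}, p = {p:.3f} vs. a 70% benchmark")
# p is about .035, so at the usual .05 level the rate credibly exceeds 70%.
```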

 

Sauro, Jeff (2012). How Effective are Heuristic Evaluations. Retrieved 9 February 2014 from Measuring Usability: http://www.measuringusability.com/blog/effective-he.php

2011
 

Sauro, Jeff and Lewis, James R. (2011): When designing usability questionnaires, does it hurt to be positive? In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 2215-2224.

When designing questionnaires, there is a tradition of including items with both positive and negative wording to minimize acquiescence and extreme response biases. Two disadvantages of this approach are respondents accidentally agreeing with negative items (mistakes) and researchers forgetting to reverse the scales (miscoding). The original System Usability Scale (SUS) and an all-positively-worded version were administered in two experiments (n=161 and n=213) across eleven websites. There was no evidence for differences in the response biases between the two versions.

© All rights reserved Sauro and Lewis and/or their publisher
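
For context, the standard SUS alternates positively and negatively worded items, and its published scoring reverse-codes the even-numbered items before summing. Below is a sketch of that scoring step, which is exactly where the "miscoding" errors the abstract mentions creep in; the sample responses are hypothetical:

```python
def sus_score(responses):
    """Score a standard 10-item SUS questionnaire (1-5 agreement scale).

    Odd-numbered items are positively worded and contribute (response - 1);
    even-numbered items are negatively worded and must be reverse-coded,
    contributing (5 - response). Forgetting the reverse-coding is the
    'miscoding' error the abstract refers to.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses on a 1-5 scale")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so even i = items 1, 3, 5...
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5  # rescale the 0-40 raw total to 0-100

print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # hypothetical respondent -> 85.0
```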

 

Webb, Erika Noll, Matsil, Ray and Sauro, Jeff (2011): Benefit analysis of user assistance improvements. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 841-850.

In this paper, we describe a study conducted to examine the impact of changes to our user assistance model in our enterprise software systems. In this study, we examined both a traditional user assistance model, as well as our new user assistance model. In the traditional user assistance model, users of a general ledger prototype were given inline error messages and access to a PDF version of the help manual from a help icon at the top of the page. In the new user assistance model, error messages appeared in pop-up windows with links to specific areas where users could correct the errors. Fields that needed to be changed were highlighted with a red border and when clicked, a description of the required change would appear. When users needed help, they could select from lists of relevant help topics available at different levels based on where they were working in the system.

© All rights reserved Webb et al. and/or their publisher

 

Sauro, Jeff (2011). Measuring User Interface Disasters. Retrieved 26 September 2013 from http://www.measuringusability.com/blog/ui-disasters.php

The only thing worse than users failing a task is users failing a task and thinking they've completed it successfully. This is a disaster.

© All rights reserved Sauro and/or his/her publisher
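
The "disaster" here is a specific, countable outcome: a failed task that the participant believes succeeded. A minimal sketch of tallying that rate from per-attempt records; the data and field layout are made up:

```python
# Each record: (task_completed_successfully, participant_believed_success).
attempts = [(True, True), (False, False), (False, True),
            (True, True), (False, True), (True, True)]

# A disaster is a failure the participant mistook for a success.
disasters = sum(1 for ok, believed in attempts if not ok and believed)
print(f"disaster rate: {disasters}/{len(attempts)} = {disasters/len(attempts):.0%}")
```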

2010
 

Sauro, Jeff and Lewis, James R. (2010): Average task times in usability tests: what to report? In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 2347-2350.

The distribution of task time data in usability studies is positively skewed. Practitioners who are aware of this positive skew tend to report the sample median. Monte Carlo simulations using data from 61 large-sample usability tasks showed that the sample median is a biased estimate of the population median. Using the geometric mean to estimate the center of the population will, on average, have 13% less error and 22% less bias than the sample median. Other estimates of the population center (trimmed, harmonic and Winsorized means) had worse performance than the sample median.

© All rights reserved Sauro and Lewis and/or their publisher
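
The abstract's recommendation is easy to apply: the geometric mean is the exponential of the mean of the log times. A sketch with invented, right-skewed task times:

```python
from math import exp, log
from statistics import median

# Hypothetical task times in seconds; usability task times are typically
# positively skewed (a few very slow users stretch the right tail).
times = [34, 41, 45, 48, 52, 58, 65, 74, 90, 141]

geo_mean = exp(sum(log(t) for t in times) / len(times))
print(f"median: {median(times):.1f}s, geometric mean: {geo_mean:.1f}s")
```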

 

Sauro, Jeff (2010): A Practical Guide to Measuring Usability: 72 Answers to the Most Common Questions about Quantifying the Usability of Websites and Software. CreateSpace Independent Publishing Platform

A detailed and readable guide that answers the most common questions that arise when measuring the usability of websites, software or cellphones. What measures do you take? What do you have users do? What questionnaires do you use? What about sample size and statistical analysis?

© All rights reserved Sauro and/or CreateSpace Independent Publishing Platform

2009
 

Sauro, Jeff and Dumas, Joseph S. (2009): Comparison of three one-question, post-task usability questionnaires. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 1599-1608.

Post-task ratings of difficulty in a usability test have the potential to provide diagnostic information and be an additional measure of user satisfaction. But the ratings need to be reliable as well as easy to use for both respondents and researchers. Three one-question rating types were compared in a study with 26 participants who attempted the same five tasks with two software applications. The types were a Likert scale, a Usability Magnitude Estimation (UME) judgment, and a Subjective Mental Effort Question (SMEQ). All three types could distinguish between the applications with 26 participants, but the Likert and SMEQ types were more sensitive with small sample sizes. Both the Likert and SMEQ types were easy to learn and quick to execute. The online version of the SMEQ question was highly correlated with other measures and had equal sensitivity to the Likert question type.

© All rights reserved Sauro and Dumas and/or ACM Press

 

Sauro, Jeff and Lewis, James R. (2009): Correlations among prototypical usability metrics: evidence for the construct of usability. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 1609-1618.

Correlations between prototypical usability metrics from 90 distinct usability tests were strong when measured at the task-level (r between .44 and .60). Using test-level satisfaction ratings instead of task-level ratings attenuated the correlations (r between .16 and .24). The method of aggregating data from a usability test had a significant effect on the magnitude of the resulting correlations. The results of principal components and factor analyses on the prototypical usability metrics provided evidence for an underlying construct of general usability with objective and subjective factors.

© All rights reserved Sauro and Lewis and/or ACM Press
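
The aggregation point matters in practice: the strong correlations were found between task-level aggregates, not test-level ones. A sketch of that task-level computation with hypothetical per-task completion rates and mean satisfaction ratings (the paper's own data are not reproduced here):

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson product-moment correlation of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical task-level aggregates for ten tasks: completion rate and
# mean post-task satisfaction (1-5 scale). Not data from the paper.
completion   = [0.95, 0.80, 0.60, 0.90, 0.70, 0.85, 0.55, 0.75, 0.88, 0.65]
satisfaction = [4.6, 4.0, 3.1, 4.3, 3.6, 4.1, 2.9, 3.8, 4.4, 3.3]
print(f"task-level r = {pearson_r(completion, satisfaction):.2f}")
```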

2006
 

Sauro, Jeff (2006): Quantifying usability. In Interactions, 13 (6) pp. 20-21.

 

Sauro, Jeff (2006): The user is in the numbers. In Interactions, 13 (6) pp. 22-25.

 

Lewis, James R. and Sauro, Jeff (2006): When 100% Really Isn't 100%: Improving the Accuracy of Small-Sample Estimates of Completion Rates. In Journal of Usability Studies, 1 (3) pp. 136-150.

Small sample sizes are a fact of life for most usability practitioners. This can lead to serious measurement problems, especially when making binary measurements such as successful task completion rates (p). The computation of confidence intervals helps by establishing the likely boundaries of measurement, but there is still a question of how to compute the best point estimate, especially for extreme outcomes. In this paper, we report the results of investigations of the accuracy of different estimation methods for two hypothetical distributions and one empirical distribution of p. If a practitioner has no expectation about the value of p, then the Laplace method ((x+1)/(n+2)) is the best estimator. If practitioners are reasonably sure that p will range between .5 and 1.0, then they should use the Wilson method if the observed value of p is less than .5, Laplace when p is greater than .9, and maximum likelihood (x/n) otherwise.

© All rights reserved Lewis and Sauro and/or Usability Professionals Association
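
The abstract's decision rule translates directly into a small estimator. One caveat: the abstract names the Wilson method without giving its formula, so the sketch below uses the Wilson interval midpoint, (x + z²/2)/(n + z²), as an assumption about the intended variant; check the paper for the exact form:

```python
def completion_rate_estimate(x: int, n: int,
                             expect_high: bool = True, z: float = 1.96) -> float:
    """Point estimate of a completion rate per the decision rule in the abstract.

    x successes out of n trials. With no prior expectation about p, use
    Laplace ((x+1)/(n+2)). If p is expected to fall between .5 and 1.0,
    choose based on the observed proportion. The Wilson estimate here is
    the interval midpoint with z = 1.96 -- an assumption about the variant.
    """
    mle = x / n                              # maximum likelihood, x/n
    laplace = (x + 1) / (n + 2)              # Laplace's rule of succession
    wilson = (x + z * z / 2) / (n + z * z)   # Wilson interval midpoint
    if not expect_high:
        return laplace
    if mle < 0.5:
        return wilson
    if mle > 0.9:
        return laplace
    return mle

# Hypothetical small-sample result: 5 of 5 users completed the task.
print(f"{completion_rate_estimate(5, 5):.2f}")  # Laplace pulls 100% down to ~0.86
```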

2005
 

Sauro, Jeff and Kindlund, Erika (2005): A method to standardize usability metrics into a single score. In: Proceedings of ACM CHI 2005 Conference on Human Factors in Computing Systems 2005. pp. 401-409.

Current methods to represent system or task usability in a single metric do not include all the ANSI- and ISO-defined usability aspects: effectiveness, efficiency and satisfaction. We propose a method to simplify all the ANSI and ISO aspects of usability into a single, standardized and summated usability metric (SUM). In four data sets, totaling 1,860 task observations, we show that these aspects of usability are correlated and equally weighted, and present a quantitative model for usability. Using standardization techniques from Six Sigma, we propose a scalable process for standardizing disparate usability metrics and show how Principal Components Analysis can be used to establish appropriate weighting for a summated model. SUM provides one continuous variable for summative usability evaluations that can be used in regression analysis, hypothesis testing and usability reporting.

© All rights reserved Sauro and Kindlund and/or ACM Press
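
SUM's published procedure standardizes each metric against specification limits using Six Sigma conventions and establishes component weights via Principal Components Analysis. The sketch below shows only the simpler core idea, z-standardizing correlated metrics and averaging them with equal weights; it is an approximation with invented data, not the paper's method:

```python
from statistics import mean, stdev

def z_scores(values):
    """Standardize raw metric values to mean 0, standard deviation 1."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

# Hypothetical per-task metrics across six tasks. Time is negated after
# standardizing so that higher is better for every component.
completion   = [1.00, 0.80, 0.60, 0.90, 0.70, 0.85]
time_secs    = [40, 65, 95, 50, 80, 60]
satisfaction = [4.5, 3.9, 3.0, 4.2, 3.5, 4.0]

components = [z_scores(completion),
              [-z for z in z_scores(time_secs)],
              z_scores(satisfaction)]
# Equal-weight average of the standardized components, one score per task.
single_scores = [mean(task) for task in zip(*components)]
print([f"{s:+.2f}" for s in single_scores])
```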

2004
 

Sauro, Jeff (2004): Premium usability: getting the discount without paying the price. In Interactions, 11 (4) pp. 30-37.

The debate rages. "Formal usability testing costs too much," says one side. "We need methodological rigor," maintains the other. "You can find the important problems with just five users," insists the first. "Such a small number doesn't give us reliable results," counters the second. And never the twain shall meet. Or will they? In this Whiteboard, Jeff Sauro explores the issues and gives us some ideas for maintaining the statistical validity of our usability testing as we reduce its costs. -- Elizabeth Buie

© All rights reserved Sauro and/or Lawrence Erlbaum Associates
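
The "five users" side of this debate rests on the cumulative binomial problem-discovery model: a problem that each user hits with probability p is seen at least once among n users with probability 1 - (1 - p)^n. A sketch; the detection rate p = 0.31 is the often-cited average from Nielsen and Landauer's data, an assumption rather than a figure from this article:

```python
def discovery_probability(p: float, n: int) -> float:
    """Chance a problem with per-user detection probability p is observed
    at least once in a test with n users: 1 - (1 - p)^n."""
    return 1 - (1 - p) ** n

# With p = 0.31, five users surface roughly 84% of such problems.
for n in (3, 5, 8, 15):
    print(n, f"{discovery_probability(0.31, n):.0%}")
```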

 



Page Information

Page maintainer: The Editorial Team
URL: http://www.interaction-design.org/references/authors/jeff_sauro.html
