Number of co-authors: 39

Publications with 3 most frequent co-authors:
- Cathleen Wharton: 3
- Vicky L. O'Day: 2
- Rolf Molich: 2

Robin Jeffries's 3 most productive colleagues (by number of publications):
- Jakob Nielsen: 109
- Tom Rodden: 106
- Terry Winograd: 59
Publications by Robin Jeffries (bibliography)
Akers, David, Simpson, Matthew, Jeffries, Robin and Winograd, Terry (2009): Undo and erase events as indicators of usability problems. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 659-668.
One approach to reducing the costs of usability testing is to facilitate the automatic detection of critical incidents: serious breakdowns in interaction that stand out during software use. This research evaluates the use of undo and erase events as indicators of critical incidents in Google SketchUp (a 3D-modeling application), measuring an indicator's usefulness by the numbers and types of usability problems discovered. We compared problems identified using undo and erase events to problems identified using the user-reported critical incident technique [Hartson and Castillo 1998]. In a within-subjects experiment with 35 participants, undo and erase episodes together revealed over 90% of the problems rated as severe, several of which would not have been discovered by self-report alone. Moreover, problems found by all three methods were rated as significantly more severe than those identified by only a subset of methods. These results suggest that undo and erase events will serve as useful complements to user-reported critical incidents for low cost usability evaluation of creation-oriented applications like SketchUp.
© All rights reserved Akers et al. and/or ACM Press
Russell, Daniel M., Tang, Diane, Kellar, Melanie and Jeffries, Robin (2009): Task Behaviors During Web Search: The Difficulty of Assigning Labels. In: HICSS 2009 - 42nd Hawaii International Conference on System Sciences 5-8 January, 2009, Waikoloa, Big Island, HI, USA. pp. 1-5.
Au, Irene, Boardman, Richard, Jeffries, Robin, Larvie, Patrick, Pavese, Antonella, Riegelsberger, Jens, Rodden, Kerry and Stevens, Molly (2008): User experience at Google: focus on the user and all else will follow. In: Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems April 5-10, 2008. pp. 3681-3686.
This paper presents an overview of the User Experience (UX) team at Google. We focus on four aspects of working within Google's product development organization: (1) a bottom-up 'ideas' culture, (2) a data-driven engineering approach, (3) a fast, highly iterative web development cycle, and (4) a global product perspective of designing for multiple countries. Each aspect leads to challenges and opportunities for the UX team. We discuss these, and outline some of the methodological approaches we employ to deal with them, along with some examples of our work.
© All rights reserved Au et al. and/or ACM Press
Molich, Rolf, Jeffries, Robin and Dumas, Joseph (2007): Making Usability Recommendations Useful and Usable. In Journal of Usability Studies, 2 (4) pp. 162-179.
This paper evaluates the quality of recommendations for improving a user interface resulting from a usability evaluation. The study compares usability comments written by different authors, but describing similar usability issues. The usability comments were provided by 17 professional teams who independently evaluated the usability of the website for the Hotel Pennsylvania in New York.
© All rights reserved Molich et al. and/or Usability Professionals Association
Grinter, Rebecca E., Rodden, Tom, Aoki, Paul, Cutrell, Ed, Jeffries, Robin and Olson, Gary (eds.) Proceedings of the 2006 SIGCHI conference on Human Factors in computing systems April 22-27, 2006, Montréal, Canada.
Olson, Gary M. and Jeffries, Robin (eds.) Extended Abstracts Proceedings of the 2006 Conference on Human Factors in Computing Systems April 22-27, 2006, Montréal, Québec, Canada.
Newman, William M., Jeffries, Robin and Schraefel, M. C. (2005): Do CHI papers work for you?: addressing concerns of authors, audiences and reviewers. In: Proceedings of ACM CHI 2005 Conference on Human Factors in Computing Systems 2005. pp. 2045-2046.
CHI papers serve unique and vital purposes within the HCI community. Their ability to serve these purposes is of particular concern to authors, audiences (both attendees at conference sessions and readers of proceedings) and reviewers. However, these stakeholders rarely have an opportunity to state their concerns and influence how they are addressed. This SIG will offer such an opportunity. It has been organized by members of the CHI Papers Support Team, who will lead discussions of major issues. The outcome will be a set of recommended further actions by the Support Team and future papers co-chairs.
© All rights reserved Newman et al. and/or ACM Press
Dumas, Joseph S., Molich, Rolf and Jeffries, Robin (2004): Describing usability problems: are we sending the right message?. In Interactions, 11 (4) pp. 24-29.
Jeffries, Robin (2000): Commentary. In Interactions, 7 (2) pp. 28-34.
Tauber, Michael J., Bellotti, Victoria, Jeffries, Robin, Mackinlay, Jock D. and Nielsen, Jakob (eds.) Proceedings of the ACM CHI 96 Human Factors in Computing Systems Conference April 14-18, 1996, Vancouver, Canada.
Berlin, Lucy M., Jeffries, Robin, O'Day, Vicky L., Paepcke, Andreas and Wharton, Cathleen (1993): WHERE Did You Put It? Issues in the Design and Use of a Group Memory. In: Ashlund, Stacey, Mullet, Kevin, Henderson, Austin, Hollnagel, Erik and White, Ted (eds.) Proceedings of the ACM CHI 93 Human Factors in Computing Systems Conference April 24-29, 1993, Amsterdam, The Netherlands. pp. 23-30.
Collaborating teams of knowledge workers need a common repository in which to share information gathered by individuals or developed by the team. This is difficult to achieve in practice, because individual information access strategies break down with group information -- people can generally find things that are on their own messy desks and file systems, but not on other people's. The design challenge in a group memory is thus to enable low-effort information sharing without reducing individuals' finding effectiveness. This paper presents the lessons from our design and initial use of a hypertext-based group memory, TeamInfo. We expose the serious cognitive obstacles to a shared information structure, discuss the uses and benefits we have experienced, address the effects of technology limitations, and highlight some unexpected social and work impacts of our group memory.
© All rights reserved Berlin et al. and/or ACM Press
O'Day, Vicky L. and Jeffries, Robin (1993): Information Artisans: Patterns of Result Sharing by Information Searchers. In: Kaplan, Simon M. (ed.) Proceedings of the ACM Conference on Organizational Computing Systems 1993 November 1-4, 1993, Milpitas, California, USA. pp. 98-107.
We studied the uses of information search results by regular clients of professional intermediaries. We found that all of the participants in our study acted as intermediaries themselves, sharing information they had received from library searches with others in their work settings. There were four basic models of sharing: updating team members, consulting, broadcasting, and putting information into a shared archive. In many sharing scenarios, the library clients acted as information artisans, creating new artifacts by transforming and enhancing their search results before passing them on. When possible, the library clients delivered their new information artifacts in collaborative settings, to ensure that recipients understood and could apply the results and to allow opportunities for follow-up search requests. These observations suggest that new functionality is needed for information search systems, to support the analysis, manipulation, and packaging of search results, and collaborative information delivery with intertwined communication and information components.
© All rights reserved O'Day and Jeffries and/or ACM Press
Wharton, Cathleen, Bradford, Janice, Jeffries, Robin and Franzke, Marita (1992): Applying Cognitive Walkthroughs to More Complex User Interfaces: Experiences, Issues, and Recommendations. In: Bauersfeld, Penny, Bennett, John and Lynch, Gene (eds.) Proceedings of the ACM CHI 92 Human Factors in Computing Systems Conference June 3-7, 1992, Monterey, California. pp. 381-388.
The Cognitive Walkthrough methodology was developed in an effort to bring cognitive theory closer to practice; to enhance the design and evaluation of user interfaces in industrial settings. For the first time, small teams of professional developers have used this method to critique three complex software systems. In this paper we report evidence about how the methodology worked for these evaluations. We focus on five core issues: (1) task selection, coverage, and evaluation, (2) the process of doing a Cognitive Walkthrough, (3) requisite knowledge for the evaluators, (4) group walkthroughs, and (5) the interpretation of results. Our findings show that many variables can affect the success of the technique; we believe that if the Cognitive Walkthrough is ultimately to be successful in industrial settings, the method must be refined and augmented in a variety of ways.
© All rights reserved Wharton et al. and/or ACM Press
Berlin, Lucy M. and Jeffries, Robin (1992): Consultants and Apprentices: Observations about Learning and Collaborative Problem Solving. In: Proceedings of the 1992 ACM conference on Computer-supported cooperative work November 01 - 04, 1992, Toronto, Ontario, Canada. pp. 130-137.
Informal consulting interactions between apprentices and experts represent a little-studied but common collaborative work practice in many domains. In the computer industry, programmers become apprentices as they retool themselves to new computer languages, programming environments, software frameworks and systems. Our empirical study of consulting interactions has provided insights into the nature of this informal collaborative work practice. We describe the variety of "hard-to-find" information provided by the expert, the incidental learning observed, and the pair's strategies for managing joint and individual productivity. Given these observations, we discuss how computer-based tools could help apprentices encapsulate task context, switch among subtasks, facilitate collaborative interaction, and supplement consultants.
© All rights reserved Berlin and Jeffries and/or ACM Press
Jeffries, Robin and Desurvire, Heather (1992): Usability Testing vs. Heuristic Evaluation: Was There a Contest?. In ACM SIGCHI Bulletin, 24 (4) pp. 39-41.
Recent research comparing usability assessment methods has been interpreted by some to imply that usability testing is no longer necessary, because other techniques, such as heuristic evaluation, can find some usability problems more cost-effectively. Such an interpretation grossly overstates the actual results of the studies. In this article, we, as authors of studies that compared inspection methods to usability testing, point out the rather severe limitations to using inspection methods as a substitute for usability testing and argue for a more balanced repertoire of usability assessment techniques.
© All rights reserved Jeffries and Desurvire and/or ACM Press
Jeffries, Robin, Miller, James R., Wharton, Cathleen and Uyeda, Kathy M. (1991): User Interface Evaluation in the Real World: A Comparison of Four Techniques. In: Robertson, Scott P., Olson, Gary M. and Olson, Judith S. (eds.) Proceedings of the ACM CHI 91 Human Factors in Computing Systems Conference April 27 - May 2, 1991, New Orleans, Louisiana. pp. 119-124.
A user interface (UI) for a software product was evaluated prior to its release by four groups, each applying a different technique: heuristic evaluation, software guidelines, cognitive walkthroughs, and usability testing. Heuristic evaluation by several UI specialists found the most serious problems with the least amount of effort, although they also reported a large number of low-priority problems. The relative advantages of all the techniques are discussed, and suggestions for improvements in the techniques are offered.
© All rights reserved Jeffries et al. and/or ACM Press
Jeffries, Robin (1990): CHI'89 Interactive Poster Session Papers and Abstracts. In ACM SIGCHI Bulletin, 21 (3) p. 16.
Jeffries, Robin (1989): CHI'89 Interactive Poster Session Papers and Abstracts. In ACM SIGCHI Bulletin, 21 (1) p. 80.
Jeffries, Robin and Rosenberg, Jarrett (1987): Comparing a form-based and language-based user interface for instructing a mail program. In: Graphics Interface 87 (CHI+GI 87) April 5-9, 1987, Toronto, Ontario, Canada. pp. 261-266.
Anderson, John R. and Jeffries, Robin (1985): Novice LISP Errors: Undetected Losses of Information from Working Memory. In Human-Computer Interaction, 1 (2) pp. 107-131.
Four experiments study the errors students make using LISP functions. The first two experiments show that frequency of errors is increased by increasing the complexity of irrelevant aspects of the problem. The experiments also show that the distribution of errors is largely random and that subjects' errors seem to result from slips rather than from misconceptions. Experiment 3 shows that subjects' errors tend to involve loss of parentheses in answers when the resulting errors are well-formed LISP expressions. Experiment 4 asks subjects, who knew no LISP, to judge the reasonableness of the answers to various LISP function calls. Subjects could detect many errors on the basis of general criteria of what a reasonable answer should look like. On the basis of these four experiments, we conclude that errors occur when there is a loss of information in the working memory representation of the problem and when the resulting answer still looks reasonable.
© All rights reserved Anderson and Jeffries and/or Taylor and Francis