Publication statistics

Pub. period: -2012
Pub. count: 64
Number of co-authors: 89



Co-authors

Number of publications with 3 favourite co-authors:

Wayne D. Gray: 6
Suresh K. Bhavnani: 5
Michael E. Atwood: 4

 

 

Productive colleagues

Bonnie E. John's 3 most productive colleagues in number of publications:

John M. Carroll: 209
Scott E. Hudson: 113
Bonnie A. Nardi: 67
 
 
 




Bonnie E. John

Has also published under the name of:
"Bonnie John"

Personal Homepage:
http://www.cs.cmu.edu/~bej/

I am Professor at the Human-Computer Interaction Institute at Carnegie Mellon University. I am interested in techniques to improve the design of computer systems with respect to their usefulness and usability. To that end, I have investigated the effectiveness and usability of several HCI techniques (e.g., think-aloud usability studies, Cognitive Walkthrough, GOMS) and produced new techniques for bringing usability concerns to the design process (e.g., CPM-GOMS and software architecture evaluation for usability). Much of my work focuses on cognitive modeling, where I work within a unified theory of cognition to develop models of human performance that produce quantitative predictions of performance with less effort than prototyping and user testing. I also work on bridging the gap between HCI and software engineering, specifically including usability concerns in software architecture design.


Publications by Bonnie E. John (bibliography)

2012
 

Trewin, Shari, Richards, John T., Hanson, Vicki L., Sloan, David, John, Bonnie E., Swart, Cal and Thomas, John C. (2012): Understanding the role of age and fluid intelligence in information search. In: Fourteenth Annual ACM SIGACCESS Conference on Assistive Technologies 2012. pp. 119-126

In this study, we explore the role of age and fluid intelligence on the behavior of people looking for information in a real-world search space. Analyses of mouse moves, clicks, and eye movements provide a window into possible differences in both task strategy and performance, and allow us to begin to separate the influence of age from the correlated but isolable influence of cognitive ability. We found little evidence of differences in strategy between younger and older participants matched on fluid intelligence. Both performance and strategy differences were found between older participants having higher versus lower fluid intelligence, however, suggesting that cognitive factors, rather than age per se, exert the dominant influence. This underscores the importance of measuring and controlling for cognitive abilities in studies involving older adults.

© All rights reserved Trewin et al. and/or ACM Press

 

John, Bonnie E., Patton, Evan W., Gray, Wayne D. and Morrison, Donald F. (2012): Tools for Predicting the Duration and Variability of Skilled Performance without Skilled Performers. In: Proceedings of the Human Factors and Ergonomics Society 2012 Annual Meeting 2012. pp. 985-989

Many devices are designed to allow skilled users to complete routine tasks quickly, often within a specified amount of time. Predictive human performance modeling has long been able to predict the mean time to accomplish a task, making it possible to compare device designs before building them. However, estimates of the variability of performance are also important, especially in real-time, safety-critical tasks. Until recently, the human factors community lacked tools to predict the variability of skilled performance. In this paper, we describe a combination of theory-based tools (CogTool and SANLab) that address this critical gap and that can easily be used by human factors practitioners or system designers. We describe these tools, their integration, and provide a concrete example of their use in the context of entering the landing speed into the Boeing 777 Flight Management Computer (FMC) using the Control and Display Unit (CDU).

© All rights reserved John et al. and/or Human Factors and Ergonomics Society
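The variability prediction described in this abstract rests on stochastic simulation of operator durations. A minimal Monte Carlo sketch of that idea, using illustrative means and standard deviations rather than CogTool's or SANLab's actual parameters:

```python
import random

# Hypothetical operator durations as (mean, sd) in seconds; real
# CogTool/SANLab models derive these from the activity network.
random.seed(0)
operators = [(0.3, 0.1), (1.1, 0.3), (1.35, 0.4)]

def simulate(n=10000):
    """Sample total task time n times; return (mean, sd) of the total."""
    totals = []
    for _ in range(n):
        totals.append(sum(max(0.0, random.gauss(m, sd)) for m, sd in operators))
    mean = sum(totals) / n
    var = sum((t - mean) ** 2 for t in totals) / n
    return mean, var ** 0.5

mean, sd = simulate()
print(f"predicted time {mean:.2f}s, variability {sd:.2f}s")
```

For independent operators the totals should center near the sum of the means (2.75 s here), with the simulation giving the spread that a single deterministic prediction cannot.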

 

Patton, Evan W., Gray, Wayne D. and John, Bonnie E. (2012): Automated CPM-GOMS Modeling from Human Data. In: Proceedings of the Human Factors and Ergonomics Society 2012 Annual Meeting 2012. pp. 1005-1009

We present the Log Analyzer for generating CPM-GOMS models from human performance data. Built on top of the SANLab tool for stochastic CPM-GOMS modeling (Patton & Gray, 2010), the Log Analyzer uses event-driven parsing to map experimental log files into SANLab interactive routines used to generate CPM-GOMS activity networks. Identical models within and across participants are averaged to obtain estimates of performance times and variability, which are then used to drive stochastic simulations. In this report, we apply our tool to human data collected during a simple eyetracking calibration task and compare the resulting models to existing models in the literature. The generated models show good predictive performance and raise questions about different strategies not captured in the literature.

© All rights reserved Patton et al. and/or Human Factors and Ergonomics Society

2011
 

John, Bonnie E. (2011): Using predictive human performance models to inspire and support UI design recommendations. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 983-986

Predictive human performance modeling has traditionally been used to make quantitative comparisons between alternative designs (e.g., task execution time for skilled users) instead of identifying UI problems or making design recommendations. This note investigates how reliably novice modelers can extract design recommendations from their models. Many HCI evaluation methods have been plagued by the "evaluator effect" [3], i.e., different people using the same method find different UI problems. Our data and analyses show that predictive human performance modeling is no exception. Novice modelers using CogTool [5] display a 34% Any-Two Agreement in their design recommendations, a result in the upper quartile of evaluator effect studies. However, because these recommendations are grounded in models, they may have more reliable impact on measurable performance than recommendations arising from less formal methods.

© All rights reserved John and/or his/her publisher
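The Any-Two Agreement figure cited in this abstract is the average overlap between each pair of evaluators' reported sets (here, design recommendations). A small sketch of that computation, with hypothetical recommendation sets:

```python
from itertools import combinations

def any_two_agreement(problem_sets):
    """Average pairwise overlap |A∩B| / |A∪B| across all evaluator pairs,
    as in Hertzum & Jacobsen's any-two agreement measure."""
    pairs = list(combinations(problem_sets, 2))
    return sum(len(a & b) / len(a | b) for a, b in pairs) / len(pairs)

# Three hypothetical modelers' recommendation sets:
sets = [{"r1", "r2", "r3"}, {"r2", "r3"}, {"r1", "r3", "r4"}]
print(round(any_two_agreement(sets), 2))  # 0.47
```

A value of 1.0 would mean every pair of evaluators reported identical sets; the 34% reported in the note indicates roughly a one-third average overlap between any two modelers.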

 

Suzuki, Shunsuke, Bellotti, Victoria, Yee, Nick, John, Bonnie E., Nakao, Yusuke, Asahi, Toshiyuki and Fukuzumi, Shin'ichi (2011): Variation in importance of time-on-task with familiarity with mobile phone models. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 2551-2554

We studied the extent to which time-on-task is correlated with perception of usability for people who are familiar with a phone model and for those who are not. Our controlled experiment, conducted in Japan, correlated subjective usability assessments with time-on-task for expert and novice users on three different mobile phone models. We found that the correlation between perceived usability and time-on-task is stronger when participants are more familiar with the phone model. While not significant when initially inspecting a new phone model, a negative correlation between time-on-task and perceived usability becomes significant with as little as an hour's time doing tasks on the unfamiliar phone. This suggests that designing the UI to make time-on-task as short as possible may not have much effect on the purchase decision, but as experience increases, it may increase the loyalty of existing users.

© All rights reserved Suzuki et al. and/or their publisher

 

Bellamy, Rachel, John, Bonnie E. and Kogan, Sandra (2011): Deploying CogTool: integrating quantitative usability assessment into real-world software development. In: Proceeding of the 33rd international conference on Software engineering 2011. pp. 691-700

Usability concerns are often difficult to integrate into real-world software development processes. To remedy this situation, IBM research and development, partnering with Carnegie Mellon University, has begun to employ a repeatable and quantifiable usability analysis method, embodied in CogTool, in its development practice. CogTool analyzes tasks performed on an interactive system from a storyboard and a demonstration of tasks on that storyboard, and predicts the time a skilled user will take to perform those tasks. We discuss how IBM designers and UX professionals used CogTool in their existing practice for contract compliance, communication within a product team and between a product team and its customer, assigning appropriate personnel to fix customer complaints, and quantitatively assessing design ideas before a line of code is written. We then reflect on the lessons learned by both the development organizations and the researchers attempting this technology transfer from academic research to integration into real-world practice, and we point to future research to even better serve the needs of practice.

© All rights reserved Bellamy et al. and/or ACM Press

 Cited in the following chapter:

Usability Evaluation: [/encyclopedia/usability_evaluation.html]


 
2010
 

Harris, Brett N., John, Bonnie E. and Brezin, Jonathan (2010): Human performance modeling for all: importing UI prototypes into cogtool. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 3481-3486

UI designers use a variety of prototyping tools, from paper and pencil sketching, to drag-and-drop mock-up tools (e.g., Balsamiq Mockups), to sophisticated suites of modeling tools and toolkits (e.g., iRise or dijit, the dojo GUI toolkit ). Many projects would benefit from quickly analyzing prototypes at an early stage without the effort of bringing in users for empirical tests. Most analysis tools, however (e.g., AutoCWW [1], Bloodhound [2], and CogTool [4]), require prototypes to be in their own format, which forces the designer to re-do the prototypes in order to analyze them. Our work is a step toward allowing the CogTool analysis tools to import from many different prototyping tools, so designers will have a path to quick usability analysis without changing the way they currently express their preliminary designs.

© All rights reserved Harris et al. and/or their publisher

 

Trewin, Shari, Richards, John, Bellamy, Rachel, John, Bonnie E., Thomas, John, Swart, Cal and Brezin, Jonathan (2010): Toward modeling auditory information seeking strategies on the web. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 3973-3978

Human performance models based on information foraging theory have proved capable of predicting navigation behavior on the Web. They can therefore provide a useful tool for Web site design. They may also be effective for modeling auditory navigation within a single Web page. Designers often struggle to accommodate this sort of access, different as it is from their own experience. As a step toward realistic simulations based on models of auditory Web access, we describe information seeking strategies observed in people with visual impairment using screen reading software for Web navigation tasks. We outline one example strategy for approaching a new Web page that, guided by information foraging theory, may expose access barriers that current design tools miss.

© All rights reserved Trewin et al. and/or their publisher

 

Trewin, Shari, Bellamy, Rachel, Thomas, John, Brezin, Jonathan, Richards, John, Swart, Cal and John, Bonnie E. (2010): Designing for auditory web access: accessibility and cellphone users. In: Proceedings of the 2010 International Cross-Disciplinary Conference on Web Accessibility W4A 2010. p. 4

Low-end mobile phones are becoming generally available throughout the world and the main window onto the Web. A large constituency for Web access is being created whose experience Web designers are not familiar with, particularly because of the central role of auditory, rather than visual access. This is analogous to problems faced by visually impaired users, where again Web designers have an audience whose experience they rarely share. One might think that proven design practices of working directly with users or intermediaries would be effective, but we argue that in these situations they are difficult to apply, not least because of the distance geographically and culturally from the users. We therefore present alternative ways for developers to gain insight into the experience of auditory access. We also discuss the potential of tools and resources such as simulations of an auditory interaction with the Web and illustrate how these can help -- particularly with early stage design. Successful simulations can reduce the cost and effort needed to design a usable auditory Web experience, and as a bonus, the insights gained often improve a site for many other users as well.

© All rights reserved Trewin et al. and/or their publisher

 

Trewin, Shari, John, Bonnie E., Richards, John, Swart, Cal, Brezin, Jonathan, Bellamy, Rachel and Thomas, John (2010): Towards a tool for keystroke level modeling of skilled screen reading. In: Twelfth Annual ACM SIGACCESS Conference on Assistive Technologies 2010. pp. 27-34

Designers often have no access to individuals who use screen reading software, and may have little understanding of how their design choices impact these users. We explore here whether cognitive models of auditory interaction could provide insight into screen reader usability. By comparing human data with a tool-generated model of a practiced task performed using a screen reader, we identify several requirements for such models and tools. Most important is the need to represent parallel execution of hearing with thinking and acting. Rules for placement of cognitive operators that were developed for visual user interfaces may not be applicable in the auditory domain. Other mismatches between the data and the model were attributed to the extremely fast listening rate and differences between the typing patterns of screen reader usage and the model's assumptions. This work informs the development of more accurate models of auditory interaction. Tools incorporating such models could help designers create user interfaces that are well tuned for screen reader users, without the need for modeling expertise.

© All rights reserved Trewin et al. and/or their publisher

2009
 

John, Bonnie E., Blackmon, Marilyn Hughes, Polson, Peter G., Fennell, Karl and Teo, Leonghwee (2009): Rapid Theory Prototyping: An Example of an Aviation Task. In: Proceedings of the Human Factors and Ergonomics Society 53rd Annual Meeting 2009. pp. 794-798

We present our experience using CogTool, a tool originally designed for ease of use and learning by non-psychologist design practitioners, as a means for rapid theory exploration. We created seven iterations of a "model prototype" of an aviation task where each iteration produced errors that pointed to additional theory or device knowledge that should be incorporated to prevent those errors. This theory and knowledge was put into the next iteration by modifying the mock-up of the device, not by changing the implementation of the underlying cognitive model. This trick allowed us to rapidly change theory and knowledge and understand what must eventually migrate to the underlying cognitive model to provide general support for predictions of novice behavior in multi-step procedures with complex devices.

© All rights reserved John et al. and/or their publisher

2008
 

Teo, Leonghwee and John, Bonnie E. (2008): CogTool-explorer: towards a tool for predicting user interaction. In: Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems April 5-10, 2008. pp. 2793-2798.

In recent years, research predicting search through webpages has begun to be successful. However, existing tools ignore the order in which on-screen options are evaluated and therefore might make inaccurate predictions. We developed CogTool-Explorer and used it to model a previously published web-based experiment. Its predictions were better than those of a previously published tool, and included the order of evaluation effect not accounted for by previous tools. These more accurate predictions can be attributed to the approach used in CogTool-Explorer.

© All rights reserved Teo and John and/or ACM Press

2007
 

Chaudhri, Vinay K., John, Bonnie E., Mishra, Sunil, Pacheco, John, Porter, Bruce W. and Spaulding, Aaron (2007): Enabling experts to build knowledge bases from science textbooks. In: Sleeman, Derek H. and Barker, Ken (eds.) K-CAP 2007 - Proceedings of the 4th International Conference on Knowledge Capture October 28-31, 2007, Whistler, BC, Canada. pp. 159-166.

 

Clark, Peter, Chaw, Shaw Yi, Barker, Ken, Chaudhri, Vinay K., Harrison, Philip, Fan, James, John, Bonnie E., Porter, Bruce W., Spaulding, Aaron, Thompson, John and Yeh, Peter Z. (2007): Capturing and answering questions posed to a knowledge-based system. In: Sleeman, Derek H. and Barker, Ken (eds.) K-CAP 2007 - Proceedings of the 4th International Conference on Knowledge Capture October 28-31, 2007, Whistler, BC, Canada. pp. 63-70.

2005
 

John, Bonnie E. (2005): The Human Computer Interaction Institute at Carnegie Mellon University. In Interactions, 12 (5) pp. 28-29.

 

Luo, Lu and John, Bonnie E. (2005): Predicting task execution time on handheld devices using the keystroke-level model. In: Proceedings of ACM CHI 2005 Conference on Human Factors in Computing Systems 2005. pp. 1605-1608.

The Keystroke-Level Model (KLM) has been shown to predict skilled use of desktop systems, but has not been validated on a handheld device that uses a stylus instead of a keyboard. This paper investigates the accuracy of KLM predictions for user interface tasks running on a Palm OS based handheld device. The models were produced using a recently developed tool for KLM construction, CogTool, and were compared to data obtained from a user study of 10 participants. Our results have shown that the KLM can accurately predict task execution time on handheld user interfaces with less than 8% prediction error.

© All rights reserved Luo and John and/or ACM Press
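The Keystroke-Level Model referenced here predicts skilled task time by summing fixed operator durations. A minimal sketch using the classic textbook operator values in seconds; the parameters CogTool applies to stylus-based handhelds may differ:

```python
# Classic KLM operator durations (seconds); illustrative values only.
KLM_OPERATORS = {
    "K": 0.28,  # keystroke or tap
    "P": 1.10,  # point to a target with stylus or mouse
    "H": 0.40,  # home hands between input devices
    "M": 1.35,  # mental preparation
}

def klm_predict(sequence):
    """Predict task time by summing operator times, e.g. for 'MPKPK'."""
    return sum(KLM_OPERATORS[op] for op in sequence)

# Mentally prepare, point to a field, tap it, point to OK, tap:
print(round(klm_predict("MPKPK"), 2))  # 4.11
```

Tools like CogTool automate the hard part — placing the M operators correctly and choosing device-appropriate durations — which is what makes the sub-8% prediction error reported above achievable without modeling expertise.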

 

Vera, Alonso H., John, Bonnie E., Remington, Roger, Matessa, Michael and Freed, Michael A. (2005): Automating Human-Performance Modeling at the Millisecond Level. In Human-Computer Interaction, 20 (3) pp. 225-265.

A priori prediction of skilled human performance has the potential to be of great practical value but is difficult to carry out. This article reports on an approach that facilitates modeling of human behavior at the level of cognitive, perceptual, and motor operations, following the CPM-GOMS method (John, 1990). CPM-GOMS is a powerful modeling method that has remained underused because of the expertise and labor required. We describe a process for automatically generating CPM-GOMS models from a hierarchical task decomposition expressed in a computational modeling tool, taking advantage of reusable behavior templates and their efficacy for generating zero-parameter a priori predictions of complex human behavior. To demonstrate the process, we present a model of automated teller machine interaction. The model shows that it is possible to string together existing behavioral templates that compose basic HCI tasks, (e.g., mousing to a button and clicking on it) to generate powerful human performance predictions. Because interleaving of templates is now automated, it becomes possible to construct arbitrarily long sequences of behavior. In addition, the manipulation and adaptation of complete models has the potential of becoming dramatically easier. Thus, the tool described here provides an engine for CPM-GOMS that may facilitate computational modeling of human performance at the millisecond level.

© All rights reserved Vera et al. and/or Taylor and Francis

 

Golden, Elspeth, John, Bonnie E. and Bass, Len (2005): Quality vs. quantity: comparing evaluation methods in a usability-focused software architecture modification task. In: ISESE 2005 - International Symposium on Empirical Software Engineering 17-18 November, 2005, Noosa Heads, Australia. pp. 141-150.

 

John, Bonnie E., Bass, Len, Segura, Maria Isabel Sánchez and Adams, Rob J. (2005): Bringing Usability Concerns to the Design of Software Architecture. In: Bastide, Rémi, Palanque, Philippe A. and Roth, Jörg (eds.) Engineering Human Computer Interaction and Interactive Systems, Joint Working Conferences EHCI-DSVIS 2004 July 11-13, 2005, Hamburg, Germany. pp. 1-19.

 

John, Bonnie E. and Salvucci, Dario D. (2005): Multipurpose prototypes for assessing user interfaces in pervasive computing systems. In IEEE Pervasive Computing, 4 (4) pp. 27-34

2004
 

John, Bonnie E., Prevas, Konstantine, Salvucci, Dario D. and Koedinger, Kenneth R. (2004): Predictive human performance modeling made easy. In: Dykstra-Erickson, Elizabeth and Tscheligi, Manfred (eds.) Proceedings of ACM CHI 2004 Conference on Human Factors in Computing Systems April 24-29, 2004, Vienna, Austria. pp. 455-462.

Although engineering models of user behavior have enjoyed a rich history in HCI, they have yet to have a widespread impact due to the complexities of the modeling process. In this paper we describe a development system in which designers generate predictive cognitive models of user behavior simply by demonstrating tasks on HTML mock-ups of new interfaces. Keystroke-Level Models are produced automatically using new rules for placing mental operators, then implemented in the ACT-R cognitive architecture. They interact with the mock-up through integrated perceptual and motor modules, generating behavior that is automatically quantified and easily examined. Using a query-entry user interface as an example [19], we demonstrate that this new system enables more rapid development of predictive models, with more accurate results, than previously published models of these tasks.

© All rights reserved John et al. and/or ACM Press

 

John, Bonnie E. (2004): Beyond the UI: product, process and passion. In: Proceedings of the Third Nordic Conference on Human-Computer Interaction October 23-27, 2004, Tampere, Finland. pp. 285-286.

Interactive products have definitely improved from the users' perspective in the 20 years since the HCI field emerged. Prior to the 1980s, only a small portion of the population of the western world used computers, primarily scientists, engineers, and financial analysts. Today, almost everyone in developed nations use dozens of computers each day: withdrawing money from the bank, checking out at the grocery store, speed-dialing the cell phone, surfing the web, email to grandmother, and IMing friends to set up an impromptu social event. Interactivity with computers has progressed from command-line operating systems accessible only to gurus to almost invisible operating systems which require almost no attention, from esoteric applications whose functionality was desired by only a few, to depending on computer functionality for almost every aspect of life. The fact that people can and do live in such tight integration with interactive systems is a testament to HCI's contribution to the world. We can declare victory!

© All rights reserved John and/or ACM Press

2002
 

John, Bonnie E., Vera, Alonso H., Matessa, Michael, Freed, Michael and Remington, Roger (2002): Automating CPM-GOMS. In: Terveen, Loren (ed.) Proceedings of the ACM CHI 2002 Conference on Human Factors in Computing Systems Conference April 20-25, 2002, Minneapolis, Minnesota. pp. 147-154.

 

Ebling, Maria R., John, Bonnie E. and Satyanarayanan, M. (2002): The importance of translucence in mobile computing systems. In ACM Transactions on Computer-Human Interaction, 9 (1) pp. 42-67.

Mobile computing has been an active area of research for the past decade, but its importance will increase substantially in the decade to come. One problem faced by designers of mobile systems is that of maintaining the illusion of connectivity even when network performance is poor or non-existent. The Coda file system uses its cache to maintain this illusion. Extensive experience with the system suggests that, although users find the functionality provided by the system extremely valuable, new users face an arduous learning curve and even experienced users are sometimes confused by the system's behavior. The fundamental problem is that the lack of a strong network connection causes the system to violate a key property of caching: transparency. To overcome this problem, we have built an interface, called the CodaConsole, that makes caching translucent to users through controlled exposure of cache management internals. The interface exposes critical aspects of caching to support the mobile user while hiding noncritical details to preserve usability. This article presents the design, implementation, and usability evaluation of this interface. The CodaConsole successfully makes caching translucent in the presence of disconnected or weakly connected operation. The most surprising result was that novice Coda users performed almost as well as experienced Coda users.

© All rights reserved Ebling et al. and/or ACM Press

2001
 

Bhavnani, Suresh K., Reif, Frederick and John, Bonnie E. (2001): Beyond Command Knowledge: Identifying and Teaching Strategic Knowledge for Using Complex Computer Applications. In: Beaudouin-Lafon, Michel and Jacob, Robert J. K. (eds.) Proceedings of the ACM CHI 2001 Human Factors in Computing Systems Conference March 31 - April 5, 2001, Seattle, Washington, USA. pp. 229-236.

Despite experience, many users do not make efficient use of complex computer applications. We argue that this is caused by a lack of strategic knowledge that is difficult to acquire just by knowing how to use commands. To address this problem, we present efficient and general strategies for using computer applications, and identify the components of strategic knowledge required to use them. We propose a framework for teaching strategic knowledge, and show how we implemented it in a course for freshman students. In a controlled study, we compared our approach to the traditional approach of just teaching commands. The results show that efficient and general strategies can in fact be taught to students of diverse backgrounds in a limited time without harming command knowledge. The experiment also pinpointed those strategies that can be automatically learned just from learning commands, and those that require more practice than we provided. These results are important to universities and companies that wish to foster more efficient use of complex computer applications.

© All rights reserved Bhavnani et al. and/or ACM Press

 

John, Bonnie E. and Bass, Len (2001): Usability and software architecture. In Behaviour and Information Technology, 20 (5) pp. 329-338.

The role of software architecture with respect to usability has evolved over the past 20 years. The architectures of the 1980s and early 1990s assumed that usability was primarily a property of the presentation of information. Therefore, simply separating the presentation from the dialogue and application made it easy to modify that presentation after user testing. A more popular belief in the 1990s was that usability concerns greatly affected system functionality as well as the presentation. This emphasis took attention away from architectural support (beyond separation). Achieving the correct functionality for a given system became paramount. It is our observation that even if presentation and functionality of a system are well designed, the usability of a system can be greatly compromised if the underlying architecture does not support human concerns beyond modifiability. This paper will present a new role for software architecture in usability, preliminary research and practice stemming from this role and a research agenda for the future.

© All rights reserved John and Bass and/or Taylor and Francis

 

Bass, Leonard J. and John, Bonnie E. (2001): Supporting Usability Through Software Architecture. In IEEE Computer, 34 (10) pp. 113-115.

 

Bass, Leonard J. and John, Bonnie E. (2001): Evaluating Software Architectures for Usability. In: Little, Murray Reed and Nigay, Laurence (eds.) EHCI 2001 - Engineering for Human-Computer Interaction, 8th IFIP International Conference May 11-13, 2001, Toronto, Canada. pp. 37-38.

2000
 

Baumeister, Lynn K., John, Bonnie E. and Byrne, Michael D. (2000): A Comparison of Tools for Building GOMS Models. In: Turner, Thea, Szwillus, Gerd, Czerwinski, Mary, Paternò, Fabio and Pemberton, Steven (eds.) Proceedings of the ACM CHI 2000 Human Factors in Computing Systems Conference April 1-6, 2000, The Hague, The Netherlands. pp. 502-509.

We compare three tools for creating GOMS models, QGOMS [2], CATHCI [17] and GLEAN3 [12], along several dimensions. We examine the representation and available constructs in each tool, the qualitative and quantitative design information provided, the support for building cognitively plausible models, and pragmatics about using each tool (e.g., how easy it is to modify a model). While each tool has its strengths, they all leave something to be desired as a practical UI design tool.

© All rights reserved Baumeister et al. and/or ACM Press

 

Bhavnani, Suresh K. and John, Bonnie E. (2000): The Strategic Use of Complex Computer Systems. In Human-Computer Interaction, 15 (2) pp. 107-137.

Several studies show that despite experience, many users with basic command knowledge do not progress to an efficient use of complex computer applications. These studies suggest that knowledge of tasks and knowledge of tools are insufficient to lead users to become efficient. To address this problem, we argue that users also need to learn strategies in the intermediate layers of knowledge lying between tasks and tools. These strategies are (a) efficient because they exploit specific powers of computers, (b) difficult to acquire because they are suggested by neither tasks nor tools, and (c) general in nature having wide applicability. The above characteristics are first demonstrated in the context of aggregation strategies that exploit the iterative power of computers. A cognitive analysis of a real-world task reveals that even though such aggregation strategies can have large effects on task time, errors, and on the quality of the final product, they are not often used by even experienced users. We identify other strategies beyond aggregation that can be efficient and useful across computer applications and show how they were used to develop a new approach to training with promising results. We conclude by suggesting that a systematic analysis of strategies in the intermediate layers of knowledge can lead not only to more effective ways to design training but also to more principled approaches to design systems. These advances should lead users to make more efficient use of complex computer systems.

© All rights reserved Bhavnani and John and/or Taylor and Francis

 

Ebling, Maria R. and John, Bonnie E. (2000): On the Contributions of Different Empirical Data in Usability Testing. In: Proceedings of DIS00: Designing Interactive Systems: Processes, Practices, Methods, & Techniques 2000. pp. 289-296.

Many sources of empirical data can be used to evaluate an interface (e.g., time to learn, time to perform benchmark tasks, number of errors on benchmark tasks, answers on questionnaires, comments made in verbal protocols). This paper examines the relative contributions of both quantitative and qualitative data gathered during a usability study. For each usability problem uncovered by this study, we trace each contributing piece of evidence back to its empirical source. For this usability study, the verbal protocol provided the sole source of evidence for more than one third of the most severe problems and more than two thirds of the less severe problems. Thus, although the verbal protocol provided the bulk of the evidence, other sources of data contributed disproportionately to the more critical problems. This work suggests that further research is required to determine the relative value of different forms of empirical evidence.

© All rights reserved Ebling and John and/or ACM Press

1999
 

Bhavnani, Suresh K., John, Bonnie E. and Flemming, Ulrich (1999): The Strategic Use of CAD: An Empirically Inspired, Theory-Based Course. In: Altom, Mark W. and Williams, Marian G. (eds.) Proceedings of the ACM CHI 99 Human Factors in Computing Systems Conference May 15-20, 1999, Pittsburgh, Pennsylvania. pp. 183-190.

The inefficient use of complex computer systems has been widely reported. These studies show the persistence of inefficient methods despite many years of experience and formal training. To counteract this phenomenon, we present the design of a new course, called the Strategic Use of CAD. The course aims at teaching students efficient strategies to use a computer-aided drafting system through a two-pronged approach. Learning to See teaches students to recognize opportunities to use efficient strategies by studying the nature of the task, and Learning to Do teaches students to implement the strategies. Results from a pilot experiment show that this approach had a positive effect on the strategic behavior of students who did not exhibit knowledge of efficient strategies before the class, and had no effect on the strategic behavior of those who did. Strategic training can thus assist users in recognizing opportunities to use efficient strategies. We present the ramifications of these results on the design of training and future experiments.

© All rights reserved Bhavnani et al. and/or ACM Press

 

Byrne, Michael D., John, Bonnie E., Wehrle, Neil S. and Crow, David C. (1999): The Tangled Web We Wove: A Taskonomy of WWW Use. In: Altom, Mark W. and Williams, Marian G. (eds.) Proceedings of the ACM CHI 99 Human Factors in Computing Systems Conference May 15-20, 1999, Pittsburgh, Pennsylvania. pp. 544-551.

A prerequisite to the effective design of user interfaces is an understanding of the tasks for which that interface will actually be used. Surprisingly little task analysis has appeared for one of the most discussed and fastest-growing computer applications, browsing the World-Wide Web (WWW). Based on naturally-collected verbal protocol data, we present a taxonomy of tasks undertaken on the WWW. The data reveal that several previous claims about browsing behavior are questionable, and suggest that widget-centered approaches to interface design and evaluation may be incomplete with respect to good user interfaces for the Web.

© All rights reserved Byrne et al. and/or ACM Press

 

Hudson, Scott E., John, Bonnie E., Knudsen, Keith and Byrne, Michael D. (1999): A Tool for Creating Predictive Performance Models from User Interface Demonstrations. In: Zanden, Brad Vander and Marks, Joe (eds.) Proceedings of the 12th annual ACM symposium on User interface software and technology November 07 - 10, 1999, Asheville, North Carolina, United States. pp. 93-102.

A central goal of many user interface development tools has been to make the construction of high quality interfaces easy enough that iterative design approaches could be a practical reality. In the last 15 years significant advances in this regard have been achieved. However, the evaluation portion of the iterative design process has received relatively little support from tools. Even though advances have also been made in usability evaluation methods, nearly all evaluation is still done "by hand," making it more expensive and difficult than it might be. This paper considers a partial implementation of the CRITIQUE usability evaluation tool that is being developed to help remedy this situation by automating a number of evaluation tasks. It describes techniques used by the system to produce predictive models (keystroke level models and simplified GOMS models) from demonstrations of sample tasks in a fraction of the time needed by conventional handcrafting methods. A preliminary comparison of automatically generated models with models created by an expert modeler shows them to produce very similar predictions (within 2%). Further, because they are automated, these models promise to be less subject to human error and less affected by the skill of the modeler.

© All rights reserved Hudson et al. and/or ACM Press

 

Altmann, Erik M. and John, Bonnie E. (1999): Episodic indexing: a model of memory for attention events. In Cognitive Science, 23 (2) pp. 117-156.

1998
 

Bhavnani, Suresh K. and John, Bonnie E. (1998): Delegation and Circumvention: Two Faces of Efficiency. In: Karat, Clare-Marie, Lund, Arnold, Coutaz, Jolle and Karat, John (eds.) Proceedings of the ACM CHI 98 Human Factors in Computing Systems Conference April 18-23, 1998, Los Angeles, California. pp. 273-280.

Throughout history, inefficient methods to use devices have been replaced by more efficient ones. This shift typically occurs when users discover how to delegate work to the powers of a tool, and to circumvent its limitations. Strategies of delegation and circumvention, therefore, appear to be the core of efficient use. To show how this approach can explain the relationship between tools and strategies in complex computer systems, we describe five ways to perform a real-world drawing task with current as well as future tools. We then present five corresponding GOMS models that demonstrate the value of efficient strategies when compared to the observed behavior of a professional CAD user. We conclude by presenting a generalized framework to characterize efficient strategies and discuss its relevance to design and training.

© All rights reserved Bhavnani and John and/or ACM Press

 

John, Bonnie E. (1998): On Our Case Study of Claims Analysis and Other Usability Evaluation Methods. In Behaviour and Information Technology, 17 (4) pp. 244-246.

 

John, Bonnie E. (1998): Cognitive Modeling in Human-Computer Interaction. In: Graphics Interface 98 June 18-20, 1998, Vancouver, British Columbia, Canada. pp. 161-167.

1997
 

John, Bonnie E. and Marks, Steven J. (1997): Tracking the Effectiveness of Usability Evaluation Methods. In Behaviour and Information Technology, 16 (4) pp. 188-202.

We present a case study that tracks usability problems predicted with six usability evaluation methods (claims analysis, cognitive walkthrough, GOMS, heuristic evaluation, user action notation, and simply reading the specification) through a development process. We assess the methods' predictive power by comparing the predictions to the results of user tests. We assess the methods' persuasive power by seeing how many problems led to changes in the implemented code. We assess design-change effectiveness by user testing the resulting new versions of the system. We conclude that predictive methods are not as effective as the HCI field would like and discuss directions for future research.

© All rights reserved John and Marks and/or Taylor and Francis

 
Edit | Del

John, Bonnie E. and Mashyna, Matthew M. (1997): Evaluating a Multimedia Authoring Tool. In JASIST - Journal of the American Society for Information Science and Technology, 48 (11) pp. 1004-1022.

1996
 

Pane, John F., Corbett, Albert T. and John, Bonnie E. (1996): Assessing Dynamics in Computer-Based Instruction. In: Tauber, Michael J., Bellotti, Victoria, Jeffries, Robin, Mackinlay, Jock D. and Nielsen, Jakob (eds.) Proceedings of the ACM CHI 96 Human Factors in Computing Systems Conference April 14-18, 1996, Vancouver, Canada. pp. 197-204.

We present an evaluation of a multimedia educational software system that includes text, graphics, animations, and simulations. When compared with an informationally equivalent control environment that used text and carefully selected still images, we found little evidence that the dynamic presentations enhanced student understanding of the declarative information in this lesson. Furthermore, students cannot be relied on to take full advantage of exploratory opportunities in computer-based instruction. These results prescribe further investigation of whether and how computer-based multimedia can be used effectively in education and training.

© All rights reserved Pane et al. and/or ACM Press

 

Bhavnani, Suresh K. and John, Bonnie E. (1996): Exploring the Unrealized Potential of Computer-Aided Drafting. In: Tauber, Michael J., Bellotti, Victoria, Jeffries, Robin, Mackinlay, Jock D. and Nielsen, Jakob (eds.) Proceedings of the ACM CHI 96 Human Factors in Computing Systems Conference April 14-18, 1996, Vancouver, Canada. pp. 332-339.

Despite huge investments by vendors and users, CAD productivity remains disappointing. Our analysis of real-world CAD usage shows that even after many years of experience, users tend to use suboptimal strategies to perform complex CAD tasks. Additionally, some of these strategies have a marked resemblance to manual drafting techniques. Although this phenomenon has been previously reported, this paper explores explanations for its causes and persistence. We argue that the strategic knowledge to use CAD effectively is neither defined nor explicitly taught. In the absence of a well-formed strategy, users often develop a synthetic mental model of CAD containing a mixture of manual and CAD methods. As these suboptimal strategies do not necessarily prevent users from producing clean, accurate drawings, the inefficiencies tend to remain unrecognized and users have little motivation to develop better strategies. To reverse this situation we recommend that the strategic knowledge to use CAD effectively should be made explicit and provided early in training. We use our analysis to begin the process of making this strategic knowledge explicit. We conclude by discussing the ramifications of this research in training as well as in the development of future computer aids for drawing and design.

© All rights reserved Bhavnani and John and/or ACM Press

 

John, Bonnie E. (1996): TYPIST: A Theory of Performance in Skilled Typing. In Human-Computer Interaction, 11 (4) pp. 321-355.

TYPIST is a TheorY of Performance In Skilled Typing built within the framework of the Model Human Processor (MHP; Card, Moran, & Newell, 1983). As such, it can be used to make quantitative predictions of performance on typing tasks and can be integrated with other MHP-based models of performance. In this article, I present the theory and explain the source of each theoretical assumption (MHP, typing task analysis, or empirical typing data). I then demonstrate different ways to use TYPIST by applying it to six transcription typing tasks. Finally, I summarize its application to many more typing tasks that display robust behavioral phenomena identified by Salthouse (1986).

© All rights reserved John and/or Taylor and Francis

 

John, Bonnie E. and Kieras, David E. (1996): Using GOMS for User Interface Design and Evaluation: Which Technique?. In ACM Transactions on Computer-Human Interaction, 3 (4) pp. 287-319.

Since the seminal book, The Psychology of Human-Computer Interaction, the GOMS model has been one of the few widely known theoretical concepts in human-computer interaction. This concept has spawned much research to verify and extend the original work and has been used in real-world design and evaluation situations. This article synthesizes the previous work on GOMS to provide an integrated view of GOMS models and how they can be used in design. We briefly describe the major variants of GOMS that have matured sufficiently to be used in actual design. We then provide guidance to practitioners about which GOMS variant to use for different design situations. Finally, we present examples of the application of GOMS to practical design problems and then summarize the lessons learned.

© All rights reserved John and Kieras and/or ACM Press

 Cited in the following chapter:

Usability Evaluation: [/encyclopedia/usability_evaluation.html]


 
 

John, Bonnie E. and Kieras, David E. (1996): The GOMS Family of User Interface Analysis Techniques: Comparison and Contrast. In ACM Transactions on Computer-Human Interaction, 3 (4) pp. 320-351.

Since the publication of The Psychology of Human-Computer Interaction, the GOMS model has been one of the most widely known theoretical concepts in HCI. This concept has produced several GOMS analysis techniques that differ in appearance and form, underlying architectural assumptions, and predictive power. This article compares and contrasts four popular variants of the GOMS family (the Keystroke-Level Model, the original GOMS formulation, NGOMSL, and CPM-GOMS) by applying them to a single task example.
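As a rough illustration of the simplest member of that family, the Keystroke-Level Model predicts expert execution time by summing standard operator durations. The sketch below uses the canonical operator values from Card, Moran, and Newell (1983); the task sequence itself is hypothetical, invented for illustration.

```python
# Keystroke-Level Model (KLM) sketch. Operator durations are the standard
# values from Card, Moran & Newell (1983): K = keystroke (average typist),
# P = point with a mouse, H = home hands between devices, M = mental
# preparation.
KLM_SECONDS = {"K": 0.28, "P": 1.10, "H": 0.40, "M": 1.35}

def klm_estimate(operators: str) -> float:
    """Predicted execution time (s) for a string of KLM operator codes."""
    return sum(KLM_SECONDS[op] for op in operators)

# Hypothetical task: prepare mentally (M), home to the mouse (H), point at
# a menu item (P), home back to the keyboard (H), type two keys (K, K).
print(round(klm_estimate("MHPHKK"), 2))  # → 3.81
```

Richer variants such as CPM-GOMS relax the strictly serial summation above by scheduling perceptual, cognitive, and motor operators in parallel.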

© All rights reserved John and Kieras and/or ACM Press

 Cited in the following chapter:

User Interface Design Adaptation: [/encyclopedia/user_interface_design_adaptation.html]


 
1995
 

Altmann, Erik M., Larkin, Jill H. and John, Bonnie E. (1995): Display Navigation by an Expert Programmer: A Preliminary Model of Memory. In: Katz, Irvin R., Mack, Robert L., Marks, Linn, Rosson, Mary Beth and Nielsen, Jakob (eds.) Proceedings of the ACM CHI 95 Human Factors in Computing Systems Conference May 7-11, 1995, Denver, Colorado. pp. 3-10.

Skilled programmers, working on natural tasks, navigate large information displays with apparent ease. We present a computational cognitive model suggesting how this navigation may be achieved. We trace the model on two related episodes of behavior. In the first, the user acquires information from the display. In the second, she recalls something about the first display and scrolls back to it. The episodes are separated by time and by intervening displays, suggesting that her navigation is mediated by long-term memory, as well as working memory and the display. In the first episode, the model automatically learns to recognize what it sees on the display. In the second episode, a chain of recollections, cued initially by the new display, leads the model to imagine what it might have seen earlier. The knowledge from the first episode recognizes this image, leading the model to scroll in search of the real thing. This model is a step in developing a psychology of skilled programmers working on their own tasks.

© All rights reserved Altmann et al. and/or ACM Press

 

Bauer, Malcolm I. and John, Bonnie E. (1995): Modeling Time-Constrained Learning in a Highly Interactive Task. In: Katz, Irvin R., Mack, Robert L., Marks, Linn, Rosson, Mary Beth and Nielsen, Jakob (eds.) Proceedings of the ACM CHI 95 Human Factors in Computing Systems Conference May 7-11, 1995, Denver, Colorado. pp. 19-26.

We investigate whether a memory-based learning procedure can explain the development of expertise within the time-constraints of a fast-paced highly interactive task. Our computational cognitive model begins with novice-like knowledge of a domain, and through experience converges on behavior that matches a pre-existing GOMS model of expert human performance. The model coordinates perception, comprehension, strategic planning, learning, memory, and motor action to respond to the time demands of the task while continually improving its performance. Because the model was constructed within the Soar architecture, it is able to make predictions of learning and performance time.

© All rights reserved Bauer and John and/or ACM Press

 

John, Bonnie E. and Packer, Hilary (1995): Learning and Using the Cognitive Walkthrough Method: A Case Study Approach. In: Katz, Irvin R., Mack, Robert L., Marks, Linn, Rosson, Mary Beth and Nielsen, Jakob (eds.) Proceedings of the ACM CHI 95 Human Factors in Computing Systems Conference May 7-11, 1995, Denver, Colorado. pp. 429-436.

We present a detailed case study, drawn from many information sources, of a computer scientist learning and using Cognitive Walkthroughs to assess a multi-media authoring tool. This study results in several clear messages to both system designers and to developers of evaluation techniques: this technique is currently learnable and usable, but there are several areas where further method-development would greatly contribute to a designer's use of the technique. In addition, the emergent picture of the process this evaluator went through to produce his analysis sets realistic expectations for other novice evaluators who contemplate learning and using Cognitive Walkthroughs.

© All rights reserved John and Packer and/or ACM Press

 

John, Bonnie E. (1995): Why GOMS?. In Interactions, 2 (4) pp. 80-89.

In 1983 Stu Card, Tom Moran, and Allen Newell presented the concept of "GOMS" in their seminal book, The Psychology of Human-Computer Interaction. They suggested that, when designing a computer system, it is useful to analyze knowledge about how to perform a task in terms of four components: Goals, Operators, Methods, and Selection rules. At the time of their proposal, I was a systems engineer at Bell Laboratories writing specifications for the functionality and user interface of small-business telecommunication systems. If I had heard this claim then, a raft of questions would have leapt to my mind: What is GOMS? What can it do? In what situations does it apply? How do you use it? Who can use it? Can it be trusted? Is it worth the bother of using it? A dozen years later, these same questions still exist for many people: for usability specialists who have not yet used GOMS, for their managers, and for computer system designers. After leaving Bell Laboratories to get a Ph.D. in cognitive psychology/human-computer interaction, I've spent the last twelve years using GOMS: reading about it, exercising it, and doing research to push the envelope of what it can do. This time around, in addition to asking the questions, I may be able to provide some of the answers.

© All rights reserved John and/or Lawrence Erlbaum Associates

1994
 

John, Bonnie E., Vera, Alonso H. and Newell, Allen (1994): Towards Real-Time GOMS: A Model of Expert Behaviour in a Highly Interactive Task. In Behaviour and Information Technology, 13 (4) pp. 255-267.

We present an analysis of an expert performing a highly interactive computer task. The analysis uses GOMS models, specifying the Goals, Operators, Methods, and Selection rules used by the expert. Two models are presented, one with function-level operators which perform high-level functions in the domain, and one with keystroke-level operators which describe hand movements. For a segment of behaviour in which the expert accomplished about 30 functions in about 30 s, the function-level model predicted the observed behaviour well, while the keystroke-level model predicted only about half of the observed hand movements. These results, including the discrepancy between the models, are discussed.

© All rights reserved John et al. and/or Taylor and Francis

1993
 

John, Bonnie E. and Morris, James H. (1993): HCI in the School of Computer Science at Carnegie Mellon University. In: Ashlund, Stacey, Mullet, Kevin, Henderson, Austin, Hollnagel, Erik and White, Ted (eds.) Proceedings of the ACM CHI 93 Human Factors in Computing Systems Conference April 24-29, 1993, Amsterdam, The Netherlands. pp. 49-50.

People use computers to accomplish tasks. Consequently, understanding human capabilities and tasks is as important to the design of computer systems as understanding computer technologies. The School of Computer Science (SCS) at Carnegie Mellon University (CMU) has become home to an interdisciplinary community that performs research on HCI issues, develops systems using HCI methods of design and evaluation, and trains students in the theory and skills necessary to become HCI professionals.

© All rights reserved John and Morris and/or ACM Press

 

Gray, Wayne D., John, Bonnie E., Nardi, Bonnie A., Petre, Marion, Spohrer, James C. and Turner, Althea A. (1993): End-User Programming. In: Cook, Curtis, Scholtz, Jean and Spohrer, James C. (eds.) Empirical Studies of Programmers - Fifth Workshop December 3-15, 1993, 1993, Palo Alto, California. pp. 1-2.

End-user programming involves the end user building new tools, not simply using an application. Hence, word processing is not an example of end-user programming, while building style sheets for a word processor would be. Using communication software is not; writing a script for the communication software is. Using someone else's spreadsheet is not; building your own spreadsheet is. Using someone else's HyperCard stack is not; building your own is. Running someone else's cognitive model is not; building a cognitive model that fits your theory is. This definition includes specialized software for experts (for example, Edmonds, O'Brien, & Bayley, 1993), semi-domain-specialized software such as spreadsheets, as well as intendedly general-purpose (but specialized anyway) software such as HyperCard. The two defining characteristics are: building software tools (what the end-user programming language, EPL, is used for) and the characteristics of the user (whose main interest is in building a tool for which they, among possible others, will be a user). Hence, LISP could be considered an EPL for C programmers who use EMACS.

© All rights reserved Gray et al. and/or Ablex Publishing

 

Gray, Wayne D., John, Bonnie E. and Atwood, Michael E. (1993): Project Ernestine: Validating a GOMS Analysis for Predicting and Explaining Real-World Task Performance. In Human-Computer Interaction, 8 (3) pp. 237-309.

Project Ernestine served a pragmatic as well as a scientific goal: to compare the worktimes of telephone company toll and assistance operators on two different workstations and to validate a GOMS analysis for predicting and explaining real-world performance. Contrary to expectations, GOMS predicted and the data confirmed that performance with the proposed workstation was slower than with the current one. Pragmatically, this increase in performance time translates into a cost of almost $2 million a year to NYNEX. Scientifically, the GOMS models predicted performance with exceptional accuracy. The empirical data provided us with three interesting results: proof that the new workstation was slower than the old one, evidence that this difference was not constant but varied with call category, and (in a trial that spanned 4 months and collected data on 72,450 phone calls) proof that performance on the new workstation stabilized after the first month. The GOMS models predicted the first two results and explained all three. In this article, we discuss the process and results of model building as well as the design and outcome of the field trial. We assess the accuracy of GOMS predictions and use the mechanisms of the models to explain the empirical results. Last, we demonstrate how the GOMS models can be used to guide the design of a new workstation and evaluate design decisions before they are implemented.

© All rights reserved Gray et al. and/or Taylor and Francis

1992
 

Peck, Virginia A. and John, Bonnie E. (1992): Browser-Soar: A Computational Model of a Highly Interactive Task. In: Bauersfeld, Penny, Bennett, John and Lynch, Gene (eds.) Proceedings of the ACM CHI 92 Human Factors in Computing Systems Conference June 3-7, 1992, Monterey, California. pp. 165-172.

Browser-Soar models the perceptual, cognitive, and motor operators of a user searching for information in an on-line help browser. The model accounts for 90% of the browsing behavior observed in ten episodes. This result suggests that much of browsing behavior is a routine cognitive task, describable by GOMS, and extends the boundary of tasks to which GOMS applies to include highly interactive tasks. Further, it also suggests that GOMS analyses can be used to evaluate browser interfaces, as they have been used to evaluate text-editors and other computer applications, and to help focus design effort.

© All rights reserved Peck and John and/or ACM Press

 

John, Bonnie E. and Vera, Alonso H. (1992): A GOMS Analysis of a Graphic, Machine-Paced, Highly Interactive Task. In: Bauersfeld, Penny, Bennett, John and Lynch, Gene (eds.) Proceedings of the ACM CHI 92 Human Factors in Computing Systems Conference June 3-7, 1992, Monterey, California. pp. 251-258.

A GOMS analysis was used to predict the behavior of an expert in a graphic, machine-paced, highly interactive task. The analysis was implemented in a computational model using the Soar cognitive architecture. Using only the information available in an instruction booklet and some simple heuristics for selecting between operators, the functional-level behavior of the expert proved to be virtually dictated by the objects visible on the display. At the keystroke-level, the analysis predicted about 60% of the behavior, in keeping with similar results in previous GOMS research. We conclude that GOMS is capable of predicting expert behavior in a broader range of tasks than previously demonstrated.

© All rights reserved John and Vera and/or ACM Press

 

Gray, Wayne D., John, Bonnie E. and Atwood, Michael E. (1992): The Precis of Project Ernestine, or, An Overview of a Validation of GOMS. In: Bauersfeld, Penny, Bennett, John and Lynch, Gene (eds.) Proceedings of the ACM CHI 92 Human Factors in Computing Systems Conference June 3-7, 1992, Monterey, California. pp. 307-312.

1990
 

John, Bonnie E. (1990): Extensions of GOMS Analyses to Expert Performance Requiring Perception of Dynamic Visual and Auditory Information. In: Carrasco, Jane and Whiteside, John (eds.) Proceedings of the ACM CHI 90 Human Factors in Computing Systems Conference 1990, Seattle, Washington,USA. pp. 107-115.

GOMS models of telephone toll and assistance operators (TAOs) are being constructed in an effort to provide theoretical predictions of expert performance on several dedicated workstations. This applied effort has pushed the development of GOMS modeling techniques into the area of speech input and output, and into a task where information is not always available when it is required by the TAO. This paper describes the task, heuristics for constructing the GOMS models, and parameters for making quantitative predictions of performance time.

© All rights reserved John and/or ACM Press

 

Gray, Wayne D., John, Bonnie E., Stuart, Rory, Lawrence, Deborah and Atwood, Michael E. (1990): GOMS Meets the Phone Company: Analytic Modeling Applied to Real-World Problems. In: Diaper, Dan, Gilmore, David J., Cockton, Gilbert and Shackel, Brian (eds.) INTERACT 90 - 3rd IFIP International Conference on Human-Computer Interaction August 27-31, 1990, Cambridge, UK. pp. 29-34.

GOMS analyses were used to interpret some perplexing data from a field evaluation of two telephone operator workstations. The new workstation is ergonomically superior to the old and is preferred by all who have used it. Despite these advantages telephone operators who use the new workstation are not faster than those who use the old but are, in fact, significantly slower. This bewildering result makes sense when seen with the aid of GOMS. With GOMS we can see that very few of the eliminated key-strokes or ergonomic advantages affect tasks that determine the operator's work time. Indeed, GOMS shows that some presumed procedural improvements have the contrary effect of increasing the time an operator spends handling a phone call. We concluded that if GOMS had been done early on, then the task, not the workstation, would have been redesigned.

© All rights reserved Gray et al. and/or North-Holland

 

Gray, Wayne D., Atwood, Michael E., Olson, Judith S., John, Bonnie E. and Elkerton, Jay (1990): Real-Time GOMS: Comparative Modeling of a User-Nintendo Interaction. In: Woods, D. and Roth, E. (eds.) Proceedings of the Human Factors Society 34th Annual Meeting 1990, Santa Monica, USA. pp. 385-386.

1989
 

John, Bonnie E. and Newell, Allen (1989): Cumulating the Science of HCI: From S-R Compatibility to Transcription Typing. In: Bice, Ken and Lewis, Clayton H. (eds.) Proceedings of the ACM CHI 89 Human Factors in Computing Systems Conference April 30 - June 4, 1989, Austin, Texas. pp. 109-114.

In keeping with our claim that an applied psychology of HCI must be based on cumulative work within a unified framework, we present two extensions of the Model Human Processor. A model of immediate response behavior and stimulus-response (S-R) compatibility is presented and extended to a new domain: transcription typing. Parameters are estimated using one S-R compatibility experiment, used to make a priori predictions in four other S-R compatibility tasks, and then carried over into the area of typing. A model of expert transcription typing is described and its prediction of typing phenomena is demonstrated and summarized.

© All rights reserved John and Newell and/or ACM Press

 

Wolf, Catherine G., Carroll, John M., Landauer, Thomas K., John, Bonnie E. and Whiteside, John (1989): The Role of Laboratory Experiments in HCI: Help, Hindrance, or Ho-Hum?. In: Bice, Ken and Lewis, Clayton H. (eds.) Proceedings of the ACM CHI 89 Human Factors in Computing Systems Conference April 30 - June 4, 1989, Austin, Texas. pp. 265-268.

1987
 

John, Bonnie E. and Newell, Allen (1987): Predicting the time to recall computer command abbreviations. In: Graphics Interface 87 (CHI+GI 87) April 5-9, 1987, Toronto, Ontario, Canada. pp. 33-40.

1985
 

John, Bonnie E., Rosenbloom, Paul S. and Newell, Allen (1985): A Theory of Stimulus-Response Compatibility Applied to Human-Computer Interaction. In: Borman, Lorraine and Curtis, Bill (eds.) Proceedings of the ACM CHI 85 Human Factors in Computing Systems Conference April 14-18, 1985, San Francisco, California. pp. 213-219.

A GOMS theory of stimulus-response compatibility is presented and applied to remembering computer command abbreviations. Two abbreviation techniques, vowel-deletion and special-character-plus-first-letter, are compared in an encoding task. Significant differences are found in the time to type the first letter of the abbreviation, and in the time to complete the typing of the abbreviation. These differences are analyzed using the theory, which produces an excellent quantitative fit to the data (r² = 0.97).
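The quantitative fit reported above is a squared Pearson correlation between model-predicted and observed times. A minimal sketch of that computation follows; the numbers are hypothetical stand-ins, not the paper's data.

```python
from statistics import mean

def r_squared(predicted, observed):
    """Squared Pearson correlation between two equal-length sequences."""
    mp, mo = mean(predicted), mean(observed)
    cov = sum((p - mp) * (o - mo) for p, o in zip(predicted, observed))
    var_p = sum((p - mp) ** 2 for p in predicted)
    var_o = sum((o - mo) ** 2 for o in observed)
    return cov * cov / (var_p * var_o)

# Hypothetical predicted vs. observed abbreviation-typing times (seconds).
predicted = [0.52, 0.61, 0.75, 0.90, 1.10]
observed = [0.50, 0.65, 0.72, 0.95, 1.08]
print(r_squared(predicted, observed) > 0.9)  # strong fit → True
```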

© All rights reserved John et al. and/or ACM Press

John, Bonnie E. (2013): Cognitive Modeling. In: Soegaard, Mads and Dam, Rikke Friis (eds.). "The Encyclopedia of Human-Computer Interaction, 2nd Ed.". Aarhus, Denmark: The Interaction Design Foundation. Available online at https://www.interaction-design.org/encyclopedia/cognitive_modeling.html


Page Information

Page maintainer: The Editorial Team
URL: http://www.interaction-design.org/references/authors/bonnie_e__john.html
