Number of co-authors: 85
Number of publications with 3 favourite co-authors:
Joanna McGrenere: 7
William B. Cowan: 5
Barry A. Po: 4
Kellogg S. Booth's 3 most productive colleagues in number of publications:
Kori Inkpen: 70
Ronald M. Baecker: 67
Colin Ware: 58
Kellogg S. Booth
Has also published under the names of:
"K. S. Booth" and "Kellogg Booth"
Personal Homepage: http://www.cs.ubc.ca/~ksbooth/
Publications by Kellogg S. Booth (bibliography)
MacKenzie, Russell, Hawkey, Kirstie, Booth, Kellogg S., Liu, Zhangbo, Perswain, Presley and Dhillon, Sukhveer S. (2012): LACOME: a multi-user collaboration system for shared large displays. In: Companion Proceedings of ACM CSCW12 Conference on Computer-Supported Cooperative Work 2012. pp. 267-268.
In this paper we describe LACOME, a collaboration system that allows multiple users to simultaneously publish their computer desktops to a shared large-screen display and allows other users to interact with the displayed information on a variety of semantic levels. LACOME features our LSO (Large Screen Optimized) window manipulation technique, which utilizes the entire window for manipulations instead of only the title bar and borders, and includes 'snapping regions' that automatically move the cursor to the window's boundary, allowing quick, accurate manipulations at the edges and corners of the screen.
© All rights reserved MacKenzie et al. and/or ACM Press
Hendy, Jeff C., Link, Juliette, Booth, Kellogg S. and McGrenere, Joanna (2011): Parameter selection in keyboard-based dialog boxes. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 2761-2764.
Recent keyboard-based alternatives to WIMP interfaces do not have good support for commands that require multiple parameters. We remedy this by extending a previous design and mimicking dialog boxes to provide good visual feedback while still keeping the advantages of keyboard input. A laboratory study showed the new technique to be competitive with dialog boxes on speed and error rate, but strongly preferred over dialog boxes by experienced command line users. This is a marked improvement over the previous design, which was also preferred by the target user group but did not compete with dialog boxes in terms of performance.
© All rights reserved Hendy et al. and/or their publisher
Shoemaker, Garth, Tsukitani, Takayuki, Kitamura, Yoshifumi and Booth, Kellogg S. (2010): Body-centric interaction techniques for very large wall displays. In: Proceedings of the Sixth Nordic Conference on Human-Computer Interaction 2010. pp. 463-472.
We examine the design space of interaction techniques for very large wall displays by drawing from existing theory and practice for reality-based interfaces and whole-body interfaces. We also apply insights drawn from research in psychology about the human cognitive mechanisms that support sensorimotor operations in different coordinate spaces, as well as research in sociology examining how people manage coordination and privacy concerns in these spaces. Using guidelines obtained from these analyses, we designed and implemented a novel suite of body-centric interaction techniques. These were integrated into a map browsing and editing application for a very large (5m×3m) wall display. The application was then used to gather user feedback to guide the further development of the interaction techniques.
© All rights reserved Shoemaker et al. and/or their publisher
Raja, Fahimeh, Hawkey, Kirstie, Beznosov, Konstantin and Booth, Kellogg S. (2010): Investigating an appropriate design for personal firewalls. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 4123-4128.
Personal firewalls are an important aspect of security for home computer users, but little attention has been given to their usability. We conducted semi-structured interviews to understand participants' knowledge, requirements, expectations, and misconceptions for personal firewalls. Analysis of 10 interviews shows that different design decisions (i.e., level of automation, multiple profile settings) are appropriate for users with different levels of security knowledge and experience.
© All rights reserved Raja et al. and/or their publisher
Shoemaker, Garth, Tsukitani, Takayuki, Kitamura, Yoshifumi and Booth, Kellogg S. (2010): Whole body large wall display interfaces. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 4809-4812.
This video demonstrates an application that uses a body-centric approach to support interaction with very large wall displays. The design is centered on a virtual body model that represents the users in the context of the workspace, relative to one another as well as to the display(s). This concept of body-centric interaction serves both as a design philosophy and an implementation approach and is both general and powerful. Our approach is general because if the model is detailed enough, a broad range of interaction techniques can be implemented. It is powerful because it opens up an entire class of new interaction techniques: those that depend on properties of a users' body, such as arm or hand pointing direction, head direction, or body location or orientation. The video highlights some of the body-centric interaction techniques that we believe are of value based on how people use their bodies in the everyday world.
© All rights reserved Shoemaker et al. and/or their publisher
Hendy, Jeff, Booth, Kellogg S. and McGrenere, Joanna (2010): Graphically enhanced keyboard accelerators for GUIs. In: Proceedings of the 2010 Conference on Graphics Interface 2010. pp. 3-10.
We introduce GEKA, a graphically enhanced keyboard accelerator method that provides the advantages of a traditional command line interface within a GUI environment, thus avoiding the "Fitts-induced bottleneck" of pointer movement that is characteristic of most WIMP methods. Our design rationale and prototype development were derived from a small formative user study, which suggested that advanced users would like alternatives to WIMP methods in GUIs. The results of a controlled experiment show that GEKA performs well, is faster than menu selection, and is strongly preferred over all mouse-based WIMP methods.
© All rights reserved Hendy et al. and/or their publisher
Maksakov, Evgeny, Booth, Kellogg S. and Hawkey, Kirstie (2010): Whale Tank Virtual Reality. In: Proceedings of the 2010 Conference on Graphics Interface 2010. pp. 185-192.
Whale Tank Virtual Reality (VR) is a novel head-coupled VR technique for collocated collaboration. It allows multiple users to observe a 3D scene from the correct perspective through their own personal viewport into the virtual scene and to interact with the scene on a large touch screen display. There are two primary benefits to Whale Tank VR: 1) Head coupling allows a user to experience the sense of a third dimension and to observe difficult-to-see objects without requiring navigation beyond natural head movement. 2) Multiple viewports enable collocated collaboration by seamlessly adjusting the head-coupled perspectives in each viewport according to the proximity of collaborators to ensure a consistent display at all times. One potential disadvantage that we had to consider was that head-coupling might reduce awareness of a collocated coworker's actions in the 3D scene. We therefore conducted an experiment to study the influence of head coupling on users' awareness-and-recall of actions in a simulated collaborative situation for several levels of task difficulty. Results revealed no statistically significant difference in awareness-and-recall performance with or without the presence of head coupling. This suggests that in situations where head coupling is employed, there is no degradation in users' awareness of collocated activity.
© All rights reserved Maksakov et al. and/or their publisher
Shoemaker, Garth, Findlater, Leah, Dawson, Jessica Q. and Booth, Kellogg S. (2009): Mid-air text input techniques for very large wall displays. In: Proceedings of the 2009 Conference on Graphics Interface 2009. pp. 231-238.
Traditional text input modalities, namely keyboards, are often not appropriate for use when standing in front of very large wall displays. Direct interaction techniques, such as handwriting, are better, but are not well suited to situations where users are not in close physical proximity to the display. We discuss the potential of mid-air interaction techniques for text input on very large wall displays, and introduce two factors, distance-dependence and visibility-dependence, which are useful for segmenting the design space of mid-air techniques. We then describe three techniques that were designed with the goal of exploring the design space, and present a comparative evaluation of those techniques. Questions raised by the evaluation were investigated further in a second evaluation focusing on distance-dependence. The two factors of distance- and visibility-dependence can guide the design of future text input techniques, and our results suggest that distance-independent techniques may be best for use with very large wall displays.
© All rights reserved Shoemaker et al. and/or their publisher
Lanir, Joel, Booth, Kellogg S. and Findlater, Leah (2008): Observing presenters' use of visual aids to inform the design of classroom presentation software. In: Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems April 5-10, 2008. pp. 695-704.
Large classrooms have traditionally provided multiple blackboards on which an entire lecture could be visible. In recent decades, classrooms were augmented with a data projector and screen, allowing computer-generated slides to replace hand-written blackboard presentations and overhead transparencies as the medium of choice. Many lecture halls and conference rooms will soon be equipped with multiple projectors that provide large, high-resolution displays of comparable size to an old fashioned array of blackboards. The predominant presentation software, however, is still designed for a single medium-resolution projector. With the ultimate goal of designing rich presentation tools that take full advantage of increased screen resolution and real estate, we conducted an observational study to examine current practice with both traditional whiteboards and blackboards, and computer-generated slides. We identify several categories of observed usage, and highlight differences between traditional media and computer slides. We then present design guidelines for presentation software that capture the advantages of the old and the new and describe a working prototype based on those guidelines that more fully utilizes the capabilities of multiple displays.
© All rights reserved Lanir et al. and/or ACM Press
Booth, Kellogg S. (2008): 2^5 Years Ago I Couldn't Even Spell Canadian, Now I Are One: Momentos of Collaborating on, with, and about Technology. In: Proceedings of the 2008 Conference on Graphics Interface May 28-30, 2008, Windsor, Ontario, Canada. pp. 107-114.
I've been in Canada doing research and teaching for just about half of my life. It's been fun. I will share some of the lessons I've learned and things I have discovered, but mostly I want to convey a sense of my delight in having had the good fortune over the years to always have worked within a collaborative setting, both in terms of the actual research that often involved systems to support communication between people and in terms of how the research itself was undertaken by a team of people.
© All rights reserved Booth and/or his/her publisher
Lanir, Joel and Booth, Kellogg S. (2008): MultiPresenter: a presentation system for (very) large display surfaces. In: El-Saddik, Abdulmotaleb, Vuong, Son, Griwodz, Carsten, Bimbo, Alberto Del, Candan, K. Selcuk and Jaimes, Alejandro (eds.) Proceedings of the 16th International Conference on Multimedia 2008 October 26-31, 2008, Vancouver, British Columbia, Canada. pp. 519-528.
Lanir, Joel and Booth, Kellogg S. (2008): Presentation tools for high-resolution and multiple displays. In: Proceedings of the 2008 ACM International Workshop on Human-Centered Multimedia 2008. pp. 61-68.
Presentation software was originally developed as a way to design overhead transparencies to be used as visual aids in talks. While much of the software has since changed, the basic design using the slide metaphor still follows the original purpose and does not accommodate the different needs and uses presentation software has today. We describe our experiences and design process in developing MultiPresenter -- a presentation system that works on multiple displays, designed to promote audiences' learning. Our human-centered approach includes observing instructors' use of traditional visual aids such as whiteboards and blackboards as well as newer aids such as computer-generated slide presentations, interviews with instructors during the requirement gathering phase, and multiple iterations of design and testing during the implementation phase. We describe our current and future plans for evaluating and extending our system. Evaluations focus on the deployment of MultiPresenter in actual classrooms to gain valuable feedback from both instructors and students on our design decisions and on the effects that our system has on learning.
© All rights reserved Lanir and Booth and/or ACM Press
Swindells, Colin, MacLean, Karon E., Booth, Kellogg S. and Meitner, Michael J. (2007): Exploring affective design for physical controls. In: Proceedings of ACM CHI 2007 Conference on Human Factors in Computing Systems 2007. pp. 933-942.
Physical controls such as knobs, sliders, and buttons are experiencing a revival as many computing systems progress from personal computing architectures towards ubiquitous computing architectures. We demonstrate a process for measuring and comparing visceral emotional responses of a physical control to performance results of a target acquisition task. In our user study, participants experienced mechanical and rendered friction, inertia, and detent dynamics as they turned a haptic knob towards graphical targets of two different widths and amplitudes. Together, this process and user study provide novel affect- and performance-based design guidance to developers of physical controls for emerging ubiquitous computing environments. Our work bridges extensive human factors work in mechanical systems that peaked in the 1960s with contemporary trends, with a goal of integrating mechatronic controls into emerging ubiquitous computing systems.
© All rights reserved Swindells et al. and/or ACM Press
McGrenere, Joanna, Baecker, Ronald M. and Booth, Kellogg S. (2007): A field evaluation of an adaptable two-interface design for feature-rich software. In ACM Transactions on Computer-Human Interaction, 14 (1) p. 3.
Two approaches for supporting personalization in complex software are system-controlled adaptive menus and user-controlled adaptable menus. We evaluate a novel interface design for feature-rich productivity software based on adaptable menus. The design allows the user to easily customize a personalized interface, and also supports quick access to the default interface with all of the standard features. This design was prototyped as a front-end to a commercial word processor. A field experiment investigated users' personalizing behavior and tested the effects of different interface designs on users' satisfaction and their perceived ability to navigate, control, and learn the software. There were two conditions: a commercial word processor with adaptive menus and our prototype with adaptable menus for the same word processor. Our evaluation shows: (1) when provided with a flexible, easy-to-use and easy-to-understand customization mechanism, the majority of users do effectively personalize their interface; and (2) user-controlled interface adaptation with our adaptable menus results in better navigation and learnability, and allows for the adoption of different personalization strategies, as compared to a particular system-controlled adaptive menu system that implements a single strategy. We report qualitative data obtained from interviews and questionnaires with participants in the evaluation in addition to quantitative data.
© All rights reserved McGrenere et al. and/or ACM Press
Shoemaker, Garth B. D., Tang, Anthony and Booth, Kellogg S. (2007): Shadow reaching: a new perspective on interaction for large displays. In: Proceedings of the ACM Symposium on User Interface Software and Technology October 7-10, 2007, Newport, Rhode Island, USA. pp. 53-56.
We introduce Shadow Reaching, an interaction technique that makes use of a perspective projection applied to a shadow representation of a user. The technique was designed to facilitate manipulation over large distances and enhance understanding in collaborative settings. We describe three prototype implementations that illustrate the technique, examining the advantages of using shadows as an interaction metaphor to support single users and groups of collaborating users. Using these prototypes as a design probe, we discuss how the three components of the technique (sensing, modeling, and rendering) can be accomplished with real (physical) or computed (virtual) shadows, and the benefits and drawbacks of each approach.
© All rights reserved Shoemaker et al. and/or ACM Press
Zheng, Qixing, Booth, Kellogg S. and McGrenere, Joanna (2006): Co-authoring with structured annotations. In: Proceedings of ACM CHI 2006 Conference on Human Factors in Computing Systems 2006. pp. 131-140.
Most co-authoring tools support basic annotations, such as edits and comments that are anchored at specific locations in the document. However, they do not support meta-commentary about a document (such as an author's summary of modifications) which gets separated from the document, often in the body of email messages. This causes unnecessary overhead in the write-review-edit workflow inherent in co-authoring. We present document-embedded structured annotations called "bundles" that incorporate the meta-commentary into a unified annotation model that meets a set of annotation requirements we identified through a small field investigation. A usability study with 20 subjects evaluated the annotation reviewing stage of co-authoring and showed that annotation bundles in our high-fidelity prototype reduced reviewing time and increased accuracy, compared to a system that only supports edits and comments.
© All rights reserved Zheng et al. and/or ACM Press
Sprague, David W., Po, Barry A. and Booth, Kellogg S. (2006): The importance of accurate VR head registration on skilled motor performance. In: Proceedings of the 2006 Conference on Graphics Interface 2006. pp. 131-137.
Many virtual reality (VR) researchers consider exact head registration (HR) and an exact multi-sensory alignment between real world and virtual objects to be a critical factor for effective motor performance in VR. Calibration procedures, however, can be error prone, time consuming and sometimes impractical to perform. To better understand the relationship between head registration and fine motor performance, we conducted a series of reciprocal tapping tasks under four conditions: real world tapping, VR with correct HR, VR with mildly perturbed HR, and VR with highly perturbed HR. As might be expected, VR performance was worse than real world performance. There was no effect of HR perturbation on motor performance in the tapping tasks. We believe that sensorimotor adaptation enabled subjects to perform equally well in the three VR conditions despite the incorrect head registration in two of the conditions. This suggests that exact head registration may not be as critically important as previously thought, and that extensive per-user calibration procedures may not be necessary for some VR tasks.
© All rights reserved Sprague et al. and/or Canadian Information Processing Society
Swindells, Colin, MacLean, Karon E., Booth, Kellogg S. and Meitner, Michael (2006): A case-study of affect measurement tools for physical user interface design. In: Proceedings of the 2006 Conference on Graphics Interface 2006. pp. 243-250.
Designers of human-computer interfaces often overlook issues of affect. An example illustrating the importance of affective design is the frustration many of us feel when working with a poorly designed computing device. Redesigning such computing interfaces to induce more pleasant user emotional responses would improve the user's health and productivity. Almost no research has been conducted to explore affective responses in rendered haptic interfaces. In this paper, we describe results and analysis from two user studies as a starting point for future systematic evaluation and design of rendered physical controls. Specifically, we compare and contrast self-report and biometric measurement techniques for two common types of haptic interactions. First, we explore the tactility of real textures such as silk, putty, and acrylic. Second, we explore the kinesthetics of physical control renderings such as friction and inertia. We focus on evaluation methodology, on the premise that good affect evaluation and analysis cycles can be a useful element of the interface designer's tool palette.
© All rights reserved Swindells et al. and/or Canadian Information Processing Society
Berry, Lior, Bartram, Lyn and Booth, Kellogg S. (2005): Role-based control of shared application views. In: Proceedings of the 2005 ACM Symposium on User Interface Software and Technology 2005. pp. 23-32.
Collaboration often relies on all group members having a shared view of a single-user application. A common situation is a single active presenter sharing a live view of her workstation screen with a passive audience, using simple hardware-based video signal projection onto a large screen or simple bitmap-based sharing protocols. This offers simplicity and some advantages over more sophisticated software-based replication solutions, but everyone has the exact same view of the application. This conflicts with the presenter's need to keep some information and interaction details private. It also fails to recognize the needs of the passive audience, who may struggle to follow the presentation because of verbosity, display clutter or insufficient familiarity with the application. Views that cater to the different roles of the presenter and the audience can be provided by custom solutions, but these tend to be bound to a particular application. In this paper we describe a general technique and implementation details of a prototype system that allows standardized role-specific views of existing single-user applications and permits additional customization that is application-specific with no change to the application source code. Role-based policies control manipulation and display of shared windows and image buffers produced by the application, providing semi-automated privacy protection and relaxed verbosity to meet both presenter and audience needs.
© All rights reserved Berry et al. and/or ACM Press
Po, Barry A., Fisher, Brian D. and Booth, Kellogg S. (2005): Comparing cursor orientations for mouse, pointer, and pen interaction. In: Proceedings of ACM CHI 2005 Conference on Human Factors in Computing Systems 2005. pp. 291-300.
Most graphical user interfaces provide visual cursors to facilitate interaction with input devices such as mice, pointers, and pens. These cursors often include directional cues that could influence the stimulus-response compatibility of user input. We conducted a controlled evaluation of four cursor orientations and an orientation-neutral cursor in a circular menu selection task. Mouse interaction on a desktop, pointer (i.e. wand) interaction on a large screen, and pen interaction on a Tablet PC were evaluated. Our results suggest that choosing appropriate cursors is especially important for pointer interaction, but may be less important for mice or pens. Cursors oriented toward the lower-right corner of a display yielded the poorest performance overall while orientation-neutral cursors were generally the best. Advantages were found for orientations aligned with the direction of movement. We discuss these results and suggest guidelines for the appropriate use of cursors in various input and display configurations.
© All rights reserved Po et al. and/or ACM Press
Po, Barry A., Fisher, Brian D. and Booth, Kellogg S. (2005): A two visual systems approach to understanding voice and gestural interaction. In Virtual Reality, 8 (4) pp. 231-241.
Po, Barry A., Fisher, Brian D. and Booth, Kellogg S. (2004): Mouse and touchscreen selection in the upper and lower visual fields. In: Dykstra-Erickson, Elizabeth and Tscheligi, Manfred (eds.) Proceedings of ACM CHI 2004 Conference on Human Factors in Computing Systems April 24-29, 2004, Vienna, Austria. pp. 359-366.
Neuroanatomical evidence indicates the human eye's visual field can be functionally divided into two vertical hemifields, each specialized for specific functions. The upper visual field (UVF) is specialized to support perceptual tasks in the distance, while the lower visual field (LVF) is specialized to support visually-guided motor tasks, such as pointing. We present a user study comparing mouse- and touchscreen-based pointing for items presented in the UVF and LVF on an interactive display. Consistent with the neuroscience literature, we found that mouse and touchscreen pointing were faster and more accurate for items presented in the LVF when compared to pointing at identical targets presented in the UVF. Further analysis found previously unreported performance differences between the visual fields for touchscreen pointing that were not observed for mouse pointing. This indicates that a placement of interactive items favorable to the LVF yields superior user performance, especially for systems dependent on direct touch interactions.
© All rights reserved Po et al. and/or ACM Press
Cubranic, Davor, Murphy, Gail C., Singer, Janice and Booth, Kellogg S. (2004): Learning from project history: a case study for software development. In: Proceedings of ACM CSCW04 Conference on Computer-Supported Cooperative Work 2004. pp. 82-91.
The lack of lightweight communication channels and other technical and sociological difficulties make it hard for new members of a non-collocated software development team to learn effectively from their more experienced colleagues while they are coming up-to-speed on a project. To address this situation, we have developed a tool, named Hipikat, that provides developers with efficient and effective access to the group memory for a software development project that is implicitly formed by all of the artifacts produced during the development. This project memory is built automatically with little or no change to existing work practices. We report an exploratory case study evaluating whether software developers who are new to a project can benefit from the artifacts that Hipikat recommends from the project memory. To assess the appropriateness of the recommendations, we investigated when and how developers queried the project memory, how they evaluated the recommended artifacts, and the process by which they utilized the artifacts. We found that newcomers did use the recommendations and their final solutions exploited the recommended artifacts, although most of the Hipikat queries came in the early stages of a change task. We describe the case study, present qualitative observations, and suggest implications of using project memory as a learning aid for project newcomers.
© All rights reserved Cubranic et al. and/or ACM Press
Hancock, Mark S. and Booth, Kellogg S. (2004): Improving menu placement strategies for pen input. In: Graphics Interface 2004 May 17-19, 2004, London, Ontario, Canada. pp. 221-230.
We investigate menu selection in circular and rectangular pop-up menus using stylus-driven direct input on horizontal and vertical display surfaces. An experiment measured performance in a target acquisition task in three different conditions: direct input on a horizontal display surface, direct input on a vertical display and indirect input to a vertical display. The third condition allows comparison of direct and indirect techniques commonly used for vertical displays. The results of the study show that both left-handed and right-handed users demonstrate a consistent, but mirrored pattern of selection times that is corroborated by qualitative measures of user preference. We describe a menu placement strategy for a tabletop display that detects the handedness of the user and displays rectangular pop-up menus. This placement is based on the results of our study.
© All rights reserved Hancock and Booth and/or their publisher
McGrenere, Joanna, Baecker, Ronald M. and Booth, Kellogg S. (2002): An evaluation of a multiple interface design solution for bloated software. In: Terveen, Loren (ed.) Proceedings of the ACM CHI 2002 Conference on Human Factors in Computing Systems Conference April 20-25, 2002, Minneapolis, Minnesota. pp. 163-170.
Booth, Kellogg S., Fisher, Brian D., Lin, Chi Jui Raymond and Argue, Ritchie (2002): The "mighty mouse" multi-screen collaboration tool. In: Beaudouin-Lafon, Michel (ed.) Proceedings of the 15th annual ACM symposium on User interface software and technology October 27-30, 2002, Paris, France. pp. 209-212.
Many computer operating systems provide seamless support for multiple display screens, but there are few cross-platform tools for collaborative use of multiple computers in a shared display environment. Mighty Mouse is a novel groupware tool built on the public domain VNC protocol. It is tailored specifically for face-to-face collaboration where multiple heterogeneous computers (usually laptops) are viewed simultaneously (usually via projectors) by people working together on a variety of applications under various operating systems. Mighty Mouse uses only the remote input capability of VNC, but enhances this with various features to support flexible movement between the various platforms, "floor control" to facilitate smooth collaboration, and customization features to accommodate different user, platform, and application preferences in a relatively seamless manner. The design rationale arises from specific observations about how people collaborate in meetings, which allows certain simplifying assumptions to be made in the implementation.
© All rights reserved Booth et al. and/or ACM Press
Danesh, Arman, Inkpen, Kori, Lau, Felix, Shu, Keith and Booth, Kellogg S. (2001): Geney: Designing a Collaborative Activity for the Palm Handheld Computer. In: Beaudouin-Lafon, Michel and Jacob, Robert J. K. (eds.) Proceedings of the ACM CHI 2001 Human Factors in Computing Systems Conference March 31 - April 5, 2001, Seattle, Washington, USA. pp. 388-395.
This paper describes a project to explore issues surrounding the development of a collaborative handheld educational application for children. A user-centered, iterative design process was used to develop Geney, a collaborative problem solving application to help children explore genetic concepts using Palm handheld computers. The design methodology utilized mock-ups of representative tasks and scenarios, pre-design meetings with target users, prototype development, and feedback sessions with target users. The results of this work identify an effective way of utilizing handheld computers for collaborative learning and provide important insights into the design of handheld applications for children. This work also illustrates the necessity of user-centered design when new user groups are targeted, especially when novel user interface paradigms are employed that go beyond current windows-based interfaces.
© All rights reserved Danesh et al. and/or ACM Press
Swindells, Colin, Dill, John C. and Booth, Kellogg S. (2000): System Lag Tests for Augmented and Virtual Environments. In: Ackerman, Mark S. and Edwards, Keith (eds.) Proceedings of the 13th annual ACM symposium on User interface software and technology November 06 - 08, 2000, San Diego, California, United States. pp. 161-170.
Baecker, Ronald M., Booth, Kellogg S., Jovicic, Sasha, McGrenere, Joanna and Moore, Gale (2000): Reducing the Gap Between What Users Know and What They Need to Know. In: Proceedings of the 2000 ACM Conference on Universal Usability 2000. pp. 17-23.
Universal usability is currently impeded by system complexity and poorly crafted interfaces, which lead to confusion, frustration, and failure. One of the key challenges is the gap between what users know and what they need to know [17, p.86]. This paper describes and presents early results from three related research projects designed to identify and close this gap, and to examine how users might learn what they need to know.
© All rights reserved Baecker et al. and/or ACM Press
Summers, Valerie A., Booth, Kellogg S., Calvert, Thomas W., Graham, Evan D. and MacKenzie, Christine L. (1999): Calibration for augmented reality experimental testbeds. In: SI3D 1999 1999. pp. 155-162.
Wang, Yanqing, MacKenzie, Christine L., Summers, Valerie A. and Booth, Kellogg S. (1998): The Structure of Object Transportation and Orientation in Human-Computer Interaction. In: Karat, Clare-Marie, Lund, Arnold, Coutaz, Joëlle and Karat, John (eds.) Proceedings of the ACM CHI 98 Human Factors in Computing Systems Conference April 18-23, 1998, Los Angeles, California. pp. 312-319.
An experiment was conducted to investigate the relationship between object transportation and object orientation by the human hand in the context of human-computer interaction (HCI). This work merges two streams of research: the structure of interactive manipulation in HCI and the natural hand prehension in human motor control. It was found that object transportation and object orientation have a parallel, interdependent structure which is generally persistent over different visual feedback conditions. The notion of concurrency and interdependence of multidimensional visuomotor control structure can provide a new framework for human-computer interface evaluation and design.
© All rights reserved Wang et al. and/or ACM Press
Inkpen, Kori, McGrenere, Joanna, Booth, Kellogg S. and Klawe, Maria (1997): The Effect of Turn-Taking Protocols on Children's Learning in Collaborative Environments. In: Graphics Interface 97 May 21-23, 1997, Kelowna, British Columbia, Canada. pp. 138-145.
Boritz, James and Booth, Kellogg S. (1997): A study of interactive 3D point location in a computer simulated virtual environment. In: VRST 1997 1997. pp. 181-187.
Healey, Christopher G., Booth, Kellogg S. and Enns, James T. (1996): High-Speed Visual Estimation Using Preattentive Processing. In ACM Transactions on Computer-Human Interaction, 3 (2) pp. 107-135.
A new method is presented for performing rapid and accurate numerical estimation. The method is derived from an area of human cognitive psychology called preattentive processing. Preattentive processing refers to an initial organization of the visual field based on cognitive operations believed to be rapid, automatic, and spatially parallel. Examples of visual features that can be detected in this way include hue, intensity, orientation, size, and motion. We believe that studies from preattentive vision should be used to assist in the design of visualization tools, especially those for which high-speed target detection, boundary identification, and region detection are important. In our present study, we investigated two known preattentive features (hue and orientation) in the context of a new task (numerical estimation) in order to see whether preattentive estimation was possible. Our experiments tested displays that were designed to visualize data from salmon migration simulations. The results showed that rapid and accurate estimation was indeed possible using either hue or orientation. Furthermore, random variation in one of these features resulted in no interference when subjects estimated the percentage of the other. To test the generality of our results, we varied two important display parameters -- display duration and feature difference -- and found boundary conditions for each. Implications of our results for application to real-world data and tasks are discussed.
© All rights reserved Healey et al. and/or ACM Press
Csinger, Andrew and Booth, Kellogg S. (1994): Reasoning about Video: Knowledge-Based Transcription and Presentation. In: HICSS 1994 1994. pp. 599-608.
Ware, Colin, Arthur, Kevin and Booth, Kellogg S. (1993): Fish Tank Virtual Reality. In: Ashlund, Stacey, Mullet, Kevin, Henderson, Austin, Hollnagel, Erik and White, Ted (eds.) Proceedings of the ACM CHI 93 Human Factors in Computing Systems Conference April 24-29, 1993, Amsterdam, The Netherlands. pp. 37-42.
The defining characteristic of what we call "Fish Tank Virtual Reality" is a stereo image of a three-dimensional (3D) scene viewed on a monitor using a perspective projection coupled to the head position of the observer. We discuss some of the relative merits of this mode of viewing as compared to head-mounted stereo displays. In addition, we report an experimental investigation of the following variables: 1) whether or not the perspective view is coupled to the actual viewpoint of the observer, and 2) whether stereopsis is employed. Experiment 1 involved subjective comparison of pairs of viewing conditions; the results suggest that head coupling may be more important than stereo in yielding a strong impression of three dimensionality. Experiment 2 involved subjects tracing a path from a leaf of a 3D tree to the correct root (there were two trees intermeshed). The error rates ranged from 22% in the pictorial display to 1.3% in the head-coupled stereo display. The error rates for stereo alone and head coupling alone were 14.7% and 3.2% respectively. We conclude that head coupling is probably more important than stereo in 3D visualization, and that head coupling and stereo combined provide an important enhancement to monitor-based computer graphics.
© All rights reserved Ware et al. and/or ACM Press
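The head-coupled perspective that fish-tank VR depends on amounts to recomputing an asymmetric (off-axis) view frustum from the tracked eye position each frame. A minimal Python sketch under stated assumptions — a monitor centred at the origin in the z = 0 plane, eye position in metres — not the authors' implementation; the function and parameter names are illustrative:

```python
def off_axis_frustum(eye, screen_w, screen_h, near):
    """Asymmetric near-plane frustum bounds (the kind of values passed
    to e.g. glFrustum) for a screen centred at the origin in the z = 0
    plane. eye is the tracked (x, y, z) eye position, z > 0 in front of
    the screen. Illustrative sketch only.
    """
    ex, ey, ez = eye
    scale = near / ez  # similar triangles: project screen edges onto the near plane
    left = (-screen_w / 2 - ex) * scale
    right = (screen_w / 2 - ex) * scale
    bottom = (-screen_h / 2 - ey) * scale
    top = (screen_h / 2 - ey) * scale
    return left, right, bottom, top
```

With the eye on the screen's axis this reduces to the usual symmetric projection; as the tracked head moves off-centre the frustum skews, so the monitor behaves like a window onto a stable 3D scene.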
Arthur, Kevin, Booth, Kellogg S. and Ware, Colin (1993): Evaluating 3D Task Performance for Fish Tank Virtual Worlds. In ACM Transactions on Information Systems, 11 (3) pp. 239-265.
"Fish tank virtual reality" refers to the use of a standard graphics workstation to achieve real-time display of 3D scenes using stereopsis and dynamic head-coupled perspective. Fish tank VR has a number of advantages over head-mounted immersion VR which makes it more practical for many applications. After discussing the characteristics of fish tank VR, we describe a set of three experiments conducted to study the benefits of fish tank VR over a traditional workstation graphics display. These experiments tested user performance under two conditions: (a) whether or not stereoscopic display was used and (b) whether or not the perspective display was coupled dynamically to the positions of a user's eyes. Subjects using a comparison protocol consistently preferred head coupling without stereo over stereo without head coupling. Error rates in a tree-tracing task similar to one used by Sollenberger and Milgram showed an order of magnitude improvement for head-coupled stereo over a static (nonhead-coupled) display, and the benefits gained by head coupling were more significant than those gained from stereo alone. The final experiment examined two factors that are often associated with human performance in virtual worlds: the lag (or latency) in receiving and processing tracker data and the rate at which frames are updated. For the tree-tracing task, lag had a larger impact on performance than did frame update rate, with lag having a multiplicative effect on response time. We discuss the relevance of these results for the display of complex 3D data and highlight areas requiring further study.
© All rights reserved Arthur et al. and/or ACM Press
Jang, Stanley, Booth, Kellogg S., Forsey, David R. and Graf, Peter (1993): Investigating the effectiveness of direct manipulation of 3D B-spline curves using the shape-matching paradigm. In: Graphics Interface 93 May 19-21, 1993, Toronto, Ontario, Canada. pp. 97-106.
Healey, Christopher G., Booth, Kellogg S. and Enns, James T. (1993): Harnessing preattentive processes for multivariate data visualization. In: Graphics Interface 93 May 19-21, 1993, Toronto, Ontario, Canada. pp. 107-117.
Cowan, William B., Wein, Christopher, Wein, Marceli and Booth, Kellogg S. (1991): Hardware support for multitasking graphics. In: Graphics Interface 91 June 3-7, 1991, Calgary, Alberta, Canada. pp. 199-206.
Boritz, James, Booth, Kellogg S. and Cowan, William B. (1991): Fitts' Law studies of directional mouse movement. In: Graphics Interface 91 June 3-7, 1991, Calgary, Alberta, Canada. pp. 216-223.
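Studies like this one fit pointing data to Fitts' law, which predicts movement time from the distance to a target and its width. A minimal sketch using the common Shannon formulation, MT = a + b · log2(D/W + 1); the coefficients a and b below are illustrative placeholders, not values from the study:

```python
import math

def fitts_mt(distance, width, a=0.1, b=0.15):
    """Predicted movement time (seconds) under Fitts' law,
    Shannon formulation. a (intercept) and b (slope, s/bit) are
    placeholder coefficients; real studies fit them per condition,
    e.g. per movement direction."""
    index_of_difficulty = math.log2(distance / width + 1)  # bits
    return a + b * index_of_difficulty
```

A directional study such as this one would fit separate (a, b) pairs for each movement direction and compare the slopes.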
Sun, Frankie K., Cowan, William B. and Booth, Kellogg S. (1990): Understanding visual effects in a windowed environment. In: Graphics Interface 90 May 14-18, 1990, Halifax, Nova Scotia, Canada. pp. 100-107.
MacDonald, J. David and Booth, Kellogg S. (1990): Heuristics for ray tracing using space subdivision. In The Visual Computer, 6 (3) pp. 153-166.
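This paper is often credited with introducing the surface area heuristic (SAH) for choosing subdivision planes: the probability that a random ray traverses a child cell is taken to be proportional to the cell's surface area, so a candidate split is scored by the expected intersection work it implies. A minimal cost-function sketch; the function name and cost constants are illustrative, not from the paper:

```python
def sah_cost(sa_parent, sa_left, sa_right, n_left, n_right,
             c_traverse=1.0, c_intersect=1.5):
    """Expected cost of a candidate split under the surface area
    heuristic. The hit probability of each child is approximated by
    the ratio of its surface area to the parent's; c_traverse and
    c_intersect are placeholder cost constants."""
    p_left = sa_left / sa_parent
    p_right = sa_right / sa_parent
    return c_traverse + c_intersect * (p_left * n_left + p_right * n_right)
```

A builder evaluates this cost for each candidate plane and keeps the cheapest split, stopping when no split beats the cost of intersecting the parent's objects directly.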
Kelley, J. V., Booth, Kellogg S. and Wein, M. (1989): Design experience with a multiprocessor window system architecture. In: Graphics Interface 89 June 19-23, 1989, London, Ontario, Canada. pp. 62-69.
Neely, Shawn R., Booth, Kellogg S. and Tanner, Peter P. (1989): The fill interpreter: A unified view of brushing, filling, and compositing. In: Graphics Interface 89 June 19-23, 1989, London, Ontario, Canada. pp. 121-129.
MacDonald, J. David and Booth, Kellogg S. (1989): Heuristics for ray tracing using space subdivision. In: Graphics Interface 89 June 19-23, 1989, London, Ontario, Canada. pp. 152-163.
Bartram, Lyn, Booth, Kellogg S., Cowan, William B., Morrison, Julie B. and Tanner, Peter P. (1988): A system for conducting experiments concerning human factors in interactive graphics. In: Graphics Interface 88 June 6-10, 1988, Edmonton, Alberta, Canada. pp. 34-42.
Booth, Kellogg S., Bryden, M. P., Cowan, William B., Morgan, M. F. and Plante, B. L. (1987): On the parameters of human visual performance: An investigation of the benefits of antialiasing. In: Graphics Interface 87 (CHI+GI 87) April 5-9, 1987, Toronto, Ontario, Canada. pp. 13-19.
Higgins, Terry M. and Booth, Kellogg S. (1986): A cel-based model for paint systems. In: Graphics Interface 86 May 26-30, 1986, Vancouver, British Columbia, Canada. pp. 82-90.
Paeth, Alan W. and Booth, Kellogg S. (1986): Design and experience with a generalized raster toolkit. In: Graphics Interface 86 May 26-30, 1986, Vancouver, British Columbia, Canada. pp. 91-97.
Booth, Kellogg S., Forsey, David R. and Paeth, Alan W. (1986): Hardware assistance for Z-buffer visible surface algorithms. In: Graphics Interface 86 May 26-30, 1986, Vancouver, British Columbia, Canada. pp. 194-201.
Bartels, Richard H., Beatty, John C., Booth, Kellogg S. and Hardtke, Ines (1984): 2-D and 3-D interactive computer modelling systems. In: Graphics Interface 84 May 28 - June 1, 1984, Ottawa, Ontario, Canada. pp. 161-165.
Booth, Kellogg S. and MacKay, Stephen A. (1982): Techniques for frame buffer animation. In: Graphics Interface 82 May 17-21, 1982, Toronto, Ontario, Canada. pp. 213-220.
Bates, Roger, Beck, Jay, Laskodi, Terry, Reuss, Ed, Wells, Marc, Beatty, John C., Booth, Kellogg S. and Matthies, Larry (1982): A high-performance raster display system. In: Graphics Interface 82 May 17-21, 1982, Toronto, Ontario, Canada. pp. 355-364.
Beatty, John C., Booth, Kellogg S. and Matthies, L. H. (1981): Revisiting Watkins' algorithm. In: Seventh Canadian Man-Computer Communications Conference June 10-12, 1981, Waterloo, Ontario, Canada. pp. 359-370.
Booth, Kellogg S. (1981): Authentication of Signature Using Public Key Encryption. In Communications of the ACM, 24 (11) pp. 772-774.
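The scheme described in this paper rests on the asymmetry of public-key encryption: a hash of the document is transformed with the signer's private key, and anyone holding the public key can invert the transformation to check it. A toy RSA sketch with tiny textbook parameters, for illustration only — real use requires a vetted cryptographic library, proper padding, and full-size keys:

```python
import hashlib

# Toy RSA parameters (tiny primes -- illustration only, not secure).
p, q = 61, 53
n = p * q        # public modulus, 3233
e = 17           # public exponent
d = 413          # private exponent: (e * d) % lcm(p - 1, q - 1) == 1

def sign(message: bytes) -> int:
    """Signer: hash the message, then apply the private exponent."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, signature: int) -> bool:
    """Verifier: anyone holding (n, e) can check the signature."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h
```

Because only the holder of d could have produced a value that e maps back to the message hash, the signature authenticates both the signer and the document's contents.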