Publication statistics

Pub. period: 1987-2012
Pub. count: 54
Number of co-authors: 87



Co-authors

Number of publications with his 3 most frequent co-authors:

Patrick Baudisch: 11
Francois Guimbretiere: 10
Randy Pausch: 6

 

 

Productive colleagues

Ken Hinckley's 3 most productive colleagues, by number of publications:

Saul Greenberg: 140
Scott E. Hudson: 113
Ravin Balakrishnan: 108
 
 
 


Ken Hinckley

Ph.D.

Has also published under the name of:
"Kee Hinckley"

Personal Homepage:
research.microsoft.com/en-us/um/people/kenh/


Current place of employment:
Microsoft Research

Ken Hinckley is a research scientist at Microsoft Research. He has published widely on input devices and interaction techniques. The basic thrust of his research is to enhance the input vocabulary that one can express using common computational devices and user interfaces. His research areas include:

- Input devices & interaction techniques: touch-sensing input devices, two-handed input, pen and gesture input, tactile I/O, sensing technologies, ...
- Mobile devices & interaction
- Formal experimental studies of input techniques and human abilities
- All other topics relevant to human-computer interaction

 

Publications by Ken Hinckley (bibliography)

2012
 

Marquardt, Nicolai, Hinckley, Ken and Greenberg, Saul (2012): Cross-device interaction via micro-mobility and f-formations. In: Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 13-22.

GroupTogether is a system that explores cross-device interaction using two sociological constructs. First, F-formations concern the distance and relative body orientation among multiple users, which indicate when and how people position themselves as a group. Second, micro-mobility describes how people orient and tilt devices towards one another to promote fine-grained sharing during co-present collaboration. We sense these constructs using: (a) a pair of overhead Kinect depth cameras to sense small groups of people, (b) low-power 8GHz band radio modules to establish the identity, presence, and coarse-grained relative locations of devices, and (c) accelerometers to detect tilting of slate devices. The resulting system supports fluid, minimally disruptive techniques for co-located collaboration by leveraging the proxemics of people as well as the proxemics of devices.

© All rights reserved Marquardt et al. and/or ACM Press
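
The micro-mobility half of this sensing reduces, at the device level, to noticing that a slate has been tilted toward someone and held there. A minimal sketch of that step in Python, assuming a generic stream of raw accelerometer samples; the threshold and window are illustrative guesses, not values from the paper:

    import math

    TILT_THRESHOLD_DEG = 20.0   # hypothetical tilt threshold, not from the paper
    HOLD_SAMPLES = 15           # roughly half a second at 30 Hz

    def tilt_angle_deg(ax, ay, az):
        # Angle between the slate's face normal and gravity, from one accelerometer sample.
        g = math.sqrt(ax * ax + ay * ay + az * az) or 1.0
        return math.degrees(math.acos(max(-1.0, min(1.0, az / g))))

    def tilted_toward_partner(samples):
        # True once the slate has stayed past the threshold for HOLD_SAMPLES consecutive readings.
        run = 0
        for ax, ay, az in samples:
            run = run + 1 if tilt_angle_deg(ax, ay, az) > TILT_THRESHOLD_DEG else 0
            if run >= HOLD_SAMPLES:
                return True
        return False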

2011
 

Bragdon, Andrew, Nelson, Eugene, Li, Yang and Hinckley, Ken (2011): Experimental analysis of touch-screen gesture designs in mobile environments. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 403-412.

Direct-touch interaction on mobile phones revolves around screens that compete for visual attention with users' real-world tasks and activities. This paper investigates the impact of these situational impairments on touch-screen interaction. We probe several design factors for touch-screen gestures, under various levels of environmental demands on attention, in comparison to the status-quo approach of soft buttons. We find that in the presence of environmental distractions, gestures can offer significant performance gains and reduced attentional load, while performing as well as soft buttons when the user's attention is focused on the phone. In fact, the speed and accuracy of bezel gestures did not appear to be significantly affected by environment, and some gestures could be articulated eyes-free, with one hand. Bezel-initiated gestures offered the fastest performance, and mark-based gestures were the most accurate. Bezel-initiated marks therefore may offer a promising approach for mobile touch-screen interaction that is less demanding of the user's attention.

© All rights reserved Bragdon et al. and/or their publisher

 

Hinckley, Ken and Song, Hyunyoung (2011): Sensor synaesthesia: touch in motion, and motion in touch. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 801-810.

We explore techniques for hand-held devices that leverage the multimodal combination of touch and motion. Hybrid touch + motion gestures exhibit interaction properties that combine the strengths of multi-touch with those of motion-sensing. This affords touch-enhanced motion gestures, such as one-handed zooming by holding one's thumb on the screen while tilting a device. We also consider the reverse perspective, that of motion-enhanced touch, which uses motion sensors to probe what happens underneath the surface of touch. Touching the screen induces secondary accelerations and angular velocities in the sensors. For example, our prototype uses motion sensors to distinguish gently swiping a finger on the screen from 'drags with a hard onset' -- to enable more expressive touch interactions.

© All rights reserved Hinckley and Song and/or their publisher
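
The "motion-enhanced touch" example -- telling a gentle swipe from a drag with a hard onset -- can be illustrated by pairing each touch-down event with the accelerometer activity just before it. A rough sketch with hypothetical function names and thresholds, not the prototype's actual values:

    from collections import deque

    ONSET_WINDOW = 5      # accelerometer samples kept around touch-down (illustrative)
    HARD_ONSET_G = 0.35   # spike size, in g, separating hard-onset drags from gentle swipes

    recent_spikes = deque(maxlen=ONSET_WINDOW)

    def on_accelerometer_sample(high_pass_magnitude_g):
        # Feed continuously with the gravity-removed acceleration magnitude.
        recent_spikes.append(abs(high_pass_magnitude_g))

    def classify_touch_down():
        # Call when the touch screen reports a finger-down event.
        spike = max(recent_spikes, default=0.0)
        return "hard" if spike > HARD_ONSET_G else "soft"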

 

Song, Hyunyoung, Benko, Hrvoje, Guimbretiere, Francois, Izadi, Shahram, Cao, Xiang and Hinckley, Ken (2011): Grips and gestures on a multi-touch pen. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 1323-1332.

This paper explores the interaction possibilities enabled when the barrel of a digital pen is augmented with a multi-touch sensor. We present a novel multi-touch pen (MTPen) prototype and discuss its alternate uses beyond those of a standard stylus, such as allowing new touch gestures to be performed using the index finger or thumb and detecting how users grip the device as a mechanism for mode switching. We also discuss the hardware and software implementation challenges in realizing our prototype, and showcase how one can combine different grips (tripod, relaxed tripod, sketch, wrap) and gestures (swipe and double tap) to enable new interaction techniques with the MTPen in a prototype drawing application. One specific aim is the elimination of some of the comfort problems associated with existing auxiliary controls on digital pens. Mechanical controls such as barrel buttons and barrel scroll wheels work best in only a few specific hand grips and pen rotations. Comparatively, our gestures can be successfully and comfortably performed regardless of the rotation of the pen or how the user grips it, offering greater flexibility in use. We describe a formal evaluation comparing MTPen gestures against the use of a barrel button for mode switching. This study shows that both swipe and double tap gestures are comparable in performance to commonly employed barrel buttons without their disadvantages.

© All rights reserved Song et al. and/or their publisher

 

Sun, Minghui, Cao, Xiang, Song, Hyunyoung, Izadi, Shahram, Benko, Hrvoje, Guimbretiere, Francois, Ren, Xiangshi and Hinckley, Ken (2011): Enhancing naturalness of pen-and-tablet drawing through context sensing. In: Proceedings of the 2011 ACM International Conference on Interactive Tabletops and Surfaces 2011. pp. 83-86.

Among artists and designers, the pen-and-tablet combination is widely used for creating digital drawings, as digital pens outperform other input devices in replicating the experience of physical drawing tools. In this paper, we explore how contextual information such as the relationship between the hand, the pen, and the tablet can be leveraged in the digital drawing experience to further enhance its naturalness. By embedding sensors in the pen and the tablet to sense and interpret these contexts, we demonstrate how several physical drawing practices can be reflected and assisted in digital interaction scenarios.

© All rights reserved Sun et al. and/or ACM Press

 

Bragdon, Andrew, DeLine, Rob, Hinckley, Ken and Morris, Meredith Ringel (2011): Code space: touch + air gesture hybrid interactions for supporting developer meetings. In: Proceedings of the 2011 ACM International Conference on Interactive Tabletops and Surfaces 2011. pp. 212-221.

We present Code Space, a system that contributes touch + air gesture hybrid interactions to support co-located, small group developer meetings by democratizing access, control, and sharing of information across multiple personal devices and public displays. Our system uses a combination of a shared multi-touch screen, mobile touch devices, and Microsoft Kinect sensors. We describe cross-device interactions, which use a combination of in-air pointing for social disclosure of commands, targeting and mode setting, combined with touch for command execution and precise gestures. In a formative study, professional developers were positive about the interaction design, and most felt that pointing with hands or devices and forming hand postures are socially acceptable. Users also felt that the techniques adequately disclosed who was interacting and that existing social protocols would help to dictate most permissions, but also felt that our lightweight permission feature helped presenters manage incoming content.

© All rights reserved Bragdon et al. and/or ACM Press

2010
 

Hinckley, Ken, Yatani, Koji, Pahud, Michel, Coddington, Nicole, Rodenhouse, Jenny, Wilson, Andy, Benko, Hrvoje and Buxton, Bill (2010): Manual deskterity: an exploration of simultaneous pen + touch direct input. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 2793-2802.

Manual Deskterity is a prototype digital drafting table that supports both pen and touch input. We explore a division of labor between pen and touch that flows from natural human skill and differentiation of roles of the hands. We also explore the simultaneous use of pen and touch to support novel compound gestures.

© All rights reserved Hinckley et al. and/or their publisher

 

Hinckley, Ken, Yatani, Koji, Pahud, Michel, Coddington, Nicole, Rodenhouse, Jenny, Wilson, Andy, Benko, Hrvoje and Buxton, Bill (2010): Pen + touch = new tools. In: Proceedings of the 2010 ACM Symposium on User Interface Software and Technology 2010. pp. 27-36.

We describe techniques for direct pen+touch input. We observe people's manual behaviors with physical paper and notebooks. These serve as the foundation for a prototype Microsoft Surface application, centered on note-taking and scrapbooking of materials. Based on our explorations we advocate a division of labor between pen and touch: the pen writes, touch manipulates, and the combination of pen + touch yields new tools. This articulates how our system interprets unimodal pen, unimodal touch, and multimodal pen+touch inputs, respectively. For example, the user can hold a photo and drag off with the pen to create and place a copy; hold a photo and cross it in a freeform path with the pen to slice it in two; or hold selected photos and tap one with the pen to staple them all together. Touch thus unifies object selection with mode switching of the pen, while the muscular tension of holding touch serves as the "glue" that phrases together all the inputs into a unitary multimodal gesture. This helps the UI designer to avoid encumbrances such as physical buttons, persistent modes, or widgets that detract from the user's focus on the workspace.

© All rights reserved Hinckley et al. and/or their publisher
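
The division of labor advocated here is, at bottom, a three-way dispatch on which modalities are currently active. The sketch below only illustrates that routing; the handler names are hypothetical, not the system's API:

    def start_ink_stroke(target):                # unimodal pen: the pen writes
        return ("ink", target)

    def start_manipulation(target, touches):     # unimodal touch: touch manipulates
        return ("manipulate", target, len(touches))

    def start_compound_tool(target, touches):    # pen + touch together: new tools,
        return ("tool", target, len(touches))    # e.g. hold a photo and drag off with the pen to copy

    def dispatch(pen_down, touches, target):
        # Route one input frame according to pen-writes / touch-manipulates / pen+touch-yields-new-tools.
        if pen_down and touches:
            return start_compound_tool(target, touches)
        if pen_down:
            return start_ink_stroke(target)
        if touches:
            return start_manipulation(target, touches)
        return None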

 

Benko, Hrvoje, Izadi, Shahram, Wilson, Andrew D., Cao, Xiang, Rosenfeld, Dan and Hinckley, Ken (2010): Design and evaluation of interaction models for multi-touch mice. In: Proceedings of the 2010 Conference on Graphics Interface 2010. pp. 253-260.

Adding multi-touch sensing to the surface of a mouse has the potential to substantially increase the number of interactions available to the user. However, harnessing this increased bandwidth is challenging, since the user must perform multi-touch interactions while holding the device and using it as a regular mouse. In this paper we describe the design challenges and formalize the design space of multi-touch mice interactions. From our design space categories we synthesize four interaction models which enable the use of both multi-touch and mouse interactions on the same device. We describe the results of a controlled user experiment evaluating the performance of these models in a 2D spatial manipulation task typical of touch-based interfaces and compare them to interacting directly on a multi-touch screen and with a regular mouse. We observed that our multi-touch mouse interactions were overall slower than the chosen baselines; however, techniques providing a single focus of interaction and explicit touch activation yielded better performance and higher preferences from our participants. Our results expose the difficulties in designing multi-touch mice interactions and define the problem space for future research in making these devices effective.

© All rights reserved Benko et al. and/or their publisher

2009
 

Hinckley, Ken, Dixon, Morgan, Sarin, Raman, Guimbretiere, Francois and Balakrishnan, Ravin (2009): Codex: a dual screen tablet computer. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 1933-1942.

The Codex is a dual-screen tablet computer, about the size of a 4" x 6" day planner, with a self-supporting binding and embedded sensors. The device can be oriented in a variety of postures to support different nuances of individual work, ambient display, or collaboration with another user. In the context of a pen-operated note taking application, we demonstrate interaction techniques that support a fluid division of labor for tasks and information across the two displays while minimizing disruption to the primary experience of authoring notes.

© All rights reserved Hinckley et al. and/or ACM Press

 

Grossman, Tovi, Baudisch, Patrick and Hinckley, Ken (2009): Handle Flags: efficient and flexible selections for inking applications. In: Proceedings of the 2009 Conference on Graphics Interface 2009. pp. 167-174.

There are a number of challenges associated with content selection in pen-based interfaces. Supplementary buttons to enter a selection mode may not be available, and selections may require a careful and error prone lasso stroke. In this paper we describe the design and evaluation of Handle Flags, a new localized technique used to select and perform commands on ink strokes in pen-operated interfaces. When the user positions the pen near an ink stroke, Handle Flags are displayed for the potential selections that the ink stroke could belong to (such as proximal strokes comprising a word or drawing). Tapping the handle allows the user to access the corresponding selection, without requiring a complex lasso stroke. Our studies show that Handle Flags offer significant benefits in comparison to traditional techniques, and are a promising technique for pen-based applications.

© All rights reserved Grossman et al. and/or their publisher

2008
 

Li, Kevin A., Baudisch, Patrick and Hinckley, Ken (2008): Blindsight: eyes-free access to mobile phones. In: Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems April 5-10, 2008. pp. 1389-1398.

Many mobile phones integrate services such as personal calendars. Given the social nature of the stored data, however, users often need to access such information as part of a phone conversation. In typical non-headset use, this requires users to interrupt their conversations to look at the screen. We investigate a counter-intuitive solution: to avoid the need for interruption we replace the visual interface with one based on auditory feedback. Surprisingly, this can be done without interfering with the phone conversation. We present blindSight, a prototype application that replaces the traditionally visual in-call menu of a mobile phone. Users interact using the phone keypad, without looking at the screen. BlindSight responds with auditory feedback. This feedback is heard only by the user, not by the person on the other end of the line. We present the results of two user studies of our prototype. The first study verifies that useful keypress accuracy can be obtained for the phone-at-ear position. The second study compares the blindSight system against a visual baseline condition and finds a preference for blindSight.

© All rights reserved Li et al. and/or ACM Press

 

Cao, Xiang, Wilson, Andrew D., Balakrishnan, Ravin, Hinckley, Ken and Hudson, Scott E. (2008): ShapeTouch: Leveraging contact shape on interactive surfaces. In: Third IEEE International Workshop on Tabletops and Interactive Surfaces Tabletop 2008 October 1-3, 2008, Amsterdam, The Netherlands. pp. 129-136.

 

Baudisch, Patrick, Zotov, Alexander, Cutrell, Edward and Hinckley, Ken (2008): Starburst: a target expansion algorithm for non-uniform target distributions. In: Levialdi, Stefano (ed.) AVI 2008 - Proceedings of the working conference on Advanced Visual Interfaces May 28-30, 2008, Napoli, Italy. pp. 129-137.

 

Bi, Xiaojun, Moscovich, Tomer, Ramos, Gonzalo, Balakrishnan, Ravin and Hinckley, Ken (2008): An exploration of pen rolling for pen-based interaction. In: Cousins, Steve B. and Beaudouin-Lafon, Michel (eds.) Proceedings of the 21st Annual ACM Symposium on User Interface Software and Technology October 19-22, 2008, Monterey, CA, USA. pp. 191-200.

2007
 

Hinckley, Ken, Zhao, Shengdong, Sarin, Raman, Baudisch, Patrick, Cutrell, Edward, Shilman, Michael and Tan, Desney S. (2007): InkSeine: In Situ search for active note taking. In: Proceedings of ACM CHI 2007 Conference on Human Factors in Computing Systems 2007. pp. 251-260.

Using a notebook to sketch designs, reflect on a topic, or capture and extend creative ideas are examples of active note taking tasks. Optimal experience for such tasks demands concentration without interruption. Yet active note taking may also require reference documents or emails from team members. InkSeine is a Tablet PC application that supports active note taking by coupling a pen-and-ink interface with an in situ search facility that flows directly from a user's ink notes (Fig. 1). InkSeine integrates four key concepts: it leverages preexisting ink to initiate a search; it provides tight coupling of search queries with application content; it persists search queries as first class objects that can be commingled with ink notes; and it enables a quick and flexible workflow where the user may freely interleave inking, searching, and gathering content. InkSeine offers these capabilities in an interface that is tailored to the unique demands of pen input, and that maintains the primacy of inking above all other tasks.

© All rights reserved Hinckley et al. and/or ACM Press

 

Guimbretiere, Francois, Dixon, Morgan and Hinckley, Ken (2007): ExperiScope: an analysis tool for interaction data. In: Proceedings of ACM CHI 2007 Conference on Human Factors in Computing Systems 2007. pp. 1333-1342.

We present ExperiScope, an analytical tool to help designers and experimenters explore the results of quantitative evaluations of interaction techniques. ExperiScope combines a new visualization incorporating aspects of the KLM and the three-state model with an interface helping users to rapidly cluster similar patterns of interactions. The tool makes it easy to identify and compare key patterns of use encountered during data collection. This promotes a deeper understanding of the results of a given evaluation. We illustrate the advantages of this tool by revisiting the data collected for an experiment conducted by Hinckley et al. [19] which compared different mode switching techniques. Our results show that our tool complements the previously reported results by offering insights about error behavior and the impact of mode switching on user performance. By providing a more fine-grained analysis of the data gathered during empirical evaluations, we hope that our tool will improve researchers' understanding of existing and newly developed interaction techniques.

© All rights reserved Guimbretiere et al. and/or ACM Press

 

Liao, Chunyuan, Guimbretiere, Francois, Hinckley, Ken and Hollan, James D. (2007): Papiercraft: A gesture-based command system for interactive paper. In ACM Transactions on Computer-Human Interaction, 14 (4) p. 18.

Paper persists as an integral component of active reading and other knowledge-worker tasks because it provides ease of use unmatched by digital alternatives. Paper documents are light to carry, easy to annotate, rapid to navigate, flexible to manipulate, and robust to use in varied environments. Interactions with paper documents create rich webs of annotation, cross reference, and spatial organization. Unfortunately, the resulting webs are confined to the physical world of paper and, as they accumulate, become increasingly difficult to store, search, and access. XLibris [Schilit et al. 1998] and similar systems address these difficulties by simulating paper with tablet PCs. While this approach is promising, it suffers not only from limitations of current tablet computers (e.g., limited screen space) but also from loss of invaluable paper affordances. In this article, we describe PapierCraft, a gesture-based command system that allows users to manipulate digital documents using paper printouts as proxies. Using an Anoto [Anoto 2002] digital pen, users can draw command gestures on paper to tag a paragraph, e-mail a selected area, copy selections to a notepad, or create links to related documents. Upon pen synchronization, PapierCraft executes the commands and presents the results in a digital document viewer. Users can then search the tagged information and navigate the web of annotated digital documents resulting from interactions with the paper proxies. PapierCraft also supports real-time interactions across mixed media, for example, letting users copy information from paper to a Tablet PC screen. This article presents the design and implementation of the PapierCraft system and describes user feedback from initial use.

© All rights reserved Liao et al. and/or ACM Press

2006
 

Hinckley, Ken, Guimbretiere, Francois, Baudisch, Patrick, Sarin, Raman, Agrawala, Maneesh and Cutrell, Edward (2006): The springboard: multiple modes in one spring-loaded control. In: Proceedings of ACM CHI 2006 Conference on Human Factors in Computing Systems 2006. pp. 181-190.

Modes allow a few inputs to invoke many operations, yet if a user misclassifies or forgets the state of a system, modes can result in errors. Spring-loaded modes (quasimodes) maintain a mode while the user holds a control such as a button or key. The Springboard is an interaction technique for tablet computers that extends quasimodes to encompass multiple tool modes in a single spring-loaded control. The Springboard allows the user to continue holding down a nonpreferred-hand command button after selecting a tool from a menu as a way to repeatedly apply the same tool. We find the Springboard improves performance for both a local marking menu and for a non-local marking menu ("lagoon") at the lower left corner of the screen. Despite the round-trip costs incurred to move the pen to a tool lagoon, a keystroke-level analysis of the true cost of each technique reveals the local marking menu is not significantly faster.

© All rights reserved Hinckley et al. and/or ACM Press
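
The spring-loaded behavior can be summarized as a tiny state machine: the tool chosen from the menu stays in force only while the nonpreferred-hand button remains held. A sketch of that idea, with illustrative event-handler names rather than the system's actual ones:

    class Springboard:
        def __init__(self):
            self.button_down = False
            self.tool = None

        def on_command_button(self, down):
            self.button_down = down
            if not down:
                self.tool = None          # releasing the button ends the quasimode

        def on_menu_selection(self, tool):
            if self.button_down:
                self.tool = tool          # keep holding to reapply the same tool

        def current_mode(self):
            # Input reverts to inking whenever no spring-loaded tool is in force.
            return self.tool if (self.button_down and self.tool) else "ink"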

 

Grossman, Tovi, Hinckley, Ken, Baudisch, Patrick, Agrawala, Maneesh and Balakrishnan, Ravin (2006): Hover widgets: using the tracking state to extend the capabilities of pen-operated devices. In: Proceedings of ACM CHI 2006 Conference on Human Factors in Computing Systems 2006. pp. 861-870.

We present Hover Widgets, a new technique for increasing the capabilities of pen-based interfaces. Hover Widgets are implemented by using the pen movements above the display surface, in the tracking state. Short gestures while hovering, followed by a pen down, access the Hover Widgets, which can be used to activate localized interface widgets. By using the tracking state movements, Hover Widgets create a new command layer which is clearly distinct from the input layer of a pen interface. In a formal experiment Hover Widgets were found to be faster than a more traditional command activation technique, and also reduced errors due to divided attention.

© All rights reserved Grossman et al. and/or ACM Press

 

Zhao, Shengdong, Agrawala, Maneesh and Hinckley, Ken (2006): Zone and polygon menus: using relative position to increase the breadth of multi-stroke marking menus. In: Proceedings of ACM CHI 2006 Conference on Human Factors in Computing Systems 2006. pp. 1077-1086.

We present Zone and Polygon menus, two new variants of multi-stroke marking menus that consider both the relative position and orientation of strokes. Our menus are designed to increase menu breadth over the 8 item limit of status quo orientation-based marking menus. An experiment shows that Zone and Polygon menus can successfully increase breadth by a factor of 2 or more over orientation-based marking menus, while maintaining high selection speed and accuracy. We also discuss hybrid techniques that may further increase menu breadth and performance. Our techniques offer UI designers new options for balancing menu breadth and depth against selection speed and accuracy.

© All rights reserved Zhao et al. and/or ACM Press
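
The core idea -- letting the relative position of a stroke multiply the usual 8-direction vocabulary -- fits in a few lines of selection logic. The sketch below assumes rectangular zones and is purely illustrative of zone-style selection, not the paper's exact design:

    import math

    DIRECTIONS = 8   # breadth of a single orientation-based marking-menu stroke

    def stroke_direction(p0, p1):
        # Quantize a stroke into one of 8 compass directions (0 = east, counter-clockwise).
        angle = math.atan2(p1[1] - p0[1], p1[0] - p0[0])
        return int(round(angle / (2 * math.pi / DIRECTIONS))) % DIRECTIONS

    def zone_menu_item(zones, p0, p1):
        # The zone the stroke starts in picks a group of 8 items; the stroke's
        # direction picks the item within that group. zones: list of (x, y, w, h).
        for index, (x, y, w, h) in enumerate(zones):
            if x <= p0[0] < x + w and y <= p0[1] < y + h:
                return index * DIRECTIONS + stroke_direction(p0, p1)
        return None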

 

Hinckley, Ken, Guimbretiere, Francois, Agrawala, Maneesh, Apitz, Georg and Chen, Nicholas (2006): Phrasing techniques for multi-stroke selection gestures. In: Proceedings of the 2006 Conference on Graphics Interface 2006. pp. 147-154.

Pen gesture interfaces have difficulty supporting arbitrary multiple-stroke selections because lifting the pen introduces ambiguity as to whether the next stroke should add to the existing selection, or begin a new one. We explore and evaluate techniques that use a non-preferred-hand button or touchpad to phrase together one or more independent pen strokes into a unitary multi-stroke gesture. We then illustrate how such phrasing techniques can support multiple-stroke selection gestures with tapping, crossing, lassoing, disjoint selection, circles of exclusion, selection decorations, and implicit grouping operations. These capabilities extend the expressiveness of pen gesture interfaces and suggest new directions for multiple-stroke pen input techniques.

© All rights reserved Hinckley et al. and/or Canadian Information Processing Society

 

Baudisch, Patrick, Tan, Desney S., Collomb, Maxime, Robbins, Dan, Hinckley, Ken, Agrawala, Maneesh, Zhao, Shengdong and Ramos, Gonzalo (2006): Phosphor: explaining transitions in the user interface using afterglow effects. In: Proceedings of the ACM Symposium on User Interface Software and Technology 2006. pp. 169-178.

Sometimes users fail to notice a change that just took place on their display. For example, the user may have accidentally deleted an icon or a remote collaborator may have changed settings in a control panel. Animated transitions can help, but they force users to wait for the animation to complete. This can be cumbersome, especially in situations where users did not need an explanation. We propose a different approach. Phosphor objects show the outcome of their transition instantly; at the same time they explain their change in retrospect. Manipulating a phosphor slider, for example, leaves an afterglow that illustrates how the knob moved. The parallelism of instant outcome and explanation supports both types of users. Users who already understood the transition can continue interacting without delay, while those who are inexperienced or may have been distracted can take time to view the effects at their own pace. We present a framework of transition designs for widgets, icons, and objects in drawing programs. We evaluate phosphor objects in two user studies and report significant performance benefits for phosphor objects.

© All rights reserved Baudisch et al. and/or ACM Press

 

Ramos, Gonzalo, Robertson, George G., Czerwinski, Mary, Tan, Desney S., Baudisch, Patrick, Hinckley, Ken and Agrawala, Maneesh (2006): Tumble! Splat! helping users access and manipulate occluded content in 2D drawings. In: Celentano, Augusto (ed.) AVI 2006 - Proceedings of the working conference on Advanced visual interfaces May 23-26, 2006, Venezia, Italy. pp. 428-435.

2005
 

Liao, Chunyuan, Guimbretiere, Francois and Hinckley, Ken (2005): PapierCraft: a command system for interactive paper. In: Proceedings of the 2005 ACM Symposium on User Interface Software and Technology 2005. pp. 241-244.

Knowledge workers use paper extensively for document reviewing and note-taking due to its versatility and simplicity of use. As users annotate printed documents and gather notes, they create a rich web of annotations and cross references. Unfortunately, as paper is a static medium, this web often gets trapped in the physical world. While several digital solutions such as XLibris [15] and Digital Desk [18] have been proposed, they suffer from a small display size or onerous hardware requirements. To address these limitations, we propose PapierCraft, a gesture-based interface that allows users to manipulate digital documents directly using their printouts as proxies. Using a digital pen, users can annotate a printout or draw command gestures to indicate operations such as copying a document area, pasting an area previously copied, or creating a link. Upon pen synchronization, our infrastructure executes these commands and presents the result in a customized viewer. In this paper we describe the design and implementation of the PapierCraft command system, and report on early user feedback.

© All rights reserved Liao et al. and/or ACM Press

 

Baudisch, Patrick, Cutrell, Edward, Hinckley, Ken and Eversole, Adam (2005): Snap-and-go: helping users align objects without the modality of traditional snapping. In: Proceedings of ACM CHI 2005 Conference on Human Factors in Computing Systems 2005. pp. 301-310.

Snapping is a widely used technique that helps users position graphical objects precisely, e.g., to align them with a grid or other graphical objects. Unfortunately, whenever users want to position a dragged object close to such an aligned location, they first need to deactivate snapping. We propose snap-and-go, a snapping technique that overcomes this limitation. By merely stopping dragged objects at aligned positions, rather than "warping" them there, snap-and-go helps users align objects, yet still allows placing dragged objects anywhere else. While this approach of inserting additional motor space renders snap-and-go slightly slower than traditional snapping, snap-and-go simplifies the user interface by eliminating the need for a deactivation option and thereby allows introducing snapping to application scenarios where traditional snapping is inapplicable. In our user studies, participants were able to align objects up to 138% (1D) and 231% (2D) faster with snap-and-go than without and snap-and-go proved robust against the presence of distracting snap targets.

© All rights reserved Baudisch et al. and/or ACM Press
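
The key mechanism -- inserting extra motor space at aligned positions instead of warping the dragged object -- can be sketched as a single mapping from mouse (motor) distance to screen position. This is a one-dimensional, rightward-drag illustration with made-up constants, not the paper's implementation:

    SNAP_TARGETS = [100, 250, 400]   # aligned x positions (illustrative)
    EXTRA_MOTOR_PX = 24              # extra motor space inserted at each target

    def motor_to_screen(motor_x, targets=SNAP_TARGETS, extra=EXTRA_MOTOR_PX):
        # Each snap target absorbs up to `extra` pixels of mouse motion, so the
        # dragged object pauses on the target yet can still be placed anywhere.
        consumed = 0.0
        for t in sorted(targets):
            if motor_x - consumed <= t:      # this target not reached yet
                break
            consumed += min(extra, motor_x - consumed - t)
        return motor_x - consumed

    # motor_to_screen(110) == 100  (object pauses on the target)
    # motor_to_screen(130) == 106  (moves on once the extra motor space is used up)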

 

Hinckley, Ken, Baudisch, Patrick, Ramos, Gonzalo and Guimbretiere, Francois (2005): Design and analysis of delimiters for selection-action pen gesture phrases in scriboli. In: Proceedings of ACM CHI 2005 Conference on Human Factors in Computing Systems 2005. pp. 451-460.

We present a quantitative analysis of delimiters for pen gestures. A delimiter is "something different" in the input stream that a computer can use to determine the structure of input phrases. We study four techniques for delimiting a selection-action gesture phrase consisting of lasso selection plus marking-menu-based command activation. Pigtail is a new technique that uses a small loop to delimit lasso selection from marking (Fig. 1). Handle adds a box to the end of the lasso, from which the user makes a second stroke for marking. Timeout uses dwelling with the pen to delimit the lasso from the mark. Button uses a button press to signal when to delimit the gesture. We describe the role of delimiters in our Scriboli pen interaction testbed, and show how Pigtail supports scope selection, command activation, and direct manipulation all in a single fluid pen gesture.

© All rights reserved Hinckley et al. and/or ACM Press
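
Recognizing the pigtail delimiter amounts to noticing that the tail of the pen stroke crosses itself, forming a small loop. A minimal self-intersection check in Python -- an illustration of the idea, not the Scriboli recognizer:

    def segments_intersect(p1, p2, p3, p4):
        # Proper intersection of segments p1-p2 and p3-p4 via orientation tests.
        def orient(a, b, c):
            return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
        return (orient(p3, p4, p1) * orient(p3, p4, p2) < 0 and
                orient(p1, p2, p3) * orient(p1, p2, p4) < 0)

    def ends_with_pigtail(points, window=30):
        # True when the last `window` samples of a stroke cross themselves,
        # i.e. the lasso ends in a small loop. The window size is a guess.
        tail = points[-window:]
        for i in range(len(tail) - 1):
            for j in range(i + 2, len(tail) - 1):
                if segments_intersect(tail[i], tail[i + 1], tail[j], tail[j + 1]):
                    return True
        return False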

 

Li, Yang, Hinckley, Ken, Guan, Zhiwei and Landay, James A. (2005): Experimental analysis of mode switching techniques in pen-based user interfaces. In: Proceedings of ACM CHI 2005 Conference on Human Factors in Computing Systems 2005. pp. 461-470.

Inking and gesturing are two central tasks in pen-based user interfaces. Switching between modes for entry of uninterpreted ink and entry of gestures is required by many pen-based user interfaces. Without an appropriate mode switching technique, pen-based interactions in such situations may be inefficient and cumbersome. In this paper, we investigate five techniques for switching between ink and gesture modes in pen interfaces, including a pen-pressure based mode switching technique that allows implicit mode transition. A quantitative experimental study was conducted to evaluate the performance of these techniques. The results suggest that pressing a button with the non-preferred hand offers the fastest performance, while the technique of holding the pen still is significantly slower and more prone to error than the other techniques. Pressure, while promising, did not perform as well as the non-preferred hand button with our current implementation.

© All rights reserved Li et al. and/or ACM Press

 

Hinckley, Ken, Pierce, Jeffrey S., Horvitz, Eric and Sinclair, Mike (2005): Foreground and background interaction with sensor-enhanced mobile devices. In ACM Trans. Comput.-Hum. Interact., 12 (1) pp. 31-52.

 Cited in the following chapter:

Mobile Computing: [/encyclopedia/mobile_computing.html]


 
2004
 

Krumm, John and Hinckley, Ken (2004): The NearMe Wireless Proximity Server. In: Davies, Nigel, Mynatt, Elizabeth D. and Siio, Itiro (eds.) UbiComp 2004 Ubiquitous Computing 6th International Conference September 7-10, 2004, Nottingham, UK. pp. 283-300.

 

Hinckley, Ken, Ramos, Gonzalo, Guimbretiere, Francois, Baudisch, Patrick and Smith, Marc (2004): Stitching: pen gestures that span multiple displays. In: Costabile, Maria Francesca (ed.) AVI 2004 - Proceedings of the working conference on Advanced visual interfaces May 25-28, 2004, Gallipoli, Italy. pp. 23-31.

2003
 

Hinckley, Ken (2003): Synchronous gestures for multiple persons and computers. In: Proceedings of the 16th annual ACM Symposium on User Interface Software and Technology November 2-5, 2003, Vancouver, Canada. pp. 149-158.

This research explores distributed sensing techniques for mobile devices using synchronous gestures. These are patterns of activity, contributed by multiple users (or one user with multiple devices), which take on a new meaning when they occur together in time, or in a specific sequence in time. To explore this new area of inquiry, this work uses tablet computers augmented with touch sensors and two-axis linear accelerometers (tilt sensors). The devices are connected via an 802.11 wireless network and synchronize their time-stamped sensor data. This paper describes a few practical examples of interaction techniques using synchronous gestures such as dynamically tiling together displays by physically bumping them together, discusses implementation issues, and speculates on further possibilities for synchronous gestures.

© All rights reserved Hinckley and/or ACM Press
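
The display-tiling example reduces to correlating time-stamped accelerometer spikes from two devices whose clocks have been synchronized over the wireless network. A small sketch of that correlation step, with illustrative thresholds:

    BUMP_SPIKE_G = 1.5     # acceleration spike treated as a possible bump (illustrative)
    SYNC_WINDOW_S = 0.1    # spikes this close in synchronized time count as simultaneous

    def find_synchronous_bumps(spikes_a, spikes_b):
        # spikes_a / spikes_b: lists of (timestamp_seconds, magnitude_g), one per device.
        # Returns timestamps where both devices felt a hard spike at nearly the same
        # moment -- the signature of physically bumping the two tablets together.
        big_a = [t for t, g in spikes_a if g >= BUMP_SPIKE_G]
        big_b = [t for t, g in spikes_b if g >= BUMP_SPIKE_G]
        return [t_a for t_a in big_a
                if any(abs(t_a - t_b) <= SYNC_WINDOW_S for t_b in big_b)]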

 

McLoone, Hugh, Hinckley, Ken and Cutrell, Edward (2003): Bimanual Interaction on the Microsoft Office Keyboard. In: Proceedings of IFIP INTERACT03: Human-Computer Interaction 2003, Zurich, Switzerland. p. 49.

 

Hinckley, Ken (2003): Distributed and local sensing techniques for face-to-face collaboration. In: Oviatt, Sharon L., Darrell, Trevor, Maybury, Mark T. and Wahlster, Wolfgang (eds.) Proceedings of the 5th International Conference on Multimodal Interfaces - ICMI 2003 November 5-7, 2003, Vancouver, British Columbia, Canada. pp. 81-84.

2002
 

Hinckley, Ken, Cutrell, Edward, Bathiche, Steve and Muss, Tim (2002): Quantitative analysis of scrolling techniques. In: Terveen, Loren (ed.) Proceedings of the ACM CHI 2002 Conference on Human Factors in Computing Systems Conference April 20-25, 2002, Minneapolis, Minnesota. pp. 65-72.

2001
 

Hinckley, Ken and Horvitz, Eric (2001): Toward more sensitive mobile phones. In: Marks, Joe and Mynatt, Elizabeth D. (eds.) Proceedings of the 14th annual ACM symposium on User interface software and technology November 11 - 14, 2001, Orlando, Florida. pp. 191-192.

Although cell phones are extremely useful, they can be annoying and distracting to owners and others nearby. We describe sensing techniques intended to help make mobile phones more polite and less distracting. For example, our phone's ringing quiets as soon as the user responds to an incoming call, and the ring mutes if the user glances at the caller ID and decides not to answer. We also eliminate the need to press a TALK button to answer an incoming call by recognizing if the user picks up the phone and listens to it.

© All rights reserved Hinckley and Horvitz and/or ACM Press

 Cited in the following chapter:

Mobile Computing: [/encyclopedia/mobile_computing.html]


 
2000
 

Balakrishnan, Ravin and Hinckley, Ken (2000): Symmetric Bimanual Interaction. In: Turner, Thea, Szwillus, Gerd, Czerwinski, Mary, Peterno, Fabio and Pemberton, Steven (eds.) Proceedings of the ACM CHI 2000 Human Factors in Computing Systems Conference April 1-6, 2000, The Hague, The Netherlands. pp. 33-40.

We present experimental work that explores the factors governing symmetric bimanual interaction in a two-handed task that requires the user to track a pair of targets, one target with each hand. A symmetric bimanual task is a two-handed task in which each hand is assigned an identical role. In this context, we explore three main experimental factors. We vary the distance between the pair of targets to track: as the targets become further apart, visual diversion increases, forcing the user to divide attention between the two targets. We also vary the demands of the task by using both a slow and a fast tracking speed. Finally, we explore visual integration of sub-tasks: in one condition, the two targets to track are connected by a line segment which visually links the targets, while in the other condition there is no connecting line. Our results indicate that all three experimental factors affect the degree of parallelism, which we quantify using a new metric of bimanual parallelism. However, differences in tracking error between the two hands are affected only by the visual integration factor.

© All rights reserved Balakrishnan and Hinckley and/or ACM Press

 

Robertson, George G., Dantzich, Maarten van, Robbins, Daniel, Czerwinski, Mary, Hinckley, Ken, Risden, Kirsten, Thiel, David and Gorokhovsky, Vadim (2000): The Task Gallery: A 3D Window Manager. In: Turner, Thea, Szwillus, Gerd, Czerwinski, Mary, Peterno, Fabio and Pemberton, Steven (eds.) Proceedings of the ACM CHI 2000 Human Factors in Computing Systems Conference April 1-6, 2000, The Hague, The Netherlands. pp. 494-501.

The Task Gallery is a window manager that uses interactive 3D graphics to provide direct support for task management and document comparison, lacking from many systems implementing the desktop metaphor. User tasks appear as artwork hung on the walls of a virtual art gallery, with the selected task on a stage. Multiple documents can be selected and displayed side-by-side using 3D space to provide uniform and intuitive scaling. The Task Gallery hosts any Windows application, using a novel redirection mechanism that routes input and output between the 3D environment and unmodified 2D Windows applications. User studies suggest that the Task Gallery helps with task management, is enjoyable to use, and that the 3D metaphor evokes spatial memory and cognition.

© All rights reserved Robertson et al. and/or ACM Press

 

Hinckley, Ken, Pierce, Jeff, Sinclair, Mike and Horvitz, Eric (2000): Sensing Techniques for Mobile Interaction. In: Ackerman, Mark S. and Edwards, Keith (eds.) Proceedings of the 13th annual ACM symposium on User interface software and technology November 06 - 08, 2000, San Diego, California, United States. pp. 91-100.

 Cited in the following chapter:

Mobile Computing: [/encyclopedia/mobile_computing.html]


 
 

Igarashi, Takeo and Hinckley, Ken (2000): Speed-Dependent Automatic Zooming for Browsing Large Documents. In: Ackerman, Mark S. and Edwards, Keith (eds.) Proceedings of the 13th annual ACM symposium on User interface software and technology November 06 - 08, 2000, San Diego, California, United States. pp. 139-148.

1999
 

Hinckley, Ken and Sinclair, Mike (1999): Touch-Sensing Input Devices. In: Altom, Mark W. and Williams, Marian G. (eds.) Proceedings of the ACM CHI 99 Human Factors in Computing Systems Conference May 15-20, 1999, Pittsburgh, Pennsylvania. pp. 223-230.

We can touch things, and our senses tell us when our hands are touching something. But most computer input devices cannot detect when the user touches or releases the device or some portion of the device. Thus, adding touch sensors to input devices offers many possibilities for novel interaction techniques. We demonstrate the TouchTrackball and the Scrolling TouchMouse, which use unobtrusive capacitance sensors to detect contact from the user's hand without requiring pressure or mechanical actuation of a switch. We further demonstrate how the capabilities of these devices can be matched to an implicit interaction technique, the On-Demand Interface, which uses the passive information captured by touch sensors to fade in or fade out portions of a display depending on what the user is doing; a second technique uses explicit, intentional interaction with touch sensors for enhanced scrolling. We present our new devices in the context of a simple taxonomy of tactile input technologies. Finally, we discuss the properties of touch-sensing as an input channel in general.

© All rights reserved Hinckley and Sinclair and/or ACM Press
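
The On-Demand Interface is driven by nothing more than the binary touch state reported by the device. A toy sketch of the fade logic; the class, constants, and timing are hypothetical:

    import time

    FADE_SECONDS = 0.3   # illustrative fade duration

    class OnDemandPanel:
        # A display region that fades in while the touch-sensing device reports
        # contact and fades back out after the hand is released.
        def __init__(self):
            self.opacity = 0.0
            self.touched = False
            self._last = time.monotonic()

        def set_touched(self, touched):
            self.touched = touched

        def tick(self):
            now = time.monotonic()
            step = (now - self._last) / FADE_SECONDS
            self._last = now
            if self.touched:
                self.opacity = min(1.0, self.opacity + step)
            else:
                self.opacity = max(0.0, self.opacity - step)
            return self.opacity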

 

Hinckley, Ken, Sinclair, Mike, Hanson, Erik, Szeliski, Richard and Conway, Matthew (1999): The VideoMouse: A Camera-Based Multi-Degree-of-Freedom Input Device. In: Zanden, Brad Vander and Marks, Joe (eds.) Proceedings of the 12th annual ACM symposium on User interface software and technology November 07 - 10, 1999, Asheville, North Carolina, United States. pp. 103-112.

The VideoMouse is a mouse that uses a camera as its input sensor. A real-time vision algorithm determines the six degree-of-freedom mouse posture, consisting of 2D motion, tilt in the forward/back and left/right axes, rotation of the mouse about its vertical axis, and some limited height sensing. Thus, a familiar 2D device can be extended for three-dimensional manipulation, while remaining suitable for standard 2D GUI tasks. We describe techniques for mouse functionality, 3D manipulation, navigating large 2D spaces, and using the camera for lightweight scanning tasks.

© All rights reserved Hinckley et al. and/or ACM Press

 

Balakrishnan, Ravin and Hinckley, Ken (1999): The Role of Kinesthetic Reference Frames in Two-Handed Input Performance. In: Zanden, Brad Vander and Marks, Joe (eds.) Proceedings of the 12th annual ACM symposium on User interface software and technology November 07 - 10, 1999, Asheville, North Carolina, United States. pp. 171-178.

We present experimental work which explores how the match (or mismatch) between the input space of the hands and the output space of a graphical display influences two-handed input performance. During interaction with computers, a direct correspondence between the input and output spaces is often lacking. Not only are the hands disjoint from the display space, but the reference frames of the hands may in fact be disjoint from one another if two separate input devices (e.g. two mice) are used for two-handed input. In general, we refer to the workspace and origin within which the hands operate as kinesthetic reference frames. Our goal is to better understand how an interface designer's choice of kinesthetic reference frames influences a user's ability to coordinate two-handed movements, and to explore how the answer to this question may depend on the availability of visual feedback. Understanding this issue has implications for the design of two-handed interaction techniques and input devices, as well as for the reference principle of Guiard's Kinematic Chain model of human bimanual action. Our results suggest that the Guiard reference principle is robust with respect to variances in the kinesthetic reference frames as long as appropriate visual feedback is present.

© All rights reserved Balakrishnan and Hinckley and/or ACM Press

1998
 

Hinckley, Ken, Czerwinski, Mary and Sinclair, Mike (1998): Interaction and Modeling Techniques for Desktop Two-Handed Input. In: Mynatt, Elizabeth D. and Jacob, Robert J. K. (eds.) Proceedings of the 11th annual ACM symposium on User interface software and technology November 01 - 04, 1998, San Francisco, California, United States. pp. 49-58.

We describe input devices and two-handed interaction techniques to support map navigation tasks. We discuss several design variations and user testing of two-handed navigation techniques, including puck and stylus input on a Wacom tablet, as well as a novel design incorporating a touchpad (for the nonpreferred hand) and a mouse (for the preferred hand). To support the latter technique, we introduce a new input device, the TouchMouse, which is a standard mouse augmented with a pair of one-bit touch sensors, one for the palm and one for the index finger. Finally, we propose several enhancements to Buxton's three-state model of graphical input and extend this model to encompass two-handed input transactions as well.

© All rights reserved Hinckley et al. and/or ACM Press

 

Hinckley, Ken, Pausch, Randy, Proffitt, Dennis and Kassell, Neal F. (1998): Two-Handed Virtual Manipulation. In ACM Transactions on Computer-Human Interaction, 5 (3) pp. 260-302.

We discuss a two-handed user interface designed to support three-dimensional neurosurgical visualization. By itself, this system is a "point design," an example of an advanced user interface technique. In this work, we argue that in order to understand why interaction techniques do or do not work, and to suggest possibilities for new techniques, it is important to move beyond point design and to introduce careful scientific measurement of human behavioral principles. In particular, we argue that the common-sense viewpoint that "two hands save time by working in parallel" may not always be an effective way to think about two-handed interface design because the hands do not necessarily work in parallel (there is a structure to two-handed manipulation) and because two hands do more than just save time over one hand (two hands provide the user with more information and can structure how the user thinks about a task). To support these claims, we present an interface design developed in collaboration with neurosurgeons which has undergone extensive informal usability testing, as well as a pair of formal experimental studies which investigate behavioral aspects of two-handed virtual object manipulation. Our hope is that this discussion will help others to apply the lessons learned in our neurosurgery application to future two-handed user interface designs.

© All rights reserved Hinckley et al. and/or ACM Press

1997
 

Hinckley, Ken, Pausch, Randy, Proffitt, Dennis, Patten, James and Kassell, Neal F. (1997): Cooperative Bimanual Action. In: Pemberton, Steven (ed.) Proceedings of the ACM CHI 97 Human Factors in Computing Systems Conference March 22-27, 1997, Atlanta, Georgia. pp. 27-34.

We present an experiment on cooperative bimanual action. Right-handed subjects manipulated a pair of physical objects, a tool and a target object, so that the tool would touch a target on the object (fig. 1). For this task, there is a marked specialization of the hands. Performance is best when the left hand orients the target object and the right hand manipulates the tool, but is significantly reduced when these roles are reversed. This suggests that the right hand operates relative to the frame-of-reference of the left hand. Furthermore, when physical constraints guide the tool placement, this fundamentally changes the type of motor control required. The task is tremendously simplified for both hands, and reversing roles of the hands is no longer an important factor. Thus, specialization of the roles of the hands is significant only for skilled manipulation.

© All rights reserved Hinckley et al. and/or ACM Press

 

Hinckley, Ken, Tullio, Joe, Pausch, Randy, Proffitt, Dennis and Kassell, Neal F. (1997): Usability Analysis of 3D Rotation Techniques. In: Robertson, George G. and Schmandt, Chris (eds.) Proceedings of the 10th annual ACM symposium on User interface software and technology October 14 - 17, 1997, Banff, Alberta, Canada. pp. 1-10.

We report results from a formal user study of interactive 3D rotation using the mouse-driven Virtual Sphere and Arcball techniques, as well as multidimensional input techniques based on magnetic orientation sensors. Multidimensional input is often assumed to allow users to work quickly, but at the cost of precision, due to the instability of the hand moving in the open air. We show that, at least for the orientation matching task used in this experiment, users can take advantage of the integrated degrees of freedom provided by multidimensional input without necessarily sacrificing precision: using multidimensional input, users completed the experimental task up to 36% faster without any statistically detectable loss of accuracy. We also report detailed observations of common usability problems when first encountering the techniques. Our observations suggest some design issues for 3D input devices. For example, the physical form-factors of the 3D input device significantly influenced user acceptance of otherwise identical input sensors. The device should afford some tactile cues, so the user can feel its orientation without looking at it. In the absence of such cues, some test users were unsure of how to use the device.

© All rights reserved Hinckley et al. and/or ACM Press
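
For reference, the mouse-driven techniques compared here both map 2D mouse positions onto a virtual ball and rotate with the resulting vectors. A compact sketch of that mapping, using the common hyperbolic-sheet variant of the ball (one of several possible formulations, shown only as an illustration):

    import math

    def to_sphere(x, y, radius=1.0):
        # Map a 2D point (relative to the ball center) onto the virtual ball;
        # points outside the ball land on the standard hyperbolic sheet.
        d2 = x * x + y * y
        r2 = radius * radius
        z = math.sqrt(r2 - d2) if d2 <= r2 / 2.0 else r2 / (2.0 * math.sqrt(d2))
        return (x, y, z)

    def ball_rotation(p_from, p_to, radius=1.0):
        # Axis (unit vector) and angle (radians) of the rotation carrying the
        # first picked point to the second on the virtual ball.
        a = to_sphere(p_from[0], p_from[1], radius)
        b = to_sphere(p_to[0], p_to[1], radius)
        na = math.sqrt(sum(c * c for c in a))
        nb = math.sqrt(sum(c * c for c in b))
        dot = sum(x * y for x, y in zip(a, b)) / (na * nb)
        axis = (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])
        norm = math.sqrt(sum(c * c for c in axis)) or 1.0
        return tuple(c / norm for c in axis), math.acos(max(-1.0, min(1.0, dot)))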

 

Pierce, Jeffrey S., Audia, Steve, Burnette, Tommy, Christiansen, Kevin, Cosgrove, Dennis, Conway, Matthew, Hinckley, Ken, Monkaitis, Kristen, Patten, James, Shochet, Joe, Staack, David, Stearns, Brian, Sturgill, Chris, Williams, George and Pausch, Randy (1997): Alice: Easy to Use Interactive 3D Graphics. In: Robertson, George G. and Schmandt, Chris (eds.) Proceedings of the 10th annual ACM symposium on User interface software and technology October 14 - 17, 1997, Banff, Alberta, Canada. pp. 77-78.

Alice is a rapid prototyping system used to create three dimensional graphics simulations like those seen in virtual reality applications. Alice uses an interpreted language called Python as its scripting language to implement user actions. This interactive development environment allows users to explore many more design options than is possible in a compiled language environment. The alpha version of Alice for Windows 95 is available for free over the internet, with the beta release scheduled for August.

© All rights reserved Pierce et al. and/or ACM Press

 

Hinckley, Ken, Pausch, Randy F. and Proffitt, Dennis (1997): Attention and Visual Feedback: The Bimanual Frame of Reference. In: SI3D 1997 1997. pp. 121-126,192.

1995
 

Goble, John C., Hinckley, Ken, Pausch, Randy F., Snell, John W. and Kassell, Neal F. (1995): Two-Handed Spatial Interface Tools for Neurosurgical Planning. In IEEE Computer, 28 (7) pp. 20-26.

1994
 

Hinckley, Ken, Pausch, Randy, Goble, John C. and Kassell, Neal F. (1994): Passive Real-World Interface Props for Neurosurgical Visualization. In: Adelson, Beth, Dumais, Susan and Olson, Judith S. (eds.) Proceedings of the ACM CHI 94 Human Factors in Computing Systems Conference April 24-28, 1994, Boston, Massachusetts. pp. 452-458.

We claim that physical manipulation of familiar real-world objects in the user's real environment is an important technique for the design of three-dimensional user interfaces. These real-world passive interface props are manipulated by the user to specify spatial relationships between interface objects. By unobtrusively embedding free-space position and orientation trackers within the props, we enable the computer to passively observe a natural user dialog in the real world, rather than forcing the user to engage in a contrived dialog in the computer-generated world. We present neurosurgical planning as a driving application and demonstrate the utility of a head viewing prop, a cutting-plane selection prop, and a trajectory selection prop in this domain. Using passive props in this interface exploits the surgeon's existing skills, provides direct action-task correspondence, eliminates explicit modes for separate tools, facilitates natural two-handed interaction, and provides tactile and kinesthetic feedback for the user. Our informal evaluation sessions have shown that with a cursory introduction, neurosurgeons who have never seen the interface can understand and use it without training.

© All rights reserved Hinckley et al. and/or ACM Press

 

Hinckley, Ken, Pausch, Randy, Goble, John C. and Kassell, Neal F. (1994): A Survey of Design Issues in Spatial Input. In: Szekely, Pedro (ed.) Proceedings of the 7th annual ACM symposium on User interface software and technology November 02 - 04, 1994, Marina del Rey, California, United States. pp. 213-222.

We present a survey of design issues for developing effective free-space three-dimensional (3D) user interfaces. Our survey is based upon previous work in 3D interaction, our experience in developing free-space interfaces, and our informal observations of test users. We illustrate our design issues using examples drawn from instances of 3D interfaces. For example, our first issue suggests that users have difficulty understanding three-dimensional space. We offer a set of strategies which may help users to better perceive a 3D virtual environment, including the use of spatial references, relative gesture, two-handed interaction, multisensory feedback, physical constraints, and head tracking. We describe interfaces which employ these strategies. Our major contribution is the synthesis of many scattered results, observations, and examples into a common framework. This framework should serve as a guide to researchers or systems builders who may not be familiar with design issues in spatial input. Where appropriate, we also try to identify areas in free-space 3D interaction which we see as likely candidates for additional research. An extended and annotated version of the references list for this paper is available on-line through mosaic at address http://uvacs.cs.virginia.edu/~kph2q/.

© All rights reserved Hinckley et al. and/or ACM Press

 

Durbin, Jim, Jacob, Robert J. K. and Hinckley, Ken (1994): Laying the Foundation for the Information Super Highway: Human-Computer Interaction Research. In ACM SIGCHI Bulletin, 26 (4) pp. 56-58.

1987
 

Hinckley, Ken (1987): An Object Oriented Extension Language for Integrating Disparate Applications. In: Bullinger, Hans-Jorg and Shackel, Brian (eds.) INTERACT 87 - 2nd IFIP International Conference on Human-Computer Interaction September 1-4, 1987, Stuttgart, Germany. pp. 529-533.

Much of the work in user-interface design has concentrated on developing a consistent and friendly interface to particular applications. While this is appropriate in some environments, a broader problem concerns unifying interfaces to multiple applications, often from different vendors, whose interface designers were not working in concert. In a multi-window/multi-processing system, the user may be presented with different programs utilizing different user paradigms and requiring different methods to perform similar tasks. QUICHE (Quick User Interface and Command Handling Extension) is an extension language based on the Icon [1] programming language. Like Lisp-style EMACS [2] extension languages, it can be used to define key bindings. Unlike those extensions, QUICHE is not bound to any particular application, can easily make system calls and invoke programs, and can call entry points in the program it is extending, either directly or through a trait binding mechanism. It is this latter ability that makes QUICHE an ideal means of unifying application interfaces. Traits are defined for applications such as window managers, debuggers and editors. Each trait encapsulates a set of basic operations that will be available in any instance of an object. The existence of a standard set of traits allows the interface designer to tailor interfaces that provide consistency across multiple applications, and to easily modify or add features to a set of applications. In addition, QUICHE can also extend applications which do not themselves support any traits. A manager can be written in QUICHE that, instead of calling the application, inserts the commands corresponding to a call as though the user had entered them.

© All rights reserved Hinckley and/or North-Holland

 
 

 
 
 


Page Information

Page maintainer: The Editorial Team
URL: http://www.interaction-design.org/references/authors/ken_hinckley.html
