Number of co-authors: 23
Publications with his 3 most frequent co-authors:
Timothy Miller: 4
Andrew Bragdon: 3
Joseph J. LaViola: 3
Robert Zeleznik's 3 most productive colleagues, by number of publications:
Joseph J. LaViola: 29
Andries van Dam: 25
John F. Hughes: 23
The ability to simplify means to eliminate the unnecessary so that the necessary may speak
-- Hans Hofmann
Has also published under the name of:
"Robert C. Zeleznik"
Publications by Robert Zeleznik (bibliography)
Bragdon, Andrew, Zeleznik, Robert, Reiss, Steven P., Karumuri, Suman, Cheung, William, Kaplan, Joshua, Coleman, Christopher, Adeputra, Ferdi and LaViola, Joseph J. (2010): Code bubbles: a working set-based interface for code understanding and maintenance. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 2503-2512.
Developers spend significant time reading and navigating code fragments spread across multiple locations. The file-based nature of contemporary IDEs makes it prohibitively difficult to create and maintain a simultaneous view of such fragments. We propose a novel user interface metaphor for code understanding based on collections of lightweight, editable fragments called bubbles, which form concurrently visible working sets. We present the results of a qualitative usability evaluation, and the results of a quantitative study which indicates Code Bubbles significantly improved code understanding time, while reducing navigation interactions over a widely-used IDE, for two controlled tasks.
© All rights reserved Bragdon et al. and/or their publisher
Zeleznik, Robert, Bragdon, Andrew, Adeputra, Ferdi and Ko, Hsu-Sheng (2010): Hands-on math: a page-based multi-touch and pen desktop for technical work and problem solving. In: Proceedings of the 2010 ACM Symposium on User Interface Software and Technology 2010. pp. 17-26.
Students, scientists and engineers have to choose between the flexible, free-form input of pencil and paper and the computational power of Computer Algebra Systems (CAS) when solving mathematical problems. Hands-On Math is a multi-touch and pen-based system which attempts to unify these approaches by providing virtual paper that is enhanced to recognize mathematical notations as a means of providing in situ access to CAS functionality. Pages can be created and organized on a large pannable desktop, and mathematical expressions can be computed, graphed and manipulated using a set of uni- and bi-manual interactions which facilitate rapid exploration by eliminating tedious and error prone transcription tasks. Analysis of a qualitative pilot evaluation indicates the potential of our approach and highlights usability issues with the novel techniques used.
© All rights reserved Zeleznik et al. and/or their publisher
Bragdon, Andrew, Zeleznik, Robert, Williamson, Brian, Miller, Timothy and LaViola, Joseph J. (2009): GestureBar: improving the approachability of gesture-based interfaces. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 2269-2278.
GestureBar is a novel, approachable UI for learning gestural interactions that enables a walk-up-and-use experience which is in the same class as standard menu and toolbar interfaces. GestureBar leverages the familiar, clean look of a common toolbar, but in place of executing commands, richly discloses how to execute commands with gestures, through animated images, detail tips and an out-of-document practice area. GestureBar's simple design is also general enough for use with any recognition technique and for integration with standard, non-gestural UI components. We evaluate GestureBar in a formal experiment showing that users can perform complex, ecologically valid tasks in a purely gestural system without training, introduction, or prior gesture experience when using GestureBar, discovering and learning a high percentage of the gestures needed to perform the tasks optimally, and significantly outperforming a state of the art crib sheet. The relative contribution of the major design elements of GestureBar is also explored. A second experiment shows that GestureBar is preferred to a basic crib sheet and two enhanced crib sheet variations.
© All rights reserved Bragdon et al. and/or ACM Press
LaViola, Joseph J., Leal, Anamary, Miller, Timothy S. and Zeleznik, Robert (2008): Evaluation of Techniques for Visualizing Mathematical Expression Recognition Results. In: Proceedings of the 2008 Conference on Graphics Interface May 28-30, 2008, Windsor, Ontario, Canada. pp. 131-138.
We present an experimental study that evaluates four different techniques for visualizing the machine interpretation of handwritten mathematics. Typeset in Place puts a printed form of the recognized expression in the same location as the handwritten mathematics. Adjusted Ink replaces what was written with scaled-to-fit, cleaned up handwritten characters using an ink font. The Large Offset technique scales a recognized printed form to be just as wide as the handwritten input, and places it below the handwritten mathematical expression. The Small Offset technique is similar to Large Offset but the printed form is set to be a fixed size which is generally small compared to the written expression. Our experiment explores how effective each technique is with assisting users in identifying and correcting recognition mistakes with different types and quantities of mathematical expressions. Our evaluation is based on task completion time and a comprehensive post-questionnaire used to solicit reactions on each technique. The results of our study indicate that, although each technique has advantages and disadvantages depending on the complexity of the handwritten mathematics, subjects took significantly longer to complete the recognition task with Typeset in Place and generally preferred Adjusted Ink or Small Offset.
© All rights reserved LaViola et al. and/or their publisher
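The two offset layouts described in this abstract reduce to a simple box computation. A minimal sketch in Python, assuming boxes are (x, y, width, height) tuples and the typeset form's natural size is known; the function names, the box convention, and the fixed size are illustrative assumptions, not the paper's implementation:

```python
def large_offset(hand_box, typeset_size):
    # Large Offset: scale the typeset form to the handwritten
    # expression's width and place it directly below the handwriting.
    x, y, w, h = hand_box
    tw, th = typeset_size
    scale = w / tw
    return (x, y + h, w, th * scale)

def small_offset(hand_box, fixed_size=(80, 20)):
    # Small Offset: render the typeset form at a fixed, generally small
    # size below the handwriting, regardless of the expression's size.
    x, y, w, h = hand_box
    fw, fh = fixed_size
    return (x, y + h, fw, fh)

# A 300-unit-wide handwritten expression with a 150x30 typeset rendering:
print(large_offset((10, 10, 300, 60), (150, 30)))  # (10, 70, 300, 60.0)
print(small_offset((10, 10, 300, 60)))             # (10, 70, 80, 20)
```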
Zeleznik, Robert and Miller, Timothy (2006): Fluid inking: augmenting the medium of free-form inking with gestures. In: Proceedings of the 2006 Conference on Graphics Interface 2006. pp. 155-162.
We present Fluid Inking, a generally applicable approach to augmenting the fluid medium of free-form inking with gestural commands. Our approach is characterized by four design criteria, including: 1) pen-based hardware impartiality: all interactions can be performed with a button-free stylus, the minimal input hardware requirement for inking, and the least common denominator device for pen-based systems ranging from PDAs to whiteboards; 2) performability: gestures use short sequences of simple and familiar inking interactions that require minimal targeting; 3) extensibility: gestures are a regular pattern of optional shortcuts for commands in an arbitrarily scalable menu system; and 4) discoverability: gesture shortcuts (analogous to modifier keys) are displayed in the interactive menu and are suggested with dynamic feedback during inking. This paper presents the Fluid Inking techniques in the unified context of a prototype notetaking application and emphasizes how post-fix terminal punctuation and prefix flicks can disambiguate gestures from regular inking. We also discuss how user feedback influenced the Fluid Inking design.
© All rights reserved Zeleznik and Miller and/or Canadian Information Processing Society
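The prefix-flick/postfix-punctuation disambiguation described in this abstract can be made concrete. A minimal sketch in Python, assuming strokes arrive as already-labelled tokens; the labels "flick" and "tap" are illustrative placeholders, not the paper's recognizer:

```python
def classify(strokes):
    # Treat ink as a gesture command only when a prefix flick announces
    # it or a postfix terminal punctuation (here, a "tap") closes it;
    # everything else remains ordinary free-form ink.
    if not strokes:
        return "ink"
    if strokes[0] == "flick":
        return "command"   # prefix flick announces a gesture
    if strokes[-1] == "tap":
        return "command"   # postfix punctuation terminates a gesture
    return "ink"           # default: free-form inking

print(classify(["flick", "L"]))  # command
print(classify(["L", "tap"]))    # command
print(classify(["L"]))           # ink
```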
Zeleznik, Robert, Miller, Timothy and Forsberg, Andrew (2001): Pop through mouse button interactions. In: Marks, Joe and Mynatt, Elizabeth D. (eds.) Proceedings of the 14th annual ACM symposium on User interface software and technology November 11 - 14, 2001, Orlando, Florida. pp. 195-196.
We present a range of novel interactions enabled by a simple modification in the design of a computer mouse. By converting each mouse button to a pop-through tactile push-button, similar to the focus/shutter-release buttons used in many cameras, users can feel, and the computer can sense, two distinct "clicks" corresponding to pressing lightly and pressing firmly to pop through. Despite the prototypical status of our hardware and software implementations, our current pop-through mouse interactions are compelling and warrant further investigation. In particular, we demonstrate that pop-through buttons not only yield an additional button activation state that is composable with, or even preferable to, techniques such as double-clicking, but also can endow a qualitatively novel user experience when meaningfully and consistently applied. We propose a number of software guidelines that may provide a consistent, systemic benefit; for example, light pressure may invoke the default interaction (short menu), and firm pressure may supply more detail (long menu).
© All rights reserved Zeleznik et al. and/or ACM Press
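The short-menu/long-menu guideline from this abstract can be sketched as a tiny state mapping. A hedged illustration in Python; the press-state names and menu contents are invented for the example, not taken from the paper:

```python
from enum import Enum

class Press(Enum):
    # Hypothetical states for a pop-through button: the first detent
    # registers a light press; pushing through it registers as firm.
    RELEASED = 0
    LIGHT = 1   # half press, as on a camera shutter-release button
    FIRM = 2    # pop-through full press

def menu_for(press):
    # Guideline from the abstract: light pressure invokes the default
    # interaction (short menu); firm pressure supplies more detail.
    if press is Press.LIGHT:
        return ["Cut", "Copy", "Paste"]                       # short menu
    if press is Press.FIRM:
        return ["Cut", "Copy", "Paste", "Paste Special...",   # long menu
                "Insert Link...", "Properties..."]
    return []

print(menu_for(Press.LIGHT))  # short menu
print(menu_for(Press.FIRM))   # long menu
```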
Miller, Timothy and Zeleznik, Robert (1998): An Insidious Haptic Invasion: Adding Force Feedback to the X Desktop. In: Mynatt, Elizabeth D. and Jacob, Robert J. K. (eds.) Proceedings of the 11th annual ACM symposium on User interface software and technology November 01 - 04, 1998, San Francisco, California, United States. pp. 59-64.
This paper describes preliminary work in a project to add force feedback to user interface elements of the X Window System in an attempt to add true "feel" to the window system's "look and feel". Additions include adding ridges around icons and menu items to aid interaction, alignment guides for moving windows, and other enhancements to window manipulation. The motivation for this system is the observation that people naturally have many skills for and intuitions about a very rich environment of interaction forces in the non-computer world; however, these skills are largely unused in computer applications. We expect that haptic modifications to conventional graphical user interfaces, such as those we present, can lead to gains in performance, intuition, learnability, and enjoyment of the interface. This paper describes details of the implementation of the haptic window system elements, in addition to higher-level haptic design principles and informal observations of users of the system.
© All rights reserved Miller and Zeleznik and/or ACM Press
Forsberg, Andrew, Dieterich, Mark and Zeleznik, Robert (1998): The Music Notepad. In: Mynatt, Elizabeth D. and Jacob, Robert J. K. (eds.) Proceedings of the 11th annual ACM symposium on User interface software and technology November 01 - 04, 1998, San Francisco, California, United States. pp. 203-210.
We present a system for entering common music notation based on 2D gestural input. The key feature of the system is the look-and-feel of the interface which approximates sketching music with paper and pencil. A probability-based interpreter integrates sequences of gestural input to perform the most common notation and editing operations. In this paper, we present the user's model of the system, the components of the high-level recognition system, and a discussion of the evolution of the system including user feedback.
© All rights reserved Forsberg et al. and/or ACM Press
Forsberg, Andrew, Herndon, Kenneth and Zeleznik, Robert (1996): Aperture Based Selection for Immersive Virtual Environments. In: Kurlander, David, Brown, Marc and Rao, Ramana (eds.) Proceedings of the 9th annual ACM symposium on User interface software and technology November 06 - 08, 1996, Seattle, Washington, United States. pp. 95-96.
We present two novel techniques for effectively selecting objects in immersive virtual environments using a single 6 DOF magnetic tracker. These techniques advance the state of the art in that they exploit the participant's visual frame of reference and fully utilize the position and orientation data from the tracker to improve accuracy of the selection task. Preliminary results from pilot usability studies validate our designs. Finally, the two techniques combine to compensate for each other's weaknesses.
© All rights reserved Forsberg et al. and/or ACM Press
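One way to read aperture-based selection is as a cone test anchored at the participant's eye. A minimal geometric sketch in Python, assuming the aperture is characterized by a direction and a half-angle; these parameter names and the cone formulation are assumptions for illustration, not the paper's exact technique:

```python
import math

def within_aperture(eye, aperture_dir, half_angle_deg, obj):
    # An object is selectable when the ray from the eye to the object
    # lies inside the cone defined by the aperture direction and its
    # half-angle, exploiting the participant's visual frame of reference.
    to_obj = [o - e for o, e in zip(obj, eye)]
    norm = math.sqrt(sum(c * c for c in to_obj)) or 1.0
    dirn = math.sqrt(sum(c * c for c in aperture_dir)) or 1.0
    cos_angle = sum(a * b for a, b in zip(to_obj, aperture_dir)) / (norm * dirn)
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
    return angle <= half_angle_deg

# Eye at the origin looking down +z with a 10-degree aperture cone:
print(within_aperture((0, 0, 0), (0, 0, 1), 10.0, (0.1, 0.0, 5.0)))  # True
print(within_aperture((0, 0, 0), (0, 0, 1), 10.0, (3.0, 0.0, 5.0)))  # False
```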
Ayers, Matthew and Zeleznik, Robert (1996): The Lego Interface Toolkit. In: Kurlander, David, Brown, Marc and Rao, Ramana (eds.) Proceedings of the 9th annual ACM symposium on User interface software and technology November 06 - 08, 1996, Seattle, Washington, United States. pp. 97-98.
This paper describes a rapid prototyping system for physical interaction devices in immersive virtual environments. Because of the increased complexity of 3D interactive environments and the lack of standard interactive tools, designers are unable to use traditional 2D hardware in 3D virtual environments. As a result, designers must create entirely new interaction devices, a process that is both slow and expensive. We propose a system which allows hardware designers to experiment with the construction of new 3D interaction devices both quickly and inexpensively.
© All rights reserved Ayers and Zeleznik and/or ACM Press
Stevens, Marc P., Zeleznik, Robert and Hughes, John F. (1994): An Architecture for an Extensible 3D Interface Toolkit. In: Szekely, Pedro (ed.) Proceedings of the 7th annual ACM symposium on User interface software and technology November 02 - 04, 1994, Marina del Rey, California, United States. pp. 59-67.
This paper presents the architecture for an extensible toolkit used in construction and rapid prototyping of three dimensional interfaces, interactive illustrations, and three dimensional widgets. The toolkit provides methods for the direct manipulation of 3D primitives which can be linked together through a visual programming language to create complex constrained behavior. Features of the toolkit include the ability to visually build, encapsulate, and parametrize complex models, and impose limits on the models. The toolkit's constraint resolution technique is based on a dynamic object model similar to those in prototype delegation object systems. The toolkit has been used to rapidly prototype tools for mechanical modelling and scientific visualization, to construct 3D widgets, and to build mathematical illustrations.
© All rights reserved Stevens et al. and/or ACM Press
Herndon, Kenneth, Zeleznik, Robert, Robbins, Daniel, Conner, D. Brookshire, Snibbe, Scott S. and van Dam, Andries (1992): Interactive Shadows. In: Mackinlay, Jock D. and Green, Mark (eds.) Proceedings of the 5th annual ACM symposium on User interface software and technology November 15 - 18, 1992, Monterey, California, United States. pp. 1-6.
It is often difficult in computer graphics applications to understand spatial relationships between objects in a 3D scene or effect changes to those objects without specialized visualization and manipulation techniques. We present a set of three-dimensional tools (widgets) called "shadows" that not only provide valuable perceptual cues about the spatial relationships between objects, but also provide a direct manipulation interface to constrained transformation techniques. These shadow widgets provide two advances over previous techniques. First, they provide high correlation between their own geometric feedback and their effects on the objects they control. Second, unlike some other 3D widgets, they do not obscure the objects they control.
© All rights reserved Herndon et al. and/or ACM Press
Changes to this page (author):
03 Nov 2010: Modified
02 Nov 2010: Modified
09 May 2009: Modified
12 May 2008: Added
23 Jun 2007: Modified
28 Apr 2003: Added
Page maintainer: The Editorial Team