Publication statistics

Pub. period: 1996-2012
Pub. count: 57
Number of co-authors: 97



Co-authors

Number of publications with their 3 most frequent co-authors:

Ken Hinckley: 11
Edward Cutrell: 6
Desney S. Tan: 5

Productive colleagues

Patrick Baudisch's 3 most productive colleagues by number of publications:

Carl Gutwin: 116
Ravin Balakrishnan: 108
Wei-Ying Ma: 95


Patrick Baudisch

Personal Homepage:
http://www.patrickbaudisch.com/

 

Publications by Patrick Baudisch (bibliography)

2012
 

Mueller, Stefanie, Lopes, Pedro and Baudisch, Patrick (2012): Interactive construction: interactive fabrication of functional mechanical devices. In: Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 599-606. Available online

Personal fabrication tools, such as laser cutters and 3D printers, allow users to create precise objects quickly. However, working through a CAD system removes users from the workpiece. Recent interactive fabrication tools reintroduce this directness, but at the expense of precision. In this paper, we introduce constructable, an interactive drafting table that produces precise physical output in every step. Users interact by drafting directly on the workpiece using a hand-held laser pointer. The system tracks the pointer, beautifies its path, and implements its effect by cutting the workpiece using a fast high-powered laser cutter. Constructable achieves precision through tool-specific constraints, user-defined sketch lines, and by using the laser cutter itself for all visual feedback, rather than using a screen or projection. We demonstrate how constructable allows creating simple but functional devices, including a simple gearbox, that cannot be created with traditional interactive fabrication tools.

© All rights reserved Mueller et al. and/or ACM Press

2011
 

Roudaut, Anne, Pohl, Henning and Baudisch, Patrick (2011): Touch input on curved surfaces. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 1011-1020. Available online

Advances in sensing technology are currently bringing touch input to non-planar surfaces, ranging from spherical touch screens to prototypes the size and shape of a ping-pong ball. To help interface designers create usable interfaces on such devices, we determine how touch surface curvature affects targeting. We present a user study in which participants acquired targets on surfaces of different curvature and at locations of different slope. We find that surface convexity increases pointing accuracy, and in particular reduces the offset between the input point perceived by users and the input point sensed by the device. Concave surfaces, in contrast, are subject to larger error offsets. This is likely caused by how concave surfaces hug the user's finger, thus resulting in a larger contact area. The effect of slope on targeting, in contrast, is unexpected at first sight. Some targets located downhill from the user's perspective are subject to error offsets in the opposite direction from all others. This appears to be caused by participants acquiring these targets using a different finger posture that lets them monitor the position of their fingers more effectively.

© All rights reserved Roudaut et al. and/or their publisher

 

Ashbrook, Daniel, Baudisch, Patrick and White, Sean (2011): Nenya: subtle and eyes-free mobile input with a magnetically-tracked finger ring. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 2043-2046. Available online

We present Nenya, a new input device in the shape of a finger ring. Nenya provides an input mechanism that is always available, fast to access, and allows analog input, while remaining socially acceptable by being embodied in commonly worn items. Users make selections by twisting the ring and "click" by sliding it along the finger. The ring -- the size of a regular wedding band -- is magnetic, and is tracked by a wrist-worn sensor. Nenya's tiny size, eyes-free usability, and physical form indistinguishable from a regular ring make its use subtle and socially acceptable. We present two user studies (one- and two-handed) in which we studied sighted and eyes-free use, finding that even with no visual feedback users were able to select from eight targets.

© All rights reserved Ashbrook et al. and/or their publisher

 

Holz, Christian and Baudisch, Patrick (2011): Understanding touch. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 2501-2510. Available online

Current touch devices, such as capacitive touchscreens, are based on the implicit assumption that users acquire targets with the center of the contact area between finger and device. Findings from our previous work indicate, however, that such devices are subject to systematic error offsets. This suggests that the underlying assumption is most likely wrong. In this paper, we therefore revisit this assumption. In a series of three user studies, we find evidence that the features that users align with the target are visual features. These features are located on the top of the user's fingers, not at the bottom, as assumed by traditional devices. We present the projected center model, under which error offsets drop to 1.6mm, compared to 4mm for the traditional model. This suggests that the new model is indeed a good approximation of how users conceptualize touch input. The primary contribution of this paper is to help understand touch, one of the key input technologies in human-computer interaction. At the same time, our findings inform the design of future touch input technology. They explain the inaccuracy of traditional touch devices as a "parallax" artifact between user control based on the top of the finger and sensing based on the bottom side of the finger. We conclude that certain camera-based sensing technologies can inherently be more accurate than contact area-based sensing.

© All rights reserved Holz and Baudisch and/or their publisher

 

Gustafson, Sean, Holz, Christian and Baudisch, Patrick (2011): Imaginary phone: learning imaginary interfaces by transferring spatial memory from a familiar device. In: Proceedings of the 2011 ACM Symposium on User Interface Software and Technology 2011. pp. 283-292. Available online

We propose a method for learning how to use an imaginary interface (i.e., a spatial non-visual interface) that we call "transfer learning". By using a physical device (e.g., an iPhone), a user inadvertently learns the interface and can then transfer that knowledge to an imaginary interface. We illustrate this concept with our Imaginary Phone prototype. With it, users interact by mimicking the use of a physical iPhone by tapping and sliding on their empty non-dominant hand without visual feedback. Pointing on the hand is tracked using a depth camera and touch events are sent wirelessly to an actual iPhone, where they invoke the corresponding actions. Our prototype allows the user to perform everyday tasks such as picking up a phone call or launching the timer app and setting an alarm. Imaginary Phone thereby serves as a shortcut that frees users from the necessity of retrieving the actual physical device. We present two user studies that validate the three assumptions underlying the transfer learning method. (1) Users build up spatial memory automatically while using a physical device: participants knew the correct location of 68% of their own iPhone home screen apps by heart. (2) Spatial memory transfers from a physical to an imaginary interface: participants recalled 61% of their home screen apps when recalling app location on the palm of their hand. (3) Palm interaction is precise enough to operate a typical mobile phone: participants could reliably acquire 0.95cm wide iPhone targets on their palm, sufficiently large to operate any iPhone standard widget.

© All rights reserved Gustafson et al. and/or ACM Press

 

Wimmer, Raphael and Baudisch, Patrick (2011): Modular and deformable touch-sensitive surfaces based on time domain reflectometry. In: Proceedings of the 2011 ACM Symposium on User Interface Software and Technology 2011. pp. 517-526. Available online

Time domain reflectometry, a technique originally used in diagnosing cable faults, can also locate where a cable is being touched. In this paper, we explore how to extend time domain reflectometry in order to touch-enable thin, modular, and deformable surfaces and devices. We demonstrate how to use this approach to make smart clothing and to rapidly prototype touch-sensitive objects of arbitrary shape. To accomplish this, we extend time domain reflectometry in three ways: (1) Thinness: we demonstrate how to run time domain reflectometry on a single wire, which allows us to touch-enable thin metal objects, such as guitar strings. (2) Modularity: we present a two-pin connector system that allows users to daisy chain touch-sensitive segments. (3) Deformability: we create deformable touch devices by mounting stretchable wire patterns onto elastic tape and meshes. We illustrate these enhancements with 13 prototypes and a series of performance measurements.

© All rights reserved Wimmer and Baudisch and/or ACM Press
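
As a rough illustration of the sensing principle (not the authors' implementation, and with assumed numbers), the position of a touch along the wire follows from the round-trip time of the reflected pulse; a minimal Python sketch:

# Sketch of the time domain reflectometry principle: a pulse sent down
# the wire partially reflects where a finger touches it, and the touch
# location follows from the round-trip time of that reflection.
# The propagation speed below is an assumed value, not from the paper.

PROPAGATION_SPEED = 2.0e8  # signal speed in the wire, m/s (roughly 0.66 c)

def touch_position(round_trip_time_s: float) -> float:
    """Distance in meters from the connector to the touch point.
    The pulse travels to the touch and back, so the one-way distance
    is half the round trip."""
    return PROPAGATION_SPEED * round_trip_time_s / 2.0

# Example: a reflection arriving 5 ns after the pulse was sent
# places the touch about 0.5 m down the wire.
print(touch_position(5e-9))  # -> 0.5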

2010
 

Baudisch, Patrick (2010): My new PC is a mobile phone: techniques and technology for the new smallness. In: Proceedings of the 12th Conference on Human-computer interaction with mobile devices and services 2010. pp. 1-2. Available online

Neither desktop computers nor the hundred-dollar laptop are the new mass computation platform of this world -- mobile phones are. Four billion of them. So how come we still use PCs? Mobile devices have a major limitation: mobility requires smallness. Initially, advances in hardware drove miniaturization at a fast pace, but the limiting factors are no longer primarily technical: today, they are almost exclusively human factors. Screens need to be large enough to be seen, keyboards large enough to be typed on. These factors, however, are practically invariant. In this presentation, I take a closer look at the research that emerges from the tension between the desire to perform complex tasks and the desire for mobility. Is it possible for mobile users to perform those complex tasks that today's users still perform on "large screen" desktop computers? What range of applications can we adapt by visually compressing them? What applications resist such an adaptation and why? In the second half of the talk, I look into the future of mobile device hardware, devices ten times smaller than today's.

© All rights reserved Baudisch and/or his/her publisher

 

Holz, Christian and Baudisch, Patrick (2010): The generalized perceived input point model and how to double touch accuracy by extracting fingerprints. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 581-590. Available online

It is generally assumed that touch input cannot be accurate because of the fat finger problem, i.e., the softness of the fingertip combined with the occlusion of the target by the finger. In this paper, we show that this is not the case. We base our argument on a new model of touch inaccuracy. Our model is not based on the fat finger problem, but on the perceived input point model. In its published form, this model states that touch screens report touch location at an offset from the intended target. We generalize this model so that it represents offsets for individual finger postures and users. We thereby switch from the traditional 2D model of touch to a model that considers touch a phenomenon in 3-space. We report a user study, in which the generalized model explained 67% of the touch inaccuracy that was previously attributed to the fat finger problem. In the second half of this paper, we present two devices that exploit the new model in order to improve touch accuracy. Both model touch on a per-posture and per-user basis in order to increase accuracy by applying the respective offsets. Our RidgePad prototype extracts posture and user ID from the user's fingerprint during each touch interaction. In a user study, it achieved 1.8 times higher accuracy than a simulated capacitive baseline condition. A prototype based on optical tracking achieved even higher accuracy, at 3.3 times the baseline. The increase in accuracy can be used to make touch interfaces more reliable, to pack up to 3.3² (more than 10) times more controls into the same surface, or to bring touch input to very small mobile devices.

© All rights reserved Holz and Baudisch and/or their publisher
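
The per-posture, per-user offset correction at the heart of this model can be sketched as a lookup followed by a shift; the offset values below are made up, whereas the paper derives such offsets empirically from study data:

# Sketch of per-user, per-posture offset correction in the spirit of the
# generalized perceived input point model. The offset table is
# hypothetical; a real system would calibrate it from user-study data.

# (dx, dy) correction in mm for each (user, posture) pair
OFFSETS = {
    ("alice", "vertical"): (0.4, -1.2),
    ("alice", "45deg"):    (0.9, -2.8),
    ("bob",   "vertical"): (-0.3, -0.8),
    ("bob",   "45deg"):    (0.2, -2.1),
}

def corrected_touch(x_mm, y_mm, user, posture):
    """Shift the sensed contact point by the calibrated offset
    for this user and finger posture (no shift if uncalibrated)."""
    dx, dy = OFFSETS.get((user, posture), (0.0, 0.0))
    return x_mm + dx, y_mm + dy

print(corrected_touch(50.0, 80.0, "alice", "45deg"))  # -> (50.9, 77.2)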

 

Baudisch, Patrick, Becker, Torsten and Rudeck, Frederik (2010): Lumino: tangible blocks for tabletop computers based on glass fiber bundles. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 1165-1174. Available online

Tabletop computers based on diffuse illumination can track fiducial markers placed on the table's surface. In this paper, we demonstrate how to do the same with objects arranged in a three-dimensional structure without modifying the table. We present lumino, a system of building blocks. In addition to a marker, each block contains a glass fiber bundle. The bundle optically guides the light reflected off markers in the higher levels down to the table surface, where the table's built-in camera reads it. While guiding marker images down, the bundle optically scales and rearranges them. It thereby fits the images of an entire vertical arrangement of markers into the horizontal space usually occupied by a single 2D marker. We present six classes of blocks and matching marker designs, each of which is optimized for different requirements. We show three demo applications. One of them is a construction kit that logs and critiques constructions. The presented blocks are unpowered and maintenance-free, keeping larger numbers of blocks manageable.

© All rights reserved Baudisch et al. and/or their publisher

 

Boring, Sebastian, Baur, Dominikus, Butz, Andreas, Gustafson, Sean and Baudisch, Patrick (2010): Touch projector: mobile interaction through video. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 2287-2296. Available online

In 1992, Tani et al. proposed remotely operating machines in a factory by manipulating a live video image on a computer screen. In this paper we revisit this metaphor and investigate its suitability for mobile use. We present Touch Projector, a system that enables users to interact with remote screens through a live video image on their mobile device. The handheld device tracks itself with respect to the surrounding displays. Touch on the video image is "projected" onto the target display in view, as if it had occurred there. This literal adaptation of Tani's idea, however, fails because handheld video does not offer enough stability and control to enable precise manipulation. We address this with a series of improvements, including zooming and freezing the video image. In a user study, participants selected targets and dragged targets between displays using the literal and three improved versions. We found that participants achieved the highest performance with automatic zooming and temporary image freezing.

© All rights reserved Boring et al. and/or their publisher

 

Gustafson, Sean, Bierwirth, Daniel and Baudisch, Patrick (2010): Imaginary interfaces: spatial interaction with empty hands and without visual feedback. In: Proceedings of the 2010 ACM Symposium on User Interface Software and Technology 2010. pp. 3-12. Available online

Screen-less wearable devices allow for the smallest form factor and thus the maximum mobility. However, current screen-less devices only support buttons and gestures. Pointing is not supported because users have nothing to point at. However, we challenge the notion that spatial interaction requires a screen and propose a method for bringing spatial interaction to screen-less devices. We present Imaginary Interfaces, screen-less devices that allow users to perform spatial interaction with empty hands and without visual feedback. Unlike projection-based solutions, such as Sixth Sense, all visual "feedback" takes place in the user's imagination. Users define the origin of an imaginary space by forming an L-shaped coordinate cross with their non-dominant hand. Users then point and draw with their dominant hand in the resulting space. With three user studies we investigate the question: To what extent can users interact spatially with a user interface that exists only in their imagination? Participants created simple drawings, annotated existing drawings, and pointed at locations described in imaginary space. Our findings suggest that users' visual short-term memory can, in part, replace the feedback conventionally displayed on a screen.

© All rights reserved Gustafson et al. and/or their publisher
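
One way to read the L-shaped coordinate cross is as a user-defined origin plus two axis directions, with pointing positions expressed in that frame. A minimal sketch, assuming a camera supplies the tracked origin and axis vectors:

# Sketch: expressing a dominant-hand pointing position in the frame
# defined by the non-dominant hand's L-shaped gesture. The tracking
# input (origin and unit axis vectors) is assumed to come from a camera.

def to_imaginary_coords(point, origin, x_axis_dir, y_axis_dir):
    """Express a tracked 2D point in the L-frame of the non-dominant hand.

    origin     -- corner of the 'L' (where thumb and index meet)
    x_axis_dir -- unit vector along the index finger
    y_axis_dir -- unit vector along the thumb
    Returns (u, v), the point's coordinates in the imaginary space.
    """
    dx, dy = point[0] - origin[0], point[1] - origin[1]
    u = dx * x_axis_dir[0] + dy * x_axis_dir[1]  # dot product with x axis
    v = dx * y_axis_dir[0] + dy * y_axis_dir[1]  # dot product with y axis
    return u, v

# Example: with the L rotated 90 degrees, pointing 5 units "up" in camera
# coordinates lies 5 units along the imaginary x axis.
print(to_imaginary_coords((0, 5), (0, 0), (0, 1), (-1, 0)))  # -> (5, 0)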

 

Augsten, Thomas, Kaefer, Konstantin, Meusel, René, Fetzer, Caroline, Kanitz, Dorian, Stoff, Thomas, Becker, Torsten, Holz, Christian and Baudisch, Patrick (2010): Multitoe: high-precision interaction with back-projected floors based on high-resolution multi-touch input. In: Proceedings of the 2010 ACM Symposium on User Interface Software and Technology 2010. pp. 209-218. Available online

Tabletop applications cannot display more than a few dozen on-screen objects. The reason is their limited size: tables cannot become larger than arm's length without giving up direct touch. We propose creating direct touch surfaces that are orders of magnitude larger. We approach this challenge by integrating high-resolution multitouch input into a back-projected floor. At the same time, we maintain the purpose and interaction concepts of tabletop computers, namely direct manipulation. We base our hardware design on frustrated total internal reflection. Its ability to sense per-pixel pressure allows the floor to locate and analyze users' soles. We demonstrate how this allows the floor to recognize foot postures and identify users. These two functions form the basis of our system. They allow the floor to ignore users unless they interact explicitly, identify and track users based on their shoes, enable high-precision interaction, invoke menus, track heads, and allow users to control high-degree-of-freedom interactions using their feet. While we base our designs on a series of simple user studies, the primary contribution of this paper is in the engineering domain.

© All rights reserved Augsten et al. and/or their publisher

2009
 

Baudisch, Patrick and Chu, Gerry (2009): Back-of-device interaction allows creating very small touch devices. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 1923-1932. Available online

In this paper, we explore how to add pointing input capabilities to very small screen devices. At first sight, touchscreens seem to allow for particular compactness, because they integrate input and screen into the same physical space. The opposite is true, however, because the user's fingers occlude contents and prevent precision. We argue that the key to touch-enabling very small devices is to use touch on the device backside. In order to study this, we have created a 2.4" prototype device; we simulate screens smaller than that by masking the screen. We present a user study in which participants completed a pointing task successfully across display sizes when using a back-of-device interface. The touchscreen-based control condition (enhanced with the shift technique), in contrast, failed for screen diagonals below 1 inch. We present four form factor concepts based on back-of-device interaction and provide design guidelines extracted from a second user study.

© All rights reserved Baudisch and Chu and/or ACM Press

 

 

Grossman, Tovi, Baudisch, Patrick and Hinckley, Ken (2009): Handle Flags: efficient and flexible selections for inking applications. In: Proceedings of the 2009 Conference on Graphics Interface 2009. pp. 167-174. Available online

There are a number of challenges associated with content selection in pen-based interfaces. Supplementary buttons to enter a selection mode may not be available, and selections may require a careful and error-prone lasso stroke. In this paper we describe the design and evaluation of Handle Flags, a new localized technique used to select and perform commands on ink strokes in pen-operated interfaces. When the user positions the pen near an ink stroke, Handle Flags are displayed for the potential selections that the ink stroke could belong to (such as proximal strokes comprising a word or drawing). Tapping the handle allows the user to access the corresponding selection, without requiring a complex lasso stroke. Our studies show that Handle Flags offer significant benefits in comparison to traditional techniques, and are a promising technique for pen-based applications.

© All rights reserved Grossman et al. and/or their publisher

 

Nacenta, Miguel A., Baudisch, Patrick, Benko, Hrvoje and Wilson, Andy (2009): Separability of spatial manipulations in multi-touch interfaces. In: Proceedings of the 2009 Conference on Graphics Interface 2009. pp. 175-182. Available online

Multi-touch interfaces allow users to translate, rotate, and scale digital objects in a single interaction. However, this freedom represents a problem when users intend to perform only a subset of manipulations. A user trying to scale an object in a print layout program, for example, might find that the object was also slightly translated and rotated, interfering with what was already carefully laid out earlier. We implemented and tested interaction techniques that allow users to select a subset of manipulations. Magnitude Filtering eliminates transformations (e.g., rotation) that are small in magnitude. Gesture Matching attempts to classify the user's input into a subset of manipulation gestures. Handles adopts a conventional single-touch handles approach for touch input. Our empirical study showed that these techniques significantly reduce errors in layout, while the Handles technique was slowest. A variation of the Gesture Matching technique presented the best combination of speed and control, and was favored by participants.

© All rights reserved Nacenta et al. and/or their publisher

 

Ni, Tao and Baudisch, Patrick (2009): Disappearing mobile devices. In: Proceedings of the ACM Symposium on User Interface Software and Technology 2009. pp. 101-110. Available online

In this paper, we extrapolate the evolution of mobile devices in one specific direction, namely miniaturization. While we maintain the concept of a device that people are aware of and interact with intentionally, we envision that this concept can become small enough to allow invisible integration into arbitrary surfaces or human skin, and thus truly ubiquitous use. This outcome assumed, we investigate what technology would be most likely to provide the basis for these devices, what abilities such devices can be expected to have, and whether or not devices that size can still allow for meaningful interaction. We survey candidate technologies, drill down on gesture-based interaction, and demonstrate how it can be adapted to the desired form factors. While the resulting devices offer only the bare minimum in feedback and only the most basic interactions, we demonstrate that simple applications remain possible. We complete our exploration with two studies in which we investigate the affordance of these devices more concretely, namely marking and text entry using a gesture alphabet.

© All rights reserved Ni and Baudisch and/or their publisher

 

Balakrishnan, Ravin and Baudisch, Patrick (2009): Introduction to this Special Issue on Ubiquitous Multi-Display Environments. In Human-Computer Interaction, 24 (1) pp. 1-8. Available online

2008
 

Gustafson, Sean, Baudisch, Patrick, Gutwin, Carl and Irani, Pourang (2008): Wedge: clutter-free visualization of off-screen locations. In: Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems April 5-10, 2008. pp. 787-796. Available online

To overcome display limitations of small-screen devices, researchers have proposed techniques that point users to objects located off-screen. Arrow-based techniques such as City Lights convey only direction. Halo conveys direction and distance, but is susceptible to clutter resulting from overlapping halos. We present Wedge, a visualization technique that conveys direction and distance, yet avoids overlap and clutter. Wedge represents each off-screen location using an acute isosceles triangle: the tip coincides with the off-screen location, and the two corners are located on-screen. A wedge conveys location awareness primarily by means of its two legs pointing towards the target. Wedges avoid overlap programmatically by repelling each other, causing them to rotate until overlap is resolved. As a result, wedges can be applied to numbers and configurations of targets that would lead to clutter if visualized using halos. We report on a user study comparing Wedge and Halo for three off-screen tasks. Participants were significantly more accurate when using Wedge than when using Halo.

© All rights reserved Gustafson et al. and/or ACM Press
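
The basic wedge geometry (tip on the off-screen target, two base corners on-screen) can be sketched as follows; the aperture angle and leg length are illustrative parameters, and the paper's rotation-based overlap resolution is omitted:

# Sketch of the basic Wedge geometry: an acute isosceles triangle whose
# tip sits on the off-screen target and whose two base corners lie
# on-screen. The default parameter values are illustrative assumptions.

import math

def wedge(target, toward_screen, leg_length=80.0, aperture_deg=20.0):
    """Return the three corners (tip, corner1, corner2) of the wedge.

    target        -- off-screen location (the wedge's tip)
    toward_screen -- an on-screen point the wedge should open towards
    leg_length    -- distance from the tip to each base corner
    aperture_deg  -- angle between the two legs (kept acute)
    """
    dx, dy = toward_screen[0] - target[0], toward_screen[1] - target[1]
    base_angle = math.atan2(dy, dx)  # direction back toward the screen
    half = math.radians(aperture_deg) / 2.0
    corners = [(target[0] + leg_length * math.cos(a),
                target[1] + leg_length * math.sin(a))
               for a in (base_angle - half, base_angle + half)]
    return target, corners[0], corners[1]

# A target 60 px left of the screen, with the wedge opening toward
# the screen center.
tip, c1, c2 = wedge(target=(-60, 200), toward_screen=(400, 300))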

 

Hoffmann, Raphael, Baudisch, Patrick and Weld, Daniel S. (2008): Evaluating visual cues for window switching on large screens. In: Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems April 5-10, 2008. pp. 929-938. Available online

An increasing number of users are adopting large, multi-monitor displays. The resulting setups cover such a broad viewing angle that users can no longer simultaneously perceive all parts of the screen. Changes outside the user's visual field often go unnoticed. As a result, users sometimes have trouble locating the active window, for example after switching focus. This paper surveys graphical cues designed to direct visual attention and adapts them to window switching. Visual cues include five types of frames and masks around the target window and four trails leading to the window. We report the results of two user studies. The first evaluates each cue in isolation. The second evaluates hybrid techniques created by combining the most successful candidates from the first study. The best cues were visually sparse -- combinations of curved frames that use color to pop out and tapered trails with a predictable origin.

© All rights reserved Hoffmann et al. and/or ACM Press

 

Li, Kevin A., Baudisch, Patrick and Hinckley, Ken (2008): Blindsight: eyes-free access to mobile phones. In: Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems April 5-10, 2008. pp. 1389-1398. Available online

Many mobile phones integrate services such as personal calendars. Given the social nature of the stored data, however, users often need to access such information as part of a phone conversation. In typical non-headset use, this requires users to interrupt their conversations to look at the screen. We investigate a counter-intuitive solution: to avoid the need for interruption we replace the visual interface with one based on auditory feedback. Surprisingly, this can be done without interfering with the phone conversation. We present blindSight, a prototype application that replaces the traditionally visual in-call menu of a mobile phone. Users interact using the phone keypad, without looking at the screen. BlindSight responds with auditory feedback. This feedback is heard only by the user, not by the person on the other end of the line. We present the results of two user studies of our prototype. The first study verifies that useful keypress accuracy can be obtained for the phone-at-ear position. The second study compares the blindSight system against a visual baseline condition and finds a preference for blindSight.

© All rights reserved Li et al. and/or ACM Press

 

Baudisch, Patrick, Zotov, Alexander, Cutrell, Edward and Hinckley, Ken (2008): Starburst: a target expansion algorithm for non-uniform target distributions. In: Levialdi, Stefano (ed.) AVI 2008 - Proceedings of the working conference on Advanced Visual Interfaces May 28-30, 2008, Napoli, Italy. pp. 129-137. Available online

 

Li, Kevin A., Baudisch, Patrick, Griswold, William G. and Hollan, James D. (2008): Tapping and rubbing: exploring new dimensions of tactile feedback with voice coil motors. In: Cousins, Steve B. and Beaudouin-Lafon, Michel (eds.) Proceedings of the 21st Annual ACM Symposium on User Interface Software and Technology October 19-22, 2008, Monterey, CA, USA. pp. 181-190. Available online

2007
 

Hinckley, Ken, Zhao, Shengdong, Sarin, Raman, Baudisch, Patrick, Cutrell, Edward, Shilman, Michael and Tan, Desney S. (2007): InkSeine: In Situ search for active note taking. In: Proceedings of ACM CHI 2007 Conference on Human Factors in Computing Systems 2007. pp. 251-260. Available online

Using a notebook to sketch designs, reflect on a topic, or capture and extend creative ideas are examples of active note taking tasks. Optimal experience for such tasks demands concentration without interruption. Yet active note taking may also require reference documents or emails from team members. InkSeine is a Tablet PC application that supports active note taking by coupling a pen-and-ink interface with an in situ search facility that flows directly from a user's ink notes (Fig. 1). InkSeine integrates four key concepts: it leverages preexisting ink to initiate a search; it provides tight coupling of search queries with application content; it persists search queries as first class objects that can be commingled with ink notes; and it enables a quick and flexible workflow where the user may freely interleave inking, searching, and gathering content. InkSeine offers these capabilities in an interface that is tailored to the unique demands of pen input, and that maintains the primacy of inking above all other tasks.

© All rights reserved Hinckley et al. and/or ACM Press

 

Vogel, Daniel and Baudisch, Patrick (2007): Shift: a technique for operating pen-based interfaces using touch. In: Proceedings of ACM CHI 2007 Conference on Human Factors in Computing Systems 2007. pp. 657-666. Available online

Retrieving the stylus of a pen-based device takes time and requires a second hand. Especially for short intermittent interactions, many users therefore choose to use their bare fingers. Although convenient, this increases targeting times and error rates. We argue that the main reasons are the occlusion of the target by the user's finger and ambiguity about which part of the finger defines the selection point. We propose a pointing technique we call Shift that is designed to address these issues. When the user touches the screen, Shift creates a callout showing a copy of the occluded screen area and places it in a non-occluded location. The callout also shows a pointer representing the selection point of the finger. Using this visual feedback, users guide the pointer into the target by moving their finger on the screen surface and commit the target acquisition by lifting the finger. Unlike existing techniques, Shift is only invoked when necessary: over large targets no callout is created and users enjoy the full performance of an unaltered touch screen. We report the results of a user study showing that with Shift participants can select small targets with much lower error rates than an unaided touch screen and that Shift is faster than Offset Cursor for larger targets.

© All rights reserved Vogel and Baudisch and/or ACM Press
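
Shift's escalation rule (plain touch over large targets, callout only over small, occluded ones) might be sketched as follows; the size threshold and callout placement are assumptions, not values from the paper:

# Sketch of Shift's escalation logic: the callout is only created when
# the touched target is small enough to be occluded by the finger.
# Threshold and callout placement below are illustrative assumptions.

FINGER_DIAMETER_MM = 15.0  # assumed typical finger contact size

def on_touch_down(target_size_mm, touch_pos, find_free_area):
    """Decide whether to show the occlusion-avoiding callout.

    Over large targets, behave like an unaltered touch screen; over
    small ones, show a callout copy of the occluded area with a pointer
    the user can fine-tune before lifting the finger to commit.
    """
    if target_size_mm >= FINGER_DIAMETER_MM:
        return {"mode": "direct", "selection": touch_pos}
    callout_pos = find_free_area(touch_pos)  # a non-occluded location
    return {"mode": "callout",
            "shows_copy_of": touch_pos,
            "placed_at": callout_pos}

# Example: a 6 mm target triggers the callout, placed above the finger.
result = on_touch_down(6.0, (120, 340), lambda p: (p[0], p[1] - 60))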

 

Zhao, Shengdong, Dragicevic, Pierre, Chignell, Mark, Balakrishnan, Ravin and Baudisch, Patrick (2007): Earpod: eyes-free menu selection using touch input and reactive audio feedback. In: Proceedings of ACM CHI 2007 Conference on Human Factors in Computing Systems 2007. pp. 1395-1404. Available online

We present the design and evaluation of earPod: an eyes-free menu technique using touch input and reactive auditory feedback. Studies comparing earPod with an iPod-like visual menu technique on reasonably-sized static menus indicate that they are comparable in accuracy. In terms of efficiency (speed), earPod is initially slower, but outperforms the visual technique within 30 minutes of practice. Our results indicate that earPod is potentially a reasonable eyes-free menu technique for general use, and is a particularly exciting technique for use in mobile device interfaces.

© All rights reserved Zhao et al. and/or ACM Press

 

Wigdor, Daniel, Forlines, Clifton, Baudisch, Patrick, Barnwell, John and Shen, Chia (2007): Lucid touch: a see-through mobile device. In: Proceedings of the ACM Symposium on User Interface Software and Technology October 7-10, 2007, Newport, Rhode Island, USA. pp. 269-278. Available online

Touch is a compelling input modality for interactive devices; however, touch input on the small screen of a mobile device is problematic because a user's fingers occlude the graphical elements he wishes to work with. In this paper, we present LucidTouch, a mobile device that addresses this limitation by allowing the user to control the application by touching the back of the device. The key to making this usable is what we call pseudo-transparency: by overlaying an image of the user's hands onto the screen, we create the illusion of the mobile device itself being semi-transparent. This pseudo-transparency allows users to accurately acquire targets while not occluding the screen with their fingers and hand. Lucid Touch also supports multi-touch input, allowing users to operate the device simultaneously with all 10 fingers. We present initial study results that indicate that many users found touching on the back to be preferable to touching on the front, due to reduced occlusion, higher precision, and the ability to make multi-finger input.

© All rights reserved Wigdor et al. and/or ACM Press

2006
 

Hinckley, Ken, Guimbretiere, Francois, Baudisch, Patrick, Sarin, Raman, Agrawala, Maneesh and Cutrell, Edward (2006): The springboard: multiple modes in one spring-loaded control. In: Proceedings of ACM CHI 2006 Conference on Human Factors in Computing Systems 2006. pp. 181-190. Available online

Modes allow a few inputs to invoke many operations, yet if a user misclassifies or forgets the state of a system, modes can result in errors. Spring-loaded modes (quasimodes) maintain a mode while the user holds a control such as a button or key. The Springboard is an interaction technique for tablet computers that extends quasimodes to encompass multiple tool modes in a single spring-loaded control. The Springboard allows the user to continue holding down a nonpreferred-hand command button after selecting a tool from a menu as a way to repeatedly apply the same tool. We find the Springboard improves performance for both a local marking menu and for a non-local marking menu ("lagoon") at the lower left corner of the screen. Despite the round-trip costs incurred to move the pen to a tool lagoon, a keystroke-level analysis of the true cost of each technique reveals the local marking menu is not significantly faster.

© All rights reserved Hinckley et al. and/or ACM Press

 

Grossman, Tovi, Hinckley, Ken, Baudisch, Patrick, Agrawala, Maneesh and Balakrishnan, Ravin (2006): Hover widgets: using the tracking state to extend the capabilities of pen-operated devices. In: Proceedings of ACM CHI 2006 Conference on Human Factors in Computing Systems 2006. pp. 861-870. Available online

We present Hover Widgets, a new technique for increasing the capabilities of pen-based interfaces. Hover Widgets are implemented by using the pen movements above the display surface, in the tracking state. Short gestures while hovering, followed by a pen down, access the Hover Widgets, which can be used to activate localized interface widgets. By using the tracking state movements, Hover Widgets create a new command layer which is clearly distinct from the input layer of a pen interface. In a formal experiment Hover Widgets were found to be faster than a more traditional command activation technique, and also reduced errors due to divided attention.

© All rights reserved Grossman et al. and/or ACM Press

 

Benko, Hrvoje, Wilson, Andrew D. and Baudisch, Patrick (2006): Precise selection techniques for multi-touch screens. In: Proceedings of ACM CHI 2006 Conference on Human Factors in Computing Systems 2006. pp. 1263-1272. Available online

The size of human fingers and the lack of sensing precision can make precise touch screen interactions difficult. We present a set of five techniques, called Dual Finger Selections, which leverage the recent development of multi-touch sensitive displays to help users select very small targets. These techniques facilitate pixel-accurate targeting by adjusting the control-display ratio with a secondary finger while the primary finger controls the movement of the cursor. We also contribute a "clicking" technique, called SimPress, which reduces motion errors during clicking and allows us to simulate a hover state on devices unable to sense proximity. We implemented our techniques on a multi-touch tabletop prototype that offers computer vision-based tracking. In our formal user study, we tested the performance of our three most promising techniques (Stretch, X-Menu, and Slider) against our baseline (Offset), on four target sizes and three input noise levels. All three chosen techniques outperformed the control technique in terms of error rate reduction and were preferred by our participants, with Stretch being the overall performance and preference winner.

© All rights reserved Benko et al. and/or ACM Press

 

Baudisch, Patrick, Sinclair, Mike and Wilson, Andrew (2006): Soap: a pointing device that works in mid-air. In: Proceedings of the ACM Symposium on User Interface Software and Technology 2006. pp. 43-46. Available online

Soap is a pointing device based on hardware found in a mouse, yet it works in mid-air. Soap consists of an optical sensor device moving freely inside a hull made of fabric. As the user applies pressure from the outside, the optical sensor moves independently of the hull. The optical sensor perceives this relative motion and reports it as position input. Soap offers many of the benefits of optical mice, such as high-accuracy sensing. We describe the design of a soap prototype and report our experiences with four application scenarios, including a wall display, Windows Media Center, slide presentation, and interactive video games.

© All rights reserved Baudisch et al. and/or ACM Press

 

Baudisch, Patrick, Tan, Desney S., Collomb, Maxime, Robbins, Dan, Hinckley, Ken, Agrawala, Maneesh, Zhao, Shengdong and Ramos, Gonzalo (2006): Phosphor: explaining transitions in the user interface using afterglow effects. In: Proceedings of the ACM Symposium on User Interface Software and Technology 2006. pp. 169-178. Available online

Sometimes users fail to notice a change that just took place on their display. For example, the user may have accidentally deleted an icon or a remote collaborator may have changed settings in a control panel. Animated transitions can help, but they force users to wait for the animation to complete. This can be cumbersome, especially in situations where users did not need an explanation. We propose a different approach. Phosphor objects show the outcome of their transition instantly; at the same time they explain their change in retrospect. Manipulating a phosphor slider, for example, leaves an afterglow that illustrates how the knob moved. The parallelism of instant outcome and explanation supports both types of users. Users who already understood the transition can continue interacting without delay, while those who are inexperienced or may have been distracted can take time to view the effects at their own pace. We present a framework of transition designs for widgets, icons, and objects in drawing programs. We evaluate phosphor objects in two user studies and report significant performance benefits for phosphor objects.

© All rights reserved Baudisch et al. and/or ACM Press

 

Baudisch, Patrick (2006): Interacting with Large Displays. In IEEE Computer, 39 (4) pp. 96-97. Available online

 

Ramos, Gonzalo, Robertson, George G., Czerwinski, Mary, Tan, Desney S., Baudisch, Patrick, Hinckley, Ken and Agrawala, Maneesh (2006): Tumble! Splat! helping users access and manipulate occluded content in 2D drawings. In: Celentano, Augusto (ed.) AVI 2006 - Proceedings of the working conference on Advanced visual interfaces May 23-26, 2006, Venezia, Italy. pp. 428-435. Available online

2005
 

Collomb, Maxime, Hascoet, Mountaz, Baudisch, Patrick and Lee, Brian (2005): Improving drag-and-drop on wall-size displays. In: Graphics Interface 2005 May 9-11, 2005, Victoria, British Columbia, Canada. pp. 25-32. Available online

On wall-size displays with pen or touch input, users can have difficulties reaching display contents located too high, too low, or too far away. Drag-and-drop interactions can be further complicated by bezels separating individual display units. Researchers have proposed a variety of interaction techniques to address this issue, such as extending the user's reach (e.g., push-and-throw) and bringing potential targets to the user (drag-and-pop). In this paper, we introduce a new technique called push-and-pop that combines the strengths of push-and-throw and drag-and-pop. We present two user studies comparing six different techniques designed for extending drag-and-drop to wall-size displays. In both studies, participants were able to file icons on a wall-size display fastest when using the push-and-pop interface.

© All rights reserved Collomb et al. and/or their publisher

 

Baudisch, Patrick, Cutrell, Edward, Hinckley, Ken and Eversole, Adam (2005): Snap-and-go: helping users align objects without the modality of traditional snapping. In: Proceedings of ACM CHI 2005 Conference on Human Factors in Computing Systems 2005. pp. 301-310. Available online

Snapping is a widely used technique that helps users position graphical objects precisely, e.g., to align them with a grid or other graphical objects. Unfortunately, whenever users want to position a dragged object close to such an aligned location, they first need to deactivate snapping. We propose snap-and-go, a snapping technique that overcomes this limitation. By merely stopping dragged objects at aligned positions, rather than "warping" them there, snap-and-go helps users align objects, yet still allows placing dragged objects anywhere else. While this approach of inserting additional motor space renders snap-and-go slightly slower than traditional snapping, snap-and-go simplifies the user interface by eliminating the need for a deactivation option and thereby allows introducing snapping to application scenarios where traditional snapping is inapplicable. In our user studies, participants were able to align objects up to 138% (1D) and 231% (2D) faster with snap-and-go than without, and snap-and-go proved robust against the presence of distracting snap targets.

© All rights reserved Baudisch et al. and/or ACM Press
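
In one dimension, the idea of stopping the object at aligned positions by inserting extra motor space can be sketched as a mapping from the raw mouse coordinate to the object position; the snap positions and pad width below are made up:

# Sketch of snap-and-go in 1D: instead of warping the dragged object to
# a snap position, extra motor space is inserted there, so the object
# pauses at the aligned position while the mouse keeps moving.

SNAP_TARGETS = [100, 250]  # aligned positions in pixels (e.g., gridlines)
MOTOR_PAD = 12             # mouse pixels consumed at each snap target

def object_position(mouse_x: int) -> int:
    """Map the raw mouse coordinate to the dragged object's position,
    paying MOTOR_PAD pixels of travel at every snap target passed."""
    pos = mouse_x
    for target in sorted(SNAP_TARGETS):
        if pos <= target:
            break  # targets beyond the object cost nothing yet
        # the mouse passed this target: consume the pad, but never pull
        # the object back behind the target itself
        pos = max(target, pos - MOTOR_PAD)
    return pos

# While the mouse moves from 100 to 112, the object rests at 100
# (aligned); afterwards it tracks the mouse again, offset by the pad.
print([object_position(x) for x in (98, 100, 106, 112, 115)])
# -> [98, 100, 100, 100, 103]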

 

Hinckley, Ken, Baudisch, Patrick, Ramos, Gonzalo and Guimbretiere, Francois (2005): Design and analysis of delimiters for selection-action pen gesture phrases in scriboli. In: Proceedings of ACM CHI 2005 Conference on Human Factors in Computing Systems 2005. pp. 451-460. Available online

We present a quantitative analysis of delimiters for pen gestures. A delimiter is "something different" in the input stream that a computer can use to determine the structure of input phrases. We study four techniques for delimiting a selection-action gesture phrase consisting of lasso selection plus marking-menu-based command activation. Pigtail is a new technique that uses a small loop to delimit lasso selection from marking (Fig. 1). Handle adds a box to the end of the lasso, from which the user makes a second stroke for marking. Timeout uses dwelling with the pen to delimit the lasso from the mark. Button uses a button press to signal when to delimit the gesture. We describe the role of delimiters in our Scriboli pen interaction testbed, and show how Pigtail supports scope selection, command activation, and direct manipulation all in a single fluid pen gesture.

© All rights reserved Hinckley et al. and/or ACM Press

 

Lam, Heidi and Baudisch, Patrick (2005): Summary thumbnails: readable overviews for small screen web browsers. In: Proceedings of ACM CHI 2005 Conference on Human Factors in Computing Systems 2005. pp. 681-690. Available online

In order to display web pages designed for desktop-sized monitors, some small-screen web browsers provide single-column or thumbnail views. Both have limitations. Single-column views affect page layouts and require users to scroll significantly more. Thumbnail views tend to reduce contained text beyond readability, so differentiating visually similar areas requires users to zoom. In this paper, we present Summary Thumbnails, thumbnail views enhanced with readable text fragments. Summary Thumbnails help users identify viewed material and distinguish between visually similar areas. In our user study, participants located content in web pages about 41% faster and with 71% lower error rates when using the Summary Thumbnail interface than when using the Single-Column interface, and zoomed 59% less than when using the Thumbnail interface. Nine of the eleven participants preferred Summary Thumbnails over both the Thumbnail and Single-Column interfaces.

© All rights reserved Lam and Baudisch and/or ACM Press

 

Smith, Graham, Schraefel, M. C. and Baudisch, Patrick (2005): Curve dial: eyes-free parameter entry for GUIs. In: Proceedings of ACM CHI 2005 Conference on Human Factors in Computing Systems 2005. pp. 1146-1147. Available online

In this demonstration, we introduce "curve dial", a technique designed to extend gesture-based interactions like FlowMenus with eyes-free parameter entry. FlowMenus let users enter numerical parameters with "dialing" strokes surrounding the center of a radial menu. This centering requires users to keep their eyes on the menu in order to align the pen with its center before initiating a gesture. Curve dial instead tracks the curvature of the path created by the pen: since curvature is location-independent, curvature dialing does not require users to keep track of the menu center and is therefore eyes-free. We demonstrate curve dial with the example of a simple application that allows users to scroll through a document eyes-free.

© All rights reserved Smith et al. and/or ACM Press

 

Huynh, David, Drucker, Steven M., Baudisch, Patrick and Wong, Curtis (2005): Time quilt: scaling up zoomable photo browsers for large, unstructured photo collections. In: Proceedings of ACM CHI 2005 Conference on Human Factors in Computing Systems 2005. pp. 1937-1940. Available online

In the absence of manual organization of large digital photo collections, the photos' visual content and creation dates can help support time-based visual search tasks. Current zoomable photo browsers are designed to support visual searches by maximizing screen-space usage. However, their space-filling layouts fail to convey temporal order effectively. We propose a novel layout called time quilt that trades off screen-space usage for better presentation of temporal order. In an experimental comparison of space-filling, linear timeline, and time quilt layouts, participants carried out the task of finding photos in their personal photo collections averaging 4,000 items. They performed 45% faster on time quilt. Furthermore, while current zoomable photo browsers are designed for visual searches, this support does not scale to thousands of photos: individual thumbnails become less informative as they grow smaller. We found a subjective preference for the use of representative photos to provide an overview for visual searches in place of the diminishing thumbnails.

© All rights reserved Huynh et al. and/or ACM Press

 

Baudisch, Patrick, Tan, Desney S., Steedly, Drew, Rudolph, Eric, Uyttendaele, Matt, Pal, Chris and Szeliski, Richard (2005): Panoramic viewfinder: providing a real-time preview to help users avoid flaws in panoramic pictures. In: Proceedings of OZCHI05, the CHISIG Annual Conference on Human-Computer Interaction 2005. pp. 1-10. Available online

Image stitching allows users to combine multiple regular-sized photographs into a single wide-angle picture, often referred to as a panoramic picture. To create such a panoramic picture, users traditionally first take all the photographs, then upload them to a PC and stitch. During stitching, however, users often discover that the produced panorama contains artifacts or is incomplete. Fixing these flaws requires retaking individual images, which is often difficult by this time. In this paper, we present Panoramic Viewfinder, an interactive system for panorama construction that offers a real-time preview of the panorama while shooting. As the user swipes the camera across the scene, each photo is immediately added to the preview. By making ghosting and stitching failures apparent, the system allows users to immediately retake necessary images. The system also provides a preview of the cropped panorama. When this preview includes all desired scene elements, users know that the panorama will be complete. Unlike earlier work in the field of real-time stitching, this paper focuses on the user interface aspects of real-time stitching. We describe our prototype, individual shooting modes, and an implementation overview.

© All rights reserved Baudisch et al. and/or their publisher

 

Robertson, George G., Czerwinski, Mary, Baudisch, Patrick, Meyers, Brian, Robbins, Daniel C., Smith, Greg and Tan, Desney S. (2005): The Large-Display User Experience. In IEEE Computer Graphics and Applications, 25 (4) pp. 44-51. Available online

2004
 

Baudisch, Patrick, Pruitt, John and Ball, Steve (2004): Flat volume control: improving usability by hiding the volume control hierarchy in the user interface. In: Dykstra-Erickson, Elizabeth and Tscheligi, Manfred (eds.) Proceedings of ACM CHI 2004 Conference on Human Factors in Computing Systems April 24-29, 2004, Vienna, Austria. pp. 255-262. Available online

The hardware-inspired volume user interface model that is in use across all of today's operating systems is the source of several usability issues. One of them is that restoring the volume of a muted application can require an inappropriately long troubleshooting process: in addition to manipulating the application's volume and mute controls, users may also have to visit the system's volume control panel to find and adjust additional controls there. The "flat" volume control model presented in this paper eliminates this and other problems by hiding the hardware-oriented volume model from the user. Using the flat model, users use one slider per application to indicate how loud they want the respective applications to play; the slider then internally adjusts all hardware volume variables necessary to obtain the requested output. By offering a single point of control for each application, the flat model simplifies controlling application volume and restoring muted applications. In our studies, participants completed all four volume control and mixing tasks faster and with less error when using the flat model than when using the existing hardware-oriented volume control model. Participants also indicated a subjective preference for the flat model over the existing model.

© All rights reserved Baudisch et al. and/or ACM Press

 

Baudisch, Patrick and Gutwin, Carl (2004): Multiblending: displaying overlapping windows simultaneously without the drawbacks of alpha blending. In: Dykstra-Erickson, Elizabeth and Tscheligi, Manfred (eds.) Proceedings of ACM CHI 2004 Conference on Human Factors in Computing Systems April 24-29, 2004, Vienna, Austria. pp. 367-374. Available online

Alpha blending allows the simultaneous display of overlapping windows, such as palette windows in visual workspaces. Although alpha blending has been used in some applications, such as games, it has not been widely adopted. One reason for the limited acceptance is that in many scenarios, alpha blending compromises the readability of content. We introduce a new blending mechanism called multiblending that uses a vector of blending weights, one for each class of features, rather than a single transparency value. Multiblending can in most cases be automatically optimized to preserve the most relevant features of both the palette and the background window. We present the results of a user study in which multiblended palettes provided higher recognizability of both the background and the palette than the best participating version of alpha blending.

© All rights reserved Baudisch and Gutwin and/or ACM Press

 

Baudisch, Patrick, Xie, Xing, Wang, Chong and Ma, Wei-Ying (2004): Collapse-to-zoom: viewing web pages on small screen devices by interactively removing irrelevant content. In: Proceedings of the 2004 ACM Symposium on User Interface Software and Technology 2004. pp. 91-94. Available online

Overview visualizations for small-screen web browsers were designed to provide users with visual context and to allow them to rapidly zoom in on tiles of relevant content. Given that content in the overview is reduced, however, users are often unable to tell which tiles hold the relevant material, which can force them to adopt a time-consuming hunt-and-peck strategy. Collapse-to-zoom addresses this issue by offering an alternative exploration strategy. In addition to allowing users to zoom into relevant areas, collapse-to-zoom allows users to collapse areas deemed irrelevant, such as columns containing menus, archive material, or advertising. Collapsing content causes all remaining content to expand in size and reveal more detail, which increases the user's chance of identifying relevant content. Collapse-to-zoom navigation is based on a hybrid between a marquee selection tool and a marking menu, called marquee menu. It offers four commands for collapsing content areas at different granularities and for switching to a full-size reading view of what is left of the page.

© All rights reserved Baudisch et al. and/or ACM Press

 

Baudisch, Patrick, Lee, Bongshin and Hanna, Libby (2004): Fishnet, a fisheye web browser with search term popouts: a comparative evaluation with overview and linear view. In: Costabile, Maria Francesca (ed.) AVI 2004 - Proceedings of the working conference on Advanced visual interfaces May 25-28, 2004, Gallipoli, Italy. pp. 133-140. Available online

 

Hinckley, Ken, Ramos, Gonzalo, Guimbretiere, Francois, Baudisch, Patrick and Smith, Marc (2004): Stitching: pen gestures that span multiple displays. In: Costabile, Maria Francesca (ed.) AVI 2004 - Proceedings of the working conference on Advanced visual interfaces May 25-28, 2004, Gallipoli, Italy. pp. 23-31. Available online

 

Robertson, George G., Horvitz, Eric, Czerwinski, Mary, Baudisch, Patrick, Hutchings, Dugald Ralph, Meyers, Brian, Robbins, Daniel C. and Smith, Greg (2004): Scalable Fabric: flexible task management. In: Costabile, Maria Francesca (ed.) AVI 2004 - Proceedings of the working conference on Advanced visual interfaces May 25-28, 2004, Gallipoli, Italy. pp. 85-89. Available online

2003
 

Baudisch, Patrick and Rosenholtz, Ruth (2003): Halo: a technique for visualizing off-screen objects. In: Cockton, Gilbert and Korhonen, Panu (eds.) Proceedings of the ACM CHI 2003 Human Factors in Computing Systems Conference April 5-10, 2003, Ft. Lauderdale, Florida, USA. pp. 481-488.

 

Baudisch, Patrick, Cutrell, Edward, Robbins, Dan, Czerwinski, Mary, Tandler, Peter, Bederson, Benjamin B. and Zierlinger, Alex (2003): Drag-and-Pop and Drag-and-Pick: Techniques for Accessing Remote Screen Content on Touch- and Pen-Operated Systems. In: Proceedings of IFIP INTERACT03: Human-Computer Interaction 2003, Zurich, Switzerland. p. 65.

 

Baudisch, Patrick, Cutrell, Edward and Robertson, George G. (2003): High-Density Cursor: a Visualization Technique that Helps Users Keep Track of Fast-moving Mouse Cursors. In: Proceedings of IFIP INTERACT03: Human-Computer Interaction 2003, Zurich, Switzerland. p. 236.

 

Baudisch, Patrick, DeCarlo, Douglas, Duchowski, Andrew T. and Geisler, Wilson S. (2003): Focusing on the essential: considering attention in display design. In Communications of the ACM, 46 (3) pp. 60-66. Available online

2002
 

Baudisch, Patrick, Good, Nathaniel, Bellotti, Victoria and Schraedley, Pamela (2002): Keeping things in context: a comparative evaluation of focus plus context screens, overviews, and zooming. In: Terveen, Loren (ed.) Proceedings of the ACM CHI 2002 Conference on Human Factors in Computing Systems Conference April 20-25, 2002, Minneapolis, Minnesota. pp. 259-266.

2001
 

Baudisch, Patrick, Good, Nathaniel and Stewart, Paul (2001): Focus plus context screens: combining display technology with visualization techniques. In: Marks, Joe and Mynatt, Elizabeth D. (eds.) Proceedings of the 14th annual ACM symposium on User interface software and technology November 11 - 14, 2001, Orlando, Florida. pp. 31-40. Available online

Computer users working with large visual documents, such as large layouts, blueprints, or maps, perform tasks that require them to simultaneously access overview information while working on details. To avoid the need for zooming, users currently have to choose between using a sufficiently large screen or applying appropriate visualization techniques. Currently available hi-res "wall-size" screens, however, are cost-intensive, space-intensive, or both. Visualization techniques allow the user to more efficiently use the given screen space, but in exchange they either require the user to switch between multiple views or they introduce distortion. In this paper, we present a novel approach to simultaneously display focus and context information. Focus plus context screens consist of a hi-res display and a larger low-res display. Image content is displayed such that the scaling of the display content is preserved, while its resolution may vary according to which display region it is displayed in. Focus plus context screens are applicable to practically all tasks that currently use overviews or fisheye views, but unlike these visualization techniques, focus plus context screens provide a single, non-distorted view. We present a prototype that seamlessly integrates an LCD with a projection screen and demonstrate four applications that we have adapted so far.

© All rights reserved Baudisch et al. and/or ACM Press

1999
 

Baudisch, Patrick (1999): Using a Painting Metaphor to Rate Large Numbers of Objects. In: Bullinger, Hans-Jörg (ed.) HCI International 1999 - Proceedings of the 8th International Conference on Human-Computer Interaction August 22-26, 1999, Munich, Germany. pp. 266-270.

1998
 

Baudisch, Patrick (1998): Don't Click, Paint! Using Toggle Maps to Manipulate Sets of Toggle Switches. In: Mynatt, Elizabeth D. and Jacob, Robert J. K. (eds.) Proceedings of the 11th annual ACM symposium on User interface software and technology November 01 - 04, 1998, San Francisco, California, United States. pp. 65-66. Available online

A toggle map is a set of toggle switches that allows the manipulation of several switches with a single mouse drag interaction. Because toggle switches are functionally equivalent to black and white pixels, interaction techniques from paint programs can be adopted for this task. A controlled experiment shows that toggle maps can speed up interfaces containing many toggle switches, such as those used for the interactive definition of user profiles. Toggle maps can also be applied to segmented continuous variables. As an example, an efficient timer dialog is presented.

© All rights reserved Baudisch and/or ACM Press
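
A minimal sketch of the paint-style semantics, assuming the common paint-program convention that the first switch touched determines the value "painted" across the rest of the stroke:

# Sketch of the toggle-map interaction: like painting pixels, a drag
# sets every switch it passes to one value, determined by the state of
# the first switch touched, so a single stroke can set or clear many
# toggles. The stroke semantics are an assumed convention.

class ToggleMap:
    def __init__(self, n):
        self.switches = [False] * n
        self._paint_value = None

    def drag_start(self, index):
        # the first switch flips, and its new value becomes the "paint"
        self._paint_value = not self.switches[index]
        self.switches[index] = self._paint_value

    def drag_over(self, index):
        # subsequent switches are painted rather than toggled,
        # so the stroke's effect stays uniform
        self.switches[index] = self._paint_value

m = ToggleMap(8)
m.drag_start(2)
for i in (3, 4, 5):
    m.drag_over(i)
print(m.switches)
# -> [False, False, True, True, True, True, False, False]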

1996
 

Baudisch, Patrick (1996): The Cage: Efficient Construction in 3D using a Cubic Adaptive Grid. In: Kurlander, David, Brown, Marc and Rao, Ramana (eds.) Proceedings of the 9th annual ACM symposium on User interface software and technology November 06 - 08, 1996, Seattle, Washington, United States. pp. 171-172. Available online

The Cage is an easy-to-use 3D grid. Built into a 3D modeler, it provides a visualized reference coordinate system that helps the user to orient himself in 3D space, and that supports efficient alignment and snapping methods. It can be adapted with a single mouse click to any new viewing situation and reference system. The Cage was implemented in C++ under Open Inventor on Silicon Graphics workstations. It was tested as part of a 3D authoring tool for virtual TV studios.

© All rights reserved Baudisch and/or ACM Press
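
The grid's basic snapping operation reduces to rounding each coordinate to the nearest grid plane; a minimal sketch, with the Cage's single-click adaptation imagined as re-deriving origin and cell size from a chosen reference system:

# Minimal sketch of snapping a 3D point to a cubic grid, the basic
# operation behind a tool like the Cage. Grid origin and cell size are
# parameters; adapting the grid to a new reference system would amount
# to re-deriving these values (an assumption for illustration).

def snap_to_grid(point, cell_size=1.0, origin=(0.0, 0.0, 0.0)):
    """Round each coordinate to the nearest grid plane."""
    return tuple(o + round((p - o) / cell_size) * cell_size
                 for p, o in zip(point, origin))

print(snap_to_grid((1.3, -0.6, 2.49), cell_size=0.5))
# -> (1.5, -0.5, 2.5)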

 

Page Information

Page maintainer: The Editorial Team
URL: http://www.interaction-design.org/references/authors/patrick_baudisch.html