Publication statistics

Publication period: 2009-2013
Publication count: 8
Number of co-authors: 13


Number of publications with 3 favourite co-authors:

Patrick Baudisch: 4
Albrecht Schmidt: 2
Andreas Bulling: 2



Productive colleagues

Christian Holz's 3 most productive colleagues, by number of publications:

Albrecht Schmidt: 111
Steven K. Feiner: 76
Patrick Baudisch: 57




Christian Holz


Publications by Christian Holz (bibliography)


Schmidt, Albrecht, Bulling, Andreas and Holz, Christian (eds.): 4th Augmented Human International Conference, March 7-8, 2013, Stuttgart, Germany.


Schmidt, Albrecht, Bulling, Andreas and Holz, Christian (eds.): Proceedings of the 4th Augmented Human International Conference, AH 2013, March 7-8, 2013, Stuttgart, Germany.


Holz, Christian and Wilson, Andrew (2011): Data miming: inferring spatial object descriptions from human gesture. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 811-820. Available online

Speakers often use hand gestures when talking about or describing physical objects. Such gesture is particularly useful when the speaker is conveying distinctions of shape that are difficult to describe verbally. We present data miming -- an approach to making sense of gestures as they are used to describe concrete physical objects. We first observe participants as they use gestures to describe real-world objects to another person. From these observations, we derive the data miming approach, which is based on a voxel representation of the space traced by the speaker's hands over the duration of the gesture. In a final proof-of-concept study, we demonstrate a prototype implementation of matching the input voxel representation to select among a database of known physical objects.

© All rights reserved Holz and Wilson and/or their publisher
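The core of the data miming approach described above, a voxel representation of the space traced by the hands, matched against a database of known objects, can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the grid resolution, the coordinate bounds, and the Jaccard overlap metric are all assumptions of mine.

```python
import numpy as np

GRID = 16  # voxels per axis (illustrative assumption)

def voxelize(points, lo=-1.0, hi=1.0):
    """Map (N, 3) hand-trace points in [lo, hi]^3 to a boolean occupancy grid."""
    grid = np.zeros((GRID, GRID, GRID), dtype=bool)
    idx = ((np.asarray(points, dtype=float) - lo) / (hi - lo) * GRID).astype(int)
    idx = np.clip(idx, 0, GRID - 1)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid

def best_match(query, database):
    """Return the database key whose voxel grid overlaps the query most."""
    def score(key):
        candidate = database[key]
        inter = np.logical_and(query, candidate).sum()
        union = np.logical_or(query, candidate).sum()
        return inter / union if union else 0.0  # Jaccard overlap
    return max(database, key=score)
```

In this sketch, the traced gesture and each known object reduce to occupancy grids, so matching is a simple set-overlap maximization over the database.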


Holz, Christian and Baudisch, Patrick (2011): Understanding touch. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 2501-2510. Available online

Current touch devices, such as capacitive touchscreens, are based on the implicit assumption that users acquire targets with the center of the contact area between finger and device. Findings from our previous work indicate, however, that such devices are subject to systematic error offsets. This suggests that the underlying assumption is most likely wrong. In this paper, we therefore revisit this assumption. In a series of three user studies, we find evidence that the features that users align with the target are visual features. These features are located on the top of the user's fingers, not at the bottom, as assumed by traditional devices. We present the projected center model, under which error offsets drop to 1.6mm, compared to 4mm for the traditional model. This suggests that the new model is indeed a good approximation of how users conceptualize touch input. The primary contribution of this paper is to help understand touch, one of the key input technologies in human-computer interaction. At the same time, our findings inform the design of future touch input technology. They explain the inaccuracy of traditional touch devices as a "parallax" artifact between user control based on the top of the finger and sensing based on the bottom side of the finger. We conclude that certain camera-based sensing technologies can inherently be more accurate than contact area-based sensing.

© All rights reserved Holz and Baudisch and/or their publisher


Gustafson, Sean, Holz, Christian and Baudisch, Patrick (2011): Imaginary phone: learning imaginary interfaces by transferring spatial memory from a familiar device. In: Proceedings of the 2011 ACM Symposium on User Interface Software and Technology 2011. pp. 283-292. Available online

We propose a method for learning how to use an imaginary interface (i.e., a spatial non-visual interface) that we call "transfer learning". By using a physical device (e.g. an iPhone) a user inadvertently learns the interface and can then transfer that knowledge to an imaginary interface. We illustrate this concept with our Imaginary Phone prototype. With it, users interact by mimicking the use of a physical iPhone by tapping and sliding on their empty non-dominant hand without visual feedback. Pointing on the hand is tracked using a depth camera and touch events are sent wirelessly to an actual iPhone, where they invoke the corresponding actions. Our prototype allows the user to perform everyday tasks such as picking up a phone call or launching the timer app and setting an alarm. Imaginary Phone thereby serves as a shortcut that frees users from the necessity of retrieving the actual physical device. We present two user studies that validate the three assumptions underlying the transfer learning method. (1) Users build up spatial memory automatically while using a physical device: participants knew the correct location of 68% of their own iPhone home screen apps by heart. (2) Spatial memory transfers from a physical to an imaginary interface: participants recalled 61% of their home screen apps when recalling app location on the palm of their hand. (3) Palm interaction is precise enough to operate a typical mobile phone: participants could reliably acquire 0.95cm wide iPhone targets on their palm, sufficiently large to operate any iPhone standard widget.

© All rights reserved Gustafson et al. and/or ACM Press


Holz, Christian and Baudisch, Patrick (2010): The generalized perceived input point model and how to double touch accuracy by extracting fingerprints. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 581-590. Available online

It is generally assumed that touch input cannot be accurate because of the fat finger problem, i.e., the softness of the fingertip combined with the occlusion of the target by the finger. In this paper, we show that this is not the case. We base our argument on a new model of touch inaccuracy. Our model is not based on the fat finger problem, but on the perceived input point model. In its published form, this model states that touch screens report touch location at an offset from the intended target. We generalize this model so that it represents offsets for individual finger postures and users. We thereby switch from the traditional 2D model of touch to a model that considers touch a phenomenon in 3-space. We report a user study, in which the generalized model explained 67% of the touch inaccuracy that was previously attributed to the fat finger problem. In the second half of this paper, we present two devices that exploit the new model in order to improve touch accuracy. Both model touch on a per-posture and per-user basis in order to increase accuracy by applying respective offsets. Our RidgePad prototype extracts posture and user ID from the user's fingerprint during each touch interaction. In a user study, it achieved 1.8 times higher accuracy than a simulated capacitive baseline condition. A prototype based on optical tracking achieved 3.3 times higher accuracy. The increase in accuracy can be used to make touch interfaces more reliable, to pack up to 3.3² > 10 times more controls into the same surface, or to bring touch input to very small mobile devices.

© All rights reserved Holz and Baudisch and/or their publisher
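The generalized perceived input point model described above amounts to calibrating a 2-D error offset per (user, posture) pair and subtracting it from the reported touch location. A minimal sketch of that idea follows; the class name, calibration interface, and units are illustrative assumptions of mine, not the paper's implementation.

```python
import numpy as np

class OffsetModel:
    """Per-user, per-posture touch offset correction (illustrative sketch)."""

    def __init__(self):
        self.offsets = {}  # (user, posture) -> mean (dx, dy) offset in mm

    def calibrate(self, user, posture, reported, intended):
        """Store the mean offset between reported and intended touch points."""
        reported = np.asarray(reported, dtype=float)
        intended = np.asarray(intended, dtype=float)
        self.offsets[(user, posture)] = (reported - intended).mean(axis=0)

    def correct(self, user, posture, touch):
        """Subtract the calibrated offset; fall back to the raw touch point."""
        offset = self.offsets.get((user, posture), np.zeros(2))
        return np.asarray(touch, dtype=float) - offset
```

In this sketch, a device that can identify user and finger posture (as RidgePad does from fingerprints) looks up the matching offset on every touch and reports the corrected location.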


Augsten, Thomas, Kaefer, Konstantin, Meusel, René, Fetzer, Caroline, Kanitz, Dorian, Stoff, Thomas, Becker, Torsten, Holz, Christian and Baudisch, Patrick (2010): Multitoe: high-precision interaction with back-projected floors based on high-resolution multi-touch input. In: Proceedings of the 2010 ACM Symposium on User Interface Software and Technology 2010. pp. 209-218. Available online

Tabletop applications cannot display more than a few dozen on-screen objects. The reason is their limited size: tables cannot become larger than arm's length without giving up direct touch. We propose creating direct touch surfaces that are orders of magnitude larger. We approach this challenge by integrating high-resolution multitouch input into a back-projected floor. At the same time, we maintain the purpose and interaction concepts of tabletop computers, namely direct manipulation. We base our hardware design on frustrated total internal reflection. Its ability to sense per-pixel pressure allows the floor to locate and analyze users' soles. We demonstrate how this allows the floor to recognize foot postures and identify users. These two functions form the basis of our system. They allow the floor to ignore users unless they interact explicitly, identify and track users based on their shoes, enable high-precision interaction, invoke menus, track heads, and allow users to control high-degree-of-freedom interactions using their feet. While we base our designs on a series of simple user studies, the primary contribution of this paper is in the engineering domain.

© All rights reserved Augsten et al. and/or their publisher


Holz, Christian and Feiner, Steven K. (2009): Relaxed selection techniques for querying time-series graphs. In: Proceedings of the ACM Symposium on User Interface Software and Technology 2009. pp. 213-222. Available online

Time-series graphs are often used to visualize phenomena that change over time. Common tasks include comparing values at different points in time and searching for specified patterns, either exact or approximate. However, tools that support time-series graphs typically separate query specification from the actual search process, allowing users to adapt the level of similarity only after specifying the pattern. We introduce relaxed selection techniques, in which users implicitly define a level of similarity that can vary across the search pattern, while creating a search query with a single-gesture interaction. Users sketch over part of the graph, establishing the level of similarity through either spatial deviations from the graph, or the speed at which they sketch (temporal deviations). In a user study, participants were significantly faster when using our temporally relaxed selection technique than when using traditional techniques. In addition, they achieved significantly higher precision and recall with our spatially relaxed selection technique compared to traditional techniques.

© All rights reserved Holz and Feiner and/or their publisher
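The spatially relaxed selection idea above can be outlined concisely: the user's sketch over part of the graph yields a per-sample tolerance equal to the sketch's deviation from the underlying values, and a candidate window matches when every sample falls inside that envelope. The following Python sketch is my simplified reading of the abstract; the minimum-tolerance floor and function names are assumptions, not the authors' implementation.

```python
import numpy as np

def query_from_sketch(graph_segment, sketch, floor=0.1):
    """Derive (pattern, per-sample tolerance) from a sketch over the graph."""
    graph_segment = np.asarray(graph_segment, dtype=float)
    sketch = np.asarray(sketch, dtype=float)
    # Looser sketching (larger deviation) implicitly relaxes the query.
    tolerance = np.maximum(np.abs(sketch - graph_segment), floor)
    return graph_segment, tolerance

def find_matches(series, pattern, tolerance):
    """Start indices of windows where every sample lies within the envelope."""
    series = np.asarray(series, dtype=float)
    n, m = len(series), len(pattern)
    return [i for i in range(n - m + 1)
            if np.all(np.abs(series[i:i + m] - pattern) <= tolerance)]
```

Because the tolerance varies across the pattern, parts of the graph the user traced precisely constrain matches tightly while loosely traced parts admit more variation, which is the "relaxed" aspect of the technique.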


Page Information

Page maintainer: The Editorial Team