Number of co-authors: 18
Number of publications with 3 favourite co-authors: Ravin Balakrishnan (7), Géry Casiez (6), Alec Azad (1)
Daniel Vogel's 3 most productive colleagues in number of publications: Ravin Balakrishnan (108), Andy Cockburn (68), Patrick Baudisch (57)
Personal Homepage: http://www.nonsequitoria.com/
Current place of employment: University of Waterloo
Daniel Vogel is a Human-Computer Interaction researcher with a focus on aesthetic input and interaction with large displays, tabletops, and tablets. Before returning to academia, Dan worked for more than ten years as a graphic designer, information architect, and interaction consultant for clients such as AutoDesk Press, Starbucks, and Sony. He holds PhD and MSc degrees in Computer Science from the University of Toronto, as well as a BFA in Intermedia Studies from the Emily Carr University of Art + Design. Currently, Dan is a Banting Postdoctoral Fellow in the Cheriton School of Computer Science at the University of Waterloo.
Publications by Daniel Vogel (bibliography)
Azad, Alec, Ruiz, Jaime, Vogel, Daniel, Hancock, Mark and Lank, Edward (2012): Territoriality and behaviour on and around large vertical publicly-shared displays. In: Proceedings of DIS 2012 Designing Interactive Systems. pp. 468-477.
We investigate behaviours on, and around, large vertical displays during concurrent usage. Using an observational field study, we identify fundamental patterns of how people use existing public displays: their orientation, positioning, group identification, and behaviour within and between social groups just-before, during, and just-after usage. These results are then used to motivate a controlled experiment where two individuals or two pairs of individuals complete tasks concurrently on a simulated large vertical display. Results from our controlled study demonstrate that vertical surface territories are similar to those found in horizontal tabletops in function, but their definitions and social conventions are different. In addition, the nature of use-while-standing systems results in more complex and dynamic physical territories around the display. We show that the anthropological notion of personal space must be slightly refined for application to vertical displays.
© All rights reserved Azad et al. and/or ACM Press
Roussel, Nicolas, Casiez, Géry, Aceituno, Jonathan and Vogel, Daniel (2012): Giving a hand to the eyes: leveraging input accuracy for subpixel interaction. In: Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 351-358.
We argue that the current practice of using integer positions for pointing events artificially constrains human precision capabilities. The high sensitivity of current input devices can be harnessed to enable precise direct manipulation "in between" pixels, called subpixel interaction. We provide detailed analysis of subpixel theory and implementation, including the critical component of revised control-display gain transfer functions. A prototype implementation is described with several illustrative examples. Guidelines for subpixel domain applicability are provided and an overview of required changes to operating systems and graphical user interface frameworks is discussed.
© All rights reserved Roussel et al. and/or ACM Press
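The core of the subpixel idea above lends itself to a short sketch: keep the cursor position in floating point so the device's extra resolution survives the mapping to the display. This is a minimal illustration, not the paper's implementation; the device resolution, display density, and gain value below are assumptions.

```python
def subpixel_cursor(counts, device_cpi=1600, screen_ppi=96, cd_gain=1.0):
    """Map raw device counts to a floating-point pixel displacement."""
    # One count moves the cursor by (screen_ppi / device_cpi) * gain pixels.
    # With an assumed 1600 CPI mouse on a 96 PPI screen that is 0.06 px per
    # count, so rounding pointing events to integer pixels would discard
    # most of the precision that subpixel interaction exploits.
    return counts * (screen_ppi / device_cpi) * cd_gain

subpixel_cursor(10)  # 10 counts of travel -> 0.6 px, well below one pixel
```

A real system would feed this through the revised control-display gain transfer functions the paper describes; the sketch only shows why integer pointing events throw away precision the hardware already delivers.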
Vogel, Daniel and Casiez, Géry (2011): Conté: multimodal input inspired by an artist's crayon. In: Proceedings of the 2011 ACM Symposium on User Interface Software and Technology 2011. pp. 357-366.
Conté is a small input device inspired by the way artists manipulate a real Conté crayon. By changing which corner, edge, end, or side is contacting the display, the operator can switch interaction modes using a single hand. Conté's rectangular prism shape enables both precise pen-like input and tangible handle interaction. Conté also has a natural compatibility with multi-touch input: it can be tucked in the palm to interleave same-hand touch input, or used to expand the vocabulary of bimanual touch. Inspired by informal interviews with artists, we catalogue Conté's characteristics, and use these to outline a design space. We describe a prototype device using common materials and simple electronics. With this device, we demonstrate interaction techniques in a test-bed drawing application. Finally, we discuss alternate hardware designs and future human factors research to study this new class of input.
© All rights reserved Vogel and Casiez and/or ACM Press
Vogel, Daniel and Balakrishnan, Ravin (2010): Occlusion-aware interfaces. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 263-272.
We define occlusion-aware interfaces as interaction techniques which know what area of the display is currently occluded, and use this knowledge to counteract potential problems and/or utilize the hidden area. As a case study, we describe the Occlusion-Aware Viewer, which identifies important regions hidden beneath the hand and displays them in a non-occluded area using a bubble-like callout. To determine what is important, we use an application agnostic image processing layer. For the occluded area, we use a user configurable, real-time version of Vogel et al.'s geometric model. In an evaluation with a simultaneous monitoring task, we find the technique can successfully mitigate the effects of occlusion, although issues with ambiguity and stability suggest further refinements. Finally, we present designs for three other occlusion-aware techniques for pop-ups, dragging, and a hidden widget.
© All rights reserved Vogel and Balakrishnan and/or their publisher
Vogel, Daniel and Balakrishnan, Ravin (2010): Direct Pen Interaction With a Conventional Graphical User Interface. In Human Computer Interaction, 25 (4) pp. 324-388.
We examine the usability and performance of Tablet PC direct pen input with a conventional graphical user interface (GUI). We use a qualitative observational study design with 16 participants divided into 4 groups: 1 mouse group for a baseline control and 3 Tablet PC groups recruited according to their level of experience. The study uses a scripted scenario of realistic tasks and popular office applications designed to exercise standard GUI components and cover typical interactions such as parameter selection, object manipulation, text selection, and ink annotation. We capture a rich set of logging data including 3D motion capture, video taken from the participants' point-of-view, screen capture video, and pen events such as movement and taps. To synchronize, segment, and annotate these logs, we used our own custom analysis software. We find that pen participants make more errors, perform inefficient movements, and express frustration during many tasks. Our observations reveal overarching problems with direct pen input: poor precision when tapping and dragging, errors caused by hand occlusion, instability and fatigue due to ergonomics and reach, cognitive differences between pen and mouse usage, and frustration due to limited input capabilities. We believe these to be the primary causes of nontext errors, which contribute to user frustration when using a pen with a conventional GUI. Finally, we discuss how researchers could address these issues without sacrificing the consistency of current GUIs and applications by making improvements at three levels: hardware, base interaction, and widget behavior.
© All rights reserved Vogel and Balakrishnan and/or Lawrence Erlbaum
Vogel, Daniel, Cudmore, Matthew, Casiez, Géry, Balakrishnan, Ravin and Keliher, Liam (2009): Hand occlusion with tablet-sized direct pen input. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 557-566.
We present results from an experiment examining the area occluded by the hand when using a tablet-sized direct pen input device. Our results show that the pen, hand, and forearm can occlude up to 47% of a 12 inch display. The shape of the occluded area varies between participants due to differences in pen grip rather than simply anatomical differences. For the most part, individuals adopt a consistent posture for long and short selection tasks. Overall, many occluded pixels are located higher relative to the pen than previously thought. From the experimental data, a five-parameter scalable circle and pivoting rectangle geometric model is presented which captures the general shape of the occluded area relative to the pen position. This model fits the experimental data much better than the simple bounding box model often used implicitly by designers. The space of fitted parameters also serves to quantify the shape of occlusion. Finally, an initial design for a predictive version of the model is discussed.
© All rights reserved Vogel et al. and/or ACM Press
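The "scalable circle and pivoting rectangle" model above can be sketched geometrically: a circle, offset from the pen tip, approximates the hand, and a rectangle pivoting from it approximates the forearm. The parameter values and the point-in-strip test below are illustrative assumptions, not the fitted parameters from the experiment.

```python
import math

def occluded(x, y, pen_x, pen_y, q=40.0, phi=math.radians(45),
             r=60.0, theta=math.radians(70), w=50.0):
    """Return True if screen point (x, y) falls in the modelled occluded area.

    q, phi: offset distance and angle of the hand circle from the pen tip;
    r: circle radius; theta: forearm pivot angle; w: forearm strip width.
    All values here are made-up defaults for illustration.
    """
    # Circle (hand): centred q away from the pen tip, in direction phi.
    cx = pen_x + q * math.cos(phi)
    cy = pen_y + q * math.sin(phi)
    if math.hypot(x - cx, y - cy) <= r:
        return True
    # Rectangle (forearm): a strip of width w extending from the circle
    # centre in direction theta, tested as projection plus distance-to-ray.
    dx, dy = math.cos(theta), math.sin(theta)
    t = (x - cx) * dx + (y - cy) * dy            # projection along the arm
    if t < 0:
        return False                             # behind the hand
    dist = abs(-(x - cx) * dy + (y - cy) * dx)   # perpendicular distance
    return dist <= w / 2

occluded(228, 228, 200, 200)  # point under the modelled hand -> True
```

An occlusion-aware interface would evaluate such a model every frame to decide where callouts and widgets can safely be drawn.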
Casiez, Géry and Vogel, Daniel (2008): The effect of spring stiffness and control gain with an elastic rate control pointing device. In: Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems April 5-10, 2008. pp. 1709-1718.
Isometric and elastic devices are most compatible with a rate control mapping. However, the effect of elastic stiffness has not been thoroughly investigated, nor its interaction with control gain. In a controlled experiment, these factors are investigated along with user feedback regarding ease-of-use and fatigue. The results reveal a U-shaped profile of control gain vs. movement time, with different profiles for different stiffness levels. Using the optimum control gain for each stiffness level, performance across stiffness levels was similar. However, users preferred lower stiffness and lower control gain levels due to increased controller displacement. Based on these results, design guidelines for elastic rate control devices are given.
© All rights reserved Casiez and Vogel and/or ACM Press
Casiez, Géry, Vogel, Daniel, Balakrishnan, Ravin and Cockburn, Andy (2008): The Impact of Control-Display Gain on User Performance in Pointing Tasks. In Human-Computer Interaction, 23 (3) pp. 215-250.
We theoretically and empirically examine the impact of control-display (CD) gain on mouse pointing performance. Two techniques for modifying CD gain are considered: constant gain (CG) where CD gain is uniformly adjusted by a constant multiplier, and pointer acceleration (PA) where CD gain is adjusted using a nonuniform function depending on movement characteristics. Both CG and PA are evaluated at various levels of relationship between mouse and cursor movement: from low levels, which have a near one-to-one mapping, through to high levels that aggressively amplify mouse movement. We further derive a model predicting the modification in motor-space caused by pointer acceleration. Experiments are then conducted on a standard desktop display and on a very large high-resolution display, allowing us to measure performance in high index of difficulty tasks where the effect of clutching may be pronounced. The evaluation apparatus was designed to minimize device quantization effects and used accurate 3D motion tracking equipment to analyze users' limb movements. On both displays, and in both gain techniques, we found that low levels of CD gain had a marked negative effect on performance, largely because of increased clutching and maximum limb speeds. High gain levels had relatively little impact on performance, with only a slight increase in time when selecting very small targets at high levels of constant gain. On the standard desktop display, pointer acceleration resulted in 3.3% faster pointing than constant gain and up to 5.6% faster with small targets. This supported the theoretical prediction of motor-space modification but fell short of the theoretical potential, possibly because PA caused an increase in target overshooting. Both techniques were accurately modeled by Fitts' law in all gain settings except for when there was a significant amount of clutching.
From our results, we derive a usable range of CD gain settings between thresholds of speed and accuracy given the capabilities of a pointing device, display, and the expected range of target widths and distances.
© All rights reserved Casiez et al. and/or Taylor and Francis
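The two gain techniques compared above differ only in their transfer function: constant gain multiplies device movement by a fixed factor, while pointer acceleration lets that factor grow with device speed. The constants and the linear ramp below are assumptions for the sketch; real pointer-acceleration curves are OS-specific.

```python
def constant_gain(device_speed, gain=2.0):
    """Constant gain (CG): cursor speed is a fixed multiple of device speed."""
    return gain * device_speed

def pointer_acceleration(device_speed, low=1.0, high=4.0, knee=0.1):
    """Pointer acceleration (PA): gain ramps from `low` to `high` as device
    speed (in m/s) approaches `knee`, then stays at `high`.

    A linear ramp is an illustrative assumption; actual PA functions are
    nonuniform curves tuned per operating system.
    """
    gain = low + (high - low) * min(device_speed / knee, 1.0)
    return gain * device_speed

# Slow, precise movements get a low effective gain; fast ballistic
# movements get a high one, which is what reduces clutching.
```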
Vogel, Daniel and Baudisch, Patrick (2007): Shift: a technique for operating pen-based interfaces using touch. In: Proceedings of ACM CHI 2007 Conference on Human Factors in Computing Systems 2007. pp. 657-666.
Retrieving the stylus of a pen-based device takes time and requires a second hand. Especially for short intermittent interactions many users therefore choose to use their bare fingers. Although convenient, this increases targeting times and error rates. We argue that the main reasons are the occlusion of the target by the user's finger and ambiguity about which part of the finger defines the selection point. We propose a pointing technique we call Shift that is designed to address these issues. When the user touches the screen, Shift creates a callout showing a copy of the occluded screen area and places it in a non-occluded location. The callout also shows a pointer representing the selection point of the finger. Using this visual feedback, users guide the pointer into the target by moving their finger on the screen surface and commit the target acquisition by lifting the finger. Unlike existing techniques, Shift is only invoked when necessary--over large targets no callout is created and users enjoy the full performance of an unaltered touch screen. We report the results of a user study showing that with Shift participants can select small targets with much lower error rates than an unaided touch screen and that Shift is faster than Offset Cursor for larger targets.
© All rights reserved Vogel and Baudisch and/or ACM Press
Casiez, Géry, Vogel, Daniel, Pan, Qing and Chaillou, Christophe (2007): RubberEdge: reducing clutching by combining position and rate control with elastic feedback. In: Proceedings of the ACM Symposium on User Interface Software and Technology October 7-10, 2007, Newport, Rhode Island, USA. pp. 129-138.
Position control devices enable precise selection, but significant clutching degrades performance. Clutching can be reduced with high control-display gain or pointer acceleration, but there are human and device limits. Elastic rate control eliminates clutching completely, but can make precise selection difficult. We show that hybrid position-rate control can outperform position control by 20% when there is significant clutching, even when using pointer acceleration. Unlike previous work, our RubberEdge technique eliminates trajectory and velocity discontinuities. We derive predictive models for position control with clutching and hybrid control, and present a prototype RubberEdge position-rate control device including initial user feedback.
© All rights reserved Casiez et al. and/or ACM Press
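The hybrid mapping described above can be sketched as a mode switch at the edge of a circular position-control zone, with elastic rate control taking over past the edge. The zone radius and rate constant below are illustrative assumptions, not values from the RubberEdge prototype.

```python
import math

def hybrid_velocity(x, y, zone_radius=30.0, k_rate=5.0):
    """Cursor control for a finger at offset (x, y) from the zone centre.

    Returns ('position', None) inside the zone, meaning the caller applies
    raw finger deltas 1:1, or ('rate', (vx, vy)) past the elastic edge,
    where cursor velocity grows with penetration depth, as in elastic
    rate control.
    """
    dist = math.hypot(x, y)
    if dist <= zone_radius:
        return 'position', None
    depth = dist - zone_radius                  # penetration into the edge
    # Velocity points outward along the finger direction, scaled by depth.
    return 'rate', (k_rate * depth * x / dist, k_rate * depth * y / dist)

hybrid_velocity(10, 0)   # inside the zone: precise 1:1 position control
hybrid_velocity(40, 0)   # past the edge: rate control, velocity (50.0, 0.0)
```

Blending the two regimes continuously, rather than switching abruptly, is what lets RubberEdge avoid the trajectory and velocity discontinuities the abstract mentions.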
Vollick, Ian, Vogel, Daniel, Agrawala, Maneesh and Hertzmann, Aaron (2007): Specifying label layout style by example. In: Proceedings of the ACM Symposium on User Interface Software and Technology October 7-10, 2007, Newport, Rhode Island, USA. pp. 221-230.
Creating high-quality label layouts in a particular visual style is a time-consuming process. Although automated labeling algorithms can aid the layout process, expert design knowledge is required to tune these algorithms so that they produce layouts which meet the designer's expectations. We propose a system which can learn a label layout style from a single example layout and then apply this style to new labeling problems. Because designers find it much easier to create example layouts than tune algorithmic parameters, our system provides a more natural workflow for graphic designers. We demonstrate that our system is capable of learning a variety of label layout styles from examples.
© All rights reserved Vollick et al. and/or ACM Press
Forlines, Clifton, Vogel, Daniel and Balakrishnan, Ravin (2006): HybridPointing: fluid switching between absolute and relative pointing with a direct input device. In: Proceedings of the ACM Symposium on User Interface Software and Technology 2006. pp. 211-220.
We present HybridPointing, a technique that lets users easily switch between absolute and relative pointing with a direct input device such as a pen. Our design includes a new graphical element, the Trailing Widget, which remains "close at hand" but does not interfere with normal cursor operation. The use of visual feedback to aid the user's understanding of input state is discussed, and several novel visual aids are presented. An experiment conducted on a large, wall-sized display validates the benefits of HybridPointing under certain conditions. We also discuss other situations in which HybridPointing may be useful. Finally, we present an extension to our technique that allows for switching between absolute and relative input in the middle of a single drag-operation.
© All rights reserved Forlines et al. and/or ACM Press
Vogel, Daniel and Balakrishnan, Ravin (2005): Distant freehand pointing and clicking on very large, high resolution displays. In: Proceedings of the 2005 ACM Symposium on User Interface Software and Technology 2005. pp. 33-42.
We explore the design space of freehand pointing and clicking interaction with very large high resolution displays from a distance. Three techniques for gestural pointing and two for clicking are developed and evaluated. In addition, we present subtle auditory and visual feedback techniques to compensate for the lack of kinesthetic feedback in freehand interaction, and to promote learning and use of appropriate postures.
© All rights reserved Vogel and Balakrishnan and/or ACM Press
Vogel, Daniel and Balakrishnan, Ravin (2004): Interactive public ambient displays: transitioning from implicit to explicit, public to personal, interaction with multiple users. In: Proceedings of the 2004 ACM Symposium on User Interface Software and Technology 2004. pp. 137-146.
We develop design principles and an interaction framework for sharable, interactive public ambient displays that support the transition from implicit to explicit interaction with both public and personal information. A prototype system implementation that embodies these design principles is described. We use novel display and interaction techniques such as simple hand gestures and touch screen input for explicit interaction and contextual body orientation and position cues for implicit interaction. Techniques are presented for subtle notification, self-revealing help, privacy controls, and shared use by multiple people each in their own context. Initial user feedback is also presented, and future directions discussed.
© All rights reserved Vogel and Balakrishnan and/or ACM Press