Publication statistics

Pub. period: 1995-2012
Pub. count: 59
Number of co-authors: 72



Co-authors

Number of publications with 3 favourite co-authors:

Changuk Sohn: 7
Audrey Girouard: 7
Daniel Cheng: 6


Productive colleagues

Roel Vertegaal's 3 most productive colleagues in number of publications:

Gerrit van der Veer: 37
Pattie Maes: 37
Ivan Poupyrev: 37

Roel Vertegaal

Personal Homepage:
http://www.hml.queensu.ca/roel.html


Publications by Roel Vertegaal (bibliography)

2012

Bolton, John, Kim, Kibum and Vertegaal, Roel (2012): A comparison of competitive and cooperative task performance using spherical and flat displays. In: Proceedings of ACM CSCW12 Conference on Computer-Supported Cooperative Work 2012. pp. 529-538.

While large flat vertical displays may facilitate persistent public sharing of work, they may do so at a cost of limited personal display space when everyone can see each other's activity. By contrast, new form factors, such as spherical displays, support sharing display space by limiting the user's view to at most one hemisphere. In this paper, we investigate how different interactive large display form factors can support differences in sharing of information during competitive and cooperative task conditions. We implemented three different large display types: spherical, flat, and a flat display with divider. Results show that task performance of the flat display with divider did not differ significantly from that of the spherical display. Additionally, we implemented and compared three peeking techniques that facilitated sharing of information. Results show participants peeked significantly more in competitive tasks than they did in cooperative tasks. Usage of peeking techniques was similar between the spherical display and the flat display with divider, and distinct from that of the flat display. Not surprisingly, results show that the affordance of easily glancing at a partner's work on the flat display provided a significant advantage in cooperative tasks.

© All rights reserved Bolton et al. and/or ACM Press


Strohmeier, Paul, Vertegaal, Roel and Girouard, Audrey (2012): With a flick of the wrist: stretch sensors as lightweight input for mobile devices. In: Proceedings of the 6th International Conference on Tangible and Embedded Interaction 2012. pp. 307-308.

With WristFlicker, we detect wrist movement through sets of stretch sensors embedded in clothing. Our system supports wrist rotation (pronation/supination), and both wrist tilts (flexion/extension and ulnar/radial deviation). Each wrist movement is measured by two opposing stretch sensors, mimicking the counteracting movement of muscles. We discuss interaction techniques that allow a user to control a music player through this lightweight input.

© All rights reserved Strohmeier et al. and/or ACM Press
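
The opposing-sensor arrangement lends itself to a simple differential reading. A minimal sketch of that idea in Python, assuming a linear sensor response; the calibration constants and function names are illustrative, not the authors' code:

    # Illustrative sketch (not WristFlicker's code): estimating a wrist angle
    # from a pair of opposing stretch sensors, as described in the abstract.
    # Calibration values and the linear mapping are assumptions.

    def normalize(raw, rest, max_stretch):
        """Map a raw sensor reading onto 0..1 relative to its calibrated range."""
        return max(0.0, min(1.0, (raw - rest) / (max_stretch - rest)))

    def wrist_angle(flexor_raw, extensor_raw, rest=512, max_stretch=900, max_deg=60.0):
        """Opposing sensors mimic antagonist muscles: the signed difference of
        their stretch approximates the joint angle along one axis."""
        flex = normalize(flexor_raw, rest, max_stretch)
        ext = normalize(extensor_raw, rest, max_stretch)
        return (flex - ext) * max_deg  # positive = flexion, negative = extension

    print(wrist_angle(800, 520))  # roughly 43 degrees of flexion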


Nesbitt, Andrea, Rabinovitch, Matthew, Girouard, Audrey and Vertegaal, Roel (2012): The Hum: interacting with an actuated ambient organism. In: Proceedings of the 6th International Conference on Tangible and Embedded Interaction 2012. pp. 329-330.

The Hum is an immersive art installation filled with hundreds of suspended furred catkins surrounding a cocoon. As visitors enter the space, catkins twitch, shiver and hum. In The Hum, we explore the idea of computers that communicate ephemerally through alterations of room and space.

© All rights reserved Nesbitt et al. and/or ACM Press


Strohmeier, Paul, Swensen, Kaja Vembe, Lapp, Cameron, Girouard, Audrey and Vertegaal, Roel (2012): A Flock of Birds: bringing paper to life. In: Proceedings of the 6th International Conference on Tangible and Embedded Interaction 2012. pp. 333-334.

In this paper we describe A Flock of Birds, an interactive, robotic origami art installation. The art installation explores folding paper as a fusion of input, output and computation while simultaneously providing its audience with a fun and exciting experience.

© All rights reserved Strohmeier et al. and/or ACM Press


Wightman, Doug, Ye, Zi, Brandt, Joel and Vertegaal, Roel (2012): SnipMatch: using source code context to enhance snippet retrieval and parameterization. In: Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 219-228.

Programmers routinely use source code snippets to increase their productivity. However, locating and adapting code snippets to the current context still takes time: for example, variables must be renamed, and dependencies included. We believe that when programmers decide to invest time in creating a new code snippet from scratch, they would also be willing to spend additional effort to make that code snippet configurable and easy to integrate. To explore this insight, we built SnipMatch, a plug-in for the Eclipse IDE. SnipMatch introduces a simple markup that allows snippet authors to specify search patterns and integration instructions. SnipMatch leverages this information, in conjunction with current code context, to improve snippet search and parameterization. For example, when a search query includes local variables, SnipMatch suggests compatible snippets, and automatically adapts them by substituting in these variables. In the lab, we observed that participants integrated snippets faster when using SnipMatch than when using standard Eclipse. Findings from a public deployment to 93 programmers suggest that SnipMatch has become integrated into the work practices of real users.

© All rights reserved Wightman et al. and/or ACM Press
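
The parameterization idea can be pictured as template substitution driven by in-scope variables. A toy sketch follows; the ${param} markup and the matching rule are hypothetical stand-ins, since the abstract does not specify SnipMatch's actual syntax:

    # Hypothetical sketch (not SnipMatch's engine): a parameterized snippet
    # whose placeholders are filled from variables in the surrounding code
    # context. The ${param} markup is an assumed notation.
    import re

    SNIPPET = "for (int ${i} = 0; ${i} < ${list}.size(); ${i}++) { ${list}.get(${i}); }"

    def instantiate(snippet, context_vars):
        """Replace ${param} placeholders with compatible in-scope variables."""
        def fill(match):
            param = match.group(1)
            return context_vars.get(param, param)  # leave unresolved params as-is
        return re.sub(r"\$\{(\w+)\}", fill, snippet)

    print(instantiate(SNIPPET, {"list": "customers", "i": "idx"}))
    # -> for (int idx = 0; idx < customers.size(); idx++) { customers.get(idx); }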

2011

Bolton, John, Kim, Kibum and Vertegaal, Roel (2011): Privacy and sharing information on spherical and large flat displays. In: Proceedings of ACM CSCW11 Conference on Computer-Supported Cooperative Work 2011. pp. 573-574.

It is important for users to understand the fundamental tradeoff between sharing information and preserving privacy in collaboration. The more information is shared about one's actions, the less privacy is preserved. Sharing information may, in fact, counter-intuitively result in increased social stress in some cases. Maintaining privacy while allowing for the sharing of information is an important consideration for successful collaboration and we believe display form factor matters for this tradeoff. New form factors such as spherical displays support privacy naturally, by limiting a user's view to at most one hemisphere. In this video we show how different types of interactive large display form factor can provide a balance between privacy and the sharing of information in a cooperative game.

© All rights reserved Bolton et al. and/or their publisher


Dijkstra, Rob, Perez, Christopher and Vertegaal, Roel (2011): Evaluating effects of structural holds on pointing and dragging performance with flexible displays. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 1293-1302.

In this paper, we present a study of the effects of structural holds and rigidity of a flexible display on touch pointing and dragging performance. We discuss an observational study in which we collected common holds used when pointing on a mockup paper display. We also measured the force patterns each hold generated within the display surface. We analyzed this data to produce 3 force zones in the display for each of the four most frequently observed holds: the grip zone, rigid zone, and the flexible zone. We report on an empirical evaluation in which we compared the efficiency of pointing and dragging operations between holds, and between structural zones within holds, using a real flexible Lumalive display. Results suggest that structural force distributions in a flexible display affect the Index of Performance of both pointing and dragging tasks, irrespective of hold, with rigid parts of the display yielding a 12% average performance gain over flexible areas.

© All rights reserved Dijkstra et al. and/or their publisher


Lahey, Byron, Girouard, Audrey, Burleson, Winslow and Vertegaal, Roel (2011): PaperPhone: understanding the use of bend gestures in mobile devices with flexible electronic paper displays. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 1303-1312.

Flexible displays potentially allow for interaction styles that resemble those used in paper documents. Bending the display, e.g., to page forward, shows particular promise as an interaction technique. In this paper, we present an evaluation of the effectiveness of various bend gestures in executing a set of tasks with a flexible display. We discuss a study in which users designed bend gestures for common computing actions deployed on a smartphone-inspired flexible E Ink prototype called PaperPhone. We collected a total of 87 bend gesture pairs from ten participants and rated their appropriateness over twenty actions in five applications. We identified the six most frequently used bend gesture pairs out of 24 unique pairs. Results show users preferred bend gestures and bend gesture pairs that were conceptually simpler, e.g., along one axis, and less physically demanding. There was a strong agreement among participants to use the same three pairs in applications: (1) side of display, up/down (2) top corner, up/down (3) bottom corner, up/down. For actions with a strong directional cue, we found strong consensus on the polarity of the bend gestures (e.g., navigating left is performed with an upwards bend gesture, navigating right, downwards). This implies that bend gestures that take directional cues into account are likely more natural to users.

© All rights reserved Lahey et al. and/or their publisher


Tarun, Aneesh P., Lahey, Byron, Girouard, Audrey, Burleson, Winslow and Vertegaal, Roel (2011): Snaplet: using body shape to inform function in mobile flexible display devices. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 329-334.

With recent advances in flexible displays, computer displays are no longer restricted to flat, rigid form factors. In this paper, we propose that the physical form of a flexible display, depending on the way it is held or worn, can help shape its current functionality. We propose Snaplet, a wearable flexible E Ink display augmented with sensors that allow the shape of the display to be detected. Snaplet is a paper computer in the form of a bracelet. When in a convex shape on the wrist, Snaplet functions as a watch and media player. When held flat in the hand it is a PDA with notepad functionality. When held in a concave shape Snaplet functions as a phone. Calls are dropped by returning its shape to a flat or convex shape.

© All rights reserved Tarun et al. and/or their publisher


Cheng, Sylvia, Dickie, Connor, Hanewich-Hollatz, Andreas, Vertegaal, Roel and Lee, Justin (2011): Don't touch: social appropriateness of touch sensor placement on interactive lumalive e-textile shirts. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. p. 511.

In this video, we discuss the design of an e-textile shirt with an interactive Lumalive display featuring a touch-controlled image browser. To determine where to place touch sensors, we investigated which areas of the Lumalive shirt users would be comfortable touching or being touched based on how often participants would opt out of touches. For both touchers and touchees, opt-outs occurred mostly in the upper chest. On the front, the upper chest and lower abdominal zones were the least comfortable. Findings suggest participants were less comfortable with touches on the upper chest, the lower abdomen, and the lower back. We conclude that the most appropriate areas for touch sensors on a shirt are on the arms, shoulders, and upper back.

© All rights reserved Cheng et al. and/or their publisher


Cheng, Sylvia H., Kim, Kibum and Vertegaal, Roel (2011): TagURit: a proximity-based game of tag using lumalive e-textile displays. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 1147-1152.

We present an electronic game of tag that uses proximity sensing and Lumalive displays on garments. In our game of tag, each player physically represents a location-tagged Uniform Resource Identifier (URI). The URIs, one chaser and two target players, wear touch-sensitive Lumalive display shirts. The goal of the game is for the chaser to capture a token displayed on one of the Lumalive shirts, by pressing a touch sensor located on the shirt. When the chaser is in close proximity to the token player, the token jumps to the shirt of the second closest player, making this children's game more challenging for adult players. Our system demonstrates the use of interactive e-textile displays to remove the technological barrier between contact and proximity in the real world, and the seamless representation of gaming information from the virtual world in that real world.

© All rights reserved Cheng et al. and/or their publisher


Bolton, John, Kim, Kibum and Vertegaal, Roel (2011): SnowGlobe: a spherical fish-tank VR display. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 1159-1164.

In this paper, we present a spherical display with Fish-Tank VR as a means for interacting with three-dimensional objects. We implemented the spherical display by reflecting a projected image off a hemispherical mirror, allowing for a seamless curvilinear display surface. Diffuse illumination is used for detecting touch points on the sphere. The user's head position and the position of the sphere are also tracked using a Vicon motion capture device. Users can perform multi-touch gestures to interact with 3D content on the spherical display. Our system relies on the metaphor of a snow globe. Users can walk around a display while maintaining motion parallax corrected viewpoints of the object on the display. They can interact with the 3D object using multitouch interaction techniques, allowing for rotating and scaling of the 3D model on the display.

© All rights reserved Bolton et al. and/or their publisher


Banerjee, Amartya, Burstyn, Jesse, Girouard, Audrey and Vertegaal, Roel (2011): WaveForm: remote video blending for VJs using in-air multitouch gestures. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 1807-1812.

We present WaveForm, a system that enables a Video Jockey (VJ) to directly manipulate video content on a large display on a stage, from a distance. WaveForm implements an in-air multitouch gesture set to layer, blend, scale, rotate, and position video content on the large display. We believe this leads to a more immersive experience for the VJ user, as well as for the audience witnessing the VJ's performance during a live event.

© All rights reserved Banerjee et al. and/or their publisher


Banerjee, Amartya, Burstyn, Jesse, Girouard, Audrey and Vertegaal, Roel (2011): Pointable: an in-air pointing technique to manipulate out-of-reach targets on tabletops. In: Proceedings of the 2011 ACM International Conference on Interactive Tabletops and Surfaces 2011. pp. 11-20.

Selecting and moving digital content on interactive tabletops often involves accessing the workspace beyond arm's reach. We present Pointable, an in-air, bimanual perspective-based interaction technique that augments touch input on a tabletop for distant content. With Pointable, the dominant hand selects remote targets, while the non-dominant hand can scale and rotate targets with a dynamic C/D gain. We conducted 3 experiments; the first showed that pointing at a distance using Pointable has a Fitts' law throughput comparable to that of a mouse. In the second experiment, we found that Pointable had the same performance as multi-touch input in a resize, rotate and drag task. In a third study, we observed that when given the choice, over 75% of participants preferred to use Pointable over multi-touch for target manipulation. In general, Pointable allows users to manipulate out-of-reach targets, without loss of performance, while minimizing the need to lean, stand up, or involve collocated collaborators.

© All rights reserved Banerjee et al. and/or ACM Press


Holman, David and Vertegaal, Roel (2011): TactileTape: low-cost touch sensing on curved surfaces. In: Proceedings of the 2011 ACM Symposium on User Interface Software and Technology 2011. pp. 17-18.

TactileTape is a one-dimensional touch sensor that looks and behaves like regular tape. It can be constructed from everyday materials (a pencil, tin foil, and shelf liner) and senses single-touch input on curved and deformable surfaces. It is used as a roll of touch sensitive material from which designers cut pieces to quickly add touch sensitive strips to physical prototypes. TactileTape is low-cost, easy to interface, and, unlike current non-planar touch solutions [2,7,11], it is better adapted for the rapid exploration and iteration in the early design stage.

© All rights reserved Holman and Vertegaal and/or ACM Press
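
Electrically, such a strip behaves like a one-dimensional potentiometer. A hypothetical sketch of how firmware might turn a raw ADC reading into a touch position; the ADC range and calibration endpoints are assumptions, not taken from the paper:

    # Hypothetical sketch: reading a one-dimensional resistive touch strip
    # as a voltage divider. ADC range and calibration endpoints are assumed.

    ADC_MAX = 1023          # 10-bit ADC
    ADC_MIN_TOUCH = 40      # reading at one end of the strip (calibrated)
    ADC_MAX_TOUCH = 980     # reading at the other end (calibrated)

    def touch_position(adc_value, strip_length_mm=100.0):
        """Return touch position in mm along the strip, or None if untouched."""
        if adc_value < ADC_MIN_TOUCH or adc_value > ADC_MAX_TOUCH:
            return None  # no finger on the strip
        span = ADC_MAX_TOUCH - ADC_MIN_TOUCH
        return (adc_value - ADC_MIN_TOUCH) / span * strip_length_mm

    print(touch_position(510))  # finger roughly mid-way along a 100 mm strip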

2010

Wightman, Doug, Ginn, Tim and Vertegaal, Roel (2010): TouchMark: flexible document navigation and bookmarking techniques for e-book readers. In: Proceedings of the 2010 Conference on Graphics Interface 2010. pp. 241-244.

We present TouchMark, a set of page navigation techniques that preserve some of the physical affordances of paper books. TouchMark introduces physical tabs, one on each side of the display, to enable gestures such as page thumbing and bookmarking. TouchMark can be implemented on a variety of electronic devices, including tablet computers and laptops, by augmenting standard hardware with inexpensive sensors.

© All rights reserved Wightman et al. and/or their publisher


Vertegaal, Roel (2010): The (re)usability of everyday computational things: why industrial design will be the new interaction design. In Interactions, 17 (6) pp. 38-41.

2009

Vertegaal, Roel and Poupyrev, Ivan (2009): Eek! a mouse! organic user interfaces: tangible, transitive materials and programmable reality. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 3313-3316.

In this panel, we explore the role emerging transitive materials, like flexible thin-film displays, multi-touch input skins, e-textiles, micro-actuators and Claytronics, might play in re-defining the human interface towards a programmable form of reality. Panelists will extrapolate historical trends from Tangibles to new developments in organic user interfaces, trying to identify a future in which interfaces will no longer be predominantly flat, but instead have any possible shape or form: from skins that are foldable, flexible and physical to three-dimensional products that are fully kinetic.

© All rights reserved Vertegaal and Poupyrev and/or ACM Press


Lepinski, Julian, Akaoka, Eric and Vertegaal, Roel (2009): Context menus for the real world: the stick-anywhere computer. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 3499-3500.

In this video, we present a context-aware menu system made out of simulated digital paper. Built on the ubiquitous yellow sticky notes found in offices everywhere, our computer provides a contextual interactive paper menu that can be used to operate numerous everyday electric and electronic devices, such as lamps, speakers and computers. Stuck on a device, the sticky screen displays contextual information and control options which may be selected with a single touch of the finger. The stick-anywhere computer is an example of a context-aware organic user interface that, through a flexible paper-like display, allows software to reside directly on the product or task. The Stick-Anywhere Computer was implemented using a Xuuk Eyebox2 IR camera that tracks nearly invisible IR markers on post-it notes as well as fingers, and uses a projector to render interactive content directly onto the paper note.

© All rights reserved Lepinski et al. and/or ACM Press


Akaoka, Eric and Vertegaal, Roel (2009): DisplayObjects: functional prototyping on real objects. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 3507-3508.

This video introduces DisplayObjects, a rapid prototyping workbench that allows functional displays to be projected on real 3D physical prototypes. DisplayObjects uses a Vicon motion capture system to track the location of physical models. 3D software renditions of the 3D physical model are then texture-mapped with interactive behavior and projected back onto the physical model to allow real-time interactions with the object. This simulates the functionality of future 3D interactive OLED display skins for product designs. We show a selection of interaction techniques used to design a number of DisplayObjects.

© All rights reserved Akaoka and Vertegaal and/or ACM Press


Coelho, Marcelo, Poupyrev, Ivan, Sadi, Sajid, Vertegaal, Roel, Berzowska, Joanna, Buechley, Leah, Maes, Pattie and Oxman, Neri (2009): Programming reality: from transitive materials to organic user interfaces. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 4759-4762.

Over the past few years, a quiet revolution has been redefining our fundamental computing technologies. Flexible E-Ink, OLED displays, shape-changing materials, parametric design, e-textiles, sensor networks, and intelligent interfaces promise to spawn entirely new user experiences that will redefine our relationship with technology. This workshop invites researchers and practitioners to imagine and debate this future, exploring two converging themes. Transitive Materials focuses on how emerging materials and computationally-driven behaviors can operate in unison blurring the boundaries between form and function, human body and environment, structures and membranes. Organic User Interfaces (OUI) explores future interactive designs and applications as these materials become commonplace.

© All rights reserved Coelho et al. and/or ACM Press


Akaoka, Eric, Ginn, Tim and Vertegaal, Roel (2009): DisplayObjects: prototyping functional physical interfaces on 3d styrofoam, paper or cardboard models. In: Proceedings of the 4th International Conference on Tangible and Embedded Interaction 2009. pp. 49-56.

This paper introduces DisplayObjects, a rapid prototyping workbench that allows functional interfaces to be projected onto real 3D physical prototypes. DisplayObjects uses a Vicon motion capture system to track the location of physical models. 3D software renditions of the 3D physical model are then texture-mapped with interactive behavior and projected back onto the physical model to allow real-time interactions with the object. We discuss the implementation of the system, as well as a selection of one and two-handed interaction techniques for DisplayObjects. We conclude with a design case that comments on some of the early design experiences with the system.

© All rights reserved Akaoka et al. and/or their publisher


Rooke, Mike and Vertegaal, Roel (2009): Physics on display: tangible graphics on hexagonal bezel-less screens. In: Proceedings of the 4th International Conference on Tangible and Embedded Interaction 2009. pp. 233-236.

In this paper, we present a tiled display system made out of hexagonal cardboard screens with no visible bezel. Use of a bezel-less hexagonal form factor allows users to build larger multiform displays out of smaller tiles. Individual display tiles can be picked up to allow tangible interactions with physics simulations that are rendered onto the individual tiles. The corners of each hexagon are marked with invisible infrared retro-reflective dots. Computer vision is used to track the 3D location and orientation of these tiles. Our prototype projects back images onto each individual display. This allows for a seamless interaction experience that anticipates wireless Organic LED technology. We discuss a number of applications and interaction techniques for compound cardboard displays, which include tilting, rotating, moving and touching of tiles.

© All rights reserved Rooke and Vertegaal and/or their publisher

2008
Vertegaal, Roel (2008): A Fitts Law comparison of eye tracking and manual input in the selection of visual targets. In: Digalakis, Vassilios, Potamianos, Alexandros, Turk, Matthew, Pieraccini, Roberto and Ivanov, Yuri (eds.) Proceedings of the 10th International Conference on Multimodal Interfaces - ICMI 2008 October 20-22, 2008, Chania, Crete, Greece. pp. 241-248.

We present a Fitts' Law evaluation of a number of eye tracking and manual input devices in the selection of large visual targets. We compared performance of two eye tracking techniques, manual click and dwell time click, with that of mouse and stylus. Results show eye tracking with manual click outperformed the

© All rights reserved Vertegaal and/or his/her publisher
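
For reference, this entry (like the flexible-display study above, which reports an Index of Performance) relies on the standard Shannon formulation of Fitts' law, reproduced here from the general literature rather than from the paper:

    ID = \log_2\left(\frac{D}{W} + 1\right), \qquad MT = a + b \cdot ID, \qquad IP = \frac{ID}{MT}

where D is the distance to a target of width W, a and b are empirically fitted constants, ID is the index of difficulty in bits, MT is the movement time, and IP is the throughput (bits/s) used to compare devices.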


Gallant, David T., Seniuk, Andrew G. and Vertegaal, Roel (2008): Towards more paper-like input: flexible input devices for foldable interaction styles. In: Cousins, Steve B. and Beaudouin-Lafon, Michel (eds.) Proceedings of the 21st Annual ACM Symposium on User Interface Software and Technology October 19-22, 2008, Monterey, CA, USA. pp. 283-286.


Vertegaal, Roel and Poupyrev, Ivan (2008): Introduction. In Communications of the ACM, 51 (6) pp. 26-30.


Holman, David and Vertegaal, Roel (2008): Organic user interfaces: designing computers in any way, shape, or form. In Communications of the ACM, 51 (6) pp. 48-55.


2007

Chen, Daniel, Hart, Jamie and Vertegaal, Roel (2007): Towards a Physiological Model of User Interruptability. In: Baranauskas, Maria Cecília Calani, Palanque, Philippe A., Abascal, Julio and Barbosa, Simone Diniz Junqueira (eds.) DEGAS 2007 - Proceedings of the 1st International Workshop on Design and Evaluation of e-Government Applications and Services September 11th, 2007, Rio de Janeiro, Brazil. pp. 439-451.

2006

Dickie, Connor, Hart, Jamie, Vertegaal, Roel and Eiser, Alex (2006): LookPoint: an evaluation of eye input for hands-free switching of input devices between multiple computers. In: Kjeldskov, Jesper and Paay, Jane (eds.) Proceedings of OZCHI06, the CHISIG Annual Conference on Human-Computer Interaction 2006. pp. 119-126.

We present LookPoint, a system that uses eye input for switching input between multiple computing devices. LookPoint uses an eye tracker to detect which screen the user is looking at, and then automatically routes mouse and keyboard input to the computer associated with that screen. We evaluated the use of eye input for switching between three computer monitors during a typing task, comparing its performance with that of three other selection techniques: multiple keyboards, function key selection, and mouse selection. Results show that the use of eye input is 111% faster than the mouse, 75% faster than function keys, and 37% faster than the use of multiple keyboards. A user satisfaction questionnaire showed that participants also preferred the use of eye input over the other three techniques. The implications of this work are discussed, as well as future calibration-free implementations.

© All rights reserved Dickie et al. and/or their publisher


Altosaar, Mark, Vertegaal, Roel, Sohn, Changuk and Cheng, Daniel (2006): AuraOrb: using social awareness cues in the design of progressive notification appliances. In: Kjeldskov, Jesper and Paay, Jane (eds.) Proceedings of OZCHI06, the CHISIG Annual Conference on Human-Computer Interaction 2006. pp. 159-166.

One of the problems with notification appliances is that they can be distracting when providing information not of immediate interest to the user. In this paper, we present AuraOrb, an ambient notification appliance that deploys progressive turn taking techniques to minimize notification disruptions. AuraOrb uses social awareness cues, such as eye contact to detect user interest in an initially ambient light notification. Once detected, it displays a text message with a notification heading visible from 360 degrees. Touching the orb causes the associated message to be displayed on the user's computer screen. When user interest is lost, AuraOrb automatically reverts back to its idle state. We performed an initial evaluation of AuraOrb's functionality using a set of heuristics tailored to ambient displays. We compared progressive notification with the use of persistent ticker tape notifications and Outlook Express system tray messages for notifying the user of incoming emails. Results of our evaluation suggest that progressive turn taking techniques allowed AuraOrb users to access notification headings with minimal impact on their focus task.

© All rights reserved Altosaar et al. and/or their publisher
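
The progressive turn-taking behavior described above amounts to a small state machine driven by eye contact and touch. A schematic sketch; the states and escalation rules are a plausible reading of the abstract, not the authors' implementation:

    # Hypothetical sketch of AuraOrb-style progressive notification:
    # escalate only while the user keeps signalling interest.

    IDLE, AMBIENT, HEADLINE, FULL = "idle", "ambient", "headline", "full"

    def next_state(state, has_message, eye_contact, touched):
        if state == IDLE:
            return AMBIENT if has_message else IDLE      # glow when mail arrives
        if state == AMBIENT:
            return HEADLINE if eye_contact else AMBIENT  # eye contact -> headline
        if state == HEADLINE:
            if touched:
                return FULL                              # touch -> message on screen
            return HEADLINE if eye_contact else IDLE     # interest lost -> idle
        return IDLE                                      # after display, revert

    state = IDLE
    for sensed in [(True, False, False), (True, True, False), (True, True, True)]:
        state = next_state(state, *sensed)
        print(state)  # ambient, headline, full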

2005

Smith, John D., Vertegaal, Roel and Sohn, Changuk (2005): ViewPointer: lightweight calibration-free eye tracking for ubiquitous handsfree deixis. In: Proceedings of the 2005 ACM Symposium on User Interface Software and Technology 2005. pp. 53-61.

We introduce ViewPointer, a wearable eye contact sensor that detects deixis towards ubiquitous computers embedded in real world objects. ViewPointer consists of a small wearable camera no more obtrusive than a common Bluetooth headset. ViewPointer allows any real-world object to be augmented with eye contact sensing capabilities, simply by embedding a small infrared (IR) tag. The headset camera detects when a user is looking at an infrared tag by determining whether the reflection of the tag on the cornea of the user's eye appears sufficiently central to the pupil. ViewPointer not only allows any object to become an eye contact sensing appliance, it also allows identification of users and transmission of data to the user through the object. We present a novel encoding scheme used to uniquely identify ViewPointer tags, as well as a method for transmitting URLs over tags. We present a number of scenarios of application as well as an analysis of design principles. We conclude eye contact sensing input is best utilized to provide context to action.

© All rights reserved Smith et al. and/or ACM Press
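
The detection rule, whether the tag's corneal reflection appears sufficiently central to the pupil, reduces to a distance threshold in the eye image. A sketch under that reading; the coordinates, radius, and threshold are illustrative:

    import math

    # Hypothetical sketch of ViewPointer's eye-contact test: the IR tag's
    # glint must lie close enough to the pupil centre in the eye image.

    def eye_contact(pupil_xy, glint_xy, pupil_radius_px, centrality=0.5):
        """True if the corneal reflection falls within a fraction of the
        pupil radius from the pupil centre (user is looking at the tag)."""
        dx = glint_xy[0] - pupil_xy[0]
        dy = glint_xy[1] - pupil_xy[1]
        return math.hypot(dx, dy) <= centrality * pupil_radius_px

    print(eye_contact((320, 240), (324, 243), pupil_radius_px=18))  # True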


Dickie, Connor, Vertegaal, Roel, Sohn, Changuk and Cheng, Daniel (2005): eyeLook: using attention to facilitate mobile media consumption. In: Proceedings of the 2005 ACM Symposium on User Interface Software and Technology 2005. pp. 103-106.

One of the problems with mobile media devices is that they may distract users during critical everyday tasks, such as navigating the streets of a busy city. We addressed this issue in the design of eyeLook: a platform for attention sensitive mobile computing. eyeLook appliances use embedded low cost eyeCONTACT sensors (ECS) to detect when the user looks at the display. We discuss two eyeLook applications, seeTV and seeTXT, that facilitate courteous media consumption in mobile contexts by using the ECS to respond to user attention. seeTV is an attentive mobile video player that automatically pauses content when the user is not looking. seeTXT is an attentive speed reading application that flashes words on the display, advancing text only when the user is looking. By making mobile media devices sensitive to actual user attention, eyeLook allows applications to gracefully transition users between consuming media, and managing life.

© All rights reserved Dickie et al. and/or ACM Press


Danninger, Maria, Vertegaal, Roel, Siewiorek, Daniel P. and Mamuji, Aadil (2005): Using social geometry to manage interruptions and co-worker attention in office environments. In: Graphics Interface 2005 May 9-11, 2005, Victoria, British Columbia, Canada. pp. 211-218.

Social geometry is a novel technique for reasoning about the engagement of participants during group meetings on the basis of head orientation data provided by computer vision. This form of group context can be used by ubiquitous environments to route communications between users, or sense availability of users for interruption. We explored problems of distraction by co-workers in office cubicle farms, applying our method to the design of a cubicle that automatically regulates visual and auditory communications between users.

© All rights reserved Danninger et al. and/or their publisher
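
Reasoning about engagement from head orientation can be pictured as a cone test: person A is taken to attend to person B when B falls within a cone around A's head direction. A hypothetical sketch; the cone width and planar geometry are assumptions, not the paper's parameters:

    import math

    # Hypothetical sketch of a "social geometry" engagement test using head
    # yaw from computer vision. Positions are (x, y) floor coordinates.

    def attends_to(a_pos, a_heading_deg, b_pos, cone_deg=30.0):
        """True if B lies inside A's attention cone around A's head yaw."""
        bearing = math.degrees(math.atan2(b_pos[1] - a_pos[1], b_pos[0] - a_pos[0]))
        diff = (bearing - a_heading_deg + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
        return abs(diff) <= cone_deg / 2.0

    print(attends_to((0, 0), 45.0, (1, 1)))   # True: B sits dead ahead
    print(attends_to((0, 0), 45.0, (-1, 0)))  # False: B is behind A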


Fono, David and Vertegaal, Roel (2005): EyeWindows: evaluation of eye-controlled zooming windows for focus selection. In: Proceedings of ACM CHI 2005 Conference on Human Factors in Computing Systems 2005. pp. 151-160.

In this paper, we present an attentive windowing technique that uses eye tracking, rather than manual pointing, for focus window selection. We evaluated the performance of 4 focus selection techniques: eye tracking with key activation, eye tracking with automatic activation, mouse and hotkeys in a typing task with many open windows. We also evaluated a zooming windowing technique designed specifically for eye-based control, comparing its performance to that of a standard tiled windowing environment. Results indicated that eye tracking with automatic activation was, on average, about twice as fast as mouse and hotkeys. Eye tracking with key activation was about 72% faster than manual conditions, and preferred by most participants. We believe eye input performed well because it allows manual input to be provided in parallel to focus selection tasks. Results also suggested that zooming

© All rights reserved Fono and Vertegaal and/or ACM Press


Holman, David, Vertegaal, Roel, Altosaar, Mark, Troje, Nikolaus and Johns, Derek (2005): Paper windows: interaction techniques for digital paper. In: Proceedings of ACM CHI 2005 Conference on Human Factors in Computing Systems 2005. pp. 591-599.

In this paper, we present Paper Windows, a prototype windowing environment that simulates the use of digital paper displays. By projecting windows on physical paper, Paper Windows allows the capturing of physical affordances of paper in a digital world. The system uses paper as an input device by tracking its motion and shape with a Vicon Motion Capturing System. We discuss the design of a number of interaction techniques for manipulating information on paper displays.

© All rights reserved Holman et al. and/or ACM Press


Jenkin, Tracy, McGeachie, Jesse, Fono, David and Vertegaal, Roel (2005): eyeView: focus+context views for large group video conferences. In: Proceedings of ACM CHI 2005 Conference on Human Factors in Computing Systems 2005. pp. 1497-1500.

In this paper, we describe the design of eyeView, a video conferencing system that uses participant looking behavior to determine the size of online video conferencing windows. The system uses an elastic windowing algorithm that enlarges the image of the person most looked at by others, while maintaining a contextual view of other remote participants. eyeView measures interest by gauging whom participants look at using an eye tracker embedded in the display. Users can enter side conversations by looking at each other, and pressing the space bar. Cocktail-party filtering is aided by attenuating audio sources outside the social network constituted by glances between participants. By allocating both screen and audio real estate according to the joint attention of participants, eyeView supports smooth allocation of focus on the speaker, while maintaining awareness of the group.

© All rights reserved Jenkin et al. and/or ACM Press
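
The elastic windowing algorithm can be pictured as proportional allocation with a floor that preserves context. A sketch of one such rule; the minimum share and the normalization scheme are assumptions, not eyeView's published values:

    # Hypothetical sketch of an elastic windowing rule in the spirit of
    # eyeView: a participant's screen share grows with the gaze they
    # receive, while every window keeps a minimum contextual size.

    def window_shares(gaze_counts, min_share=0.15):
        n = len(gaze_counts)
        total = sum(gaze_counts) or 1
        spare = 1.0 - n * min_share          # share left after the minimums
        return [min_share + spare * g / total for g in gaze_counts]

    # Three remote participants; the first is currently most looked at.
    print(window_shares([8, 1, 1]))  # -> [0.59, 0.205, 0.205]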


Smith, David, Donald, Matthew, Chen, Daniel, Cheng, Daniel, Sohn, Changuk, Mamuji, Aadil, Holman, David and Vertegaal, Roel (2005): OverHear: augmenting attention in remote social gatherings through computer-mediated hearing. In: Proceedings of ACM CHI 2005 Conference on Human Factors in Computing Systems 2005. pp. 1801-1804.

One of the problems with mediated communication systems is that they limit the user's ability to listen to informal conversations of others within a remote space. In what is known as the Cocktail Party phenomenon, participants in noisy face-to-face conversations are able to focus their attention on a single individual, typically the person they look at. Media spaces do not support the cues necessary to establish this attentive mechanism. We addressed this issue in our design of OverHear, a media space that augments the user's attention in remote social gatherings through computer mediated hearing. OverHear uses an eye tracker embedded in the webcam display to direct the focal point of a robotic shotgun microphone mounted in the remote space. This directional microphone is automatically pointed towards the currently observed individual, allowing the user to OverHear this person's conversations.

© All rights reserved Smith et al. and/or ACM Press


Vertegaal, Roel, Mamuji, Aadil, Sohn, Changuk and Cheng, Daniel (2005): Media eyepliances: using eye tracking for remote control focus selection of appliances. In: Proceedings of ACM CHI 2005 Conference on Human Factors in Computing Systems 2005. pp. 1861-1864.

This paper discusses the use of eye contact sensing for focus selection operations in remote controlled media appliances. Focus selection with remote controls tends to be cumbersome as selection buttons place the remote in a device-specific modality. We addressed this issue with the design of Media EyePliances, home theatre appliances augmented with a digital eye contact sensor. An appliance is selected as the focus of remote commands by looking at its sensor. A central server subsequently routes all commands provided by remote, keyboard or voice input to the focus EyePliance. We discuss a calibration-free digital eye contact sensing technique that allows Media EyePliances to determine the user's point of gaze.

© All rights reserved Vertegaal et al. and/or ACM Press

Senior, Robin and Vertegaal, Roel (2005): Augmenting conversational dialogue by means of latent semantic googling. In: Lazzari, Gianni, Pianesi, Fabio, Crowley, James L., Mase, Kenji and Oviatt, Sharon L. (eds.) Proceedings of the 7th International Conference on Multimodal Interfaces - ICMI 2005 October 4-6, 2005, Trento, Italy. pp. 144-150.

This paper presents Latent Semantic Googling, a variant of Landauer's Latent Semantic Indexing that uses the Google search engine to judge the semantic closeness of sets of words and phrases. This concept is implemented via Ambient Google, a system for augmenting conversations through the classification of discussed topics. Ambient Google uses a speech recognition engine to generate Google keyphrase queries directly from conversations. These queries are used to analyze the semantics of the conversation, and infer related topics that have been discussed. Conversations are visualized using a spring-model algorithm representing common topics. This allows users to browse their conversation as a contextual relationship between discussed topics, and augment their discussion through the use of related websites discovered by Google. An evaluation of Ambient Google is presented, discussing user reaction to the system.

© All rights reserved Senior and Vertegaal and/or their publisher
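
The abstract does not give the closeness measure itself; one plausible instantiation of judging semantic closeness from search hit counts is the normalized distance of Cilibrasi and Vitanyi, shown purely for illustration (the hit counts below are made up):

    import math

    # Illustration only: a normalized distance computed from hit counts
    # f(x), f(y), f(x,y) of an indexed-pages estimate n_pages. This is one
    # way to score closeness from search results, not the paper's method.

    def normalized_distance(fx, fy, fxy, n_pages=1e10):
        lx, ly, lxy = math.log(fx), math.log(fy), math.log(fxy)
        return (max(lx, ly) - lxy) / (math.log(n_pages) - min(lx, ly))

    # Terms that co-occur often get a small distance (closer topics).
    print(normalized_distance(fx=5e7, fy=2e7, fxy=8e6))  # ~0.30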

2004

Duchowski, Andrew T. and Vertegaal, Roel (eds.) ETRA 2004 - Proceedings of the Eye Tracking Research and Application Symposium March 22-24, 2004, San Antonio, Texas, USA.


Cheng, Daniel and Vertegaal, Roel (2004): An eye for an eye: a performance evaluation comparison of the LC technologies and Tobii eye trackers. In: Duchowski, Andrew T. and Vertegaal, Roel (eds.) ETRA 2004 - Proceedings of the Eye Tracking Research and Application Symposium March 22-24, 2004, San Antonio, Texas, USA. p. 61.


Shell, Jeffrey S., Vertegaal, Roel, Cheng, Daniel, Skaburskis, Alexander W., Sohn, Changuk, Stewart, A. James, Aoudeh, Omar and Dickie, Connor (2004): ECSGlasses and EyePliances: using attention to open sociable windows of interaction. In: Duchowski, Andrew T. and Vertegaal, Roel (eds.) ETRA 2004 - Proceedings of the Eye Tracking Research and Application Symposium March 22-24, 2004, San Antonio, Texas, USA. pp. 93-100.


Skaburskis, Alexander W., Vertegaal, Roel and Shell, Jeffrey S. (2004): Auramirror: reflections on attention. In: Duchowski, Andrew T. and Vertegaal, Roel (eds.) ETRA 2004 - Proceedings of the Eye Tracking Research and Application Symposium March 22-24, 2004, San Antonio, Texas, USA. pp. 101-108.


Duchowski, Andrew T. and Vertegaal, Roel (2004): Preface. In: Duchowski, Andrew T. and Vertegaal, Roel (eds.) ETRA 2004 - Proceedings of the Eye Tracking Research and Application Symposium March 22-24, 2004, San Antonio, Texas, USA. p. 6.

2003

Vertegaal, Roel, Weevers, Ivo, Sohn, Changuk and Cheung, Chris (2003): GAZE-2: conveying eye contact in group video conferencing using eye-controlled camera direction. In: Cockton, Gilbert and Korhonen, Panu (eds.) Proceedings of the ACM CHI 2003 Human Factors in Computing Systems Conference April 5-10, 2003, Ft. Lauderdale, Florida, USA. pp. 521-528.


Shell, Jeffrey S., Selker, Ted and Vertegaal, Roel (2003): Interacting with groups of computers. In Communications of the ACM, 46 (3) pp. 40-46.


Vertegaal, Roel (2003): Introduction. In Communications of the ACM, 46 (3) pp. 30-33.

2002

Vertegaal, Roel and Ding, Yaping (2002): Explaining effects of eye gaze on mediated group conversations: amount or synchronization?. In: Churchill, Elizabeth F., McCarthy, Joe, Neuwirth, Christine and Rodden, Tom (eds.) Proceedings of the 2002 ACM conference on Computer supported cooperative work November 16 - 20, 2002, New Orleans, Louisiana, USA. pp. 41-48.

We present an experiment examining effects of gaze on speech during three-person conversations. Understanding such effects is crucial for the design of teleconferencing systems and Collaborative Virtual Environments (CVEs). Previous findings suggest subjects take more turns when they experience more gaze. We evaluated whether this is because more gaze allowed them to better observe whether they were being addressed. We compared speaking behavior between two conditions: (1) in which subjects experienced gaze synchronized with conversational attention, and (2) in which subjects experienced random gaze. The amount of gaze experienced by subjects was a covariate. Results show subjects were 22% more likely to speak when gaze behavior was synchronized with conversational attention. However, covariance analysis showed these results were due to differences in amount of gaze rather than synchronization of gaze, with correlations of .62 between amount of gaze and amount of subject speech. Task performance was 46% higher when gaze was synchronized. We conclude it is commendable to use synchronized gaze models when designing CVEs, but depending on task situation, random models generating sufficient amounts of gaze may suffice.

© All rights reserved Vertegaal and Ding and/or ACM Press


Duchowski, Andrew T., Vertegaal, Roel and Senders, John W. (eds.) ETRA 2002 - Proceedings of the Eye Tracking Research and Application Symposium March 25-27, 2002, New Orleans, Louisiana, USA.


Vertegaal, Roel (2002): Designing attentive interfaces. In: Duchowski, Andrew T., Vertegaal, Roel and Senders, John W. (eds.) ETRA 2002 - Proceedings of the Eye Tracking Research and Application Symposium March 25-27, 2002, New Orleans, Louisiana, USA. pp. 23-30.


Vertegaal, Roel (2002): What do the eyes behold for human-computer interaction?. In: Duchowski, Andrew T., Vertegaal, Roel and Senders, John W. (eds.) ETRA 2002 - Proceedings of the Eye Tracking Research and Application Symposium March 25-27, 2002, New Orleans, Louisiana, USA. pp. 59-60.

2001

Vertegaal, Roel, Slagter, Robert, Veer, Gerrit van der and Nijholt, Anton (2001): Eye Gaze Patterns in Conversations: There is More to Conversational Agents than Meets the Eyes. In: Beaudouin-Lafon, Michel and Jacob, Robert J. K. (eds.) Proceedings of the ACM CHI 2001 Human Factors in Computing Systems Conference March 31 - April 5, 2001, Seattle, Washington, USA. pp. 301-308.

In multi-agent, multi-user environments, users as well as agents should have a means of establishing who is talking to whom. In this paper, we present an experiment aimed at evaluating whether gaze directional cues of users could be used for this purpose. Using an eye tracker, we measured subject gaze at the faces of conversational partners during four-person conversations. Results indicate that when someone is listening or speaking to individuals, there is indeed a high probability that the person looked at is the person listened

© All rights reserved Vertegaal et al. and/or ACM Press

2000

Vertegaal, Roel, Veer, Gerrit van der and Vons, Harro (2000): Effects of Gaze on Multiparty Mediated Communication. In: Proceedings of Graphics Interface 2000 May 15-17, 2000, Montreal, Quebec, Canada. pp. 95-102.

1999

Vertegaal, Roel (1999): The GAZE Groupware System: Mediating Joint Attention in Multiparty Communication and Collaboration. In: Altom, Mark W. and Williams, Marian G. (eds.) Proceedings of the ACM CHI 99 Human Factors in Computing Systems Conference May 15-20, 1999, Pittsburgh, Pennsylvania. pp. 294-301.

In this paper, we discuss why, in designing multiparty mediated systems, we should focus first on providing non-verbal cues which are less redundantly coded in speech than those normally conveyed by video. We show how conveying one such cue, gaze direction, may solve two problems in multiparty mediated communication and collaboration: knowing who is talking to whom, and who is talking about what. As a candidate solution, we present the GAZE Groupware System, which combines support for gaze awareness in multiparty mediated communication and collaboration with small and linear bandwidth requirements. The system uses an advanced, desk-mounted eyetracker to metaphorically convey gaze awareness in a 3D virtual meeting room and within shared documents.

© All rights reserved Vertegaal and/or ACM Press

1997

Vertegaal, Roel, Velichkovsky, Boris and Veer, Gerrit van der (1997): Catching the Eye: Management of Joint Attention in Cooperative Work. In ACM SIGCHI Bulletin, 29 (4) pp. 87-92.

In this paper, we show how different elements of awareness information in groupware systems can be defined in terms of conveying attentive states of the participants. Different kinds of awareness are distinguished: at macro- and micro-level, the latter consisting of workspace awareness and conversational awareness. We summarize the functional elements of micro-level awareness, organizing them hierarchically in terms of their relation to the attention of participants. We further discuss how groupware systems can capture and represent awareness by means of attention-based metaphors, and give an example of a virtual meeting room in which the gaze direction of the participants is conveyed by means of modern 'imaging' eyetracking technology.

© All rights reserved Vertegaal et al. and/or ACM Press

1996

Vertegaal, Roel and Eaglestone, Barry (1996): Comparison of Input Devices in an ISEE Direct Timbre Manipulation Task. In Interacting with Computers, 8 (1) pp. 13-30.

The representation and manipulation of sound within multimedia systems is an important and currently under-researched area. The paper gives an overview of the authors' work on the direct manipulation of audio information, and describes a solution based upon the navigation of four-dimensional scaled timbre spaces. Three hardware input devices were experimentally evaluated for use in a timbre space navigation task: the Apple Standard Mouse, Gravis Advanced Mousestick II joystick (absolute and relative) and the Nintendo Power Glove. Results show that the usability of these devices significantly affected the efficacy of the system, and that conventional low-cost, low-dimensional devices provided better performance than the low-cost, multidimensional dataglove.

© All rights reserved Vertegaal and Eaglestone and/or Elsevier Science

1995

Vertegaal, Roel and Guest, Steve (1995): Network Issues in the Growth and Adoption of Networked CSCW Services. In ACM SIGCHI Bulletin, 27 (4) pp. 63-67.



Page Information

Page maintainer: The Editorial Team
URL: http://www.interaction-design.org/references/authors/roel_vertegaal.html
