Publication statistics

Publication period: 2003-2012
Publication count: 33
Number of co-authors: 68



Co-authors

Number of publications with Hrvoje Benko's 3 most frequent co-authors:

Andrew D. Wilson: 8
Steven K. Feiner: 6
Ken Hinckley: 5

 

 

Productive colleagues

Hrvoje Benko's 3 most productive colleagues, by number of publications:

Ravin Balakrishnan: 108
Bill Buxton: 78
Steven K. Feiner: 76
 
 
 


Hrvoje Benko

Ph.D.

Personal Homepage:
research.microsoft.com/en-us/um/people/benko/


Current place of employment:
Microsoft Research

Hrvoje Benko is a researcher in the Natural Interaction Research group at Microsoft Research. His research focuses on novel surface computing technologies and their impact on human-computer interaction. Prior to joining Microsoft, he received his Ph.D. at Columbia University, working on augmented reality projects that combine immersive experiences with interactive tabletops. His projects explore the space where the flat digital world of surface computing meets the curved, physical, 3D space we live in. His current research interests include augmented reality, new input form factors, and multi-touch and freehand gestural input. In 2010, he worked with a group at Microsoft Hardware to turn one of his research projects into a product: the Microsoft Touch Mouse, which lets users control their Windows experience with touch gestures performed on top of the mouse.

 

Publications by Hrvoje Benko (bibliography)

2012
 

Wilson, Andrew, Benko, Hrvoje, Izadi, Shahram and Hilliges, Otmar (2012): Steerable augmented reality with the beamatron. In: Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 413-422. Available online

Steerable displays use a motorized platform to orient a projector to display graphics at any point in the room. Often a camera is included to recognize markers and other objects, as well as user gestures in the display volume. Such systems can be used to superimpose graphics onto the real world, and so are useful in a number of augmented reality and ubiquitous computing scenarios. We contribute the Beamatron, which advances steerable displays by drawing on recent progress in depth camera-based interactions. The Beamatron consists of a computer-controlled pan and tilt platform on which is mounted a projector and Microsoft Kinect sensor. While much previous work with steerable displays deals primarily with projecting corrected graphics onto a discrete set of static planes, we describe computational techniques that enable reasoning in 3D using live depth data. We show two example applications that are enabled by the unique capabilities of the Beamatron: an augmented reality game in which a player can drive a virtual toy car around a room, and a ubiquitous computing demo that uses speech and gesture to move projected graphics throughout the room.

© All rights reserved Wilson et al. and/or ACM Press
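The steering step the abstract describes reduces to aiming a pan-tilt unit at a 3D point in room coordinates. A minimal geometric sketch of that computation (the coordinate conventions and names are assumptions for illustration, not code from the paper):

import math

def aim_at(x, y, z):
    """Return (pan, tilt) in degrees that point the optical axis at the room point (x, y, z).

    Hypothetical sketch: assumes the pan-tilt unit sits at the origin, x and y span
    the floor plane, and z points up.
    """
    pan = math.degrees(math.atan2(y, x))                   # rotation about the vertical axis
    tilt = math.degrees(math.atan2(z, math.hypot(x, y)))   # elevation above the floor plane
    return pan, tilt

print(aim_at(2.0, 1.0, -0.5))   # e.g. aim at a spot on the floor two metres away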

2011
 

Song, Hyunyoung, Benko, Hrvoje, Guimbretiere, Francois, Izadi, Shahram, Cao, Xiang and Hinckley, Ken (2011): Grips and gestures on a multi-touch pen. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 1323-1332. Available online

This paper explores the interaction possibilities enabled when the barrel of a digital pen is augmented with a multi-touch sensor. We present a novel multi-touch pen (MTPen) prototype and discuss its alternate uses beyond those of a standard stylus, such as allowing new touch gestures to be performed using the index finger or thumb and detecting how users grip the device as a mechanism for mode switching. We also discuss the hardware and software implementation challenges in realizing our prototype, and showcase how one can combine different grips (tripod, relaxed tripod, sketch, wrap) and gestures (swipe and double tap) to enable new interaction techniques with the MTPen in a prototype drawing application. One specific aim is the elimination of some of the comfort problems associated with existing auxiliary controls on digital pens. Mechanical controls such as barrel buttons and barrel scroll wheels work best in only a few specific hand grips and pen rotations. Comparatively, our gestures can be successfully and comfortably performed regardless of the rotation of the pen or how the user grips it, offering greater flexibility in use. We describe a formal evaluation comparing MTPen gestures against the use of a barrel button for mode switching. This study shows that both swipe and double tap gestures are comparable in performance to commonly employed barrel buttons without its disadvantages.

© All rights reserved Song et al. and/or their publisher

 

Wigdor, Daniel, Benko, Hrvoje, Pella, John, Lombardo, Jarrod and Williams, Sarah (2011): Rock & rails: extending multi-touch interactions with shape gestures to enable precise spatial manipulations. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 1581-1590. Available online

Direct touch manipulations enable the user to interact with the on-screen content in a direct and easy manner closely mimicking the spatial manipulations in the physical world. However, they also suffer from well-known issues of precision, occlusion and an inability to isolate different degrees of freedom in spatial manipulations. We present a set of interactions, called Rock&Rails, that augment existing direct touch manipulations with shape-based gestures, thus providing on-demand gain control, occlusion avoidance, and separation of constraints in 2D manipulation tasks. Using shape gestures in combination with direct-manipulations allows us to do this without ambiguity in detection and without resorting to manipulation handles, which break the direct manipulation paradigm. Our set of interactions were evaluated by 8 expert graphic designers and were found to be easy to learn and master, as well as effective in accomplishing a precise graphical layout task.

© All rights reserved Wigdor et al. and/or their publisher

 

Steinicke, Frank, Benko, Hrvoje, Daiber, Florian, Keefe, Daniel and Rivière, Jean-Baptiste de la (2011): Touching the 3rd dimension (T3D). In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 161-164. Available online

In recent years interactive visualization of 3D data has become increasingly important and widespread due to the requirements of several application areas. However, current user interfaces often lack adequate support for 3D interactions: 2D desktop systems are often limited in cases where natural interaction with 3D content is required, and sophisticated 3D user interfaces consisting of stereoscopic projections and tracked input devices are rarely adopted by ordinary users. Touch interaction has received considerable attention for 2D interfaces, and more recently for 3D interfaces. Many touch devices now support multiple degrees of freedom input by capturing multiple 2D contact positions on the surface as well as varying levels of pressure and even depth. There is, therefore, great potential for multi-touch interfaces to provide the traditionally difficult to achieve combination of natural 3D interaction without any instrumentation. When combined with a stereoscopic display of 3D data as well as 3D depth cameras, we believe that multi-touch technology can form the basis for a next generation of intuitive and expressive 3D user interfaces. Several research groups have begun to explore the potential, limitations, and challenges of this and other 3D touch environments, and first commercial systems are already available. The goal of the SIG "Touching the 3rd Dimension (T3D)" is to address the research and industrial challenges involved in exploring the space where the flat digital world of surface computing meets the physical, spatially complex, 3D space in which we live. The meeting will provide a common forum to attract groups of conference attendees who share their visions of the future and recent results in the area of improving 3D interaction and visualization by taking advantage of the strengths of advanced multi-touch computing.

© All rights reserved Steinicke et al. and/or their publisher

 

Holman, David and Benko, Hrvoje (2011): SketchSpace: designing interactive behaviors with passive materials. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 1987-1992. Available online

This paper presents SketchSpace, a system that allows designers to interactively sketch a device's interactive behaviors by imbuing digital functionality to passive materials. SketchSpace requires no augmentation of the device itself; instead, it uses a depth-sensing Kinect camera to simulate the use of hardware sensors, using depth information to infer an object's three-dimensional position, motion, proximity, shape, deformations, and touch events on its surface. A designer can map these inputs to desktop applications in real-time and thus experiment with different interactions. We showcase how SketchSpace can be used to prototype two devices: tilt-sensitive mice and bendable displays. In general, we discuss how this simplifies the process of generating an interactive device sketch and supports rapid exploration of design solutions.

© All rights reserved Holman and Benko and/or their publisher

 

Jota, Ricardo and Benko, Hrvoje (2011): Constructing virtual 3D models with physical building blocks. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 2173-2178. Available online

Constructing virtual 3D models typically requires specialized desktop modeling tools (e.g., CAD tools), which, while very powerful, tend to require a lot of precision, time, and expertise from the user. We present StereoBlocks, a system that combines a Kinect depth camera with 3D stereoscopic projector to allow the user to build complex virtual 3D models from available physical objects. By treating the camera information as a continuous 3D digitizer, we are able to capture the details of the real world and re-project virtual objects side-by-side to real objects. The user is able to visualize such mixed reality model through stereoscopic projected imagery tightly aligned with the real world. In our system, it is literally possible to build the entire virtual castle, using only a single physical brick piece. We discuss our prototype implementation and report on early feedback from the four users that evaluated our system.

© All rights reserved Jota and Benko and/or their publisher

 

Sun, Minghui, Cao, Xiang, Song, Hyunyoung, Izadi, Shahram, Benko, Hrvoje, Guimbretiere, Francois, Ren, Xiangshi and Hinckley, Ken (2011): Enhancing naturalness of pen-and-tablet drawing through context sensing. In: Proceedings of the 2011 ACM International Conference on Interactive Tabletops and Surfaces 2011. pp. 83-86. Available online

Among artists and designers, the pen-and-tablet combination is widely used for creating digital drawings, as digital pens outperform other input devices in replicating the experience of physical drawing tools. In this paper, we explore how contextual information such as the relationship between the hand, the pen, and the tablet can be leveraged in the digital drawing experience to further enhance its naturalness. By embedding sensors in the pen and the tablet to sense and interpret these contexts, we demonstrate how several physical drawing practices can be reflected and assisted in digital interaction scenarios.

© All rights reserved Sun et al. and/or ACM Press

 

Saponas, T. Scott, Harrison, Chris and Benko, Hrvoje (2011): PocketTouch: through-fabric capacitive touch input. In: Proceedings of the 2011 ACM Symposium on User Interface Software and Technology 2011. pp. 303-308. Available online

PocketTouch is a capacitive sensing prototype that enables eyes-free multitouch input on a handheld device without having to remove the device from the pocket of one's pants, shirt, bag, or purse. PocketTouch enables a rich set of gesture interactions, ranging from simple touch strokes to full alphanumeric text entry. Our prototype device consists of a custom multitouch capacitive sensor mounted on the back of a smartphone. Similar capabilities could be enabled on most existing capacitive touchscreens through low-level access to the capacitive sensor. We demonstrate how touch strokes can be used to initialize the device for interaction and how strokes can be processed to enable text recognition of characters written over the same physical area. We also contribute a comparative study that empirically measures how different fabrics attenuate touch inputs, providing insight for future investigations. Our results suggest that PocketTouch will work reliably with a wide variety of fabrics used in today's garments, and is a viable input method for quick eyes-free operation of devices in pockets.

© All rights reserved Saponas et al. and/or ACM Press

 

Harrison, Chris, Benko, Hrvoje and Wilson, Andrew D. (2011): OmniTouch: wearable multitouch interaction everywhere. In: Proceedings of the 2011 ACM Symposium on User Interface Software and Technology 2011. pp. 441-450. Available online

OmniTouch is a wearable depth-sensing and projection system that enables interactive multitouch applications on everyday surfaces. Beyond the shoulder-worn system, there is no instrumentation of the user or environment. Foremost, the system allows the wearer to use their hands, arms and legs as graphical, interactive surfaces. Users can also transiently appropriate surfaces from the environment to expand the interactive area (e.g., books, walls, tables). On such surfaces -- without any calibration -- OmniTouch provides capabilities similar to that of a mouse or touchscreen: X and Y location in 2D interfaces and whether fingers are "clicked" or hovering, enabling a wide variety of interactions. Reliable operation on the hands, for example, requires buttons to be 2.3cm in diameter. Thus, it is now conceivable that anything one can do on today's mobile devices, they could do in the palm of their hand.

© All rights reserved Harrison et al. and/or ACM Press
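The click-versus-hover distinction mentioned above can be pictured with a tiny depth-threshold sketch (an illustrative assumption, not the authors' finger-tracking pipeline; the 5 mm threshold is made up):

CLICK_THRESHOLD_MM = 5.0   # assumed value for illustration

def finger_state(fingertip_depth_mm, surface_depth_mm):
    """Classify a fingertip as clicked or hovering from depth-camera readings."""
    gap = abs(surface_depth_mm - fingertip_depth_mm)   # distance between fingertip and surface
    return "clicked" if gap <= CLICK_THRESHOLD_MM else "hovering"

print(finger_state(fingertip_depth_mm=612.0, surface_depth_mm=615.0))   # -> clicked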

2010
 

Hinckley, Ken, Yatani, Koji, Pahud, Michel, Coddington, Nicole, Rodenhouse, Jenny, Wilson, Andy, Benko, Hrvoje and Buxton, Bill (2010): Manual deskterity: an exploration of simultaneous pen + touch direct input. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 2793-2802. Available online

Manual Deskterity is a prototype digital drafting table that supports both pen and touch input. We explore a division of labor between pen and touch that flows from natural human skill and differentiation of roles of the hands. We also explore the simultaneous use of pen and touch to support novel compound gestures.

© All rights reserved Hinckley et al. and/or their publisher

 

Benko, Hrvoje and Wilson, Andrew D. (2010): Pinch-the-sky dome: freehand multi-point interactions with immersive omni-directional data. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 3045-3050. Available online

Pinch-the-Sky Dome is a large immersive installation where several users can interact simultaneously with omni-directional data inside of a tilted geodesic dome. Our system consists of an omni-directional projector-camera unit in the center of the dome. The projector is able to project an image spanning the entire 360 degrees and a camera is used to track freehand gestures for navigation of the content. The interactive demos include: 1) the exploration of the astronomical data provided by World Wide Telescope, 2) social networking 3D graph visualizations, 3) immersive panoramic images, and 4) 360 degree video conferencing. We combine speech commands with freehand pinch gestures to provide a highly immersive and interactive experience to several users inside the dome, with a very wide field of view for each user.

© All rights reserved Benko and Wilson and/or their publisher

 

Pahud, Michel, Inkpen, Kori, Benko, Hrvoje, Tang, John C. and Buxton, Bill (2010): Three's company: understanding communication channels in three-way distributed collaboration. In: Proceedings of ACM CSCW10 Conference on Computer-Supported Cooperative Work 2010. pp. 271-280. Available online

We explore the design of a system for three-way collaboration over a shared visual workspace, specifically in how to support three channels of communication: person, reference, and task-space. In two studies, we explore the implications of extending designs intended for dyadic collaboration to three-person groups, and the role of each communication channel. Our studies illustrate the utility of multiple configurations of users around a distributed workspace, and explore the subtleties of traditional notions of identity, awareness, spatial metaphor, and corporeal embodiments as they relate to three-way collaboration.

© All rights reserved Pahud et al. and/or their publisher

 

Hartmann, Bjorn, Morris, Meredith Ringel, Benko, Hrvoje and Wilson, Andrew D. (2010): Pictionaire: supporting collaborative design work by integrating physical and digital artifacts. In: Proceedings of ACM CSCW10 Conference on Computer-Supported Cooperative Work 2010. pp. 421-424. Available online

This paper introduces an interactive tabletop system that enhances creative collaboration across physical and digital artifacts. Pictionaire offers capture, retrieval, annotation, and collection of visual material. It enables multiple designers to fluidly move imagery from the physical to the digital realm; work with found, drawn and captured imagery; organize items into functional collections; and record meeting histories. These benefits are made possible by a large interactive table augmented with high-resolution overhead image capture. Summative evaluations with 16 professionals and four student pairs validated discoverability and utility of interactions, uncovered emergent functionality, and suggested opportunities for transitioning content to and from the table.

© All rights reserved Hartmann et al. and/or their publisher

 

Hinckley, Ken, Yatani, Koji, Pahud, Michel, Coddington, Nicole, Rodenhouse, Jenny, Wilson, Andy, Benko, Hrvoje and Buxton, Bill (2010): Pen + touch = new tools. In: Proceedings of the 2010 ACM Symposium on User Interface Software and Technology 2010. pp. 27-36. Available online

We describe techniques for direct pen+touch input. We observe people's manual behaviors with physical paper and notebooks. These serve as the foundation for a prototype Microsoft Surface application, centered on note-taking and scrapbooking of materials. Based on our explorations we advocate a division of labor between pen and touch: the pen writes, touch manipulates, and the combination of pen + touch yields new tools. This articulates how our system interprets unimodal pen, unimodal touch, and multimodal pen+touch inputs, respectively. For example, the user can hold a photo and drag off with the pen to create and place a copy; hold a photo and cross it in a freeform path with the pen to slice it in two; or hold selected photos and tap one with the pen to staple them all together. Touch thus unifies object selection with mode switching of the pen, while the muscular tension of holding touch serves as the "glue" that phrases together all the inputs into a unitary multimodal gesture. This helps the UI designer to avoid encumbrances such as physical buttons, persistent modes, or widgets that detract from the user's focus on the workspace.

© All rights reserved Hinckley et al. and/or their publisher
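The division of labor above lends itself to a small dispatch sketch (the gesture names and rules here are illustrative assumptions, not the prototype's actual code): touch alone manipulates, pen alone writes, and the pen acting on a held object yields a compound tool.

def interpret(pen_down, touched_objects, pen_target=None):
    """Return the tool implied by the current combination of pen and touch input."""
    if pen_down and touched_objects:
        if pen_target in touched_objects:
            return "staple"      # tap a held object with the pen
        return "copy"            # drag off a held object with the pen
    if pen_down:
        return "write"           # the pen writes
    if touched_objects:
        return "manipulate"      # touch manipulates
    return "idle"

print(interpret(pen_down=True, touched_objects={"photo1"}, pen_target="photo1"))   # -> staple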

 

Wilson, Andrew D. and Benko, Hrvoje (2010): Combining multiple depth cameras and projectors for interactions on, above and between surfaces. In: Proceedings of the 2010 ACM Symposium on User Interface Software and Technology 2010. pp. 273-282. Available online

Instrumented with multiple depth cameras and projectors, LightSpace is a small room installation designed to explore a variety of interactions and computational strategies related to interactive displays and the space that they inhabit. LightSpace cameras and projectors are calibrated to 3D real world coordinates, allowing for projection of graphics correctly onto any surface visible by both camera and projector. Selective projection of the depth camera data enables emulation of interactive displays on un-instrumented surfaces (such as a standard table or office desk), as well as facilitates mid-air interactions between and around these displays. For example, after performing multi-touch interactions on a virtual object on the tabletop, the user may transfer the object to another display by simultaneously touching the object and the destination display. Or the user may "pick up" the object by sweeping it into their hand, see it sitting in their hand as they walk over to an interactive wall display, and "drop" the object onto the wall by touching it with their other hand. We detail the interactions and algorithms unique to LightSpace, discuss some initial observations of use and suggest future directions.

© All rights reserved Wilson and Benko and/or their publisher
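The calibration idea, projecting graphics correctly onto any surface visible to both camera and projector, amounts to mapping 3D world points through the projector's projection. A minimal pinhole-model sketch (the intrinsic and extrinsic values below are placeholders, not LightSpace calibration data):

import numpy as np

K = np.array([[1400.0, 0.0, 640.0],    # placeholder projector intrinsics
              [0.0, 1400.0, 360.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                          # placeholder rotation, world to projector frame
t = np.array([0.0, 0.0, 2.5])          # placeholder translation (metres)

def world_to_pixel(p_world):
    """Project a 3D world point (metres) into projector pixel coordinates."""
    p_cam = R @ np.asarray(p_world, dtype=float) + t   # world -> projector frame
    u, v, w = K @ p_cam                                # apply intrinsics
    return u / w, v / w                                # perspective divide

print(world_to_pixel([0.2, -0.1, 1.0]))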

 

Benko, Hrvoje, Izadi, Shahram, Wilson, Andrew D., Cao, Xiang, Rosenfeld, Dan and Hinckley, Ken (2010): Design and evaluation of interaction models for multi-touch mice. In: Proceedings of the 2010 Conference on Graphics Interface 2010. pp. 253-260. Available online

Adding multi-touch sensing to the surface of a mouse has the potential to substantially increase the number of interactions available to the user. However, harnessing this increased bandwidth is challenging, since the user must perform multi-touch interactions while holding the device and using it as a regular mouse. In this paper we describe the design challenges and formalize the design space of multi-touch mice interactions. From our design space categories we synthesize four interaction models which enable the use of both multi-touch and mouse interactions on the same device. We describe the results of a controlled user experiment evaluating the performance of these models in a 2D spatial manipulation task typical of touch-based interfaces and compare them to interacting directly on a multi-touch screen and with a regular mouse. We observed that our multi-touch mouse interactions were overall slower than the chosen baselines; however, techniques providing a single focus of interaction and explicit touch activation yielded better performance and higher preferences from our participants. Our results expose the difficulties in designing multi-touch mice interactions and define the problem space for future research in making these devices effective.

© All rights reserved Benko et al. and/or their publisher

2009
 

Benko, Hrvoje, Saponas, T. Scott, Morris, Dan and Tan, Desney (2009): Enhancing input on and above the interactive surface with muscle sensing. In: Proceedings of the 2009 ACM International Conference on Interactive Tabletops and Surfaces 2009. pp. 93-100. Available online

Current interactive surfaces provide little or no information about which fingers are touching the surface, the amount of pressure exerted, or gestures that occur when not in contact with the surface. These limitations constrain the interaction vocabulary available to interactive surface systems. In our work, we extend the surface interaction space by using muscle sensing to provide complementary information about finger movement and posture. In this paper, we describe a novel system that combines muscle sensing with a multi-touch tabletop, and introduce a series of new interaction techniques enabled by this combination. We present observations from an initial system evaluation and discuss the limitations and challenges of utilizing muscle sensing for tabletop applications.

© All rights reserved Benko et al. and/or their publisher

 

Freeman, Dustin, Benko, Hrvoje, Morris, Meredith Ringel and Wigdor, Daniel (2009): ShadowGuides: visualizations for in-situ learning of multi-touch and whole-hand gestures. In: Proceedings of the 2009 ACM International Conference on Interactive Tabletops and Surfaces 2009. pp. 165-172. Available online

We present ShadowGuides, a system for in-situ learning of multi-touch and whole-hand gestures on interactive surfaces. ShadowGuides provides on-demand assistance to the user by combining visualizations of the user's current hand posture as interpreted by the system (feedback) and available postures and completion paths necessary to finish the gesture (feedforward). Our experiment compared participants learning gestures with ShadowGuides to those learning with video-based instruction. We found that participants learning with ShadowGuides remembered more gestures and expressed significantly higher preference for the help system.

© All rights reserved Freeman et al. and/or their publisher

 

Nacenta, Miguel A., Baudisch, Patrick, Benko, Hrvoje and Wilson, Andy (2009): Separability of spatial manipulations in multi-touch interfaces. In: Proceedings of the 2009 Conference on Graphics Interface 2009. pp. 175-182. Available online

Multi-touch interfaces allow users to translate, rotate, and scale digital objects in a single interaction. However, this freedom represents a problem when users intend to perform only a subset of manipulations. A user trying to scale an object in a print layout program, for example, might find that the object was also slightly translated and rotated, interfering with what was already carefully laid out earlier. We implemented and tested interaction techniques that allow users to select a subset of manipulations. Magnitude Filtering eliminates transformations (e.g., rotation) that are small in magnitude. Gesture Matching attempts to classify the user's input into a subset of manipulation gestures. Handles adopts a conventional single-touch handles approach for touch input. Our empirical study showed that these techniques significantly reduce errors in layout, while the Handles technique was slowest. A variation of the Gesture Matching technique presented the best combination of speed and control, and was favored by participants.

© All rights reserved Nacenta et al. and/or their publisher
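Of the techniques above, Magnitude Filtering is simple enough to sketch in a few lines. This is a simplified per-update variant with made-up thresholds, for illustration only, not the parameters from the study:

import math

TRANSLATE_MIN_PX = 4.0               # assumed: ignore translations below 4 px
ROTATE_MIN_RAD = math.radians(3.0)   # assumed: ignore rotations below 3 degrees
SCALE_MIN = 0.03                     # assumed: ignore scale changes within 3% of 1.0

def filter_manipulation(dx, dy, dtheta, dscale):
    """Suppress transformation components whose magnitude is below threshold."""
    if math.hypot(dx, dy) < TRANSLATE_MIN_PX:
        dx, dy = 0.0, 0.0
    if abs(dtheta) < ROTATE_MIN_RAD:
        dtheta = 0.0
    if abs(dscale - 1.0) < SCALE_MIN:
        dscale = 1.0
    return dx, dy, dtheta, dscale

print(filter_manipulation(1.5, 0.5, math.radians(1.0), 1.08))   # only the scale change survives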

 

Wigdor, Daniel, Williams, Sarah, Cronin, Michael, Levy, Robert, White, Katie, Mazeev, Maxim and Benko, Hrvoje (2009): Ripples: utilizing per-contact visualizations to improve user interaction with touch displays. In: Proceedings of the ACM Symposium on User Interface Software and Technology 2009. pp. 3-12. Available online

We present Ripples, a system which enables visualizations around each contact point on a touch display and, through these visualizations, provides feedback to the user about successes and errors of their touch interactions. Our visualization system is engineered to be overlaid on top of existing applications without requiring the applications to be modified in any way, and functions independently of the application's responses to user input. Ripples reduces the fundamental problem of ambiguity of feedback when an action results in an unexpected behaviour. This ambiguity can be caused by a wide variety of sources. We describe the ambiguity problem, and identify those sources. We then define a set of visual states and transitions needed to resolve this ambiguity, of use to anyone designing touch applications or systems. We then present the Ripples implementation of visualizations for those states, and the results of a user study demonstrating user preference for the system, and demonstrating its utility in reducing errors.

© All rights reserved Wigdor et al. and/or their publisher

 

Villar, Nicolas, Izadi, Shahram, Rosenfeld, Dan, Benko, Hrvoje, Helmes, John, Westhues, Jonathan, Hodges, Steve, Ofek, Eyal, Butler, Alex, Cao, Xiang and Chen, Billy (2009): Mouse 2.0: multi-touch meets the mouse. In: Proceedings of the ACM Symposium on User Interface Software and Technology 2009. pp. 33-42. Available online

In this paper we present novel input devices that combine the standard capabilities of a computer mouse with multi-touch sensing. Our goal is to enrich traditional pointer-based desktop interactions with touch and gestures. To chart the design space, we present five different multi-touch mouse implementations. Each explores a different touch sensing strategy, which leads to differing form-factors and hence interactive possibilities. In addition to the detailed description of hardware and software implementations of our prototypes, we discuss the relative strengths, limitations and affordances of these novel input devices as informed by the results of a preliminary user study.

© All rights reserved Villar et al. and/or their publisher

 

Hartmann, Bjorn, Morris, Meredith Ringel, Benko, Hrvoje and Wilson, Andrew D. (2009): Augmenting interactive tables with mice & keyboards. In: Proceedings of the ACM Symposium on User Interface Software and Technology 2009. pp. 149-152. Available online

This note examines the role traditional input devices can play in surface computing. Mice and keyboards can enhance tabletop technologies since they support high fidelity input, facilitate interaction with distant objects, and serve as a proxy for user identity and position. Interactive tabletops, in turn, can enhance the functionality of traditional input devices: they provide spatial sensing, augment devices with co-located visual content, and support connections among a plurality of devices. We introduce eight interaction techniques for a table with mice and keyboards, and we discuss the design space of such interactions.

© All rights reserved Hartmann et al. and/or their publisher

2008
 

Benko, Hrvoje (ed.) Proceedings of the 2nd International Conference on Intelligent Technologies for Interactive Entertainment. ACM Digital Libraries January 08-12, 2008, Cancun, Mexico.

 

Benko, Hrvoje, Wilson, Andrew D. and Balakrishnan, Ravin (2008): Sphere: multi-touch interactions on a spherical display. In: Cousins, Steve B. and Beaudouin-Lafon, Michel (eds.) Proceedings of the 21st Annual ACM Symposium on User Interface Software and Technology October 19-22, 2008, Monterey, CA, USA. pp. 77-86. Available online

2007
 

Benko, Hrvoje and Feiner, Steven K. (2007): Pointer warping in heterogeneous multi-monitor environments. In: Proceedings of the 2007 Conference on Graphics Interface 2007. pp. 111-117. Available online

Warping the pointer across monitor bezels has previously been demonstrated to be both significantly faster and preferred to the standard mouse behavior when interacting across displays in homogeneous multi-monitor configurations. Complementing this work, we present a user study that compares the performance of four pointer-warping strategies, including a previously untested frame-memory placement strategy, in heterogeneous multi-monitor environments, where displays vary in size, resolution, and orientation. Our results show that a new frame-memory pointer warping strategy significantly improved targeting performance (up to 30% in some cases). In addition, our study showed that, when transitioning across screens, the mismatch between the visual and the device space has a significantly bigger impact on performance than the mismatch in orientation and visual size alone. For mouse operation in a highly heterogeneous multi-monitor environment, all our participants strongly preferred using pointer warping over the regular mouse behavior.

© All rights reserved Benko and Feiner and/or Canadian Information Processing Society

2006
 

Benko, Hrvoje, Wilson, Andrew D. and Baudisch, Patrick (2006): Precise selection techniques for multi-touch screens. In: Proceedings of ACM CHI 2006 Conference on Human Factors in Computing Systems 2006. pp. 1263-1272. Available online

The size of human fingers and the lack of sensing precision can make precise touch screen interactions difficult. We present a set of five techniques, called Dual Finger Selections, which leverage the recent development of multi-touch sensitive displays to help users select very small targets. These techniques facilitate pixel-accurate targeting by adjusting the control-display ratio with a secondary finger while the primary finger controls the movement of the cursor. We also contribute a "clicking" technique, called SimPress, which reduces motion errors during clicking and allows us to simulate a hover state on devices unable to sense proximity. We implemented our techniques on a multi-touch tabletop prototype that offers computer vision-based tracking. In our formal user study, we tested the performance of our three most promising techniques (Stretch, X-Menu, and Slider) against our baseline (Offset), on four target sizes and three input noise levels. All three chosen techniques outperformed the control technique in terms of error rate reduction and were preferred by our participants, with Stretch being the overall performance and preference winner.

© All rights reserved Benko et al. and/or ACM Press
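The central mechanism, slowing the cursor by scaling the control-display ratio while a secondary finger is down, can be sketched in a few lines (the rule and constant below are invented for illustration, not the implemented techniques):

def cursor_delta(primary_dx, primary_dy, secondary_down, cd_ratio=4.0):
    """Map primary-finger motion to cursor motion.

    While a secondary finger is down, divide the motion by the control-display
    ratio so the cursor moves more slowly than the finger, enabling
    pixel-accurate targeting.
    """
    if secondary_down:
        return primary_dx / cd_ratio, primary_dy / cd_ratio
    return primary_dx, primary_dy

print(cursor_delta(12.0, -8.0, secondary_down=True))   # -> (3.0, -2.0)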

2005
 

Benko, Hrvoje and Feiner, Steven K. (2005): Multi-monitor mouse. In: Proceedings of ACM CHI 2005 Conference on Human Factors in Computing Systems 2005. pp. 1208-1211. Available online

Multiple-monitor computer configurations significantly increase the distances that users must traverse with the mouse when interacting with existing applications, resulting in increased time and effort. We introduce the Multi-Monitor Mouse (M3) technique, which virtually simulates having one mouse pointer per monitor when using a single physical mouse device. M3 allows for conventional control of the mouse within each monitor's screen, while permitting immediate warping across monitors when desired to increase mouse traversal speed. We report the results of a user study in which we compared three implementations of M3 and two cursor placement strategies. Our results suggest that using M3 significantly increases interaction speed in a multi-monitor environment. All eight study participants strongly preferred M3 to the regular mouse behavior.

© All rights reserved Benko and Feiner and/or ACM Press
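The "one pointer per monitor" idea can be illustrated with a small sketch that remembers a cursor position per monitor and restores it on warping (class and method names are assumptions, not the M3 implementation):

class MultiMonitorMouse:
    def __init__(self, num_monitors):
        # One remembered cursor position (normalized coordinates) per monitor.
        self.positions = {m: (0.5, 0.5) for m in range(num_monitors)}
        self.active = 0

    def move(self, dx, dy):
        """Regular relative mouse motion within the active monitor."""
        x, y = self.positions[self.active]
        self.positions[self.active] = (min(max(x + dx, 0.0), 1.0),
                                       min(max(y + dy, 0.0), 1.0))

    def warp_to(self, monitor):
        """Switch monitors, restoring the cursor position last used there."""
        self.active = monitor
        return self.positions[monitor]

m3 = MultiMonitorMouse(num_monitors=3)
m3.move(0.2, -0.1)
print(m3.warp_to(1), m3.warp_to(0))   # the second warp returns to the remembered spot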

2004
 

Allen, Peter K., Feiner, Steven K., Meskell, Lynn, Ross, Ken, Troccoli, Alejandro J., Benko, Hrvoje, Ishak, Edward, Smith, Benjamin and Conlon, James (2004): Digitally modeling, visualizing and preserving archaeological sites. In: JCDL04: Proceedings of the 4th ACM/IEEE-CS Joint Conference on Digital Libraries 2004. p. 389. Available online

 

Benko, Hrvoje, Ishak, Edward W. and Feiner, Steven K. (2004): Collaborative Mixed Reality Visualization of an Archaeological Excavation. In: 3rd IEEE and ACM International Symposium on Mixed and Augmented Reality ISMAR 2004 2-5 November, 2004, Arlington, VA, USA. pp. 132-140. Available online

2003
 

Olwal, Alex, Benko, Hrvoje and Feiner, Steven K. (2003): SenseShapes: Using Statistical Geometry for Object Selection in a Multimodal Augmented Reality System. In: 2003 IEEE and ACM International Symposium on Mixed and Augmented Reality ISMAR 2003 7-10 October, 2003, Tokyo, Japan. pp. 300-301. Available online

 

Kaiser, Edward C., Olwal, Alex, McGee, David, Benko, Hrvoje, Corradini, Andrea, Li, Xiaoguang, Cohen, Philip R. and Feiner, Steven K. (2003): Mutual disambiguation of 3D multimodal interaction in augmented and virtual reality. In: Oviatt, Sharon L., Darrell, Trevor, Maybury, Mark T. and Wahlster, Wolfgang (eds.) Proceedings of the 5th International Conference on Multimodal Interfaces - ICMI 2003 November 5-7, 2003, Vancouver, British Columbia, Canada. pp. 12-19. Available online

 
We describe an approach to 3D multimodal interaction in immersive augmented and virtual reality environments that accounts for the uncertain nature of the information sources. The resulting multimodal system fuses symbolic and statistical information from a set of 3D gesture, spoken language, and referential agents. The referential agents employ visible or invisible volumes that can be attached to 3D trackers in the environment, and which use a time-stamped history of the objects that intersect them to derive statistics for ranking potential referents. We discuss the means by which the system supports mutual disambiguation of these modalities and information sources, and show through a user study how mutual disambiguation accounts for over 45% of the successful 3D multimodal interpretations. An accompanying video demonstrates the system in action.

© All rights reserved Kaiser et al. and/or their publisher
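The referent-ranking step described above can be illustrated with a small sketch that scores objects by how often they recently intersected the selection volume (the time window and scoring rule are assumptions, not the system's actual statistics):

from collections import defaultdict

def rank_referents(history, now, window=1.0):
    """history: list of (timestamp, object_id) intersection events.

    Count how many events within `window` seconds of `now` placed each object
    inside the selection volume, and return the objects best-first.
    """
    scores = defaultdict(int)
    for t, obj in history:
        if now - t <= window:
            scores[obj] += 1
    return sorted(scores, key=scores.get, reverse=True)

events = [(0.2, "lamp"), (0.8, "chair"), (0.9, "lamp"), (1.0, "lamp")]
print(rank_referents(events, now=1.0))   # -> ['lamp', 'chair']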

 


Page Information

Page maintainer: The Editorial Team
URL: http://www.interaction-design.org/references/authors/hrvoje_benko.html