Publication statistics

Publication period: 2001-2012
Publication count: 50
Number of co-authors: 137


Number of publications with 3 most frequent co-authors:

Nicolas Villar: 9
Otmar Hilliges: 9
Steve Hodges: 9



Productive colleagues

Shahram Izadi's 3 most productive colleagues by number of publications:

Steve Benford: 121
Ravin Balakrishnan: 108
Tom Rodden: 106


Shahram Izadi


I'm a research scientist at Microsoft Research, UK. My research focuses on creating novel technologies that push the boundaries of how we interact with computers. In recent years my work has focused primarily on making computers more natural to use through the development of new touch-based computers and devices. My research is very applied, meaning I like to build and play with diverse interactive technologies, all in the name of science.


Publications by Shahram Izadi (bibliography)


Freeman, Dustin, Hilliges, Otmar, Sellen, Abigail, O'Hara, Kenton, Izadi, Shahram and Wood, Kenneth (2012): The role of physical controllers in motion video gaming. In: Proceedings of DIS12 Designing Interactive Systems 2012. pp. 701-710.

Systems that detect the unaugmented human body allow players to interact without using a physical controller. But how is interaction altered by the absence of a physical input device? What is the impact on game performance, on a player's expectation of their ability to control the game, and on their game experience? In this study, we investigate these issues in the context of a table tennis video game. The results show that the impact of holding a physical controller, or indeed of the fidelity of that controller, does not appear in simple measures of performance. Rather, the difference between controllers is a function of the responsiveness of the game being controlled, as well as other factors to do with expectations, real world game experience and social context.

© All rights reserved Freeman et al. and/or ACM Press


Kim, David, Hilliges, Otmar, Izadi, Shahram, Butler, Alex D., Chen, Jiawen, Oikonomidis, Iason and Olivier, Patrick (2012): Digits: freehand 3D interactions anywhere using a wrist-worn gloveless sensor. In: Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 167-176.

Digits is a wrist-worn sensor that recovers the full 3D pose of the user's hand. This enables a variety of freehand interactions on the move. The system targets mobile settings, and is specifically designed to be low-power and easily reproducible using only off-the-shelf hardware. The electronics are self-contained on the user's wrist, but optically image the entirety of the user's hand. This data is processed using a new pipeline that robustly samples key parts of the hand, such as the tips and lower regions of each finger. These sparse samples are fed into new kinematic models that leverage the biomechanical constraints of the hand to recover the 3D pose of the user's hand. The proposed system works without the need for full instrumentation of the hand (for example using data gloves), additional sensors in the environment, or depth cameras which are currently prohibitive for mobile scenarios due to power and form-factor considerations. We demonstrate the utility of Digits for a variety of application scenarios, including 3D spatial interaction with mobile devices, eyes-free interaction on-the-move, and gaming. We conclude with a quantitative and qualitative evaluation of our system, and discussion of strengths, limitations and future work.

© All rights reserved Kim et al. and/or ACM Press


Wilson, Andrew, Benko, Hrvoje, Izadi, Shahram and Hilliges, Otmar (2012): Steerable augmented reality with the beamatron. In: Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 413-422.

Steerable displays use a motorized platform to orient a projector to display graphics at any point in the room. Often a camera is included to recognize markers and other objects, as well as user gestures in the display volume. Such systems can be used to superimpose graphics onto the real world, and so are useful in a number of augmented reality and ubiquitous computing scenarios. We contribute the Beamatron, which advances steerable displays by drawing on recent progress in depth camera-based interactions. The Beamatron consists of a computer-controlled pan and tilt platform on which is mounted a projector and Microsoft Kinect sensor. While much previous work with steerable displays deals primarily with projecting corrected graphics onto a discrete set of static planes, we describe computational techniques that enable reasoning in 3D using live depth data. We show two example applications that are enabled by the unique capabilities of the Beamatron: an augmented reality game in which a player can drive a virtual toy car around a room, and a ubiquitous computing demo that uses speech and gesture to move projected graphics throughout the room.

© All rights reserved Wilson et al. and/or ACM Press


Chen, Jiawen, Izadi, Shahram and Fitzgibbon, Andrew (2012): KinÊtre: animating the world with the human body. In: Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 435-444.

KinÊtre allows novice users to scan arbitrary physical objects and bring them to life in seconds. The fully interactive system allows diverse static meshes to be animated using the entire human body. Traditionally, the process of mesh animation is laborious and requires domain expertise, with rigging specified manually by an artist when designing the character. KinÊtre makes creating animations a more playful activity, conducted by novice users interactively "at runtime". This paper describes the KinÊtre system in full, highlighting key technical contributions and demonstrating many examples of users animating meshes of varying shapes and sizes. These include non-humanoid meshes and incomplete surfaces produced by 3D scanning -- two challenging scenarios for existing mesh animation systems. Rather than targeting professional CG animators, KinÊtre is intended to bring mesh animation to a new audience of novice users. We demonstrate potential uses of our system for interactive storytelling and new forms of physical gaming.

© All rights reserved Chen et al. and/or ACM Press


Song, Hyunyoung, Benko, Hrvoje, Guimbretiere, Francois, Izadi, Shahram, Cao, Xiang and Hinckley, Ken (2011): Grips and gestures on a multi-touch pen. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 1323-1332.

This paper explores the interaction possibilities enabled when the barrel of a digital pen is augmented with a multi-touch sensor. We present a novel multi-touch pen (MTPen) prototype and discuss its alternate uses beyond those of a standard stylus, such as allowing new touch gestures to be performed using the index finger or thumb and detecting how users grip the device as a mechanism for mode switching. We also discuss the hardware and software implementation challenges in realizing our prototype, and showcase how one can combine different grips (tripod, relaxed tripod, sketch, wrap) and gestures (swipe and double tap) to enable new interaction techniques with the MTPen in a prototype drawing application. One specific aim is the elimination of some of the comfort problems associated with existing auxiliary controls on digital pens. Mechanical controls such as barrel buttons and barrel scroll wheels work best in only a few specific hand grips and pen rotations. Comparatively, our gestures can be successfully and comfortably performed regardless of the rotation of the pen or how the user grips it, offering greater flexibility in use. We describe a formal evaluation comparing MTPen gestures against the use of a barrel button for mode switching. This study shows that both swipe and double tap gestures are comparable in performance to commonly employed barrel buttons, without their disadvantages.

© All rights reserved Song et al. and/or their publisher


Karnik, Abhijit, Henderson, Archie, Dean, Andrew, Pang, Howard, Campbell, Thomas, Sakurai, Satoshi, Herrmann, Guido, Izadi, Shahram, Kitamura, Yoshifumi and Subramanian, Sriram (2011): VORTEX: design and implementation of an interactive volumetric display. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 2017-2022.

True 3D display systems like volumetric displays allow generation of autostereoscopic, multi-view 3D content that has real physical dimensions. However, their uptake as a research tool within the HCI community is limited, largely due to difficulties in buying or building such displays. The choice of commercially available systems is limited and constrains the flexibility of their use in terms of interaction capabilities, display features and integration with multi-display environments (MDEs). In this paper we describe the steps involved in creating a custom volumetric display from easily available components, walking through the process by building a touch-enabled volumetric display. This will enable us to explore various interactive systems, associated techniques and challenges related to integration of the device into an MDE.

© All rights reserved Karnik et al. and/or their publisher


Sun, Minghui, Cao, Xiang, Song, Hyunyoung, Izadi, Shahram, Benko, Hrvoje, Guimbretiere, Francois, Ren, Xiangshi and Hinckley, Ken (2011): Enhancing naturalness of pen-and-tablet drawing through context sensing. In: Proceedings of the 2011 ACM International Conference on Interactive Tabletops and Surfaces 2011. pp. 83-86.

Among artists and designers, the pen-and-tablet combination is widely used for creating digital drawings, as digital pens outperform other input devices in replicating the experience of physical drawing tools. In this paper, we explore how contextual information such as the relationship between the hand, the pen, and the tablet can be leveraged in the digital drawing experience to further enhance its naturalness. By embedding sensors in the pen and the tablet to sense and interpret these contexts, we demonstrate how several physical drawing practices can be reflected and assisted in digital interaction scenarios.

© All rights reserved Sun et al. and/or ACM Press


Davidoff, Scott, Villar, Nicolas, Taylor, Alex S. and Izadi, Shahram (2011): Mechanical hijacking: how robots can accelerate UbiComp deployments. In: Proceedings of the 2011 International Conference on Ubiquitous Computing 2011. pp. 267-270.

The complexities and costs of deploying Ubicomp applications seriously compromise our ability to evaluate such systems in the real world. To simplify Ubicomp deployment we introduce the robotic pseudopod (P.Pod), an actuator that acts on mechanical switches originally designed for human control only. P.Pods enable computational control of devices by hijacking their mechanical switches -- a technique we refer to as mechanical hijacking. P.Pods offer simple, low-cost, non-destructive computational access to installed hardware, enabling functional, real world Ubicomp deployments. In this paper, we illustrate how three P.Pod primitives, built with the Lego MindStorm NXT toolkit, can implement mechanical hijacking, facilitating real world Ubicomp deployments which otherwise require extensive changes to existing hardware or infrastructure. Lastly, we demonstrate the simplicity of P.Pods by observing two middle school classes build working smart home applications in 4 hours.

© All rights reserved Davidoff et al. and/or ACM Press


Avrahami, Daniel, Wobbrock, Jacob O. and Izadi, Shahram (2011): Portico: tangible interaction on and around a tablet. In: Proceedings of the 2011 ACM Symposium on User Interface Software and Technology 2011. pp. 347-356.

We present Portico, a portable system for enabling tangible interaction on and around tablet computers. Two cameras on small foldable arms are positioned above the display to recognize a variety of physical objects placed on or around the tablet. These cameras have a larger field-of-view than the screen, allowing Portico to extend interaction significantly beyond the tablet itself. Our prototype, which uses a 12" tablet, delivers an interaction space six times the size of the tablet screen. Portico thus allows tablets to extend both their sensing capabilities and interaction space without sacrificing portability. We describe the design of our system and present a number of applications that demonstrate Portico's unique capability to track objects. We focus on a number of fun applications that demonstrate how such a device can be used as a low-cost way to create personal surface computing experiences. Finally, we discuss the challenges in supporting tangible interaction beyond the screen and describe possible mechanisms for overcoming them.

© All rights reserved Avrahami et al. and/or ACM Press


Izadi, Shahram, Kim, David, Hilliges, Otmar, Molyneaux, David, Newcombe, Richard, Kohli, Pushmeet, Shotton, Jamie, Hodges, Steve, Freeman, Dustin, Davison, Andrew and Fitzgibbon, Andrew (2011): KinectFusion: real-time 3D reconstruction and interaction using a moving depth camera. In: Proceedings of the 2011 ACM Symposium on User Interface Software and Technology 2011. pp. 559-568.

KinectFusion enables a user holding and moving a standard Kinect camera to rapidly create detailed 3D reconstructions of an indoor scene. Only the depth data from Kinect is used to track the 3D pose of the sensor and reconstruct geometrically precise 3D models of the physical scene in real time. The capabilities of KinectFusion, as well as the novel GPU-based pipeline, are described in full. Uses of the core system for low-cost handheld scanning, geometry-aware augmented reality and physics-based interactions are shown. Novel extensions to the core GPU pipeline demonstrate object segmentation and user interaction directly in front of the sensor, without degrading camera tracking or reconstruction. These extensions are used to enable real-time multi-touch interactions anywhere, allowing any planar or non-planar reconstructed physical surface to be appropriated for touch.

© All rights reserved Izadi et al. and/or ACM Press


Butler, Alex, Hilliges, Otmar, Izadi, Shahram, Hodges, Steve, Molyneaux, David, Kim, David and Kong, Danny (2011): Vermeer: direct interaction with a 360° viewable 3D display. In: Proceedings of the 2011 ACM Symposium on User Interface Software and Technology 2011. pp. 569-576.

We present Vermeer, a novel interactive 360° viewable 3D display. Like prior systems in this area, Vermeer provides viewpoint-corrected, stereoscopic 3D graphics to simultaneous users, 360° around the display, without the need for eyewear or other user instrumentation. Our goal is to overcome an issue inherent in these prior systems, which -- typically due to moving parts -- restrict interactions to outside the display volume. Our system leverages a known optical illusion to demonstrate, for the first time, how users can reach into and directly touch 3D objects inside the display volume. Vermeer is intended to be a new enabling technology for interaction, and we therefore describe our hardware implementation in full, focusing on the challenges of combining this optical configuration with an existing approach for creating a 360° viewable 3D display. Initially we demonstrate direct in-volume interaction by sensing user input with a Kinect camera placed above the display. However, by exploiting the properties of the optical configuration, we also demonstrate novel prototypes for fully integrated input sensing alongside simultaneous display. We conclude by discussing limitations, implications for interaction, and ideas for future work.

© All rights reserved Butler et al. and/or ACM Press


Scott, James, Izadi, Shahram, Rezai, Leila Sadat, Ruszkowski, Dominika, Bi, Xiaojun and Balakrishnan, Ravin (2010): RearType: text entry using keys on the back of a device. In: Proceedings of 12th Conference on Human-computer interaction with mobile devices and services 2010. pp. 171-180.

RearType is a text input system for mobile devices such as Tablet PCs, using normal keyboard keys but on the reverse side of the device. The standard QWERTY layout is split and rotated so that hands gripping the device from either side have the usual keys under the fingers. This frees up the front of the device, maximizing the use of the display for visual output, eliminating the need for an onscreen keyboard and the resulting hand occlusion, and providing tactile and multi-finger text entry -- with potential for knowledge transfer from QWERTY. Using a prototype implementation which includes software visualization of the keys to assist with learning, we conducted a study to explore the initial learning curve for RearType. With one hour's training, RearType typing speed was an average 15 WPM, and was not statistically different to a touchscreen keyboard.

© All rights reserved Scott et al. and/or their publisher


Cao, Xiang, Villar, Nicolas and Izadi, Shahram (2010): Comparing user performance with single-finger, whole-hand, and hybrid pointing devices. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 1643-1646.

Researchers have explored pointing devices operated by a single finger, but their advantage was not clear compared to conventional mice controlled by the whole hand. To incorporate the benefits of both, we prototyped hybrid pointing devices that combined both finger and hand movement to control the cursor, and experimentally compared their performance with single-finger and whole-hand devices. Results showed that such hybrid devices have the potential to improve pointing performance in terms of time, error, and bandwidth, especially for precise pointing.

© All rights reserved Cao et al. and/or their publisher


Tuddenham, Philip, Kirk, David and Izadi, Shahram (2010): Graspables revisited: multi-touch vs. tangible input for tabletop displays in acquisition and manipulation tasks. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 2223-2232.

We present an experimental comparison of multi-touch and tangible user interfaces for basic interface actions. Twelve participants completed manipulation and acquisition tasks on an interactive surface in each of three conditions: tangible user interface; multi-touch; and mouse and puck. We found that interface control objects in the tangible condition were easiest to acquire and, once acquired, were easier and more accurate to manipulate. Further qualitative analysis suggested that in the evaluated tasks tangibles offer greater adaptability of control and specifically highlighted a problem of exit error that can undermine fine-grained control in multi-touch interactions. We discuss the implications of these findings for interface design.

© All rights reserved Tuddenham et al. and/or their publisher


Yang, Xing-Dong, Mak, Edward, McCallum, David, Irani, Pourang, Cao, Xiang and Izadi, Shahram (2010): LensMouse: augmenting the mouse with an interactive touch display. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 2431-2440.

We introduce LensMouse, a novel device that embeds a touch-screen display -- or tangible 'lens' -- onto a mouse. Users interact with the display of the mouse using direct touch, whilst also performing regular cursor-based mouse interactions. We demonstrate some of the unique capabilities of such a device, in particular for interacting with auxiliary windows, such as toolbars, palettes, pop-ups and dialog-boxes. By migrating these windows onto LensMouse, challenges such as screen real-estate use and window management can be alleviated. In a controlled experiment, we evaluate the effectiveness of LensMouse in reducing cursor movements for interacting with auxiliary windows. We also consider the concerns involving the view separation that results from introducing such a display-based device. Our results reveal that overall users are more effective with LensMouse than with auxiliary application windows that are managed either in single or dual-monitor setups. We conclude by presenting other application scenarios that LensMouse could support.

© All rights reserved Yang et al. and/or their publisher


Kirk, David S., Izadi, Shahram, Sellen, Abigail, Taylor, Stuart, Banks, Richard and Hilliges, Otmar (2010): Opening up the family archive. In: Proceedings of ACM CSCW10 Conference on Computer-Supported Cooperative Work 2010. pp. 261-270.

The Family Archive device is an interactive multi-touch tabletop technology with integrated capture facility for the archiving of sentimental artefacts and memorabilia. It was developed as a technology probe to help us open up current family archiving practices and to explore family archiving in situ. We detail the deployment and study of three of these devices in family homes and discuss how deploying a new, potentially disruptive, technology can foreground the social relations and organizing systems in domestic life. This in turn facilitates critical reflection on technology design.

© All rights reserved Kirk et al. and/or their publisher


Benko, Hrvoje, Izadi, Shahram, Wilson, Andrew D., Cao, Xiang, Rosenfeld, Dan and Hinckley, Ken (2010): Design and evaluation of interaction models for multi-touch mice. In: Proceedings of the 2010 Conference on Graphics Interface 2010. pp. 253-260.

Adding multi-touch sensing to the surface of a mouse has the potential to substantially increase the number of interactions available to the user. However, harnessing this increased bandwidth is challenging, since the user must perform multi-touch interactions while holding the device and using it as a regular mouse. In this paper we describe the design challenges and formalize the design space of multi-touch mice interactions. From our design space categories we synthesize four interaction models which enable the use of both multi-touch and mouse interactions on the same device. We describe the results of a controlled user experiment evaluating the performance of these models in a 2D spatial manipulation task typical of touch-based interfaces and compare them to interacting directly on a multi-touch screen and with a regular mouse. We observed that our multi-touch mouse interactions were overall slower than the chosen baselines; however, techniques providing a single focus of interaction and explicit touch activation yielded better performance and higher preferences from our participants. Our results expose the difficulties in designing multi-touch mice interactions and define the problem space for future research in making these devices effective.

© All rights reserved Benko et al. and/or their publisher


Taylor, Stuart, Izadi, Shahram, Kirk, David, Harper, Richard and Garcia-Mendoza, Armando (2009): Turning the tables: an interactive surface for vjing. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 1251-1254.

In this paper we describe VPlay, a multi-touch tabletop application that allows users to mix and manipulate multiple video streams in real-time. Our aim is to explore how such an interactive surface can support and augment practices around VJing -- a form of video performance art that is becoming increasingly popular in nightclubs and other music events. We conclude with observations from a field deployment, which highlight some initial thoughts and reflections on our design rationale.

© All rights reserved Taylor et al. and/or ACM Press


Villar, Nicolas, Izadi, Shahram, Fraser, Mike and Benford, Steve (eds.) TEI 2009 - Proceedings of the 3rd International Conference on Tangible and Embedded Interaction February 16-18, 2009, Cambridge, UK.


Kirk, David, Sellen, Abigail, Taylor, Stuart, Villar, Nicolas and Izadi, Shahram (2009): Putting the physical into the digital: issues in designing hybrid interactive surfaces. In: Proceedings of the HCI09 Conference on People and Computers XXIII 2009. pp. 35-44.

Hybrid surfaces are interactive systems combining techniques of direct-manipulation multi-touch surface interaction with elements of tangible user interfaces (TUIs). The design space for such complex hands-on computing experiences is sufficiently broad that it can be difficult to decide when interface elements should be given either a physical or digital instantiation, and the extent to which different interface functions should be made to model real-world interactions. In this paper we present two case studies of hybrid surface systems we are developing and discuss how we have reasoned through these kinds of design decisions. From this, we derive a set of observations about properties of physical and digital elements, and offer them as a design resource.

© All rights reserved Kirk et al. and/or their publisher


Villar, Nicolas, Izadi, Shahram, Rosenfeld, Dan, Benko, Hrvoje, Helmes, John, Westhues, Jonathan, Hodges, Steve, Ofek, Eyal, Butler, Alex, Cao, Xiang and Chen, Billy (2009): Mouse 2.0: multi-touch meets the mouse. In: Proceedings of the ACM Symposium on User Interface Software and Technology 2009. pp. 33-42.

In this paper we present novel input devices that combine the standard capabilities of a computer mouse with multi-touch sensing. Our goal is to enrich traditional pointer-based desktop interactions with touch and gestures. To chart the design space, we present five different multi-touch mouse implementations. Each explores a different touch sensing strategy, which leads to differing form-factors and hence interactive possibilities. In addition to the detailed description of hardware and software implementations of our prototypes, we discuss the relative strengths, limitations and affordances of these novel input devices as informed by the results of a preliminary user study.

© All rights reserved Villar et al. and/or their publisher


Hook, Jonathan, Taylor, Stuart, Butler, Alex, Villar, Nicolas and Izadi, Shahram (2009): A reconfigurable ferromagnetic input device. In: Proceedings of the ACM Symposium on User Interface Software and Technology 2009. pp. 51-54.

We present a novel hardware device based on ferromagnetic sensing, capable of detecting the presence, position and deformation of any ferrous object placed on or near its surface. These objects can include ball bearings, magnets, iron filings, and soft malleable bladders filled with ferrofluid. Our technology can be used to build reconfigurable input devices -- where the physical form of the input device can be assembled using combinations of such ferrous objects. This allows users to rapidly construct new forms of input device, such as a trackball-style device based on a single large ball bearing, tangible mixers based on a collection of sliders and buttons with ferrous components, and multi-touch malleable surfaces using a ferrofluid bladder. We discuss the implementation of our technology, its strengths and limitations, and potential application scenarios.

© All rights reserved Hook et al. and/or their publisher


Hilliges, Otmar, Izadi, Shahram, Wilson, Andrew D., Hodges, Steve, Garcia-Mendoza, Armando and Butz, Andreas (2009): Interactions in the air: adding further depth to interactive tabletops. In: Proceedings of the ACM Symposium on User Interface Software and Technology 2009. pp. 139-148.

Although interactive surfaces have many unique and compelling qualities, the interactions they support are by their very nature bound to the display surface. In this paper we present a technique for users to seamlessly switch between interacting on the tabletop surface to above it. Our aim is to leverage the space above the surface in combination with the regular tabletop display to allow more intuitive manipulation of digital content in three-dimensions. Our goal is to design a technique that closely resembles the ways we manipulate physical objects in the real-world; conceptually, allowing virtual objects to be 'picked up' off the tabletop surface in order to manipulate their three dimensional position or orientation. We chart the evolution of this technique, implemented on two rear projection-vision tabletops. Both use special projection screen materials to allow sensing at significant depths beyond the display. Existing and new computer vision techniques are used to sense hand gestures and postures above the tabletop, which can be used alongside more familiar multi-touch interactions. Interacting above the surface in this way opens up many interesting challenges. In particular it breaks the direct interaction metaphor that most tabletops afford. We present a novel shadow-based technique to help alleviate this issue. We discuss the strengths and limitations of our technique based on our own observations and initial user feedback, and provide various insights from comparing, and contrasting, our tabletop implementations.

© All rights reserved Hilliges et al. and/or their publisher


Hilliges, Otmar, Kim, David and Izadi, Shahram (2008): Creating malleable interactive surfaces using liquid displacement sensing. In: Third IEEE International Workshop on Tabletops and Interactive Surfaces Tabletop 2008 October 1-3, 2008, Amsterdam, The Netherlands. pp. 157-160.


Izadi, Shahram, Butler, Alex, Hodges, Steve, West, Darren, Hall, Malcolm, Buxton, Bill and Molloy, Mike (2008): Experiences with building a thin form-factor touch and tangible tabletop. In: Third IEEE International Workshop on Tabletops and Interactive Surfaces Tabletop 2008 October 1-3, 2008, Amsterdam, The Netherlands. pp. 181-184.


José, Rui, Otero, Nuno, Izadi, Shahram and Harper, Richard H. R. (2008): Instant Places: Using Bluetooth for Situated Interaction in Public Displays. In IEEE Pervasive Computing, 7 (4) pp. 52-57.


Butler, Alex, Izadi, Shahram and Hodges, Steve (2008): SideSight: multi-"touch" interaction around small devices. In: Cousins, Steve B. and Beaudouin-Lafon, Michel (eds.) Proceedings of the 21st Annual ACM Symposium on User Interface Software and Technology October 19-22, 2008, Monterey, CA, USA. pp. 201-204.


Wilson, Andrew D., Izadi, Shahram, Hilliges, Otmar, Garcia-Mendoza, Armando and Kirk, David (2008): Bringing physics to the surface. In: Cousins, Steve B. and Beaudouin-Lafon, Michel (eds.) Proceedings of the 21st Annual ACM Symposium on User Interface Software and Technology October 19-22, 2008, Monterey, CA, USA. pp. 67-76.

Izadi, Shahram, Hodges, Steve, Taylor, Stuart, Rosenfeld, Dan, Villar, Nicolas, Butler, Alex and Westhues, Jonathan (2008): Going beyond the display: a surface technology with an electronically switchable diffuser. In: Cousins, Steve B. and Beaudouin-Lafon, Michel (eds.) Proceedings of the 21st Annual ACM Symposium on User Interface Software and Technology October 19-22, 2008, Monterey, CA, USA. pp. 269-278.

Terrenghi, Lucia, Kirk, David, Sellen, Abigail and Izadi, Shahram (2007): Affordances for manipulation of physical versus digital media on interactive surfaces. In: Proceedings of ACM CHI 2007 Conference on Human Factors in Computing Systems 2007. pp. 1157-1166.

This work presents the results of a comparative study in which we investigate the ways in which manipulation of physical versus digital media is fundamentally different. Participants carried out both a puzzle task and a photo-sorting task in two different modes: in a physical three-dimensional space and on a multi-touch interactive tabletop on which the digital items resembled their physical counterparts in terms of appearance and behavior. By observing the interaction behaviors of 12 participants, we explore the main differences and discuss what this means for designing interactive surfaces that use aspects of the physical world as a design resource.

© All rights reserved Terrenghi et al. and/or ACM Press

Hodges, Steve, Izadi, Shahram, Butler, Alex, Rrustemi, Alban and Buxton, Bill (2007): ThinSight: versatile multi-touch sensing for thin form-factor displays. In: Proceedings of the ACM Symposium on User Interface Software and Technology October 7-10, 2007, Newport, Rhode Island, USA. pp. 259-268.

ThinSight is a novel optical sensing system, fully integrated into a thin form-factor display, capable of detecting multiple fingers placed on or near the display surface. We describe this new hardware in detail, and demonstrate how it can be embedded behind a regular LCD, allowing sensing without degradation of display capability. With our approach, fingertips and hands are clearly identifiable through the display. The approach of optical sensing also opens up the exciting possibility of detecting other physical objects and visual markers through the display, and some initial experiments are described. We also discuss other novel capabilities of our system: interaction at a distance using IR pointing devices, and IR-based communication with other electronic devices through the display. A major advantage of ThinSight over existing camera- and projector-based optical systems is its compact, thin form-factor, making such systems even more deployable. We therefore envisage using ThinSight to capture rich sensor data through the display, which can be processed using computer vision techniques to enable both multi-touch and tangible interaction.

© All rights reserved Hodges et al. and/or ACM Press

Agarwal, Ankur, Izadi, Shahram, Chandraker, Manmohan and Blake, Andrew (2007): High Precision Multi-touch Sensing on Surfaces using Overhead Cameras. In: Second IEEE International Workshop on Horizontal Interactive Human-Computer Systems Tabletop 2007 October 10-12, 2007, Newport, Rhode Island, USA. pp. 197-200.

Izadi, Shahram, Agarwal, Ankur, Criminisi, Antonio, Winn, John M., Blake, Andrew and Fitzgibbon, Andrew W. (2007): C-Slate: A Multi-Touch and Object Recognition System for Remote Collaboration using Horizontal Surfaces. In: Second IEEE International Workshop on Horizontal Interactive Human-Computer Systems Tabletop 2007 October 10-12, 2007, Newport, Rhode Island, USA. pp. 3-10.

Harper, Richard H. R., Regan, Tim, Izadi, Shahram, Mosawi, Kharsim Al, Rouncefield, Mark and Rubens, Simon (2007): Trafficking: design for the viral exchange of TV content on mobile phones. In: Cheok, Adrian David and Chittaro, Luca (eds.) Proceedings of the 9th Conference on Human-Computer Interaction with Mobile Devices and Services - Mobile HCI 2007 September 9-12, 2007, Singapore. pp. 249-256.

Brown, Barry A. T., Taylor, Alex S., Izadi, Shahram, Sellen, Abigail, Kaye, Joseph Jofish and Eardley, Rachel (2007): Locating Family Values: A Field Trial of the Whereabouts Clock. In: Krumm, John, Abowd, Gregory D., Seneviratne, Aruna and Strang, Thomas (eds.) UbiComp 2007 Ubiquitous Computing - 9th International Conference September 16-19, 2007, Innsbruck, Austria. pp. 354-371.

We report the results of a long-term, multi-site field trial of a situated awareness device for families called the Whereabouts Clock. The Clock displayed family members' current location as one of four privacy-preserving, deliberately coarse-grained categories (HOME, WORK, SCHOOL or ELSEWHERE). In use, the Clock supported not only family co-ordination but also more emotive aspects of family life such as reassurance, connectedness, identity and social touch. This emphasized aspects of family life frequently neglected in Ubicomp, such as the ways in which families' awareness of each other's activities contributes to a sense of a family's identity. We draw further on the results to differentiate between location as a technical aspect of awareness systems and what we characterize as location-in-interaction. Location-in-interaction is revealed as an emotional, accountable and even moral part of family life.

© All rights reserved Brown et al. and/or Springer

 Cited in the following chapter:

Usability Evaluation: [/encyclopedia/usability_evaluation.html]

Taylor, Alex S., Harper, Richard H. R., Swan, Laurel, Izadi, Shahram, Sellen, Abigail and Perry, Mark (2007): Homes that make us smart. In Personal and Ubiquitous Computing, 11 (5) pp. 383-393.

Sellen, Abigail, Harper, Richard, Eardley, Rachel, Izadi, Shahram, Regan, Tim, Taylor, Alex S. and Wood, Kenneth R. (2006): HomeNote: supporting situated messaging in the home. In: Proceedings of ACM CSCW06 Conference on Computer-Supported Cooperative Work 2006. pp. 383-392.

In this paper we describe a field trial designed to investigate the potential of remote, situated messaging within the home. Five households used our "HomeNote" device for approximately a month. The results show a diversity of types of communication which highlight the role of messaging both to a household and to a place. It also shows the ways in which these kinds of messages enable subtle ways of requesting action, expressing affection, and marking identity in a household -- communication types which have received little attention in the research literature. These in turn point to new concepts for technology which we describe.

© All rights reserved Sellen et al. and/or ACM Press

Hodges, Steve, Williams, Lyndsay, Berry, Emma, Izadi, Shahram, Srinivasan, James, Butler, Alex, Smyth, Gavin, Kapur, Narinder and Wood, Kenneth R. (2006): SenseCam: A Retrospective Memory Aid. In: Dourish, Paul and Friday, Adrian (eds.) UbiComp 2006 Ubiquitous Computing - 8th International Conference September 17-21, 2006, Orange County, CA, USA. pp. 177-193.

Izadi, Shahram, Fitzpatrick, Geraldine, Rodden, Tom, Brignull, Harry, Rogers, Yvonne and Lindley, Sian (2005): The iterative design and study of a large display for shared and sociable spaces. In: Proceedings of the Conference on Designing for User Experiences DUX05 2005. p. 59.

We explore the design opportunities presented by situating large interactive displays outside of the workplace, within shared and sociable spaces such as common areas at universities and conferences, cafes, and hotel foyers. We seek to provide a better understanding of this design space by charting the iterative design of an interactive large display system called Dynamo. Dynamo has been designed to enable the sharing and exchange of a wide variety of digital media. We report on how the interaction metaphors were designed and refined based upon in-lab and in-situ studies. We also study how an existing community uses this technology within their own established setting. Qualitative and quantitative analysis shows that the system was used extensively in a variety of ways, including sharing of photos, video clips, and websites, and for facilitating social interaction and collaboration. We conclude with recommendations for designing large display systems for shared and social spaces.

© All rights reserved Izadi et al. and/or ACM Press

Duff, Paul, McCarthy, Michael R., Clark, Angus, Muller, Henk L., Randell, Cliff, Izadi, Shahram, Boucher, Andy, Law, Andy, Pennington, Sarah and Swinford, Richard (2005): A New Method for Auto-calibrated Object Tracking. In: Beigl, Michael, Intille, Stephen S., Rekimoto, Jun and Tokuda, Hideyuki (eds.) UbiComp 2005 Ubiquitous Computing - 7th International Conference September 11-14, 2005, Tokyo, Japan. pp. 123-140.

Brignull, Harry, Izadi, Shahram, Fitzpatrick, Geraldine, Rogers, Yvonne and Rodden, Tom (2004): The introduction of a shared interactive surface into a communal space. In: Proceedings of ACM CSCW04 Conference on Computer-Supported Cooperative Work 2004. pp. 49-58.

We describe a user study of a large multi-user interactive surface deployed for an initial period within a real world setting. The surface was designed to enable the sharing and exchange of a wide variety of digital media. The setting for the study was the common room of a high school where students come together to mix, socialize, and collaborate throughout the day. We report on how the students use the new technology within their own established communal space. Findings show that the system was used extensively by the students in a variety of ways, including sharing of photos, video clips, and websites, and for facilitating social interaction. We discuss how the interactive shared surface was appropriated by the students and introduced into their everyday lives in ways that both mirrored and extended their existing practices within the communal space.

© All rights reserved Brignull et al. and/or ACM Press

Izadi, Shahram, Brignull, Harry, Rodden, Tom, Rogers, Yvonne and Underwood, Mia (2003): Dynamo: a public interactive surface supporting the cooperative sharing and exchange of media. In: Proceedings of the 16th annual ACM Symposium on User Interface Software and Technology November 2-5, 2003, Vancouver, Canada. pp. 159-168.

In this paper we propose a novel way of supporting occasional meetings that take place in unfamiliar public places, which promotes lightweight, visible and fluid collaboration. Our central idea is that the sharing and exchange of information occurs across public surfaces that users can easily access and interact with. To this end, we designed and implemented Dynamo, a communal multi-user interactive surface. The surface supports the cooperative sharing and exchange of a wide range of media that can be brought to the surface by users that are remote from their familiar organizational settings.

© All rights reserved Izadi et al. and/or ACM Press

Schnädelbach, Holger, Koleva, Boriana, Flintham, Martin, Fraser, Mike, Izadi, Shahram, Chandler, Paul, Foster, Malcolm, Benford, Steve and Rodden, Tom (2002): The augurscope: a mixed reality interface for outdoors. In: Terveen, Loren (ed.) Proceedings of the ACM CHI 2002 Conference on Human Factors in Computing Systems Conference April 20-25, 2002, Minneapolis, Minnesota. pp. 9-16.

Newman, Mark W., Izadi, Shahram, Edwards, W. Keith, Sedivy, Jana Z. and Smith, Trevor (2002): User interfaces when and where they are needed: an infrastructure for recombinant computing. In: Beaudouin-Lafon, Michel (ed.) Proceedings of the 15th annual ACM symposium on User interface software and technology October 27-30, 2002, Paris, France. pp. 171-180.

Users in ubiquitous computing environments need to be able to make serendipitous use of resources that they did not anticipate and of which they have no prior knowledge. The Speakeasy recombinant computing framework is designed to support such ad hoc use of resources on a network. In addition to other facilities, the framework provides an infrastructure through which device and service user interfaces can be made available to users on multiple platforms. The framework enables UIs to be provided for connections involving multiple entities, allows these UIs to be delivered asynchronously, and allows them to be injected by any party participating in a connection.

© All rights reserved Newman et al. and/or ACM Press

Edwards, W. Keith, Newman, Mark W., Sedivy, Jana Z., Smith, Trevor, Balfanz, Dirk, Smetters, D. K., Wong, H. Chi and Izadi, Shahram (2002): Using speakeasy for ad hoc peer-to-peer collaboration. In: Churchill, Elizabeth F., McCarthy, Joe, Neuwirth, Christine and Rodden, Tom (eds.) Proceedings of the 2002 ACM conference on Computer supported cooperative work November 16 - 20, 2002, New Orleans, Louisiana, USA. pp. 256-265.

Peer-to-peer systems appear promising in terms of their ability to support ad hoc, spontaneous collaboration. However, current peer-to-peer systems suffer from several deficiencies that diminish their ability to support this domain, such as inflexibility in terms of discovery protocols, network usage, and data transports. We have developed the Speakeasy framework, which addresses these issues and supports these types of applications. We show how Speakeasy addresses the shortcomings of current peer-to-peer systems, and describe a demonstration application, called Casca, that supports ad hoc peer-to-peer collaboration by taking advantage of the mechanisms provided by Speakeasy.

© All rights reserved Edwards et al. and/or ACM Press

Newman, Mark W., Sedivy, Jana Z., Neuwirth, Christine, Edwards, W. Keith, Hong, Jason I., Izadi, Shahram, Marcelo, Karen and Smith, Trevor F. (2002): Designing for serendipity: supporting end-user configuration of ubiquitous computing environments. In: Proceedings of DIS02: Designing Interactive Systems: Processes, Practices, Methods, & Techniques 2002. pp. 147-156.

The future world of ubiquitous computing is one in which we will be surrounded by an ever-richer set of networked devices and services. In such a world, we cannot expect to have available to us specific applications that allow us to accomplish every conceivable combination of devices that we might wish. Instead, we believe that many of our interactions will be through highly generic tools that allow end-user discovery, configuration, interconnection, and control of the devices around us. This paper presents a design study of such an environment, intended to support serendipitous, opportunistic use of discovered network resources. We present an examination of a generic browser-style application built on top of an infrastructure developed to support arbitrary recombination of devices and services, as well as a number of challenges we believe to be inherent in such settings.

© All rights reserved Newman et al. and/or ACM Press

Izadi, Shahram, Fraser, Mike, Benford, Steve, Flintham, Martin, Greenhalgh, Chris, Rodden, Tom and Schnädelbach, Holger (2002): Citywide: Supporting Interactive Digital Experiences Across Physical Space. In Personal and Ubiquitous Computing, 6 (4) pp. 290-298.

Benford, Steve, Bowers, John, Chandler, Paul, Ciolfi, Luigina, Flintham, Martin, Fraser, Mike, Greenhalgh, Chris, Hall, Tony, Hellström, Sten-Olof, Izadi, Shahram, Rodden, Tom, Schnädelbach, Holger and Taylor, Ian (2001): Unearthing Virtual History: Using Diverse Interfaces to Reveal Hidden Virtual Worlds. In: Abowd, Gregory D., Brumitt, Barry and Shafer, Steven A. (eds.) Ubicomp 2001 Ubiquitous Computing - Third International Conference September 30 - October 2, 2001, Atlanta, Georgia, USA. pp. 225-231.
