Publication statistics

Pub. period: 1999-2011
Pub. count: 27
Number of co-authors: 33



Co-authors

Number of publications with 3 favourite co-authors:

Hrvoje Benko: 8
Meredith Ringel Morris: 5
Jacob O. Wobbrock: 3

 

 

Productive colleagues

Andrew D. Wilson's 3 most productive colleagues in number of publications:

Scott E. Hudson: 113
Ravin Balakrishnan: 108
Jacob O. Wobbrock: 71
 
 
 


Andrew D. Wilson

Ph.D.

Picture of Andrew D. Wilson.
Personal Homepage:
research.microsoft.com/en-us/um/people/awilson/


Current place of employment:
Microsoft Research

Andy Wilson is a Principal Researcher and Research Manager at Microsoft Research, where he applies sensing technologies to enable new modes of human-computer interaction. His interests include gesture-based interfaces, inertial sensing, and display technologies. He helped found the Surface Computing group at Microsoft and pioneered the company's early efforts to commercialize depth cameras. Before joining Microsoft, he obtained his BA at Cornell University and his MS and PhD at the MIT Media Laboratory. He currently manages the Natural Interaction Research group at Microsoft Research.

 

Publications by Andrew D. Wilson (bibliography)

2011
 

Harrison, Chris, Benko, Hrvoje and Wilson, Andrew D. (2011): OmniTouch: wearable multitouch interaction everywhere. In: Proceedings of the 2011 ACM Symposium on User Interface Software and Technology 2011. pp. 441-450.

OmniTouch is a wearable depth-sensing and projection system that enables interactive multitouch applications on everyday surfaces. Beyond the shoulder-worn system, there is no instrumentation of the user or environment. Foremost, the system allows the wearer to use their hands, arms and legs as graphical, interactive surfaces. Users can also transiently appropriate surfaces from the environment to expand the interactive area (e.g., books, walls, tables). On such surfaces -- without any calibration -- OmniTouch provides capabilities similar to that of a mouse or touchscreen: X and Y location in 2D interfaces and whether fingers are "clicked" or hovering, enabling a wide variety of interactions. Reliable operation on the hands, for example, requires buttons to be 2.3cm in diameter. Thus, it is now conceivable that anything one can do on today's mobile devices, they could do in the palm of their hand.

© All rights reserved Harrison et al. and/or ACM Press
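
The click-versus-hover sensing described in the abstract can be illustrated with a simple depth test. The sketch below is not OmniTouch's pipeline (which segments fingers from the depth image and models the projected surface); it is a minimal illustration that assumes a fingertip pixel has already been found and estimates the surface depth from a patch just beyond the tip.

```python
import numpy as np

def classify_contact(depth_mm, tip_xy, touch_threshold_mm=10.0, patch=7):
    """Label a fingertip as 'clicked' or 'hovering' from a depth map.

    depth_mm : HxW array of depth values in millimetres.
    tip_xy   : (x, y) pixel location of a detected fingertip (assumed to come
               from an earlier finger-segmentation stage).
    """
    h, w = depth_mm.shape
    x, y = tip_xy
    tip_depth = float(depth_mm[y, x])
    # Estimate the surface depth from a small patch just past the fingertip
    # (a stand-in for the projected-surface model a real system would keep).
    y0, y1 = min(y + 3, h - 1), min(y + 3 + patch, h)
    x0, x1 = max(x - patch // 2, 0), min(x + patch // 2 + 1, w)
    surface_depth = float(np.median(depth_mm[y0:y1, x0:x1]))
    # A fingertip hovering above the surface is nearer the camera, so its
    # depth is smaller; within a few millimetres we call it a touch.
    return "clicked" if (surface_depth - tip_depth) < touch_threshold_mm else "hovering"
```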

2010
 

Benko, Hrvoje and Wilson, Andrew D. (2010): Pinch-the-sky dome: freehand multi-point interactions with immersive omni-directional data. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 3045-3050.

Pinch-the-Sky Dome is a large immersive installation where several users can interact simultaneously with omni-directional data inside of a tilted geodesic dome. Our system consists of an omni-directional projector-camera unit in the center of the dome. The projector is able to project an image spanning the entire 360 degrees and a camera is used to track freehand gestures for navigation of the content. The interactive demos include: 1) the exploration of the astronomical data provided by World Wide Telescope, 2) social networking 3D graph visualizations, 3) immersive panoramic images, and 4) 360 degree video conferencing. We combine speech commands with freehand pinch gestures to provide a highly immersive and interactive experience to several users inside the dome, with a very wide field of view for each user.

© All rights reserved Benko and Wilson and/or their publisher

 

Hartmann, Bjorn, Morris, Meredith Ringel, Benko, Hrvoje and Wilson, Andrew D. (2010): Pictionaire: supporting collaborative design work by integrating physical and digital artifacts. In: Proceedings of ACM CSCW10 Conference on Computer-Supported Cooperative Work 2010. pp. 421-424.

This paper introduces an interactive tabletop system that enhances creative collaboration across physical and digital artifacts. Pictionaire offers capture, retrieval, annotation, and collection of visual material. It enables multiple designers to fluidly move imagery from the physical to the digital realm; work with found, drawn and captured imagery; organize items into functional collections; and record meeting histories. These benefits are made possible by a large interactive table augmented with high-resolution overhead image capture. Summative evaluations with 16 professionals and four student pairs validated discoverability and utility of interactions, uncovered emergent functionality, and suggested opportunities for transitioning content to and from the table.

© All rights reserved Hartmann et al. and/or their publisher

 

Schwarz, Julia, Hudson, Scott E., Mankoff, Jennifer and Wilson, Andrew D. (2010): A framework for robust and flexible handling of inputs with uncertainty. In: Proceedings of the 2010 ACM Symposium on User Interface Software and Technology 2010. pp. 47-56.

New input technologies (such as touch), recognition based input (such as pen gestures) and next-generation interactions (such as inexact interaction) all hold the promise of more natural user interfaces. However, these techniques all create inputs with some uncertainty. Unfortunately, conventional infrastructure lacks a method for easily handling uncertainty, and as a result input produced by these technologies is often converted to conventional events as quickly as possible, leading to a stunted interactive experience. We present a framework for handling input with uncertainty in a systematic, extensible, and easy to manipulate fashion. To illustrate this framework, we present several traditional interactors which have been extended to provide feedback about uncertain inputs and to allow for the possibility that in the end that input will be judged wrong (or end up going to a different interactor). Our six demonstrations include tiny buttons that are manipulable using touch input, a text box that can handle multiple interpretations of spoken input, a scrollbar that can respond to inexactly placed input, and buttons which are easier to click for people with motor impairments. Our framework supports all of these interactions by carrying uncertainty forward all the way through selection of possible target interactors, interpretation by interactors, generation of (uncertain) candidate actions to take, and a mediation process that decides (in a lazy fashion) which actions should become final.

© All rights reserved Schwarz et al. and/or their publisher
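
The idea of carrying uncertainty forward and mediating lazily can be sketched in a few lines. The classes and thresholds below are illustrative assumptions, not the paper's framework: a touch with spatial uncertainty is scored against every candidate button, and an action is committed only once one interpretation is clearly the most likely.

```python
import math
from dataclasses import dataclass

@dataclass
class Button:
    name: str
    x: float
    y: float
    w: float
    h: float

    def likelihood(self, tx, ty, sigma=8.0):
        """Probability-like score that an uncertain touch (tx, ty) meant this
        button: Gaussian falloff with distance from the button centre."""
        cx, cy = self.x + self.w / 2, self.y + self.h / 2
        d2 = (tx - cx) ** 2 + (ty - cy) ** 2
        return math.exp(-d2 / (2 * sigma ** 2))

def dispatch_uncertain_touch(touch_xy, interactors, accept=0.6):
    """Score every interactor, keep all candidate actions, and only commit
    (lazily) when one interpretation is sufficiently more likely than the rest."""
    scores = [(b.likelihood(*touch_xy), b) for b in interactors]
    total = sum(s for s, _ in scores) or 1e-9
    ranked = sorted(((s / total, b) for s, b in scores),
                    reverse=True, key=lambda t: t[0])
    best_p, best = ranked[0]
    if best_p >= accept:
        return f"activate {best.name}"          # mediation resolved the ambiguity
    return [f"candidate {b.name} p={p:.2f}" for p, b in ranked]  # keep alternatives alive

buttons = [Button("save", 0, 0, 20, 20), Button("delete", 24, 0, 20, 20)]
print(dispatch_uncertain_touch((22, 10), buttons))   # ambiguous: returns candidates
print(dispatch_uncertain_touch((5, 10), buttons))    # clear: activates "save"
```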

 

Wilson, Andrew D. and Benko, Hrvoje (2010): Combining multiple depth cameras and projectors for interactions on, above and between surfaces. In: Proceedings of the 2010 ACM Symposium on User Interface Software and Technology 2010. pp. 273-282.

Instrumented with multiple depth cameras and projectors, LightSpace is a small room installation designed to explore a variety of interactions and computational strategies related to interactive displays and the space that they inhabit. LightSpace cameras and projectors are calibrated to 3D real world coordinates, allowing for projection of graphics correctly onto any surface visible by both camera and projector. Selective projection of the depth camera data enables emulation of interactive displays on un-instrumented surfaces (such as a standard table or office desk), as well as facilitates mid-air interactions between and around these displays. For example, after performing multi-touch interactions on a virtual object on the tabletop, the user may transfer the object to another display by simultaneously touching the object and the destination display. Or the user may "pick up" the object by sweeping it into their hand, see it sitting in their hand as they walk over to an interactive wall display, and "drop" the object onto the wall by touching it with their other hand. We detail the interactions and algorithms unique to LightSpace, discuss some initial observations of use and suggest future directions.

© All rights reserved Wilson and Benko and/or their publisher

 Cited in the following chapter:

3D User Interfaces: [/encyclopedia/3d_user_interfaces.html]
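
The core calibration operation behind a system like LightSpace (described in the abstract above) is mapping depth-camera pixels into shared world coordinates, after which any sensed point can be tested against a modelled surface. The sketch below shows that single operation with made-up intrinsics and an identity extrinsic transform; it illustrates the general technique, not LightSpace's code.

```python
import numpy as np

def deproject(u, v, depth_m, fx, fy, cx, cy, cam_to_world):
    """Map a depth-camera pixel (u, v) with depth `depth_m` into world
    coordinates using pinhole intrinsics and a 4x4 camera-to-world transform."""
    x = (u - cx) / fx * depth_m
    y = (v - cy) / fy * depth_m
    p_cam = np.array([x, y, depth_m, 1.0])
    return (cam_to_world @ p_cam)[:3]

def on_surface(point_w, plane_point, plane_normal, tol_m=0.01):
    """True if a world-space point lies within `tol_m` of a calibrated surface
    plane, e.g. a tabletop that should behave like a touch display."""
    n = plane_normal / np.linalg.norm(plane_normal)
    return abs(np.dot(point_w - plane_point, n)) < tol_m

# Illustrative numbers only: identity extrinsics and a table plane at z = 1.2 m.
cam_to_world = np.eye(4)
p = deproject(320, 240, 1.2, fx=570.0, fy=570.0, cx=320.0, cy=240.0,
              cam_to_world=cam_to_world)
print(on_surface(p, plane_point=np.array([0.0, 0.0, 1.2]),
                 plane_normal=np.array([0.0, 0.0, 1.0])))   # -> True
```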


 
 

Benko, Hrvoje, Izadi, Shahram, Wilson, Andrew D., Cao, Xiang, Rosenfeld, Dan and Hinckley, Ken (2010): Design and evaluation of interaction models for multi-touch mice. In: Proceedings of the 2010 Conference on Graphics Interface 2010. pp. 253-260.

Adding multi-touch sensing to the surface of a mouse has the potential to substantially increase the number of interactions available to the user. However, harnessing this increased bandwidth is challenging, since the user must perform multi-touch interactions while holding the device and using it as a regular mouse. In this paper we describe the design challenges and formalize the design space of multi-touch mice interactions. From our design space categories we synthesize four interaction models which enable the use of both multi-touch and mouse interactions on the same device. We describe the results of a controlled user experiment evaluating the performance of these models in a 2D spatial manipulation task typical of touch-based interfaces and compare them to interacting directly on a multi-touch screen and with a regular mouse. We observed that our multi-touch mouse interactions were overall slower than the chosen baselines; however, techniques providing a single focus of interaction and explicit touch activation yielded better performance and higher preferences from our participants. Our results expose the difficulties in designing multi-touch mice interactions and define the problem space for future research in making these devices effective.

© All rights reserved Benko et al. and/or their publisher

 

Morris, Meredith Ringel, Wobbrock, Jacob O. and Wilson, Andrew D. (2010): Understanding users' preferences for surface gestures. In: Proceedings of the 2010 Conference on Graphics Interface 2010. pp. 261-268.

We compare two gesture sets for interactive surfaces -- a set of gestures created by an end-user elicitation method and a set of gestures authored by three HCI researchers. Twenty-two participants who were blind to the gestures' authorship evaluated 81 gestures presented and performed on a Microsoft Surface. Our findings indicate that participants preferred gestures authored by larger groups of people, such as those created by end-user elicitation methodologies or those proposed by more than one researcher. This preference pattern seems to arise in part because the HCI researchers proposed more physically and conceptually complex gestures than end-users. We discuss our findings in detail, including the implications for surface gesture design.

© All rights reserved Morris et al. and/or their publisher

2009
 

Wobbrock, Jacob O., Morris, Meredith Ringel and Wilson, Andrew D. (2009): User-defined gestures for surface computing. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 1083-1092.

Many surface computing prototypes have employed gestures created by system designers. Although such gestures are appropriate for early investigations, they are not necessarily reflective of user behavior. We present an approach to designing tabletop gestures that relies on eliciting gestures from non-technical users by first portraying the effect of a gesture, and then asking users to perform its cause. In all, 1080 gestures from 20 participants were logged, analyzed, and paired with think-aloud data for 27 commands performed with 1 and 2 hands. Our findings indicate that users rarely care about the number of fingers they employ, that one hand is preferred to two, that desktop idioms strongly influence users' mental models, and that some commands elicit little gestural agreement, suggesting the need for on-screen widgets. We also present a complete user-defined gesture set, quantitative agreement scores, implications for surface technology, and a taxonomy of surface gestures. Our results will help designers create better gesture sets informed by user behavior.

© All rights reserved Wobbrock et al. and/or ACM Press
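
The "quantitative agreement scores" mentioned in the abstract are, in the commonly cited formulation, computed per referent from the sizes of groups of identical gesture proposals. The small function below follows that definition; the example data are invented, not results from the study.

```python
from collections import Counter

def agreement(proposals):
    """Agreement score for one referent: sum over groups of identical gesture
    proposals of (group size / total proposals) squared. 1.0 means everyone
    proposed the same gesture; 1/n means no two participants agreed."""
    n = len(proposals)
    return sum((count / n) ** 2 for count in Counter(proposals).values())

# Toy elicitation data: gestures proposed for two hypothetical referents.
print(agreement(["drag", "drag", "drag", "flick"]))   # 0.625
print(agreement(["tap", "lasso", "hold", "drag"]))    # 0.25 -> little consensus
```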

 

Wilson, Andrew D. (2009): Simulating grasping behavior on an imaging interactive surface. In: Proceedings of the 2009 ACM International Conference on Interactive Tabletops and Surfaces 2009. pp. 125-132.

We present techniques and algorithms to simulate grasping behavior on an imaging interactive surface (e.g., Microsoft Surface). In particular, we describe a contour model of touch contact shape, and show how these contours may be represented in a real-time physics simulation in a way that allows more realistic grasping behavior. For example, a virtual object may be moved by "squeezing" it with multiple contacts undergoing motion. The virtual object is caused to move by simulated contact and friction forces. Previous work [14] uses many small rigid bodies ("particle proxies") to approximate touch contact shape. This paper presents a variation of the particle proxy approach which allows grasping behavior. The advantages and disadvantages of this new approach are discussed.

© All rights reserved Wilson and/or his/her publisher
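
The "particle proxy" idea that this paper builds on (and extends with a contour model) can be prototyped in an off-the-shelf 2D physics engine: each sampled contact point becomes a small kinematic body whose motion the solver converts into contact and friction forces on virtual objects. The pymunk sketch below is a rough illustration under that assumption, not the paper's simulation.

```python
import pymunk

# A dynamic box standing in for a virtual object on the tabletop.
space = pymunk.Space()
space.gravity = (0, 0)
box_body = pymunk.Body(mass=1.0, moment=pymunk.moment_for_box(1.0, (60, 60)))
box_body.position = (200, 200)
box_shape = pymunk.Poly.create_box(box_body, (60, 60))
box_shape.friction = 0.9
space.add(box_body, box_shape)

def make_proxies(contact_points, radius=4):
    """Create one small kinematic 'particle proxy' per sampled contact point;
    the engine turns their motion into contact and friction forces."""
    proxies = []
    for x, y in contact_points:
        body = pymunk.Body(body_type=pymunk.Body.KINEMATIC)
        body.position = (x, y)
        shape = pymunk.Circle(body, radius)
        shape.friction = 0.9
        space.add(body, shape)
        proxies.append(body)
    return proxies

# Two "fingers" squeezing the box from the left and right, then moving upward.
proxies = make_proxies([(168, 200), (232, 200)])
for _ in range(60):                     # one simulated second at 60 Hz
    for p in proxies:
        p.velocity = (0, 50)            # proxies follow the sensed contacts
    space.step(1 / 60.0)
print(box_body.position)                # the box has been nudged by the proxies
```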

 

Hilliges, Otmar, Izadi, Shahram, Wilson, Andrew D., Hodges, Steve, Garcia-Mendoza, Armando and Butz, Andreas (2009): Interactions in the air: adding further depth to interactive tabletops. In: Proceedings of the ACM Symposium on User Interface Software and Technology 2009. pp. 139-148.

Although interactive surfaces have many unique and compelling qualities, the interactions they support are by their very nature bound to the display surface. In this paper we present a technique for users to seamlessly switch between interacting on the tabletop surface to above it. Our aim is to leverage the space above the surface in combination with the regular tabletop display to allow more intuitive manipulation of digital content in three-dimensions. Our goal is to design a technique that closely resembles the ways we manipulate physical objects in the real-world; conceptually, allowing virtual objects to be 'picked up' off the tabletop surface in order to manipulate their three dimensional position or orientation. We chart the evolution of this technique, implemented on two rear projection-vision tabletops. Both use special projection screen materials to allow sensing at significant depths beyond the display. Existing and new computer vision techniques are used to sense hand gestures and postures above the tabletop, which can be used alongside more familiar multi-touch interactions. Interacting above the surface in this way opens up many interesting challenges. In particular it breaks the direct interaction metaphor that most tabletops afford. We present a novel shadow-based technique to help alleviate this issue. We discuss the strengths and limitations of our technique based on our own observations and initial user feedback, and provide various insights from comparing, and contrasting, our tabletop implementations.

© All rights reserved Hilliges et al. and/or their publisher

 

Hartmann, Bjorn, Morris, Meredith Ringel, Benko, Hrvoje and Wilson, Andrew D. (2009): Augmenting interactive tables with mice & keyboards. In: Proceedings of the ACM Symposium on User Interface Software and Technology 2009. pp. 149-152.

This note examines the role traditional input devices can play in surface computing. Mice and keyboards can enhance tabletop technologies since they support high fidelity input, facilitate interaction with distant objects, and serve as a proxy for user identity and position. Interactive tabletops, in turn, can enhance the functionality of traditional input devices: they provide spatial sensing, augment devices with co-located visual content, and support connections among a plurality of devices. We introduce eight interaction techniques for a table with mice and keyboards, and we discuss the design space of such interactions.

© All rights reserved Hartmann et al. and/or their publisher

2008
 

Olwal, Alex and Wilson, Andrew D. (2008): SurfaceFusion: Unobtrusive Tracking of Everyday Objects in Tangible User Interfaces. In: Proceedings of the 2008 Conference on Graphics Interface May 28-30, 2008, Windsor, Ontario, Canada. pp. 235-242.

Interactive surfaces and related tangible user interfaces often involve everyday objects that are identified, tracked, and augmented with digital information. Traditional approaches for recognizing these objects typically rely on complex pattern recognition techniques, or the addition of active electronics or fiducials that alter the visual qualities of those objects, making them less practical for real-world use. Radio Frequency Identification (RFID) technology provides an unobtrusive method of sensing the presence of and identifying tagged nearby objects but has no inherent means of determining the position of tagged objects. Computer vision, on the other hand, is an established approach to track objects with a camera. While shapes and movement on an interactive surface can be determined from classic image processing techniques, object recognition tends to be complex, computationally expensive and sensitive to environmental conditions. We present a set of techniques in which movement and shape information from the computer vision system is fused with RFID events that identify what objects are in the image. By synchronizing these two complementary sensing modalities, we can associate changes in the image with events in the RFID data, in order to recover position, shape and identification of the objects on the surface, while avoiding complex computer vision processes and exotic RFID solutions.

© All rights reserved Olwal and Wilson and/or their publisher
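
The fusion of the two sensing modalities can be illustrated as temporal association: an RFID "tag appeared" event and a vision "new blob" event that occur close together in time are assumed to refer to the same object. The function below is a deliberately simple nearest-in-time matcher, not the synchronization algorithm from the paper.

```python
def fuse_events(rfid_events, blob_events, max_dt=0.5):
    """Associate RFID 'tag appeared' events with vision 'new blob' events by
    time proximity, so each tracked blob inherits a tag identity.

    rfid_events : list of (timestamp_s, tag_id)
    blob_events : list of (timestamp_s, blob_id, (x, y))
    Returns {blob_id: tag_id} for pairs closer than max_dt seconds.
    """
    assignments, used = {}, set()
    for t_blob, blob_id, _pos in sorted(blob_events):
        candidates = [(abs(t_blob - t_tag), tag) for t_tag, tag in rfid_events
                      if tag not in used and abs(t_blob - t_tag) <= max_dt]
        if candidates:
            _, tag = min(candidates)
            assignments[blob_id] = tag
            used.add(tag)
    return assignments

rfid = [(10.02, "mug-42"), (13.40, "book-7")]
blobs = [(10.10, 1, (220, 340)), (13.55, 2, (480, 120))]
print(fuse_events(rfid, blobs))   # {1: 'mug-42', 2: 'book-7'}
```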

 

Everitt, Katherine, Morris, Meredith Ringel, Brush, A. J. Bernheim and Wilson, Andrew D. (2008): DocuDesk: An interactive surface for creating and rehydrating many-to-many linkages among paper and digital documents. In: Third IEEE International Workshop on Tabletops and Interactive Surfaces Tabletop 2008 October 1-3, 2008, Amsterdam, The Netherlands. pp. 25-28.

 

Cao, Xiang, Wilson, Andrew D., Balakrishnan, Ravin, Hinckley, Ken and Hudson, Scott E. (2008): ShapeTouch: Leveraging contact shape on interactive surfaces. In: Third IEEE International Workshop on Tabletops and Interactive Surfaces Tabletop 2008 October 1-3, 2008, Amsterdam, The Netherlands. pp. 129-136.

 

Benko, Hrvoje, Wilson, Andrew D. and Balakrishnan, Ravin (2008): Sphere: multi-touch interactions on a spherical display. In: Cousins, Steve B. and Beaudouin-Lafon, Michel (eds.) Proceedings of the 21st Annual ACM Symposium on User Interface Software and Technology October 19-22, 2008, Monterey, CA, USA. pp. 77-86.

 

Wilson, Andrew D., Izadi, Shahram, Hilliges, Otmar, Garcia-Mendoza, Armando and Kirk, David (2008): Bringing physics to the surface. In: Cousins, Steve B. and Beaudouin-Lafon, Michel (eds.) Proceedings of the 21st Annual ACM Symposium on User Interface Software and Technology October 19-22, 2008, Monterey, CA, USA. pp. 67-76.

2007
 

Wilson, Andrew D. and Sarin, Raman (2007): BlueTable: connecting wireless mobile devices on interactive surfaces using vision-based handshaking. In: Proceedings of the 2007 Conference on Graphics Interface 2007. pp. 119-125.

Associating and connecting mobile devices for the wireless transfer of data is often a cumbersome process. We present a technique of associating a mobile device to an interactive surface using a combination of computer vision and Bluetooth technologies. Users establish the connection of a mobile device to the system by simply placing the device on a table surface. When the computer vision process detects a phone-like object on the surface, the system follows a handshaking procedure using Bluetooth and vision techniques to establish that the phone on the surface and the wirelessly connected phone are the same device. The connection is broken simply by removing the device. Furthermore, the vision-based handshaking procedure determines the precise position of the device on the interactive surface, thus permitting a variety of interactive scenarios which rely on the presentation of graphics co-located with the device. As an example, we present a prototype interactive system which allows the exchange of automatically downloaded photos by selecting and dragging photos from one cameraphone device to another.

© All rights reserved Wilson and Sarin and/or Canadian Information Processing Society

 

Wobbrock, Jacob O., Wilson, Andrew D. and Li, Yang (2007): Gestures without libraries, toolkits or training: a $1 recognizer for user interface prototypes. In: Proceedings of the ACM Symposium on User Interface Software and Technology October 7-10, 2007, Newport, Rhode Island, USA. pp. 159-168.

Although mobile, tablet, large display, and tabletop computers increasingly present opportunities for using pen, finger, and wand gestures in user interfaces, implementing gesture recognition largely has been the privilege of pattern matching experts, not user interface prototypers. Although some user interface libraries and toolkits offer gesture recognizers, such infrastructure is often unavailable in design-oriented environments like Flash, scripting environments like JavaScript, or brand new off-desktop prototyping environments. To enable novice programmers to incorporate gestures into their UI prototypes, we present a "$1 recognizer" that is easy, cheap, and usable almost anywhere in about 100 lines of code. In a study comparing our $1 recognizer, Dynamic Time Warping, and the Rubine classifier on user-supplied gestures, we found that $1 obtains over 97% accuracy with only 1 loaded template and 99% accuracy with 3+ loaded templates. These results were nearly identical to DTW and superior to Rubine. In addition, we found that medium-speed gestures, in which users balanced speed and accuracy, were recognized better than slow or fast gestures for all three recognizers. We also discuss the effect that the number of templates or training examples has on recognition, the score falloff along recognizers' N-best lists, and results for individual gestures. We include detailed pseudocode of the $1 recognizer to aid development, inspection, extension, and testing.

© All rights reserved Wobbrock et al. and/or ACM Press
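
The paper includes detailed pseudocode; the condensed Python sketch below follows the core steps it describes (resample, rotate to the indicative angle, scale to a reference square, translate to the origin, compare average point-wise path distance) but omits refinements such as the golden-section search over rotation, so treat it as an approximation rather than the published recognizer.

```python
import math

N = 64  # points per resampled gesture

def resample(points, n=N):
    """Resample a stroke to n equidistantly spaced points along its path."""
    path = sum(math.dist(a, b) for a, b in zip(points, points[1:]))
    interval, d, out = path / (n - 1), 0.0, [points[0]]
    pts = list(points)
    i = 1
    while i < len(pts):
        seg = math.dist(pts[i - 1], pts[i])
        if d + seg >= interval:
            t = (interval - d) / seg
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)   # continue measuring from the inserted point
            d = 0.0
        else:
            d += seg
        i += 1
    while len(out) < n:        # guard against floating-point shortfall
        out.append(pts[-1])
    return out[:n]

def centroid(pts):
    return (sum(p[0] for p in pts) / len(pts), sum(p[1] for p in pts) / len(pts))

def rotate_to_zero(pts):
    """Rotate so the angle from the centroid to the first point is zero."""
    cx, cy = centroid(pts)
    theta = math.atan2(pts[0][1] - cy, pts[0][0] - cx)
    cos_t, sin_t = math.cos(-theta), math.sin(-theta)
    return [((x - cx) * cos_t - (y - cy) * sin_t + cx,
             (x - cx) * sin_t + (y - cy) * cos_t + cy) for x, y in pts]

def scale_and_translate(pts, size=250.0):
    """Scale to a reference square, then translate the centroid to the origin."""
    xs, ys = [p[0] for p in pts], [p[1] for p in pts]
    w, h = (max(xs) - min(xs)) or 1e-9, (max(ys) - min(ys)) or 1e-9
    pts = [(x * size / w, y * size / h) for x, y in pts]
    cx, cy = centroid(pts)
    return [(x - cx, y - cy) for x, y in pts]

def normalize(points):
    return scale_and_translate(rotate_to_zero(resample(points)))

def path_distance(a, b):
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def recognize(candidate, templates):
    """Return the template name with the smallest average point-to-point distance."""
    c = normalize(candidate)
    return min(templates, key=lambda name: path_distance(c, templates[name]))

templates = {"caret": normalize([(0, 100), (50, 0), (100, 100)]),
             "zigzag": normalize([(0, 0), (30, 80), (60, 0), (100, 80)])}
print(recognize([(1, 98), (49, 3), (99, 102)], templates))   # 'caret'
```

Templates are stored after a single pass through normalize(), so recognition reduces to a nearest-template search over path distances.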

 

Wilson, Andrew D. (2007): Depth-Sensing Video Cameras for 3D Tangible Tabletop Interaction. In: Second IEEE International Workshop on Horizontal Interactive Human-Computer Systems Tabletop 2007 October 10-12, 2007, Newport, Rhode Island, USA. pp. 201-204.

2006
 

Wilson, Andrew D. and Agrawala, Maneesh (2006): Text entry using a dual joystick game controller. In: Proceedings of ACM CHI 2006 Conference on Human Factors in Computing Systems 2006. pp. 475-478.

We present a new bimanual text entry technique designed for today's dual-joystick game controllers. The left and right joysticks are used to independently select characters from the corresponding (left/right) half of an on-screen selection keyboard. Our dual-stick approach is analogous to typing on a standard keyboard, where each hand (left/right) presses keys on the corresponding side of the keyboard. We conducted a user study showing that our technique supports keyboarding skills transfer and is thereby readily learnable. Our technique increases entry speed significantly compared to the status quo single stick selection keyboard technique.

© All rights reserved Wilson and Agrawala and/or ACM Press
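
A split selection keyboard of this kind implies a simple stick-to-character mapping: each joystick's deflection picks the nearest key on its half of the layout. The layout and grid below are invented for illustration and are not the keyboard used in the study.

```python
import math

# Hypothetical split layout: each half lays its characters on a small grid
# that the corresponding stick points into.
LEFT_KEYS  = {(-1, -1): "q", (0, -1): "w", (1, -1): "e",
              (-1,  0): "a", (0,  0): "s", (1,  0): "d",
              (-1,  1): "z", (0,  1): "x", (1,  1): "c"}
RIGHT_KEYS = {(-1, -1): "u", (0, -1): "i", (1, -1): "o",
              (-1,  0): "j", (0,  0): "k", (1,  0): "l",
              (-1,  1): "m", (0,  1): ",", (1,  1): "."}

def select_key(stick_x, stick_y, keys):
    """Pick the key whose grid position is closest to the stick deflection
    (stick values in [-1, 1]); pressing the stick button would commit it."""
    return min(keys.items(),
               key=lambda kv: math.dist((stick_x, stick_y), kv[0]))[1]

print(select_key(-0.9, -0.8, LEFT_KEYS))    # 'q' -- upper-left deflection, left hand
print(select_key(0.1, 0.05, RIGHT_KEYS))    # 'k' -- near-centre deflection, right hand
```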

 

Benko, Hrvoje, Wilson, Andrew D. and Baudisch, Patrick (2006): Precise selection techniques for multi-touch screens. In: Proceedings of ACM CHI 2006 Conference on Human Factors in Computing Systems 2006. pp. 1263-1272.

The size of human fingers and the lack of sensing precision can make precise touch screen interactions difficult. We present a set of five techniques, called Dual Finger Selections, which leverage the recent development of multi-touch sensitive displays to help users select very small targets. These techniques facilitate pixel-accurate targeting by adjusting the control-display ratio with a secondary finger while the primary finger controls the movement of the cursor. We also contribute a "clicking" technique, called SimPress, which reduces motion errors during clicking and allows us to simulate a hover state on devices unable to sense proximity. We implemented our techniques on a multi-touch tabletop prototype that offers computer vision-based tracking. In our formal user study, we tested the performance of our three most promising techniques (Stretch, X-Menu, and Slider) against our baseline (Offset), on four target sizes and three input noise levels. All three chosen techniques outperformed the control technique in terms of error rate reduction and were preferred by our participants, with Stretch being the overall performance and preference winner.

© All rights reserved Benko et al. and/or ACM Press
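
The control-display ratio adjustment at the heart of these techniques can be reduced to one rule: while a secondary finger is down, the cursor follows the primary finger at a lower gain. The sketch below shows that rule in isolation; it is a simplification, not the Stretch, X-Menu or Slider designs themselves.

```python
def cursor_update(cursor_xy, primary_delta, secondary_down, slow_gain=0.25):
    """Move the cursor by the primary finger's motion, scaled down while a
    secondary finger is on the surface (a lower control-display gain gives
    pixel-accurate targeting)."""
    gain = slow_gain if secondary_down else 1.0
    dx, dy = primary_delta
    return (cursor_xy[0] + gain * dx, cursor_xy[1] + gain * dy)

pos = (100.0, 100.0)
pos = cursor_update(pos, (40, 0), secondary_down=False)   # coarse: (140.0, 100.0)
pos = cursor_update(pos, (40, 0), secondary_down=True)    # precise: (150.0, 100.0)
print(pos)
```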

 

Wilson, Andrew D. (2006): Robust computer vision-based detection of pinching for one and two-handed gesture input. In: Proceedings of the ACM Symposium on User Interface Software and Technology 2006. pp. 255-258.

We present a computer vision technique to detect when the user brings their thumb and forefinger together (a pinch gesture) for close-range and relatively controlled viewing circumstances. The technique avoids complex and fragile hand tracking algorithms by detecting the hole formed when the thumb and forefinger are touching; this hole is found by simple analysis of the connected components of the background segmented against the hand. Our Thumb and Fore-Finger Interface (TAFFI) demonstrates the technique for cursor control as well as map navigation using one and two-handed interactions.

© All rights reserved Wilson and/or ACM Press
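
The hole-detection idea is easy to prototype with OpenCV's contour hierarchy: once the hand has been segmented against the background, a pinch appears as an interior contour (a hole) inside the hand component. The snippet below assumes OpenCV 4.x and a pre-computed binary hand mask; it illustrates the idea rather than reproducing the paper's implementation.

```python
import cv2
import numpy as np

def detect_pinch(hand_mask, min_hole_area=80):
    """Return True if the binary hand mask contains an interior hole, i.e. the
    thumb and forefinger are touching and enclose some background pixels.

    hand_mask : uint8 image, 255 where the hand was segmented, 0 elsewhere
                (background subtraction is assumed to have happened upstream).
    """
    contours, hierarchy = cv2.findContours(hand_mask, cv2.RETR_CCOMP,
                                           cv2.CHAIN_APPROX_SIMPLE)
    if hierarchy is None:
        return False
    for contour, (_nxt, _prev, _child, parent) in zip(contours, hierarchy[0]):
        # With RETR_CCOMP, a contour that has a parent is a hole in a component.
        if parent != -1 and cv2.contourArea(contour) >= min_hole_area:
            return True
    return False

# Synthetic example: a filled blob with a hole punched in it reads as a pinch.
mask = np.zeros((200, 200), np.uint8)
cv2.circle(mask, (100, 100), 70, 255, -1)   # "hand"
cv2.circle(mask, (100, 100), 20, 0, -1)     # enclosed background = pinch hole
print(detect_pinch(mask))                    # True
```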

2005
 

Wilson, Andrew D. (2005): PlayAnywhere: a compact interactive tabletop projection-vision system. In: Proceedings of the 2005 ACM Symposium on User Interface Software and Technology 2005. pp. 83-92.

We introduce PlayAnywhere, a front-projected computer vision-based interactive table system which uses a new commercially available projection technology to obtain a compact, self-contained form factor. PlayAnywhere's configuration addresses installation, calibration, and portability issues that are typical of most vision-based table systems, and thereby is particularly motivated in consumer applications. PlayAnywhere also makes a number of contributions related to image processing techniques for front-projected vision-based table systems, including a shadow-based touch detection algorithm, a fast, simple visual bar code scheme tailored to projection-vision table systems, the ability to continuously track sheets of paper, and an optical flow-based algorithm for the manipulation of onscreen objects that does not rely on fragile tracking algorithms.

© All rights reserved Wilson and/or ACM Press
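
One of the components listed above, optical flow-based manipulation of on-screen objects, can be sketched as follows: compute dense flow between successive frames and move the object by the flow averaged over its footprint, sidestepping per-feature tracking. The Farneback parameters below are generic defaults, not PlayAnywhere's, and the whole snippet is an illustration of the idea rather than the paper's algorithm.

```python
import cv2
import numpy as np

def object_displacement(prev_gray, curr_gray, obj_mask):
    """Estimate how far an on-screen object should move this frame: compute
    dense optical flow between successive camera frames and average it over
    the object's footprint, instead of tracking individual features."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    ys, xs = np.nonzero(obj_mask)
    if len(xs) == 0:
        return 0.0, 0.0
    dx = float(flow[ys, xs, 0].mean())
    dy = float(flow[ys, xs, 1].mean())
    return dx, dy

# Synthetic check: a bright square shifted 5 px to the right between frames.
prev = np.zeros((120, 160), np.uint8); prev[40:80, 40:80] = 255
curr = np.zeros((120, 160), np.uint8); curr[40:80, 45:85] = 255
mask = prev > 0
print(object_displacement(prev, curr, mask))   # dx clearly positive, dy near 0
```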

2004
 

Wilson, Andrew D. (2004): TouchLight: an imaging touch screen and display for gesture-based interaction. In: Sharma, Rajeev, Darrell, Trevor, Harper, Mary P., Lazzari, Gianni and Turk, Matthew (eds.) Proceedings of the 6th International Conference on Multimodal Interfaces - ICMI 2004 October 13-15, 2004, State College, PA, USA. pp. 69-76.

 

Wilson, Andrew D. (2004): TouchLight: an imaging touch screen and display for gesture-based interaction. In: Proceedings of the 2004 International Conference on Multimodal Interfaces 2004. pp. 69-76.

A novel touch screen technology is presented. TouchLight uses simple image processing techniques to combine the output of two video cameras placed behind a semi-transparent plane in front of the user. The resulting image shows objects that are on the plane. This technique is well suited for application with a commercially available projection screen material (DNP HoloScreen) which permits projection onto a transparent sheet of acrylic plastic in normal indoor lighting conditions. The resulting touch screen display system transforms an otherwise normal sheet of acrylic plastic into a high bandwidth input/output surface suitable for gesture-based interaction. Image processing techniques are detailed, and several novel capabilities of the system are outlined.

© All rights reserved Wilson and/or his/her publisher
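
A straightforward way to realize the "only objects on the plane remain" behaviour the abstract describes is to rectify both camera views onto the display plane with homographies and keep the pixel-wise minimum: content lying on the plane aligns in both views and survives, while off-plane content disagrees and is suppressed. The identity homographies below stand in for a real calibration, and the snippet is an illustration, not TouchLight's processing chain.

```python
import cv2
import numpy as np

def fuse_on_plane(img_a, img_b, H_a, H_b, out_size):
    """Warp both camera images onto the display plane and keep the pixel-wise
    minimum; image content located on the plane aligns in both views and is
    preserved, while off-plane content fails to align and is attenuated."""
    rect_a = cv2.warpPerspective(img_a, H_a, out_size)
    rect_b = cv2.warpPerspective(img_b, H_b, out_size)
    return cv2.min(rect_a, rect_b)

# Placeholder homographies (identity) standing in for a real plane calibration.
H_a = np.eye(3, dtype=np.float64)
H_b = np.eye(3, dtype=np.float64)
a = np.random.randint(0, 255, (480, 640), np.uint8)
b = np.random.randint(0, 255, (480, 640), np.uint8)
print(fuse_on_plane(a, b, H_a, H_b, (640, 480)).shape)   # (480, 640)
```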

2000
 

Bobick, Aaron F., Intille, Stephen S., Davis, James W., Baird, Freedom, Pinhanez, Claudio S., Campbell, Lee W., Ivanov, Yuri A., Schütte, Arjan and Wilson, Andrew D. (2000): The KidsRoom. In Communications of the ACM, 43 (3) pp. 60-61.

1999
 

Bobick, Aaron F., Intille, Stephen S., Davis, James W., Baird, Freedom, Pinhanez, Claudio S., Campbell, Lee W., Ivanov, Yuri A., Schütte, Arjan and Wilson, Andrew D. (1999): The KidsRoom: A Perceptually-Based Interactive and Immersive Story Environment. In Presence: Teleoperators and Virtual Environments, 8 (4) pp. 369-393.

 


Page Information

Page maintainer: The Editorial Team
URL: http://www.interaction-design.org/references/authors/andrew_d__wilson.html
