Publication statistics

Publication period: 1999-2010
Publication count: 7
Number of co-authors: 20



Co-authors

Number of publications with Andy Wilson's 3 favourite co-authors:

Hrvoje Benko: 4
Koji Yatani: 2
Ken Hinckley: 2

 

 

Productive colleagues

Andy Wilson's 3 most productive colleagues, by number of publications:

Bill Buxton: 78
Ming C. Lin: 58
Dinesh Manocha: 57
 
 
 




Andy Wilson

 

Publications by Andy Wilson (bibliography)

2010
 

Hinckley, Ken, Yatani, Koji, Pahud, Michel, Coddington, Nicole, Rodenhouse, Jenny, Wilson, Andy, Benko, Hrvoje and Buxton, Bill (2010): Manual deskterity: an exploration of simultaneous pen + touch direct input. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 2793-2802. Available online

Manual Deskterity is a prototype digital drafting table that supports both pen and touch input. We explore a division of labor between pen and touch that flows from natural human skill and differentiation of roles of the hands. We also explore the simultaneous use of pen and touch to support novel compound gestures.

© All rights reserved Hinckley et al. and/or their publisher

 

Hinckley, Ken, Yatani, Koji, Pahud, Michel, Coddington, Nicole, Rodenhouse, Jenny, Wilson, Andy, Benko, Hrvoje and Buxton, Bill (2010): Pen + touch = new tools. In: Proceedings of the 2010 ACM Symposium on User Interface Software and Technology 2010. pp. 27-36. Available online

We describe techniques for direct pen+touch input. We observe people's manual behaviors with physical paper and notebooks. These serve as the foundation for a prototype Microsoft Surface application, centered on note-taking and scrapbooking of materials. Based on our explorations we advocate a division of labor between pen and touch: the pen writes, touch manipulates, and the combination of pen + touch yields new tools. This articulates how our system interprets unimodal pen, unimodal touch, and multimodal pen+touch inputs, respectively. For example, the user can hold a photo and drag off with the pen to create and place a copy; hold a photo and cross it in a freeform path with the pen to slice it in two; or hold selected photos and tap one with the pen to staple them all together. Touch thus unifies object selection with mode switching of the pen, while the muscular tension of holding touch serves as the "glue" that phrases together all the inputs into a unitary multimodal gesture. This helps the UI designer to avoid encumbrances such as physical buttons, persistent modes, or widgets that detract from the user's focus on the workspace.

© All rights reserved Hinckley et al. and/or their publisher
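
Read purely as an illustration (not the authors' implementation), the interpretation scheme above amounts to a small dispatch rule: unimodal pen writes, unimodal touch manipulates, and pen input phrased by a touch-held object becomes a new tool. The event model, gesture names, and object names in the following Python sketch are assumptions made for this example.

from dataclasses import dataclass
from typing import Optional

# Illustrative event model; names and gestures are assumptions for this
# sketch, not the authors' Microsoft Surface API.
@dataclass
class InputEvent:
    modality: str   # "pen" or "touch"
    gesture: str    # e.g. "stroke", "drag_off", "cross", "tap"

def interpret(event: InputEvent, touch_held_object: Optional[str] = None) -> str:
    """Map unimodal pen, unimodal touch, and combined pen + touch to actions."""
    if touch_held_object is None:
        # Unimodal input: the pen writes, touch manipulates.
        return "ink" if event.modality == "pen" else "manipulate"
    # Multimodal: pen input phrased by a touch-held object becomes a new tool.
    if event.gesture == "drag_off":
        return f"copy {touch_held_object}"        # hold + drag off with pen
    if event.gesture == "cross":
        return f"slice {touch_held_object}"       # hold + cross with pen
    if event.gesture == "tap":
        return f"staple to {touch_held_object}"   # hold + tap with pen
    return "ignore"

# Holding a photo with touch while tapping it with the pen staples the selection:
print(interpret(InputEvent("pen", "tap"), touch_held_object="photo1"))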

2009
 

Seow, Steven C., Wixon, Dennis, MacKenzie, Scott, Jacucci, Giulio, Morrison, Ann and Wilson, Andy (2009): Multitouch and surface computing. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 4767-4770. Available online

Natural user interfaces (NUI) such as multitouch and surface computing are positioned as the next major evolution in computing and user interfaces. Just as graphical user interfaces (GUIs) brought unprecedented interaction capabilities over their command-line predecessors, we believe multitouch and surface computing will spawn novel ways to interact with media and improve social usage patterns. Since experimentation and deployment are currently limited, the exploration of applications and interfaces in this area is still at an early stage.

© All rights reserved Seow et al. and/or ACM Press

 
 

Nacenta, Miguel A., Baudisch, Patrick, Benko, Hrvoje and Wilson, Andy (2009): Separability of spatial manipulations in multi-touch interfaces. In: Proceedings of the 2009 Conference on Graphics Interface 2009. pp. 175-182. Available online

Multi-touch interfaces allow users to translate, rotate, and scale digital objects in a single interaction. However, this freedom represents a problem when users intend to perform only a subset of manipulations. A user trying to scale an object in a print layout program, for example, might find that the object was also slightly translated and rotated, interfering with what was already carefully laid out earlier. We implemented and tested interaction techniques that allow users to select a subset of manipulations. Magnitude Filtering eliminates transformations (e.g., rotation) that are small in magnitude. Gesture Matching attempts to classify the user's input into a subset of manipulation gestures. Handles adopts a conventional single-touch handles approach for touch input. Our empirical study showed that these techniques significantly reduce errors in layout, while the Handles technique was slowest. A variation of the Gesture Matching technique presented the best combination of speed and control, and was favored by participants.

© All rights reserved Nacenta et al. and/or their publisher
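
As a rough sketch of the Magnitude Filtering idea only (thresholds, parameter names, and units are assumptions for illustration, not values from the paper), transformation components whose magnitude stays below a threshold can simply be zeroed out, so a user who intends only to scale does not accidentally translate or rotate:

import math

def magnitude_filter(dx, dy, dtheta, dscale,
                     t_translate=10.0,               # pixels (assumed threshold)
                     t_rotate=math.radians(5.0),     # radians (assumed threshold)
                     t_scale=0.05):                  # fractional change (assumed)
    """Zero out manipulation components that are small in magnitude."""
    if math.hypot(dx, dy) < t_translate:
        dx, dy = 0.0, 0.0
    if abs(dtheta) < t_rotate:
        dtheta = 0.0
    if abs(dscale - 1.0) < t_scale:
        dscale = 1.0
    return dx, dy, dtheta, dscale

# A gesture that mainly scales, with small incidental drift and rotation:
print(magnitude_filter(dx=3.0, dy=-2.0, dtheta=0.02, dscale=1.4))
# -> (0.0, 0.0, 0.0, 1.4): only the intended scaling survives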

 

Ramos, Gonzalo, Hinckley, Kenneth, Wilson, Andy and Sarin, Raman (2009): Synchronous Gestures in Multi-Display Environments. In Human-Computer Interaction, 24 (1) pp. 117-169. Available online

Synchronous gestures are patterns of sensed user or users' activity, spanning a distributed system that take on a new meaning when they occur together in time. Synchronous gestures draw inspiration from real-world social rituals such as toasting by tapping two drinking glasses together. In this article, we explore several interactions based on synchronous gestures, including bumping devices together, drawing corresponding pen gestures on touch-sensitive displays, simultaneously pressing a button on multiple smart-phones, or placing one or more devices on the sensing surface of a tabletop computer. These interactions focus on wireless composition of physically colocated devices, where users perceive one another and coordinate their actions through social protocol. We demonstrate how synchronous gestures may be phrased together with surrounding interactions. Such connection-action phrases afford a rich syntax of cross-device commands, operands, and one-to-one or one-to-many associations with a flexible physical arrangement of devices. Synchronous gestures enable colocated users to combine multiple devices into a heterogeneous display environment, where the users may establish a transient network connection with other select colocated users to facilitate the pooling of input capabilities, display resources, and the digital contents of each device. For example, participants at a meeting may bring mobile devices including tablet computers, PDAs, and smart-phones, and the meeting room infrastructure may include fixed interactive displays, such as a tabletop computer. Our techniques facilitate creation of an ad hoc display environment for tasks such as viewing a large document across multiple devices, presenting information to another user, or offering files to others. The interactions necessary to establish such ad hoc display environments must be rapid and minimally demanding of attention: during face-to-face communication, a pause of even 5 sec is socially awkward and disrupts collaboration. Current devices may associate using a direct transport such as Infrared Data Association ports, or the emerging Near Field Communication standard. However, such transports can only support one-to-one associations between devices and require close physical proximity as well as a specific relative orientation to connect the devices (e.g., the devices may be linked when touching head-to-head but not side-to-side). By contrast, sociology research in proxemics (the study of how people use the "personal space" surrounding their bodies) demonstrates that people carefully select physical distance as well as relative body orientation to suit the task, mood, and social relationship with other persons. Wireless networking can free device-to-device connections from the limitations of direct transports but results in a potentially large number of candidate devices. Synchronous gestures address these problems by allowing users to express naturally a spontaneous wireless connection between specific proximal (collocated) interactive displays.

© All rights reserved Ramos et al. and/or Taylor and Francis
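
A minimal sketch of the underlying pattern, assuming a simple event list and a 100 ms pairing window (both illustrative choices, not values from the article): two devices become candidates for a spontaneous connection only when their sensed events occur together in time.

def match_synchronous(events, window_ms=100):
    """Pair events from different devices that occur close together in time."""
    events = sorted(events, key=lambda e: e[1])       # (device_id, timestamp_ms)
    pairs = []
    for (dev_a, t_a), (dev_b, t_b) in zip(events, events[1:]):
        if dev_a != dev_b and (t_b - t_a) <= window_ms:
            pairs.append((dev_a, dev_b))              # candidate connection
    return pairs

# Two tablets bumped together report sensor spikes 40 ms apart; a third device is unrelated.
print(match_synchronous([("tabletA", 1000), ("tabletB", 1040), ("phoneC", 5000)]))
# -> [('tabletA', 'tabletB')]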

1999
 

Wilson, Andy, Larsen, Eric, Manocha, Dinesh and Lin, Ming C. (1999): Partitioning and Handling Massive Models for Interactive Collision Detection. In Comput. Graph. Forum, 18 (3) pp. 319-330.

 
 
 


Page Information

Page maintainer: The Editorial Team
URL: http://www.interaction-design.org/references/authors/andy_wilson.html