Proceedings of the 16th annual ACM Symposium on User Interface Software and Technology


 
Time and place:
Vancouver, Canada
November 2-5, 2003
Conf. description:
UIST is the premier forum for innovations in developing human-computer interfaces. The symposium brings together user-interface researchers and practitioners with an interest in techniques, tools, and technology for constructing high-quality, innovative user interfaces.
Series:
This is a preferred venue for people like Scott E. Hudson, Ravin Balakrishnan, Brad A. Myers, Steven K. Feiner, and Takeo Igarashi. Part of the UIST - Symposium on User Interface Software and Technology conference series.
ISBN:
1-58113-636-6
Publisher:
ACM Press

References from this conference (2003)

The following articles are from "Proceedings of the 16th annual ACM Symposium on User Interface Software and Technology":


Articles

p. 10

Brown, Leonard D., Hua, Hong and Gao, Chunyu (2003): A widget framework for augmented interaction in SCAPE. In: Proceedings of the 16th annual ACM Symposium on User Interface Software and Technology, November 2-5, 2003, Vancouver, Canada. p. 10. Available online

We have previously developed a collaborative infrastructure called SCAPE -- an acronym for Stereoscopic Collaboration in Augmented and Projective Environments -- that integrates the traditionally separate paradigms of virtual and augmented reality. In this paper, we extend SCAPE by formalizing its underlying mathematical framework and detailing three augmented Widgets constructed via this framework: CoCylinder, Magnifier, and CoCube. These devices promote intuitive ways of selecting, examining, and sharing synthetic objects, and retrieving associated documentary text. Finally we present a testbed application to showcase SCAPE's capabilities for interaction in large, augmented virtual environments.

© All rights reserved Brown et al. and/or ACM Press

p. 105-114

Ramos, Gonzalo and Balakrishnan, Ravin (2003): Fluid interaction techniques for the control and annotation of digital video. In: Proceedings of the 16th annual ACM Symposium on User Interface Software and Technology, November 2-5, 2003, Vancouver, Canada. pp. 105-114. Available online

We explore a variety of interaction and visualization techniques for fluid navigation, segmentation, linking, and annotation of digital videos. These techniques are developed within a concept prototype called LEAN that is designed for use with pressure-sensitive digitizer tablets. These techniques include a transient position+velocity widget that allows users not only to move around a point of interest on a video, but also to rewind or fast forward at a controlled variable speed. We also present a new variation of fish-eye views called twist-lens, and incorporate this into a position control slider designed for the effective navigation and viewing of large sequences of video frames. We also explore a new style of widgets that exploit the use of the pen's pressure-sensing capability, increasing the input vocabulary available to the user. Finally, we elaborate on how annotations referring to objects that are temporal in nature, such as video, may be thought of as links, and fluidly constructed, visualized and navigated.

© All rights reserved Ramos and Balakrishnan and/or ACM Press

p. 11-20

Begole, James, Tang, John C. and Hill, Rosco (2003): Rhythm modeling, visualizations and applications. In: Proceedings of the 16th annual ACM Symposium on User Interface Software and Technology, November 2-5, 2003, Vancouver, Canada. pp. 11-20. Available online

People use their awareness of others' temporal patterns to plan work activities and communication. This paper presents algorithms for programmatically detecting and modeling temporal patterns from a record of online presence data. We describe analytic and end-user visualizations of rhythmic patterns and the tradeoffs between them. We conducted a design study that explored the accuracy of the derived rhythm models compared to user perceptions, user preference among the visualization alternatives, and users' privacy preferences. We also present a prototype application based on the rhythm model that detects when a person is "away" for an extended period and predicts their return. We discuss the implications of this technology for the design of computer-mediated communication.

© All rights reserved Begole et al. and/or ACM Press

p. 115-124

Wittenburg, Kent, Forlines, Clifton, Lanning, Tom, Esenther, Alan, Harada, Shigeo and Miyachi, Taizo (2003): Rapid serial visual presentation techniques for consumer digital video devices. In: Proceedings of the 16th annual ACM Symposium on User Interface Software and Technology, November 2-5, 2003, Vancouver, Canada. pp. 115-124. Available online

In this paper we propose a new model for a class of rapid serial visual presentation (RSVP) interfaces [16] in the context of consumer video devices. The basic spatial layout "explodes" a sequence of image frames into a 3D trail in order to provide more context for a spatial/temporal presentation. As the user plays forward or back, the trail advances or recedes while the image in the foreground focus position is replaced. The design is able to incorporate a variety of methods for analyzing or highlighting images in the trail. Our hypotheses are that users can navigate more quickly and precisely to points of interest when compared to conventional consumer-based browsing, channel flipping, or fast-forwarding techniques. We report on an experiment testing our hypotheses in which we found that subjects were more accurate but not faster in browsing to a target of interest in recorded television content with a TV remote.

© All rights reserved Wittenburg et al. and/or ACM Press

p. 125-134

Fogarty, James and Hudson, Scott E. (2003): GADGET: a toolkit for optimization-based approaches to interface and display generation. In: Proceedings of the 16th annual ACM Symposium on User Interface Software and Technology, November 2-5, 2003, Vancouver, Canada. pp. 125-134. Available online

Recent work is beginning to reveal the potential of numerical optimization as an approach to generating interfaces and displays. Optimization-based approaches can often allow a mix of independent goals and constraints to be blended in ways that would be difficult to describe algorithmically. While optimization-based techniques appear to offer several potential advantages, further research in this area is hampered by the lack of appropriate tools. This paper presents GADGET, an experimental toolkit to support optimization for interface and display generation. GADGET provides convenient abstractions of many optimization concepts. GADGET also provides mechanisms to help programmers quickly create optimizations, including an efficient lazy evaluation framework, a powerful and configurable optimization structure, and a library of reusable components. Together these facilities provide an appropriate tool to enable exploration of a new class of interface and display generation techniques.

© All rights reserved Fogarty and Hudson and/or ACM Press

p. 135-144

Lecolinet, Eric (2003): A molecular architecture for creating advanced GUIs. In: Proceedings of the 16th annual ACM Symposium on User Interface Software and Technology, November 2-5, 2003, Vancouver, Canada. pp. 135-144. Available online

This paper presents a new GUI architecture for creating advanced interfaces. This model is based on a limited set of general principles that improve flexibility and provide capabilities for implementing information visualization techniques such as magic lenses, transparent tools or semantic zooming. This architecture also makes it possible to create multiple views and application-sharing systems (by sharing views on multiple computer screens) in a simple and uniform way and to handle bimanual interaction and multiple pointers. An experimental toolkit called Ubit was implemented to test the feasibility of this approach. It is based on a pseudo-declarative C++ API that tries to simplify GUI programming by providing a higher level of abstraction.

© All rights reserved Lecolinet and/or ACM Press

p. 145-148

Quan, Dennis, Huynh, David, Karger, David R. and Miller, Robert C. (2003): User interface continuations. In: Proceedings of the 16th annual ACM Symposium on User Interface Software and Technology, November 2-5, 2003, Vancouver, Canada. pp. 145-148. Available online

Dialog boxes that collect parameters for commands often create ephemeral, unnatural interruptions of a program's normal execution flow, encouraging the user to complete the dialog box as quickly as possible in order for the program to process that command. In this paper we examine the idea of turning the act of collecting parameters from a user into a first class object called a user interface continuation. Programs can create user interface continuations by specifying what information is to be collected from the user and supplying a callback (i.e., a continuation) to be notified with the collected information. A partially completed user interface continuation can be saved as a new command, much as currying and partially evaluating a function with a set of parameters produces a new function. Furthermore, user interface continuations, like other continuation-passing paradigms, can be used to allow program execution to continue uninterrupted while the user determines a command's parameters at his or her leisure.

© All rights reserved Quan et al. and/or ACM Press
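The currying analogy in the abstract above can be sketched in code: parameter collection is reified as an object holding a callback, and a partially completed continuation behaves like a partially applied function that can be saved as a new command. All names below (`UIContinuation`, `supply`, `send_mail`) are hypothetical illustrations, not the authors' actual API.

```python
# Sketch of a "user interface continuation": collecting command parameters
# becomes a first-class object; supplying only some of them yields a new,
# reusable command, much like currying a function.

class UIContinuation:
    def __init__(self, needed, callback, collected=None):
        self.needed = list(needed)            # parameter names still to collect
        self.callback = callback              # continuation to invoke when done
        self.collected = dict(collected or {})

    def supply(self, **params):
        """Supply some parameters; returns a new (possibly partial) continuation."""
        collected = {**self.collected, **params}
        needed = [n for n in self.needed if n not in collected]
        cont = UIContinuation(needed, self.callback, collected)
        if not needed:                        # all parameters gathered: resume program
            cont.result = self.callback(**collected)
        return cont

def send_mail(to, subject):
    return f"mail to {to}: {subject}"

# Program execution continues; the user fills in the "dialog" at leisure.
dialog = UIContinuation(["to", "subject"], send_mail)
partial = dialog.supply(subject="UIST 2003")      # saved as a new command
done = partial.supply(to="alice@example.org")
print(done.result)   # mail to alice@example.org: UIST 2003
```

Note how `partial` can be kept around and reused: supplying only `subject` produced a new command still waiting on `to`, mirroring the paper's point that partial completion of a dialog yields a new command.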

p. 149-158

Hinckley, Ken (2003): Synchronous gestures for multiple persons and computers. In: Proceedings of the 16th annual ACM Symposium on User Interface Software and Technology, November 2-5, 2003, Vancouver, Canada. pp. 149-158. Available online

This research explores distributed sensing techniques for mobile devices using synchronous gestures. These are patterns of activity, contributed by multiple users (or one user with multiple devices), which take on a new meaning when they occur together in time, or in a specific sequence in time. To explore this new area of inquiry, this work uses tablet computers augmented with touch sensors and two-axis linear accelerometers (tilt sensors). The devices are connected via an 802.11 wireless network and synchronize their time-stamped sensor data. This paper describes a few practical examples of interaction techniques using synchronous gestures such as dynamically tiling together displays by physically bumping them together, discusses implementation issues, and speculates on further possibilities for synchronous gestures.

© All rights reserved Hinckley and/or ACM Press

p. 159-168

Izadi, Shahram, Brignull, Harry, Rodden, Tom, Rogers, Yvonne and Underwood, Mia (2003): Dynamo: a public interactive surface supporting the cooperative sharing and exchange of media. In: Proceedings of the 16th annual ACM Symposium on User Interface Software and Technology, November 2-5, 2003, Vancouver, Canada. pp. 159-168. Available online

In this paper we propose a novel way of supporting occasional meetings that take place in unfamiliar public places, which promotes lightweight, visible and fluid collaboration. Our central idea is that the sharing and exchange of information occurs across public surfaces that users can easily access and interact with. To this end, we designed and implemented Dynamo, a communal multi-user interactive surface. The surface supports the cooperative sharing and exchange of a wide range of media that can be brought to the surface by users that are remote from their familiar organizational settings.

© All rights reserved Izadi et al. and/or ACM Press

p. 169-172

Denoue, Laurent, Nelson, Les and Churchill, Elizabeth F. (2003): A fast, interactive 3D paper-flier metaphor for digital bulletin boards. In: Proceedings of the 16th annual ACM Symposium on User Interface Software and Technology, November 2-5, 2003, Vancouver, Canada. pp. 169-172. Available online

p. 173-182

Cao, Xiang and Balakrishnan, Ravin (2003): VisionWand: interaction techniques for large displays using a passive wand tracked in 3D. In: Proceedings of the 16th annual ACM Symposium on User Interface Software and Technology, November 2-5, 2003, Vancouver, Canada. pp. 173-182. Available online

A passive wand tracked in 3D using computer vision techniques is explored as a new input mechanism for interacting with large displays. We demonstrate a variety of interaction techniques that exploit the affordances of the wand, resulting in an effective interface for large scale interaction. The lack of any buttons or other electronics on the wand presents a challenge that we address by developing a set of postures and gestures to track state and enable command input. We also describe the use of multiple wands, and posit designs for more complex wands in the future.

© All rights reserved Cao and Balakrishnan and/or ACM Press

p. 183-192

Saund, Eric, Fleet, David, Larner, Daniel and Mahoney, James (2003): Perceptually-supported image editing of text and graphics. In: Proceedings of the 16th annual ACM Symposium on User Interface Software and Technology, November 2-5, 2003, Vancouver, Canada. pp. 183-192. Available online

This paper presents a novel image editing program emphasizing easy selection and manipulation of material found in informal, casual documents such as sketches, handwritten notes, whiteboard images, screen snapshots, and scanned documents. The program, called ScanScribe, offers four significant advances. First, it presents a new, intuitive model for maintaining image objects and groups, along with underlying logic for updating these in the course of an editing session. Second, ScanScribe takes advantage of newly developed image processing algorithms to separate foreground markings from a white or light background, and thus can automatically render the background transparent so that image material can be rearranged without occlusion by background pixels. Third, ScanScribe introduces new interface techniques for selecting image objects with a pointing device without resorting to a palette of tool modes. Fourth, ScanScribe presents a platform for exploiting image analysis and recognition methods to make perceptually significant structure readily available to the user. As a research prototype, ScanScribe has proven useful in the work of members of our laboratory, and has been released on a limited basis for user testing and evaluation.

© All rights reserved Saund et al. and/or ACM Press

p. 193-202

Wu, Mike and Balakrishnan, Ravin (2003): Multi-finger and whole hand gestural interaction techniques for multi-user tabletop displays. In: Proceedings of the 16th annual ACM Symposium on User Interface Software and Technology, November 2-5, 2003, Vancouver, Canada. pp. 193-202. Available online

Recent advances in sensing technology have enabled a new generation of tabletop displays that can sense multiple points of input from several users simultaneously. However, apart from a few demonstration techniques [17], current user interfaces do not take advantage of this increased input bandwidth. We present a variety of multifinger and whole hand gestural interaction techniques for these displays that leverage and extend the types of actions that people perform when interacting on real physical tabletops. Apart from gestural input techniques, we also explore interaction and visualization techniques for supporting shared spaces, awareness, and privacy. These techniques are demonstrated within a prototype room furniture layout application, called RoomPlanner.

© All rights reserved Wu and Balakrishnan and/or ACM Press

p. 203-212

Rekimoto, Jun, Ishizawa, Takaaki, Schwesig, Carsten and Oba, Haruo (2003): PreSense: interaction techniques for finger sensing input devices. In: Proceedings of the 16th annual ACM Symposium on User Interface Software and Technology, November 2-5, 2003, Vancouver, Canada. pp. 203-212. Available online

Although graphical user interfaces started as imitations of the physical world, many interaction techniques have since been invented that are not available in the real world. This paper focuses on one of these, "previewing", and how a sensory-enhanced input device called the "PreSense Keypad" can provide a preview for users before they actually execute the commands. Preview is important in the real world because it is often not possible to undo an action. This previewable feature helps users to see what will occur next. It is also helpful when the command assignment of the keypad dynamically changes, such as for universal commanders. We present several interaction techniques based on this input device, including menu and map browsing systems and a text input system. We also discuss finger gesture recognition for the PreSense Keypad.

© All rights reserved Rekimoto et al. and/or ACM Press

p. 21-30

Ganoe, Craig, Somervell, Jacob P., Neale, Dennis C., Isenhour, Philip, Carroll, John M., Rosson, Mary Beth and McCrickard, D. Scott (2003): Classroom BRIDGE: using collaborative public and desktop timelines to support activity awareness. In: Proceedings of the 16th annual ACM Symposium on User Interface Software and Technology, November 2-5, 2003, Vancouver, Canada. pp. 21-30. Available online

Classroom BRIDGE supports activity awareness by facilitating planning and goal revision in collaborative, project-based middle school science. It integrates large-screen and desktop views of project timelines to support incidental creation of awareness information through routine document transactions, integrated presentation of awareness information as part of workspace views, and public access to subgroup activity. It demonstrates and develops an object replication approach to integrating synchronous and asynchronous distributed work for a platform incorporating both desktop and large-screen devices. This paper describes an implementation of these concepts with preliminary evaluation data, using timeline-based user interfaces.

© All rights reserved Ganoe et al. and/or ACM Press

p. 213-216

Saund, Eric and Lank, Edward (2003): Stylus input and editing without prior selection of mode. In: Proceedings of the 16th annual ACM Symposium on User Interface Software and Technology, November 2-5, 2003, Vancouver, Canada. pp. 213-216. Available online

This paper offers a solution to the mode problem in computer sketch/notetaking programs. Conventionally, the user must specify the intended "draw" or "command" mode prior to performing a stroke. This necessity has proven to be a barrier to the usability of pen/stylus systems. We offer a novel Inferred-Mode interaction protocol that avoids the mode hassles of conventional sketch systems. The system infers the user's intent, if possible, from the properties of the pen trajectory and the context of the trajectory. If the intent is ambiguous, the user is offered a choice mediator in the form of a pop-up button. To maximize the fluidity of drawing, the user is entitled to ignore the mediator and continue drawing. We present decision logic for the inferred mode protocol, and discuss subtleties learned in the course of its development. We also present results of initial user trials validating the usability of this interaction design.

© All rights reserved Saund and Lank and/or ACM Press

p. 217-220

Poupyrev, Ivan and Maruyama, Shigeaki (2003): Tactile interfaces for small touch screens. In: Proceedings of the 16th annual ACM Symposium on User Interface Software and Technology, November 2-5, 2003, Vancouver, Canada. pp. 217-220. Available online

We present the design, implementation, and informal evaluation of tactile interfaces for small touch screens used in mobile devices. We embedded a tactile apparatus in a Sony PDA touch screen and enhanced its basic GUI elements with tactile feedback. Instead of observing the response of interface controls, users can feel it with their fingers as they press the screen. In informal evaluations, tactile feedback was greeted with enthusiasm. We believe that tactile feedback will become the next step in touch screen interface design and a standard feature of future mobile devices.

© All rights reserved Poupyrev and Maruyama and/or ACM Press

p. 31-40

Goto, Masataka (2003): SmartMusicKIOSK: music listening station with chorus-search function. In: Proceedings of the 16th annual ACM Symposium on User Interface Software and Technology, November 2-5, 2003, Vancouver, Canada. pp. 31-40. Available online

This paper describes a new music-playback interface for trial listening, SmartMusicKIOSK. In music stores, short trial listening of CD music is not usually a passive experience -- customers often search out the chorus or "hook" of a song using the fast-forward button. Listening of this type, however, has not been traditionally supported. This research achieves a function for jumping to the chorus section and other key parts of a song plus a function for visualizing song structure. These functions make it easier for a listener to find desired parts of a song and thereby facilitate an active listening experience. The proposed functions are achieved by an automatic chorus-section detecting method, and the results of implementing them as a listening station have demonstrated their usefulness.

© All rights reserved Goto and/or ACM Press

p. 41-50

Lakshmipathy, Vidya, Schmandt, Chris and Marmasse, Natalia (2003): TalkBack: a conversational answering machine. In: Proceedings of the 16th annual ACM Symposium on User Interface Software and Technology, November 2-5, 2003, Vancouver, Canada. pp. 41-50. Available online

Current asynchronous voice messaging interfaces, like voicemail, fail to take advantage of our conversational skills. TalkBack restores conversational turn-taking to voicemail retrieval by dividing voice messages into smaller sections based on the most significant silent and filled pauses and pausing after each to record a response. The responses are composed into a reply, alternating with snippets of the original message for context. TalkBack is built into a digital picture frame; the recipient touches a picture of the caller to hear each segment of the message in turn. The minimal interface models synchronous interaction and facilitates asynchronous voice messaging. TalkBack can also present a voice-annotated slide show which it receives over the Internet.

© All rights reserved Lakshmipathy et al. and/or ACM Press

p. 51-60

Guimbretiere, Francois (2003): Paper augmented digital documents. In: Proceedings of the 16th annual ACM Symposium on User Interface Software and Technology, November 2-5, 2003, Vancouver, Canada. pp. 51-60. Available online

Paper Augmented Digital Documents (PADDs) are digital documents that can be manipulated either on a computer screen or on paper. PADDs, and the infrastructure supporting them, can be seen as a bridge between the digital and the paper worlds. As digital documents, PADDs are easy to edit, distribute, and archive; as paper documents, PADDs are easy to navigate and annotate, and well accepted in social settings. The chimeric nature of PADDs makes them well suited for many tasks such as proofreading, editing, and annotation of large-format documents like blueprints. We present an architecture that supports the seamless manipulation of PADDs using today's technologies and report on the lessons we learned while implementing the first PADD system.

© All rights reserved Guimbretiere and/or ACM Press

p. 61-70

Wobbrock, Jacob O., Myers, Brad A. and Kembel, John A. (2003): EdgeWrite: a stylus-based text entry method designed for high accuracy and stability of motion. In: Proceedings of the 16th annual ACM Symposium on User Interface Software and Technology, November 2-5, 2003, Vancouver, Canada. pp. 61-70. Available online

EdgeWrite is a new unistroke text entry method for handheld devices designed to provide high accuracy and stability of motion for people with motor impairments. It is also effective for able-bodied people. An EdgeWrite user enters text by traversing the edges and diagonals of a square hole imposed over the usual text input area. Gesture recognition is accomplished not through pattern recognition but through the sequence of corners that are hit. This means that the full stroke path is unimportant and recognition is highly deterministic, enabling better accuracy than other gestural alphabets such as Graffiti. A study of able-bodied users showed subjects with no prior experience were 18% more accurate during text entry with EdgeWrite than with Graffiti (p<.05), with no significant difference in speed. A study of 4 subjects with motor impairments revealed that some of them were unable to do Graffiti, but all of them could do EdgeWrite. Those who could do both methods had dramatically better accuracy with EdgeWrite.

© All rights reserved Wobbrock et al. and/or ACM Press
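The corner-sequence idea in the abstract above is simple enough to sketch: reduce a raw stroke to the ordered list of corners it enters, then look that sequence up in a table. The corner-region geometry and the three-letter sample alphabet below are made-up illustrations, not the published EdgeWrite chart.

```python
# EdgeWrite-style recognition: a character is identified by the SEQUENCE OF
# CORNERS the stylus hits in a square hole, not by the stroke's shape.
# Corners: 0=top-left, 1=top-right, 2=bottom-left, 3=bottom-right.

ALPHABET = {                 # hypothetical sample mappings for illustration
    (2, 0, 1, 3): "a",
    (0, 2, 3, 1): "b",
    (1, 0, 2, 3): "c",
}

def corners_hit(points, size=100, margin=25):
    """Reduce a raw stroke to the ordered tuple of distinct corners it enters."""
    seq = []
    for x, y in points:
        col = 0 if x < margin else (1 if x > size - margin else None)
        row = 0 if y < margin else (1 if y > size - margin else None)
        if col is None or row is None:
            continue                       # point is not inside any corner region
        corner = row * 2 + col
        if not seq or seq[-1] != corner:   # record only corner transitions
            seq.append(corner)
    return tuple(seq)

def recognize(points):
    return ALPHABET.get(corners_hit(points), "?")

# A stroke visiting bottom-left, top-left, top-right, bottom-right:
stroke = [(10, 90), (10, 50), (10, 10), (50, 10), (90, 10), (90, 90)]
print(recognize(stroke))  # a
```

Because only the corner sequence matters, two strokes with very different shapes recognize identically, which is exactly the determinism the abstract credits for the accuracy gain over shape-matching alphabets.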

p. 71-79

Fitzmaurice, George W., Khan, Azam, Pieke, Robert, Buxton, Bill and Kurtenbach, Gordon (2003): Tracking menus. In: Proceedings of the 16th annual ACM Symposium on User Interface Software and Technology, November 2-5, 2003, Vancouver, Canada. pp. 71-79. Available online

We describe a new type of graphical user interface widget, known as a "tracking menu." A tracking menu consists of a cluster of graphical buttons, and as with traditional menus, the cursor can be moved within the menu to select and interact with items. However, unlike traditional menus, when the cursor hits the edge of the menu, the menu moves to continue tracking the cursor. Thus, the menu always stays under the cursor and close at hand. In this paper we define the behavior of tracking menus, show unique affordances of the widget, present a variety of examples, and discuss design characteristics. We examine one tracking menu design in detail, reporting on usability studies and our experience integrating the technique into a commercial application for the Tablet PC. While user interface issues on the Tablet PC, such as preventing round trips to tool palettes with the pen, inspired tracking menus, the design also works well with a standard mouse and keyboard configuration.

© All rights reserved Fitzmaurice et al. and/or ACM Press

p. 81-90

Wigdor, Daniel and Balakrishnan, Ravin (2003): TiltText: using tilt for text input to mobile phones. In: Proceedings of the 16th annual ACM Symposium on User Interface Software and Technology, November 2-5, 2003, Vancouver, Canada. pp. 81-90. Available online

TiltText, a new technique for entering text into a mobile phone, is described. The standard 12-button text entry keypad of a mobile phone forces ambiguity when the 26-letter Roman alphabet is mapped in the traditional manner onto keys 2-9. The TiltText technique uses the orientation of the phone to resolve this ambiguity, by tilting the phone in one of four directions to choose which character on a particular key to enter. We first discuss implementation strategies, and then present the results of a controlled experiment comparing TiltText to MultiTap, the most common text entry technique. The experiment included 10 participants who each entered a total of 640 phrases of text chosen from a standard corpus, over a period of about five hours. The results show that text entry speed including correction for errors using TiltText was 23% faster than MultiTap by the end of the experiment, despite a higher error rate for TiltText. TiltText is thus amongst the fastest known language-independent techniques for entering text into mobile phones.

© All rights reserved Wigdor and Balakrishnan and/or ACM Press
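The disambiguation rule described above can be sketched as a lookup: the key chooses a letter group, and the tilt direction chooses the position within that group. The particular direction-to-position order used below (left = 1st, forward = 2nd, right = 3rd, back = 4th) follows the paper's four-direction idea but is an assumption here, not the published mapping.

```python
# TiltText-style disambiguation: a keypress picks a letter group, and the
# phone's tilt direction at press time picks which letter in the group.

KEYPAD = {
    "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
    "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz",
}
TILT_ORDER = ["left", "forward", "right", "back"]  # assumed position order

def tilttext(key, tilt):
    """Resolve a (key, tilt) pair to a single character."""
    letters = KEYPAD[key]
    idx = TILT_ORDER.index(tilt)
    if idx >= len(letters):
        raise ValueError(f"key {key} has no letter at tilt position {tilt}")
    return letters[idx]

# One tilt-annotated keypress per character, versus up to four taps in MultiTap:
word = "".join(tilttext(k, t) for k, t in
               [("8", "left"), ("4", "right"), ("5", "right"), ("8", "left")])
print(word)  # tilt
```

The contrast with MultiTap is visible in the example: each character costs exactly one keypress, which is where the reported 23% speed advantage comes from despite the higher error rate.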

p. 91-94

Kobayashi, Makoto and Igarashi, Takeo (2003): Considering the direction of cursor movement for efficient traversal of cascading menus. In: Proceedings of the 16th annual ACM Symposium on User Interface Software and Technology, November 2-5, 2003, Vancouver, Canada. pp. 91-94. Available online

Cascading menus are commonly seen in most GUI systems. However, people sometimes choose the wrong items by mistake, or become frustrated when submenus pop up unnecessarily. This paper proposes two methods for improving the usability of cascading menus. The first uses the direction of cursor movement to change the menu behavior: horizontal motion opens/closes submenus, while vertical motion changes the highlight within the current menu. This feature can reduce cursor movement errors. The second causes a submenu to pop up at the position where horizontal motion occurs. This is expected to reduce the length of the movement path for menu traversal. A user study showed that our methods reduce menu selection times, shorten search path lengths, and prevent unexpected submenu appearance and disappearance.

© All rights reserved Kobayashi and Igarashi and/or ACM Press
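The first technique above (horizontal motion operates submenus, vertical motion moves the highlight) reduces to a small decision rule on cursor displacement. The sketch below is a minimal, assumed formulation of that rule, not the authors' implementation.

```python
# Direction-sensitive cascading menus: classify each cursor displacement and
# route it to submenu control or highlight movement accordingly.

def classify_motion(dx, dy):
    """Classify a cursor displacement as predominantly horizontal or vertical."""
    return "horizontal" if abs(dx) > abs(dy) else "vertical"

def menu_action(dx, dy):
    kind = classify_motion(dx, dy)
    if kind == "horizontal":
        # Rightward motion opens the submenu; leftward motion closes it.
        return "open submenu" if dx > 0 else "close submenu"
    # Vertical motion only changes the highlighted item, so drifting down a
    # long menu no longer pops submenus open by accident.
    return "move highlight"

print(menu_action(12, 3))   # open submenu
print(menu_action(-9, 2))   # close submenu
print(menu_action(1, -15))  # move highlight
```

Because a mostly vertical drag never touches submenu state, the rule directly addresses the "submenus pop up unnecessarily" frustration the abstract describes.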

p. 95-104

Suh, Bongwon, Ling, Haibin, Bederson, Benjamin B. and Jacobs, David W. (2003): Automatic thumbnail cropping and its effectiveness. In: Proceedings of the 16th annual ACM Symposium on User Interface Software and Technology, November 2-5, 2003, Vancouver, Canada. pp. 95-104. Available online

Thumbnail images provide users of image retrieval and browsing systems with a method for quickly scanning large numbers of images. Recognizing the objects in an image is important in many retrieval tasks, but thumbnails generated by shrinking the original image often render objects illegible. We study the ability of computer vision systems to detect key components of images so that automated cropping, prior to shrinking, can render objects more recognizable. We evaluate automatic cropping techniques 1) based on a general method that detects salient portions of images, and 2) based on automatic face detection. Our user study shows that these methods result in small thumbnails that are substantially more recognizable and easier to find in the context of visual search.

© All rights reserved Suh et al. and/or ACM Press






Page Information

Page maintainer: The Editorial Team
This is a tablet-optimized version of http://www.interaction-design.org/references/conferences/proceedings_of_the_16th_annural_acm_symposium_on_user_interface_software_and_technology.html
