Proceedings of the 2005 ACM Symposium on User Interface Software and Technology


 
Time and place: 2005

Conference description: UIST is the premier forum for innovations in developing human-computer interfaces. The symposium brings together user-interface researchers and practitioners with an interest in techniques, tools, and technology for constructing high-quality, innovative user interfaces.
Series: Part of the UIST - Symposium on User Interface Software and Technology conference series. This is a preferred venue for people like Scott E. Hudson, Ravin Balakrishnan, Brad A. Myers, Steven K. Feiner, and Takeo Igarashi.
Publisher: ACM Press

References from this conference (2005)

The following articles are from "Proceedings of the 2005 ACM Symposium on User Interface Software and Technology":


Articles

p. 1-2

Starkweather, Gary K. (2005): Personal computing in the 21st century. In: Proceedings of the 2005 ACM Symposium on User Interface Software and Technology. pp. 1-2.

Ever since the dawn of the digital computer, invention, innovation, and creativity have been a hallmark of the industry. The mainframe computer seemed for a while to be the real player, with experts or at least highly trained professionals operating these large and expensive machines. Most users were allowed to see them through glass windows, but "hands on" was a rare opportunity. In 1972, the Xerox Palo Alto Research Center (PARC) built a remarkable personal computer named the ALTO. Except for the visionaries at PARC and a few others, most people considered the personal computer a mere curiosity in this early period. Today, the personal computer has become a tool that very few imagined. What might be yet to come? While prognosticating about the future is a risky endeavor at best, perhaps we can obtain a look ahead with a straightforward review of the current status of personal computing. We will look at operating systems, application software, and peripherals; however, the real goal of this talk is to see what the user interface, tools, and interactions with this future computing environment might be, or perhaps even should be. Will we still be using continuing variations of Doug Engelbart's mouse in 2020, or might something new and much more advanced emerge? How might users seamlessly deal with terabytes of storage? How might multi-user environments be used, and could multi-OS machines be an economic and generally available personal computing environment? Are there user experience issues that are critical in multi-OS environments? How might the user's display be different from today's? Will tomorrow's displays be larger, have a significantly higher pixel density, be much more paper-like, etc.? Might electronic printers and their requisite paper output still be with us by 2025, for example? Will home and neighborhood network resources finally be a powerful ally of the computing environment? Many exciting opportunities and questions beg for answers and industry insight. This talk will attempt to peer into the near future to see what we might expect of the personal computing environment, based on what we can extrapolate from current experience and technology directions. While the exactitude of such projections may be limited, taken as a whole, there is perhaps much that can be learned from such an exercise. Why do this? Charles Kettering, the great automotive inventor, was asked why he spent so much time planning and thinking about the future. He wisely replied, "Because I am going to spend the rest of my life there." Thirty years ago, very few could have imagined all the wonderful things that personal computing has enabled. Perhaps we have just begun our exciting journey.

© All rights reserved Starkweather and/or ACM Press

p. 103-106

Dickie, Connor, Vertegaal, Roel, Sohn, Changuk and Cheng, Daniel (2005): eyeLook: using attention to facilitate mobile media consumption. In: Proceedings of the 2005 ACM Symposium on User Interface Software and Technology. pp. 103-106.

One of the problems with mobile media devices is that they may distract users during critical everyday tasks, such as navigating the streets of a busy city. We addressed this issue in the design of eyeLook: a platform for attention sensitive mobile computing. eyeLook appliances use embedded low cost eyeCONTACT sensors (ECS) to detect when the user looks at the display. We discuss two eyeLook applications, seeTV and seeTXT, that facilitate courteous media consumption in mobile contexts by using the ECS to respond to user attention. seeTV is an attentive mobile video player that automatically pauses content when the user is not looking. seeTXT is an attentive speed reading application that flashes words on the display, advancing text only when the user is looking. By making mobile media devices sensitive to actual user attention, eyeLook allows applications to gracefully transition users between consuming media, and managing life.

© All rights reserved Dickie et al. and/or ACM Press

p. 107-110

Sohn, Byungkon and Lee, Geehyuk (2005): Circle & identify: interactivity-augmented object recognition for handheld devices. In: Proceedings of the 2005 ACM Symposium on User Interface Software and Technology. pp. 107-110.

The first requirement of a "spatial mouse" is the ability to identify the object that it is aiming at. Among the many possible technologies that could be employed for this purpose, the best solution would likely be object recognition by machine vision. The problem, however, is that object recognition algorithms are not yet reliable enough or light enough for hand-held devices. This paper demonstrates that a simple object recognition algorithm can become a practical solution when augmented by interactivity. The user draws a circle around a target using a spatial mouse, and the mouse captures a series of camera frames. The frames can be easily stitched together to give a target image separated from the background, after which only the additional steps of feature extraction and object classification are needed. We present here results from two experiments with a few household objects.

© All rights reserved Sohn and Lee and/or ACM Press
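
A minimal sketch of the final classification step the abstract describes, assuming OpenCV and assuming the circling gesture has already produced a segmented target image; the colour-histogram feature and the library of reference images are illustrative stand-ins, not the authors' method:

    import cv2

    def histogram(img):
        # Coarse 3D colour histogram as a cheap feature vector.
        h = cv2.calcHist([img], [0, 1, 2], None, [8, 8, 8],
                         [0, 256, 0, 256, 0, 256])
        return cv2.normalize(h, h).flatten()

    def classify(target_img, library):
        # library: dict mapping object name -> reference image (BGR).
        target_h = histogram(target_img)
        return max(library,
                   key=lambda name: cv2.compareHist(
                       histogram(library[name]), target_h,
                       cv2.HISTCMP_CORREL))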

p. 111-114

Coelho, Enylton Machado, MacIntyre, Blair and Julier, Simon J. (2005): Supporting interaction in augmented reality in the presence of uncertain spatial knowledge. In: Proceedings of the 2005 ACM Symposium on User Interface Software and Technology. pp. 111-114.

A significant problem encountered when building Augmented Reality (AR) systems is that all spatial knowledge about the world has uncertainty associated with it. This uncertainty manifests itself as registration errors between the graphics and the physical world, and ambiguity in user interaction. In this paper, we show how estimates of the registration error can be leveraged to support predictable selection in the presence of uncertain 3D knowledge. These ideas are demonstrated in osgAR, an extension to OpenSceneGraph with explicit support for uncertainty in the 3D transformations. The osgAR runtime propagates this uncertainty throughout the scene graph to compute robust estimates of the probable location of all entities in the system from the user's viewpoint, in real-time. We discuss the implementation of selection in osgAR, and the issues that must be addressed when creating interaction techniques in such a system.

© All rights reserved Coelho et al. and/or ACM Press
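
A minimal sketch of the core idea, propagating positional uncertainty through a transform and turning it into a screen-space selection radius; this is a simplified stand-in, not the osgAR API:

    import numpy as np

    def transform_with_uncertainty(point, cov, R, t):
        # Rigid transform x' = R x + t; a Gaussian covariance maps as R cov R^T.
        return R @ point + t, R @ cov @ R.T

    def selection_radius(cov3d, focal_len, depth, n_sigma=2.0):
        # Crude pinhole projection of the in-plane position uncertainty
        # to pixels: largest x/y standard deviation, scaled by f/z.
        sigma = np.sqrt(np.max(np.linalg.eigvalsh(cov3d[:2, :2])))
        return n_sigma * focal_len * sigma / depth

Selection can then treat anything whose projection falls within this radius of the cursor as a plausible target, rather than trusting a single uncertain point.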

p. 115-118

Han, Jefferson Y. (2005): Low-cost multi-touch sensing through frustrated total internal reflection. In: Proceedings of the 2005 ACM Symposium on User Interface Software and Technology. pp. 115-118.

This paper describes a simple, inexpensive, and scalable technique for enabling high-resolution multi-touch sensing on rear-projected interactive surfaces based on frustrated total internal reflection. We review previous applications of this phenomenon to sensing, provide implementation details, discuss results from our initial prototype, and outline future directions.

© All rights reserved Han and/or ACM Press
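
A minimal sketch of the sensing side, assuming OpenCV and a grayscale infrared camera frame of the surface; the thresholds are illustrative and this is an assumed pipeline rather than Han's implementation. Fingers pressed against the surface frustrate the internal reflection and show up as bright blobs:

    import cv2

    def detect_touches(frame, threshold=60, min_area=30):
        # frame: 8-bit grayscale image from the IR camera.
        blurred = cv2.GaussianBlur(frame, (5, 5), 0)  # suppress sensor noise
        _, mask = cv2.threshold(blurred, threshold, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        touches = []
        for c in contours:
            if cv2.contourArea(c) >= min_area:
                m = cv2.moments(c)
                touches.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
        return touches  # (x, y) centroids of likely touch points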

p. 119-122

Forlines, Clifton and Shen, Chia (2005): DTLens: multi-user tabletop spatial data exploration. In: Proceedings of the 2005 ACM Symposium on User Interface Software and Technology. pp. 119-122.

Supporting groups of individuals exploring large maps and design diagrams on interactive tabletops is still an open research problem. Today's geospatial, mechanical engineering and CAD design applications are mostly single-user, keyboard and mouse-based desktop applications. In this paper, we present the design of and experience with DTLens, a new zoom-in-context, multi-user, two-handed, multi-lens interaction technique that enables group exploration of spatial data with multiple individual lenses on the same direct-touch interactive tabletop. DTLens provides a set of consistent interactions on lens operations, thus minimizing tool switching by users during spatial data exploration.

© All rights reserved Forlines and Shen and/or ACM Press

p. 123-131

Latulipe, Celine, Kaplan, Craig S. and Clarke, Charles L. A. (2005): Bimanual and unimanual image alignment: an evaluation of mouse-based techniques. In: Proceedings of the 2005 ACM Symposium on User Interface Software and Technology. pp. 123-131.

We present an evaluation of three mouse-based techniques for aligning digital images. We investigate the physical image alignment task and discuss the implications for interacting with virtual images. In a formal evaluation we show that a symmetric bimanual technique outperforms an asymmetric bimanual technique which in turn outperforms a unimanual technique. We show that even after mode switching times are removed, the symmetric technique outperforms the single mouse technique. Subjects also exhibited more parallel interaction using the symmetric technique than when using the asymmetric technique.

© All rights reserved Latulipe et al. and/or ACM Press

p. 13-22

Chapuis, Olivier and Roussel, Nicolas (2005): Metisse is not a 3D desktop!. In: Proceedings of the 2005 ACM Symposium on User Interface Software and Technology. pp. 13-22.

Twenty years after the general adoption of overlapping windows and the desktop metaphor, modern window systems differ mainly in minor details such as window decorations or mouse and keyboard bindings. While a number of innovative window management techniques have been proposed, few of them have been evaluated and fewer have made their way into real systems. We believe that one reason for this is that most of the proposed techniques have been designed using a low fidelity approach and were never made properly available. In this paper, we present Metisse, a fully functional window system specifically created to facilitate the design, the implementation and the evaluation of innovative window management techniques. We describe the architecture of the system, some of its implementation details and present several examples that illustrate its potential.

© All rights reserved Chapuis and Roussel and/or ACM Press

p. 133-141

Asano, Takeshi, Sharlin, Ehud, Kitamura, Yoshifumi, Takashima, Kazuki and Kishino, Fumio (2005): Predictive interaction using the delphian desktop. In: Proceedings of the 2005 ACM Symposium on User Interface Software and Technology. pp. 133-141.

This paper details the design and evaluation of the Delphian Desktop, a mechanism for online spatial prediction of cursor movements in a Windows-Icons-Menus-Pointers (WIMP) environment. Interaction with WIMP-based interfaces often becomes a spatially challenging task when the physical interaction mediators are the common mouse and a high resolution, physically large display screen. These spatial challenges are especially evident in overly crowded Windows desktops. The Delphian Desktop integrates simple yet effective predictive spatial tracking and selection paradigms into ordinary WIMP environments in order to simplify and ease pointing tasks. Predictions are calculated by tracking cursor movements and estimating spatial intentions using a computationally inexpensive online algorithm based on estimating the movement direction and peak velocity. In testing, the Delphian Desktop effectively shortened pointing time to faraway icons and reduced the overall physical distance the mouse (and user hand) had to mechanically traverse.

© All rights reserved Asano et al. and/or ACM Press
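
A minimal sketch of peak-velocity endpoint prediction in the spirit of the abstract; the linear constant k and the bookkeeping details are assumptions, not the authors' fitted model. The total movement distance is assumed to be roughly proportional to the peak cursor velocity, along the current movement direction:

    import math

    def predict_endpoint(samples, k=0.9):
        # samples: list of (x, y, t) cursor readings for the current movement.
        sx, sy, _ = samples[0]
        vmax, direction = 0.0, (0.0, 0.0)
        for (x0, y0, t0), (x1, y1, t1) in zip(samples, samples[1:]):
            if t1 == t0:
                continue
            d = math.hypot(x1 - x0, y1 - y0)
            v = d / (t1 - t0)
            if v > vmax and d > 0:
                vmax = v
                direction = ((x1 - x0) / d, (y1 - y0) / d)
        dist = k * vmax  # assumed linear distance-vs-peak-velocity fit
        return sx + dist * direction[0], sy + dist * direction[1]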

p. 143-152

Ramos, Gonzalo and Balakrishnan, Ravin (2005): Zliding: fluid zooming and sliding for high precision parameter manipulation. In: Proceedings of the 2005 ACM Symposium on User Interface Software and Technology. pp. 143-152.

High precision parameter manipulation tasks typically require adjustment of the scale of manipulation in addition to the parameter itself. This paper introduces the notion of Zoom Sliding, or Zliding, for fluid integrated manipulation of scale (zooming) via pressure input while parameter manipulation within that scale is achieved via x-y cursor movement (sliding). We also present the Zlider (Figure 1), a widget that instantiates the Zliding concept. We experimentally evaluate three different input techniques for use with the Zlider in conjunction with a stylus for x-y cursor positioning, in a high accuracy zoom and select task. Our results marginally favor the stylus with integrated isometric pressure sensing tip over bimanual techniques which separate zooming and sliding controls over the two hands. We discuss the implications of our results and present further designs that make use of Zliding.

© All rights reserved Ramos and Balakrishnan and/or ACM Press
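
A minimal sketch of the Zliding mapping; the class and the exponential pressure-to-zoom curve are illustrative assumptions, not the authors' implementation. Pressure sets the scale, and x-movement adjusts the parameter at a granularity that shrinks as the zoom grows:

    class Zlider:
        def __init__(self, value=0.0, max_zoom=1000.0):
            self.value = value
            self.max_zoom = max_zoom

        def update(self, dx, pressure):
            # pressure in [0, 1] -> zoom in [1, max_zoom].
            zoom = self.max_zoom ** pressure
            self.value += dx / zoom  # finer adjustment when zoomed in
            return self.value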

p. 153-162

Liu, Feng and Gleicher, Michael (2005): Automatic image retargeting with fisheye-view warping. In: Proceedings of the 2005 ACM Symposium on User Interface Software and Technology. pp. 153-162.

Image retargeting is the problem of adapting images for display on devices different from those originally intended. This paper presents a method for adapting large images, such as those taken with a digital camera, for a small display, such as a cellular telephone. The method uses a non-linear fisheye-view warp that emphasizes parts of an image while shrinking others. Like previous methods, fisheye-view warping uses image information, such as low-level salience and high-level object recognition, to find important regions of the source image. However, unlike prior approaches, a non-linear image warping function emphasizes the important aspects of the image while retaining the surrounding context. The method has advantages in preserving information content, alerting the viewer to missing information and providing robustness.

© All rights reserved Liu and Gleicher and/or ACM Press
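
A minimal sketch of a one-dimensional fisheye-style warp of the kind the abstract describes; this is an illustrative function, not the paper's actual warp, and it assumes dst_w >= roi_hi - roi_lo. Coordinates inside the important region keep full scale, while coordinates outside are compressed so the whole source still fits the target width:

    def fisheye_warp_1d(x, roi_lo, roi_hi, src_w, dst_w):
        # Map source coordinate x in [0, src_w] to [0, dst_w],
        # preserving the region of interest [roi_lo, roi_hi] at 1:1 scale.
        roi = roi_hi - roi_lo
        squeeze = (dst_w - roi) / (src_w - roi)  # compression outside the ROI
        if x < roi_lo:
            return x * squeeze
        if x <= roi_hi:
            return roi_lo * squeeze + (x - roi_lo)
        return roi_lo * squeeze + roi + (x - roi_hi) * squeeze

Applying such a warp independently to rows and columns gives a simple 2D fisheye that magnifies the salient region while keeping its context visible.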

p. 163-172

Bolin, Michael, Webber, Matthew, Rha, Philip, Wilson, Tom and Miller, Robert C. (2005): Automation and customization of rendered web pages. In: Proceedings of the 2005 ACM Symposium on User Interface Software and Technology. pp. 163-172.

On the desktop, an application can expect to control its user interface down to the last pixel, but on the World Wide Web, a content provider has no control over how the client will view the page, once delivered to the browser. This creates an opportunity for end-users who want to automate and customize their web experiences, but the growing complexity of web pages and standards prevents most users from realizing this opportunity. We describe Chickenfoot, a programming system embedded in the Firefox web browser, which enables end-users to automate, customize, and integrate web applications without examining their source code. One way Chickenfoot addresses this goal is a novel technique for identifying page components by keyword pattern matching. We motivate this technique by studying how users name web page components, and present a heuristic keyword matching algorithm that identifies the desired component from the user's name.

© All rights reserved Bolin et al. and/or ACM Press
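
A minimal sketch of heuristic keyword matching in this spirit; the scoring scheme and data layout are stand-ins, not the paper's exact algorithm. Each candidate page component is scored by how many of the user's keywords appear in its visible text and attributes:

    def best_match(name, elements):
        # elements: dicts such as {"text": "Search the web", "attrs": "id=q"}.
        keywords = set(name.lower().split())

        def score(el):
            haystack = (el.get("text", "") + " " + el.get("attrs", "")).lower()
            return sum(1 for k in keywords if k in haystack)

        return max(elements, key=score)

For example, best_match("search button", elements) would favor a component whose label or attributes contain "search".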

p. 173-182

Gajos, Krzysztof and Weld, Daniel S. (2005): Preference elicitation for interface optimization. In: Proceedings of the 2005 ACM Symposium on User Interface Software and Technology. pp. 173-182.

Decision-theoretic optimization is becoming a popular tool in the user interface community, but creating accurate cost (or utility) functions has become a bottleneck -- in most cases the numerous parameters of these functions are chosen manually, which is a tedious and error-prone process. This paper describes ARNAULD, a general interactive tool for eliciting user preferences concerning concrete outcomes and using this feedback to automatically learn a factored cost function. We empirically evaluate our machine learning algorithm and two automatic query generation approaches and report on an informal user study.

© All rights reserved Gajos and Weld and/or ACM Press
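
A minimal sketch of learning a factored (linear) cost function from pairwise preferences; the perceptron-style update is a simple stand-in for the paper's learning algorithm, and the names are illustrative:

    import numpy as np

    def learn_weights(preferences, n_features, lr=0.1, epochs=50):
        # preferences: list of (features_a, features_b), a preferred to b.
        w = np.zeros(n_features)
        for _ in range(epochs):
            for fa, fb in preferences:
                fa, fb = np.asarray(fa, float), np.asarray(fb, float)
                if w @ fa >= w @ fb:  # violated: a should have lower cost
                    w += lr * (fb - fa)
        return w  # learned cost(x) = w @ x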

p. 183-186

Diakopoulos, Nicholas and Essa, Irfan (2005): Mediating photo collage authoring. In: Proceedings of the 2005 ACM Symposium on User Interface Software and Technology. pp. 183-186.

The medium of collage supports the visualization of meaningful event summaries using photographs. It can however be rather tedious to author a collage from a large collection of photographs. In this work we present an approach that supports efficient construction of a collage by assisting the user with an automatic layout procedure that can be controlled at a high level. Our layout method utilizes a pre-designed template which consists of cells for photos and annotations applied to these cells. The layout is then filled by matching the metadata of photos to the annotations in the cells using an optimization algorithm. The user exercises flexibility in the authoring process by (a) maintaining high-level control through the types of constraints applied and (b) leveraging visual emphases supported by the layout algorithm. The user can of course provide fine-grained control of the final collage through direct manipulation. Off-loading the tedium of collage construction to a user controlled yet automated process clears the way for rapidly generating different views of the same album and could also support the increased sharing of digital photos in the form of compact collages.

© All rights reserved Diakopoulos and Essa and/or ACM Press
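
A minimal sketch of filling a template by matching photo metadata to cell annotations; a greedy assignment is used here as a stand-in for the paper's optimization, and the data layout is assumed:

    def fill_template(cells, photos):
        # cells:  [{"annotations": {"beach", "group"}}, ...]
        # photos: [{"name": "img1.jpg", "tags": {"beach", "sunset"}}, ...]
        # Assumes at least as many photos as cells.
        remaining = list(photos)
        layout = {}
        for i, cell in enumerate(cells):
            best = max(remaining,
                       key=lambda p: len(p["tags"] & cell["annotations"]))
            layout[i] = best
            remaining.remove(best)
        return layout  # cell index -> chosen photo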

p. 187-190

Yin, Min and Zhai, Shumin (2005): Dial and see: tackling the voice menu navigation problem with cross-device user experience integration. In: Proceedings of the 2005 ACM Symposium on User Interface Software and Technology. pp. 187-190.

IVR (interactive voice response) menu navigation has long been recognized as a frustrating interaction experience. We propose an IM-based system that sends a coordinated visual IVR menu to the caller's computer screen. The visual menu is updated in real time in response to the caller's actions. With this automatically opened supplementary channel, callers can take advantage of different modalities over different devices and interact with the IVR system with the ease of graphical menu selection. Our approach of utilizing existing network infrastructure to pinpoint the caller's virtual location and coordinating multiple devices and multiple channels based on users' ID registration can also be more generally applied to create integrated user experiences across a group of devices.

© All rights reserved Yin and Zhai and/or ACM Press

p. 191-200

Bergman, Lawrence, Castelli, Vittorio, Lau, Tessa and Oblinger, Daniel (2005): DocWizards: a system for authoring follow-me documentation wizards. In: Proceedings of the 2005 ACM Symposium on User Interface Software and Technology. pp. 191-200.

Traditional documentation for computer-based procedures is difficult to use: readers have trouble navigating long complex instructions, have trouble mapping from the text to display widgets, and waste time performing repetitive procedures. We propose a new class of improved documentation that we call follow-me documentation wizards. Follow-me documentation wizards step a user through a script representation of a procedure by highlighting portions of the text as well as application UI elements. This paper presents algorithms for automatically capturing follow-me documentation wizards by demonstration, through observing experts performing the procedure. We also present our DocWizards implementation on the Eclipse platform. We evaluate our system with an initial user study showing that most users have a marked preference for this form of guidance over traditional documentation.

© All rights reserved Bergman et al. and/or ACM Press

p. 201-210

Dragicevic, Pierre, Chatty, Stephane, Thevenin, David and Vinot, Jean-Luc (2005): Artistic resizing: a technique for rich scale-sensitive vector graphics. In: Proceedings of the 2005 ACM Symposium on User Interface Software and Technology. pp. 201-210.

When involved in the visual design of graphical user interfaces, graphic designers can do more than provide static graphics for programmers to incorporate into applications. We describe a technique that allows them to provide examples of graphical objects at various key sizes using their usual drawing tool, and then lets the system interpolate their resizing behavior. We relate this technique to current practices of graphic designers, provide examples of its use and describe the underlying inference algorithm. We show how the mathematical properties of the algorithm allow the system to be predictable, and explain how it can be combined with more traditional layout mechanisms.

© All rights reserved Dragicevic et al. and/or ACM Press

p. 211-220

Kandogan, Eser, Haber, Eben, Barrett, Rob, Cypher, Allen, Maglio, Paul P. and Zhao, Haixia (2005): A1: end-user programming for web-based system administration. In: Proceedings of the 2005 ACM Symposium on User Interface Software and Technology. pp. 211-220.

System administrators work with many different tools to manage and fix complex hardware and software infrastructure in a rapidly paced work environment. Through extensive field studies, we observed that they often build and share custom tools for specific tasks that are not supported by vendor tools. Recent trends toward web-based management consoles offer many advantages but put an extra burden on system administrators, as customization requires web programming, which is beyond the skills of many system administrators. To meet their needs, we developed A1, a spreadsheet-based environment with a task-specific system-administration language for quickly creating small tools or migrating existing scripts to run as web portlets. Using A1, system administrators can build spreadsheets to access remote and heterogeneous systems, gather and integrate status data, and orchestrate control of disparate systems in a uniform way. A preliminary user study showed that in just a few hours, system administrators can learn to use A1 to build relatively complex tools from scratch.

© All rights reserved Kandogan et al. and/or ACM Press

p. 221-230

Li, Yang and Landay, James A. (2005): Informal prototyping of continuous graphical interactions by demonstration. In: Proceedings of the 2005 ACM Symposium on User Interface Software and Technology. pp. 221-230.

Informal prototyping tools have shown great potential in facilitating the early stage design of user interfaces. However, continuous interactions, an important constituent of highly interactive interfaces, have not been well supported by previous tools. These interactions give continuous visual feedback, such as geometric changes of a graphical object, in response to continuous user input, such as the movement of a mouse. We built Monet, a sketch-based tool for prototyping continuous interactions by demonstration. In Monet, designers can prototype continuous widgets and their states of interest using examples. They can also demonstrate compound behaviors involving multiple widgets by direct manipulation. Monet allows continuous interactions to be easily integrated with event-based, discrete interactions. Continuous widgets can be embedded into storyboards and their states can condition or trigger storyboard transitions. Monet achieves these features by employing continuous function approximation and statistical classification techniques, without using any domain specific knowledge or assuming any application semantics. Informal feedback showed that Monet is a promising approach to enabling more complete tool support for early stage UI design.

© All rights reserved Li and Landay and/or ACM Press

p. 23-32

Berry, Lior, Bartram, Lyn and Booth, Kellogg S. (2005): Role-based control of shared application views. In: Proceedings of the 2005 ACM Symposium on User Interface Software and Technology. pp. 23-32.

Collaboration often relies on all group members having a shared view of a single-user application. A common situation is a single active presenter sharing a live view of her workstation screen with a passive audience, using simple hardware-based video signal projection onto a large screen or simple bitmap-based sharing protocols. This offers simplicity and some advantages over more sophisticated software-based replication solutions, but everyone has the exact same view of the application. This conflicts with the presenter's need to keep some information and interaction details private. It also fails to recognize the needs of the passive audience, who may struggle to follow the presentation because of verbosity, display clutter or insufficient familiarity with the application. Views that cater to the different roles of the presenter and the audience can be provided by custom solutions, but these tend to be bound to a particular application. In this paper we describe a general technique and implementation details of a prototype system that allows standardized role-specific views of existing single-user applications and permits additional customization that is application-specific with no change to the application source code. Role-based policies control manipulation and display of shared windows and image buffers produced by the application, providing semi-automated privacy protection and relaxed verbosity to meet both presenter and audience needs.

© All rights reserved Berry et al. and/or ACM Press

p. 231-240

Marti, Stefan and Schmandt, Chris (2005): Physical embodiments for mobile communication agents. In: Proceedings of the 2005 ACM Symposium on User Interface Software and Technology. pp. 231-240.

This paper describes a physically embodied and animated user interface to an interactive call handling agent, consisting of a small wireless animatronic device in the form of a squirrel, bunny, or parrot. A software tool creates movement primitives, composes these primitives into complex behaviors, and triggers these behaviors dynamically at state changes in the conversational agent's finite state machine. Gaze and gestural cues from the animatronics alert both the user and co-located third parties of incoming phone calls, and data suggests that such alerting is less intrusive than conventional telephones.

© All rights reserved Marti and Schmandt and/or ACM Press

p. 241-244

Liao, Chunyuan, Guimbretiere, Francois and Hinckley, Ken (2005): PapierCraft: a command system for interactive paper. In: Proceedings of the 2005 ACM Symposium on User Interface Software and Technology. pp. 241-244.

Knowledge workers use paper extensively for document reviewing and note-taking due to its versatility and simplicity of use. As users annotate printed documents and gather notes, they create a rich web of annotations and cross references. Unfortunately, as paper is a static medium, this web often gets trapped in the physical world. While several digital solutions such as XLibris [15] and Digital Desk [18] have been proposed, they suffer from a small display size or onerous hardware requirements. To address these limitations, we propose PapierCraft, a gesture-based interface that allows users to manipulate digital documents directly using their printouts as proxies. Using a digital pen, users can annotate a printout or draw command gestures to indicate operations such as copying a document area, pasting an area previously copied, or creating a link. Upon pen synchronization, our infrastructure executes these commands and presents the result in a customized viewer. In this paper we describe the design and implementation of the PapierCraft command system, and report on early user feedback.

© All rights reserved Liao et al. and/or ACM Press

p. 245-252

Dietz, Paul H., Harsham, Bret, Forlines, Clifton, Leigh, Darren, Yerazunis, William, Shipman, Sam, Schmidt-Nielsen, Bent and Ryall, Kathy (2005): DT controls: adding identity to physical interfaces. In: Proceedings of the 2005 ACM Symposium on User Interface Software and Technology. pp. 245-252.

In this paper, we show how traditional physical interface components such as switches, levers, knobs and touch screens can be easily modified to identify who is activating each control. This allows us to change the function performed by the control, and the sensory feedback provided by the control itself, dependent upon the user. An auditing function is also available that logs each user's actions. We describe a number of example usage scenarios for our technique, and present two sample implementations.

© All rights reserved Dietz et al. and/or ACM Press

p. 253-258

Mankoff, Demi, Dey, Anind K., Mankoff, Jennifer and Mankoff, Ken (2005): Supporting interspecies social awareness: using peripheral displays for distributed pack awareness. In: Proceedings of the 2005 ACM Symposium on User Interface Software and Technology. pp. 253-258.

In interspecies households, it is common for the non-homo sapien members to be isolated and ignored for many hours each day when humans are out of the house or working. For pack animals, such as canines, information about a pack member's extended pack interactions (outside of the nuclear household) could help to mitigate this social isolation. We have developed a Pack Activity Watch System: Allowing Broad Interspecies Love In Telecommunication with Internet-Enabled Sociability (PAWSABILITIES) for helping to support remote awareness of social activities. Our work focuses on canine companions, and includes, pawticipatory design, labradory tests, and canid camera monitoring.

© All rights reserved Mankoff et al. and/or ACM Press

p. 3-12

Ko, Andrew J. and Myers, Brad A. (2005): Citrus: a language and toolkit for simplifying the creation of structured editors for code and data. In: Proceedings of the 2005 ACM Symposium on User Interface Software and Technology. pp. 3-12.

Direct-manipulation editors for structured data are increasingly common. While such editors can greatly simplify the creation of structured data, there are few tools to simplify the creation of the editors themselves. This paper presents Citrus, a new programming language and user interface toolkit designed for this purpose. Citrus offers language-level support for constraints, restrictions and change notifications on primitive and aggregate data, mechanisms for automatically creating, removing, and reusing views as data changes, a library of widgets, layouts and behaviors for defining interactive views, and two comprehensive interactive editors as an interface to the language and toolkit itself. Together, these features support the creation of editors for a large class of data and code.

© All rights reserved Ko and Myers and/or ACM Press

p. 33-42

Vogel, Daniel and Balakrishnan, Ravin (2005): Distant freehand pointing and clicking on very large, high resolution displays. In: Proceedings of the 2005 ACM Symposium on User Interface Software and Technology. pp. 33-42.

We explore the design space of freehand pointing and clicking interaction with very large high resolution displays from a distance. Three techniques for gestural pointing and two for clicking are developed and evaluated. In addition, we present subtle auditory and visual feedback techniques to compensate for the lack of kinesthetic feedback in freehand interaction, and to promote learning and use of appropriate postures.

© All rights reserved Vogel and Balakrishnan and/or ACM Press

p. 43-52

Malik, Shahzad, Ranjan, Abhishek and Balakrishnan, Ravin (2005): Interacting with large displays from a distance with vision-tracked multi-finger gestural input. In: Proceedings of the 2005 ACM Symposium on User Interface Software and Technology. pp. 43-52.

We explore the idea of using vision-based hand tracking over a constrained tabletop surface area to perform multi-finger and whole-hand gestural interactions with large displays from a distance. We develop bimanual techniques to support a variety of asymmetric and symmetric interactions, including fast targeting and navigation to all parts of a large display from the comfort of a desk and chair, as well as techniques that exploit the ability of the vision-based hand tracking system to provide multi-finger identification and full 2D hand segmentation. We also posit a design that allows for handling multiple concurrent users.

© All rights reserved Malik et al. and/or ACM Press

p. 53-61

Smith, John D., Vertegaal, Roel and Sohn, Changuk (2005): ViewPointer: lightweight calibration-free eye tracking for ubiquitous handsfree deixis. In: Proceedings of the 2005 ACM Symposium on User Interface Software and Technology. pp. 53-61.

We introduce ViewPointer, a wearable eye contact sensor that detects deixis towards ubiquitous computers embedded in real world objects. ViewPointer consists of a small wearable camera no more obtrusive than a common Bluetooth headset. ViewPointer allows any real-world object to be augmented with eye contact sensing capabilities, simply by embedding a small infrared (IR) tag. The headset camera detects when a user is looking at an infrared tag by determining whether the reflection of the tag on the cornea of the user's eye appears sufficiently central to the pupil. ViewPointer not only allows any object to become an eye contact sensing appliance, it also allows identification of users and transmission of data to the user through the object. We present a novel encoding scheme used to uniquely identify ViewPointer tags, as well as a method for transmitting URLs over tags. We present a number of scenarios of application as well as an analysis of design principles. We conclude that eye contact sensing input is best utilized to provide context to action.

© All rights reserved Smith et al. and/or ACM Press
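
A minimal sketch of the eye-contact test the abstract describes; the threshold and names are assumptions. The user counts as looking at a tag when the tag's corneal reflection lies close enough to the detected pupil centre:

    import math

    def looking_at_tag(pupil_center, glint, max_offset_px=8.0):
        # pupil_center, glint: (x, y) positions in the headset camera image.
        return math.hypot(glint[0] - pupil_center[0],
                          glint[1] - pupil_center[1]) <= max_offset_px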

p. 63-72

Lee, Johnny C., Hudson, Scott E., Summet, Jay W. and Dietz, Paul H. (2005): Moveable interactive projected displays using projector based tracking. In: Proceedings of the 2005 ACM Symposium on User Interface Software and Technology. pp. 63-72.

Video projectors have typically been used to display images on surfaces whose geometric relationship to the projector remains constant, such as walls or pre-calibrated surfaces. In this paper, we present a technique for projecting content onto moveable surfaces that adapts to the motion and location of the surface to simulate an active display. This is accomplished using a projector based location tracking technique. We use light sensors embedded into the moveable surface and project low-perceptibility Gray-coded patterns to first discover the sensor locations, and then incrementally track them at interactive rates. We describe how to reduce the perceptibility of tracking patterns, achieve interactive tracking rates, use motion modeling to improve tracking performance, and respond to sensor occlusions. A group of tracked sensors can define quadrangles for simulating moveable displays while single sensors can be used as control inputs. By unifying the tracking and display technology into a single mechanism, we can substantially reduce the cost and complexity of implementing applications that combine motion tracking and projected imagery.

© All rights reserved Lee et al. and/or ACM Press
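
A minimal sketch of the Gray-code trick at the heart of such tracking (framing and synchronization details are omitted): each projected frame carries one bit-plane of the Gray-coded position, so n on/off readings let a photosensor recover which of 2^n projector columns it sits under:

    def to_gray(n):
        return n ^ (n >> 1)

    def from_gray(g):
        n = 0
        while g:
            n ^= g
            g >>= 1
        return n

    def decode_column(bits):
        # bits: most-significant-first on/off readings from the sensor.
        g = 0
        for b in bits:
            g = (g << 1) | int(b)
        return from_gray(g)

Gray codes change only one bit between adjacent positions, so a sensor sitting on a pattern boundary mis-decodes by at most one column.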

p. 73-82

Forlines, Clifton, Balakrishnan, Ravin, Beardsley, Paul, Baar, Jeroen van and Raskar, Ramesh (2005): Zoom-and-pick: facilitating visual zooming and precision pointing with interactive handheld projectors. In: Proceedings of the 2005 ACM Symposium on User Interface Software and Technology. pp. 73-82.

Designing interfaces for interactive handheld projectors is an exciting new area of research that is currently limited by two problems: hand jitter resulting in poor input control, and possible reduction of image resolution due to the needs of image stabilization and warping algorithms. We present the design and evaluation of a new interaction technique, called zoom-and-pick, that addresses both problems by allowing the user to fluidly zoom in on areas of interest and make accurate target selections. Subtle design features of zoom-and-pick enable pixel-accurate pointing, which is not possible in most freehand interaction techniques. Our evaluation results indicate that zoom-and-pick is significantly more accurate than the standard pointing technique described in our previous work.

© All rights reserved Forlines et al. and/or ACM Press

p. 83-92

Wilson, Andrew D. (2005): PlayAnywhere: a compact interactive tabletop projection-vision system. In: Proceedings of the 2005 ACM Symposium on User Interface Software and Technology. pp. 83-92.

We introduce PlayAnywhere, a front-projected computer vision-based interactive table system which uses a new commercially available projection technology to obtain a compact, self-contained form factor. PlayAnywhere's configuration addresses installation, calibration, and portability issues that are typical of most vision-based table systems, making it particularly well suited to consumer applications. PlayAnywhere also makes a number of contributions related to image processing techniques for front-projected vision-based table systems, including a shadow-based touch detection algorithm, a fast, simple visual bar code scheme tailored to projection-vision table systems, the ability to continuously track sheets of paper, and an optical flow-based algorithm for the manipulation of onscreen objects that does not rely on fragile tracking algorithms.

© All rights reserved Wilson and/or ACM Press

p. 93-102

Kortuem, Gerd, Kray, Christian and Gellersen, Hans-Werner (2005): Sensing and visualizing spatial relations of mobile devices. In: Proceedings of the 2005 ACM Symposium on User Interface Software and Technology. pp. 93-102.

Location information can be used to enhance interaction with mobile devices. While many location systems require instrumentation of the environment, we present a system that allows devices to measure their spatial relations in a true peer-to-peer fashion. The system is based on custom sensor hardware implemented as a USB dongle, and computes spatial relations in real-time. As an extension of this system, we propose a set of spatialized widgets for incorporation of spatial relations in the user interface. The use of these widgets is illustrated in a number of applications, showing how spatial relations can be employed to support and streamline interaction with mobile devices.

© All rights reserved Kortuem et al. and/or ACM Press



