Proceedings of the 2004 ACM Symposium on User Interface Software and Technology


 
Time and place: 2004
Conf. description:
UIST is the premier forum for innovations in developing human-computer interfaces. The symposium brings together user-interface researchers and practitioners with an interest in techniques, tools, and technology for constructing high-quality, innovative user interfaces.
Series:
This is a preferred venue for people like Scott E. Hudson, Ravin Balakrishnan, Brad A. Myers, Steven K. Feiner, and Takeo Igarashi. Part of the UIST - Symposium on User Interface Software and Technology conference series.
Publisher:
ACM Press

References from this conference (2004)

The following articles are from "Proceedings of the 2004 ACM Symposium on User Interface Software and Technology":


Articles

p. 1

Czerwinski, Mary (2004): Bridging the gap from theory to practice: the path toward innovation in human-computer interaction. In: Proceedings of the 2004 ACM Symposium on User Interface Software and Technology 2004. p. 1. Available online

How do we break away from existing tools and techniques in HCI and truly innovate in a way that benefits the next generation of computer users? Today, too many of our technological designs and inventions are "one off" point designs, not building on or contributing to a theoretical foundation of understanding around human perception, cognition, social behavior and physical movement. Of course, these point designs can be successful in and of themselves, so why bother with theory and models? In order to mature as a field in a way that benefits users, it can be argued that we need to work more closely together and with an awareness of multiple disciplines, including not just the computer science and engineering arenas, but also psychology, sociology, and any field of human behavior. Of course, this could be a daunting task: how do we know that important improvements in user interface design can be obtained? I will present a series of examples of what I consider to be significant contributions to the field of HCI, each based on a multidisciplinary, theory-driven approach. I hope to challenge the audience to creatively consider ways that their own work could be more theoretically motivated, and what it might take for more of us to move forward in that direction.

© All rights reserved Czerwinski and/or ACM Press

p. 109-118

Yamada, Toshiya, Shingu, Jun, Churchill, Elizabeth F., Nelson, Les, Helfman, Jonathan and Murphy, Paul (2004): Who cares?: reflecting who is reading what on distributed community bulletin boards. In: Proceedings of the 2004 ACM Symposium on User Interface Software and Technology 2004. pp. 109-118. Available online

In this paper, we describe the YeTi information sharing system that has been designed to foster community building through informal digital content sharing. The YeTi system is a general information parsing, hosting and distribution infrastructure, with interfaces designed for individual and public content reading. In this paper we describe the YeTi public display interface, with a particular focus on tools we have designed to provide lightweight awareness of others' interactions with posted content. Our tools augment content with metadata that reflect people's reading of content - captured video clips of who's reading and interacting with content, tools to allow people to leave explicit freehand annotations about content, and a visualization of the content access history to show when content is interacted with. Results from an initial evaluation are presented and discussed.

© All rights reserved Yamada et al. and/or ACM Press

p. 119-122

Letessier, Julien and Berard, Francois (2004): Visual tracking of bare fingers for interactive surfaces. In: Proceedings of the 2004 ACM Symposium on User Interface Software and Technology 2004. pp. 119-122. Available online

Visual tracking of bare fingers allows more direct manipulation of digital objects, multiple simultaneous users interacting with their two hands, and permits the interaction on large surfaces, using only commodity hardware. After presenting related work, we detail our implementation. Its design is based on our modeling of two classes of algorithms that are key to the tracker: Image Differencing Segmentation (IDS) and Fast Rejection Filters (FRF). We introduce a new chromatic distance for IDS and a FRF that is independent of finger rotation. The system runs at full frame rate (25 Hz) with an average total system latency of 80 ms, independently of the number of tracked fingers. When used in a controlled environment such as a meeting room, its robustness is satisfactory for everyday use.

© All rights reserved Letessier and Berard and/or ACM Press

p. 123-126

Lee, Johnny C., Dietz, Paul H., Maynes-Aminzade, Dan, Raskar, Ramesh and Hudson, Scott E. (2004): Automatic projector calibration with embedded light sensors. In: Proceedings of the 2004 ACM Symposium on User Interface Software and Technology 2004. pp. 123-126. Available online

Projection technology typically places several constraints on the geometric relationship between the projector and the projection surface to obtain an undistorted, properly sized image. In this paper we describe a simple, robust, fast, and low-cost method for automatic projector calibration that eliminates many of these constraints. We embed light sensors in the target surface, project Gray-coded binary patterns to discover the sensor locations, and then prewarp the image to accurately fit the physical features of the projection surface. This technique can be expanded to automatically stitch multiple projectors, calibrate onto non-planar surfaces for object decoration, and provide a method for simple geometry acquisition.

© All rights reserved Lee et al. and/or ACM Press
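The calibration described above projects a sequence of Gray-coded stripe patterns: each embedded light sensor reads one bit per frame, and the bit sequence spells out the sensor's pixel coordinate. As a rough sketch of that encode/decode idea (our illustration, not the authors' implementation; all function names are ours):

```python
def gray_encode(n: int) -> int:
    """Binary-reflected Gray code: adjacent values differ in one bit,
    so a sensor sitting on a stripe boundary is off by at most one pixel."""
    return n ^ (n >> 1)

def gray_decode(g: int) -> int:
    """Invert the Gray code by XOR-folding shifted copies of the value."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def pattern_bits(x: int, num_frames: int) -> list:
    """The light/dark sequence (MSB first) a sensor at pixel column x
    would observe across num_frames projected stripe patterns."""
    g = gray_encode(x)
    return [(g >> (num_frames - 1 - k)) & 1 for k in range(num_frames)]

def locate(bits: list) -> int:
    """Recover the pixel column from a sensor's observed bit sequence."""
    g = 0
    for b in bits:
        g = (g << 1) | b
    return gray_decode(g)
```

With 10 frames this resolves 1024 columns; a second pass of horizontal stripes recovers the row, after which the image can be prewarped to fit the recovered sensor positions.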

p. 127-136

Khan, Azam, Fitzmaurice, George W., Almeida, Don, Burtnyk, Nicolas and Kurtenbach, Gordon (2004): A remote control interface for large displays. In: Proceedings of the 2004 ACM Symposium on User Interface Software and Technology 2004. pp. 127-136. Available online

We describe a new widget and interaction technique, known as a "Frisbee," for interacting with areas of a large display that are difficult or impossible to access directly. A frisbee is simply a portal to another part of the display. It consists of a local "telescope" and a remote "target". The remote data surrounded by the target is drawn in the telescope and interactions performed within it are applied on the remote data. In this paper we define the behavior of frisbees, show unique affordances of the widget, and discuss design characteristics. We have implemented a test application and report on an experiment that shows the benefit of using the frisbee on a large display. Our results suggest that the frisbee is preferred over walking back and forth to the local and remote spaces at a distance of 4.5 feet.

© All rights reserved Khan et al. and/or ACM Press
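The frisbee's core behavior, a point manipulated inside the local "telescope" being applied at the corresponding spot in the remote "target", amounts to a linear mapping between two screen rectangles. A minimal sketch, assuming axis-aligned regions (the `Rect` type and function name are ours, not from the paper):

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x: float  # left edge
    y: float  # top edge
    w: float  # width
    h: float  # height

def telescope_to_target(px: float, py: float, telescope: Rect, target: Rect):
    """Map a point inside the local telescope to the remote target, so an
    interaction performed locally lands on the corresponding remote data."""
    u = (px - telescope.x) / telescope.w  # normalized position in telescope
    v = (py - telescope.y) / telescope.h
    return (target.x + u * target.w, target.y + v * target.h)
```

Rendering works in the opposite direction: the remote pixels under the target are drawn, scaled, inside the telescope.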

p. 13-22

Kara, Levent Burak and Stahovich, Thomas F. (2004): Hierarchical parsing and recognition of hand-sketched diagrams. In: Proceedings of the 2004 ACM Symposium on User Interface Software and Technology 2004. pp. 13-22. Available online

A long-standing challenge in pen-based computer interaction is the ability to make sense of informal sketches. A main difficulty lies in reliably extracting and recognizing the intended set of visual objects from a continuous stream of pen strokes. Existing pen-based systems either avoid these issues altogether, thus resulting in the equivalent of a drawing program, or rely on algorithms that place unnatural constraints on the way the user draws. As one step toward alleviating these difficulties, we present an integrated sketch parsing and recognition approach designed to enable natural, fluid, sketch-based computer interaction. The techniques presented in this paper are oriented toward the domain of network diagrams. In the first step of our approach, the stream of pen strokes is examined to identify the arrows in the sketch. The identified arrows then anchor a spatial analysis which groups the uninterpreted strokes into distinct clusters, each representing a single object. Finally, a trainable shape recognizer, which is informed by the spatial analysis, is used to find the best interpretations of the clusters. Based on these concepts, we have built SimuSketch, a sketch-based interface for Matlab's Simulink software package. An evaluation of SimuSketch has indicated that even novice users can effectively utilize our system to solve real engineering problems without having to know much about the underlying recognition techniques.

© All rights reserved Kara and Stahovich and/or ACM Press

p. 137-146

Vogel, Daniel and Balakrishnan, Ravin (2004): Interactive public ambient displays: transitioning from implicit to explicit, public to personal, interaction with multiple users. In: Proceedings of the 2004 ACM Symposium on User Interface Software and Technology 2004. pp. 137-146. Available online

We develop design principles and an interaction framework for sharable, interactive public ambient displays that support the transition from implicit to explicit interaction with both public and personal information. A prototype system implementation that embodies these design principles is described. We use novel display and interaction techniques such as simple hand gestures and touch screen input for explicit interaction and contextual body orientation and position cues for implicit interaction. Techniques are presented for subtle notification, self-revealing help, privacy controls, and shared use by multiple people each in their own context. Initial user feedback is also presented, and future directions discussed.

© All rights reserved Vogel and Balakrishnan and/or ACM Press

p. 147-156

Miyaoku, Kento, Higashino, Suguru and Tonomura, Yoshinobu (2004): C-blink: a hue-difference-based light signal marker for large screen interaction via any mobile terminal. In: Proceedings of the 2004 ACM Symposium on User Interface Software and Technology 2004. pp. 147-156. Available online

To enable common mobile terminals to interact with contents shown on large screens, we propose "C-Blink", a new light signal marker method that uses the color liquid-crystal display of a mobile terminal as a visible light source. We overcome the performance limitations of such displays by developing a hue-difference-blink technique. In combination with a screen-side sensor, we describe a system that detects and receives light signal markers sent by cell phone displays. Evaluations of a prototype system confirm that C-Blink performs well under common indoor lighting. The C-Blink program can be installed in any mobile terminal that has a color display, and the installation costs are small. C-Blink is a very useful way of enabling ubiquitous large screens to become interfaces for mobile terminals.

© All rights reserved Miyaoku et al. and/or ACM Press

p. 157-160

Patel, Shwetak N., Pierce, Jeffrey S. and Abowd, Gregory D. (2004): A gesture-based authentication scheme for untrusted public terminals. In: Proceedings of the 2004 ACM Symposium on User Interface Software and Technology 2004. pp. 157-160. Available online

Powerful mobile devices with minimal I/O capabilities increase the likelihood that we will want to annex these devices to I/O resources we encounter in the local environment. This opportunistic annexing will require authentication. We present a sensor-based authentication mechanism for mobile devices that relies on physical possession instead of knowledge to set up the initial connection to a public terminal. Our solution provides a simple mechanism for shaking a device to authenticate with the public infrastructure, making few assumptions about the surrounding infrastructure while also maintaining a reasonable level of security.

© All rights reserved Patel et al. and/or ACM Press

p. 161

Greenberg, Saul (2004): Physical user interfaces: what they are and how to build them. In: Proceedings of the 2004 ACM Symposium on User Interface Software and Technology 2004. p. 161. Available online

Physical user interfaces are special purpose devices that can be situated in a real-world setting. Unlike general purpose computers, they are typically designed for particular contexts and uses. In this survey, I present an introductory tour of this new interface genre. First, I will summarize what they are by describing several design niches for these devices: ubiquitous computing, tangible media, foreground and ambient devices, collaborative devices, roomware, and physical controls. Examples will be plentiful, and will range from the playful, to the artistic, and to the serious. Second, I will introduce technologies that are suitable for software professionals who wish to prototype these physical user interfaces. The commercially available Phidgets (www.phidgets.com) are used as a case study of what is available and what can be done with them.

© All rights reserved Greenberg and/or ACM Press

p. 163

Kaye, Joseph Jofish (2004): Olfactory display. In: Proceedings of the 2004 ACM Symposium on User Interface Software and Technology 2004. p. 163. Available online

The last twenty years have seen enormous leaps forward in computers' abilities to generate sound and video. What happens when computers can produce scents on demand? In this talk, I present three approaches to this question. I first look at human olfactory processing: what is our olfactory bandwidth, and what are the limitations of our sense of smell? I then explore the use of scent to accompany other media, from historical examples like Sense-o-Rama and Aromarama, to more recent work including firefighter training systems, augmented gaming, and food and beverage applications. Finally, I look at the possibilities of olfactory output as an ambient display medium. I conclude with an overview of current computer-controlled olfactory output devices: off the shelf solutions for incorporating scent into user interface applications.

© All rights reserved Kaye and/or ACM Press

p. 165-174

Olsen Jr., Dan R., Taufer, Trent and Fails, Jerry (2004): ScreenCrayons: annotating anything. In: Proceedings of the 2004 ACM Symposium on User Interface Software and Technology 2004. pp. 165-174. Available online

ScreenCrayons is a system for collecting annotations on any type of document or visual information from any application. The basis for the system is a screen capture upon which the user can highlight the relevant portions of the image. The user can define any number of topics for organizing notes. Each topic is associated with a highlighting "crayon." In addition the user can supply annotations in digital ink or text. Algorithms are described that summarize captured images based on the highlight strokes so as to provide overviews of many annotations as well as being able to "zoom in" on particular information about a given note and the context of that note.

© All rights reserved Olsen et al. and/or ACM Press

p. 175-184

Fujima, Jun, Lunzer, Aran, Hornbaek, Kasper and Tanaka, Yuzuru (2004): Clip, connect, clone: combining application elements to build custom interfaces for information access. In: Proceedings of the 2004 ACM Symposium on User Interface Software and Technology 2004. pp. 175-184. Available online

Many applications provide a form-like interface for requesting information: the user fills in some fields, submits the form, and the application presents corresponding results. Such a procedure becomes burdensome if (1) the user must submit many different requests, for example in pursuing a trial-and-error search, (2) results from one application are to be used as inputs for another, requiring the user to transfer them by hand, or (3) the user wants to compare results, but only the results from one request can be seen at a time. We describe how users can reduce this burden by creating custom interfaces using three mechanisms: clipping of input and result elements from existing applications to form cells on a spreadsheet; connecting these cells using formulas, thus enabling result transfer between applications; and cloning cells so that multiple requests can be handled side by side. We demonstrate a prototype of these mechanisms, initially specialised for handling Web applications, and show how it lets users build new interfaces to suit their individual needs.

© All rights reserved Fujima et al. and/or ACM Press

p. 185-188

Stylos, Jeffrey, Myers, Brad A. and Faulring, Andrew (2004): Citrine: providing intelligent copy-and-paste. In: Proceedings of the 2004 ACM Symposium on User Interface Software and Technology 2004. pp. 185-188. Available online

We present Citrine, a system that extends the widespread copy-and-paste interaction technique with intelligent transformations, making it useful in more situations. Citrine uses text parsing to find the structure in copied text and allows users to paste the structured information, which might have many pieces, in a single paste operation. For example, using Citrine, a user can copy the text of a meeting request and add it to the Outlook calendar with a single paste. In applications such as Excel, users can teach Citrine by example how to copy and paste data by showing it which fields go into which columns, and can use this to copy or paste many items at a time in a user-defined manner. Citrine can be used with a wide variety of applications and types of data and can be easily extended to work with more. It currently includes parsers that recognize contact information, calendar appointments and bibliographic citations. It works with Internet Explorer, Outlook, Excel, Palm Desktop, EndNote and other applications. Citrine is available to download on the internet.

© All rights reserved Stylos et al. and/or ACM Press
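Citrine's core idea, parsing structure out of copied text so that one paste can fill many fields, can be illustrated with a toy parser. This is entirely our sketch (Citrine's actual parsers and application integration are far richer); the regexes and field names are illustrative assumptions:

```python
import re

def parse_meeting_request(text: str) -> dict:
    """Toy structured-copy step: extract the fields a calendar paste
    would need from free-form meeting text. Each pattern below is an
    illustrative guess, not Citrine's real grammar."""
    fields = {}
    m = re.search(r"with ([A-Z][a-z]+)", text)          # attendee name
    if m:
        fields["attendee"] = m.group(1)
    m = re.search(r"(\d{4}-\d{2}-\d{2})", text)          # ISO date
    if m:
        fields["date"] = m.group(1)
    m = re.search(r"at (\d{1,2}(?::\d{2})?\s?[ap]m)", text)  # time of day
    if m:
        fields["time"] = m.group(1)
    m = re.search(r"in (Room \d+)", text)                # location
    if m:
        fields["location"] = m.group(1)
    return fields
```

A single "paste" would then hand the whole dictionary to the target application, rather than making the user transfer each field by hand.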

p. 189-192

Ishak, Edward W. and Feiner, Steven K. (2004): Interacting with hidden content using content-aware free-space transparency. In: Proceedings of the 2004 ACM Symposium on User Interface Software and Technology 2004. pp. 189-192. Available online

We present content-aware free-space transparency, an approach to viewing and manipulating the otherwise hidden content of obscured windows through unimportant regions of overlapping windows. Traditional approaches to interacting with otherwise obscured content in a window system render an entire window uniformly transparent. In contrast, content-aware free-space transparency uses opaque-to-transparent gradients and image-processing filters to minimize the interference from overlapping material, based on properties of that material. By increasing the amount of simultaneously visible content and allowing basic interaction with otherwise obscured content, without modifying window geometry, we believe that free-space transparency has the potential to improve user productivity.

© All rights reserved Ishak and Feiner and/or ACM Press

p. 193-196

Dragicevic, Pierre (2004): Combining crossing-based and paper-based interaction paradigms for dragging and dropping between overlapping windows. In: Proceedings of the 2004 ACM Symposium on User Interface Software and Technology 2004. pp. 193-196. Available online

Despite novel interaction techniques proposed for virtual desktops, common yet challenging tasks remain to be investigated. Dragging and dropping between overlapping windows is one of them. The fold-and-drop technique presented here offers a natural and efficient way of performing those tasks. We show how this technique successfully builds upon several interaction paradigms previously described, while shedding new light on them.

© All rights reserved Dragicevic and/or ACM Press

p. 197-206

MacIntyre, Blair, Gandy, Maribeth, Dow, Steven and Bolter, Jay David (2004): DART: a toolkit for rapid design exploration of augmented reality experiences. In: Proceedings of the 2004 ACM Symposium on User Interface Software and Technology 2004. pp. 197-206. Available online

In this paper, we describe The Designer's Augmented Reality Toolkit (DART). DART is built on top of Macromedia Director, a widely used multimedia development environment. We summarize the most significant problems faced by designers working with AR in the real world, and discuss how DART addresses them. Most of DART is implemented in an interpreted scripting language, and can be modified by designers to suit their needs. Our work focuses on supporting early design activities, especially a rapid transition from story-boards to working experience, so that the experiential part of a design can be tested early and often. DART allows designers to specify complex relationships between the physical and virtual worlds, and supports 3D animatic actors (informal, sketch-based content) in addition to more polished content. Designers can capture and replay synchronized video and sensor data, allowing them to work off-site and to test specific parts of their experience more effectively.

© All rights reserved MacIntyre et al. and/or ACM Press

p. 207-216

Zanden, Brad Vander, Baker, David and Jin, Jing (2004): An explanation-based, visual debugger for one-way constraints. In: Proceedings of the 2004 ACM Symposium on User Interface Software and Technology 2004. pp. 207-216. Available online

This paper describes a domain-specific debugger for one-way constraint solvers. The debugger makes use of several new techniques. First, the debugger displays only a portion of the dataflow graph, called a constraint slice, that is directly related to an incorrect variable. This technique helps the debugger scale to a system containing thousands of constraints. Second, the debugger presents a visual representation of the solver's data structures and uses color encodings to highlight changes to the data structures. Finally, the debugger allows the user to point to a variable that has an unexpected value and ask the debugger to suggest reasons for the unexpected value. The debugger makes use of information gathered during the constraint satisfaction process to generate plausible suggestions. Informal testing has shown that the explanatory capability and the color coding of the constraint solver's data structures are particularly useful in locating bugs in constraint code.

© All rights reserved Zanden et al. and/or ACM Press

p. 217-226

Li, Yang, Hong, Jason I. and Landay, James A. (2004): Topiary: a tool for prototyping location-enhanced applications. In: Proceedings of the 2004 ACM Symposium on User Interface Software and Technology 2004. pp. 217-226. Available online

Location-enhanced applications use the location of people, places, and things to augment or streamline interaction. Location-enhanced applications are just starting to emerge in several different domains, and many people believe that this type of application will experience tremendous growth in the near future. However, it currently requires a high level of technical expertise to build location-enhanced applications, making it hard to iterate on designs. To address this problem we introduce Topiary, a tool for rapidly prototyping location-enhanced applications. Topiary lets designers create a map that models the location of people, places, and things; use this active map to demonstrate scenarios depicting location contexts; use these scenarios in creating storyboards that describe interaction sequences; and then run these storyboards on mobile devices, with a wizard updating the location of people and things on a separate device. We performed an informal evaluation with seven researchers and interface designers and found that they reacted positively to the concept.

© All rights reserved Li et al. and/or ACM Press

p. 227-236

Zhou, Michelle X. and Aggarwal, Vikram (2004): An optimization-based approach to dynamic data content selection in intelligent multimedia interfaces. In: Proceedings of the 2004 ACM Symposium on User Interface Software and Technology 2004. pp. 227-236. Available online

We are building a multimedia conversation system to facilitate information seeking in large and complex data spaces. To provide tailored responses to diverse user queries introduced during a conversation, we automate the generation of a system response. Here we focus on the problem of determining the data content of a response. Specifically, we develop an optimization-based approach to content selection. Compared to existing rule-based or plan-based approaches, our work offers three unique contributions. First, our approach provides a general framework that effectively addresses content selection for various interaction situations by balancing a comprehensive set of constraints (e.g., content quality and quantity constraints). Second, our method is easily extensible, since it uses feature-based metrics to systematically model selection constraints. Third, our method improves selection results by incorporating content organization and media allocation effects, which otherwise are treated separately. Preliminary studies show that our method can handle most of the user situations identified in a Wizard-of-Oz study, and achieves results similar to those produced by human designers.

© All rights reserved Zhou and Aggarwal and/or ACM Press

p. 23-32

Alvarado, Christine and Davis, Randall (2004): SketchREAD: a multi-domain sketch recognition engine. In: Proceedings of the 2004 ACM Symposium on User Interface Software and Technology 2004. pp. 23-32. Available online

We present SketchREAD, a multi-domain sketch recognition engine capable of recognizing freely hand-drawn diagrammatic sketches. Current computer sketch recognition systems are difficult to construct, and either are fragile or accomplish robustness by severely limiting the designer's drawing freedom. Our system can be applied to a variety of domains by providing structural descriptions of the shapes in that domain; no training data or programming is necessary. Robustness to the ambiguity and uncertainty inherent in complex, freely-drawn sketches is achieved through the use of context. The system uses context to guide the search for possible interpretations and uses a novel form of dynamically constructed Bayesian networks to evaluate these interpretations. This process allows the system to recover from low-level recognition errors (e.g., a line misclassified as an arc) that would otherwise result in domain level recognition errors. We evaluated SketchREAD on real sketches in two domains -- family trees and circuit diagrams -- and found that in both domains the use of context to reclassify low-level shapes significantly reduced recognition error over a baseline system that did not reinterpret low-level classifications. We also discuss the system's potential role in sketch based user interfaces.

© All rights reserved Alvarado and Davis and/or ACM Press

p. 237-246

Lyons, Kent, Skeels, Christopher, Starner, Thad, Snoeck, Cornelis M., Wong, Benjamin A. and Ashbrook, Daniel (2004): Augmenting conversations using dual-purpose speech. In: Proceedings of the 2004 ACM Symposium on User Interface Software and Technology 2004. pp. 237-246. Available online

In this paper, we explore the concept of dual-purpose speech: speech that is socially appropriate in the context of a human-to-human conversation which also provides meaningful input to a computer. We motivate the use of dual-purpose speech and explore issues of privacy and technological challenges related to mobile speech recognition. We present three applications that utilize dual-purpose speech to assist a user in conversational tasks: the Calendar Navigator Agent, DialogTabs, and Speech Courier. The Calendar Navigator Agent navigates a user's calendar based on socially appropriate speech used while scheduling appointments. DialogTabs allows a user to postpone cognitive processing of conversational material by providing short-term capture of transient information. Finally, Speech Courier allows asynchronous delivery of relevant conversational information to a third party.

© All rights reserved Lyons et al. and/or ACM Press

p. 247-256

Matthews, Tara, Dey, Anind K., Mankoff, Jennifer, Carter, Scott and Rattenbury, Tye (2004): A toolkit for managing user attention in peripheral displays. In: Proceedings of the 2004 ACM Symposium on User Interface Software and Technology 2004. pp. 247-256. Available online

Traditionally, computer interfaces have been confined to conventional displays and focused activities. However, as displays become embedded throughout our environment and daily lives, increasing numbers of them must operate on the periphery of our attention. Peripheral displays can allow a person to be aware of information while she is attending to some other primary task or activity. We present the Peripheral Displays Toolkit (PTK), a toolkit that provides structured support for managing user attention in the development of peripheral displays. Our goal is to enable designers to explore different approaches to managing user attention. The PTK supports three issues specific to conveying information on the periphery of human attention. These issues are abstraction of raw input, rules for assigning notification levels to input, and transitions for updating a display when input arrives. Our contribution is the investigation of issues specific to attention in peripheral display design and a toolkit that encapsulates support for these issues. We describe our toolkit architecture and present five sample peripheral displays demonstrating our toolkit\'s capabilities.

© All rights reserved Matthews et al. and/or ACM Press

p. 257-266

Huot, Stephane, Dumas, Cedric, Dragicevic, Pierre, Fekete, Jean-Daniel and Hegron, Gerard (2004): The MaggLite post-WIMP toolkit: draw it, connect it and run it. In: Proceedings of the 2004 ACM Symposium on User Interface Software and Technology 2004. pp. 257-266. Available online

This article presents MaggLite, a toolkit and sketch-based interface builder allowing fast and interactive design of post-WIMP user interfaces. MaggLite improves design of advanced UIs thanks to its novel mixed-graph architecture that dynamically combines scene-graphs with interaction-graphs. Scene-graphs provide mechanisms to describe and produce rich graphical effects, whereas interaction-graphs allow expressive and fine-grained description of advanced interaction techniques and behaviors such as multiple pointers management, toolglasses, bimanual interaction, gesture, and speech recognition. Both graphs can be built interactively by sketching the UI and specifying the interaction using a dataflow visual language. Communication between the two graphs is managed at runtime by components we call Interaction Access Points. While developers can extend the toolkit by refining built-in generic mechanisms, UI designers can quickly and interactively design, prototype and test advanced user interfaces by applying the MaggLite principle: "draw it, connect it and run it".

© All rights reserved Huot et al. and/or ACM Press

p. 267-276

Chatty, Stephane, Sire, Stephane, Vinot, Jean-Luc, Lecoanet, Patrick, Lemort, Alexandre and Mertz, Christophe (2004): Revisiting visual interface programming: creating GUI tools for designers and programmers. In: Proceedings of the 2004 ACM Symposium on User Interface Software and Technology 2004. pp. 267-276. Available online

Involving graphic designers in the large-scale development of user interfaces requires tools that provide more graphical flexibility and support efficient software processes. These requirements were analysed and used in the design of the TkZinc graphical library and the IntuiKit interface design environment. More flexibility is obtained through a wider palette of visual techniques and support for iterative construction of images, composition and parametric displays. More efficient processes are obtained with the use of the SVG standard to import graphics, support for linking graphics and behaviour, and a unifying model-driven architecture. We describe the corresponding features of our tools, and show their use in the development of an application for airports. Benefits include a wider access to high quality visual interfaces for specialised applications, and shorter prototyping and development cycles for multidisciplinary teams.

© All rights reserved Chatty et al. and/or ACM Press

p. 277-285

Chi, Ed H., Song, Jin and Corbin, Greg (2004): "Killer App" of wearable computing: wireless force sensing body protectors for martial arts. In: Proceedings of the 2004 ACM Symposium on User Interface Software and Technology 2004. pp. 277-285. Available online

Ubiquitous and Wearable Computing both have the goal of pushing the computer into the background, supporting all kinds of human activities. Application areas include everyday environments (e.g. clothing, home, office), new forms of creative learning via physical/virtual objects, and new tools for interactive design. In this paper, we thrust ubiquitous computing into the extremely hostile environment of the sparring ring of a martial arts competition. Our system uses piezoelectric force sensors that transmit signals wirelessly to enable the detection of when a significant impact has been delivered to a competitor's body. The objective is to support the judges in scoring the sparring matches accurately, while preserving the goal of merging and blending into the background of the activity. The system therefore must take into account the rules of the game, be responsive in real time and asynchronously, and often cope with untrained operators. We present a pilot study of the finished prototype and detail our experience.

© All rights reserved Chi et al. and/or ACM Press

p. 287-290

Hudson, Scott E. (2004): Using light emitting diode arrays as touch-sensitive input and output devices. In: Proceedings of the 2004 ACM Symposium on User Interface Software and Technology 2004. pp. 287-290. Available online

Light Emitting Diodes (LEDs) offer long life, low cost, efficiency, brightness, and a full range of colors. Because of these properties, they are widely used for simple displays in electronic devices. A previously characterized, but little known property of LEDs allows them to be used as photo sensors. In this paper, we show how this capability can be used to turn unmodified, off the shelf, LED arrays into touch sensitive input devices (while still remaining capable of producing output). The technique is simple and requires little or no extra hardware - in some cases operating with the same micro-controller based circuitry normally used to produce output, requiring only software changes. We will describe a simple hybrid input/output device prototype implemented with this technique, and discuss the design opportunities that this type of device opens up.

© All rights reserved Hudson and/or ACM Press

p. 291-294

Lee, Johnny C., Dietz, Paul H., Leigh, Darren, Yerazunis, William S. and Hudson, Scott E. (2004): Haptic pen: a tactile feedback stylus for touch screens. In: Proceedings of the 2004 ACM Symposium on User Interface Software and Technology 2004. pp. 291-294. Available online

In this paper we present a system for providing tactile feedback for stylus-based touch-screen displays. The Haptic Pen is a simple low-cost device that provides individualized tactile feedback for multiple simultaneous users and can operate on large touch screens as well as ordinary surfaces. A pressure-sensitive stylus is combined with a small solenoid to generate a wide range of tactile sensations. The physical sensations generated by the Haptic pen can be used to enhance our existing interaction with graphical user interfaces as well as to help make modern computing systems more accessible to those with visual or motor impairments.

© All rights reserved Lee et al. and/or ACM Press

p. 3-12

Apitz, Georg and Guimbretiere, Francois (2004): CrossY: a crossing-based drawing application. In: Proceedings of the 2004 ACM Symposium on User Interface Software and Technology 2004. pp. 3-12. Available online

We introduce CrossY, a simple drawing application developed as a benchmark to demonstrate the feasibility of goal crossing as the basis for a graphical user interface. We show that crossing is not only as expressive as the current point-and-click interface, but also offers more flexibility in interaction design. In particular, crossing encourages the fluid composition of commands which supports the development of more fluid interfaces. While crossing was previously identified as a potential substitute for the classic point-and-click interaction, this work is the first to report on the practical aspects of implementing an interface based on goal crossing as the fundamental building block.

© All rights reserved Apitz and Guimbretiere and/or ACM Press

p. 33-42

Zhao, Shengdong and Balakrishnan, Ravin (2004): Simple vs. compound mark hierarchical marking menus. In: Proceedings of the 2004 ACM Symposium on User Interface Software and Technology 2004. pp. 33-42. Available online

We present a variant of hierarchical marking menus where items are selected using a series of inflection-free simple marks, rather than the single \"zig-zag\" compound mark used in the traditional design. Theoretical analysis indicates that this simple mark approach has the potential to significantly increase the number of items in a marking menu that can be selected efficiently and accurately. A user experiment is presented that compares the simple and compound mark techniques. Results show that the simple mark technique allows for significantly more accurate and faster menu selections overall, but most importantly also in menus with a large number of items where performance of the compound mark technique is particularly poor. The simple mark technique also requires significantly less physical input space to perform the selections, making it particularly suitable for small footprint pen-based input devices. Visual design alternatives are also discussed.

© All rights reserved Zhao and Balakrishnan and/or ACM Press
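
The capacity argument in the abstract above — one inflection-free mark per menu level, so b directions at depth d address b^d items — can be sketched with a small decoder. This is an illustrative reconstruction, not the authors' code; `decode_marks` and its angle-to-sector rounding are assumptions:

```python
def decode_marks(angles, breadth=8):
    """Decode a series of simple directional marks (one angle per menu
    level, in degrees) into a leaf-item index. With 8 directions per
    level, d marks address 8**d items."""
    item = 0
    for a in angles:
        # Snap the mark to the nearest of `breadth` equally spaced sectors.
        sector = round((a % 360) / (360 / breadth)) % breadth
        item = item * breadth + sector
    return item
```

For example, two marks drawn east then south select one of 64 items — the regime where, per the abstract, simple marks remain fast and accurate while compound zig-zag marks degrade.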

p. 43-52

Kristensson, Per-Ola and Zhai, Shumin (2004): SHARK²: a large vocabulary shorthand writing system for pen-based computers. In: Proceedings of the 2004 ACM Symposium on User Interface Software and Technology 2004. pp. 43-52. Available online

Zhai and Kristensson (2003) presented a method of speed-writing for pen-based computing which utilizes gesturing on a stylus keyboard for familiar words and tapping for others. In SHARK², we eliminated the necessity to alternate between the two modes of writing, allowing any word in a large vocabulary (e.g. 10,000-20,000 words) to be entered as a shorthand gesture. This new paradigm supports a gradual and seamless transition from visually guided tracing to recall-based gesturing. Based on the use characteristics and human performance observations, we designed and implemented the architecture, algorithms and interfaces of a high-capacity multi-channel pen-gesture recognition system. The system's key components and performance are also reported.

© All rights reserved Kristensson and Zhai and/or ACM Press

p. 53-56

Smith, G. M. and Schraefel, M. C. (2004): The radial scroll tool: scrolling support for stylus- or touch-based document navigation. In: Proceedings of the 2004 ACM Symposium on User Interface Software and Technology 2004. pp. 53-56. Available online

We present radial scroll, an interface widget to support scrolling, particularly on either small or large scale touch displays. Instead of dragging an elevator in a scroll bar, or using repetitive key presses to page up or down, users gesture anywhere on the document surface: clockwise gestures advance the document; counter-clockwise gestures reverse it. We describe our prototype implementation and discuss the results of an initial user study.

© All rights reserved Smith and Schraefel and/or ACM Press

p. 57-60

Moscovich, Tomer and Hughes, John F. (2004): Navigating documents with the virtual scroll ring. In: Proceedings of the 2004 ACM Symposium on User Interface Software and Technology 2004. pp. 57-60. Available online

We present a technique for scrolling through documents that is simple to implement and requires no special hardware. This is accomplished by simulating a hardware scroll ring -- a device that maps circular finger motion into vertical scrolling. The technique performs at least as well as a mouse wheel for medium and long distances, and is preferred by users. It can be particularly useful in portable devices where screen-space and space for peripherals is at a premium.

© All rights reserved Moscovich and Hughes and/or ACM Press
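
The mapping the abstract describes — circular finger motion converted to vertical scrolling — reduces to tracking the change in angle around a centre point. A minimal sketch, assuming screen coordinates with y growing downward and a hypothetical gain of 30° of arc per scrolled line:

```python
import math

class VirtualScrollRing:
    """Sketch of a software scroll ring: circular pointer motion around
    a centre is mapped to vertical scrolling. Clockwise motion scrolls
    down; counter-clockwise motion scrolls up."""

    def __init__(self, center, degrees_per_line=30.0):
        self.cx, self.cy = center
        self.degrees_per_line = degrees_per_line  # gain: arc per line
        self.prev_angle = None

    def update(self, x, y):
        """Feed one pointer sample; return lines to scroll (signed)."""
        # With screen y growing downward, this angle increases clockwise.
        angle = math.degrees(math.atan2(y - self.cy, x - self.cx))
        if self.prev_angle is None:
            self.prev_angle = angle
            return 0.0
        delta = angle - self.prev_angle
        # Unwrap the +/-180 degree discontinuity.
        if delta > 180:
            delta -= 360
        elif delta < -180:
            delta += 360
        self.prev_angle = angle
        return delta / self.degrees_per_line
```

Successive quarter-turns clockwise each yield three lines of downward scroll; reversing direction flips the sign, so no mode switch or dedicated screen real estate is needed.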

p. 61-70

Grossman, Tovi, Wigdor, Daniel and Balakrishnan, Ravin (2004): Multi-finger gestural interaction with 3d volumetric displays. In: Proceedings of the 2004 ACM Symposium on User Interface Software and Technology 2004. pp. 61-70. Available online

Volumetric displays provide interesting opportunities and challenges for 3D interaction and visualization, particularly when used in a highly interactive manner. We explore this area through the design and implementation of techniques for interactive direct manipulation of objects with a 3D volumetric display. Motion tracking of the user's fingers provides for direct gestural interaction with the virtual objects, through manipulations on and around the display's hemispheric enclosure. Our techniques leverage the unique features of volumetric displays, including a 360° viewing volume that enables manipulation from any viewpoint around the display, as well as natural and accurate perception of true depth information in the displayed 3D scene. We demonstrate our techniques within a prototype 3D geometric model building application.

© All rights reserved Grossman et al. and/or ACM Press

p. 71-80

Carpendale, Sheelagh, Light, John and Pattison, Eric (2004): Achieving higher magnification in context. In: Proceedings of the 2004 ACM Symposium on User Interface Software and Technology 2004. pp. 71-80. Available online

The difficulty of accessing information details while preserving context has generated many different focus-in-context techniques. A common limitation of focus-in-context techniques is their inability to work well at high magnification. We present a set of improvements that make high magnification in context more feasible. We demonstrate new distortion functions that effectively integrate high magnification within its context. Finally, we show how lenses can be used on top of other lenses, effectively multiplying their magnification power in the same manner that a magnifying glass applied on top of another causes multiplicative magnification. The combined effect is to raise feasible detail-in-context magnification factors from less than 8 to more than 40.

© All rights reserved Carpendale et al. and/or ACM Press
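
The multiplicative composition claim — a lens over a lens multiplies magnification like stacked magnifying glasses — can be illustrated with a toy 1-D piecewise-linear lens (an assumed stand-in for the paper's distortion functions, not the authors' code):

```python
def lens(x, focus=0.5, radius=0.1, mag=3.0):
    """1-D detail-in-context lens on [0, 1]: points within `radius` of
    `focus` are magnified by `mag`; the surrounding context is linearly
    compressed so the endpoints 0 and 1 stay fixed."""
    out_r = mag * radius  # half-width of the magnified focus region
    assert 0.0 < focus - out_r and focus + out_r < 1.0, "lens too strong"
    if abs(x - focus) <= radius:
        # Focus region: uniform magnification around the focus point.
        return focus + mag * (x - focus)
    if x < focus - radius:
        # Left context: [0, focus-radius] compressed onto [0, focus-out_r].
        return x * (focus - out_r) / (focus - radius)
    # Right context: [focus+radius, 1] compressed onto [focus+out_r, 1].
    return 1.0 - (1.0 - x) * (1.0 - focus - out_r) / (1.0 - focus - radius)
```

Applying the 3× lens twice yields a 9× slope at the focus while the endpoints stay pinned — the multiplicative detail-in-context effect the paper exploits to move beyond magnification factors of 40.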

p. 81-90

Bae, Seok-Hyung, Kobayashi, Takahiro, Kijima, Ryugo and Kim, Won-Sup (2004): Tangible NURBS-curve manipulation techniques using graspable handles on a large display. In: Proceedings of the 2004 ACM Symposium on User Interface Software and Technology 2004. pp. 81-90. Available online

This paper presents tangible interaction techniques for fine-tuning one-to-one scale NURBS curves on a large display for automotive design. We developed a new graspable handle with a transparent groove that allows designers to manipulate virtual curves on a display screen directly. The use of the proposed handle leads naturally to a rich vocabulary of terms describing interaction techniques that reflect existing shape styling methods. A user test raised various issues related to the graspable user interface, two-handed input, and large-display interaction.

© All rights reserved Bae et al. and/or ACM Press

p. 91-94

Baudisch, Patrick, Xie, Xing, Wang, Chong and Ma, Wei-Ying (2004): Collapse-to-zoom: viewing web pages on small screen devices by interactively removing irrelevant content. In: Proceedings of the 2004 ACM Symposium on User Interface Software and Technology 2004. pp. 91-94. Available online

Overview visualizations for small-screen web browsers were designed to provide users with visual context and to allow them to rapidly zoom in on tiles of relevant content. Given that content in the overview is reduced, however, users are often unable to tell which tiles hold the relevant material, which can force them to adopt a time-consuming hunt-and-peck strategy. Collapse-to-zoom addresses this issue by offering an alternative exploration strategy. In addition to allowing users to zoom into relevant areas, collapse-to-zoom allows users to collapse areas deemed irrelevant, such as columns containing menus, archive material, or advertising. Collapsing content causes all remaining content to expand in size, revealing more detail and increasing the user's chance of identifying relevant content. Collapse-to-zoom navigation is based on a hybrid between a marquee selection tool and a marking menu, called a marquee menu. It offers four commands for collapsing content areas at different granularities and for switching to a full-size reading view of what is left of the page.

© All rights reserved Baudisch et al. and/or ACM Press

p. 95-98

Singh, Karan, Grimm, Cindy and Sudarsanam, Nisha (2004): The IBar: a perspective-based camera widget. In: Proceedings of the 2004 ACM Symposium on User Interface Software and Technology 2004. pp. 95-98. Available online

We present a new screen space widget, the IBar, for effective camera control in 3D graphics environments. The IBar provides a compelling interface for controlling scene perspective based on the artistic concept of vanishing points. Various handles on the widget manipulate multiple camera parameters simultaneously to create a single perceived projection change. For example, changing just the perspective distortion is accomplished by simultaneously decreasing the camera\'s distance to the scene while increasing focal length. We demonstrate that the IBar is easier to learn for novice users and improves their understanding of camera perspective.

© All rights reserved Singh et al. and/or ACM Press

p. 99-107

Kim, Jiwon, Seitz, Steven M. and Agrawala, Maneesh (2004): Video-based document tracking: unifying your physical and electronic desktops. In: Proceedings of the 2004 ACM Symposium on User Interface Software and Technology 2004. pp. 99-107. Available online

This paper presents an approach for tracking paper documents on the desk over time and automatically linking them to the corresponding electronic documents using an overhead video camera. We demonstrate our system in the context of two scenarios, paper tracking and photo sorting. In the paper tracking scenario, the system tracks changes in the stacks of printed documents and books on the desk and builds a complete representation of the spatial structure of the desktop. When users want to find a printed document buried in the stacks, they can query the system based on appearance, keywords, or access time. The system also provides a remote desktop interface for directly browsing the physical desktop from a remote location. In the photo sorting scenario, users sort printed photographs into physical stacks on the desk. The system automatically recognizes the photographs and organizes the corresponding digital photographs into separate folders according to the physical arrangement. Our framework provides a way to unify the physical and electronic desktops without the need for a specialized physical infrastructure except for a video camera.

© All rights reserved Kim et al. and/or ACM Press



