Publication statistics

Publication period: 1992-2011
Publication count: 91
Number of co-authors: 137



Co-authors

Number of publications with 3 most frequent co-authors:

Jason I. Hong: 16
Yang Li: 10
Susumu Harada: 9

Productive colleagues

James A. Landay's 3 most productive colleagues in number of publications:

Brad A. Myers: 154
Gregory D. Abowd: 116
Ravin Balakrishnan: 108


James A. Landay

Has also published under the name of:
"James Landay"

Personal Homepage:
cs.washington.edu/homes/landay/

I am a Professor in Computer Science & Engineering at the University of Washington, specializing in human-computer interaction. My current research interests include Automated Usability Evaluation, Demonstrational Interfaces, Ubiquitous Computing, User Interface Design Tools, and Web Design. I am also an Adjunct Associate Professor in both Human Centered Design & Engineering and the Information School.

I was previously the Laboratory Director of Intel Labs Seattle, a university-affiliated research lab exploring new usage models, applications, and technology for ubiquitous computing. I am a founding member of the University of Washington Design:Use:Build (DUB) Center, a cross-campus interdisciplinary group of HCI and Design researchers.

I received my B.S. in EECS from UC Berkeley in 1990 and my M.S. and Ph.D. in CS from Carnegie Mellon University in 1993 and 1996, respectively. My Ph.D. dissertation was the first to demonstrate the use of sketching in user interface design tools. I was also the chief scientist and co-founder of NetRaker, which was acquired by KeyNote Systems in 2004. From 1997 through 2003, I was a tenured professor in EECS at UC Berkeley. My curriculum vitae gives a more complete picture of my record.


Publications by James A. Landay (bibliography)

2011

Edge, Darren, Searle, Elly, Chiu, Kevin, Zhao, Jing and Landay, James A. (2011): MicroMandarin: mobile language learning in context. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 3169-3178.

Learning a new language is hard, but learning to use it confidently in conversations with native speakers is even harder. From our field research with language learners, with support from Cognitive Psychology and Second Language Acquisition, we argue for the value of contextual microlearning in the many breaks spread across different places and throughout the day. We present a mobile application that supports such microlearning by leveraging the location-based service Foursquare to automatically provide contextually relevant content in the world's major cities. In an evaluation of Mandarin Chinese learning, a four-week, 23-user study spanning Beijing and Shanghai compared this contextual system to a system based on word frequency. Study sessions with the contextual version lasted half as long but occurred in twice as many places as sessions with the frequency version, suggesting a complementary relationship between the two approaches.

© All rights reserved Edge et al. and/or their publisher

2010

Li, Yang, Cao, Xiang, Everitt, Katherine, Dixon, Morgan and Landay, James A. (2010): FrameWire: a tool for automatically extracting interaction logic from paper prototyping tests. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 503-512.

Paper prototyping offers unique affordances for interface design. However, due to its spontaneous nature and the limitations of paper, it is difficult to distill and communicate a paper prototype design and its user test findings to a wide audience. To address these issues, we created FrameWire, a computer vision-based system that automatically extracts interaction flows from the video recording of paper prototype user tests. Based on the extracted logic, FrameWire offers two distinct benefits for designers: a structural view of the video recording that allows a designer or a stakeholder to easily distill and understand the design concept and user interaction behaviors, and automatic generation of interactive HTML-based prototypes that can be easily tested with a larger group of users as well as "walked through" by other stakeholders. The extraction is achieved by automatically aggregating video frame sequences into an interaction flow graph based on frame similarities and a designer-guided clustering process. The results of evaluating FrameWire with realistic paper prototyping tests show that our extraction approach is feasible and FrameWire is a promising tool for enhancing existing prototyping practice.

© All rights reserved Li et al. and/or their publisher

Saponas, T. Scott, Tan, Desney S., Morris, Dan, Turner, Jim and Landay, James A. (2010): Making muscle-computer interfaces more practical. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 851-854.

Recent work in muscle sensing has demonstrated the potential of human-computer interfaces based on finger gestures sensed from electrodes on the upper forearm. While this approach holds much potential, previous work has given little attention to sensing finger gestures in the context of three important real-world requirements: sensing hardware suitable for mobile and off-desktop environments, electrodes that can be put on quickly without adhesives or gel, and gesture recognition techniques that require no new training or calibration after re-donning a muscle-sensing armband. In this note, we describe our approach to overcoming these challenges, and we demonstrate average classification accuracies as high as 86% for pinching with one of three fingers in a two-session, eight-person experiment.

© All rights reserved Saponas et al. and/or their publisher

Froehlich, Jon, Findlater, Leah and Landay, James A. (2010): The design of eco-feedback technology. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 1999-2008.

Eco-feedback technology provides feedback on individual or group behaviors with a goal of reducing environmental impact. The history of eco-feedback extends back more than 40 years to the origins of environmental psychology. Despite its stated purpose, few HCI eco-feedback studies have attempted to measure behavior change. This leads to two overarching questions: (1) what can HCI learn from environmental psychology and (2) what role should HCI have in designing and evaluating eco-feedback technology? To help answer these questions, this paper conducts a comparative survey of eco-feedback technology, including 89 papers from environmental psychology and 44 papers from the HCI and UbiComp literature. We also provide an overview of predominant models of proenvironmental behaviors and a summary of key motivation techniques to promote this behavior.

© All rights reserved Froehlich et al. and/or their publisher

Patel, Kayur, Bancroft, Naomi, Drucker, Steven M., Fogarty, James, Ko, Andrew J. and Landay, James A. (2010): Gestalt: integrated support for implementation and analysis in machine learning. In: Proceedings of the 2010 ACM Symposium on User Interface Software and Technology 2010. pp. 37-46.

We present Gestalt, a development environment designed to support the process of applying machine learning. While traditional programming environments focus on source code, we explicitly support both code and data. Gestalt allows developers to implement a classification pipeline, analyze data as it moves through that pipeline, and easily transition between implementation and analysis. An experiment shows this significantly improves the ability of developers to find and fix bugs in machine learning systems. Our discussion of Gestalt and our experimental observations provide new insight into general-purpose support for the machine learning process.

© All rights reserved Patel et al. and/or their publisher

Landay, James A. (2010): Activity-based Ubicomp: a new research basis for the future of human-computer interaction. In: Proceedings of the 2010 International Conference on Multimodal Interfaces 2010. p. 28.

Ubiquitous computing (Ubicomp) is bringing computing off the desktop and into our everyday lives. For example, an interactive display can be used by the family of an elder to stay in constant touch with the elder's everyday wellbeing, or by a group to visualize and share information about exercise and fitness. Mobile sensors, networks, and displays are proliferating worldwide in mobile phones, enabling this new wave of applications that are intimate with the user's physical world. In addition to being ubiquitous, these applications share a focus on high-level activities, which are long-term social processes that take place in multiple environments and are supported by complex computation and inference of sensor data. However, the promise of this Activity-based Ubicomp is unfulfilled, primarily due to methodological, design, and tool limitations in how we understand the dynamics of activities. The traditional cognitive psychology basis for human-computer interaction, which focuses on our short term interactions with technological artifacts, is insufficient for achieving the promise of Activity-based Ubicomp. We are developing design methodologies and tools, as well as activity recognition technologies, to both demonstrate the potential of Activity-based Ubicomp as well as to support designers in fruitfully creating these types of applications.

© All rights reserved Landay and/or ACM Press

2009

Harada, Susumu, Wobbrock, Jacob O., Malkin, Jonathan, Bilmes, Jeff A. and Landay, James A. (2009): Longitudinal study of people learning to use continuous voice-based cursor control. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 347-356.

We conducted a 2.5 week longitudinal study with five motor impaired (MI) and four non-impaired (NMI) participants, in which they learned to use the Vocal Joystick, a voice-based user interface control system. We found that the participants were able to learn the mapping between the vowel sounds and directions used by the Vocal Joystick, and showed marked improvement in their target acquisition performance. At the end of the ten session period, the NMI group reached the same level of performance as the previously measured "expert" Vocal Joystick performance, and the MI group was able to reach 70% of that level. Two of the MI participants were also able to approach the performance of their preferred device, a touchpad. We report on a number of issues that can inform the development of further enhancements in the realm of voice-driven computer control.

© All rights reserved Harada et al. and/or ACM Press

Consolvo, Sunny, McDonald, David W. and Landay, James A. (2009): Theory-driven design strategies for technologies that support behavior change in everyday life. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 405-414.

In this paper, we propose design strategies for persuasive technologies that help people who want to change their everyday behaviors. Our strategies use theory and prior work to substantially extend a set of existing design goals. Our extensions specifically account for social characteristics and other tactics that should be supported by persuasive technologies that target long-term discretionary use throughout everyday life. We used these strategies to design and build a system that encourages people to lead a physically active lifestyle. Results from two field studies of the system -- a three-week trial and a three-month experiment -- have shown that the system was successful at helping people maintain a more physically active lifestyle and validate the usefulness of the strategies.

© All rights reserved Consolvo et al. and/or ACM Press

Froehlich, Jon, Dillahunt, Tawanna, Klasnja, Predrag, Mankoff, Jennifer, Consolvo, Sunny, Harrison, Beverly and Landay, James A. (2009): UbiGreen: investigating a mobile tool for tracking and supporting green transportation habits. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 1043-1052.

The greatest contributor of CO2 emissions in the average American household is personal transportation. Because transportation is inherently a mobile activity, mobile devices are well suited to sense and provide feedback about these activities. In this paper, we explore the use of personal ambient displays on mobile phones to give users feedback about sensed and self-reported transportation behaviors. We first present results from a set of formative studies exploring our respondents' existing transportation routines, willingness to engage in and maintain green transportation behavior, and reactions to early mobile phone "green" application design concepts. We then describe the results of a 3-week field study (N=13) of the UbiGreen Transportation Display prototype, a mobile phone application that semi-automatically senses and reveals information about transportation behavior. Our contributions include a working system for semi-automatically tracking transit activity, a visual design capable of engaging users in the goal of increasing green transportation, and the results of our studies, which have implications for the design of future green applications.

© All rights reserved Froehlich et al. and/or ACM Press

Toomim, Michael, Drucker, Steven M., Dontcheva, Mira, Rahimi, Ali, Thomson, Blake and Landay, James A. (2009): Attaching UI enhancements to websites with end users. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 1859-1868.

We present reform, a step toward write-once apply-anywhere user interface enhancements. The reform system envisions roles for both programmers and end users in enhancing existing websites to support new goals. First, a programmer authors a traditional mashup or browser extension, but they do not write a web scraper. Instead they use reform, which allows novice end users to attach the enhancement to their favorite sites with a scraping by-example interface. reform makes enhancements easier to program while also carrying the benefit that end users can apply the enhancements to any number of new websites. We present reform's architecture, user interface, interactive by-example extraction algorithm for novices, and evaluation, along with five example reform enabled enhancements.

© All rights reserved Toomim et al. and/or ACM Press

Saponas, T. Scott, Tan, Desney S., Morris, Dan, Balakrishnan, Ravin, Turner, Jim and Landay, James A. (2009): Enabling always-available input with muscle-computer interfaces. In: Proceedings of the ACM Symposium on User Interface Software and Technology 2009. pp. 167-176.

Previous work has demonstrated the viability of applying offline analysis to interpret forearm electromyography (EMG) and classify finger gestures on a physical surface. We extend those results to bring us closer to using muscle-computer interfaces for always-available input in real-world applications. We leverage existing taxonomies of natural human grips to develop a gesture set covering interaction in free space even when hands are busy with other objects. We present a system that classifies these gestures in real-time and we introduce a bi-manual paradigm that enables use in interactive systems. We report experimental results demonstrating four-finger classification accuracies averaging 79% for pinching, 85% while holding a travel mug, and 88% when carrying a weighted bag. We further show generalizability across different arm postures and explore the tradeoffs of providing real-time visual feedback.

© All rights reserved Saponas et al. and/or their publisher

Paradiso, Joseph A. and Landay, James A. (2009): Guest Editors' Introduction: Cross-Reality Environments. In IEEE Pervasive Computing, 8 (3) pp. 14-15.

Landay, James A., Joseph, Anthony D. and Reynolds, Franklin (2009): Guest Editors' Introduction: Smarter Phones. In IEEE Pervasive Computing, 8 (2) pp. 12-13.

Consolvo, Sunny, Klasnja, Predrag, McDonald, David W. and Landay, James A. (2009): Goal-setting considerations for persuasive technologies that encourage physical activity. In: Proceedings of the 2009 International Conference on Persuasive Technology 2009. p. 8.

Goal-setting has been shown to be an effective strategy for changing behavior; therefore employing goal-setting in persuasive technologies could be an effective way to encourage behavior change. In our work, we are developing persuasive technologies to encourage individuals to live healthy lifestyles with a focus on being physically active. As part of our investigations, we have explored individuals' reactions to goal-setting, specifically goal sources (i.e., who should set the individual's goal) and goal timeframes (i.e., over what time period should an individual have to achieve the goal). In this paper, we present our findings related to various approaches for implementing goal-setting in a persuasive technology to encourage physical activity.

© All rights reserved Consolvo et al. and/or ACM Press

2008

Toomim, Michael, Zhang, Xianhang, Fogarty, James and Landay, James A. (2008): Access control by testing for shared knowledge. In: Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems April 5-10, 2008. pp. 193-196.

Controlling the privacy of online content is difficult and often confusing. We present a social access control where users devise simple questions testing shared knowledge instead of constructing authenticated accounts and explicit access control rules. We implemented a prototype and conducted studies to explore the context of photo sharing security, gauge the difficulty of creating shared knowledge questions, measure their resilience to adversarial attack, and evaluate user ability to understand and predict this resilience.

© All rights reserved Toomim et al. and/or ACM Press

Davis, Richard C., Colwell, Brien and Landay, James A. (2008): K-sketch: a 'kinetic' sketch pad for novice animators. In: Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems April 5-10, 2008. pp. 413-422.

Because most animation tools are complex and time-consuming to learn and use, most animations today are created by experts. To help novices create a wide range of animations quickly, we have developed a general-purpose, informal, 2D animation sketching system called K-Sketch. Field studies investigating the needs of animators and would-be animators helped us collect a library of usage scenarios for our tool. A novel optimization technique enabled us to design an interface that is simultaneously fast, simple, and powerful. The result is a pen-based system that relies on users' intuitive sense of space and time while still supporting a wide range of uses. In a laboratory experiment that compared K-Sketch to a more formal animation tool (PowerPoint), participants worked three times faster, needed half the learning time, and had significantly lower cognitive load with K-Sketch.

© All rights reserved Davis et al. and/or ACM Press

Patel, Kayur, Fogarty, James, Landay, James A. and Harrison, Beverly L. (2008): Investigating statistical machine learning as a tool for software development. In: Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems April 5-10, 2008. pp. 667-676.

As statistical machine learning algorithms and techniques continue to mature, many researchers and developers see statistical machine learning not only as a topic of expert study, but also as a tool for software development. Extensive prior work has studied software development, but little prior work has studied software developers applying statistical machine learning. This paper presents interviews of eleven researchers experienced in applying statistical machine learning algorithms and techniques to human-computer interaction problems, as well as a study of ten participants working during a five-hour study to apply statistical machine learning algorithms and techniques to a realistic problem. We distill three related categories of difficulties that arise in applying statistical machine learning as a tool for software development: (1) difficulty pursuing statistical machine learning as an iterative and exploratory process, (2) difficulty understanding relationships between data and the behavior of statistical machine learning algorithms, and (3) difficulty evaluating the performance of statistical machine learning algorithms and techniques in the context of applications. This paper provides important new insight into these difficulties and the need for development tools that better support the application of statistical machine learning.

© All rights reserved Patel et al. and/or ACM Press

Li, Yang and Landay, James A. (2008): Activity-based prototyping of ubicomp applications for long-lived, everyday human activities. In: Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems April 5-10, 2008. pp. 1303-1312.

We designed an activity-based prototyping process realized in the ActivityDesigner system that combines the theoretical framework of Activity-Centered Design with traditional iterative design. This process allows designers to leverage human activities as first class objects for design and is supported in ActivityDesigner by three novel features. First, this tool allows designers to model activities based on concrete scenarios collected from everyday lives. The models form a context for design and computational constructs for creating functional prototypes. Second, it allows designers to prototype interaction behaviors based on activity streams spanning time. Third, it allows designers to easily test these prototypes with real users continuously, in situ. We have garnered positive feedback from a series of laboratory user studies and several case studies in which ActivityDesigner was used in realistic design situations. ActivityDesigner was able to effectively streamline a ubicomp design process, and it allowed creating realistic ubicomp application prototypes at a low cost and testing them in everyday lives over an extended period.

© All rights reserved Li and Landay and/or ACM Press

Lin, James and Landay, James A. (2008): Employing patterns and layers for early-stage design and prototyping of cross-device user interfaces. In: Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems April 5-10, 2008. pp. 1313-1322.

Designing UIs that run across multiple devices is increasingly important. To address this, we have created a prototyping tool called Damask, which targets web UIs that run on PCs and mobile phones, and prompt-and-response style voice UIs. In Damask, designers sketch out their design for one device while using design patterns to specify higher-level concepts within their design. Damask's patterns include pre-built UI fragments that are already optimized for each device. Designers also use layers to specify which UI parts are common across devices and which are specific to one device. Damask uses the sketches and patterns to generate designs for the other devices, which the designers can refine. A study performed with 12 professional UI designers found that, in the early stages, designers using patterns and layers in Damask created cross-device UIs that are rated at least as good as those created without patterns and layers, without more time.

© All rights reserved Lin and Landay and/or ACM Press

Cited in the following chapter:

User Interface Design Adaptation: [/encyclopedia/user_interface_design_adaptation.html]

Consolvo, Sunny, McDonald, David W., Toscos, Tammy, Chen, Mike Y., Froehlich, Jon, Harrison, Beverly L., Klasnja, Predrag, LaMarca, Anthony, LeGrand, Louis, Libby, Ryan, Smith, Ian and Landay, James A. (2008): Activity sensing in the wild: a field trial of ubifit garden. In: Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems April 5-10, 2008. pp. 1797-1806.

Recent advances in small inexpensive sensors, low-power processing, and activity modeling have enabled applications that use on-body sensing and machine learning to infer people's activities throughout everyday life. To address the growing rate of sedentary lifestyles, we have developed a system, UbiFit Garden, which uses these technologies and a personal, mobile display to encourage physical activity. We conducted a 3-week field trial in which 12 participants used the system and report findings focusing on their experiences with the sensing and activity inference. We discuss key implications for systems that use on-body sensing and activity inference to encourage physical activity.

© All rights reserved Consolvo et al. and/or ACM Press


Consolvo, Sunny, Klasnja, Predrag V., McDonald, David W., Avrahami, Daniel, Froehlich, Jon, LeGrand, Louis, Libby, Ryan, Mosher, Keith and Landay, James A. (2008): Flowers or a robot army?: encouraging awareness & activity with personal, mobile displays. In: Youn, Hee Yong and Cho, We-Duke (eds.) UbiComp 2008 Ubiquitous Computing - 10th International Conference September 21-24, 2008, Seoul, Korea. pp. 54-63.

Choudhury, Tanzeem, Borriello, Gaetano, Consolvo, Sunny, Hähnel, Dirk, Harrison, Beverly L., Hemingway, Bruce, Hightower, Jeffrey, Klasnja, Predrag V., Koscher, Karl, LaMarca, Anthony, Landay, James A., LeGrand, Louis, Lester, Jonathan, Rahimi, Ali, Rea, Adam and Wyatt, Danny (2008): The Mobile Sensing Platform: An Embedded Activity Recognition System. In IEEE Pervasive Computing, 7 (2) pp. 32-41.

Klemmer, Scott R., Everitt, Katherine M. and Landay, James A. (2008): Integrating Physical and Digital Interactions on Walls for Fluid Design Collaboration. In Human-Computer Interaction, 23 (2) pp. 138-213.

Web designers use pens, paper, walls, and tables for explaining, developing, and communicating ideas during the early phases of design. These practices inspired The Designers' Outpost. With Outpost, users collaboratively author Web site information architectures on an electronic whiteboard using physical media (sticky notes and images), structuring and annotating that information with electronic pens. This interaction is enabled by a touch-sensitive electronic whiteboard augmented with a computer vision system. The Designers' Outpost integrates wall-scale, paper-based design practices with novel electronic tools to better support collaboration during early-phase design. Our studies with professional designers showed this integration to be especially helpful for fluidly transitioning to other design tools, access and exploration of design history, and remote collaboration.

© All rights reserved Klemmer et al. and/or Taylor and Francis

Harada, Susumu, Lester, Jonathan, Patel, Kayur, Saponas, T. Scott, Fogarty, James, Landay, James A. and Wobbrock, Jacob O. (2008): VoiceLabel: using speech to label mobile sensor data. In: Proceedings of the 2008 International Conference on Multimodal Interfaces 2008. pp. 69-76.

Many mobile machine learning applications require collecting and labeling data, and a traditional GUI on a mobile device may not be an appropriate or viable method for this task. This paper presents an alternative approach to mobile labeling of sensor data called VoiceLabel. VoiceLabel consists of two components: (1) a speech-based data collection tool for mobile devices, and (2) a desktop tool for offline segmentation of recorded data and recognition of spoken labels. The desktop tool automatically analyzes the audio stream to find and recognize spoken labels, and then presents a multimodal interface for reviewing and correcting data labels using a combination of the audio stream, the system's analysis of that audio, and the corresponding mobile sensor data. A study with ten participants showed that VoiceLabel is a viable method for labeling mobile sensor data. VoiceLabel also illustrates several key features that inform the design of other data labeling tools.

© All rights reserved Harada et al. and/or their publisher

2007

Harada, Susumu, Wobbrock, Jacob O. and Landay, James A. (2007): Voicedraw: a hands-free voice-driven drawing application for people with motor impairments. In: Ninth Annual ACM Conference on Assistive Technologies 2007. pp. 27-34.

We present VoiceDraw, a voice-driven drawing application for people with motor impairments that provides a way to generate free-form drawings without needing manual interaction. VoiceDraw was designed and built to investigate the potential of the human voice as a modality to bring fluid, continuous direct manipulation interaction to users who lack the use of their hands. VoiceDraw also allows us to study the issues surrounding the design of a user interface optimized for non-speech voice-based interaction. We describe the features of the VoiceDraw application, our design process, including our user-centered design sessions with a 'voice painter', and offer lessons learned that could inform future voice-based design efforts. In particular, we offer insights for mapping human voice to continuous control.

© All rights reserved Harada et al. and/or ACM Press

 

Consolvo, Sunny, Harrison, Beverly L., Smith, Ian, Chen, Mike Y., Everitt, Katherine, Froehlich, Jon and Landay, James A. (2007): Conducting In Situ Evaluations for and With Ubiquitous Computing Technologies. In International Journal of Human-Computer Interaction, 22 (1) pp. 103-118.

To evaluate ubiquitous computing technologies, which may be embedded in the environment, embedded in objects, worn, or carried by the user throughout everyday life, it is essential to use methods that accommodate the often unpredictable, real-world environments in which the technologies are used. This article discusses how we have adapted and applied traditional methods from psychology and human-computer interaction, such as Wizard of Oz and Experience Sampling, to be more amenable to the in situ evaluations of ubiquitous computing applications, particularly in the early stages of design. The way that ubiquitous computing technologies can facilitate the in situ collection of self-report data is also discussed. Although the focus is on ubiquitous computing applications and tools for their assessment, the proposed in situ evaluation tools should be generally useful for field trials of other technologies and applications, and for formative studies concerned with collecting data in situ.

© All rights reserved Consolvo et al. and/or Lawrence Erlbaum Associates

 

Davis, Richard C., Saponas, T. Scott, Shilman, Michael and Landay, James A. (2007): SketchWizard: Wizard of Oz prototyping of pen-based user interfaces. In: Proceedings of the ACM Symposium on User Interface Software and Technology October 7-10, 2007, Newport, Rhode Island, USA. pp. 119-128.

SketchWizard allows designers to create Wizard of Oz prototypes of pen-based user interfaces in the early stages of design. In the past, designers have been inhibited from participating in the design of pen-based interfaces because of the inadequacy of paper prototypes and the difficulty of developing functional prototypes. In SketchWizard, designers and end users share a drawing canvas between two computers, allowing the designer to simulate the behavior of recognition or other technologies. Special editing features are provided to help designers respond quickly to end-user input. This paper describes the SketchWizard system and presents two evaluations of our approach. The first is an early feasibility study in which Wizard of Oz was used to prototype a pen-based user interface. The second is a laboratory study in which designers used SketchWizard to simulate existing pen-based interfaces. Both showed that end users gave valuable feedback in spite of delays between end-user actions and wizard updates.

© All rights reserved Davis et al. and/or ACM Press

 

 

 

Li, Yang, Hong, Jason I. and Landay, James A. (2007): Design Challenges and Principles for Wizard of Oz Testing of Location-Enhanced Applications. In IEEE Pervasive Computing, 6 (2) pp. 70-75.

 

Harada, Susumu, Saponas, T. Scott and Landay, James A. (2007): VoicePen: augmenting pen input with simultaneous non-linguistic vocalization. In: Massaro, Dominic W., Takeda, Kazuya, Roy, Deb and Potamianos, Alexandros (eds.) Proceedings of the 9th International Conference on Multimodal Interfaces - ICMI 2007 November 12-15, 2007, Nagoya, Aichi, Japan. pp. 178-185.

This paper explores using non-linguistic vocalization as an additional modality to augment digital pen input on a tablet computer. We investigated this through a set of novel interaction techniques and a feasibility study. Typically, digital pen users control one or two parameters using stylus position and sometimes pen pressure. However, in many scenarios the user can benefit from the ability to continuously vary additional parameters. Non-linguistic vocalizations, such as vowel sounds, variation of pitch, or control of loudness have the potential to provide fluid continuous input concurrently with pen interaction. We present a set of interaction techniques that leverage the combination of voice and pen input when performing both creative drawing and object manipulation tasks. Our feasibility evaluation suggests that with little training people can use non-linguistic vocalization to productively augment digital pen interaction.

© All rights reserved Harada et al. and/or their publisher
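The voice-to-parameter mapping the abstract describes can be suggested with a minimal sketch; the feature ranges, the linear mapping, and the brush parameters below are assumptions for illustration, not the paper's implementation:

```python
def normalize(value, lo, hi):
    """Clamp and scale a raw vocal feature into [0, 1]."""
    return min(max((value - lo) / (hi - lo), 0.0), 1.0)

def vocal_to_brush(loudness_db, pitch_hz):
    """Map loudness to brush size and pitch to color hue, so the voice
    varies two extra continuous parameters while the pen controls x/y."""
    size = 1 + 29 * normalize(loudness_db, lo=30, hi=90)   # 1-30 px
    hue = 360 * normalize(pitch_hz, lo=80, hi=400)         # degrees
    return size, hue

size, hue = vocal_to_brush(loudness_db=60, pitch_hz=240)
assert 1 <= size <= 30 and 0 <= hue <= 360
```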

 

Everitt, Katherine M., Harada, Susumu, Bilmes, Jeff and Landay, James A. (2007): Disambiguating speech commands using physical context. In: Massaro, Dominic W., Takeda, Kazuya, Roy, Deb and Potamianos, Alexandros (eds.) Proceedings of the 9th International Conference on Multimodal Interfaces - ICMI 2007 November 12-15, 2007, Nagoya, Aichi, Japan. pp. 247-254.

Speech has great potential as an input mechanism for ubiquitous computing. However, the current requirements necessary for accurate speech recognition, such as a quiet environment and a well-positioned and high-quality microphone, are unreasonable to expect in a realistic setting. In a physical environment, there is often contextual information which can be sensed and used to augment the speech signal. We investigated improving speech recognition rates for an electronic personal trainer using knowledge about what equipment was in use as context. We performed an experiment with participants speaking in an instrumented apartment environment and compared the recognition rates of a larger grammar with those of a smaller grammar that is determined by the context.

© All rights reserved Everitt et al. and/or their publisher
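The mechanism of shrinking the recognition grammar using sensed context can be illustrated with a minimal sketch; the equipment categories and commands here are hypothetical, not the study's actual grammar:

```python
# Hypothetical command grammar for an electronic personal trainer.
FULL_GRAMMAR = {
    "treadmill": ["faster", "slower", "incline up", "incline down"],
    "weights": ["add ten pounds", "remove ten pounds", "count reps"],
    "any": ["start workout", "pause", "show summary"],
}

def active_grammar(equipment_in_use):
    """Restrict the recognizer's grammar to commands plausible in the
    sensed context. Fewer confusable hypotheses generally means fewer
    recognition errors, which is the effect the study measures."""
    return FULL_GRAMMAR.get(equipment_in_use, []) + FULL_GRAMMAR["any"]

full_size = sum(len(cmds) for cmds in FULL_GRAMMAR.values())
assert len(active_grammar("treadmill")) < full_size
assert "add ten pounds" not in active_grammar("treadmill")
```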

2006
 

Consolvo, Sunny, Everitt, Katherine, Smith, Ian and Landay, James A. (2006): Design requirements for technologies that encourage physical activity. In: Proceedings of ACM CHI 2006 Conference on Human Factors in Computing Systems 2006. pp. 457-466.

Overweight and obesity are a global epidemic, with over one billion overweight adults worldwide (300+ million of whom are obese). Obesity is linked to several serious health problems and medical conditions. Medical experts agree that physical activity is critical to maintaining fitness, reducing weight, and improving health, yet many people have difficulty increasing and maintaining physical activity in everyday life. Clinical studies have shown that health benefits can occur from simply increasing the number of steps one takes each day and that social support can motivate people to stay active. In this paper, we describe Houston, a prototype mobile phone application for encouraging activity by sharing step counts with friends. We also present four design requirements for technologies that encourage physical activity, derived from a three-week in situ pilot study conducted with women who wanted to increase their physical activity.

© All rights reserved Consolvo et al. and/or ACM Press

 

Li, Yang, Welbourne, Evan and Landay, James A. (2006): Design and experimental analysis of continuous location tracking techniques for Wizard of Oz testing. In: Proceedings of ACM CHI 2006 Conference on Human Factors in Computing Systems 2006. pp. 1019-1022.

Wizard of Oz (WOz) testing has shown promise as an effective way to test location-enhanced applications. However, it is challenging to conduct a location-based WOz test because of the dynamic nature of target settings in the field. In particular, continuous location tracking, a major task in such a test, requires a wizard to frequently update a user's location to simulate a location system. This imposes a heavy task load on a wizard. To ease wizards' tasks for location tracking, we designed two techniques, Directional Crossing and Steering, and conducted a field experiment to investigate the performance of the two techniques. A quantitative analysis shows that Directional Crossing and Steering significantly lowered a wizard's task load for location tracking without sacrificing accuracy.

© All rights reserved Li et al. and/or ACM Press

 

Harada, Susumu, Landay, James A., Malkin, Jonathan, Li, Xiao and Bilmes, Jeff A. (2006): The vocal joystick: evaluation of voice-based cursor control techniques. In: Eighth Annual ACM Conference on Assistive Technologies 2006. pp. 197-204.

Mouse control has become a crucial aspect of many modern-day computer interactions. This poses a challenge for individuals with motor impairments or those whose use of their hands is restricted by situational constraints. We present a system called the Vocal Joystick which allows the user to continuously control the mouse cursor by varying vocal parameters such as vowel quality, loudness, and pitch. A survey of existing cursor control methods is presented to highlight the key characteristics of the Vocal Joystick. Evaluations were conducted to characterize expert performance capability of the Vocal Joystick, and to compare novice user performance and preference for the Vocal Joystick and two other existing speech-based cursor control methods. Our results show that Fitts' law is a good predictor of the speed-accuracy tradeoff for the Vocal Joystick, and suggest that the optimal performance of the Vocal Joystick may be comparable to that of a conventional hand-operated joystick. Novice user evaluations show that the Vocal Joystick can be used by people without extensive training, and that it presents a viable alternative to existing speech-based cursor control methods.

© All rights reserved Harada et al. and/or ACM Press
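The Fitts' law result mentioned above refers to the standard model of pointing performance. As a reminder of the model's shape (the regression constants below are placeholders, not values fitted in this study):

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.2):
    """Shannon formulation of Fitts' law: MT = a + b * log2(D/W + 1).

    a and b are device-specific regression constants; the values here
    are placeholders, not measurements from the Vocal Joystick study.
    """
    index_of_difficulty = math.log2(distance / width + 1)
    return a + b * index_of_difficulty

# Farther or smaller targets have a higher index of difficulty,
# so the model predicts longer movement times for them.
easy = fitts_movement_time(distance=100, width=50)
hard = fitts_movement_time(distance=400, width=10)
assert hard > easy
```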

 

Saponas, T. Scott, Prabaker, Madhu K., Abowd, Gregory D. and Landay, James A. (2006): The impact of pre-patterns on the design of digital home applications. In: Proceedings of DIS06: Designing Interactive Systems: Processes, Practices, Methods, & Techniques 2006. pp. 189-198.

Recent research suggests that design pre-patterns, structured collections of evidence-based research and design knowledge, provide a useful resource for design activities in emerging application domains. This paper extends previous research by exploring the impact of pre-patterns and tools to support pre-pattern exploration for the domain of ubiquitous computing in the home. We conducted an empirical study of 44 designers engaged in a two-hour concentrated brainstorming and design task for the home of the future. Our results show pre-patterns are an easily adopted resource for designers that can impact even the earliest of design activities. We also provide insights for future development of pre-patterns based on designer feedback.

© All rights reserved Saponas et al. and/or ACM Press

 

Dow, Steven, Saponas, T. Scott, Li, Yang and Landay, James A. (2006): External representations in ubiquitous computing design and the implications for design tools. In: Proceedings of DIS06: Designing Interactive Systems: Processes, Practices, Methods, & Techniques 2006. pp. 241-250.

One challenge for ubiquitous computing is providing appropriate tools for professional designers, thus leading to stronger user-valued applications. Unlike many previous tool-builders' attempts to support a specific technology, we take a designer-centered stance, asking the question: how do professional designers externalize ideas for off-the-desktop computing and how do these inform next generation design tools? We report on interviews with designers from various domains, including experience, interaction, industrial, and space designers. The study broadly reveals perceived challenges of moving into a non-traditional design medium, emphasizes the practice of storytelling for relating the context of interaction, and through two case studies, traces the use of various external representations during the design progression of ubicomp applications. Using paper-prototyped "walkthroughs" centered on two common design representations (storyboards and physical simulations), we formed a deeper understanding of issues influencing tool development. We offer guidelines for builders of future ubicomp tools, especially early-stage conceptual tools for professional designers to prototype applications across multiple sensors, displays, and physical environments.

© All rights reserved Dow et al. and/or ACM Press

 

Patel, Kayur, Chen, Mike Y., Smith, Ian and Landay, James A. (2006): Personalizing routes. In: Proceedings of the ACM Symposium on User Interface Software and Technology 2006. pp. 187-190.

Navigation services (e.g., in-car navigation systems and online mapping sites) compute routes between two locations to help users navigate. However, these routes may direct users along an unfamiliar path when a familiar path exists, or, conversely, may include redundant information that the user already knows. These overly complicated directions increase the cognitive load of the user, which may lead to a dangerous driving environment. Since the appropriate level of detail is user-specific and depends on familiarity with a region, routes need to be personalized. We have developed a system, called MyRoute, that reduces route complexity by creating user-specific routes based on a priori knowledge of familiar routes and landmarks. MyRoute works by compressing well-known steps into a single contextualized step and rerouting users along familiar routes.

© All rights reserved Patel et al. and/or ACM Press
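The compression MyRoute performs can be sketched roughly as follows; the step names, the familiarity set, and the phrasing of the contextualized step are invented for illustration:

```python
def personalize(steps, familiar):
    """Collapse consecutive familiar steps into one contextualized step,
    in the spirit of MyRoute. `familiar` is the set of steps the user
    already knows how to drive."""
    personalized, run = [], []
    for step in steps:
        if step in familiar:
            run.append(step)          # extend the current familiar stretch
        else:
            if run:                   # flush the stretch as a single step
                personalized.append(f"Drive your usual way to {run[-1]}")
                run = []
            personalized.append(step)
    if run:
        personalized.append(f"Drive your usual way to {run[-1]}")
    return personalized

route = ["Main St", "Oak Ave", "Pine Rd", "Exit 12", "Elm Ct"]
known = {"Main St", "Oak Ave", "Pine Rd"}
assert personalize(route, known) == [
    "Drive your usual way to Pine Rd", "Exit 12", "Elm Ct"
]
```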

2005
 

Li, Yang and Landay, James A. (2005): Informal prototyping of continuous graphical interactions by demonstration. In: Proceedings of the 2005 ACM Symposium on User Interface Software and Technology 2005. pp. 221-230.

Informal prototyping tools have shown great potential in facilitating the early stage design of user interfaces. However, continuous interactions, an important constituent of highly interactive interfaces, have not been well supported by previous tools. These interactions give continuous visual feedback, such as geometric changes of a graphical object, in response to continuous user input, such as the movement of a mouse. We built Monet, a sketch-based tool for prototyping continuous interactions by demonstration. In Monet, designers can prototype continuous widgets and their states of interest using examples. They can also demonstrate compound behaviors involving multiple widgets by direct manipulation. Monet allows continuous interactions to be easily integrated with event-based, discrete interactions. Continuous widgets can be embedded into storyboards and their states can condition or trigger storyboard transitions. Monet achieves these features by employing continuous function approximation and statistical classification techniques, without using any domain specific knowledge or assuming any application semantics. Informal feedback showed that Monet is a promising approach to enabling more complete tool support for early stage UI design.

© All rights reserved Li and Landay and/or ACM Press

 

Heer, Jeffrey, Card, Stuart K. and Landay, James A. (2005): prefuse: a toolkit for interactive information visualization. In: Proceedings of ACM CHI 2005 Conference on Human Factors in Computing Systems 2005. pp. 421-430.

Although information visualization (infovis) technologies have proven indispensable tools for making sense of complex data, widespread deployment has yet to take hold, as successful infovis applications are often difficult to author and require domain-specific customization. To address these issues, we have created prefuse, a software framework for creating dynamic visualizations of both structured and unstructured data. prefuse provides theoretically motivated abstractions for the design of a wide range of visualization applications, enabling programmers to quickly string together desired components to create and customize working visualizations. To evaluate prefuse we built both existing and novel visualizations to test the toolkit's flexibility and performance, and our usability studies and usage surveys found that programmers consider the toolkit usable and effective.

© All rights reserved Heer et al. and/or ACM Press

 

Li, Yang, Hinckley, Ken, Guan, Zhiwei and Landay, James A. (2005): Experimental analysis of mode switching techniques in pen-based user interfaces. In: Proceedings of ACM CHI 2005 Conference on Human Factors in Computing Systems 2005. pp. 461-470.

Inking and gesturing are two central tasks in pen-based user interfaces. Switching between modes for entry of uninterpreted ink and entry of gestures is required by many pen-based user interfaces. Without an appropriate mode switching technique, pen-based interactions in such situations may be inefficient and cumbersome. In this paper, we investigate five techniques for switching between ink and gesture modes in pen interfaces, including a pen-pressure based mode switching technique that allows implicit mode transition. A quantitative experimental study was conducted to evaluate the performance of these techniques. The results suggest that pressing a button with the non-preferred hand offers the fastest performance, while the technique of holding the pen still is significantly slower and more prone to error than the other techniques. Pressure, while promising, did not perform as well as the non-preferred hand button with our current implementation.

© All rights reserved Li et al. and/or ACM Press

 

Davis, Richard C. and Landay, James A. (2005): Forms of Expression for Designing Visual Languages for Animation. In: VL-HCC 2005 - IEEE Symposium on Visual Languages and Human-Centric Computing 21-24 September, 2005, Dallas, TX, USA. pp. 327-328.

 

Stahovich, Thomas F., Davis, Randall, Miller, Robert C., Landay, James A. and Saund, Eric (2005): Pen-based computing. In Computers & Graphics, 29 (4) pp. 477-479.

2004
 

Klemmer, Scott R., Li, Jack, Lin, James and Landay, James A. (2004): Papier-Mache: toolkit support for tangible input. In: Dykstra-Erickson, Elizabeth and Tscheligi, Manfred (eds.) Proceedings of ACM CHI 2004 Conference on Human Factors in Computing Systems April 24-29, 2004, Vienna, Austria. pp. 399-406.

Tangible user interfaces (TUIs) augment the physical world by integrating digital information with everyday physical objects. Currently, building these UIs requires "getting down and dirty" with input technologies such as computer vision. Consequently, only a small cadre of technology experts can currently build these UIs. Based on a literature review and structured interviews with nine TUI researchers, we created Papier-Mache, a toolkit for building tangible interfaces using computer vision, electronic tags, and barcodes. Papier-Mache introduces a high-level event model for working with these technologies that facilitates technology portability. For example, an application can be prototyped with computer vision and deployed with RFID. We present an evaluation of our toolkit with six class projects and a user study with seven programmers, finding the input abstractions, technology portability, and monitoring window to be highly effective.

© All rights reserved Klemmer et al. and/or ACM Press
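The technology portability described above (prototype with computer vision, deploy with RFID) rests on a backend-neutral, high-level event model. A rough Python analogue of that idea might look like the sketch below; all class and backend names are invented for illustration, and the real toolkit is a Java library with its own API:

```python
class ObjectAdded:
    """High-level input event: a tracked physical object appeared.
    Application code sees only this event, never the sensing technology."""
    def __init__(self, object_id, source):
        self.object_id, self.source = object_id, source

class VisionBackend:
    """Stands in for a camera-based tracker."""
    def poll(self):
        return [ObjectAdded("mug-42", "vision")]

class RfidBackend:
    """Stands in for an RFID reader; emits the same event type."""
    def poll(self):
        return [ObjectAdded("mug-42", "rfid")]

def run_app(backend, handler):
    # The application is written once against ObjectAdded events, so
    # swapping VisionBackend for RfidBackend needs no application changes.
    for event in backend.poll():
        handler(event)

seen = []
run_app(VisionBackend(), seen.append)
run_app(RfidBackend(), seen.append)
assert [e.source for e in seen] == ["vision", "rfid"]
```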

 

Jiang, Xiaodong, Hong, Jason I., Takayama, Leila A. and Landay, James A. (2004): Ubiquitous computing for firefighters: field studies and prototypes of large displays for incident command. In: Dykstra-Erickson, Elizabeth and Tscheligi, Manfred (eds.) Proceedings of ACM CHI 2004 Conference on Human Factors in Computing Systems April 24-29, 2004, Vienna, Austria. pp. 679-686.

In this paper, we demonstrate how field studies, interviews, and low-fidelity prototypes can be used to inform the design of ubiquitous computing systems for firefighters. We describe the artifacts and processes used by firefighters to assess, plan, and communicate during emergency situations, showing how accountability affects these decisions, how their current Incident Command System supports these tasks, and some drawbacks of existing solutions. These factors informed the design of a large electronic display for supporting the incident commander, the person who coordinates the overall response strategy in an emergency. Although our focus was on firefighters, our results are applicable for other aspects of emergency response as well, due to common procedures and training.

© All rights reserved Jiang et al. and/or ACM Press

 

Li, Yang, Hong, Jason I. and Landay, James A. (2004): Topiary: a tool for prototyping location-enhanced applications. In: Proceedings of the 2004 ACM Symposium on User Interface Software and Technology 2004. pp. 217-226.

Location-enhanced applications use the location of people, places, and things to augment or streamline interaction. Location-enhanced applications are just starting to emerge in several different domains, and many people believe that this type of application will experience tremendous growth in the near future. However, it currently requires a high level of technical expertise to build location-enhanced applications, making it hard to iterate on designs. To address this problem we introduce Topiary, a tool for rapidly prototyping location-enhanced applications. Topiary lets designers create a map that models the location of people, places, and things; use this active map to demonstrate scenarios depicting location contexts; use these scenarios in creating storyboards that describe interaction sequences; and then run these storyboards on mobile devices, with a wizard updating the location of people and things on a separate device. We performed an informal evaluation with seven researchers and interface designers and found that they reacted positively to the concept.

© All rights reserved Li et al. and/or ACM Press

 

Hong, Jason I., Ng, Jennifer D., Lederer, Scott and Landay, James A. (2004): Privacy risk models for designing privacy-sensitive ubiquitous computing systems. In: Proceedings of DIS04: Designing Interactive Systems: Processes, Practices, Methods, & Techniques 2004. pp. 91-100.

Privacy is a difficult design issue that is becoming increasingly important as we push into ubiquitous computing environments. While there is a fair amount of theoretical work on designing for privacy, there are few practical methods for helping designers create applications that provide end-users with a reasonable level of privacy protection that is commensurate with the domain, with the community of users, and with the risks and benefits to all stakeholders in the intended system. Towards this end, we propose privacy risk models as a general method for refining privacy from an abstract concept into concrete issues for specific applications and prioritizing those issues. In this paper, we introduce a privacy risk model we have developed specifically for ubiquitous computing, and outline two case studies describing our use of this privacy risk model in the design of two ubiquitous computing applications.

© All rights reserved Hong et al. and/or ACM Press

 

Chung, Eric S., Hong, Jason I., Lin, James, Prabaker, Madhu K., Landay, James A. and Liu, Alan L. (2004): Development and evaluation of emerging design patterns for ubiquitous computing. In: Proceedings of DIS04: Designing Interactive Systems: Processes, Practices, Methods, & Techniques 2004. pp. 233-242.

Design patterns are a format for capturing and sharing design knowledge. In this paper, we look at a new domain for design patterns, namely ubiquitous computing. The overall goal of this work is to aid practice by speeding up the diffusion of new interaction techniques and evaluation results from researchers, presenting the information in a form more usable to practicing designers. Towards this end, we have developed an initial and emerging pattern language for ubiquitous computing, consisting of 45 pre-patterns describing application genres, physical-virtual spaces, interaction and systems techniques for managing privacy, and techniques for fluid interactions. We evaluated the effectiveness of our pre-patterns with 16 pairs of designers in helping them design location-enhanced applications. We observed that our pre-patterns helped new and experienced designers unfamiliar with ubiquitous computing in generating and communicating ideas, and in avoiding design problems early in the design process.

© All rights reserved Chung et al. and/or ACM Press

 

Lederer, Scott, Hong, Jason I., Dey, Anind K. and Landay, James A. (2004): Personal privacy through understanding and action: five pitfalls for designers. In Personal and Ubiquitous Computing, 8 (6) pp. 440-454.

 

Davis, Richard C. and Landay, James A. (2004): A Visual Language for Animating Sketches. In: VL-HCC 2004 - IEEE Symposium on Visual Languages and Human-Centric Computing 26-29 September, 2004, Rome, Italy. pp. 273-274.

 

Jiang, Xiaodong, Chen, Nicholas Y., Hong, Jason I., Wang, Kevin, Takayama, Leila and Landay, James A. (2004): Siren: Context-aware Computing for Firefighting. In: Ferscha, Alois and Mattern, Friedemann (eds.) PERVASIVE 2004 - Pervasive Computing, Second International Conference April 21-23, 2004, Vienna, Austria. pp. 87-105.

2003
 

Klemmer, Scott R., Graham, Jamey, Wolff, Gregory J. and Landay, James A. (2003): Books with voices: paper transcripts as a physical interface to oral histories. In: Cockton, Gilbert and Korhonen, Panu (eds.) Proceedings of the ACM CHI 2003 Human Factors in Computing Systems Conference April 5-10, 2003, Ft. Lauderdale, Florida, USA. pp. 89-96.

 

Ho-Ching, F. Wai-ling, Mankoff, Jennifer and Landay, James A. (2003): Can you see what I hear?: the design and evaluation of a peripheral sound display for the deaf. In: Cockton, Gilbert and Korhonen, Panu (eds.) Proceedings of the ACM CHI 2003 Human Factors in Computing Systems Conference April 5-10, 2003, Ft. Lauderdale, Florida, USA. pp. 161-168.

 

Everitt, Katherine M., Klemmer, Scott R., Lee, Robert and Landay, James A. (2003): Two worlds apart: bridging the gap between physical and virtual media for distributed design collaboration. In: Cockton, Gilbert and Korhonen, Panu (eds.) Proceedings of the ACM CHI 2003 Human Factors in Computing Systems Conference April 5-10, 2003, Ft. Lauderdale, Florida, USA. pp. 553-560.

 

Newman, Mark W., Lin, James J. W., Hong, Jason I. and Landay, James A. (2003): DENIM: An Informal Web Site Design Tool Inspired by Observations of Practice. In Human-Computer Interaction, 18 (3) pp. 259-324.

Through a study of Web site design practice, we observed that designers employ multiple representations of Web sites as they progress through the design process and that these representations allow them to focus on different aspects of the design. In particular, we observed that Web site designers focus their design efforts at three levels of granularity (site map, storyboard, and individual page) and that designers sketch at all levels during the early stages of design. Sketching on paper is especially important during the early phases of a project, when designers wish to explore many design possibilities quickly without focusing on low-level details. Existing Web design tools do not support such exploration tasks well, nor do they adequately integrate multiple site representations. Informed by these observations we developed DENIM: an informal Web site design tool that supports early-phase information and navigation design of Web sites. It supports sketching input, allows design at different levels of granularity, and unifies the levels through zooming. Designers are able to interact with their sketched designs as if in a Web browser, thus allowing rapid creation and exploration of interactive prototypes. Based on an evaluation with professional designers as well as usage feedback from users who have downloaded DENIM from the Internet, we have made numerous improvements to the system and have received many positive reactions from designers who would like to use a system like DENIM in their work.

© All rights reserved Newman et al. and/or Taylor and Francis

Cited in the following chapter: End-User Development [/encyclopedia/end-user_development.html]


 
 

 

 

Landay, James A. and Borriello, Gaetano (2003): Design Patterns for Ubiquitous Computing. In IEEE Computer, 36 (8) pp. 93-95.

 

Sinha, Anoop K. and Landay, James A. (2003): Capturing user tests in a multimodal, multidevice informal prototyping tool. In: Oviatt, Sharon L., Darrell, Trevor, Maybury, Mark T. and Wahlster, Wolfgang (eds.) Proceedings of the 5th International Conference on Multimodal Interfaces - ICMI 2003 November 5-7, 2003, Vancouver, British Columbia, Canada. pp. 117-124.

Interaction designers are increasingly faced with the challenge of creating interfaces that incorporate multiple input modalities, such as pen and speech, and span multiple devices. Few early stage prototyping tools allow non-programmers to prototype these interfaces. Here we describe CrossWeaver, a tool for informally prototyping multimodal, multidevice user interfaces. This tool embodies the informal prototyping paradigm, leaving design representations in an informal, sketched form, and creates a working prototype from these sketches. CrossWeaver allows a user interface designer to sketch storyboard scenes on the computer, specifying simple multimodal command transitions between scenes. The tool also allows scenes to target different output devices. Prototypes can run across multiple standalone devices simultaneously, processing multimodal input from each one. Thus, a designer can visually create a multimodal prototype for a collaborative meeting or classroom application. CrossWeaver captures all of the user interaction when running a test of a prototype. This input log can quickly be viewed visually for the details of the users' multimodal interaction or it can be replayed across all participating devices, giving the designer information to help him or her analyze and iterate on the interface design.

© All rights reserved Sinha and Landay and/or their publisher

 

Li, Yang, Landay, James A., Guan, Zhiwei, Ren, Xiangshi and Dai, Guozhong (2003): Sketching informal presentations. In: Oviatt, Sharon L., Darrell, Trevor, Maybury, Mark T. and Wahlster, Wolfgang (eds.) Proceedings of the 5th International Conference on Multimodal Interfaces - ICMI 2003 November 5-7, 2003, Vancouver, British Columbia, Canada. pp. 234-241.

Informal presentations are a lightweight means for fast and convenient communication of ideas. People communicate their ideas to others on paper and whiteboards, which afford fluid sketching of graphs, words and other expressive symbols. Unlike existing authoring tools that are designed for formal presentations, we created SketchPoint to help presenters design informal presentations via freeform sketching. In SketchPoint, presenters can quickly author presentations by sketching slide content, overall hierarchical structures and hyperlinks. To facilitate the transition from idea capture to communication, a note-taking workspace was built for accumulating ideas and sketching presentation outlines. Informal feedback showed that SketchPoint is a promising tool for idea communication.

© All rights reserved Li et al. and/or their publisher

2002
 

Klemmer, Scott R., Thomsen, Michael, Phelps-Goodman, Ethan, Lee, Robert and Landay, James A. (2002): Where do web sites come from?: capturing and interacting with design history. In: Terveen, Loren (ed.) Proceedings of the ACM CHI 2002 Conference on Human Factors in Computing Systems Conference April 20-25, 2002, Minneapolis, Minnesota. pp. 1-8.

 

Lin, James, Thomsen, Michael and Landay, James A. (2002): A visual language for sketching large and complex interactive designs. In: Terveen, Loren (ed.) Proceedings of the ACM CHI 2002 Conference on Human Factors in Computing Systems Conference April 20-25, 2002, Minneapolis, Minnesota. pp. 307-314.

 

van Duyne, Douglas K., Landay, James A. and Hong, Jason I. (2002): The Design of Sites: Patterns, Principles, and Processes for Crafting a Customer-Centered Web Experience. Addison-Wesley Publishing

 Cited in the following chapter:

Interaction Design Patterns: [/encyclopedia/interaction_design_patterns.html]


 
 

Kamel, Hesham M. and Landay, James A. (2002): Sketching images eyes-free: a grid-based dynamic drawing tool for the blind. In: Fifth Annual ACM Conference on Assistive Technologies 2002. pp. 33-40.

In this paper we describe one method of transforming a mouse-based graphical user interface into a navigable, grid-based auditory interface. We also report the results of an experiment that tested the effectiveness of a drawing tool for the blind called IC2D that uses this interaction style. The experiment included eight visually impaired participants and eight blindfolded sighted participants. The results show that auditory interpretation of graphics is an effective interface technique for visually impaired users. Further, the experiment demonstrates that visually impaired users can develop meaningful drawings when given adequate technological support.

© All rights reserved Kamel and Landay and/or ACM Press

 

Sinha, Anoop K. and Landay, James A. (2002): Embarking on Multimodal Interface Design. In: 4th IEEE International Conference on Multimodal Interfaces - ICMI 2002 14-16 October, 2002, Pittsburgh, PA, USA. pp. 355-360.

 

Jiang, Xiaodong, Hong, Jason I. and Landay, James A. (2002): Approximate Information Flows: Socially-Based Modeling of Privacy in Ubiquitous Computing. In: Borriello, Gaetano and Holmquist, Lars Erik (eds.) UbiComp 2002 Ubiquitous Computing - 4th International Conference September 29 - October 1, 2002, Göteborg, Sweden. pp. 176-193.

 

Landay, James A. (2002): Informal Tools for Designing Anywhere, Anytime, Anydevice User Interfaces. In: Hegarty, Mary, Meyer, Bernd and Narayanan, N. Hari (eds.) Diagrams 2002 - Diagrammatic Representation and Inference - Second International Conference April 18-20, 2002, Callaway Gardens, GA, USA. p. 359.

 

Sinha, Anoop K. and Landay, James A. (2002): Embarking on Multimodal Interface Design. In: Proceedings of the 2002 International Conference on Multimodal Interfaces 2002. p. 355.

Designers are increasingly faced with the challenge of targeting multimodal applications, those that span heterogeneous devices and use multimodal input, but do not have tools to support them. We studied the early stage work practices of professional multimodal interaction designers. We noted the variety of different artifacts produced, such as design sketches and paper prototypes. Additionally, we observed Wizard of Oz techniques that are sometimes used to simulate an interactive application from these sketches. These studies have led to our development of a technique for interface designers to consider as they embark on creating multimodal applications.

© All rights reserved Sinha and Landay and/or their publisher

2001
 

Hong, Jason I. and Landay, James A. (2001): An Infrastructure Approach to Context-Aware Computing. In Human-Computer Interaction, 16 (2) pp. 287-303.

The Context Toolkit (Dey, Abowd, and Salber, 2001 [this special issue]) is only one of many possible architectures for supporting context-aware applications. In this essay, we look at the tradeoffs involved with a service infrastructure approach to context-aware computing. We describe the advantages that a service infrastructure for context awareness has over other approaches, outline some of the core technical challenges that must be addressed before such an infrastructure can be built, and point out promising research directions for overcoming these challenges.

© All rights reserved Hong and Landay and/or Taylor and Francis

 

Klemmer, Scott R., Newman, Mark W., Farrell, Ryan, Bilezikjian, Mark and Landay, James A. (2001): The designers' outpost: a tangible interface for collaborative web site design. In: Marks, Joe and Mynatt, Elizabeth D. (eds.) Proceedings of the 14th annual ACM symposium on User interface software and technology November 11 - 14, 2001, Orlando, Florida. pp. 1-10.

In our previous studies into web design, we found that pens, paper, walls, and tables were often used for explaining, developing, and communicating ideas during the early phases of design. These wall-scale paper-based design practices inspired The Designers' Outpost, a tangible user interface that combines the affordances of paper and large physical workspaces with the advantages of electronic media to support information design. With Outpost, users collaboratively author web site information architectures on an electronic whiteboard using physical media (Post-it notes and images), structuring and annotating that information with electronic pens. This interaction is enabled by a touch-sensitive SMART Board augmented with a robust computer vision system, employing a rear-mounted video camera for capturing movement and a front-mounted high-resolution camera for capturing ink. We conducted a participatory design study with fifteen professional web designers. The study validated that Outpost supports information architecture work practice, and led to our adding support for fluid transitions to other tools.

© All rights reserved Klemmer et al. and/or ACM Press

 

Hong, Jason I., Heer, Jeffrey, Waterson, Sarah and Landay, James A. (2001): WebQuilt: A proxy-based approach to remote web usability testing. In ACM Transactions on Information Systems, 19 (3) pp. 263-285.

WebQuilt is a web logging and visualization system that helps web design teams run usability tests (both local and remote) and analyze the collected data. Logging is done through a proxy, overcoming many of the problems with server-side and client-side logging. Captured usage traces can be aggregated and visualized in a zooming interface that shows the web pages people viewed. The visualization also shows the most common paths taken through the web site for a given task, as well as the optimal path for that task, as designated by the designer. This paper discusses the architecture of WebQuilt and describes how it can be extended for new kinds of analyses and visualizations.

© All rights reserved Hong et al. and/or ACM Press

 

Hong, Jason I. and Landay, James A. (2001): A Context/Communication Information Agent. In Personal and Ubiquitous Computing, 5 (1) pp. 78-81.

 

Landay, James A. and Myers, Brad A. (2001): Sketching Interfaces: Toward More Human Interface Design. In IEEE Computer, 34 (3) pp. 56-64.

 

Hong, Jason I. and Landay, James A. (2001): WebQuilt: a framework for capturing and visualizing the web experience. In: Proceedings of the 2001 International Conference on the World Wide Web 2001. pp. 717-724.

2000
 

Long, A. Chris, Landay, James A., Rowe, Lawrence A. and Michiels, Joseph (2000): Visual Similarity of Pen Gestures. In: Turner, Thea, Szwillus, Gerd, Czerwinski, Mary, Paterno, Fabio and Pemberton, Steven (eds.) Proceedings of the ACM CHI 2000 Human Factors in Computing Systems Conference April 1-6, 2000, The Hague, The Netherlands. pp. 360-367.

Pen-based user interfaces are becoming ever more popular. Gestures (i.e., marks made with a pen to invoke a command) are a valuable aspect of pen-based UIs, but they also have drawbacks. The challenge in designing good gestures is to make them easy for people to learn and remember. With the goal of better gesture design, we performed a pair of experiments to determine why users find gestures similar. From these experiments, we have derived a computational model for predicting perceived gesture similarity that correlates 0.56 with observation. We will incorporate the results of these experiments into a gesture design tool, which will aid the pen-based UI designer in creating gesture sets that are easier to learn and more memorable.

© All rights reserved Long et al. and/or ACM Press

 

Lin, James, Newman, Mark W., Hong, Jason I. and Landay, James A. (2000): DENIM: Finding a Tighter Fit between Tools and Practice for Web Site Design. In: Turner, Thea, Szwillus, Gerd, Czerwinski, Mary, Paterno, Fabio and Pemberton, Steven (eds.) Proceedings of the ACM CHI 2000 Human Factors in Computing Systems Conference April 1-6, 2000, The Hague, The Netherlands. pp. 510-517.

Through a study of web site design practice, we observed that web site designers design sites at different levels of refinement -- site map, storyboard, and individual page -- and that designers sketch at all levels during the early stages of design. However, existing web design tools do not support these tasks very well. Informed by these observations, we created DENIM, a system that helps web site designers in the early stages of design. DENIM supports sketching input, allows design at different refinement levels, and unifies the levels through zooming. We performed an informal evaluation with seven professional designers and found that they reacted positively to the concept and were interested in using such a system in their work.

© All rights reserved Lin et al. and/or ACM Press

 Cited in the following chapter:

Formal Methods: [/encyclopedia/formal_methods.html]


 
 

Klemmer, Scott R., Sinha, Anoop K., Chen, Jack, Landay, James A., Aboobaker, Nadeem and Wang, Annie (2000): Suede: A Wizard of Oz Prototyping Tool for Speech User Interfaces. In: Ackerman, Mark S. and Edwards, Keith (eds.) Proceedings of the 13th annual ACM symposium on User interface software and technology November 06 - 08, 2000, San Diego, California, United States. pp. 1-10.

 

Hong, Jason I. and Landay, James A. (2000): SATIN: A Toolkit for Informal Ink-Based Applications. In: Ackerman, Mark S. and Edwards, Keith (eds.) Proceedings of the 13th annual ACM symposium on User interface software and technology November 06 - 08, 2000, San Diego, California, United States. pp. 63-72.

 

Oviatt, Sharon, Cohen, Philip R., Wu, Lizhong, Duncan, Lisbeth, Suhm, Bernhard, Bers, Josh, Holzman, Thomas C., Winograd, Terry, Landay, James A., Larson, Jim and Ferro, David (2000): Designing the User Interface for Multimodal Speech and Pen-Based Gesture Applications: State-of-the-Art Systems and Future Research Directions. In Human-Computer Interaction, 15 (4) pp. 263-322.

The growing interest in multimodal interface design is inspired in large part by the goals of supporting more transparent, flexible, efficient, and powerfully expressive means of human-computer interaction than in the past. Multimodal interfaces are expected to support a wider range of diverse applications, be usable by a broader spectrum of the average population, and function more reliably under realistic and challenging usage conditions. In this article, we summarize the emerging architectural approaches for interpreting speech and pen-based gestural input in a robust manner -- including early and late fusion approaches, and the new hybrid symbolic-statistical approach. We also describe a diverse collection of state-of-the-art multimodal systems that process users' spoken and gestural input. These applications range from map-based and virtual reality systems for engaging in simulations and training, to field medic systems for mobile use in noisy environments, to web-based transactions and standard text-editing applications that will reshape daily computing and have a significant commercial impact. To realize successful multimodal systems of the future, many key research challenges remain to be addressed. Among these challenges are the development of cognitive theories to guide multimodal system design, and the development of effective natural language processing, dialogue processing, and error-handling techniques. In addition, new multimodal systems will be needed that can function more robustly and adaptively, and with support for collaborative multiperson use. Before this new class of systems can proliferate, toolkits also will be needed to promote software development for both simulated and functioning systems.

© All rights reserved Oviatt et al. and/or Taylor and Francis

 

Kamel, Hesham M. and Landay, James A. (2000): A Study of Blind Drawing Practice: Creating Graphical Information Without the Visual Channel. In: Fourth Annual ACM Conference on Assistive Technologies 2000. pp. 34-41.

Existing drawing tools for blind users give inadequate contextual feedback on the state of the drawing, leaving blind users unable to comprehend and successfully produce graphical information. We have investigated a tactile method of drawing used by blind users that mimics drawing with a pencil and a paper. Our study revealed a set of properties that must be incorporated into drawing tools for blind users, including giving feedback for relocating important points, determining angles, and communicating the overall structure of the drawing. We describe a grid-based model that provides these properties in a primitive-based 2D graphics environment, and we introduce its use in drawing and other graphical interactions.

© All rights reserved Kamel and Landay and/or ACM Press

 

Newman, Mark W. and Landay, James A. (2000): Sitemaps, Storyboards, and Specifications: A Sketch of Web Site Design Practice. In: Proceedings of DIS00: Designing Interactive Systems: Processes, Practices, Methods, & Techniques 2000. pp. 263-274.

Through a study of web site design practice, we observed that designers employ multiple representations of web sites as they progress through the design process, and that these representations allow them to focus on different aspects of the design. Designers also employ multiple tools during the course of a project, including graphic design, web development, presentation, and word processing software, as well as pen and paper. Sketching on paper is especially important during the design exploration phase of a project, when designers wish to explore many design possibilities quickly without focusing on low-level details. Web site design tools intended to support the early phases of the design process should employ informal interaction techniques, should support multiple site representations, and should integrate well with other applications that designers use regularly.

© All rights reserved Newman and Landay and/or ACM Press

1999
 

Long Jr, Allan Christian, Landay, James A. and Rowe, Lawrence A. (1999): Implications for a Gesture Design Tool. In: Altom, Mark W. and Williams, Marian G. (eds.) Proceedings of the ACM CHI 99 Human Factors in Computing Systems Conference May 15-20, 1999, Pittsburgh, Pennsylvania. pp. 40-47.

Interest in pen-based user interfaces is growing rapidly. One potentially useful feature of pen-based user interfaces is gestures, that is, marks or strokes that cause a command to execute. Unfortunately, it is difficult to design gestures that are easy 1) for computers to recognize and 2) for humans to learn and remember. To investigate these problems, we built a prototype tool typical of those used for designing gesture sets. An experiment was then performed to gain insight into the gesture design process and to evaluate this style of tool. The experiment confirmed that gesture design is very difficult and suggested several ways in which current tools can be improved. The most important improvement is to make the tools more active and provide more guidance for designers. This paper describes the gesture design tool, the experiment, and its results.

© All rights reserved Long Jr et al. and/or ACM Press

 

Davis, Richard, Landay, James A., Chen, Victor, Huang, Jonathan, Lee, Rebecca B., Li, Francis, Lin, James, Morrey III, Charles B. and Schleimer, Ben (1999): NotePals: Lightweight Note Sharing by the Group, for the Group. In: Altom, Mark W. and Williams, Marian G. (eds.) Proceedings of the ACM CHI 99 Human Factors in Computing Systems Conference May 15-20, 1999, Pittsburgh, Pennsylvania. pp. 338-345.

NotePals is a lightweight note sharing system that gives group members easy access to each other's experiences through their personal notes. The system allows notes taken by group members in any context to be uploaded to a shared repository. Group members view these notes with browsers that allow them to retrieve all notes taken in a given context or to access notes from other related notes or documents. This is possible because NotePals records the context in which each note is created (e.g., its author, subject, and creation time). The system is "lightweight" because it fits easily into group members' regular note-taking practices, and uses informal, ink-based user interfaces that run on portable, inexpensive hardware. In this paper we describe NotePals, show how we have used it to share our notes, and present our evaluations of the system.

© All rights reserved Davis et al. and/or ACM Press

1998
 

Woodruff, Allison, Landay, James A. and Stonebraker, Michael (1998): Constant Density Visualizations of Non-Uniform Distributions of Data. In: Mynatt, Elizabeth D. and Jacob, Robert J. K. (eds.) Proceedings of the 11th annual ACM symposium on User interface software and technology November 01 - 04, 1998, San Francisco, California, United States. pp. 19-28.

The cartographic Principle of Constant Information Density suggests that the amount of information in an interactive visualization should remain constant as the user pans and zooms. In previous work, we presented a system, VIDA (Visual Information Density Adjuster), which helps users manually construct applications in which overall display density remains constant. In the context of semantic zoom systems, this approach ensures uniformity in the z dimension, but does not extend naturally to ensuring uniformity in the x and y dimensions. In this paper, we present a new approach that automatically creates displays that are uniform in the x, y, and z dimensions. In the new system, users express constraints about visual representations that should appear in the display. The system applies these constraints to subdivisions of the display such that each subdivision meets a target density value. We have implemented our technique in the DataSplash/VIDA database visualization environment. We describe our algorithm, implementation, and the advantages and disadvantages of our approach.

© All rights reserved Woodruff et al. and/or ACM Press

 

Davis, Richard, Lin, James, Brotherton, Jason, Landay, James A., Price, Morgan N. and Schilit, Bill N. (1998): A Framework for Sharing Handwritten Notes. In: Mynatt, Elizabeth D. and Jacob, Robert J. K. (eds.) Proceedings of the 11th annual ACM symposium on User interface software and technology November 01 - 04, 1998, San Francisco, California, United States. pp. 119-120.

NotePals is an ink-based, collaborative note taking application that runs on personal digital assistants (PDAs). Meeting participants write notes in their own handwriting on a PDA. These notes are shared with other participants by synchronizing later with a shared note repository that can be viewed using a desktop-based web browser. NotePals is distinguished by its lightweight process, interface, and hardware. This demonstration illustrates the design of two different NotePals clients and our web-based note browser.

© All rights reserved Davis et al. and/or ACM Press

 

Woodruff, Allison, Landay, James A. and Stonebraker, Michael (1998): Constant information density in zoomable interfaces. In: Catarci, Tiziana, Costabile, Maria Francesca, Santucci, Giuseppe and Tarantino, Laura (eds.) AVI 1998 - Proceedings of the working conference on Advanced Visual Interfaces May 24 - 27, 1998, L'Aquila, Italy. pp. 57-65.

1995
 

Landay, James A. and Myers, Brad A. (1995): Interactive Sketching for the Early Stages of User Interface Design. In: Katz, Irvin R., Mack, Robert L., Marks, Linn, Rosson, Mary Beth and Nielsen, Jakob (eds.) Proceedings of the ACM CHI 95 Human Factors in Computing Systems Conference May 7-11, 1995, Denver, Colorado. pp. 43-50.

Current interactive user interface construction tools are often more of a hindrance than a benefit during the early stages of user interface design. These tools take too much time to use and force designers to specify more of the design details than they wish at this early stage. Most interface designers, especially those who have a background in graphic design, prefer to sketch early interface ideas on paper or on a whiteboard. We are developing an interactive tool called SILK that allows designers to quickly sketch an interface using an electronic pad and stylus. SILK preserves the important properties of pencil and paper: a rough drawing can be produced very quickly and the medium is very flexible. However, unlike a paper sketch, this electronic sketch is interactive and can easily be modified. In addition, our system allows designers to examine, annotate, and edit a complete history of the design. When the designer is satisfied with this early prototype, SILK can transform the sketch into a complete, operational interface in a specified look-and-feel. This transformation is guided by the designer. By supporting the early phases of the interface design life cycle, our tool should both ease the development of user interface prototypes and reduce the time needed to create a final interface. This paper describes our prototype and provides design ideas for a production-level system.

© All rights reserved Landay and Myers and/or ACM Press

1993
 

Karsenty, Solange, Weikart, Chris and Landay, James A. (1993): Inferring Graphical Constraints with Rockit. In: Ashlund, Stacey, Mullet, Kevin, Henderson, Austin, Hollnagel, Erik and White, Ted (eds.) Proceedings of the ACM CHI 93 Human Factors in Computing Systems Conference April 24-29, 1993, Amsterdam, The Netherlands. p. 531.

Graphical constraints define relations among graphical objects that must be maintained by an underlying system. The automatic maintenance of these relations has become important in increasing the functionality of graphical editors and user interface builders. Yet this increase in functionality has also brought the users of these tools the difficult task of specifying the constraints -- generally by writing mathematical equations that define the relations which must hold. The purpose of Rockit [2] is to identify the possible graphical constraints between objects in a scene and allow the user to quickly and easily choose and apply the desired constraints. Rockit is embedded in a graphical editor that allows the creation of application objects by direct manipulation. The user creates graphical objects and applies constraints to them. Typical objects include diagrams, circuits, flowcharts, and also standard application widgets. The supported constraints include connectors, aligners, and spacers. In this videotape, we illustrate our system through the construction of a slider.

© All rights reserved Karsenty et al. and/or ACM Press

1992
 

Karsenty, Solange, Landay, James A. and Weikart, Chris (1992): Inferring Graphical Constraints with Rockit. In: Monk, Andrew, Diaper, Dan and Harrison, Michael D. (eds.) Proceedings of the Seventh Conference of the British Computer Society Human Computer Interaction Specialist Group - People and Computers VII August 15-18, 1992, University of York, UK. pp. 137-153.

Rockit is a system that identifies the possible graphical constraints between objects in a two-dimensional scene and allows the user to choose and apply the desired constraints quickly and easily. Rockit looks for intersections between the position of a designated object and the gravity fields of other objects to determine the possible constraints. These candidate constraints are passed to a rule system that encodes some simple knowledge about how graphical objects normally interact and can thus be constrained to one another. The rules are used to determine the most likely constraints to be applied between the designated object and the other objects in the scene. As the user manipulates the object, the object will gravitate towards the most likely constraint scenario. The inferred constraints are indicated by the creation of graphical and sonic feedback objects. Rockit makes it easy to try other likely scenarios by simply pressing a key, causing the system to cycle through the other possibilities.

© All rights reserved Karsenty et al. and/or Cambridge University Press

 


Page Information

Page maintainer: The Editorial Team
URL: http://www.interaction-design.org/references/authors/james_a__landay.html
