Publication statistics

Pub. period: 2002-2012
Pub. count: 71
Number of co-authors: 94



Co-authors

Number of publications with 3 most frequent co-authors:

Brad A. Myers: 16
Richard E. Ladner: 10
Shaun K. Kane: 9


Productive colleagues

Jacob O. Wobbrock's 3 most productive colleagues, by number of publications:

Brad A. Myers: 154
Scott E. Hudson: 113
James A. Landay: 91
 
 
 

Jacob O. Wobbrock

Has also published under the name "Jacob Wobbrock".

Personal Homepage:
faculty.washington.edu/wobbrock/

Current place of employment:
University of Washington

Jacob O. Wobbrock is a professor in The Information School at the University of Washington. He also holds an adjunct appointment in Computer Science and Engineering. His research interests are input techniques, interaction design, human performance with computing systems, assistive technology, universal design, mobile user interfaces, and other areas of human-computer interaction (HCI). He received his Ph.D. from the Human-Computer Interaction Institute in the School of Computer Science at Carnegie Mellon University in 2006.


Publications by Jacob O. Wobbrock (bibliography)

2012
 

O'Leary, Katie, Delahunt, Charles, Dowden, Patricia, Darmansya, Ivan, Heng, Jiaqi, Riskin, Eve A., Ladner, Richard E. and Wobbrock, Jacob O. (2012): Design goals for a system for enhancing AAC with personalized video. In: Fourteenth Annual ACM SIGACCESS Conference on Assistive Technologies 2012. pp. 223-224.

Enabling end-users of Augmentative and Alternative Communication (AAC) systems to add personalized video content at runtime holds promise for improving communication, but the requirements for such systems are as yet unclear. To explore this issue, we present Vid2Speech, a prototype AAC system for children with complex communication needs (CCN) that uses personalized video to enhance representations of action words. We describe three design goals that guided the integration of personalized video to enhance AAC in our early-stage prototype: 1) Providing social-temporal navigation; 2) Enhancing comprehension; and 3) Enabling customization in real time. Our system concept represents one approach to realizing these goals; however, we contribute the goals and the system as a starting point for future innovations in personalized video-based AAC.

© All rights reserved O'Leary et al. and/or ACM Press

 

Levy, David M., Wobbrock, Jacob O., Kaszniak, Alfred W. and Ostergren, Marilyn (2012): The effects of mindfulness meditation training on multitasking in a high-stress information environment. In: Proceedings of the 2012 Conference on Graphics Interface 2012. pp. 45-52.

We describe an experiment to determine the effects of meditation training on the multitasking behavior of knowledge workers. Three groups each of 12-15 human resources personnel were tested: (1) those who underwent an 8-week training course on mindfulness-based meditation, (2) those who endured a wait period, were tested, and then underwent the same 8-week training, and (3) those who had 8-weeks of training in body relaxation. We found that only those trained in meditation stayed on tasks longer and made fewer task switches, as well as reporting less negative emotion after task performance, as compared with the other two groups. In addition, both the meditation and the relaxation groups showed improved memory for the tasks they performed.

© All rights reserved Levy et al. and/or their publisher

 

Anthony, Lisa and Wobbrock, Jacob O. (2012): $N-protractor: a fast and accurate multistroke recognizer. In: Proceedings of the 2012 Conference on Graphics Interface 2012. pp. 117-120.

Prior work introduced $N, a simple multistroke gesture recognizer based on template matching, intended to be easy to port to new platforms for rapid prototyping, and derived from the unistroke $1 recognizer. $N uses an iterative search method to find the optimal angular alignment between two gesture templates, like $1 before it. Since then, Protractor has been introduced, a unistroke pen and finger gesture recognition algorithm also based on template-matching and $1, but using a closed-form template-matching method instead of an iterative search method, considerably improving recognition speed over $1. This paper presents work to streamline $N with Protractor by using Protractor's closed-form matching approach, and demonstrates that similar speed benefits occur for multistroke gestures from datasets from multiple domains. We find that the Protractor enhancements are over 91% faster than the original $N, and negligibly less accurate (<0.2%). We also discuss the impact that the number of templates, the input speed, and input method (e.g., pen vs. finger) have on recognition accuracy, and examine the most confusable gestures.

© All rights reserved Anthony and Wobbrock and/or their publisher
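
The closed-form alignment step that distinguishes Protractor from the iterative search in $1 and $N can be illustrated with a brief sketch. This is not the authors' code; it assumes both gestures have already been resampled to the same number of points, translated to their centroid, and scale-normalized, and it then computes the rotation that best aligns them, and the resulting similarity, in a single pass.

import math

def protractor_similarity(points, template):
    """Closed-form optimal angular alignment between two preprocessed gestures
    (equal-length lists of (x, y) points, centered and normalized).
    Higher similarity means a better match."""
    a = b = 0.0
    for (x1, y1), (x2, y2) in zip(points, template):
        a += x1 * x2 + y1 * y2    # dot-product term
        b += x1 * y2 - y1 * x2    # cross-product term
    theta = math.atan2(b, a)      # best rotation angle, no iterative search needed
    return a * math.cos(theta) + b * math.sin(theta)

def recognize(candidate, templates):
    """templates: list of (label, points); returns the best-matching label."""
    return max(templates, key=lambda t: protractor_similarity(candidate, t[1]))[0]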

 

Azenkot, Shiri, Wobbrock, Jacob O., Prasain, Sanjana and Ladner, Richard E. (2012): Input finger detection for nonvisual touch screen text entry in Perkinput. In: Proceedings of the 2012 Conference on Graphics Interface 2012. pp. 121-129.

We present Input Finger Detection (IFD), a novel technique for nonvisual touch screen input, and its application, the Perkinput text entry method. With IFD, signals are input into a device with multi-point touches, where each finger represents one bit, either touching the screen or not. Maximum likelihood and tracking algorithms are used to detect which fingers touch the screen based on user-set reference points. The Perkinput text entry method uses the 6-bit Braille encoding with audio feedback, enabling one- and two-handed input. A longitudinal evaluation with 8 blind participants who are proficient in Braille showed that one-handed Perkinput was significantly faster and more accurate than iPhone's VoiceOver. Furthermore, in a case study to evaluate expert performance, one user reached an average session speed of 17.56 words per minute (WPM) with an average uncorrected error rate of just 0.14% using one hand for input. The same participant reached an average session speed of 38.0

© All rights reserved Azenkot et al. and/or their publisher
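
To make the one-bit-per-finger idea concrete, here is a small illustrative sketch, not the IFD algorithm itself (which uses user-set reference points, maximum likelihood, and tracking to decide which fingers touched): a 6-dot Braille cell is treated as a set of raised dots, and the set of dots signaled by touching fingers is looked up in a deliberately partial table.

# Partial 6-dot Braille table: letter -> set of raised dot numbers (dots 1-6).
BRAILLE = {
    "a": {1},
    "b": {1, 2},
    "c": {1, 4},
    "d": {1, 4, 5},
    "e": {1, 5},
}

def decode_chord(touching_dots):
    """touching_dots: set of dot positions (1-6) signaled by the fingers that
    touched the screen. Returns the matching letter or None."""
    for letter, dots in BRAILLE.items():
        if dots == touching_dots:
            return letter
    return None

print(decode_chord({1, 4, 5}))   # -> "d"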

 

Vatavu, Radu-Daniel, Anthony, Lisa and Wobbrock, Jacob O. (2012): Gestures as point clouds: a $P recognizer for user interface prototypes. In: Proceedings of the 2012 International Conference on Multimodal Interfaces 2012. pp. 273-280.

Rapid prototyping of gesture interaction for emerging touch platforms requires that developers have access to fast, simple, and accurate gesture recognition approaches. The $-family of recognizers ($1, $N) addresses this need, but the current most advanced of these, $N-Protractor, has significant memory and execution costs due to its combinatoric gesture representation approach. We present $P, a new member of the $-family, that remedies this limitation by considering gestures as clouds of points. $P performs similarly to $1 on unistrokes and is superior to $N on multistrokes. Specifically, $P delivers >99% accuracy in user-dependent testing with 5+ training samples per gesture type and stays above 99% for user-independent tests when using data from 10 participants. We provide a pseudocode listing of $P to assist developers in porting it to their specific platform and a "cheat sheet" to aid developers in selecting the best member of the $-family for their specific application needs.

© All rights reserved Vatavu et al. and/or ACM Press
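
As a rough illustration of the point-cloud idea, the sketch below treats two preprocessed gestures as unordered sets of points and greedily pairs each point with its nearest unmatched counterpart, summing the distances; the class with the smallest cloud distance wins. Resampling, scaling, and translation are assumed to have already been applied, and the weighting and multiple starting points used by the published $P are omitted, so this is an approximation rather than the paper's pseudocode.

import math

def cloud_distance(points, template):
    """Greedy one-to-one matching between two equal-size point clouds:
    each point is paired with its nearest still-unmatched template point,
    and the pairwise distances are summed."""
    unmatched = list(template)
    total = 0.0
    for px, py in points:
        j = min(range(len(unmatched)),
                key=lambda k: (px - unmatched[k][0]) ** 2 + (py - unmatched[k][1]) ** 2)
        qx, qy = unmatched.pop(j)
        total += math.hypot(px - qx, py - qy)
    return total

def classify(candidate, templates):
    """templates: list of (label, point_cloud); returns the closest label."""
    return min(templates, key=lambda t: cloud_distance(candidate, t[1]))[0]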

2011
 

Wobbrock, Jacob O., Findlater, Leah, Gergle, Darren and Higgins, James J. (2011): The aligned rank transform for nonparametric factorial analyses using only anova procedures. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 143-146.

Nonparametric data from multi-factor experiments arise often in human-computer interaction (HCI). Examples may include error counts, Likert responses, and preference tallies. But because multiple factors are involved, common nonparametric tests (e.g., Friedman) are inadequate, as they are unable to examine interaction effects. While some statistical techniques exist to handle such data, these techniques are not widely available and are complex. To address these concerns, we present the Aligned Rank Transform (ART) for nonparametric factorial data analysis in HCI. The ART relies on a preprocessing step that "aligns" data before applying averaged ranks, after which point common ANOVA procedures can be used, making the ART accessible to anyone familiar with the F-test. Unlike most articles on the ART, which only address two factors, we generalize the ART to N factors. We also provide ARTool and ARTweb, desktop and Web-based programs for aligning and ranking data. Our re-examination of some published HCI results exhibits advantages of the ART.

© All rights reserved Wobbrock et al. and/or their publisher
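
To make the align-then-rank idea concrete, here is a minimal sketch of one ART step for the main effect of a single factor in a two-factor, between-subjects layout (the paper and ARTool handle N factors and repeated measures; the function and column names here are illustrative, and pandas/statsmodels are assumed to be available). The response is aligned by removing the estimated contributions of all other effects, the aligned values are ranked, and an ordinary factorial ANOVA is run on the ranks, interpreting only the effect the data were aligned for.

import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

def art_main_effect(df, dv, factor_a, factor_b):
    """Aligned Rank Transform for the main effect of factor_a.
    df: long-format DataFrame with columns dv, factor_a, factor_b."""
    grand = df[dv].mean()
    cell_means = df.groupby([factor_a, factor_b])[dv].transform("mean")
    a_effect = df.groupby(factor_a)[dv].transform("mean") - grand
    # 1. align: residual from the cell mean plus the effect of interest
    aligned = (df[dv] - cell_means) + a_effect
    # 2. rank the aligned responses (average ranks for ties)
    df = df.assign(art_rank=aligned.rank())
    # 3. ordinary factorial ANOVA on the ranks; interpret only factor_a
    model = ols(f"art_rank ~ C({factor_a}) * C({factor_b})", data=df).fit()
    return sm.stats.anova_lm(model, typ=2)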

 

Kane, Shaun K., Wobbrock, Jacob O. and Ladner, Richard E. (2011): Usable gestures for blind people: understanding preference and performance. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 413-422.

Despite growing awareness of the accessibility issues surrounding touch screen use by blind people, designers still face challenges when creating accessible touch screen interfaces. One major stumbling block is a lack of understanding about how blind people actually use touch screens. We conducted two user studies that compared how blind people and sighted people use touch screen gestures. First, we conducted a gesture elicitation study in which 10 blind and 10 sighted people invented gestures to perform common computing tasks on a tablet PC. We found that blind people have different gesture preferences than sighted people, including preferences for edge-based gestures and gestures that involve tapping virtual keys on a keyboard. Second, we conducted a performance study in which the same participants performed a set of reference gestures. We found significant differences in the speed, size, and shape of gestures performed by blind people versus those performed by sighted people. Our results suggest new design guidelines for accessible touch screen interfaces.

© All rights reserved Kane et al. and/or their publisher

 

Shinohara, Kristen and Wobbrock, Jacob O. (2011): In the shadow of misperception: assistive technology use and social interactions. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 705-714.

Few research studies focus on how the use of assistive technologies is affected by social interaction among people. We present an interview study of 20 individuals to determine how assistive technology use is affected by social and professional contexts and interactions. We found that specific assistive devices sometimes marked their users as having disabilities; that functional access took priority over feeling self-conscious when using assistive technologies; and that two misperceptions pervaded assistive technology use: (1) that assistive devices could functionally eliminate a disability, and (2) that people with disabilities would be helpless without their devices. Our findings provide further evidence that accessibility should be built into mainstream technologies. When this is not feasible, assistive devices should incorporate cutting edge technologies and strive to be designed for social acceptability, a new design approach we propose here.

© All rights reserved Shinohara and Wobbrock and/or their publisher

 

Wobbrock, Jacob O., Shinohara, Kristen and Jansen, Alex (2011): The effects of task dimensionality, endpoint deviation, throughput calculation, and experiment design on pointing measures and models. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 1639-1648.

Fitts' law (1954) characterizes pointing speed-accuracy performance as throughput, whose invariance to target distances (A) and sizes (W) is known. However, it is unknown whether throughput and Fitts' law models in general are invariant to task dimensionality (1-D vs. 2-D), whether univariate (SDx) or bivariate (SDx,y) endpoint deviation is used, whether throughput is calculated using the mean-of-means approach or the slope-inverse approach, or whether Guiard's (2009) Form × Scale experiment design is used instead of fully crossed A-W factors. We empirically investigate the confluence of these issues, finding that Fitts' law is largely invariant across 1-D and 2-D, provided that univariate endpoint deviation (SDx) is used in both, but that for 2-D pointing data, bivariate endpoint deviation (SDx,y) results in better Fitts' law models. Also, the mean-of-means throughput calculation exhibits lower variance across subjects and dimensionalities than the slope-inverse calculation. In light of these and other findings, we offer recommendations for pointing evaluations, especially in 2-D. We also offer an evaluation tool called FittsStudy to facilitate comparisons.

© All rights reserved Wobbrock et al. and/or their publisher
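
For readers unfamiliar with the quantities involved, the mean-of-means throughput calculation discussed above can be sketched as follows (one common convention, not necessarily the exact code behind the FittsStudy tool): effective width is derived from the spread of selection endpoints in each A x W condition, an effective index of difficulty is computed from it, and throughput is the mean over conditions of IDe / MT.

import math
import statistics

def mean_of_means_throughput(conditions):
    """Fitts' law throughput (bits/s) via the mean-of-means approach.
    conditions: one dict per A x W condition with
      'A'  - nominal target distance,
      'endpoints' - selection coordinates along the movement axis (same units as A),
      'MT' - mean movement time for the condition, in seconds."""
    tps = []
    for c in conditions:
        sd = statistics.stdev(c["endpoints"])    # univariate endpoint deviation (SDx)
        we = 4.133 * sd                          # effective width
        ide = math.log2(c["A"] / we + 1)         # effective index of difficulty (bits)
        tps.append(ide / c["MT"])                # bits per second for this condition
    return statistics.mean(tps)                  # averaged over all conditions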

 

Wobbrock, Jacob O., Jansen, Alex and Shinohara, Kristen (2011): Modeling and predicting pointing errors in two dimensions. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 1653-1656.

Recently, Wobbrock et al. (2008) derived a predictive model of pointing accuracy to complement Fitts' law's predictive model of pointing speed. However, their model was based on one-dimensional (1-D) horizontal movement, while applications of such a model require two dimensions (2-D). In this paper, the pointing error model is investigated for 2-D pointing in a study of 21 participants performing a time-matching task on the ISO 9241-9 ring-of-circles layout. Results show that the pointing error model holds well in 2-D. If univariate endpoint deviation (SDx) is used, regressing on N=72 observed vs. predicted error rate points yields R² = .953. If bivariate endpoint deviation (SDx,y) is used, regression yields R² = .936. For both univariate and bivariate models, the magnitudes of observed and predicted error rates are comparable.

© All rights reserved Wobbrock et al. and/or their publisher

 

Chilana, Parmit K., Ko, Andrew J., Wobbrock, Jacob O., Grossman, Tovi and Fitzmaurice, George (2011): Post-deployment usability: a survey of current practices. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 2243-2246.

Despite the growing research on usability in the pre-development phase, we know little about post-deployment usability activities. To characterize these activities, we surveyed 333 full-time usability professionals and consultants working in large and small corporations from a wide range of industries. Our results show that, as a whole, usability professionals are currently not playing a substantial role in the post-deployment phase compared to other phases of user-centered design, but when they do, practitioners find their interactions quite valuable. We highlight opportunities in HCI research and practice to bridge this gap by working more closely with software support and maintenance teams. We also raise the need to understand what might be called 'usability maintenance,' that is, the process and procedures by which usability is maintained after deployment.

© All rights reserved Chilana et al. and/or their publisher

 

Findlater, Leah, Wobbrock, Jacob O. and Wigdor, Daniel (2011): Typing on flat glass: examining ten-finger expert typing patterns on touch surfaces. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 2453-2462.

Touch screen surfaces large enough for ten-finger input have become increasingly popular, yet typing on touch screens pales in comparison to physical keyboards. We examine typing patterns that emerge when expert users of physical keyboards touch-type on a flat surface. Our aim is to inform future designs of touch screen keyboards, with the ultimate goal of supporting touch-typing with limited tactile feedback. To study the issues inherent to flat-glass typing, we asked 20 expert typists to enter text under three conditions: (1) with no visual keyboard and no feedback on input errors, then (2) with and (3) without a visual keyboard, but with some feedback. We analyzed touch contact points and hand contours, looking at attributes such as natural finger positioning, the spread of hits among individual keys, and the pattern of non-finger touches. We also show that expert typists exhibit spatially consistent key press distributions within an individual, which provides evidence that eyes-free touch-typing may be possible on touch surfaces and points to the role of personalization in such a solution. We conclude with implications for design.

© All rights reserved Findlater et al. and/or their publisher

 

Azenkot, Shiri, Prasain, Sanjana, Borning, Alan, Fortuna, Emily, Ladner, Richard E. and Wobbrock, Jacob O. (2011): Enhancing independence and safety for blind and deaf-blind public transit riders. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 3247-3256.

Blind and deaf-blind people often rely on public transit for everyday mobility, but using transit can be challenging for them. We conducted semi-structured interviews with 13 blind and deaf-blind people to understand how they use public transit and what human values were important to them in this domain. Two key values were identified: independence and safety. We developed GoBraille, two related Braille-based applications that provide information about buses and bus stops while supporting the key values. GoBraille is built on MoBraille, a novel framework that enables a Braille display to benefit from many features in a smartphone without knowledge of proprietary, device-specific protocols. Finally, we conducted user studies with blind people to demonstrate that GoBraille enables people to travel more independently and safely. We also conducted co-design with a deaf-blind person, finding that a minimalist interface, with short input and output messages, was most effective for this population.

© All rights reserved Azenkot et al. and/or their publisher

 

Jansen, Alex, Findlater, Leah and Wobbrock, Jacob O. (2011): From the lab to the world: lessons from extending a pointing technique for real-world use. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 1867-1872.

We present the Pointing Magnifier as a case study for understanding the issues and challenges of deploying lab-validated pointing facilitation techniques into the real world. The Pointing Magnifier works by magnifying the contents of an area cursor to allow for selection in a magnified visual and motor space. The technique has been shown in prior lab studies to be effective at reducing the need for fine pointing for motor-impaired users. We highlight key design and technical challenges in bringing the technique, and such techniques in general, from the lab to the field.

© All rights reserved Jansen et al. and/or their publisher

 

Evans, Abigail and Wobbrock, Jacob O. (2011): Input observer: measuring text entry and pointing performance from naturalistic everyday computer use. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 1879-1884.

In this paper we describe the Input Observer, a background application that will be capable of measuring a user's text entry and pointing abilities from everyday computer use "in the wild." The application runs quietly in the background of the user's computer and utilizes global Windows Hooks to observe the text entry input stream and use of the mouse, and will yield data equivalent to results from lab-based measures of text entry and target acquisition. A major challenge is the lack of a task model from which researchers can know the intent of the user at every moment. We describe our approach to handling this issue for both text entry and mouse pointing.

© All rights reserved Evans and Wobbrock and/or their publisher

 

Levy, David M., Wobbrock, Jacob O., Kaszniak, Alfred W. and Ostergren, Marilyn (2011): Initial results from a study of the effects of meditation on multitasking performance. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 2011-2016.

This paper reports initial results from a study exploring whether training in meditation or relaxation can improve office workers' ability to multitask on a computer more effectively and/or with less stress. Human resource (HR) personnel were given 8 weeks of training in either mindfulness meditation or body relaxation techniques, and were given a stressful multitasking test both before and after training. (A third group, a control group, received no intervention during the 8-week period but was tested both before and after this period.) Results indicate that overall task time and errors did not differ significantly among the three groups. However, the meditation group reported lower levels of stress and showed better memory for the tasks they had performed; they also switched tasks less often and remained focused on tasks longer.

© All rights reserved Levy et al. and/or their publisher

 

Kim, Joy, Tran, Jessica J., Johnson, Tressa W., Ladner, Richard, Riskin, Eve and Wobbrock, Jacob O. (2011): Effect of MobileASL on communication among deaf users. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 2185-2190.

MobileASL, a software program enabling sign-language video on mobile devices over conventional U.S. cellular networks, was evaluated in a three-week field study during the summer of 2010. Through a series of interviews, questionnaires, and a focus group, we asked participants about their behavior with and perceptions of MobileASL. In addition, we used on-device experience sampling and usage logging to observe how MobileASL was used. Initial results indicate that although participants felt that MobileASL's short battery life limited its use, participants took advantage of the mobility of the technology and used it for in-the-moment information gathering in places like buses, restaurants, and shopping areas.

© All rights reserved Kim et al. and/or their publisher

 

Tran, Jessica J., Kim, Joy, Chon, Jaehong, Riskin, Eve A., Ladner, Richard E. and Wobbrock, Jacob O. (2011): Evaluating quality and comprehension of real-time sign language video on mobile phones. In: Thirteenth Annual ACM SIGACCESS Conference on Assistive Technologies 2011. pp. 115-122.

Video and image quality are often objectively measured using peak signal-to-noise ratio (PSNR), but for sign language video, human comprehension is most important. Yet the relationship of human comprehension to PSNR has not been studied. In this survey, we determine how well PSNR matches human comprehension of sign language video. We use very low bitrates (10-60 kbps) and two low spatial resolutions (192×144 and 320×240 pixels) which may be typical of video transmission on mobile phones using 3G networks. In a national online video-based user survey of 103 respondents, we found that respondents preferred the 320×240 spatial resolution transmitted at 20 kbps and higher; this does not match what PSNR results would predict. However, when comparing perceived ease/difficulty of comprehension, we found that responses did correlate well with measured PSNR. This suggests that PSNR may not be suitable for representing subjective video quality, but can be reliable as a measure for comprehensibility of American Sign Language (ASL) video. These findings are applied to our experimental mobile phone application, MobileASL, which enables real-time sign language communication for Deaf users at low bandwidths over the U.S. 3G cellular network.

© All rights reserved Tran et al. and/or ACM Press
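
For reference, PSNR is a simple function of the mean squared error between original and compressed frames; the sketch below shows the standard definition for 8-bit video and is general background rather than code from the MobileASL project.

import math

def psnr(original, compressed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length sequences
    of pixel values (8-bit by default). Higher is better; identical
    signals give infinity."""
    mse = sum((o - c) ** 2 for o, c in zip(original, compressed)) / len(original)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak ** 2 / mse)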

 

Azenkot, Shiri, Ladner, Richard E. and Wobbrock, Jacob O. (2011): Smartphone haptic feedback for nonvisual wayfinding. In: Thirteenth Annual ACM SIGACCESS Conference on Assistive Technologies 2011. pp. 281-282.

We explore using vibration on a smartphone to provide turn-by-turn walking instructions to people with visual impairments. We present two novel feedback methods called Wand and ScreenEdge and compare them to a third method called Pattern. We built a prototype and conducted a user study where 8 participants walked along a pre-programmed route using the 3 vibration feedback methods and no audio output. Participants interpreted the feedback with an average error rate of just 4 percent. Most preferred the Pattern method, where patterns of vibrations indicate different directions, or the ScreenEdge method, where areas of the screen correspond to directions and touching them may induce vibration.

© All rights reserved Azenkot et al. and/or ACM Press

 

Kane, Shaun K., Morris, Meredith Ringel, Perkins, Annuska Z., Wigdor, Daniel, Ladner, Richard E. and Wobbrock, Jacob O. (2011): Access overlays: improving non-visual access to large touch screens for blind users. In: Proceedings of the 2011 ACM Symposium on User Interface Software and Technology 2011. pp. 273-282.

Many touch screens remain inaccessible to blind users, and those approaches to providing access that do exist offer minimal support for interacting with large touch screens or spatial data. In this paper, we introduce a set of three software-based access overlays intended to improve the accessibility of large touch screen interfaces, specifically interactive tabletops. Our access overlays are called edge projection, neighborhood browsing, and touch-and-speak. In a user study, 14 blind users compared access overlays to an implementation of Apple's VoiceOver screen reader. Our results show that two of our techniques were faster than VoiceOver, that participants correctly answered more questions about the screen's layout using our techniques, and that participants overwhelmingly preferred our techniques. We developed several applications demonstrating the use of access overlays, including an accessible map kiosk and an accessible board game.

© All rights reserved Kane et al. and/or ACM Press

 

Avrahami, Daniel, Wobbrock, Jacob O. and Izadi, Shahram (2011): Portico: tangible interaction on and around a tablet. In: Proceedings of the 2011 ACM Symposium on User Interface Software and Technology 2011. pp. 347-356.

We present Portico, a portable system for enabling tangible interaction on and around tablet computers. Two cameras on small foldable arms are positioned above the display to recognize a variety of physical objects placed on or around the tablet. These cameras have a larger field-of-view than the screen, allowing Portico to extend interaction significantly beyond the tablet itself. Our prototype, which uses a 12" tablet, delivers an interaction space six times the size of the tablet screen. Portico thus allows tablets to extend both their sensing capabilities and interaction space without sacrificing portability. We describe the design of our system and present a number of applications that demonstrate Portico's unique capability to track objects. We focus on a number of fun applications that demonstrate how such a device can be used as a low-cost way to create personal surface computing experiences. Finally, we discuss the challenges in supporting tangible interaction beyond the screen and describe possible mechanisms for overcoming them.

© All rights reserved Avrahami et al. and/or ACM Press

2010
 

Rosenthal, Stephanie, Kane, Shaun K., Wobbrock, Jacob O. and Avrahami, Daniel (2010): Augmenting on-screen instructions with micro-projected guides: when it works, and when it fails. In: Proceedings of the 2010 International Conference on Ubiquitous Computing 2010. pp. 203-212.

We present a study that evaluates the effectiveness of augmenting on-screen instructions with micro-projection for manual task guidance, unlike prior work, which replaced screen instructions with alternative modalities (e.g., head-mounted displays). In our study, 30 participants completed 10 trials each of 11 manual tasks chosen to represent a set of common task-components (e.g., cutting, folding) found in many everyday activities such as crafts, cooking, and hobby electronics. Fifteen participants received only on-screen instructions, and 15 received both on-screen and micro-projected instructions. In contrast to prior work, which focused only on whole tasks, our study examines the benefit of augmenting common task instructions. The augmented instructions improved participants' performance overall; however, we show that in certain cases when projected guides and physical objects visually interfered, projected elements caused increased errors. Our results demonstrate that examining effectiveness at an instruction level is both useful and necessary, and provide insight into the design of systems that help users perform everyday tasks.

© All rights reserved Rosenthal et al. and/or their publisher

 

Chilana, Parmit K., Wobbrock, Jacob O. and Ko, Andrew J. (2010): Understanding usability practices in complex domains. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 2337-2346.

Although usability methods are widely used for evaluating conventional graphical user interfaces and websites, there is a growing concern that current approaches are inadequate for evaluating complex, domain-specific tools. We interviewed 21 experienced usability professionals, including in-house experts, external consultants, and managers working in a variety of complex domains, and uncovered the challenges commonly posed by domain complexity and how practitioners work around them. We found that despite the best efforts by usability professionals to get familiar with complex domains on their own, the lack of formal domain expertise can be a significant hurdle for carrying out effective usability evaluations. Partnerships with domain experts lead to effective results as long as domain experts are willing to be an integral part of the usability team. These findings suggest that for achieving usability in complex domains, some fundamental educational changes may be needed in the training of usability professionals.

© All rights reserved Chilana et al. and/or their publisher

 

Findlater, Leah, Jansen, Alex, Shinohara, Kristen, Dixon, Morgan, Kamb, Peter, Rakita, Joshua and Wobbrock, Jacob O. (2010): Enhanced area cursors: reducing fine pointing demands for people with motor impairments. In: Proceedings of the 2010 ACM Symposium on User Interface Software and Technology 2010. pp. 153-162.

Computer users with motor impairments face major challenges with conventional mouse pointing. These challenges are mostly due to fine pointing corrections at the final stages of target acquisition. To reduce the need for correction-phase pointing and to lessen the effects of small target size on acquisition difficulty, we introduce four enhanced area cursors, two of which rely on magnification and two of which use goal crossing. In a study with motor-impaired and able-bodied users, we compared the new designs to the point and Bubble cursors, the latter of which had not been evaluated for users with motor impairments. Two enhanced area cursors, the Visual-Motor-Magnifier and Click-and-Cross, were the most successful new designs for users with motor impairments.

© All rights reserved Findlater et al. and/or their publisher

 

Tran, Jessica J., Johnson, Tressa W., Kim, Joy, Rodriguez, Rafael, Yin, Sheri, Riskin, Eve A., Ladner, Richard E. and Wobbrock, Jacob O. (2010): A web-based user survey for evaluating power saving strategies for deaf users of mobileASL. In: Twelfth Annual ACM SIGACCESS Conference on Assistive Technologies 2010. pp. 115-122.

MobileASL is a video compression project for two-way, real-time video communication on cell phones, allowing Deaf people to communicate in the language most accessible to them, American Sign Language. Unfortunately, running MobileASL quickly depletes a full battery charge in a few hours. Previous work on MobileASL investigated a method called variable frame rate (VFR) to increase the battery duration. We expand on this previous work by creating two new power saving algorithms, variable spatial resolution (VSR), and the application of both VFR and VSR. These algorithms extend the battery life by altering the temporal and/or spatial resolutions of video transmitted on MobileASL. We found that implementing only VFR extended the battery life from 284 minutes to 307 minutes; implementing only VSR extended the battery life to 306 minutes, and implementing both VFR and VSR extended the battery life to 315 minutes. We evaluated all three algorithms by creating a linguistically accessible online survey to investigate Deaf people's perceptions of video quality when these algorithms were applied. In our survey results, we found that VFR produces perceived video choppiness and VSR produces perceived video blurriness; however, a surprising finding was that when both VFR and VSR are used together, they largely ameliorate the choppiness and blurriness perceived, i.e., they each improve the use of the other. This is a useful finding because using VFR and VSR together saves the most battery life.

© All rights reserved Tran et al. and/or their publisher

 

Anthony, Lisa and Wobbrock, Jacob O. (2010): A lightweight multistroke recognizer for user interface prototypes. In: Proceedings of the 2010 Conference on Graphics Interface 2010. pp. 245-252.

With the expansion of pen- and touch-based computing, new user interface prototypes may incorporate stroke gestures. Many gestures comprise multiple strokes, but building state-of-the-art multistroke gesture recognizers is nontrivial and time-consuming. Luckily, user interface prototypes often do not require state-of-the-art recognizers that are general and maintainable, due to the simpler nature of most user interface gestures. To enable easy incorporation of multistroke recognition in user interface prototypes, we present $N, a lightweight, concise multistroke recognizer that uses only simple geometry and trigonometry. A full pseudocode listing is given as an appendix. $N is a significant extension to the $1 unistroke recognizer, which has seen quick uptake in prototypes but has key limitations. $N goes further by (1) recognizing gestures comprising multiple strokes, (2) automatically generalizing from one multistroke to all possible multistrokes using alternative stroke orders and directions, (3) recognizing one-dimensional gestures such as lines, and (4) providing bounded rotation invariance. In addition, $N uses two speed optimizations, one with start angles that saves

© All rights reserved Anthony and Wobbrock and/or their publisher
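
The generalization step described above, turning one stored multistroke into every unistroke that a user's stroke order and direction could produce, can be sketched briefly. The function below is illustrative; the paper's appendix contains the authoritative pseudocode, including the speed optimizations.

from itertools import permutations, product

def unistroke_permutations(strokes):
    """Expand a multistroke gesture (list of strokes, each a list of (x, y)
    points) into all unistrokes obtainable by reordering strokes and
    reversing individual strokes, as $N does when storing a template."""
    results = []
    for order in permutations(strokes):
        for reverse_flags in product((False, True), repeat=len(order)):
            unistroke = []
            for stroke, reverse in zip(order, reverse_flags):
                unistroke.extend(reversed(stroke) if reverse else stroke)
            results.append(unistroke)
    return results

# A 2-stroke gesture yields 2! x 2^2 = 8 candidate unistrokes.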

 

Morris, Meredith Ringel, Wobbrock, Jacob O. and Wilson, Andrew D. (2010): Understanding users' preferences for surface gestures. In: Proceedings of the 2010 Conference on Graphics Interface 2010. pp. 261-268.

We compare two gesture sets for interactive surfaces -- a set of gestures created by an end-user elicitation method and a set of gestures authored by three HCI researchers. Twenty-two participants who were blind to the gestures' authorship evaluated 81 gestures presented and performed on a Microsoft Surface. Our findings indicate that participants preferred gestures authored by larger groups of people, such as those created by end-user elicitation methodologies or those proposed by more than one researcher. This preference pattern seems to arise in part because the HCI researchers proposed more physically and conceptually complex gestures than end-users. We discuss our findings in detail, including the implications for surface gesture design.

© All rights reserved Morris et al. and/or their publisher

 

Ko, Andrew J. and Wobbrock, Jacob O. (2010): Cleanroom: Edit-Time Error Detection with the Uniqueness Heuristic. In: Hundhausen, Christopher D., Pietriga, Emmanuel, Diaz, Paloma and Rosson, Mary Beth (eds.) IEEE Symposium on Visual Languages and Human-Centric Computing, VL/HCC 2010 21-25 September 2010, 2010, Leganés-Madrid, Spain. pp. 7-14.

 

Chilana, Parmit K., Ko, Andrew J. and Wobbrock, Jacob O. (2010): Understanding Expressions of Unwanted Behaviors in Open Bug Reporting. In: Hundhausen, Christopher D., Pietriga, Emmanuel, Diaz, Paloma and Rosson, Mary Beth (eds.) IEEE Symposium on Visual Languages and Human-Centric Computing, VL/HCC 2010 21-25 September 2010, 2010, Leganés-Madrid, Spain. pp. 203-206.

 

Gajos, Krzysztof Z., Weld, Daniel S. and Wobbrock, Jacob O. (2010): Automatically generating personalized user interfaces with Supple. In Artificial Intelligence, 174 (12) pp. 910-950.

Today's computer-human interfaces are typically designed with the assumption that they are going to be used by an able-bodied person, who is using a typical set of input and output devices, who has typical perceptual and cognitive abilities, and who is sitting in a stable, warm environment. Any deviation from these assumptions may drastically hamper the person's effectiveness -- not because of any inherent barrier to interaction, but because of a mismatch between the person's effective abilities and the assumptions underlying the interface design. We argue that automatic personalized interface generation is a feasible and scalable solution to this challenge. We present our Supple system, which can automatically generate interfaces adapted to a person's devices, tasks, preferences, and abilities. In this paper we formally define interface generation as an optimization problem and demonstrate that, despite a large solution space (of up to 10^17 possible interfaces), the problem is computationally feasible. In fact, for a particular class of cost functions, Supple produces exact solutions in under a second for most cases, and in a little over a minute in the worst case encountered, thus enabling run-time generation of user interfaces. We further show how several different design criteria can be expressed in the cost function, enabling different kinds of personalization. We also demonstrate how this approach enables extensive user- and system-initiated run-time adaptations to the interfaces after they have been generated. Supple is not intended to replace human user interface designers -- instead, it offers alternative user interfaces for those people whose devices, tasks, preferences, and abilities are not sufficiently addressed by the hand-crafted designs. Indeed, the results of our study show that, compared to manufacturers' defaults, interfaces automatically generated by Supple significantly improve speed, accuracy and satisfaction of people with motor impairments.

© All rights reserved Gajos et al. and/or Elsevier Science

 Cited in the following chapter:

User Interface Design Adaptation: [/encyclopedia/user_interface_design_adaptation.html]
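
The flavor of treating interface generation as cost minimization, as described in the Supple abstract above, can be conveyed with a toy sketch. This is only an illustration of the general idea, not Supple's actual algorithm, cost functions, or data model: each abstract element has a few candidate widgets with made-up costs, and the generator picks the cheapest combination that fits a size budget (Supple itself uses constrained optimization over a far richer space).

from itertools import product

# Hypothetical candidate renderings: element -> [(widget, screen_area, cost), ...]
CANDIDATES = {
    "volume": [("slider", 40, 1.0), ("spinner", 15, 2.5)],
    "preset": [("radio_buttons", 60, 1.2), ("dropdown", 20, 2.0)],
}

def generate_interface(candidates, area_budget):
    """Brute-force search for the lowest-cost widget assignment that fits
    the available area; real systems use smarter optimization."""
    elements = list(candidates)
    best, best_cost = None, float("inf")
    for combo in product(*(candidates[e] for e in elements)):
        area = sum(widget[1] for widget in combo)
        cost = sum(widget[2] for widget in combo)
        if area <= area_budget and cost < best_cost:
            best = {e: widget[0] for e, widget in zip(elements, combo)}
            best_cost = cost
    return best

print(generate_interface(CANDIDATES, area_budget=80))   # {'volume': 'slider', 'preset': 'dropdown'}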


 
2009
 

Harada, Susumu, Wobbrock, Jacob O., Malkin, Jonathan, Bilmes, Jeff A. and Landay, James A. (2009): Longitudinal study of people learning to use continuous voice-based cursor control. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 347-356.

We conducted a 2.5 week longitudinal study with five motor impaired (MI) and four non-impaired (NMI) participants, in which they learned to use the Vocal Joystick, a voice-based user interface control system. We found that the participants were able to learn the mapping between the vowel sounds and directions used by the Vocal Joystick, and showed marked improvement in their target acquisition performance. At the end of the ten session period, the NMI group reached the same level of performance as the previously measured "expert" Vocal Joystick performance, and the MI group was able to reach 70% of that level. Two of the MI participants were also able to approach the performance of their preferred device, a touchpad. We report on a number of issues that can inform the development of further enhancements in the realm of voice-driven computer control.

© All rights reserved Harada et al. and/or ACM Press

 

Wobbrock, Jacob O., Morris, Meredith Ringel and Wilson, Andrew D. (2009): User-defined gestures for surface computing. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 1083-1092.

Many surface computing prototypes have employed gestures created by system designers. Although such gestures are appropriate for early investigations, they are not necessarily reflective of user behavior. We present an approach to designing tabletop gestures that relies on eliciting gestures from non-technical users by first portraying the effect of a gesture, and then asking users to perform its cause. In all, 1080 gestures from 20 participants were logged, analyzed, and paired with think-aloud data for 27 commands performed with 1 and 2 hands. Our findings indicate that users rarely care about the number of fingers they employ, that one hand is preferred to two, that desktop idioms strongly influence users' mental models, and that some commands elicit little gestural agreement, suggesting the need for on-screen widgets. We also present a complete user-defined gesture set, quantitative agreement scores, implications for surface technology, and a taxonomy of surface gestures. Our results will help designers create better gesture sets informed by user behavior.

© All rights reserved Wobbrock et al. and/or ACM Press

 

Wobbrock, Jacob O., Fogarty, James, Liu, Shih-Yen (Sean), Kimuro, Shunichi and Harada, Susumu (2009): The angle mouse: target-agnostic dynamic gain adjustment based on angular deviation. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 1401-1410.

We present a novel method of dynamic C-D gain adaptation that improves target acquisition for users with motor impairments. Our method, called the Angle Mouse, adjusts the mouse C-D gain based on the deviation of angles sampled during movement. When angular deviation is low, the gain is kept high. When angular deviation is high, the gain is dropped, making the target bigger in motor-space. A key feature of the Angle Mouse is that, unlike most pointing facilitation techniques, it is target-agnostic, requiring no knowledge of target locations or dimensions. This means that the problem of distractor targets is avoided because adaptation is based solely on the user's behavior. In a study of 16 people, 8 of whom had motor impairments, we found that the Angle Mouse improved motor-impaired pointing throughput by 10.3% over the Windows default mouse and 11.0% over sticky icons. For able-bodied users, there was no significant difference among the three techniques, as Angle Mouse throughput was within 1.2% of the default. Thus, the Angle Mouse improved pointing performance for users with motor impairments while remaining unobtrusive for able-bodied users.

© All rights reserved Wobbrock et al. and/or ACM Press
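
The adaptation rule described above can be illustrated with a small sketch: track recent movement angles, measure their spread, and map low spread to high gain and high spread to low gain. The window size, gain range, and linear mapping below are invented for illustration and are not the parameters reported for the Angle Mouse; the sketch also ignores angle wrap-around for brevity.

import math
from collections import deque
from statistics import pstdev

class AngleBasedGain:
    """Target-agnostic C-D gain: straight, smooth movement keeps gain high;
    angular scatter (typical of corrective submovements) lowers it."""

    def __init__(self, window=12, low_gain=0.4, high_gain=2.0, max_dev=1.0):
        self.angles = deque(maxlen=window)
        self.low_gain, self.high_gain, self.max_dev = low_gain, high_gain, max_dev

    def gain(self, dx, dy):
        """Feed one mouse delta; return the C-D gain to apply to it."""
        if dx or dy:
            self.angles.append(math.atan2(dy, dx))
        if len(self.angles) < 2:
            return self.high_gain
        deviation = min(pstdev(self.angles), self.max_dev)   # spread of recent angles
        t = deviation / self.max_dev                         # 0 = straight, 1 = scattered
        return self.high_gain + t * (self.low_gain - self.high_gain)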

 

Choe, Eun Kyoung, Shinohara, Kristen, Chilana, Parmit K., Dixon, Morgan and Wobbrock, Jacob O. (2009): Exploring the design of accessible goal crossing desktop widgets. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 3733-3738.

Prior work has shown that goal crossing may be a more accessible interaction technique than conventional pointing-and-clicking for motor-impaired users. Although goal crossing with pen-based input devices has been studied, pen-based designs have limited applicability on the desktop because the pen can "fly in," cross, and "fly out," whereas a persistent mouse cursor cannot. We therefore explore possible designs for accessible mouse-based goal crossing widgets that avoid triggering unwanted goals by using secondary goals, gestures, and corners and edges. We identify four design principles for accessible desktop goal crossing widgets: ease of use for motor-impaired users, safety from false selections, efficiency, and scalability.

© All rights reserved Choe et al. and/or ACM Press

 

Kane, Shaun K., Jayant, Chandrika, Wobbrock, Jacob O. and Ladner, Richard E. (2009): Freedom to roam: a study of mobile device adoption and accessibility for people with visual and motor disabilities. In: Eleventh Annual ACM SIGACCESS Conference on Assistive Technologies 2009. pp. 115-122.

Mobile devices provide people with disabilities new opportunities to act independently in the world. However, these empowering devices have their own accessibility challenges. We present a formative study that examines how people with visual and motor disabilities select, adapt, and use mobile devices in their daily lives. We interviewed 20 participants with visual and motor disabilities and asked about their current use of mobile devices, including how they select them, how they use them while away from home, and how they adapt to accessibility challenges when on the go. Following the interviews, 19 participants completed a diary study in which they recorded their experiences using mobile devices for one week. Our results show that people with visual and motor disabilities use a variety of strategies to adapt inaccessible mobile devices and successfully use them to perform everyday tasks and navigate independently. We provide guidelines for more accessible and empowering mobile device design.

© All rights reserved Kane et al. and/or their publisher

 

Cherniavsky, Neva, Chon, Jaehong, Wobbrock, Jacob O., Ladner, Richard E. and Riskin, Eve A. (2009): Activity analysis enabling real-time video communication on mobile phones for deaf users. In: Proceedings of the ACM Symposium on User Interface Software and Technology 2009. pp. 79-88.

We describe our system called MobileASL for real-time video communication on the current U.S. mobile phone network. The goal of MobileASL is to enable Deaf people to communicate with Sign Language over mobile phones by compressing and transmitting sign language video in real-time on an off-the-shelf mobile phone, which has a weak processor, uses limited bandwidth, and has little battery capacity. We develop several H.264-compliant algorithms to save system resources while maintaining ASL intelligibility by focusing on the important segments of the video. We employ a dynamic skin-based region-of-interest (ROI) that encodes the skin at higher quality at the expense of the rest of the video. We also automatically recognize periods of signing versus not signing and raise and lower the frame rate accordingly, a technique we call variable frame rate (VFR). We show that our variable frame rate technique results in a 47% gain in battery life on the phone, corresponding to an extra 68 minutes of talk time. We also evaluate our system in a user study. Participants fluent in ASL engage in unconstrained conversations over mobile phones in a laboratory setting. We find that the ROI increases intelligibility and decreases guessing. VFR increases the need for signs to be repeated and the number of conversational breakdowns, but does not affect the users' perception of adopting the technology. These results show that our sign language sensitive algorithms can save considerable resources without sacrificing intelligibility.

© All rights reserved Cherniavsky et al. and/or their publisher

 

Kane, Shaun K., Avrahami, Daniel, Wobbrock, Jacob O., Harrison, Beverly, Rea, Adam D., Philipose, Matthai and LaMarca, Anthony (2009): Bonfire: a nomadic system for hybrid laptop-tabletop interaction. In: Proceedings of the ACM Symposium on User Interface Software and Technology 2009. pp. 129-138.

We present Bonfire, a self-contained mobile computing system that uses two laptop-mounted laser micro-projectors to project an interactive display space to either side of a laptop keyboard. Coupled with each micro-projector is a camera to enable hand gesture tracking, object recognition, and information transfer within the projected space. Thus, Bonfire is neither a pure laptop system nor a pure tabletop system, but an integration of the two into one new nomadic computing platform. This integration (1) enables observing the periphery and responding appropriately, e.g., to the casual placement of objects within its field of view, (2) enables integration between physical and digital objects via computer vision, (3) provides a horizontal surface in tandem with the usual vertical laptop display, allowing direct pointing and gestures, and (4) enlarges the input/output space to enrich existing applications. We describe Bonfire's architecture, and offer scenarios that highlight Bonfire's advantages. We also include lessons learned and insights for further development and use.

© All rights reserved Kane et al. and/or their publisher

 

Wobbrock, Jacob O., Ko, Andrew J. and Kientz, Julie A. (2009): Reflections on the future of iSchools from inspired junior faculty. In Interactions, 16 (5) pp. 69-71.

 

Chilana, Parmit K., Ko, Andrew J. and Wobbrock, Jacob O. (2009): Designing software for unfamiliar domains. In: Proceedings of the 2009 International Workshop on Cooperative and Human Aspects of Software Engineering 2009. p. 22.

In recent years, software has become indispensable in complex domains such as science, engineering, biomedicine, and finance. Unfortunately, software developers and user researchers, who are usually experts in programming and Human-Computer Interaction (HCI) methods, respectively, often find that the insight needed to design for complex domains only comes with years of domain experience. How can everyone on a software design team acquire just enough knowledge to design effective software, especially user interfaces, without having to become domain experts? We are performing a series of studies to investigate this question, with the ultimate goal of designing tools to help software teams better capture, manage and explore domain knowledge.

© All rights reserved Chilana et al. and/or ACM Press

2008
 

Gajos, Krzysztof Z., Wobbrock, Jacob O. and Weld, Daniel S. (2008): Improving the performance of motor-impaired users with automatically-generated, ability-based interfaces. In: Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems April 5-10, 2008. pp. 1257-1266.

We evaluate two systems for automatically generating personalized interfaces adapted to the individual motor capabilities of users with motor impairments. The first system, SUPPLE, adapts to users' capabilities indirectly by first using the ARNAULD preference elicitation engine to model a user's preferences regarding how he or she likes the interfaces to be created. The second system, SUPPLE++, models a user's motor abilities directly from a set of one-time motor performance tests. In a study comparing these approaches to baseline interfaces, participants with motor impairments were 26.4% faster using ability-based user interfaces generated by SUPPLE++. They also made 73% fewer errors, strongly preferred those interfaces to the manufacturers' defaults, and found them more efficient, easier to use, and much less physically tiring. These findings indicate that rather than requiring some users with motor impairments to adapt themselves to software using separate assistive technologies, software can now adapt itself to the capabilities of its users.

© All rights reserved Gajos et al. and/or ACM Press

 

Wobbrock, Jacob O., Cutrell, Edward, Harada, Susumu and MacKenzie, I. Scott (2008): An error model for pointing based on Fitts' law. In: Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems April 5-10, 2008. pp. 1613-1622.

For decades, Fitts' law (1954) has been used to model pointing time in user interfaces. As with any rapid motor act, faster pointing movements result in increased errors. But although prior work has examined accuracy as the "spread of hits," no work has formulated a predictive model for error rates (0-100%) based on Fitts' law parameters. We show that Fitts' law mathematically implies a predictive error rate model, which we derive. We then describe an experiment in which target size, target distance, and movement time are manipulated. Our results show a strong model fit: a regression analysis of observed vs. predicted error rates yields a correlation of R² = .959 for N=90 points. Furthermore, we show that the effect on error rate of target size (W) is greater than that of target distance (A), indicating a departure from Fitts' law, which maintains that W and A contribute proportionally to index of difficulty (ID). Our error model can be used with Fitts' law to estimate and predict error rates along with speeds, providing a framework for unifying this dichotomy.

© All rights reserved Wobbrock et al. and/or ACM Press
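
The chain of reasoning from Fitts' law to a predicted error rate can be sketched as follows, under the usual assumptions of normally distributed endpoints and an effective width spanning 4.133 standard deviations; the constants and structure below follow those conventions rather than being copied verbatim from the paper.

import math

def predicted_error_rate(A, W, MT, a, b):
    """Predicted pointing error rate given Fitts' law MT = a + b * log2(A/W + 1).
    A, W: nominal target distance and width; MT: actual movement time (s);
    a, b: Fitts' law intercept and slope fitted for the user and device."""
    ide = (MT - a) / b                    # effective index of difficulty implied by MT
    we = A / (2 ** ide - 1)               # effective width implied by Fitts' law
    sigma = we / 4.133                    # endpoint spread (We spans +/- 2.066 sigma)
    z = W / (2 * sigma)                   # half the real target width, in sigma units
    phi = 0.5 * (1 + math.erf(z / math.sqrt(2)))   # standard normal CDF at z
    return 1 - (2 * phi - 1)              # probability the endpoint misses the target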

 

Kane, Shaun K., Bigham, Jeffrey P. and Wobbrock, Jacob O. (2008): Slide rule: making mobile touch screens accessible to blind people using multi-touch interaction techniques. In: Tenth Annual ACM SIGACCESS Conference on Assistive Technologies 2008. pp. 73-80.

Recent advances in touch screen technology have increased the prevalence of touch screens and have prompted a wave of new touch screen-based devices. However, touch screens are still largely inaccessible to blind users, who must adopt error-prone compensatory strategies to use them or find accessible alternatives. This inaccessibility is due to interaction techniques that require the user to visually locate objects on the screen. To address this problem, we introduce Slide Rule, a set of audio-based multi-touch interaction techniques that enable blind users to access touch screen applications. We describe the design of Slide Rule, our interaction techniques, and a user study in which 10 blind people used Slide Rule and a button-based Pocket PC screen reader. Results show that Slide Rule was significantly faster than the button-based system, and was preferred by 7 of 10 users. However, users made more errors when using Slide Rule than when using the more familiar button-based system.

© All rights reserved Kane et al. and/or ACM Press

 

Kane, Shaun K., Wobbrock, Jacob O., Harniss, Mark and Johnson, Kurt L. (2008): TrueKeys: identifying and correcting typing errors for people with motor impairments. In: Proceedings of the 2008 International Conference on Intelligent User Interfaces 2008. pp. 349-352.

People with motor impairments often have difficulty typing using desktop keyboards. We developed TrueKeys, a system that combines models of word frequency, keyboard layout, and typing error patterns to automatically identify and correct typing mistakes. In this paper, we describe the TrueKeys algorithm, compare its performance to existing correction algorithms, and report on a study of TrueKeys with 9 motor-impaired and 9 non-impaired participants. Running in non-interactive mode, TrueKeys performed more corrections than popular commercial and open source spell checkers. Used interactively, both motor-impaired and non-impaired users performed typing tasks significantly more accurately with TrueKeys than without. However, typing speed was reduced while TrueKeys was enabled.

© All rights reserved Kane et al. and/or ACM Press
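
One way to picture how keyboard layout and word frequency can be combined for correction is a weighted edit distance in which substituting a neighboring key is cheaper than substituting a distant one, with candidates ranked by distance and then by frequency. This is purely an illustrative sketch, not the TrueKeys algorithm, and the adjacency table is deliberately partial.

# Partial QWERTY adjacency map; a real system would cover the whole layout.
ADJACENT = {"a": "qwsz", "s": "aqwedxz", "d": "serfcx", "t": "ryfg", "h": "gyujnb"}

def substitution_cost(c1, c2):
    """Confusing neighboring keys is cheaper than confusing distant ones."""
    if c1 == c2:
        return 0.0
    return 0.5 if c2 in ADJACENT.get(c1, "") else 1.0

def weighted_edit_distance(typed, word):
    """Levenshtein distance with keyboard-aware substitution costs."""
    m, n = len(typed), len(word)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = float(i)
    for j in range(1, n + 1):
        d[0][j] = float(j)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1.0,                                   # deletion
                          d[i][j - 1] + 1.0,                                   # insertion
                          d[i - 1][j - 1] + substitution_cost(typed[i - 1], word[j - 1]))
    return d[m][n]

def correct(typed, lexicon):
    """lexicon: dict of word -> frequency; rank by distance, then frequency."""
    return min(lexicon, key=lambda w: (weighted_edit_distance(typed, w), -lexicon[w]))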

 

Kane, Shaun K., Wobbrock, Jacob O. and Smith, Ian E. (2008): Getting off the treadmill: evaluating walking user interfaces for mobile devices in public spaces. In: Hofte, G. Henri ter, Mulder, Ingrid and Ruyter, Boris E. R. de (eds.) Proceedings of the 10th Conference on Human-Computer Interaction with Mobile Devices and Services - Mobile HCI 2008 September 2-5, 2008, Amsterdam, the Netherlands. pp. 109-118.

 

Harada, Susumu, Lester, Jonathan, Patel, Kayur, Saponas, T. Scott, Fogarty, James, Landay, James A. and Wobbrock, Jacob O. (2008): VoiceLabel: using speech to label mobile sensor data. In: Digalakis, Vassilios, Potamianos, Alexandros, Turk, Matthew, Pieraccini, Roberto and Ivanov, Yuri (eds.) Proceedings of the 10th International Conference on Multimodal Interfaces - ICMI 2008 October 20-22, 2008, Chania, Crete, Greece. pp. 69-76.

 

Wobbrock, Jacob O., Rubinstein, James, Sawyer, Michael W. and Duchowski, Andrew T. (2008): Longitudinal evaluation of discrete consecutive gaze gestures for text entry. In: Räihä, Kari-Jouko and Duchowski, Andrew T. (eds.) ETRA 2008 - Proceedings of the Eye Tracking Research and Application Symposium March 26-28, 2008, Savannah, Georgia, USA. pp. 11-18.

 

Wobbrock, Jacob O. and Gajos, Krzysztof Z. (2008): Goal Crossing with Mice and Trackballs for People with Motor Impairments: Performance, Submovements, and Design Directions. In ACM Transactions on Accessible Computing, 1 (1) p. 4.

Prior research shows that people with motor impairments face considerable challenges when using conventional mice and trackballs. One challenge is positioning the mouse cursor within confined target areas; another is executing a precise click without slipping. These problems can make mouse pointing in graphical user interfaces very difficult for some people. This article explores goal crossing as an alternative strategy for more accessible target acquisition. In goal crossing, targets are boundaries that are simply crossed by the mouse cursor. Thus, goal crossing avoids the two aforementioned problems. To date, however, researchers have not examined the feasibility of goal crossing for people with motor difficulties. We therefore present a study comparing area pointing and goal crossing. Our performance results indicate that although Fitts' throughput for able-bodied users is higher for area pointing than for goal crossing (4.72 vs. 3.61 bits/s), the opposite is true for users with motor impairments (2.34 vs. 2.88 bits/s). However, error rates are higher for goal crossing than for area pointing under a strict definition of crossing errors (6.23% vs. 1.94%). We also present path analyses and an examination of submovement velocity, acceleration, and jerk (the change in acceleration over time). These results show marked differences between crossing and pointing and almost categorically favor crossing. An important finding is that crossing reduces jerk for both participant groups, indicating more fluid, stable motion. To help realize the potential of goal crossing for computer access, we offer design concepts for crossing widgets that address the occlusion problem, which occurs when one crossing goal obscures another in persistent mouse-cursor interfaces. This work provides the motivation and initial steps for further exploration of goal crossing on the desktop, and may help researchers and designers to radically reshape user interfaces to provide accessible goal crossing, thereby lowering barriers to access.

© All rights reserved Wobbrock and Gajos and/or ACM Press
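Since the analysis above relies on submovement velocity, acceleration, and jerk profiles, a small finite-difference sketch may clarify what those quantities are; the sampling rate and data are invented, and this is not the article's analysis code.

    # Sketch: speed, acceleration, and jerk profiles from uniformly sampled cursor
    # positions via finite differences. Illustrative only; not the article's pipeline.
    def motion_profiles(xs, ys, dt):
        """Return per-interval speed, acceleration (change in speed), and jerk
        (change in acceleration) for a cursor path sampled every dt seconds."""
        def diff(seq):
            return [(b - a) / dt for a, b in zip(seq, seq[1:])]
        vx, vy = diff(xs), diff(ys)
        speed = [(a * a + b * b) ** 0.5 for a, b in zip(vx, vy)]
        accel = diff(speed)
        jerk = diff(accel)
        return speed, accel, jerk

    # Example: a short path sampled at 100 Hz (dt = 0.01 s), moving along x only.
    xs = [0.0, 1.0, 3.0, 6.0, 10.0, 15.0]
    ys = [0.0] * 6
    speed, accel, jerk = motion_profiles(xs, ys, dt=0.01)
    print(jerk)   # lower magnitudes indicate smoother, more fluid motion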

 

Wobbrock, Jacob O., Myers, Brad A. and Aung, Htet Htet (2008): The performance of hand postures in front- and back-of-device interaction for mobile computing. In International Journal of Human-Computer Studies, 66 (12) pp. 857-875.

Three studies of different mobile-device hand postures are presented. The first study measures the performance of postures in Fitts' law tasks using one and two hands, thumbs and index fingers, horizontal and vertical movements, and front- and back-of-device interaction. Results indicate that the index finger performs well on both the front and the back of the device, and that thumb performance on the front of the device is generally worse. Fitts' law models are created and serve as a basis for comparisons. The second study examines the orientation of shapes on the front and back of a mobile device. It shows that participants' expectations of visual feedback for finger movements on the back of a device reverse the direction of their finger movements to favor a "transparent device" orientation. The third study examines letter-like gestures made on the front and back of a device. It confirms the performance of the index finger on the front of the device, while showing limitations in the ability for the index finger on the back to perform complex gestures. Taken together, these results provide an empirical foundation upon which new mobile interaction designs can be based. A set of design implications and recommendations are given based directly on the findings presented.

© All rights reserved Wobbrock et al. and/or Academic Press

 

Harada, Susumu, Lester, Jonathan, Patel, Kayur, Saponas, T. Scott, Fogarty, James, Landay, James A. and Wobbrock, Jacob O. (2008): VoiceLabel: using speech to label mobile sensor data. In: Proceedings of the 2008 International Conference on Multimodal Interfaces 2008. pp. 69-76.

Many mobile machine learning applications require collecting and labeling data, and a traditional GUI on a mobile device may not be an appropriate or viable method for this task. This paper presents an alternative approach to mobile labeling of sensor data called VoiceLabel. VoiceLabel consists of two components: (1) a speech-based data collection tool for mobile devices, and (2) a desktop tool for offline segmentation of recorded data and recognition of spoken labels. The desktop tool automatically analyzes the audio stream to find and recognize spoken labels, and then presents a multimodal interface for reviewing and correcting data labels using a combination of the audio stream, the system's analysis of that audio, and the corresponding mobile sensor data. A study with ten participants showed that VoiceLabel is a viable method for labeling mobile sensor data. VoiceLabel also illustrates several key features that inform the design of other data labeling tools.

© All rights reserved Harada et al. and/or their publisher

2007
 

Wobbrock, Jacob O., Chau, Duen Horng and Myers, Brad A. (2007): An alternative to push, press, and tap-tap-tap: gesturing on an isometric joystick for mobile phone text entry. In: Proceedings of ACM CHI 2007 Conference on Human Factors in Computing Systems 2007. pp. 667-676.

A gestural text entry method for mobile is presented. Unlike most mobile phone text entry methods, which rely on repeatedly pressing buttons, our gestural method uses an isometric joystick and the EdgeWrite alphabet to allow users to write by making letter-like "pressure strokes." In a 15-session study comparing character-level EdgeWrite to Multitap, subjects' speeds were statistically indistinguishable, reaching about 10 WPM. In a second 15-session study comparing word-level EdgeWrite to T9, the same subjects were again statistically indistinguishable, reaching about 16 WPM. Uncorrected errors were low, around 1% or less for each method. In addition, subjective results favored EdgeWrite. Overall, results indicate that our isometric joystick-based method is highly competitive with two commercial keypad-based methods, opening the way for keypad-less designs and text entry on tiny devices. Additional results showed that a joystick on the back could be used at about 70% of the speed of the front, and the front joystick could be used eyes-free at about 80% of the speed of normal use.

© All rights reserved Wobbrock et al. and/or ACM Press

 

González, Iván E., Wobbrock, Jacob O., Chau, Duen Horng, Faulring, Andrew and Myers, Brad A. (2007): Eyes on the road, hands on the wheel: thumb-based interaction techniques for input on steering wheels. In: Proceedings of the 2007 Conference on Graphics Interface 2007. pp. 95-102.

The increasing quantity and complexity of in-vehicle systems creates a demand for user interfaces which are suited to driving. The steering wheel is a common location for the placement of buttons to control navigation, entertainment, and environmental systems, but what about a small touchpad? To investigate this question, we embedded a Synaptics StampPad in a computer game steering wheel and evaluated seven methods for selecting from a list of over 3000 street names. Selection speed was measured while stationary and while driving a simulator. Results show that the EdgeWrite gestural text entry method is about 20% to 50% faster than selection-based text entry or direct list-selection methods. They also show that methods with slower selection speeds generally resulted in faster driving speeds. However, with EdgeWrite, participants were able to maintain their speed and avoid incidents while selecting and driving at the same time. Although an obvious choice for constrained input, on-screen keyboards generally performed quite poorly.

© All rights reserved González et al. and/or Canadian Information Processing Society

 

Wobbrock, Jacob O. and Gajos, Krzysztof Z. (2007): A comparison of area pointing and goal crossing for people with and without motor impairments. In: Ninth Annual ACM Conference on Assistive Technologies 2007. pp. 3-10.

Prior work has highlighted the challenges faced by people with motor impairments when trying to acquire on-screen targets using a mouse or trackball. Two reasons for this are the difficulty of positioning the mouse cursor within a confined area, and the challenge of accurately executing a click. We hypothesize that both of these difficulties with area pointing may be alleviated in a different target acquisition paradigm called "goal crossing." In goal crossing, users do not acquire a confined area, but instead pass over a target line. Although goal crossing has been studied for able-bodied users, its suitability for people with motor impairments is unknown. We present a study of 16 people, 8 of whom had motor impairments, using mice and trackballs to do area pointing and goal crossing. Our results indicate that Fitts' law models both techniques for both user groups. Furthermore, although throughput for able-bodied users was higher for area pointing than for goal crossing (4.72 vs. 3.61 bits/s), the opposite was true for users with motor impairments (2.34 vs. 2.88 bits/s), suggesting that goal crossing may be viable for them. However, error rates were higher for goal crossing than for area pointing under a strict definition of crossing errors (6.23% vs. 1.94%). Subjective results indicate a preference for goal crossing among motor-impaired users. This work provides the empirical foundation from which to pursue the design of crossing-based interfaces as accessible alternatives to pointing-based interfaces.

© All rights reserved Wobbrock and Gajos and/or ACM Press
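Throughput figures like those reported above are conventionally computed with "effective" measures in ISO 9241-9-style analyses; the sketch below follows that common convention, with invented example data, and is not taken from the paper's own computation.

    # Sketch: Fitts' throughput (bits/s) using effective target width and distance,
    # following the common ISO 9241-9-style convention. Example numbers are made up.
    from math import log2
    from statistics import mean, stdev

    def throughput(endpoints_x, movement_times):
        """endpoints_x: selection coordinates along the task axis, measured from the
        start point; movement_times: per-trial movement times in seconds."""
        We = 4.133 * stdev(endpoints_x)           # effective width from endpoint spread
        Ae = mean(abs(x) for x in endpoints_x)    # effective distance actually covered
        IDe = log2(Ae / We + 1)                   # effective index of difficulty (bits)
        return IDe / mean(movement_times)         # bits per second

    # Example: endpoints scattered around a target 256 px away, ~0.7 s per trial.
    print(throughput(endpoints_x=[250, 262, 247, 259, 255, 264, 251],
                     movement_times=[0.71, 0.68, 0.74, 0.66, 0.72, 0.69, 0.70]))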

 

Froehlich, Jon, Wobbrock, Jacob O. and Kane, Shaun K. (2007): Barrier pointing: using physical edges to assist target acquisition on mobile device touch screens. In: Ninth Annual ACM Conference on Assistive Technologies 2007. pp. 19-26.

Mobile phones and personal digital assistants (PDAs) are incredibly popular pervasive technologies. Many of these devices contain touch screens, which can present problems for users with motor impairments due to small targets and their reliance on tapping for target acquisition. In order to select a target, users must tap on the screen, an action which requires the precise motion of flying into a target and lifting without slipping. In this paper, we propose a new technique for target acquisition called barrier pointing, which leverages the elevated physical edges surrounding the screen to improve pointing accuracy. After designing a series of barrier pointing techniques, we conducted an initial study with 9 able-bodied users and 9 users with motor impairments in order to discover the parameters that make barrier pointing successful. From this data, we offer an in-depth analysis of the performance of two motor-impaired users for whom barrier pointing was especially beneficial. We show the importance of providing physical stability by allowing the stylus to press against the screen and its physical edge. We offer other design insights and lessons learned that can inform future attempts at leveraging the physical properties of mobile devices to improve accessibility.

© All rights reserved Froehlich et al. and/or ACM Press

 

Harada, Susumu, Wobbrock, Jacob O. and Landay, James A. (2007): Voicedraw: a hands-free voice-driven drawing application for people with motor impairments. In: Ninth Annual ACM Conference on Assistive Technologies 2007. pp. 27-34.

We present VoiceDraw, a voice-driven drawing application for people with motor impairments that provides a way to generate free-form drawings without needing manual interaction. VoiceDraw was designed and built to investigate the potential of the human voice as a modality to bring fluid, continuous direct manipulation interaction to users who lack the use of their hands. VoiceDraw also allows us to study the issues surrounding the design of a user interface optimized for non-speech voice-based interaction. We describe the features of the VoiceDraw application, our design process, including our user-centered design sessions with a 'voice painter', and offer lessons learned that could inform future voice-based design efforts. In particular, we offer insights for mapping human voice to continuous control.

© All rights reserved Harada et al. and/or ACM Press

 

Bigham, Jeffrey P., Cavender, Anna C., Brudvik, Jeremy T., Wobbrock, Jacob O. and Ladner, Richard E. (2007): WebinSitu: a comparative analysis of blind and sighted browsing behavior. In: Ninth Annual ACM Conference on Assistive Technologies 2007. pp. 51-58.

Web browsing is inefficient for blind web users because of persistent accessibility problems, but the extent of these problems and their practical effects from the perspective of the user has not been sufficiently examined. We conducted a study in situ to investigate the accessibility of the web as experienced by web users. This remote study used an advanced web proxy that leverages AJAX technology to record both the pages viewed and the actions taken by users on the web pages that they visited. Our study was conducted remotely over the period of one week, and our participants used the assistive technology and software to which they were already accustomed and had already configured according to preference. These advantages allowed us to aggregate observations of many users and to explore the practical effects on and coping strategies employed by our blind participants. Our study reflects web accessibility from the perspective of web users and describes quantitative differences in the browsing behavior of blind and sighted web users.

© All rights reserved Bigham et al. and/or ACM Press

 

Wobbrock, Jacob O., Wilson, Andrew D. and Li, Yang (2007): Gestures without libraries, toolkits or training: a $1 recognizer for user interface prototypes. In: Proceedings of the ACM Symposium on User Interface Software and Technology October 7-10, 2007, Newport, Rhode Island, USA. pp. 159-168.

Although mobile, tablet, large display, and tabletop computers increasingly present opportunities for using pen, finger, and wand gestures in user interfaces, implementing gesture recognition largely has been the privilege of pattern matching experts, not user interface prototypers. Although some user interface libraries and toolkits offer gesture recognizers, such infrastructure is often unavailable in design-oriented environments like Flash, scripting environments like JavaScript, or brand new off-desktop prototyping environments. To enable novice programmers to incorporate gestures into their UI prototypes, we present a "$1 recognizer" that is easy, cheap, and usable almost anywhere in about 100 lines of code. In a study comparing our $1 recognizer, Dynamic Time Warping, and the Rubine classifier on user-supplied gestures, we found that $1 obtains over 97% accuracy with only 1 loaded template and 99% accuracy with 3+ loaded templates. These results were nearly identical to DTW and superior to Rubine. In addition, we found that medium-speed gestures, in which users balanced speed and accuracy, were recognized better than slow or fast gestures for all three recognizers. We also discuss the effect that the number of templates or training examples has on recognition, the score falloff along recognizers' N-best lists, and results for individual gestures. We include detailed pseudocode of the $1 recognizer to aid development, inspection, extension, and testing.

© All rights reserved Wobbrock et al. and/or ACM Press
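The paper itself provides detailed pseudocode; the compressed sketch below only conveys the overall pipeline (resample, rotate to the indicative angle, scale, translate, nearest-template match) and deliberately omits refinements such as the golden-section search over candidate rotation angles and the 0..1 scoring step, so treat it as an approximation rather than the published recognizer.

    # Compressed sketch of a $1-style recognizer. Not the published algorithm.
    from math import atan2, cos, sin, hypot

    N = 64  # points per normalized stroke

    def path_length(pts):
        return sum(hypot(x2 - x1, y2 - y1) for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

    def resample(pts, n=N):
        """Return n points spaced evenly along the stroke's path."""
        interval, acc, out, pts = path_length(pts) / (n - 1), 0.0, [pts[0]], list(pts)
        i = 1
        while i < len(pts):
            (x1, y1), (x2, y2) = pts[i - 1], pts[i]
            d = hypot(x2 - x1, y2 - y1)
            if d > 0 and acc + d >= interval:
                t = (interval - acc) / d
                q = (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
                out.append(q)
                pts.insert(i, q)          # continue measuring from the new point
                acc = 0.0
            else:
                acc += d
            i += 1
        out.extend([pts[-1]] * (n - len(out)))   # guard against float shortfall
        return out[:n]

    def centroid(pts):
        return (sum(x for x, _ in pts) / len(pts), sum(y for _, y in pts) / len(pts))

    def rotate_to_zero(pts):
        """Rotate so the centroid-to-first-point angle (indicative angle) is zero."""
        cx, cy = centroid(pts)
        t = -atan2(pts[0][1] - cy, pts[0][0] - cx)
        return [(cx + (x - cx) * cos(t) - (y - cy) * sin(t),
                 cy + (x - cx) * sin(t) + (y - cy) * cos(t)) for x, y in pts]

    def scale_and_translate(pts, size=250.0):
        """Scale to a reference square, then translate the centroid to the origin."""
        xs, ys = [x for x, _ in pts], [y for _, y in pts]
        w, h = (max(xs) - min(xs)) or 1.0, (max(ys) - min(ys)) or 1.0
        pts = [(x * size / w, y * size / h) for x, y in pts]
        cx, cy = centroid(pts)
        return [(x - cx, y - cy) for x, y in pts]

    def normalize(stroke):
        return scale_and_translate(rotate_to_zero(resample(stroke)))

    def recognize(stroke, templates):
        """templates: dict name -> raw point list; returns the best-matching name."""
        pts = normalize(stroke)
        norm = {name: normalize(t) for name, t in templates.items()}
        avg = lambda a, b: sum(hypot(p[0] - q[0], p[1] - q[1]) for p, q in zip(a, b)) / len(a)
        return min(norm, key=lambda name: avg(pts, norm[name]))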

 

Gajos, Krzysztof Z., Wobbrock, Jacob O. and Weld, Daniel S. (2007): Automatically generating user interfaces adapted to users' motor and vision capabilities. In: Proceedings of the ACM Symposium on User Interface Software and Technology October 7-10, 2007, Newport, Rhode Island, USA. pp. 231-240.

Most of today's GUIs are designed for the typical, able-bodied user; atypical users are, for the most part, left to adapt as best they can, perhaps using specialized assistive technologies as an aid. In this paper, we present an alternative approach: SUPPLE++ automatically generates interfaces which are tailored to an individual's motor capabilities and can be easily adjusted to accommodate varying vision capabilities. SUPPLE++ models users' motor capabilities based on a one-time motor performance test and uses this model in an optimization process, generating a personalized interface. A preliminary study indicates that while there is still room for improvement, SUPPLE++ allowed one user to complete tasks that she could not perform using a standard interface, while for the remaining users it resulted in time savings on average.

© All rights reserved Gajos et al. and/or ACM Press

2006
 

Wobbrock, Jacob O. and Myers, Brad A. (2006): Trackball text entry for people with motor impairments. In: Proceedings of ACM CHI 2006 Conference on Human Factors in Computing Systems 2006. pp. 479-488.

We present a new gestural text entry method for trackballs. The method uses the mouse cursor and relies on crossing instead of pointing. A user writes in fluid Roman-like unistrokes by "pulsing" the trackball in desired letter patterns. We examine this method both theoretically using the Steering Law and empirically in two studies. Our studies show that able-bodied users who were unfamiliar with trackballs could write at about 10 wpm with <4% total errors after 45 minutes. In eight sessions, a motor-impaired trackball user peaked at 7.11 wpm with 0% uncorrected errors, compared to 5.95 wpm with 0% uncorrected errors with an on-screen keyboard. Over sessions, his speeds were significantly faster with our gestural method than with an on-screen keyboard. A former 15-year veteran of on-screen keyboards, he now uses our gestural method instead.

© All rights reserved Wobbrock and Myers and/or ACM Press

 

Wobbrock, Jacob O., Myers, Brad A. and Rothrock, Brandon (2006): Few-key text entry revisited: mnemonic gestures on four keys. In: Proceedings of ACM CHI 2006 Conference on Human Factors in Computing Systems 2006. pp. 489-492.

We present a new 4-key text entry method that, unlike most few-key methods, is gestural instead of selection-based. Importantly, its gestures mimic the writing of Roman letters for high learnability. We compare this new 4-key method to predominant 3-key and 5-key methods theoretically using KSPC and empirically using a longitudinal study of 5 subjects over 10 sessions. The study includes an evaluation of the 4-key method without any on-screen visualization, an impossible condition for the selection-based methods. Our results show that the new 4-key method is quickly learned, becoming faster than the 3-key and 5-key methods after just 10 minutes of writing, although it produces more errors. Interestingly, removing a visualization of the gestures being made causes no detriment to the 4-key method, which is an advantage for eyes-free text entry.

© All rights reserved Wobbrock et al. and/or ACM Press

 

Wobbrock, Jacob O. and Myers, Brad A. (2006): From letters to words: efficient stroke-based word completion for trackball text entry. In: Eighth Annual ACM Conference on Assistive Technologies 2006. pp. 2-9.

We present a major extension to our previous work on Trackball EdgeWrite -- a unistroke text entry method for trackballs -- by taking it from a character-level technique to a word-level one. Our design is called stroke-based word completion, and it enables efficient word selection as part of the stroke-making process. Unlike most word completion designs, which require users to select words from a list, our technique allows users to select words by performing a fluid crossing gesture. Our theoretical model shows this word-level design to be 45.0% faster than our prior model for character-only strokes. A study with a subject with spinal cord injury comparing Trackball EdgeWrite to the on-screen keyboard WiViK, both using word prediction and completion, shows that Trackball EdgeWrite is competitive with WiViK in speed (12.09 vs. 11.82 WPM) and accuracy (3.95% vs. 2.21% total errors), but less visually tedious and ultimately preferred. The results also show that word-level Trackball EdgeWrite is 46.5% faster and 36.7% more accurate than our subject's prior peak performance with character-level Trackball EdgeWrite, and 75.2% faster and 40.2% more accurate than his prior peak performance with his preferred on-screen keyboard. An additional evaluation of the same subject over a two-month field deployment shows a 43.9% reduction in unistrokes due to stroke-based word completion in Trackball EdgeWrite.

© All rights reserved Wobbrock and Myers and/or ACM Press

 

Wobbrock, Jacob O. and Myers, Brad A. (2006): Analyzing the input stream for character-level errors in unconstrained text entry evaluations. In ACM Transactions on Computer-Human Interaction, 13 (4) pp. 458-489.

Recent improvements in text entry error rate measurement have enabled the running of text entry experiments in which subjects are free to correct errors (or not) as they transcribe a presented string. In these "unconstrained" experiments, it is no longer necessary to force subjects to unnaturally maintain synchronicity with presented text for the sake of performing overall error rate calculations. However, the calculation of character-level error rates, which can be trivial in artificially constrained evaluations, is far more complicated in unconstrained text entry evaluations because it is difficult to infer a subject's intention at every character. For this reason, prior character-level error analyses for unconstrained experiments have only compared presented and transcribed strings, not input streams. But input streams are rich sources of character-level error information, since they contain all of the text entered (and erased) by a subject. The current work presents an algorithm for the automated analysis of character-level errors in input streams for unconstrained text entry evaluations. It also presents new character-level metrics that can aid method designers in refining text entry methods. To exercise these metrics, we perform two analyses on data from an actual text entry experiment. One analysis, available from the prior work, uses only presented and transcribed strings. The other analysis uses input streams, as described in the current work. The results confirm that input stream error analysis yields richer information for the same empirical data. To facilitate the use of these new analyses, we offer pseudocode and downloadable software for performing unconstrained text entry experiments and analyzing data.

© All rights reserved Wobbrock and Myers and/or ACM Press
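The established aggregate bookkeeping that this article builds on classifies characters as correct (C), incorrect-but-fixed (IF), and incorrect-not-fixed (INF). The sketch below renders only that prior-style bookkeeping, approximating INF with an edit distance and IF with a backspace count; it is not the character-level input-stream algorithm contributed by the article, and the '<' backspace convention is an assumption for the example.

    # Sketch of prior-style aggregate error rates (C, IF, INF bookkeeping), not the
    # article's character-level input-stream algorithm. '<' stands for a backspace.
    def levenshtein(a, b):
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
            prev = cur
        return prev[-1]

    def error_rates(presented, transcribed, input_stream):
        inf = levenshtein(presented, transcribed)   # errors left in the final text
        fixed = input_stream.count('<')             # erased characters, treated as IF
        correct = max(len(transcribed) - inf, 0)    # rough count of correct characters
        denom = correct + inf + fixed or 1
        return {'uncorrected': inf / denom,
                'corrected': fixed / denom,
                'total': (inf + fixed) / denom}

    # Example: the user typed an extra 'k', erased it, and left one wrong letter
    # in the last word.
    print(error_rates('the quick fox', 'the quick fix', 'the quickk< fix'))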

 

Wobbrock, Jacob O., Myers, Brad A. and Chau, Duen Horng (2006): In-stroke word completion. In: Proceedings of the ACM Symposium on User Interface Software and Technology 2006. pp. 333-336.

We present the design and implementation of a word-level stroking system called Fisch, which is intended to improve the speed of character-level unistrokes. Importantly, Fisch does not alter the way in which character-level unistrokes are made, but allows users to gradually ramp up to word-level unistrokes by extending their letters in minimal ways. Fisch relies on in-stroke word completion, a flexible design for fluidly turning unistroke letters into whole words. Fisch can be memorized at the motor level since word completions always appear at the same positions relative to the strokes being made. Our design for Fisch is suitable for use with any unistroke alphabet. We have implemented Fisch for multiple versions of EdgeWrite, and results show that Fisch reduces the number of strokes during entry by 43.9% while increasing the rate of entry. An informal test of "record speed" with the stylus version resulted in 50-60 wpm with no uncorrected errors.

© All rights reserved Wobbrock et al. and/or ACM Press

2005
 

Wobbrock, Jacob O. and Myers, Brad A. (2005): Gestural text entry on multiple devices. In: Seventh Annual ACM Conference on Assistive Technologies 2005. pp. 184-185.

We present various adaptations of the EdgeWrite unistroke text entry method that work on multiple computer input devices: styluses, touchpads, displacement and isometric joysticks, four keys or buttons, and trackballs. We argue that consistent, flexible, multi-device input is important to both accessibility and to ubiquitous computing. For accessibility, multi-device input means users can switch among devices, distributing strain and fatigue among different muscle groups. For ubiquity, it means users can "learn once, write anywhere," even as new devices emerge. By considering the accessibility and ubiquity of input techniques, we can design for both motor-impaired users and "situationally impaired" able-bodied users who are on-the-go. We discuss the requirements for such input and the challenges of multi-device text entry, such as solving the segmentation problem. This paper accompanies a demonstration of EdgeWrite on multiple devices.

© All rights reserved Wobbrock and Myers and/or ACM Press

 

Wobbrock, Jacob O., Aung, Htet Htet, Rothrock, Brandon and Myers, Brad A. (2005): Maximizing the guessability of symbolic input. In: Proceedings of ACM CHI 2005 Conference on Human Factors in Computing Systems 2005. pp. 1869-1872.

Guessability is essential for symbolic input, in which users enter gestures or keywords to indicate characters or commands, or rely on labels or icons to access features. We present a unified approach to both maximizing and evaluating the guessability of symbolic input. This approach can be used by anyone wishing to design a symbol set with high guessability, or to evaluate the guessability of an existing symbol set. We also present formulae for quantifying guessability and agreement among guesses. An example is offered in which the guessability of the EdgeWrite unistroke alphabet was improved by users from 51.0% to 80.1% without designer intervention. The original and improved alphabets were then tested for their immediate usability with the procedure used by MacKenzie and Zhang (1997). Users entered the original alphabet with 78.8% and 90.2% accuracy after 1 and 5 minutes of learning, respectively.

© All rights reserved Wobbrock et al. and/or ACM Press
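As a rough illustration of what an agreement-among-guesses measure can look like, the sketch below groups identical proposals per referent and averages the squared group proportions. This is a commonly cited formulation associated with this line of work, but the paper's own formulae should be consulted for the exact definitions, and the example data are invented.

    # Sketch: an agreement-style score over elicited proposals. The grouping-and-
    # squared-proportions form is a common rendering; check the paper for its exact
    # guessability and agreement formulae.
    from collections import Counter

    def agreement(proposals):
        """proposals: dict mapping each referent (symbol/command) to the list of
        proposals participants produced for it. Returns a value in (0, 1]."""
        per_referent = []
        for referent, props in proposals.items():
            n = len(props)
            groups = Counter(props)               # identical proposals form a group
            per_referent.append(sum((size / n) ** 2 for size in groups.values()))
        return sum(per_referent) / len(per_referent)

    # Example: unanimous proposals for one referent, split proposals for another.
    print(agreement({'delete': ['x', 'x', 'x', 'x'],
                     'undo': ['arc', 'arc', 'zigzag', 'shake']}))   # -> 0.6875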

2004
 

Wobbrock, Jacob O., Myers, Brad A., Aung, Htet Htet and LoPresti, Edmund F. (2004): Text entry from power wheelchairs: edgewrite for joysticks and touchpads. In: Sixth Annual ACM Conference on Assistive Technologies 2004. pp. 110-117.

Power wheelchair joysticks have been used to control a mouse cursor on desktop computers, but they offer no integrated text entry solution, confining users to point-and-click or point-and-dwell with on-screen keyboards. But on-screen keyboards reduce useful screen real-estate, exacerbate the need for frequent window management, and impose a second focus of attention. By contrast, we present two integrated gestural text entry methods designed for use from power wheelchairs: one for joysticks and the other for touchpads. Both techniques are adaptations of EdgeWrite, originally a stylus-based unistroke method designed for people with tremor. In a preliminary study of 7 power wheelchair users, we found that touchpad EdgeWrite was faster than joystick WiVik, and joystick EdgeWrite was only slightly slower after minimal practice. These findings reflect "walk up and use"-ability and warrant further investigation into extended use.

© All rights reserved Wobbrock et al. and/or ACM Press

 

Wobbrock, Jacob O., Myers, Brad A. and Aung, Htet Htet (2004): Writing with a joystick: a comparison of date stamp, selection keyboard, and EdgeWrite. In: Graphics Interface 2004 May 17-19, 2004, London, Ontario, Canada. pp. 1-8.

A joystick text entry method for game controllers and mobile phones would be valuable, since these devices often have joysticks but no conventional keyboards. But prevalent joystick text entry methods are slow because they are selection-based. EdgeWrite, a new joystick text entry method, is not based on selection but on gestures from a unistroke alphabet. Our experiment shows that this new method is faster, leaves fewer errors, and is more satisfying than date stamp and selection keyboard (two prevalent selection-based methods) for novices after minimal practice. For more practiced users, our results show that EdgeWrite is at least 1.5 times faster than selection keyboard, and 2.4 times faster than date stamp.

© All rights reserved Wobbrock et al. and/or their publisher

 

Myers, Brad A., Nichols, Jeffrey, Wobbrock, Jacob O. and Miller, Robert C. (2004): Taking Handheld Devices to the Next Level. In IEEE Computer, 37 (12) pp. 36-43.

2003
 

Wobbrock, Jacob O., Myers, Brad A. and Kembel, John A. (2003): EdgeWrite: a stylus-based text entry method designed for high accuracy and stability of motion. In: Proceedings of the 16th annual ACM Symposium on User Interface Software and Technology November 2-5, 2003, Vancouver, Canada. pp. 61-70.

EdgeWrite is a new unistroke text entry method for handheld devices designed to provide high accuracy and stability of motion for people with motor impairments. It is also effective for able-bodied people. An EdgeWrite user enters text by traversing the edges and diagonals of a square hole imposed over the usual text input area. Gesture recognition is accomplished not through pattern recognition but through the sequence of corners that are hit. This means that the full stroke path is unimportant and recognition is highly deterministic, enabling better accuracy than other gestural alphabets such as Graffiti. A study of able-bodied users showed subjects with no prior experience were 18% more accurate during text entry with EdgeWrite than with Graffiti (p < .05), with no significant difference in speed. A study of 4 subjects with motor impairments revealed that some of them were unable to do Graffiti, but all of them could do EdgeWrite. Those who could do both methods had dramatically better accuracy with EdgeWrite.

© All rights reserved Wobbrock et al. and/or ACM Press
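Because the abstract above notes that recognition depends only on the sequence of corners hit rather than the full stroke path, a toy sketch of that idea follows; the corner threshold, labels, and letter mappings are invented for illustration and are not the actual EdgeWrite alphabet.

    # Toy sketch of corner-sequence recognition: characters are looked up by the
    # ordered corners a stroke visits, not by its full path. The corner threshold
    # and letter mappings below are illustrative only, not the real EdgeWrite alphabet.
    ALPHABET = {                      # hypothetical corner sequences
        (0, 1, 3): 'n',
        (1, 0, 2, 3): 'z',
        (0, 2, 3, 1): 'u',
    }

    def nearest_corner(x, y, size=100, threshold=25):
        """Return the corner index if (x, y) is within `threshold` of one, else None."""
        corners = {0: (0, 0), 1: (size, 0), 2: (0, size), 3: (size, size)}
        for idx, (cx, cy) in corners.items():
            if abs(x - cx) <= threshold and abs(y - cy) <= threshold:
                return idx
        return None

    def recognize(points):
        """Collapse a stroke into its ordered, de-duplicated corner hits and look it up."""
        seq = []
        for x, y in points:
            c = nearest_corner(x, y)
            if c is not None and (not seq or seq[-1] != c):
                seq.append(c)
        return ALPHABET.get(tuple(seq), '?')

    # This stroke visits top-left, top-right, then bottom-right, so it maps to 'n' here.
    print(recognize([(5, 5), (50, 10), (95, 5), (95, 50), (98, 97)]))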

2002
 

Wobbrock, Jacob O., Forlizzi, Jodi, Hudson, Scott E. and Myers, Brad A. (2002): WebThumb: interaction techniques for small-screen browsers. In: Beaudouin-Lafon, Michel (ed.) Proceedings of the 15th annual ACM symposium on User interface software and technology October 27-30, 2002, Paris, France. pp. 205-208.

The proliferation of wireless handheld devices is placing the World Wide Web in the palms of users, but this convenience comes at a high interactive cost. The Web that came of age on the desktop is ill-suited for use on the small displays of handhelds. Today, handheld browsing often feels like browsing on a PC with a shrunken desktop. Overreliance on scrolling is a big problem in current handheld browsing. Users confined to viewing a small portion of each page often lack a sense of the overall context -- they may feel lost in a large page and be forced to remember the locations of items as those items scroll out of view. In this paper, we present a synthesis of interaction techniques to address these problems. We implemented these techniques in a prototype, WebThumb, that can browse the live Web.

© All rights reserved Wobbrock et al. and/or ACM Press

 

Myers, Brad A., Wobbrock, Jacob O., Yang, Sunny, Yeung, Brian, Nichols, Jeffrey and Miller, Robert (2002): Using handhelds to help people with motor impairments. In: Fifth Annual ACM Conference on Assistive Technologies 2002. pp. 89-96.

People with Muscular Dystrophy (MD) and certain other muscular and nervous system disorders lose their gross motor control while retaining fine motor control. The result is that they lose the ability to move their wrists and arms, and therefore their ability to operate a mouse and keyboard. However, they can often still use their fingers to control a pencil or stylus, and thus can use a handheld computer such as a Palm. We have developed software that allows the handheld to substitute for the mouse and keyboard of a PC, and tested it with four people (ages 10, 12, 27 and 53) with MD. The 12-year-old had lost the ability to use a mouse and keyboard, but with our software, he was able to use the Palm to access email, the web and computer games. The 27-year-old reported that he found the Palm so much better that he was using it full-time instead of a keyboard and mouse. The other two subjects said that our software was much less tiring than using the conventional input devices, and enabled them to use computers for longer periods. We report the results of these case studies, and the adaptations made to our software for people with disabilities.

© All rights reserved Myers et al. and/or ACM Press

 

Nichols, Jeffrey, Wobbrock, Jacob O., Gergle, Darren and Forlizzi, Jodi (2002): Mediator and medium: doors as interruption gateways and aesthetic displays. In: Proceedings of DIS02: Designing Interactive Systems: Processes, Practices, Methods, & Techniques 2002. pp. 379-386.

Office doors are more than entrances to rooms; they are entrances to a person's time and attention. People can mediate access to themselves by choosing whether to leave their door open or closed when they are in their office. Doors also serve as a medium for communication, where people can broadcast individual messages to passersby, or accept messages from others who stopped by when the door was closed. These qualities make the door an excellent location for designing solutions that help people better manage their time and attention. In this paper, we present a study of doors, derive design insights from the study, and then realize some of these insights in two cooperating implementations deployed in our workplace.

© All rights reserved Nichols et al. and/or ACM Press

 
 



Page Information

Page maintainer: The Editorial Team
URL: http://www.interaction-design.org/references/authors/jacob_o__wobbrock.html
