Publication statistics

Publication period: 1984-2012
Publication count: 32
Number of co-authors: 61



Co-authors

Number of publications with his 3 most frequent co-authors:

Jacob O. Wobbrock: 10
Jeffrey P. Bigham: 10
Eve A. Riskin: 6

 

 

Productive colleagues

Richard E. Ladner's 3 most productive colleagues, by number of publications:

Mary Beth Rosson: 142
Margaret M. Burnett: 103
Jacob O. Wobbrock: 71
 
 
 


Richard E. Ladner

Personal Homepage:
homes.cs.washington.edu/~ladner/

 

Publications by Richard E. Ladner (bibliography)

2012
 

Ruamviboonsuk, Vaspol, Azenkot, Shiri and Ladner, Richard E. (2012): Tapulator: a non-visual calculator using natural prefix-free codes. In: Fourteenth Annual ACM SIGACCESS Conference on Assistive Technologies 2012. pp. 221-222. Available online

A new non-visual method of numeric entry into a smartphone is designed, implemented, and tested. Users tap the smartphone screen with one to three fingers or swipe the screen in order to enter numbers. No buttons are used -- only simple, easy-to-remember gestures. A preliminary evaluation with sighted users compares the method to a standard accessible numeric keyboard with a VoiceOver-like screen reader interface for non-visual entry. We found that users entered numbers faster and with higher accuracy with our number entry method than with a VoiceOver-like interface, showing there is potential for use among blind people as well. The Tapulator, a complete calculator based on this non-visual numeric entry, which uses simple gestures for arithmetic operations and other calculator actions, is also described.

© All rights reserved Ruamviboonsuk et al. and/or ACM Press
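
The entry scheme described above lends itself to a simple greedy decoder. Below is a minimal sketch, assuming a hypothetical gesture-to-digit code table (the abstract does not give the actual assignment); the only property relied on is that no codeword is a prefix of another.

```python
# Sketch: decoding a stream of tap/swipe gestures into digits with a
# prefix-free code. The table is hypothetical; the abstract only states that
# one-to-three-finger taps and swipes are used, not the actual mapping.
CODE = {
    ("tap2",): "1",
    ("tap3",): "2",
    ("swipe",): "3",
    ("tap1", "tap1"): "4",
    ("tap1", "tap2"): "5",
    ("tap1", "tap3"): "6",
    ("tap1", "swipe", "tap1"): "7",
    ("tap1", "swipe", "tap2"): "8",
    ("tap1", "swipe", "tap3"): "9",
    ("tap1", "swipe", "swipe"): "0",
}

def decode(gestures):
    """Greedily match gestures against the code table; prefix-freeness
    guarantees at most one codeword can complete at each position."""
    digits, buffer = [], []
    for g in gestures:
        buffer.append(g)
        if tuple(buffer) in CODE:          # a complete codeword was entered
            digits.append(CODE[tuple(buffer)])
            buffer = []
    return "".join(digits)

print(decode(["tap1", "tap2", "swipe", "tap3"]))  # -> "532"
```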

 

O'Leary, Katie, Delahunt, Charles, Dowden, Patricia, Darmansya, Ivan, Heng, Jiaqi, Riskin, Eve A., Ladner, Richard E. and Wobbrock, Jacob O. (2012): Design goals for a system for enhancing AAC with personalized video. In: Fourteenth Annual ACM SIGACCESS Conference on Assistive Technologies 2012. pp. 223-224. Available online

Enabling end-users of Augmentative and Alternative Communication (AAC) systems to add personalized video content at runtime holds promise for improving communication, but the requirements for such systems are as yet unclear. To explore this issue, we present Vid2Speech, a prototype AAC system for children with complex communication needs (CCN) that uses personalized video to enhance representations of action words. We describe three design goals that guided the integration of personalized video to enhance AAC in our early-stage prototype: 1) Providing social-temporal navigation; 2) Enhancing comprehension; and 3) Enabling customization in real time. Our system concept represents one approach to realizing these goals; however, we contribute the goals and the system as a starting point for future innovations in personalized video-based AAC.

© All rights reserved O'Leary et al. and/or ACM Press

 

Azenkot, Shiri, Wobbrock, Jacob O., Prasain, Sanjana and Ladner, Richard E. (2012): Input finger detection for nonvisual touch screen text entry in Perkinput. In: Proceedings of the 2012 Conference on Graphics Interface 2012. pp. 121-129. Available online

We present Input Finger Detection (IFD), a novel technique for nonvisual touch screen input, and its application, the Perkinput text entry method. With IFD, signals are input into a device with multi-point touches, where each finger represents one bit, either touching the screen or not. Maximum likelihood and tracking algorithms are used to detect which fingers touch the screen based on user-set reference points. The Perkinput text entry method uses the 6-bit Braille encoding with audio feedback, enabling one- and two-handed input. A longitudinal evaluation with 8 blind participants who are proficient in Braille showed that one-handed Perkinput was significantly faster and more accurate than iPhone's VoiceOver. Furthermore, in a case study to evaluate expert performance, one user reached an average session speed of 17.56 words per minute (WPM) with an average uncorrected error rate of just 0.14% using one hand for input. The same participant reached an average session speed of 38.0

© All rights reserved Azenkot et al. and/or their publisher
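
As a rough illustration of the IFD idea, each multi-point touch can be read as a set of Braille dots, one dot per finger. The sketch below uses the standard 6-dot Braille patterns for the letters a-j only; the paper's actual contribution, detecting which fingers touched via maximum likelihood over user-set reference points, is not reproduced here.

```python
# Sketch of the input-finger-detection idea: each finger is one bit of a
# 6-dot Braille cell. Standard 6-dot Braille patterns, letters a-j only.
BRAILLE = {
    frozenset({1}): "a",
    frozenset({1, 2}): "b",
    frozenset({1, 4}): "c",
    frozenset({1, 4, 5}): "d",
    frozenset({1, 5}): "e",
    frozenset({1, 2, 4}): "f",
    frozenset({1, 2, 4, 5}): "g",
    frozenset({1, 2, 5}): "h",
    frozenset({2, 4}): "i",
    frozenset({2, 4, 5}): "j",
}

def chord_to_char(fingers_down):
    """fingers_down: set of dot numbers (1-6) whose fingers touched the screen."""
    return BRAILLE.get(frozenset(fingers_down), "?")

print(chord_to_char({1, 2, 5}))  # -> "h"
```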

2011
 

Kane, Shaun K., Wobbrock, Jacob O. and Ladner, Richard E. (2011): Usable gestures for blind people: understanding preference and performance. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 413-422. Available online

Despite growing awareness of the accessibility issues surrounding touch screen use by blind people, designers still face challenges when creating accessible touch screen interfaces. One major stumbling block is a lack of understanding about how blind people actually use touch screens. We conducted two user studies that compared how blind people and sighted people use touch screen gestures. First, we conducted a gesture elicitation study in which 10 blind and 10 sighted people invented gestures to perform common computing tasks on a tablet PC. We found that blind people have different gesture preferences than sighted people, including preferences for edge-based gestures and gestures that involve tapping virtual keys on a keyboard. Second, we conducted a performance study in which the same participants performed a set of reference gestures. We found significant differences in the speed, size, and shape of gestures performed by blind people versus those performed by sighted people. Our results suggest new design guidelines for accessible touch screen interfaces.

© All rights reserved Kane et al. and/or their publisher

 

Azenkot, Shiri, Prasain, Sanjana, Borning, Alan, Fortuna, Emily, Ladner, Richard E. and Wobbrock, Jacob O. (2011): Enhancing independence and safety for blind and deaf-blind public transit riders. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 3247-3256. Available online

Blind and deaf-blind people often rely on public transit for everyday mobility, but using transit can be challenging for them. We conducted semi-structured interviews with 13 blind and deaf-blind people to understand how they use public transit and what human values were important to them in this domain. Two key values were identified: independence and safety. We developed GoBraille, two related Braille-based applications that provide information about buses and bus stops while supporting the key values. GoBraille is built on MoBraille, a novel framework that enables a Braille display to benefit from many features in a smartphone without knowledge of proprietary, device-specific protocols. Finally, we conducted user studies with blind people to demonstrate that GoBraille enables people to travel more independently and safely. We also conducted co-design with a deaf-blind person, finding that a minimalist interface, with short input and output messages, was most effective for this population.

© All rights reserved Azenkot et al. and/or their publisher

 

Gilbert, Juan E., Burnett, Margaret M., Ladner, Richard E., Rosson, Mary Beth and Davis, Janet (2011): Applying the NSF broader impacts criteria to HCI research. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 459-462. Available online

Broader impacts emerged as a major concern in a recent evaluation of the Division of Computer and Network Systems (CNS) at the U.S. National Science Foundation (NSF). Evaluators found that the intellectual merit contributions from the CNS investigators were strong, but broader impacts could (and should) be improved [10]. As a result, a summit was held in Washington, D.C., to clarify and strengthen the broader impacts criteria for computing research [4]. In this SIG meeting, we will discuss the outcomes of this summit, with particular attention to broader impacts in human-computer interaction research.

© All rights reserved Gilbert et al. and/or their publisher

 

Bigham, Jeffrey P., Ladner, Richard E. and Borodin, Yevgen (2011): The design of human-powered access technology. In: Thirteenth Annual ACM SIGACCESS Conference on Assistive Technologies 2011. pp. 3-10. Available online

People with disabilities have always overcome accessibility problems by enlisting people in their community to help. The Internet has broadened the available community and made it easier to get on-demand assistance remotely. In particular, the past few years have seen the development of technology in both research and industry that uses human power to overcome technical problems too difficult to solve automatically. In this paper, we frame recent developments in human computation in the historical context of accessibility, and outline a framework for discussing new advances in human-powered access technology. Specifically, we present a set of 13 design principles for human-powered access technology motivated both by historical context and current technological developments. We then demonstrate the utility of these principles by using them to compare several existing human-powered access technologies. The power of identifying the 13 principles is that they will inspire new ways of thinking about human-powered access technologies.

© All rights reserved Bigham et al. and/or ACM Press

 

Tran, Jessica J., Kim, Joy, Chon, Jaehong, Riskin, Eve A., Ladner, Richard E. and Wobbrock, Jacob O. (2011): Evaluating quality and comprehension of real-time sign language video on mobile phones. In: Thirteenth Annual ACM SIGACCESS Conference on Assistive Technologies 2011. pp. 115-122. Available online

Video and image quality are often objectively measured using peak signal-to-noise ratio (PSNR), but for sign language video, human comprehension is most important. Yet the relationship of human comprehension to PSNR has not been studied. In this work, we determine how well PSNR matches human comprehension of sign language video. We use very low bitrates (10-60 kbps) and two low spatial resolutions (192×144 and 320×240 pixels), which may be typical of video transmission on mobile phones using 3G networks. In a national online video-based user survey of 103 respondents, we found that respondents preferred the 320×240 spatial resolution transmitted at 20 kbps and higher; this does not match what PSNR results would predict. However, when comparing perceived ease/difficulty of comprehension, we found that responses did correlate well with measured PSNR. This suggests that PSNR may not be suitable for representing subjective video quality, but can be reliable as a measure for comprehensibility of American Sign Language (ASL) video. These findings are applied to our experimental mobile phone application, MobileASL, which enables real-time sign language communication for Deaf users at low bandwidths over the U.S. 3G cellular network.

© All rights reserved Tran et al. and/or ACM Press
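
For reference, PSNR, the objective metric the study compares against comprehension, is computed from the mean squared error between original and compressed frames. A minimal sketch for 8-bit video frames:

```python
import numpy as np

def psnr(original, compressed, max_value=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit frames (higher = closer match)."""
    mse = np.mean((original.astype(np.float64) - compressed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")   # identical frames
    return 10.0 * np.log10(max_value ** 2 / mse)
```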

 

Azenkot, Shiri, Ladner, Richard E. and Wobbrock, Jacob O. (2011): Smartphone haptic feedback for nonvisual wayfinding. In: Thirteenth Annual ACM SIGACCESS Conference on Assistive Technologies 2011. pp. 281-282. Available online

We explore using vibration on a smartphone to provide turn-by-turn walking instructions to people with visual impairments. We present two novel feedback methods called Wand and ScreenEdge and compare them to a third method called Pattern. We built a prototype and conducted a user study where 8 participants walked along a pre-programmed route using the 3 vibration feedback methods and no audio output. Participants interpreted the feedback with an average error rate of just 4 percent. Most preferred the Pattern method, where patterns of vibrations indicate different directions, or the ScreenEdge method, where areas of the screen correspond to directions and touching them may induce vibration.

© All rights reserved Azenkot et al. and/or ACM Press

 

Kane, Shaun K., Morris, Meredith Ringel, Perkins, Annuska Z., Wigdor, Daniel, Ladner, Richard E. and Wobbrock, Jacob O. (2011): Access overlays: improving non-visual access to large touch screens for blind users. In: Proceedings of the 2011 ACM Symposium on User Interface Software and Technology 2011. pp. 273-282. Available online

Many touch screens remain inaccessible to blind users, and those approaches to providing access that do exist offer minimal support for interacting with large touch screens or spatial data. In this paper, we introduce a set of three software-based access overlays intended to improve the accessibility of large touch screen interfaces, specifically interactive tabletops. Our access overlays are called edge projection, neighborhood browsing, and touch-and-speak. In a user study, 14 blind users compared access overlays to an implementation of Apple's VoiceOver screen reader. Our results show that two of our techniques were faster than VoiceOver, that participants correctly answered more questions about the screen's layout using our techniques, and that participants overwhelmingly preferred our techniques. We developed several applications demonstrating the use of access overlays, including an accessible map kiosk and an accessible board game.

© All rights reserved Kane et al. and/or ACM Press

2010
 

Cavender, Anna C., Otero, Daniel S., Bigham, Jeffrey P. and Ladner, Richard E. (2010): ASL-STEM Forum: enabling sign language to grow through online collaboration. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 2075-2078. Available online

American Sign Language (ASL) currently lacks agreed-upon signs for complex terms in scientific fields, causing deaf students to miss or misunderstand course material. Furthermore, the same term or concept may have multiple signs, resulting in inconsistent standards and strained collaboration. The ASL-STEM Forum is an online, collaborative, video forum for sharing ASL signs and discussing them. An initial user study of the Forum has shown its viability and revealed lessons in accommodating varying user types, from lurkers to advanced contributors, until critical mass is achieved.

© All rights reserved Cavender et al. and/or their publisher

 

Bigham, Jeffrey P., Chisholm, Wendy and Ladner, Richard E. (2010): WebAnywhere: experiences with a new delivery model for access technology. In: Proceedings of the 2010 International Cross-Disciplinary Conference on Web Accessibility W4A 2010. p. 15. Available online

In this paper we describe recent developments and our experiences in releasing WebAnywhere. WebAnywhere was originally designed as a web-based alternative to a traditional screen reader. It can be run on any computer without installing new software, making it ideal for use on-the-go or in libraries and schools where the appropriate access technology is unlikely to already be installed and where users do not have permission to install it. Since its initial release nearly two years ago, WebAnywhere has expanded from its original goal of supporting blind web users to become a platform for an array of technologies supporting access for people with disabilities.

© All rights reserved Bigham et al. and/or their publisher

 

Tran, Jessica J., Johnson, Tressa W., Kim, Joy, Rodriguez, Rafael, Yin, Sheri, Riskin, Eve A., Ladner, Richard E. and Wobbrock, Jacob O. (2010): A web-based user survey for evaluating power saving strategies for deaf users of mobileASL. In: Twelfth Annual ACM SIGACCESS Conference on Assistive Technologies 2010. pp. 115-122. Available online

MobileASL is a video compression project for two-way, real-time video communication on cell phones, allowing Deaf people to communicate in the language most accessible to them, American Sign Language. Unfortunately, running MobileASL quickly depletes a full battery charge in a few hours. Previous work on MobileASL investigated a method called variable frame rate (VFR) to increase the battery duration. We expand on this previous work by creating two new power saving algorithms, variable spatial resolution (VSR), and the application of both VFR and VSR. These algorithms extend the battery life by altering the temporal and/or spatial resolutions of video transmitted on MobileASL. We found that implementing only VFR extended the battery life from 284 minutes to 307 minutes; implementing only VSR extended the battery life to 306 minutes, and implementing both VFR and VSR extended the battery life to 315 minutes. We evaluated all three algorithms by creating a linguistically accessible online survey to investigate Deaf people's perceptions of video quality when these algorithms were applied. In our survey results, we found that VFR produces perceived video choppiness and VSR produces perceived video blurriness; however, a surprising finding was that when both VFR and VSR are used together, they largely ameliorate the choppiness and blurriness perceived, i.e., they each improve the use of the other. This is a useful finding because using VFR and VSR together saves the most battery life.

© All rights reserved Tran et al. and/or their publisher
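
In relative terms, the reported figures correspond to battery-life extensions of roughly 8% (VFR), 8% (VSR), and 11% (VFR and VSR combined) over the 284-minute baseline; a quick check:

```python
# Relative battery-life gains implied by the figures reported in the abstract.
baseline = 284
for name, minutes in [("VFR", 307), ("VSR", 306), ("VFR+VSR", 315)]:
    gain = (minutes - baseline) / baseline * 100
    print(f"{name}: {minutes} min ({gain:.1f}% longer than baseline)")
# VFR: 8.1%, VSR: 7.7%, VFR+VSR: 10.9%
```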

2009
 

Cavender, Anna C., Bigham, Jeffrey P. and Ladner, Richard E. (2009): ClassInFocus: enabling improved visual attention strategies for deaf and hard of hearing students. In: Eleventh Annual ACM SIGACCESS Conference on Assistive Technologies 2009. pp. 67-74. Available online

Deaf and hard of hearing students must juggle their visual attention in current classroom settings. Managing many visual sources of information (instructor, interpreter or captions, slides or whiteboard, classmates, and personal notes) can be a challenge. ClassInFocus automatically notifies students of classroom changes, such as slide changes or new speakers, helping them employ more beneficial observing strategies. A user study of notification techniques shows that students who liked the notifications were more likely to visually utilize them to improve performance.

© All rights reserved Cavender et al. and/or their publisher

 

Kane, Shaun K., Jayant, Chandrika, Wobbrock, Jacob O. and Ladner, Richard E. (2009): Freedom to roam: a study of mobile device adoption and accessibility for people with visual and motor disabilities. In: Eleventh Annual ACM SIGACCESS Conference on Assistive Technologies 2009. pp. 115-122. Available online

Mobile devices provide people with disabilities new opportunities to act independently in the world. However, these empowering devices have their own accessibility challenges. We present a formative study that examines how people with visual and motor disabilities select, adapt, and use mobile devices in their daily lives. We interviewed 20 participants with visual and motor disabilities and asked about their current use of mobile devices, including how they select them, how they use them while away from home, and how they adapt to accessibility challenges when on the go. Following the interviews, 19 participants completed a diary study in which they recorded their experiences using mobile devices for one week. Our results show that people with visual and motor disabilities use a variety of strategies to adapt inaccessible mobile devices and successfully use them to perform everyday tasks and navigate independently. We provide guidelines for more accessible and empowering mobile device design.

© All rights reserved Kane et al. and/or their publisher

 

Cherniavsky, Neva, Chon, Jaehong, Wobbrock, Jacob O., Ladner, Richard E. and Riskin, Eve A. (2009): Activity analysis enabling real-time video communication on mobile phones for deaf users. In: Proceedings of the ACM Symposium on User Interface Software and Technology 2009. pp. 79-88. Available online

We describe our system called MobileASL for real-time video communication on the current U.S. mobile phone network. The goal of MobileASL is to enable Deaf people to communicate with Sign Language over mobile phones by compressing and transmitting sign language video in real-time on an off-the-shelf mobile phone, which has a weak processor, uses limited bandwidth, and has little battery capacity. We develop several H.264-compliant algorithms to save system resources while maintaining ASL intelligibility by focusing on the important segments of the video. We employ a dynamic skin-based region-of-interest (ROI) that encodes the skin at higher quality at the expense of the rest of the video. We also automatically recognize periods of signing versus not signing and raise and lower the frame rate accordingly, a technique we call variable frame rate (VFR). We show that our variable frame rate technique results in a 47% gain in battery life on the phone, corresponding to an extra 68 minutes of talk time. We also evaluate our system in a user study. Participants fluent in ASL engage in unconstrained conversations over mobile phones in a laboratory setting. We find that the ROI increases intelligibility and decreases guessing. VFR increases the need for signs to be repeated and the number of conversational breakdowns, but does not affect the users' perception of adopting the technology. These results show that our sign language sensitive algorithms can save considerable resources without sacrificing intelligibility.

© All rights reserved Cherniavsky et al. and/or their publisher
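
The variable frame rate technique amounts to encoding at full rate only while the activity recognizer reports signing. The sketch below shows only the frame-selection logic; the frame-rate values and the is_signing test are placeholders, not the paper's classifier.

```python
# Schematic sketch of variable frame rate (VFR): keep the frame rate high while
# the user is signing and skip most frames otherwise. Rates are placeholders.
FULL_FPS = 10     # placeholder frame rate while signing
IDLE_FPS = 1      # placeholder frame rate while not signing

def frames_to_encode(frames, is_signing):
    """Yield only the frames that would be encoded under VFR.

    frames: iterable of (timestamp_seconds, image) pairs from the camera.
    is_signing: callable deciding, per image, whether the user is signing."""
    next_time = 0.0
    for t, image in frames:
        if t >= next_time:
            fps = FULL_FPS if is_signing(image) else IDLE_FPS
            next_time = t + 1.0 / fps
            yield t, image
```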

2008
 

Brudvik, Jeremy T., Bigham, Jeffrey P., Cavender, Anna C. and Ladner, Richard E. (2008): Hunting for headings: sighted labeling vs. automatic classification of headings. In: Tenth Annual ACM SIGACCESS Conference on Assistive Technologies 2008. pp. 201-208. Available online

Proper use of headings in web pages can make navigation more efficient for blind web users by indicating semantic divisions in the page. Unfortunately, many web pages do not use proper HTML markup (h1-h6 tags) to indicate headings, instead using visual styling to create headings, which leaves headings indistinguishable from other page text for blind users. In a user study in which sighted participants labeled headings on a set of web pages, participants did not often agree on which elements on the page should be labeled as headings, suggesting why headings are not used properly on the web today. To address this problem, we have created a system called HeadingHunter that predicts whether web page text semantically functions as a heading by examining visual features of the text as rendered in a web browser. Its performance in labeling headings compares favorably with both a manually-classified set of heading examples and the combined results of the sighted labelers in our study. The resulting system illustrates a general methodology of creating simple scripts operating over visual features that can be directly included in existing tools.

© All rights reserved Brudvik et al. and/or ACM Press
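
HeadingHunter's approach, predicting heading-ness from rendered visual features, can be pictured as a small scoring function over per-element style features. The features and thresholds below are illustrative guesses, not the paper's trained classifier.

```python
# Illustrative sketch of classifying rendered text as a heading from visual
# features. The features and thresholds are invented for illustration only.
def looks_like_heading(font_px, is_bold, word_count, body_font_px=16):
    score = 0
    score += 2 if font_px >= 1.3 * body_font_px else 0   # noticeably larger text
    score += 1 if is_bold else 0                          # heavier weight
    score += 1 if word_count <= 10 else 0                 # headings tend to be short
    return score >= 3

print(looks_like_heading(font_px=24, is_bold=True, word_count=4))    # True
print(looks_like_heading(font_px=16, is_bold=False, word_count=40))  # False
```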

 

Ladner, Richard E. (2008): Access and Empowerment: Commentary on "Computers and People with Disabilities". In ACM Transactions on Accessible Computing, 1 (2) p. 11. Available online

A number of positive changes have taken place since Glinert and York's 1992 call-to-arms. Progress reviewed in this article includes evolving considerations of universal design in the marketplace, ubiquitous computing with accessibility features, increasing computing research and conference venues that address needs of users with disabilities, and attention to the importance of user empowerment in development.

© All rights reserved Ladner and/or ACM Press

 

Bigham, Jeffrey P., Prince, Craig M. and Ladner, Richard E. (2008): WebAnywhere: a screen reader on-the-go. In: Proceedings of the 2008 International Cross-Disciplinary Conference on Web Accessibility W4A 2008. pp. 73-82. Available online

People often use computers other than their own to access web content, but blind users are restricted to using only computers equipped with expensive, special-purpose screen reading programs that they use to access the web. WebAnywhere is a web-based, self-voicing web browser that enables blind web users to access the web from almost any computer that can produce sound, without installing new software. The system could serve as a convenient, low-cost solution for blind users on-the-go, for blind users unable to afford a full screen reader, and for web developers targeting accessible design. This paper overviews existing solutions for mobile web access for blind users and presents the design of the WebAnywhere system. WebAnywhere generates speech remotely and uses prefetching strategies designed to reduce perceived latency. A user evaluation of the system is presented, showing that blind users can use WebAnywhere to complete tasks representative of what users might want to complete on computers that are not their own. A survey of public computer terminals shows that WebAnywhere can run on most.

© All rights reserved Bigham et al. and/or ACM Press
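
The prefetching mentioned above hides latency by requesting speech for content the user is likely to read next before it is needed. A minimal sketch follows, in which fetch_speech is a hypothetical stand-in for the remote speech service rather than WebAnywhere's actual API.

```python
# Minimal sketch of latency-hiding prefetch for a self-voicing web interface.
# fetch_speech() is a hypothetical placeholder for the remote speech service.
from concurrent.futures import ThreadPoolExecutor

def fetch_speech(text):
    ...  # placeholder: request synthesized audio for `text` from the server

class SpeechPrefetcher:
    def __init__(self):
        self.pool = ThreadPoolExecutor(max_workers=2)
        self.pending = {}

    def prefetch(self, texts):
        """Start fetching audio for texts the user will probably read next."""
        for t in texts:
            if t not in self.pending:
                self.pending[t] = self.pool.submit(fetch_speech, t)

    def speak(self, text):
        """Return audio, reusing a prefetched result when one is available."""
        future = self.pending.pop(text, None)
        return future.result() if future else fetch_speech(text)
```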

 

Bigham, Jeffrey P., Prince, Craig M., Hahn, Sangyun and Ladner, Richard E. (2008): WebAnywhere: a screen reading interface for the web on any computer. In: Proceedings of the 2008 International Cross-Disciplinary Conference on Web Accessibility W4A 2008. pp. 132-133. Available online

Fulfilling the promise of a web-enabled global community means enabling blind web users to access their information and collaborative web services wherever they happen to be, on whatever computer they happen to have access to. Whether they're checking their email at a local internet café, using an airport kiosk to connect with a new business contact on a social networking site, or collaboratively editing a document in a hotel business center, blind web users need to stay connected to be successful. While web-enabled computers are everywhere, screen readers are installed on very few. Downloading and installing new software can take a long time and is difficult without a screen reading interface, and many computers will not allow users to download and install new software at all. Accessible mobile devices are prohibitively expensive. WebAnywhere is a free screen-reading web application capable of making the web accessible to blind users on any web-enabled computer or device, regardless of platform or browser used, without installing new software.

© All rights reserved Bigham et al. and/or ACM Press

 

Bigham, Jeffrey P., Prince, Craig M. and Ladner, Richard E. (2008): Webanywhere: enabling a screen reading interface for the web on any computer. In: Proceedings of the 2008 International Conference on the World Wide Web 2008. pp. 1159-1160. Available online

People often use computers other than their own to access web content, but blind users are restricted to using computers equipped with expensive, special-purpose screen reading programs that they use to access the web. WebAnywhere is a web-based, self-voicing web application that enables blind web users to access the web from almost any computer that can produce sound without installing new software. WebAnywhere could serve as a convenient, low-cost solution for blind users on-the-go, for blind users unable to afford another screen reader and for web developers targeting accessible design. This paper describes the implementation of WebAnywhere, overviews an evaluation of it by blind web users, and summarizes a survey of public terminals that shows it can run on most public computers.

© All rights reserved Bigham et al. and/or ACM Press

2007
 

Bigham, Jeffrey P. and Ladner, Richard E. (2007): Accessmonkey: a collaborative scripting framework for web users and developers. In: Proceedings of the 2007 International Cross-Disciplinary Conference on Web Accessibility (W4A) 2007. pp. 25-34. Available online

Efficient access to web content remains elusive for individuals accessing the web using assistive technology. Previous efforts to improve web accessibility have focused on developer awareness, technological improvement, and legislation, but these approaches have left remaining concerns. First, while many tools can help produce accessible content, these tools are generally difficult to integrate into existing developer workflows and rarely offer specific suggestions that developers can implement. Second, tools that automatically improve web content for users generally solve specific problems and are difficult to combine and use on a diversity of existing assistive technology. Finally, although blind web users have proven adept at overcoming the shortcomings of the web and existing tools, they have been only marginally involved in improving the accessibility of their own web experience. As a first step toward addressing these concerns, we introduce Accessmonkey, a common scripting framework that web users, web developers and web researchers can use to collaboratively improve accessibility. This framework advances the idea that Javascript and dynamic web content can be used to improve inaccessible content instead of being a cause of it. Using Accessmonkey, web users and developers on different platforms with potentially different goals can collaboratively make the web more accessible. In this paper we first present the Accessmonkey framework, describe three implementations of it that we have created, and offer several example scripts that demonstrate its utility. We conclude by discussing future extensions of this work that will provide efficient access to scripts as users browse the web and allow non-technical users to be involved in creating scripts.

© All rights reserved Bigham and Ladner and/or ACM Press

 

Kane, Shaun K., Shulman, Jessie A., Shockley, Timothy J. and Ladner, Richard E. (2007): A web accessibility report card for top international university web sites. In: Proceedings of the 2007 International Cross-Disciplinary Conference on Web Accessibility (W4A) 2007. pp. 148-156. Available online

University web pages play a central role in the activities of current and prospective postsecondary students. University sites that are not accessible may exclude people with disabilities from participation in educational, social and professional activities. In order to assess the current state of university web site accessibility, we performed a multi-method analysis of the home pages of 100 top international universities. Each site was analyzed for compliance with accessibility standards, image accessibility, alternate-language and text-only content, and quality of web accessibility statements. Results showed that many top universities continue to have accessibility problems. University web site accessibility also varies greatly across different countries and geographic regions. Remaining obstacles to universal accessibility for universities include low accessibility in non-English-speaking countries and absent or low-quality accessibility policies.

© All rights reserved Kane et al. and/or ACM Press

 

Cherniavsky, Neva, Cavender, Anna C., Ladner, Richard E. and Riskin, Eve A. (2007): Variable frame rate for low power mobile sign language communication. In: Ninth Annual ACM Conference on Assistive Technologies 2007. pp. 163-170. Available online

The MobileASL project aims to increase accessibility by enabling Deaf people to communicate over video cell phones in their native language, American Sign Language (ASL). Real-time video over cell phones can be a computationally intensive task that quickly drains the battery, rendering the cell phone useless. Properties of conversational sign language allow us to save power and bits: namely, lower frame rates are possible when one person is not signing due to turn-taking, and signing can potentially employ a lower frame rate than fingerspelling. We conduct a user study with native signers to examine the intelligibility of varying the frame rate based on activity in the video. We then describe several methods for automatically determining the activity of signing or not signing from the video stream in real-time. Our results show that varying the frame rate during turn-taking is a good way to save power without sacrificing intelligibility, and that automatic activity analysis is feasible.

© All rights reserved Cherniavsky et al. and/or ACM Press

 

Burgstahler, Sheryl and Ladner, Richard E. (2007): Increasing the Participation of People with Disabilities in Computing Fields. In IEEE Computer, 40 (5) pp. 94-97. Available online

2006
 

Cavender, Anna, Ladner, Richard E. and Riskin, Eve A. (2006): MobileASL: intelligibility of sign language video as constrained by mobile phone technology. In: Eighth Annual ACM Conference on Assistive Technologies 2006. pp. 71-78. Available online

For Deaf people, access to the mobile telephone network in the United States is currently limited to text messaging, forcing communication in English as opposed to American Sign Language (ASL), the preferred language. Because ASL is a visual language, mobile video phones have the potential to give Deaf people access to real-time mobile communication in their preferred language. However, even today's best video compression techniques cannot yield intelligible ASL at limited cell phone network bandwidths. Motivated by this constraint, we conducted one focus group and one user study with members of the Deaf Community to determine the intelligibility effects of video compression techniques that exploit the visual nature of sign language. Inspired by eye-tracking results that show high-resolution foveal vision is maintained around the face, we studied region-of-interest encodings (where the face is encoded at higher quality) as well as reduced frame rates (where fewer, better-quality frames are displayed every second). At all bit rates studied here, participants preferred moderate quality increases in the face region, sacrificing quality in other regions. They also preferred slightly lower frame rates because they yield better quality frames for a fixed bit rate. These results show promise for real-time access to the current cell phone network through sign-language-specific encoding techniques.

© All rights reserved Cavender et al. and/or ACM Press

 

Bigham, Jeffrey P., Kaminsky, Ryan S., Ladner, Richard E., Danielsson, Oscar M. and Hempton, Gordon L. (2006): WebInSight: making web images accessible. In: Eighth Annual ACM Conference on Assistive Technologies 2006. pp. 181-188. Available online

Images without alternative text are a barrier to equal web access for blind users. To illustrate the problem, we conducted a series of studies that conclusively show that a large fraction of significant images have no alternative text. To ameliorate this problem, we introduce WebInSight, a system that automatically creates and inserts alternative text into web pages on-the-fly. To formulate alternative text for images, we present three labeling modules based on web context analysis, enhanced optical character recognition (OCR) and human labeling. The system caches alternative text in a local database and can add new labels seamlessly after a web page is downloaded, resulting in minimal impact to the browsing experience.

© All rights reserved Bigham et al. and/or ACM Press
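
The labeling pipeline described above, which tries inexpensive automatic labelers first, falls back to human labeling, and caches whatever succeeds, can be outlined as follows; the three labeler functions are placeholders for the paper's web-context, OCR, and human-labeling modules, and the dict stands in for WebInSight's local database.

```python
# Outline of the alternative-text pipeline described in the abstract.
# The three labelers are placeholders, not the paper's implementations.
alt_text_cache = {}   # image URL -> previously computed alternative text

def label_from_context(url): return None   # placeholder: surrounding page text
def label_from_ocr(url): return None       # placeholder: enhanced OCR on the image
def queue_for_human(url): return None      # placeholder: ask a human labeler later

def alternative_text(image_url):
    if image_url in alt_text_cache:
        return alt_text_cache[image_url]
    for labeler in (label_from_context, label_from_ocr, queue_for_human):
        text = labeler(image_url)
        if text:
            alt_text_cache[image_url] = text
            return text
    return None   # no label yet; the page is left unchanged for now
```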

2005
 

Ladner, Richard E., Ivory, Melody Y., Rao, Rajesh, Burgstahler, Sheryl, Comden, Dan, Hahn, Sangyun, Renzelmann, Matthew, Krisnandi, Satria, Ramasamy, Mahalakshmi, Slabosky, Beverly, Martin, Andrew and Lacenski, Amelia (2005): Automating tactile graphics translation. In: Seventh Annual ACM Conference on Assistive Technologies 2005. pp. 150-157. Available online

Access to graphical images (bar charts, diagrams, line graphs, etc.) that are in a tactile form (representation through which content can be accessed by touch) is inadequate for students who are blind and take mathematics, science, and engineering courses. We describe our analysis of the current work practices of tactile graphics specialists who create tactile forms of graphical images. We propose automated means by which to improve the efficiency of current work practices. We describe the implementation of various components of this new automated process, which includes image classification, segmentation, simplification, and layout. We summarize our development of the tactile graphics assistant, which will enable tactile graphics specialists to be more efficient in creating tactile graphics both in batches and individually. We describe our unique team of researchers, practitioners, and student consultants who are blind, all of whom are needed to successfully develop this new way of translating tactile graphics.

© All rights reserved Ladner et al. and/or ACM Press

1989
 

Ladner, Richard E. (1989): Computer Accessibility For Federal Workers With Disabilities: It's The Law. In Communications of the ACM, 32 (8) pp. 952-956.

1988
 

Ladner, Richard E., McDonough, Francis A., Roth, William, Scadden, Lawrence A. and Vanderheiden, Gregg C. (1988): Public Law 99-506, "Section 508" Electronic Equipment Accessibility for Disabled Workers. In: Soloway, Elliot, Frye, Douglas and Sheppard, Sylvia B. (eds.) Proceedings of the ACM CHI 88 Human Factors in Computing Systems Conference June 15-19, 1988, Washington, DC, USA. pp. 219-222.

1987
 

Ladner, Richard E., Day, Randy, Gentry, Dennis, Meyer, Karin and Rose, Scott (1987): A user interface for deaf-blind people (preliminary report). In: Graphics Interface 87 (CHI+GI 87) April 5-9, 1987, Toronto, Ontario, Canada. pp. 75-80.

1984
 

Glinert, Ephraim P. and Ladner, Richard E. (1984): A Large Font Virtual Terminal Interface: A Software Prosthesis for the Visually Impaired. In Communications of the ACM, 27 (6) pp. 567-572.

 

Page Information

Page maintainer: The Editorial Team
URL: http://www.interaction-design.org/references/authors/richard_e__ladner.html