Publication statistics

Pub. period: 2006-2012
Pub. count: 32
Number of co-authors: 48



Co-authors

Number of publications with the 3 most frequent co-authors:

Richard E. Ladner: 10
Anna C. Cavender: 6
Craig M. Prince: 5

 

 

Productive colleagues

Jeffrey P. Bigham's 3 most productive colleagues, by number of publications:

Jacob O. Wobbrock: 71
Robert C. Miller: 42
Hironobu Takagi: 33
 
 
 




Jeffrey P. Bigham

Personal Homepage:
cs.rochester.edu/~jbigham/


Publications by Jeffrey P. Bigham (bibliography)

2012
 

Kushalnagar, Raja S., Lasecki, Walter S. and Bigham, Jeffrey P. (2012): A readability evaluation of real-time crowd captions in the classroom. In: Fourteenth Annual ACM SIGACCESS Conference on Assistive Technologies 2012. pp. 71-78.

Deaf and hard of hearing individuals need accommodations that transform aural to visual information, such as captions that are generated in real-time to enhance their access to spoken information in lectures and other live events. The captions produced by professional captionists work well in general events such as community or legal meetings, but are often unsatisfactory in specialized content events such as higher education classrooms. In addition, it is hard to hire professional captionists, especially those with experience in specialized content areas, as they are scarce and expensive. The captions produced by commercial automatic speech recognition (ASR) software are far cheaper, but are often perceived as unreadable due to ASR's sensitivity to accents, background noise and slow response time. We ran a study to evaluate the readability of captions generated by a new crowd captioning approach versus professional captionists and ASR. In this approach, captions are typed by classmates into a system that aligns and merges the multiple incomplete caption streams into a single, comprehensive real-time transcript. Our study asked 48 deaf and hearing readers to evaluate transcripts produced by a professional captionist, ASR and crowd captioning software, and found that the readers preferred crowd captions over professional captions and ASR.

© All rights reserved Kushalnagar et al. and/or ACM Press
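The abstract above only names the align-and-merge step; the sketch below is a minimal, purely illustrative take on merging several time-stamped partial caption streams into one transcript, dropping duplicate words that arrive within a short window. The PartialWord type, merge_streams function, and the deduplication heuristic are hypothetical, not the authors' algorithm.

```python
from dataclasses import dataclass

@dataclass
class PartialWord:
    text: str       # a single word typed by one captionist
    time: float     # seconds since the start of the audio

def merge_streams(streams, dedup_window=1.5):
    """Merge several partial caption streams into one rough transcript.

    Words from all workers are ordered by capture time; a word is dropped
    if the same token already appeared within `dedup_window` seconds,
    which crudely removes overlap between workers.
    """
    words = sorted((w for s in streams for w in s), key=lambda w: w.time)
    merged, last_seen = [], {}
    for w in words:
        token = w.text.lower()
        if token in last_seen and w.time - last_seen[token] < dedup_window:
            continue  # likely the same spoken word captured by another worker
        last_seen[token] = w.time
        merged.append(w.text)
    return " ".join(merged)

# Example: two workers each caught only part of the sentence.
a = [PartialWord("the", 0.1), PartialWord("lecture", 0.5), PartialWord("starts", 0.9)]
b = [PartialWord("lecture", 0.6), PartialWord("starts", 1.0), PartialWord("now", 1.4)]
print(merge_streams([a, b]))  # -> "the lecture starts now"
```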

 

Burton, Michele A., Brady, Erin, Brewer, Robin, Neylan, Callie, Bigham, Jeffrey P. and Hurst, Amy (2012): Crowdsourcing subjective fashion advice using VizWiz: challenges and opportunities. In: Fourteenth Annual ACM SIGACCESS Conference on Assistive Technologies 2012. pp. 135-142.

Fashion is a language. How we dress signals to others who we are and how we want to be perceived. However, this language is primarily visual, making it inaccessible to people with vision impairments. Someone who is low-vision or completely blind cannot see what others are wearing or readily know what constitutes the norms and extremes of fashion, yet nearly everyone they encounter can see (and judge) their fashion choices. We describe the findings of a diary study with people with vision impairments that revealed the many accessibility barriers fashion presents, and of an online survey that revealed that clothing decisions are often made collaboratively, regardless of visual ability. Based on these findings, we identified a need for a collaborative and real-time environment for fashion advice. We have tested the feasibility of providing this advice through crowdsourcing using VizWiz, a mobile phone application in which participants receive nearly real-time answers to visual questions. Our pilot study results show that this application has the potential to address a great need within the blind community, but remaining challenges include improving photo capture and assembling a set of crowd workers with the requisite expertise. More broadly, our research highlights the feasibility of using crowdsourcing for subjective, opinion-based advice.

© All rights reserved Burton et al. and/or ACM Press

 

Lasecki, Walter S. and Bigham, Jeffrey P. (2012): Online quality control for real-time crowd captioning. In: Fourteenth Annual ACM SIGACCESS Conference on Assistive Technologies 2012. pp. 143-150.

Approaches for real-time captioning of speech are either expensive (professional stenographers) or error-prone (automatic speech recognition). As an alternative approach, we have been exploring whether groups of non-experts can collectively caption speech in real-time. In this approach, each worker types as much as they can and the partial captions are merged together in real-time automatically. This approach works best when partial captions are correct and received within a few seconds of when they were spoken, but these assumptions break down when engaging workers on-demand from existing sources of crowd work like Amazon's Mechanical Turk. In this paper, we present methods for quickly identifying workers who are producing good partial captions and estimating the quality of their input. We evaluate these methods in experiments run on Mechanical Turk in which a total of 42 workers captioned 20 minutes of audio. The methods introduced in this paper were able to raise overall accuracy from 57.8% to 81.22% while keeping coverage of the ground truth signal nearly unchanged.

© All rights reserved Lasecki and Bigham and/or ACM Press
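The paper's scoring method is not described in the abstract; as a hedged illustration of online quality control without ground truth, the sketch below scores each worker by how often their typed words agree with words from other workers in the same time bucket. The agreement_scores function and the bucketing scheme are assumptions, not the method from the paper.

```python
from collections import Counter

def agreement_scores(worker_words):
    """Estimate per-worker caption quality without ground truth.

    worker_words maps a worker id to the set of (time_bucket, word) pairs
    they typed. A worker's score is the fraction of their words that at
    least one other worker also typed in the same time bucket, so workers
    who consistently disagree with everyone else score low.
    """
    counts = Counter()
    for words in worker_words.values():
        counts.update(words)
    scores = {}
    for worker, words in worker_words.items():
        if not words:
            scores[worker] = 0.0
            continue
        agreed = sum(1 for w in words if counts[w] > 1)
        scores[worker] = agreed / len(words)
    return scores

workers = {
    "w1": {(0, "the"), (0, "lecture"), (1, "starts")},
    "w2": {(0, "the"), (1, "starts"), (1, "now")},
    "w3": {(0, "banana"), (1, "purple")},   # off-task worker
}
print(agreement_scores(workers))  # w3 scores 0.0; w1 and w2 score ~0.67
```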

2011
 

Bigham, Jeffrey P., Ladner, Richard E. and Borodin, Yevgen (2011): The design of human-powered access technology. In: Thirteenth Annual ACM SIGACCESS Conference on Assistive Technologies 2011. pp. 3-10.

People with disabilities have always overcome accessibility problems by enlisting people in their community to help. The Internet has broadened the available community and made it easier to get on-demand assistance remotely. In particular, the past few years have seen the development of technology in both research and industry that uses human power to overcome technical problems too difficult to solve automatically. In this paper, we frame recent developments in human computation in the historical context of accessibility, and outline a framework for discussing new advances in human-powered access technology. Specifically, we present a set of 13 design principles for human-powered access technology motivated both by historical context and current technological developments. We then demonstrate the utility of these principles by using them to compare several existing human-powered access technologies. The power of identifying the 13 principles is that they will inspire new ways of thinking about human-powered access technologies.

© All rights reserved Bigham et al. and/or ACM Press

 

Jayant, Chandrika, Ji, Hanjie, White, Samuel and Bigham, Jeffrey P. (2011): Supporting blind photography. In: Thirteenth Annual ACM SIGACCESS Conference on Assistive Technologies 2011. pp. 203-210.

Blind people want to take photographs for the same reasons as others -- to record important events, to share experiences, and as an outlet for artistic expression. Furthermore, both automatic computer vision technology and human-powered services can be used to give blind people feedback on their environment, but to work their best these systems need high-quality photos as input. In this paper, we present the results of a large survey that shows how blind people are currently using cameras. Next, we introduce EasySnap, an application that provides audio feedback to help blind people take pictures of objects and people and show that blind photographers take better photographs with this feedback. We then discuss how we iterated on the portrait functionality to create a new application called PortraitFramer designed specifically for this function. Finally, we present the results of an in-depth study with 15 blind and low-vision participants, showing that they could pick up how to successfully use the application very quickly.

© All rights reserved Jayant et al. and/or ACM Press

 

Lasecki, Walter S., Murray, Kyle I., White, Samuel, Miller, Robert C. and Bigham, Jeffrey P. (2011): Real-time crowd control of existing interfaces. In: Proceedings of the 2011 ACM Symposium on User Interface Software and Technology 2011. pp. 23-32.

Crowdsourcing has been shown to be an effective approach for solving difficult problems, but current crowdsourcing systems suffer two main limitations: (i) tasks must be repackaged for proper display to crowd workers, which generally requires substantial one-off programming effort and support infrastructure, and (ii) crowd workers generally lack a tight feedback loop with their task. In this paper, we introduce Legion, a system that allows end users to easily capture existing GUIs and outsource them for collaborative, real-time control by the crowd. We present mediation strategies for integrating the input of multiple crowd workers in real-time, evaluate these mediation strategies across several applications, and further validate Legion by exploring the space of novel applications that it enables.

© All rights reserved Lasecki et al. and/or ACM Press
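Legion's mediation strategies are only referenced above; the sketch below shows one plausible strategy, forwarding the majority key press from each short input window and doing nothing on ties. The window granularity and the mediate_window function are illustrative assumptions, not Legion's implementation.

```python
from collections import Counter

def mediate_window(inputs):
    """Pick a single control action from many workers' inputs.

    `inputs` is the list of key presses received from crowd workers during
    one time window (e.g. 500 ms). A simple mediation strategy forwards
    only the most common key press to the controlled interface; a tie or
    an empty window produces no action.
    """
    if not inputs:
        return None
    (key, votes), *rest = Counter(inputs).most_common(2)
    if rest and rest[0][1] == votes:
        return None          # ambiguous window: do nothing rather than guess
    return key

print(mediate_window(["left", "left", "up"]))   # -> "left"
print(mediate_window(["left", "up"]))           # -> None (tie)
```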

2010
 

Cavender, Anna C., Otero, Daniel S., Bigham, Jeffrey P. and Ladner, Richard E. (2010): ASL-STEM Forum: enabling sign language to grow through online collaboration. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 2075-2078.

American Sign Language (ASL) currently lacks agreed-upon signs for complex terms in scientific fields, causing deaf students to miss or misunderstand course material. Furthermore, the same term or concept may have multiple signs, resulting in inconsistent standards and strained collaboration. The ASL-STEM Forum is an online, collaborative, video forum for sharing ASL signs and discussing them. An initial user study of the Forum has shown its viability and revealed lessons in accommodating varying user types, from lurkers to advanced contributors, until critical mass is achieved.

© All rights reserved Cavender et al. and/or their publisher

 

Borodin, Yevgen, Bigham, Jeffrey P., Dausch, Glenn and Ramakrishnan, I. V. (2010): More than meets the eye: a survey of screen-reader browsing strategies. In: Proceedings of the 2010 International Cross-Disciplinary Conference on Web Accessibility W4A 2010. p. 13.

Browsing the Web with screen readers can be difficult and frustrating. Web pages often contain inaccessible content that is expressed only visually or that can be accessed only with the mouse. Screen-reader users must also contend with usability challenges encountered when the reading content is designed with built-in assumptions of how it will be accessed -- generally by a sighted person on a standard display. Far from passive consumers of content who simply accept web content as accessible or not, many screen-reader users are adept at developing, discovering, and employing browsing strategies that help them overcome the accessibility and usability problems they encounter. In this paper, we overview the browsing strategies that we have observed screen-reader users employ when faced with challenges, ranging from unfamiliar web sites and complex web pages to dynamic and automatically-refreshing content. A better understanding of existing browsing strategies can inform the design of accessible websites, development of new tools that make experienced users more effective, and help overcome the initial learning curve for users who have not yet acquired effective browsing strategies.

© All rights reserved Borodin et al. and/or their publisher

 

Bigham, Jeffrey P., Chisholm, Wendy and Ladner, Richard E. (2010): WebAnywhere: experiences with a new delivery model for access technology. In: Proceedings of the 2010 International Cross-Disciplinary Conference on Web Accessibility W4A 2010. p. 15.

In this paper we describe recent developments and our experiences in releasing WebAnywhere. WebAnywhere was originally designed as a web-based alternative to a traditional screen reader. It can be run on any computer without installing new software, making it ideal for use on-the-go or in libraries and schools where the appropriate access technology is unlikely to already be installed and where users do not have permission to install it. Since its initial release nearly two years ago, WebAnywhere has expanded from its original goal of supporting blind web users to become a platform for an array of technologies supporting access for people with disabilities.

© All rights reserved Bigham et al. and/or their publisher

 

Bigham, Jeffrey P., Jayant, Chandrika, Ji, Hanjie, Little, Greg, Miller, Andrew, Miller, Robert C., Tatarowicz, Aubrey, White, Brandyn, White, Samuel and Yeh, Tom (2010): VizWiz: nearly real-time answers to visual questions. In: Proceedings of the 2010 International Cross-Disciplinary Conference on Web Accessibility W4A 2010. p. 24.

Visual information pervades our environment. Vision is used to decide everything from what we want to eat at a restaurant and which bus route to take to whether our clothes match and how long until the milk expires. Individually, the inability to interpret such visual information is a nuisance for blind people, who often have effective, if inefficient, workarounds to overcome such problems. Collectively, however, these problems can make blind people less independent. Specialized technology addresses some problems in this space, but automatic approaches cannot yet answer the vast majority of visual questions that blind people may have. VizWiz addresses this shortcoming by using the Internet connections and cameras on existing smartphones to connect blind people and their questions to remote paid workers' answers. VizWiz is designed to have low latency and low cost, making it both competitive with expensive automatic solutions and much more versatile.

© All rights reserved Bigham et al. and/or their publisher

 

Lau, Tessa, Cerruti, Julian, Manzato, Guillermo, Bengualid, Mateo, Bigham, Jeffrey P. and Nichols, Jeffrey (2010): A conversational interface to web automation. In: Proceedings of the 2010 ACM Symposium on User Interface Software and Technology 2010. pp. 229-238.

This paper presents CoCo, a system that automates web tasks on a user's behalf through an interactive conversational interface. Given a short command such as "get road conditions for highway 88," CoCo synthesizes a plan to accomplish the task, executes it on the web, extracts an informative response, and returns the result to the user as a snippet of text. A novel aspect of our approach is that we leverage a repository of previously recorded web scripts and the user's personal web browsing history to determine how to complete each requested task. This paper describes the design and implementation of our system, along with the results of a brief user study that evaluates how likely users are to understand what CoCo does for them.

© All rights reserved Lau et al. and/or their publisher

 

Bigham, Jeffrey P., Jayant, Chandrika, Ji, Hanjie, Little, Greg, Miller, Andrew, Miller, Robert C., Miller, Robin, Tatarowicz, Aubrey, White, Brandyn, White, Samuel and Yeh, Tom (2010): VizWiz: nearly real-time answers to visual questions. In: Proceedings of the 2010 ACM Symposium on User Interface Software and Technology 2010. pp. 333-342.

The lack of access to visual information like text labels, icons, and colors can cause frustration and decrease independence for blind people. Current access technology uses automatic approaches to address some problems in this space, but the technology is error-prone, limited in scope, and quite expensive. In this paper, we introduce VizWiz, a talking application for mobile phones that offers a new alternative to answering visual questions in nearly real-time -- asking multiple people on the web. To support answering questions quickly, we introduce a general approach for intelligently recruiting human workers in advance called quikTurkit so that workers are available when new questions arrive. A field deployment with 11 blind participants illustrates that blind people can effectively use VizWiz to cheaply answer questions in their everyday lives, highlighting issues that automatic approaches will need to address to be useful. Finally, we illustrate the potential of using VizWiz as part of the participatory design of advanced tools by using it to build and evaluate VizWiz::LocateIt, an interactive mobile tool that helps blind people solve general visual search problems.

© All rights reserved Bigham et al. and/or their publisher
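quikTurkit is only named in the abstract; the sketch below illustrates the general idea of keeping workers engaged before real questions arrive so that answers come back quickly. The queue layout, pool size, and the post_placeholder_task / post_real_task functions are hypothetical stand-ins, not the actual quikTurkit or Mechanical Turk API.

```python
import queue, threading, time

questions = queue.Queue()

def post_placeholder_task():
    """Hypothetical stand-in for posting a cheap warm-up task that keeps
    a worker engaged until a real visual question arrives."""
    print("posting warm-up task")

def post_real_task(q):
    print(f"routing question to waiting workers: {q!r}")

def recruiter(target_pool=3, poll_seconds=1.0):
    """Keep roughly `target_pool` workers busy so real questions are
    answered with low latency. Runs until a None sentinel arrives."""
    engaged = 0
    while True:
        try:
            q = questions.get(timeout=poll_seconds)
        except queue.Empty:
            # No real work yet: top the pool up with warm-up tasks.
            while engaged < target_pool:
                post_placeholder_task()
                engaged += 1
            continue
        if q is None:
            break
        post_real_task(q)
        engaged = max(0, engaged - 1)   # that worker is now consumed

t = threading.Thread(target=recruiter, daemon=True)
t.start()
time.sleep(2)                           # pool fills with warm-up tasks
questions.put("What color is this shirt?")
questions.put(None)                     # shut down
t.join()
```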

 

White, Samuel, Ji, Hanjie and Bigham, Jeffrey P. (2010): EasySnap: real-time audio feedback for blind photography. In: Proceedings of the 2010 ACM Symposium on User Interface Software and Technology 2010. pp. 409-410.

This demonstration presents EasySnap, an application that enables blind and low-vision users to take high-quality photos by providing real-time audio feedback as they point their existing camera phones. Users can readily follow the audio instructions to adjust their framing, zoom level and subject lighting appropriately. Real-time feedback is achieved on current hardware using computer vision in conjunction with use patterns drawn from current blind photographers.

© All rights reserved White et al. and/or their publisher

 

Bigham, Jeffrey P., Brudvik, Jeremy T. and Zhang, Bernie (2010): Accessibility by demonstration: enabling end users to guide developers to web accessibility solutions. In: Twelfth Annual ACM SIGACCESS Conference on Assistive Technologies 2010. pp. 35-42.

Few web developers have been explicitly trained to create accessible web pages, and most are unlikely to recognize the subtle accessibility and usability concerns that disabled people face. Evaluating web pages with assistive technology can reveal problems, but this software takes time to install and its complexity can be overwhelming. To address these problems, we introduce a new approach for accessibility evaluation called Accessibility by Demonstration (ABD). ABD lets assistive technology users retroactively record accessibility problems at the time they experience them as human-readable macros, and easily send those recordings, along with the software necessary to replay them, to others. This paper describes an implementation of ABD as an extension to the WebAnywhere screen reader, and presents an evaluation with 15 web developers not experienced with accessibility showing that interacting with these recordings helped them understand and fix some subtle accessibility problems better than existing tools.

© All rights reserved Bigham et al. and/or their publisher

2009
 

Bigham, Jeffrey P. and Cavender, Anna C. (2009): Evaluating existing audio CAPTCHAs and an interface optimized for non-visual use. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 1829-1838.

Audio CAPTCHAs were introduced as an accessible alternative for those unable to use the more common visual CAPTCHAs, but anecdotal accounts have suggested that they may be more difficult to solve. This paper demonstrates in a large study of more than 150 participants that existing audio CAPTCHAs are clearly more difficult and time-consuming to complete as compared to visual CAPTCHAs for both blind and sighted users. In order to address this concern, we developed and evaluated a new interface for solving CAPTCHAs optimized for non-visual use that can be added in-place to existing audio CAPTCHAs. In a subsequent study, the optimized interface increased the success rate of blind participants by 59% on audio CAPTCHAs, illustrating a broadly applicable principle of accessible design: the most usable audio interfaces are often not direct translations of existing visual interfaces.

© All rights reserved Bigham and Cavender and/or ACM Press

 

Bigham, Jeffrey P., Lau, Tessa and Nichols, Jeffrey (2009): Trailblazer: enabling blind users to blaze trails through the web. In: Proceedings of the 2009 International Conference on Intelligent User Interfaces 2009. pp. 177-186.

For blind web users, completing tasks on the web can be frustrating. Each step can require a time-consuming linear search of the current web page to find the needed interactive element or piece of information. Existing interactive help systems and the playback components of some programming-by-demonstration tools identify the needed elements of a page as they guide the user through predefined tasks, obviating the need for a linear search on each step. We introduce TrailBlazer, a system that provides an accessible, non-visual interface to guide blind users through existing how-to knowledge. A formative study indicated that participants saw the value of TrailBlazer but wanted to use it for tasks and web sites for which no existing script was available. To address this, TrailBlazer offers suggestion-based help created on-the-fly from a short, user-provided task description and an existing repository of how-to knowledge. In an evaluation on 15 tasks, the correct prediction was contained within the top 5 suggestions 75.9% of the time.

© All rights reserved Bigham et al. and/or their publisher
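How TrailBlazer generates suggestions is not spelled out above; as a rough illustration only, the sketch below ranks scripts from a how-to repository by keyword overlap with the user's task description. The suggest function and its scoring are a naive stand-in, not the paper's model.

```python
def suggest(task_description, repository, top_n=5):
    """Rank stored how-to scripts against a short task description.

    `repository` maps a script name to its textual description. Scoring is
    plain word overlap between the task and each description -- a crude
    placeholder for whatever ranking a real system would use.
    """
    task_words = set(task_description.lower().split())
    scored = []
    for name, description in repository.items():
        overlap = len(task_words & set(description.lower().split()))
        scored.append((overlap, name))
    scored.sort(reverse=True)
    return [name for score, name in scored[:top_n] if score > 0]

repo = {
    "book-flight": "search for and book a flight on an airline site",
    "track-package": "track a shipped package by its tracking number",
    "weather": "get the weather forecast for a city",
}
print(suggest("book a cheap flight to Boston", repo, top_n=2))
```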

 

Cavender, Anna C., Bigham, Jeffrey P. and Ladner, Richard E. (2009): ClassInFocus: enabling improved visual attention strategies for deaf and hard of hearing students. In: Eleventh Annual ACM SIGACCESS Conference on Assistive Technologies 2009. pp. 67-74.

Deaf and hard of hearing students must juggle their visual attention in current classroom settings. Managing many visual sources of information (instructor, interpreter or captions, slides or whiteboard, classmates, and personal notes) can be a challenge. ClassInFocus automatically notifies students of classroom changes, such as slide changes or new speakers, helping them employ more beneficial observing strategies. A user study of notification techniques shows that students who liked the notifications were more likely to visually utilize them to improve performance.

© All rights reserved Cavender et al. and/or their publisher

 

Bigham, Jeffrey P., Kaminsky, Ryan S. and Nichols, Jeffrey (2009): Mining web interactions to automatically create mash-ups. In: Proceedings of the ACM Symposium on User Interface Software and Technology 2009. pp. 203-212.

The deep web contains an order of magnitude more information than the surface web, but that information is hidden behind the web forms of a large number of web sites. Metasearch engines can help users explore this information by aggregating results from multiple resources, but previously these could only be created and maintained by programmers. In this paper, we explore the automatic creation of metasearch mash-ups by mining the web interactions of multiple web users to find relations between query forms on different web sites. We also present an implemented system called TX2 that uses those connections to search multiple deep web resources simultaneously and integrate the results in context in a single results page. TX2 illustrates the promise of constructing mash-ups automatically and the potential of mining web interactions to explore deep web resources.

© All rights reserved Bigham et al. and/or their publisher

2008
 

Kane, Shaun K., Bigham, Jeffrey P. and Wobbrock, Jacob O. (2008): Slide rule: making mobile touch screens accessible to blind people using multi-touch interaction techniques. In: Tenth Annual ACM SIGACCESS Conference on Assistive Technologies 2008. pp. 73-80.

Recent advances in touch screen technology have increased the prevalence of touch screens and have prompted a wave of new touch screen-based devices. However, touch screens are still largely inaccessible to blind users, who must adopt error-prone compensatory strategies to use them or find accessible alternatives. This inaccessibility is due to interaction techniques that require the user to visually locate objects on the screen. To address this problem, we introduce Slide Rule, a set of audio-based multi-touch interaction techniques that enable blind users to access touch screen applications. We describe the design of Slide Rule, our interaction techniques, and a user study in which 10 blind people used Slide Rule and a button-based Pocket PC screen reader. Results show that Slide Rule was significantly faster than the button-based system, and was preferred by 7 of 10 users. However, users made more errors when using Slide Rule than when using the more familiar button-based system.

© All rights reserved Kane et al. and/or ACM Press

 

Borodin, Yevgen, Bigham, Jeffrey P., Raman, Rohit and Ramakrishnan, I. V. (2008): What's new?: making web page updates accessible. In: Tenth Annual ACM SIGACCESS Conference on Assistive Technologies 2008. pp. 145-152.

Web applications facilitated by technologies such as JavaScript, DHTML, AJAX, and Flash use a considerable amount of dynamic web content that is either inaccessible or unusable by blind people. Server side changes to web content cause whole page refreshes, but only small sections of the page update, causing blind web users to search linearly through the page to find new content. The connecting theme is the need to quickly and unobtrusively identify the segments of a web page that have changed and notify the user of them. In this paper we propose Dynamo, a system designed to unify different types of dynamic content and make dynamic content accessible to blind web users. Dynamo treats web page updates uniformly and its methods encompass both web updates enabled through dynamic content and scripting, and updates resulting from static page refreshes, form submissions, and template-based web sites. From an algorithmic and interaction perspective Dynamo detects underlying changes and provides users with a single and intuitive interface for reviewing the changes that have occurred. We report on the quantitative and qualitative results of an evaluation conducted with blind users. These results suggest that Dynamo makes access to dynamic content faster, and that blind web users like it better than existing interfaces.

© All rights reserved Borodin et al. and/or ACM Press
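Dynamo's detection of underlying changes is described only at a high level; the sketch below shows one simple way page updates could be found, by comparing the text of page segments before and after a refresh and reporting only the segments that differ. The segment granularity and the changed_segments function are assumptions, not Dynamo's implementation.

```python
def changed_segments(before, after):
    """Return the ids of page segments whose text changed after an update.

    `before` and `after` map a segment id (e.g. a DOM node path) to its
    visible text. New or modified segments are reported so a screen reader
    can jump the user straight to them instead of re-reading the page.
    """
    changes = []
    for seg_id, new_text in after.items():
        if before.get(seg_id) != new_text:
            changes.append(seg_id)
    return changes

before = {"#headline": "Scores", "#score": "0 - 0", "#footer": "About"}
after  = {"#headline": "Scores", "#score": "1 - 0", "#footer": "About"}
print(changed_segments(before, after))   # -> ['#score']
```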

 

Kawanaka, Shinya, Borodin, Yevgen, Bigham, Jeffrey P., Lunn, Darren, Takagi, Hironobu and Asakawa, Chieko (2008): Accessibility commons: a metadata infrastructure for web accessibility. In: Tenth Annual ACM SIGACCESS Conference on Assistive Technologies 2008. pp. 153-160.

Research projects, assistive technology, and individuals all create metadata in order to improve Web accessibility for visually impaired users. However, since these projects are disconnected from one another, this metadata is isolated in separate tools, stored in disparate repositories, and represented in incompatible formats. Web accessibility could be greatly improved if these individual contributions were merged. An integration method will serve as the bridge between future academic research projects and end users, enabling new technologies to reach end users more quickly. Therefore we introduce Accessibility Commons, a common infrastructure to integrate, store, and share metadata designed to improve Web accessibility. We explore existing tools to show how the metadata that they produce could be integrated into this common infrastructure, we present the design decisions made in order to help ensure that our common repository will remain relevant in the future as new metadata is developed, and we discuss how the common infrastructure component facilitates our broader social approach to improving accessibility.

© All rights reserved Kawanaka et al. and/or ACM Press

 

Brudvik, Jeremy T., Bigham, Jeffrey P., Cavender, Anna C. and Ladner, Richard E. (2008): Hunting for headings: sighted labeling vs. automatic classification of headings. In: Tenth Annual ACM SIGACCESS Conference on Assistive Technologies 2008. pp. 201-208.

Proper use of headings in web pages can make navigation more efficient for blind web users by indicating semantic divisions in the page. Unfortunately, many web pages do not use proper HTML markup (h1-h6 tags) to indicate headings, instead using visual styling to create headings, thus making headings indistinguishable from other page text for blind users. In a user study in which sighted participants labeled headings on a set of web pages, participants did not often agree on which elements on the page should be labeled as headings, suggesting why headings are not used properly on the web today. To address this problem, we have created a system called HeadingHunter that predicts whether web page text semantically functions as a heading by examining visual features of the text as rendered in a web browser. Its performance in labeling headings compares favorably with both a manually-classified set of heading examples and the combined results of the sighted labelers in our study. The resulting system illustrates a general methodology of creating simple scripts operating over visual features that can be directly included in existing tools.

© All rights reserved Brudvik et al. and/or ACM Press
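HeadingHunter's features and classifier are not given in the abstract; the sketch below conveys the idea with a hand-written rule over rendered font size, weight, and text length. The thresholds and the RenderedText / looks_like_heading names are invented for illustration, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class RenderedText:
    text: str
    font_px: float               # rendered font size in pixels
    bold: bool
    body_font_px: float = 16.0   # typical body text size on the page

def looks_like_heading(t: RenderedText) -> bool:
    """Guess whether a piece of rendered text functions as a heading.

    A crude stand-in for a learned classifier: short text that is bold or
    noticeably larger than body text is treated as a heading even if it is
    not marked up with h1-h6 tags.
    """
    larger = t.font_px >= 1.2 * t.body_font_px
    short = len(t.text.split()) <= 12
    return short and (larger or t.bold)

print(looks_like_heading(RenderedText("Latest News", 24.0, bold=True)))     # True
print(looks_like_heading(RenderedText("The quick brown fox jumped over "
                                      "the lazy dog near the river bank "
                                      "yesterday afternoon.", 16.0, False)))  # False
```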

 

Bigham, Jeffrey P., Cavender, Anna C., Kaminsky, Ryan S., Prince, Craig M. and Robison, Tyler S. (2008): Transcendence: enabling a personal view of the deep web. In: Proceedings of the 2008 International Conference on Intelligent User Interfaces 2008. pp. 169-178.

A wealth of structured, publicly-available information exists in the deep web but is only accessible by querying web forms. As a result, users are restricted by the interfaces provided and lack a convenient mechanism to express novel and independent extractions and queries on the underlying data. Transcendence enables personalized access to the deep web by enabling users to partially reconstruct web databases in order to perform new types of queries. From just a few examples, Transcendence helps users produce a large number of values for form input fields by using unsupervised information extraction and collaborative filtering of user suggestions. Structural and semantic analysis of returned pages finds individual results and identifies relevant fields. Users may revise automated decisions, balancing the power of automation with the errors it can introduce. In a user evaluation, both programmers and non-programmers found Transcendence to be a powerful way to explore deep web resources and wanted to use it in the future.

© All rights reserved Bigham et al. and/or ACM Press

 

Bigham, Jeffrey P., Prince, Craig M. and Ladner, Richard E. (2008): WebAnywhere: a screen reader on-the-go. In: Proceedings of the 2008 International Cross-Disciplinary Conference on Web Accessibility W4A 2008. pp. 73-82.

People often use computers other than their own to access web content, but blind users are restricted to using only computers equipped with expensive, special-purpose screen reading programs that they use to access the web. WebAnywhere is a web-based, self-voicing web browser that enables blind web users to access the web from almost any computer that can produce sound, without installing new software. The system could serve as a convenient, low-cost solution for blind users on-the-go, for blind users unable to afford a full screen reader, and for web developers targeting accessible design. This paper overviews existing solutions for mobile web access for blind users and presents the design of the WebAnywhere system. WebAnywhere generates speech remotely and uses prefetching strategies designed to reduce perceived latency. A user evaluation of the system is presented showing that blind users can use WebAnywhere to complete tasks representative of what users might want to complete on computers that are not their own. A survey of public computer terminals shows that WebAnywhere can run on most.

© All rights reserved Bigham et al. and/or ACM Press
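The prefetching strategies mentioned above are not detailed; the sketch below shows one plausible approach, synthesizing speech for the next few elements in reading order in the background so that moving forward rarely waits on the speech server. The synthesize placeholder and the SpeechPrefetcher class are assumptions, not WebAnywhere's implementation.

```python
from concurrent.futures import ThreadPoolExecutor

def synthesize(text):
    """Hypothetical stand-in for a remote text-to-speech request."""
    return f"<audio for {text!r}>"

class SpeechPrefetcher:
    """Cache speech for the elements a user is likely to read next.

    While the current element is being spoken, the next `lookahead`
    elements in reading order are synthesized in background threads, so
    stepping forward usually finds the audio already cached.
    """
    def __init__(self, lookahead=3):
        self.pool = ThreadPoolExecutor(max_workers=2)
        self.lookahead = lookahead
        self.cache = {}

    def speak(self, elements, index):
        for text in elements[index + 1 : index + 1 + self.lookahead]:
            if text not in self.cache:
                self.cache[text] = self.pool.submit(synthesize, text)
        current = self.cache.pop(elements[index], None)
        return current.result() if current else synthesize(elements[index])

page = ["Main navigation", "Search", "Top story: local news", "Weather"]
reader = SpeechPrefetcher()
for i in range(len(page)):
    print(reader.speak(page, i))
```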

 

Borodin, Yevgen, Bigham, Jeffrey P., Stent, Amanda and Ramakrishnan, I. V. (2008): Towards one world web with HearSay3. In: Proceedings of the 2008 International Cross-Disciplinary Conference on Web Accessibility W4A 2008. pp. 130-131.

In this paper, we present the key functionalities of HearSay 3, a non-visual web browser designed with the goal of improving web accessibility across the world. The featured functionalities include transparent support for multiple languages, collaborative labeling that harnesses the power of the social web, and a usable interface for handling dynamic web content.

© All rights reserved Borodin et al. and/or ACM Press

 

Bigham, Jeffrey P., Prince, Craig M., Hahn, Sangyun and Ladner, Richard E. (2008): WebAnywhere: a screen reading interface for the web on any computer. In: Proceedings of the 2008 International Cross-Disciplinary Conference on Web Accessibility W4A 2008. pp. 132-133.

Fulfilling the promise of a web-enabled global community means enabling blind web users to access their information and collaborative web services wherever they happen to be, on whatever computer to which they happen to have access. Whether they're checking their email at a local internet café, using an airport kiosk to connect with a new business contact on a social networking site, or collaboratively editing a document in a hotel business center, blind web users need to stay connected to be successful. While web-enabled computers are everywhere, screen readers are installed on very few. Downloading and installing new software can take a long time and is difficult without a screen reading interface, and many computers will not allow users to download and install new software at all. Accessible mobile devices are prohibitively expensive. WebAnywhere is a free screen-reading web application capable of making the web accessible to blind users on any web-enabled computer or device, regardless of platform or browser used, without installing new software.

© All rights reserved Bigham et al. and/or ACM Press

 

Bigham, Jeffrey P., Prince, Craig M. and Ladner, Richard E. (2008): WebAnywhere: enabling a screen reading interface for the web on any computer. In: Proceedings of the 2008 International Conference on the World Wide Web 2008. pp. 1159-1160.

People often use computers other than their own to access web content, but blind users are restricted to using computers equipped with expensive, special-purpose screen reading programs that they use to access the web. WebAnywhere is a web-based, self-voicing web application that enables blind web users to access the web from almost any computer that can produce sound without installing new software. WebAnywhere could serve as a convenient, low-cost solution for blind users on-the-go, for blind users unable to afford another screen reader and for web developers targeting accessible design. This paper describes the implementation of WebAnywhere, overviews an evaluation of it by blind web users, and summarizes a survey of public terminals that shows it can run on most public computers.

© All rights reserved Bigham et al. and/or ACM Press

2007
 

Bigham, Jeffrey P. (2007): Increasing web accessibility by automatically judging alternative text quality. In: Proceedings of the 2007 International Conference on Intelligent User Interfaces 2007. pp. 349-352.

The lack of appropriate alternative text for web images remains a problem for blind users and others accessing the web with non-visual interfaces. The content contained within web images is vital for understanding many web sites but the majority are assigned either inaccurate alternative text or none at all. The capability to automatically judge the quality of alternative text has the promise to dramatically improve the accessibility of the web by bringing intelligence to three categories of interfaces: tools that help web authors verify that they have provided adequate alternative text for web images, systems that automatically produce and insert alternative text for web images, and screen reading software. In this paper we describe a classifier capable of measuring the quality of alternative text given only a few labeled training examples by automatically considering the image context.

© All rights reserved Bigham and/or ACM Press
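The classifier's context features are not enumerated in the abstract; as an illustration only, the sketch below scores candidate alternative text with a few hand-written checks (empty text, filename-like text, boilerplate words, overlap with surrounding page text). The features and weights are invented, not the paper's trained model.

```python
import re

BOILERPLATE = {"image", "photo", "picture", "graphic", "spacer", "logo"}

def alt_text_quality(alt, context_text):
    """Score candidate alternative text for an image from 0.0 to 1.0.

    A toy stand-in for a trained classifier: penalize empty text,
    filenames, and boilerplate words; reward overlap with the text that
    surrounds the image on the page.
    """
    alt = (alt or "").strip()
    if not alt:
        return 0.0
    if re.fullmatch(r"[\w-]+\.(gif|jpe?g|png)", alt, re.IGNORECASE):
        return 0.05                       # alt text is just the filename
    words = {w.lower() for w in alt.split()}
    if words <= BOILERPLATE:
        return 0.2                        # generic, uninformative text
    context = {w.lower() for w in context_text.split()}
    overlap = len(words & context) / len(words)
    return min(1.0, 0.4 + 0.6 * overlap)  # base score plus context agreement

print(alt_text_quality("img_0042.jpg", "Mount Rainier at sunrise"))
print(alt_text_quality("Mount Rainier at sunrise", "Mount Rainier at sunrise in Washington"))
```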

 

Bigham, Jeffrey P. and Ladner, Richard E. (2007): Accessmonkey: a collaborative scripting framework for web users and developers. In: Proceedings of the 2007 International Cross-Disciplinary Conference on Web Accessibility (W4A) 2007. pp. 25-34.

Efficient access to web content remains elusive for individuals accessing the web using assistive technology. Previous efforts to improve web accessibility have focused on developer awareness, technological improvement, and legislation, but these approaches have left remaining concerns. First, while many tools can help produce accessible content, these tools are generally difficult to integrate into existing developer workflows and rarely offer specific suggestions that developers can implement. Second, tools that automatically improve web content for users generally solve specific problems and are difficult to combine and use on a diversity of existing assistive technology. Finally, although blind web users have proven adept at overcoming the shortcomings of the web and existing tools, they have been only marginally involved in improving the accessibility of their own web experience. As a first step toward addressing these concerns, we introduce Accessmonkey, a common scripting framework that web users, web developers and web researchers can use to collaboratively improve accessibility. This framework advances the idea that Javascript and dynamic web content can be used to improve inaccessible content instead of being a cause of it. Using Accessmonkey, web users and developers on different platforms with potentially different goals can collaboratively make the web more accessible. In this paper we first present the Accessmonkey framework, describe three implementations of it that we have created, and offer several example scripts that demonstrate its utility. We conclude by discussing future extensions of this work that will provide efficient access to scripts as users browse the web and allow non-technical users to be involved in creating scripts.

© All rights reserved Bigham and Ladner and/or ACM Press

 

Bigham, Jeffrey P., Cavender, Anna C., Brudvik, Jeremy T., Wobbrock, Jacob O. and Ladner, Richard E. (2007): WebinSitu: a comparative analysis of blind and sighted browsing behavior. In: Ninth Annual ACM Conference on Assistive Technologies 2007. pp. 51-58.

Web browsing is inefficient for blind web users because of persistent accessibility problems, but the extent of these problems and their practical effects from the perspective of the user has not been sufficiently examined. We conducted a study in situ to investigate the accessibility of the web as experienced by web users. This remote study used an advanced web proxy that leverages AJAX technology to record both the pages viewed and the actions taken by users on the web pages that they visited. Our study was conducted remotely over the period of one week, and our participants used the assistive technology and software to which they were already accustomed and had already configured according to preference. These advantages allowed us to aggregate observations of many users and to explore the practical effects on and coping strategies employed by our blind participants. Our study reflects web accessibility from the perspective of web users and describes quantitative differences in the browsing behavior of blind and sighted web users.

© All rights reserved Bigham et al. and/or ACM Press

 

Bigham, Jeffrey P. and Prince, Craig M. (2007): WebAnywhere: a screen reader on-the-go. In: Ninth Annual ACM Conference on Assistive Technologies 2007. pp. 225-226.

People often use computers other than their own to browse the web, but blind web users are limited in where they access the web because they require specialized, expensive programs for access. WebAnywhere is a web-based, self-voicing browser that enables blind web users to access the web from almost any computer that can produce sound. The system runs entirely in standard web browsers and requires no additional software to be installed. The system could serve as a convenient, low-cost solution for both web developers targeting accessible design and end users unable to afford a full screen reader. This demonstration will offer visitors the opportunity to try WebAnywhere and learn more about it.

© All rights reserved Bigham and Prince and/or ACM Press

2006
 

Bigham, Jeffrey P., Kaminsky, Ryan S., Ladner, Richard E., Danielsson, Oscar M. and Hempton, Gordon L. (2006): WebInSight: making web images accessible. In: Eighth Annual ACM Conference on Assistive Technologies 2006. pp. 181-188.

Images without alternative text are a barrier to equal web access for blind users. To illustrate the problem, we conducted a series of studies that conclusively show that a large fraction of significant images have no alternative text. To ameliorate this problem, we introduce WebInSight, a system that automatically creates and inserts alternative text into web pages on-the-fly. To formulate alternative text for images, we present three labeling modules based on web context analysis, enhanced optical character recognition (OCR) and human labeling. The system caches alternative text in a local database and can add new labels seamlessly after a web page is downloaded, resulting in minimal impact to the browsing experience.

© All rights reserved Bigham et al. and/or ACM Press
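The sketch below is a rough illustration of the caching flow described above: look an image up in a local store, fall back to a labeling step, and remember the result so later page loads can be patched immediately. The label_image placeholder and AltTextCache class are hypothetical, not WebInSight's implementation.

```python
import sqlite3

def label_image(url):
    """Hypothetical placeholder for the OCR / web-context / human labeling
    pipeline; here it just returns a canned label."""
    return f"label for {url}"

class AltTextCache:
    """Store alternative text for image URLs so each image is labeled once
    and later page loads can be patched immediately."""
    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute("CREATE TABLE IF NOT EXISTS alt (url TEXT PRIMARY KEY, text TEXT)")

    def get_or_label(self, url):
        row = self.db.execute("SELECT text FROM alt WHERE url = ?", (url,)).fetchone()
        if row:
            return row[0]
        text = label_image(url)           # slow path: compute and remember
        self.db.execute("INSERT INTO alt VALUES (?, ?)", (url, text))
        self.db.commit()
        return text

cache = AltTextCache()
for img_url in ["http://example.com/banner.png", "http://example.com/banner.png"]:
    print(cache.get_or_label(img_url))    # second lookup hits the cache
```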

 



Page Information

Page maintainer: The Editorial Team
URL: http://www.interaction-design.org/references/authors/jeffrey_p__bigham.html
