Number of co-authors: 11

Number of publications with three most frequent co-authors:
- Jeffrey P. Bigham: 4
- Hanjie Ji: 3
- Robert C. Miller: 2

Samuel White's three most productive colleagues, by number of publications:
- Robert C. Miller: 42
- Jeffrey P. Bigham: 32
- Greg Little: 15
Publications by Samuel White (bibliography)
Jayant, Chandrika, Ji, Hanjie, White, Samuel and Bigham, Jeffrey P. (2011): Supporting blind photography. In: Thirteenth Annual ACM SIGACCESS Conference on Assistive Technologies 2011. pp. 203-210.
Blind people want to take photographs for the same reasons as others -- to record important events, to share experiences, and as an outlet for artistic expression. Furthermore, both automatic computer vision technology and human-powered services can be used to give blind people feedback on their environment, but to work their best these systems need high-quality photos as input. In this paper, we present the results of a large survey that shows how blind people are currently using cameras. Next, we introduce EasySnap, an application that provides audio feedback to help blind people take pictures of objects and people and show that blind photographers take better photographs with this feedback. We then discuss how we iterated on the portrait functionality to create a new application called PortraitFramer designed specifically for this function. Finally, we present the results of an in-depth study with 15 blind and low-vision participants, showing that they could pick up how to successfully use the application very quickly.
© All rights reserved Jayant et al. and/or ACM Press
Lasecki, Walter S., Murray, Kyle I., White, Samuel, Miller, Robert C. and Bigham, Jeffrey P. (2011): Real-time crowd control of existing interfaces. In: Proceedings of the 2011 ACM Symposium on User Interface Software and Technology 2011. pp. 23-32.
Crowdsourcing has been shown to be an effective approach for solving difficult problems, but current crowdsourcing systems suffer two main limitations: (i) tasks must be repackaged for proper display to crowd workers, which generally requires substantial one-off programming effort and support infrastructure, and (ii) crowd workers generally lack a tight feedback loop with their task. In this paper, we introduce Legion, a system that allows end users to easily capture existing GUIs and outsource them for collaborative, real-time control by the crowd. We present mediation strategies for integrating the input of multiple crowd workers in real-time, evaluate these mediation strategies across several applications, and further validate Legion by exploring the space of novel applications that it enables.
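The abstract mentions mediation strategies for integrating the input of multiple crowd workers in real time. As a minimal sketch of one such strategy (a hypothetical illustration, not Legion's actual implementation), inputs proposed by workers within a short time window can be combined by simple majority vote before being forwarded to the captured GUI:

```python
from collections import Counter

def mediate_inputs(worker_inputs):
    """Pick the input to forward to the GUI by simple majority vote.

    worker_inputs: list of input events (e.g. key presses) proposed by
    different crowd workers within one time window.
    Returns the most common input, or None if no input was proposed.
    """
    if not worker_inputs:
        return None
    choice, _count = Counter(worker_inputs).most_common(1)[0]
    return choice

# Example: five workers propose key presses during one 500 ms window.
print(mediate_inputs(["left", "left", "up", "left", "up"]))  # left
```

A vote-based mediator trades latency (waiting for the window to close) for robustness against individual worker errors; other strategies, such as electing a single leader worker, make the opposite trade.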
© All rights reserved Lasecki et al. and/or ACM Press
Bigham, Jeffrey P., Jayant, Chandrika, Ji, Hanjie, Little, Greg, Miller, Andrew, Miller, Robert C., Tatarowicz, Aubrey, White, Brandyn, White, Samuel and Yeh, Tom (2010): VizWiz: nearly real-time answers to visual questions. In: Proceedings of the 2010 International Cross-Disciplinary Conference on Web Accessibility W4A 2010. p. 24.
Visual information pervades our environment. Vision is used to decide everything from what we want to eat at a restaurant and which bus route to take to whether our clothes match and how long until the milk expires. Individually, the inability to interpret such visual information is a nuisance for blind people who often have effective, if inefficient, work-arounds to overcome them. Collectively, however, they can make blind people less independent. Specialized technology addresses some problems in this space, but automatic approaches cannot yet answer the vast majority of visual questions that blind people may have. VizWiz addresses this shortcoming by using the Internet connections and cameras on existing smartphones to connect blind people and their questions to remote paid workers' answers. VizWiz is designed to have low latency and low cost, making it both competitive with expensive automatic solutions and much more versatile.
© All rights reserved Bigham et al. and/or their publisher
White, Samuel, Ji, Hanjie and Bigham, Jeffrey P. (2010): EasySnap: real-time audio feedback for blind photography. In: Proceedings of the 2010 ACM Symposium on User Interface Software and Technology 2010. pp. 409-410.
This demonstration presents EasySnap, an application that enables blind and low-vision users to take high-quality photos by providing real-time audio feedback as they point their existing camera phones. Users can readily follow the audio instructions to adjust their framing, zoom level and subject lighting appropriately. Real-time feedback is achieved on current hardware using computer vision in conjunction with use patterns drawn from current blind photographers.
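The kind of framing logic such feedback could rely on can be sketched as follows (a hypothetical illustration; the demonstration's actual computer-vision pipeline is not described here). The detected subject's bounding box is compared against the viewfinder to choose a spoken instruction:

```python
def framing_instruction(box, frame_w, frame_h, min_fill=0.2):
    """Map a detected subject bounding box to a spoken instruction.

    box: (x, y, w, h) of the subject in viewfinder pixel coordinates.
    frame_w, frame_h: viewfinder size in pixels.
    min_fill: minimum fraction of the frame the subject should fill.
    """
    x, y, w, h = box
    cx, cy = x + w / 2, y + h / 2  # subject centre
    if cx < frame_w / 3:
        return "move camera left"
    if cx > 2 * frame_w / 3:
        return "move camera right"
    if cy < frame_h / 3:
        return "tilt camera up"
    if cy > 2 * frame_h / 3:
        return "tilt camera down"
    if w * h < min_fill * frame_w * frame_h:
        return "move closer"
    return "good framing"

# Subject centred but small: the user should step closer.
print(framing_instruction((280, 200, 80, 80), 640, 480))  # move closer
```

In a real application each returned string would be spoken via text-to-speech, and the function would run on every camera preview frame.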
© All rights reserved White et al. and/or their publisher
White, Samuel (2010): Audiowiz: nearly real-time audio transcriptions. In: Twelfth Annual ACM SIGACCESS Conference on Assistive Technologies 2010. pp. 307-308.
Existing automated transcription solutions filter out environmental noises and focus only on transcribing the spoken word. This leaves deaf and hard of hearing users with no way of learning about events that provide no spoken information such as the sounds produced by a faulty appliance or the barked alert of a dutiful guard dog. In this paper we present AudioWiz, a mobile application that provides highly detailed audio transcriptions of both the spoken word and the accompanying environmental sounds. This approach is made possible by harnessing humans to provide audio transcriptions instead of more traditional automated means. Web-workers are recruited automatically in nearly real-time as dictated by demand.
© All rights reserved White and/or his/her publisher
Changes to this page (author):
05 Apr 2012: Modified
03 Apr 2012: Modified
15 Jan 2011: Modified
03 Nov 2010: Modified
03 Nov 2010: Added
Page maintainer: The Editorial Team