Number of co-authors: 24

Number of publications with 3 favourite co-authors:
Myounghoon Jeon: 6
Benjamin K. Davison: 2
Julia DeBlasio Olsheski: 2

Bruce N. Walker's 3 most productive colleagues, by number of publications:
Myounghoon Jeon: 10
Benjamin K. Davison: 3
Julia DeBlasio Olsheski: 2
Bruce N. Walker
Publications by Bruce N. Walker (bibliography)
Batterman, Jared M. and Walker, Bruce N. (2012): Displaying error & uncertainty in auditory graphs. In: Fourteenth Annual ACM SIGACCESS Conference on Assistive Technologies 2012. pp. 285-286.
Clear representation of uncertainty or error is crucial in graphs and other displays of data. Error bars are quite common in visual graphs, even though they are not necessarily well-designed, and often are not well understood, even by those who use them often (e.g., scientists, engineers). There has been little study of how to represent uncertainty in auditory graphs, such as those used increasingly by students and scientists with vision impairment. This study used conceptual magnitude estimation to determine how well different auditory dimensions (frequency, tempo) can represent error and uncertainty. The results will lead to more effective auditory displays of quantitative information and data.
© All rights reserved Batterman and Walker and/or ACM Press
Suh, Hyewon, Jeon, Myounghoon and Walker, Bruce N. (2012): Spearcons Improve Navigation Performance and Perceived Speediness in Korean Auditory Menus. In: Proceedings of the Human Factors and Ergonomics Society 2012 Annual Meeting 2012. pp. 1361-1365.
For decades, auditory menus using both speech (usually text-to-speech, TTS) and non-speech sounds have been extensively studied. Researchers have developed situation-optimized auditory menus involving such cues as auditory icons, earcons, spearcons, and spindex. Spearcons have generally outperformed other cues in terms of providing both contextual information and item-specific information. However, little research has been devoted to exploration of spearcons in languages other than English, or the use of spearcon-only auditory menus. In this study, we evaluated the use of spearcons in Korean menus, as well as the use of spearcons alone. Twenty-five native Korean speakers navigated through a two-dimensional auditory menu presented via TTS, with or without spearcon enhancements. Korean spearcons were successful. Participants also rated the spearcon-enhanced menu as seeming speedier and more fun than the TTS-only menu. After a short learning period, mean time-to-target in the auditory menu was even faster with spearcons alone, compared to traditional TTS-only menus.
© All rights reserved Suh et al. and/or Human Factors and Ergonomics Society
Olsheski, Julia DeBlasio, Walker, Bruce N. and McCloud, Jeff (2011): In-vehicle assistive technology (IVAT) for drivers who have survived a traumatic brain injury. In: Thirteenth Annual ACM SIGACCESS Conference on Assistive Technologies 2011. pp. 257-258.
IVAT (in-vehicle assistive technology) is an in-dash interface born of a collaborative effort between the Shepherd Center assistive technology team, the Georgia Tech Sonification Laboratory, and Centrafuse. The aim of this technology is to increase driver safety by taking individual cognitive abilities and limitations into account. While the potential applications of IVAT are widespread, the initial population of interest for the current research is survivors of a traumatic brain injury (TBI). TBI can cause a variety of impairments that limit driving ability. IVAT is aimed at enabling the individual to overcome these limitations in order to regain some independence by driving after injury.
© All rights reserved Olsheski et al. and/or ACM Press
Jeon, Myounghoon, Roberts, Jason, Raman, Parameshwaran, Yim, Jung-Bin and Walker, Bruce N. (2011): Participatory design process for an in-vehicle affect detection and regulation system for various drivers. In: Thirteenth Annual ACM SIGACCESS Conference on Assistive Technologies 2011. pp. 271-272.
Considerable research has shown that diverse affective (emotional) states influence cognitive processes and performance. Detecting a driver's affective states and regulating them may help increase driving performance and safety. Some populations are more vulnerable to issues regarding driving, affect, and affect regulation (e.g., novice drivers, young drivers, older drivers, and drivers with traumatic brain injury (TBI)). This paper describes initial findings from multiple participatory design processes, including interviews with 21 young drivers, and focus groups with a TBI driver and two driver rehab specialists. Each user group presents distinct issues and needs; therefore, differentiated approaches are needed to design an in-vehicle assistive technology system for a specific target user group.
© All rights reserved Jeon et al. and/or ACM Press
Jeon, Myounghoon and Walker, Bruce N. (2011): What to detect?: Analyzing Factor Structures of Affect in Driving Contexts for an Emotion Detection and Regulation System. In: Proceedings of the Human Factors and Ergonomics Society 55th Annual Meeting 2011. pp. 1889-1893.
This research is a part of the IVAT (In-Vehicle Assistive Technology) project, an in-dash interface design project to help drivers who have various disabilities, including deficits in emotion regulation. While there have been several studies on emotion detection for drivers, few studies have seriously addressed what to detect and why. Those are crucial issues to consider when implementing an effective affect management system. Phase 1 of our study gathered a total of 33 different driving situations that can induce emotions and 56 plausible affective keywords to describe such emotions. Phase 2 analyzed factor structures of affect for driving contexts through user ratings and Factor Analysis, and obtained nine factors: fearful, happy, angry, depressed, curious, embarrassed, urgent, bored, and relieved. These factors accounted for 65.1% of the total variance. Results are discussed in terms of designing the IVAT emotion detection and regulation system for driving contexts.
© All rights reserved Jeon and Walker and/or HFES
Olsheski, Julia DeBlasio and Walker, Bruce N. (2011): Documentation in a Medical Setting: Effects of Technology on Perceived Quality of Care. In: Proceedings of the Human Factors and Ergonomics Society 55th Annual Meeting 2011. pp. 1980-1984.
The authors extend previous findings regarding the social impact of introducing new documentation technologies to the doctor-patient interaction by including an age comparison. Participants (including 'young adults' aged 18-39 and 'older adults' aged 62-87) viewed one of several video conditions portraying a medical interview during which the physician used one of five documenting methods/devices (nothing, pen and paper, desktop computer, PDA, wearable computer). After viewing the doctor-patient interaction, participants completed a series of questionnaires evaluating their general satisfaction with the quality of care (QoC) delivered during the medical interview. Results show a significant effect of the documentation method on QoC ratings. Further, participant responses varied significantly by age group, with younger adults tending to rate the doctor more favorably. Though advanced technology may afford the opportunity for better healthcare delivery, there may be a trade-off with lower levels of patient satisfaction.
© All rights reserved Olsheski and Walker and/or HFES
Jeon, Myounghoon, Gupta, Siddharth, Davison, Benjamin K. and Walker, Bruce N. (2010): Auditory menus are not just spoken visual menus: a case study of "unavailable" menu items. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 3319-3324.
Auditory menus can supplement or replace visual menus to enhance usability and accessibility. Despite the rapid increase of research on auditory displays, more is still needed to optimize the auditory-specific aspects of these implementations. In particular, there are several menu attributes and features that are often displayed visually, but that are poorly conveyed, or not conveyed at all, in the auditory version of the menu. Here, we report on two studies aimed at determining how best to render the important concept of an unavailable menu item. In Study 1, 23 undergraduates navigated a Microsoft Word-like auditory menu with a mix of available and unavailable items. For unavailable items, a whispered voice was favored over an attenuated voice or saying "unavailable". In Study 2, 26 undergraduates navigated a novel auditory menu. With practice, whispering unavailable items was more effective than skipping unavailable items. Results are discussed in terms of acoustic theory and cognitive menu selection theory.
© All rights reserved Jeon et al. and/or their publisher
Choi, Stephen H. and Walker, Bruce N. (2010): Digitizer auditory graph: making graphs accessible to the visually impaired. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 3445-3450.
This paper describes the design goal, design approach, and user testing of an assistive technology called Digitizer Auditory Graph, a sonification software tool that allows users to upload or take an image of a line graph with an optical input device (e.g., webcam, digital camera, cell phone camera) and then hear an auditory graph of the digitized graph image. This technique enables visually impaired students to have a multimodal display of the information in a graph. Preliminary evaluation results indicate that both visually impaired and sighted people can understand the patterns of graphs by listening to the auditory graph, and that optical input allows them to obtain output simply and quickly.
© All rights reserved Choi and Walker and/or their publisher
Bruce, Carrie M. and Walker, Bruce N. (2010): Designing effective sound-based aquarium exhibit interpretation for visitors with vision impairments. In: Twelfth Annual ACM SIGACCESS Conference on Assistive Technologies 2010. pp. 251-252.
Sound-based exhibit interpretation at aquariums has the potential to more effectively mediate visitor-exhibit interaction and support participation for visitors with vision impairments. However, existing interpretation strategies do not adequately convey dynamic animal information to visitors with vision impairments. In an effort to improve access, we are developing research-based guidelines for sound-based exhibit interpretation including audio tours, interpretive staff presentations, and a real-time information delivery system. This poster reports on proposed and completed user-centered design activities.
© All rights reserved Bruce and Walker and/or their publisher
Moskovitch, Yarden and Walker, Bruce N. (2010): Evaluating text descriptions of mathematical graphs. In: Twelfth Annual ACM SIGACCESS Conference on Assistive Technologies 2010. pp. 259-260.
One approach to making graphs more accessible has been the incorporation of natural language descriptions of graphs into multimodal assistive technologies. MathTrax is software targeted at middle and high school students that employs a Math Description Engine (MDE) to produce a textual description of graphs, as well as a visual and auditory representation of the graphs. Our study compared descriptions generated by the MDE to those generated by teachers of high school math, in order to better understand how to optimize the structure and content of mathematical graph descriptions. Feedback from these experts is compiled into suggestions for description templates to improve graph descriptions, as well as design recommendations for future applications.
© All rights reserved Moskovitch and Walker and/or their publisher
Campbell, Tyler and Walker, Bruce N. (2010): Increasing Trust In Online Shopping Environments Increases Purchasing Behavior. In: Proceedings of the Human Factors and Ergonomics Society 54th Annual Meeting 2010. pp. 1259-1263.
Trust has been shown to be a critical factor in promoting online purchasing behavior. This study tested the ability of three website modifications to increase levels of trust and intent to purchase. Previous theory holds that trust provides certainty in the presence of uncertainty. Three easily implementable website features were chosen to reduce uncertainty, thus creating trust and, in turn, increasing economic activity. Options on shopping websites of (1) overnight delivery, (2) in-store pick-up, and (3) a live video stream of a business's facilities were tested for effects on trust and usefulness via survey. The results suggest there are significant increases in at least some measures of trust and other correlates of purchasing behavior for all three suggested website enhancements.
© All rights reserved Campbell and Walker and/or HFES
Moskovitch, Yarden, Jeon, Myounghoon and Walker, Bruce N. (2010): Enhanced Auditory Menu Cues on a Mobile Phone Improve Time-Shared Performance of a Driving-Like Dual Task. In: Proceedings of the Human Factors and Ergonomics Society 54th Annual Meeting 2010. pp. 1321-1325.
The growing trend of using mobile phones and other in-vehicle technologies (IVT) while driving has spurred research on driver distraction, its effects and alleviation (Ashley, 2001; Young & Regan, 2007). The present study used a dual task in which 21 undergraduates navigated a mobile phone contact list for a target name (secondary task) while playing a computer game representative of driving (primary task). The phone menu was enhanced with two audio navigation cues: traditional text-to-speech (TTS) and spearcons (i.e., compressed speech). These cues were tested with and without visual display of the contact list. Spearcons in conjunction with TTS enhanced performance on the primary task while having no negative effect on the secondary task. Auditory menus reduced perceived workload and increased subjective ratings. Results are discussed in terms of multiple resources theory and practical mobile phone menu design.
© All rights reserved Moskovitch et al. and/or HFES
DeBlasio, Julia and Walker, Bruce N. (2009): Documentation in a Medical Setting: Effects of Technology on Perceived Quality of Care. In: Proceedings of the Human Factors and Ergonomics Society 53rd Annual Meeting 2009. pp. 645-649.
The authors examine the social impact of introducing advanced exam-room technologies to the doctor-patient interaction. A total of 342 participants viewed one of several video conditions portraying a physician conducting a medical interview in which he used one of five documenting methods/devices (nothing, pen and paper, PDA, desktop computer, wearable computer). After viewing the interaction, participants completed a series of questionnaires evaluating their general satisfaction with the quality of care (QoC) delivered during the medical interview. Results reveal that the type of technology used has a significant effect on QoC ratings. Though advanced technology offers the opportunity of better healthcare delivery, there may be a trade-off with lower ratings of interpersonal interactions.
© All rights reserved DeBlasio and Walker and/or their publisher
Jeon, Myounghoon and Walker, Bruce N. (2009): "Spindex": Accelerated Initial Speech Sounds Improve Navigation Performance in Auditory Menus. In: Proceedings of the Human Factors and Ergonomics Society 53rd Annual Meeting 2009. pp. 1081-1085.
Users interact with mobile devices through menus, which can include many items. Auditory menus can supplement or even replace visual menus. Unfortunately, little research has been devoted to enhancing the usability of large auditory menus. We evaluated a novel auditory menu enhancement called a "spindex" (i.e., speech index), in which brief audio cues inform the user where she is in a long menu. In the current implementation, each item in a menu is preceded by a sound based on the item's initial letter. Twenty-five undergraduates navigated through an alphabetized contact list of 50 or 150 names. The menu was presented with text-to-speech (TTS) alone, or TTS plus spindex, and with the visual menu displayed or not. Search time was faster with the spindex-enhanced menu, especially for long lists. Subjective ratings also favored the spindex. Results are discussed in terms of theory and practical applications.
© All rights reserved Jeon and Walker and/or their publisher
Stanley, Raymond M. and Walker, Bruce N. (2009): Intelligibility of bone-conducted speech at different locations compared to air-conducted speech. In: Proceedings of the Human Factors and Ergonomics Society 53rd Annual Meeting 2009. pp. 1086-1090.
Bone-conduction transducers offer a unique advantage for radio communication systems, allowing sound transmission while the ear canals remain open for access to environmental sounds, or plugged for blocking of environmental sounds. This study compared the intelligibility of noise-degraded speech presented through bone-conduction hearing administered at different locations, and through air-conduction. Speech intelligibility was assessed using the Diagnostic Rhyme Test. Speech intelligibility was reduced for all of the bone-conduction hearing locations, relative to air-conduction hearing. There were also differences in performance for the various bone conduction locations. These results suggest that given noise-degraded speech, the performance decrement from using bone conduction will have to be weighed against the benefits of being able to dynamically block the ear canal, or leave it open, as situations require. Further, the choice of bone conduction transducer location would need to weigh possible performance differences against the various practical advantages of each location.
© All rights reserved Stanley and Walker and/or their publisher
Jeon, Myounghoon, Davison, Benjamin K., Nees, Michael A., Wilson, Jeff and Walker, Bruce N. (2009): Enhanced auditory menu cues improve dual task performance and are preferred with in-vehicle technologies. In: Schmidt, Albrecht, Dey, Anind K., Seder, Thomas and Juhlin, Oskar (eds.) Proceedings of 1st International Conference on Automotive User Interfaces and Interactive Vehicular Applications - AutomotiveUI 2009 21-22 September , 2009, Essen, Germany. pp. 91-98.
Yalla, Pavani and Walker, Bruce N. (2008): Advanced auditory menus: design and evaluation of auditory scroll bars. In: Tenth Annual ACM SIGACCESS Conference on Assistive Technologies 2008. pp. 105-112.
Auditory menus have the potential to make devices that use visual menus accessible to a wide range of users. Visually impaired users could especially benefit from the auditory feedback received during menu navigation. However, auditory menus are a relatively new concept, and there are very few guidelines that describe how to design them. This paper details how visual menu concepts may be applied to auditory menus in order to help develop design guidelines. Specifically, this set of studies examined possible ways of designing an auditory scrollbar for an auditory menu. The following different auditory scrollbar designs were evaluated: single-tone, double-tone, alphabetical grouping, and proportional grouping. Three different evaluations were conducted to determine the best design. The first two evaluations were conducted with sighted users, and the last evaluation was conducted with visually impaired users. The results suggest that pitch polarity does not matter, and proportional grouping is the best of the auditory scrollbar designs evaluated here.
© All rights reserved Yalla and Walker and/or ACM Press
Pendse, Anandi, Pate, Michael and Walker, Bruce N. (2008): The accessible aquarium: identifying and evaluating salient creature features for sonification. In: Tenth Annual ACM SIGACCESS Conference on Assistive Technologies 2008. pp. 297-298.
Informal learning environments (e.g., aquaria, zoos, science centers) are often inaccessible to the visually impaired. Sonification can make such environments more accessible while also adding to the experience of sighted visitors. This study aimed to determine the salient features of moving creatures in the sort of dynamic display typically found in such environments, and to evaluate the efficacy of sonification in improving the experience of viewing such displays by sighted research participants.
© All rights reserved Pendse et al. and/or ACM Press
Walker, Bruce N., Lindsay, Jeffrey and Godfrey, Justin (2004): The audio abacus: representing numerical values with nonspeech sound for the visually impaired. In: Sixth Annual ACM Conference on Assistive Technologies 2004. pp. 9-15.
Point estimation is a relatively unexplored facet of sonification. We present a new computer application, the Audio Abacus, designed to transform numbers into tones following the analogy of an abacus. As this is an entirely novel approach to sonifying exact data values, we have begun a systematic line of investigation into the application settings that work most effectively. Results are presented for an initial study. Users were able to perform relatively well with very little practice or training, boding well for this type of display. Further investigations are planned. This could prove to be very useful for visually impaired individuals given the common nature of numerical data in everyday settings.
© All rights reserved Walker et al. and/or ACM Press
Page maintainer: The Editorial Team