Publication statistics

Pub. period: 1993-2012
Pub. count: 108
Number of co-authors: 107



Co-authors

Number of publications with his 3 most frequent co-authors:

Andrew Crossan: 12
Eve Hoggan: 6
Roderick Murray-Smith: 6

 

 

Productive colleagues

Stephen A. Brewster's 3 most productive colleagues in number of publications:

Carl Gutwin: 116
Matt Jones: 63
Joemon M. Jose: 57
 
 
 

Stephen A. Brewster

Has also published under the name of:
"Stephen Brewster"

Personal Homepage:
dcs.gla.ac.uk/~stephen/aboutme.shtml


I am a Professor of Human-Computer Interaction in the Department of Computing Science at the University of Glasgow, UK. My main research interest is in Multimodal Human-Computer Interaction, covering sound, haptics and gestures. I have done a lot of research into Earcons, a particular form of non-speech sound.


Publications by Stephen A. Brewster (bibliography)

2012
 

Balcer, Bartoz, Halvey, Martin, Jose, Joemon M. and Brewster, Stephen A. (2012): COPE: interactive image retrieval using conversational recommendation. In: Proceedings of the HCI12 Conference on People and Computers XXVI 2012. pp. 1-10.

Most multimedia retrieval services, e.g. YouTube, Flickr and Google, rely on users searching using textual queries or examples. However, this solution is inadequate when there is no text, very little text, the text is in a foreign language or the user cannot form a textual query. In order to overcome these shortcomings we have developed an image retrieval system called COPE (COnversational Picture Exploration) that can use a number of different preference feedback mechanisms, inspired by conversational recommendation paradigms, for image retrieval. In COPE users are presented with a small number of search results and simply have to express whether these results match their information need. We examine the suitability of a number of feedback approaches for semiautomatic and interactive image retrieval. For interactive retrieval we compared our preference based approaches to text based search (where we consider text to be an upper bound); our results indicate that users prefer preference based search to text based search and in some cases our approaches can outperform text based search.

© All rights reserved Balcer et al. and/or their publisher

2011
 

Gutwin, Carl, Schneider, Oliver, Xiao, Robert and Brewster, Stephen A. (2011): Chalk sounds: the effects of dynamic synthesized audio on workspace awareness in distributed groupware. In: Proceedings of ACM CSCW11 Conference on Computer-Supported Cooperative Work 2011. pp. 85-94.

Awareness of other people's activity is an important part of shared-workspace collaboration, and is typically supported using visual awareness displays such as radar views. These visual presentations are limited in that the user must be able to see and attend to the view in order to gather awareness information. Using audio to convey awareness information does not suffer from these limitations, and previous research has shown that audio can provide valuable awareness in distributed settings. In this paper we evaluate the effectiveness of synthesized dynamic audio information, both on its own and as an adjunct to a visual radar view. We developed a granular-synthesis engine that produces realistic chalk sounds for off-screen activity in a groupware workspace, and tested the audio awareness in two ways. First, we measured people's ability to identify off-screen activities using only sound, and found that people are almost as accurate with synthesized sounds as with real sounds. Second, we tested dynamic audio awareness in a realistic groupware scenario, and found that adding audio to a radar view significantly improved awareness of off-screen activities in situations where it was difficult to see or attend to the visual display. Our work provides new empirical evidence about the value of dynamic synthesized audio in distributed groupware.

© All rights reserved Gutwin et al. and/or their publisher
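
The abstract names granular synthesis but gives no implementation details; purely as an illustrative sketch (all parameter choices below are assumptions, not the authors' engine), a granular synthesiser overlap-adds short windowed grains drawn from a recorded sample, with grain density driven by the remote user's drawing activity:

import numpy as np

def granular_texture(sample, activity, sr=44100, duration=1.0,
                     grain_ms=40, seed=0):
    # Overlap-add short Hann-windowed grains drawn at random from a
    # recorded chalk sample (a mono float array longer than one grain).
    # "activity" in [0, 1] stands in for the remote user's stroke speed;
    # higher activity means denser grains. Density range is invented.
    rng = np.random.default_rng(seed)
    grain_len = int(sr * grain_ms / 1000)
    window = np.hanning(grain_len)
    out = np.zeros(int(sr * duration))
    n_grains = int((20 + 180 * activity) * duration)
    for _ in range(n_grains):
        src = rng.integers(0, len(sample) - grain_len)
        dst = rng.integers(0, len(out) - grain_len)
        out[dst:dst + grain_len] += sample[src:src + grain_len] * window
    return out / max(1e-9, float(np.abs(out).max()))   # normalise to [-1, 1]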

 

Vazquez-Alvarez, Yolanda and Brewster, Stephen A. (2011): Eyes-free multitasking: the effect of cognitive load on mobile spatial audio interfaces. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 2173-2176.

As mobile devices increase in functionality, users perform more tasks when on the move. Spatial audio interfaces offer a solution for eyes-free interaction. However, such interfaces face a number of challenges when supporting multiple and simultaneous tasks, namely: 1) interference amongst multiple audio streams, and 2) the constraints of cognitive load. We present a comparative study of spatial audio techniques evaluated in a divided- and selective-attention task. A podcast was used for high cognitive load (divided-attention) and classical music for low cognitive load (selective-attention), while interacting with an audio menu. Results showed that spatial audio techniques were preferred when cognitive load was kept low, while a baseline technique using an interruptible single audio stream was significantly less preferred. Conversely, when cognitive load was increased the preferences reversed. Thus, given an appropriate task structure, spatial techniques offer a means of designing effective audio interfaces to support eyes-free mobile multitasking.

© All rights reserved Vazquez-Alvarez and Brewster and/or their publisher

 

Wilson, Graham, Halvey, Martin, Brewster, Stephen A. and Hughes, Stephen A. (2011): Some like it hot: thermal feedback for mobile devices. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 2555-2564.

Thermal stimulation is a rich, emotive and salient feedback channel that is well suited to HCI, but one that is yet to be fully investigated. Thermal feedback may be suited to environments that are too loud for audio or too bumpy for vibrotactile feedback. This paper presents two studies into how well users could detect hot and cold stimuli presented to the fingertips, the palm, the dorsal surface of the forearm and the dorsal surface of the upper arm. Evaluations were carried out in static and mobile settings. Results showed that the palm is most sensitive, cold is more perceivable and comfortable than warm and that stronger and faster-changing stimuli are more detectable but less comfortable. Guidelines for the design of thermal feedback are outlined, with attention paid to perceptual and hedonic factors.

© All rights reserved Wilson et al. and/or their publisher

 

Wilson, Graham, Brewster, Stephen A. and Halvey, Martin (2011): The effects of walking and control method on pressure-based interaction. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 2275-2280.

Pressure-based interactions have largely been limited to static scenarios; very few have focused on its use on mobile devices and even fewer have investigated the use of pressure while the user is in motion (i.e. walking). Pressure input is well suited to mobile interaction as mobile devices almost universally adopt touch and gestural input. This paper presents the initial results of research looking into the effects of walking on the application of pressure during linear targeting. Positional and rate-based (velocity) control methods are compared in order to determine which allows for more stable and accurate selections. Results suggest that rate-based control is superior for both mobile (walking) and static (sitting) linear targeting and that mobility significantly increases errors, selection time and subjective workload. These results will influence the design of a second part of the study, which will evaluate user ability to control the same application using only audio feedback.

© All rights reserved Wilson et al. and/or their publisher
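
For readers unfamiliar with the two control methods being compared, the sketch below illustrates the general distinction only; it is not the authors' implementation, and the dead zone, gain and strip length are invented for the example. Positional control maps the pressure reading directly onto a position on the linear target strip, while rate-based control integrates pressure as a cursor velocity.

def positional_control(pressure, strip_length=100.0):
    # Positional control: normalised pressure in [0, 1] maps straight
    # onto a position along the linear targeting strip.
    return pressure * strip_length

def rate_based_control(pressure, cursor, dt, dead_zone=0.1, gain=50.0,
                       strip_length=100.0):
    # Rate-based (velocity) control: pressure above a small dead zone
    # drives the cursor's speed; releasing pressure stops the cursor.
    velocity = max(0.0, pressure - dead_zone) * gain
    return min(strip_length, cursor + velocity * dt)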

 

Halvey, Martin, Wilson, Graham, Vazquez-Alvarez, Yolanda, Brewster, Stephen A. and Hughes, Stephen A. (2011): The effect of clothing on thermal feedback perception. In: Proceedings of the 2011 International Conference on Multimodal Interfaces 2011. pp. 217-220.

Thermal feedback is a new area of research in HCI. To date, studies investigating thermal feedback for interaction have focused on virtual reality, abstract uses of thermal output or on use in highly controlled lab settings. This paper is one of the first to look at how environmental factors, in our case clothing, might affect user perception of thermal feedback and therefore usability of thermal feedback. We present a study into how well users perceive hot and cold stimuli on the hand, thigh and waist. Evaluations were carried out with cotton and nylon between the thermal stimulators and the skin. Results showed that the presence of clothing requires higher intensity thermal changes for detection but that these changes are more comfortable than direct stimulation on skin.

© All rights reserved Halvey et al. and/or ACM Press

 

Wilson, Graham, Brewster, Stephen A., Halvey, Martin, Crossan, Andrew and Stewart, Craig (2011): The effects of walking, feedback and control method on pressure-based interaction. In: Proceedings of 13th Conference on Human-computer interaction with mobile devices and services 2011. pp. 147-156.

This paper presents a study looking into the effects of walking and the use of visual and audio feedback on the application of pressure for linear targeting. Positional and Rate-based control methods are compared in order to determine which allows for more stable and accurate selections, both while sitting and mobile. Results suggest that Rate-based control is superior for both mobile (walking) and static (sitting) linear targeting, and that mobility significantly increases errors, selection time and subjective workload. The use of only audio feedback significantly increased errors and task time for Positional control and static Rate-based control, but not mobile Rate-based control. Despite this, the results still suggest that audio control of pressure interaction while walking is highly accurate and usable.

© All rights reserved Wilson et al. and/or ACM Press

2010
 

Salimun, Carolyn, Purchase, Helen C., Simmons, David R. and Brewster, Stephen A. (2010): The effect of aesthetically pleasing composition on visual search performance. In: Proceedings of the Sixth Nordic Conference on Human-Computer Interaction 2010. pp. 422-431.

This paper presents the results of a study on the effect of the aesthetic layout properties of a computer interface on visual search performance. Search performance was measured at three levels of layout aesthetics: high, medium, and low. Two types of performance metric were recorded: response time and number of errors. Performance at the three levels of aesthetics was also compared between two search methods (with or without mouse pointing), and related to preference. The findings of the present study indicate that, regardless of search method used, response time (but not errors) was strongly affected by the aesthetics level. There is also a clear relationship between preference and performance when a composite measurement of aesthetics is used, although this does not seem to be due to the influence of individual aesthetic features. Further study is needed to identify other aesthetic factors that influence task performance, and to establish appropriate design guidelines.

© All rights reserved Salimun et al. and/or their publisher

 

McAdam, Christopher, Pinkerton, Craig and Brewster, Stephen A. (2010): Novel interfaces for digital cameras and camera phones. In: Proceedings of 12th Conference on Human-computer interaction with mobile devices and services 2010. pp. 143-152.

Camera phones are now very common but there are some usability issues that affect their use. These can occur because users look at the LCD to frame the image and can often miss the icons displayed around the edges that present important information about the status of the camera. This may lead to shots being missed or poorly exposed. Most camera phones do not take full advantage of the features of the underlying phone platform to enhance their interfaces. We created a camera application for the Nokia N95 that featured novel interface elements and made use of the features of the platform to provide a rich variety of information in more usable forms, such as: sonifications of the luminance histogram to ensure better exposure before a picture is taken; phone orientation to give a level indicator to ensure the camera is straight; measuring phone movement to ensure the phone is being held steady; and the detection of image motion to support panning. We also present a scenario for how these features could be used in conjunction with each other during the photo taking process.

© All rights reserved McAdam et al. and/or their publisher

 

Wilson, Graham, Stewart, Craig and Brewster, Stephen A. (2010): Pressure-based menu selection for mobile devices. In: Proceedings of 12th Conference on Human-computer interaction with mobile devices and services 2010. pp. 181-190.

Despite many successes in desktop applications, little work has looked at the use of pressure input on mobile devices and the different issues associated with mobile interactions e.g. non-visual feedback. This study examined pressure input on a mobile device using a single Force Sensing Resistor (FSR) with linearised output as a means of target selection within a menu, where target menu items varied in size and location along the z-axis. Comparing visual and audio feedback, results showed that, overall, eyes-free pressure interaction reached a mean level of 74% accuracy. With visual feedback mean accuracy

© All rights reserved Wilson et al. and/or their publisher

 

Alvarez, Yolanda Vazquez and Brewster, Stephen A. (2010): Designing spatial audio interfaces to support multiple audio streams. In: Proceedings of 12th Conference on Human-computer interaction with mobile devices and services 2010. pp. 253-256.

Auditory interfaces offer a solution to the problem of effective eyes-free mobile interactions. However, a problem with audio, as opposed to visual displays, is dealing with multiple simultaneous outputs. Any audio interface needs to consider: 1) simultaneous versus sequential presentation of multiple audio streams, 2) 3D audio techniques to place sounds in different spatial locations versus a single point of presentation, 3) dynamic movement versus fixed locations of audio sources. We present an experiment using a divided-attention task where a continuous podcast and an audio menu compete for attention. A sequential presentation baseline assessed the impact of cognitive load, and as expected, dividing attention had a significant effect on overall performance. However, spatial audio still increased the users' ability to attend to two streams, while dynamic movement of streams led to higher perceived workload. These results will provide guidelines for designers when building eyes-free auditory interfaces for mobile applications.

© All rights reserved Alvarez and Brewster and/or their publisher

 

Rico, Julie, Jacucci, Giulio, Reeves, Stuart, Hansen, Lone Koefoed and Brewster, Stephen A. (2010): Designing for performative interactions in public spaces. In: Proceedings of the 2010 International Conference on Ubiquitous Computing 2010. pp. 519-522.

Building on the assumption that every human action in public space has a performative aspect, this workshop seeks to explore issues of mobile technology and interactions in public settings. We will examine the design of performative technologies, the evaluation of user experience, the importance of spectator and performer roles, and the social acceptability of performative actions in public spaces. The workshop will aim to bring together researchers and practitioners who are interested in the rapidly growing area of technologies supporting use in a public setting, and through this, explore the themes the workshop offers, plan for publications which synthesize together this disparate work, and finally to facilitate future collaborations between participants.

© All rights reserved Rico et al. and/or their publisher

 

Hoggan, Eve and Brewster, Stephen A. (2010): Crosstrainer: testing the use of multimodal interfaces in situ. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 333-342.

We report the results of an exploratory 8-day field study of CrossTrainer: a mobile game with crossmodal audio and tactile feedback. Our research focuses on the longitudinal effects on performance with audio and tactile feedback, the impact of context such as location and situation on performance and personal modality preference. The results of this study indicate that crossmodal feedback can aid users in entering answers quickly and accurately using a variety of different widgets. Our study shows that there are times when audio is more appropriate than tactile and vice versa and for this reason devices should support both tactile and audio feedback to cover the widest range of environments, user preference, locations and tasks.

© All rights reserved Hoggan and Brewster and/or their publisher

 

Rico, Julie and Brewster, Stephen A. (2010): Usable gestures for mobile interfaces: evaluating social acceptability. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 887-896.

Gesture-based mobile interfaces require users to change the way they use technology in public settings. Since mobile phones are part of our public appearance, designers must integrate gestures that users perceive as acceptable for public use. This topic has received little attention in the literature so far. The studies described in this paper begin to look at the social acceptability of a set of gestures with respect to location and audience in order to investigate possible ways of measuring social acceptability. The results of the initial survey showed that location and audience had a significant impact on a user's willingness to perform gestures. These results were further examined through a user study where participants were asked to perform gestures in different settings (including a busy street) over repeated trials. The results of this work provide gesture design recommendations as well as social acceptability evaluation guidelines.

© All rights reserved Rico and Brewster and/or their publisher

 

Williamson, John, Robinson, Simon, Stewart, Craig, Murray-Smith, Roderick, Jones, Matt and Brewster, Stephen A. (2010): Social gravity: a virtual elastic tether for casual, privacy-preserving pedestrian rendezvous. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 1485-1494.

We describe a virtual "tether" for mobile devices that allows groups to have quick, simple and privacy-preserving meetups. Our design provides cues which allow dynamic coordination of rendezvous without revealing users' positions. Using accelerometers and magnetometers, combined with GPS positioning and non-visual feedback, users can probe and sense a dynamic virtual object representing the nearest meeting point. The Social Gravity system makes social bonds tangible in a virtual world which is geographically grounded, using haptic feedback to help users rendezvous. We show dynamic navigation using this physical model-based system to be efficient and robust in significant field trials, even in the presence of low-quality positioning. The use of simulators to build models of mobile geolocated systems for pre-validation purposes is discussed, and results compared with those from our trials. Our results show interesting behaviours in the social coordination task, which lead to guidelines for geosocial interaction design. The Social Gravity system proved to be very successful in allowing groups to rendezvous efficiently and simply and can be implemented using only commercially available hardware.

© All rights reserved Williamson et al. and/or their publisher

 

McGookin, David, Robertson, Euan and Brewster, Stephen A. (2010): Clutching at straws: using tangible interaction to provide non-visual access to graphs. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 1715-1724.

We present a tangible user interface (TUI) called Tangible Graph Builder that has been designed to allow visually impaired users to access graph and chart-based data. We describe the current paper-based materials used to allow independent graph construction and browsing, before discussing how researchers have applied virtual haptic and non-speech audio techniques to provide more flexible access. We discuss why, although these technologies overcome many of the problems of non-visual graph access, they also introduce new issues, and why the application of TUIs is important. An evaluation of Tangible Graph Builder with 12 participants (8 sight deprived, 4 blind) revealed key design requirements for non-visual TUIs, including phicon design and handling marker detection failure. We finish by presenting future work and improvements to our system.

© All rights reserved McGookin et al. and/or their publisher

 

Crossan, Andrew, Williamson, John and Brewster, Stephen A. (2010): Artex: artificial textures from every-day surfaces for touchscreens. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 4081-4086.

The lack of tactile feedback available on touchscreen devices adversely affects their usability and forces the user to rely heavily on visual feedback. Here we propose texturing a touchscreen with virtual vibrotactile textures to support the user when browsing an interface non-visually. We demonstrate how convincing pre-recorded textures can be delivered using processed audio files generated by recording audio from a contact microphone dragged over everyday surfaces. These textures are displayed through a vibrotactile device attached to the back of an HTC Hero phone, varying the rate and amplitude of the texture with the user's finger speed on the screen. We then discuss our future work exploring the potential of this idea to allow browsing of information and widgets non-visually.

© All rights reserved Crossan et al. and/or their publisher
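
The abstract only states that the texture's rate and amplitude vary with finger speed; the following is a hedged sketch of that mapping, with all constants invented for illustration rather than taken from Artex.

def texture_drive(finger_speed, base_rate=1.0, max_speed=2000.0):
    # Map the finger's speed on the touchscreen (pixels/second) to a
    # playback-rate multiplier and an amplitude for the pre-recorded
    # vibrotactile texture. The normalisation ceiling and rate range
    # are assumptions, not values from the paper.
    s = min(1.0, finger_speed / max_speed)     # normalise to [0, 1]
    playback_rate = base_rate * (0.5 + 1.5 * s)
    amplitude = s                              # silent when the finger is still
    return playback_rate, amplitude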

 

Kaaresoja, Topi and Brewster, Stephen A. (2010): Feedback is... late: measuring multimodal delays in mobile device touchscreen interaction. In: Proceedings of the 2010 International Conference on Multimodal Interfaces 2010. p. 2.

Multimodal interaction is becoming common in many kinds of devices, particularly mobile phones. If care is not taken in design and implementation, latencies in the timing of feedback in the different modalities may have unintended effects on users. This paper introduces an easy-to-implement multimodal latency measurement tool for touchscreen interaction. It uses off-the-shelf components and free software and is capable of measuring latencies accurately between different interaction events in different modalities. The tool uses a high-speed camera, a mirror, a microphone and an accelerometer to measure the touch, visual, audio and tactile feedback events that occur in touchscreen interaction. The microphone and the accelerometer are both interfaced with a standard PC soundcard, which makes the measurement and analysis simple. The latencies are obtained by hand and eye using a slow-motion video player and an audio editor. To validate the tool, we measured four commercial mobile phones. Our results show that there are significant differences in latencies, not only between the devices, but also between different applications and modalities within one device. In this paper the focus is on mobile touchscreen devices, but with minor modifications our tool could also be used in other domains.

© All rights reserved Kaaresoja and Brewster and/or ACM Press

 

Rico, Julie and Brewster, Stephen A. (2010): Gesture and voice prototyping for early evaluations of social acceptability in multimodal interfaces. In: Proceedings of the 2010 International Conference on Multimodal Interfaces 2010. p. 16.

Interaction techniques that require users to adopt new behaviors mean that designers must take into account social acceptability and user experience; otherwise the techniques may be rejected by users as too embarrassing to perform in public. This research uses a set of low-cost prototypes to study social acceptability and user perceptions of multimodal mobile interaction techniques early in the design process. We describe 4 prototypes that were used with 8 focus groups to evaluate user perceptions of novel multimodal interactions using gesture, speech and non-speech sounds, and to gain feedback about the usefulness of the prototypes for studying social acceptability. The results of this research describe user perceptions of social acceptability and the realities of using multimodal interaction techniques in daily life. The results also describe key differences between young users (18-29) and older users (70-95) with respect to evaluation and approach to understanding these interaction techniques.

© All rights reserved Rico and Brewster and/or ACM Press

2009
 

Hoggan, Eve, Crossan, Andrew, Brewster, Stephen A. and Kaaresoja, Topi (2009): Audio or tactile feedback: which modality when?. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 2253-2256.

When designing interfaces for mobile devices it is important to take into account the variety of contexts of use. We present a study that examines how changing noise and disturbance in the environment affects user performance in a touchscreen typing task with the interface being presented through visual only, visual and tactile, or visual and audio feedback. The aim of the study is to show at what exact environmental levels audio or tactile feedback become ineffective. The results show significant decreases in performance for audio feedback at levels of 94dB and above as well as decreases in performance for tactile feedback at vibration levels of 9.18g/s. These results suggest that at these levels, feedback should be presented by a different modality. These findings will allow designers to take advantage of sensor enabled mobile devices to adapt the provided feedback to the user's current context.

© All rights reserved Hoggan et al. and/or ACM Press

 

Vazquez-Alvarez, Yolanda and Brewster, Stephen A. (2009): Investigating background & foreground interactions using spatial audio cues. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 3823-3828.

Audio is a key feedback mechanism in eyes-free and mobile computer interaction. Spatial audio, which allows us to localize a sound source in a 3D space, can offer a means of altering focus between audio streams as well as increasing the richness and differentiation of audio cues. However, the implementation of spatial audio on mobile phones is a recent development. Therefore, a calibration of this new technology is a requirement for any further spatial audio research. In this paper we report an evaluation of the spatial audio capabilities supported on a Nokia N95 8GB mobile phone. Participants were able to significantly discriminate between five audio sources on the frontal horizontal plane. Results also highlighted possible subject variation caused by earedness and handedness. We then introduce the concept of audio minimization and describe work in progress using the Nokia N95's 3D audio capability to implement and evaluate audio minimization in an eyes-free mobile environment.

© All rights reserved Vazquez-Alvarez and Brewster and/or ACM Press

 

Pietrzak, Thomas, Crossan, Andrew, Brewster, Stephen A., Martin, Benoît and Pecci, Isabelle (2009): Exploring Geometric Shapes with Touch. In: Gross, T. (ed.) Interact 2009 2009, Uppsala, Sweden. pp. 145-148.

We propose a new technique to help users to explore geometric shapes without vision. This technique is based on guidance using directional cues from a pin array. This is an alternative to the usual technique that consists of raising the pins corresponding to dark pixels around the cursor. In this paper we compare the exploration of geometric shapes with our new technique in unimanual and bimanual conditions. The users made fewer errors in the unimanual condition than in the bimanual condition. However, they did not explore the shapes more quickly and there was no difference in confidence in their answers.

© All rights reserved Pietrzak et al. and/or their publisher

 

Pietrzak, Thomas, Crossan, Andrew, Brewster, Stephen A., Martin, Benoît and Pecci, Isabelle (2009): Exploration de formes géométriques par le toucher. In: 21th French-speaking conference on Human-computer interaction IHM 2009 October 13-16, 2009, Grenoble, France. .

We propose a new technique to help people to explore geometric shapes without vision. This technique is based on guidance using directional cues from a pin array. This is an alternative to the usual technique that consists of raising the pins corresponding to dark pixels around the cursor. In this paper we compare the exploration of geometric shapes with our new technique in unimanual and bimanual conditions. According to our results, the users made few errors in both conditions. Moreover, the results show an equivalence for both techniques in answer time and users' confidence in their answers.

© All rights reserved Pietrzak et al. and/or their publisher

 

Pietrzak, Thomas, Crossan, Andrew, Brewster, Stephen A., Pecci, Isabelle and Martin, Benoît (2009): Creating Usable Pin Array Tactons for Non-Visual Information. In Transactions on Haptics, 2 (2) pp. 61-72.

Spatial information can be difficult to present to a visually impaired computer user. In this paper, we examine a new kind of tactile cuing for nonvisual interaction as a potential solution, building on earlier work on vibrotactile Tactons. However, unlike vibrotactile Tactons, we use a pin array to stimulate the finger tip. Here, we describe how to design static and dynamic Tactons by defining their basic components. We then present user tests examining how easy it is to distinguish between different forms of pin array Tactons demonstrating accurate Tacton sets to represent directions. These experiments demonstrate usable patterns for static, wave, and blinking pin array Tacton sets for guiding a user in one of eight directions. A study is then described that shows the benefits of structuring Tactons to convey information through multiple parameters of the signal. By using multiple independent parameters for a Tacton, this study demonstrates that participants perceive more information through a single Tacton. Two applications using these Tactons are then presented: a maze exploration application and an electric circuit exploration application designed for use by and tested with visually impaired users.

© All rights reserved Pietrzak et al. and/or IEEE

 

Crossan, Andrew, McGill, Mark, Brewster, Stephen A. and Murray-Smith, Roderick (2009): Head tilting for interaction in mobile contexts. In: Proceedings of 11th Conference on Human-computer interaction with mobile devices and services 2009. p. 6.

Developing interfaces for mobile situations requires that devices are useable on the move. Here, we explore head tilting as an input technique to allow a user to interact with a mobile device 'hands free'. A Fitts' Law style evaluation is described where a user acquires targets, moving the cursor by head tilt. We explored position and velocity control cursor mechanisms in both static and mobile situations to see which provided the best level of performance. Results show that participants could successfully acquire targets using head tilting. Position control was shown to be significantly faster and more accurate in a static context, but exhibited significantly poorer accuracy and longer target acquisition times when the user was on the move. We further demonstrate how analysis of users' gait shows consistent targeting biases at different stages in the gait cycle.

© All rights reserved Crossan et al. and/or their publisher
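
As a reminder of the standard measures behind a "Fitts' Law style" evaluation, the sketch below computes the index of difficulty and throughput; the tilt angles in the example are hypothetical, not the study's targets.

import math

def index_of_difficulty(distance, width):
    # Shannon formulation of Fitts' index of difficulty, in bits.
    return math.log2(distance / width + 1)

def throughput(distance, width, movement_time):
    # Bits per second for a single target acquisition.
    return index_of_difficulty(distance, width) / movement_time

# Hypothetical example: a 30-degree tilt to a 5-degree-wide target in 1.2 s
print(round(throughput(30, 5, 1.2), 2))   # ~2.34 bits/s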

 

Brewster, Stephen A. and Hughes, Michael (2009): Pressure-based text entry for mobile devices. In: Proceedings of 11th Conference on Human-computer interaction with mobile devices and services 2009. p. 9.

This paper describes the design and evaluation of a touch screen-based pressure keyboard to investigate the possibilities of pressure as a new method of input for mobile devices. A soft press on the touchscreen generated a lowercase letter, a hard press an uppercase one. The aim was to improve input performance when entering mixed-case text, or shifted characters often used for emoticons, etc. An experiment compared two different forms of pressure input (Dwell and Quick Release) against a standard shift key keyboard, with users both sitting and walking. Results showed that Quick Release was the fastest for input of mixed case text with Dwell being the most accurate, even when users were mobile. The results demonstrate that pressure input can outperform a standard shift-key keyboard design for mobile text entry.

© All rights reserved Brewster and Hughes and/or their publisher
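
The abstract names the two pressure techniques without detailing them; the sketch below shows one plausible reading (the threshold and dwell values are assumptions, not the paper's): Quick Release classifies a press by the pressure recorded just before the finger lifts, while Dwell requires the pressure to be held above the "hard" level for a fixed time.

HARD_THRESHOLD = 0.6   # assumed normalised pressure for an uppercase press
DWELL_TIME = 0.3       # assumed hold time in seconds for the Dwell technique

def quick_release(trace):
    # trace: list of (time, pressure) samples ending when the finger lifts.
    # Classify by the last pressure sample before release.
    _, final_pressure = trace[-1]
    return "upper" if final_pressure >= HARD_THRESHOLD else "lower"

def dwell(trace):
    # Classify as "upper" once pressure has stayed above the threshold
    # for DWELL_TIME seconds; otherwise the press counts as "lower".
    start = None
    for t, p in trace:
        if p >= HARD_THRESHOLD:
            start = t if start is None else start
            if t - start >= DWELL_TIME:
                return "upper"
        else:
            start = None
    return "lower"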

 

Rico, Julie and Brewster, Stephen A. (2009): Gestures all around us: user differences in social acceptability perceptions of gesture based interfaces. In: Proceedings of 11th Conference on Human-computer interaction with mobile devices and services 2009. p. 64.

Gesture based interfaces provide a new way for us to interact with mobile devices, but also require us to make new decisions about how we feel about this new technology and which gestures we decide are usable and appropriate. These decisions are based on the social and public settings where these devices are used on a daily basis. Our ideas about which gestures are socially acceptable or not are an important factor in whether or not these gestures will be adopted. The ways in which users evaluate social acceptability are not only highly variable, but produce drastically different results amongst different users. These differences are not dependent on factors such as age, gender, occupation, geographic location, or previous technology usage. Future work into the social acceptability perceptions of users will focus on personality traits as a new way of understanding how social acceptability is determined.

© All rights reserved Rico and Brewster and/or their publisher

 

McGookin, David and Brewster, Stephen A. (2009): Eyes-free overviews for mobile map applications. In: Proceedings of 11th Conference on Human-computer interaction with mobile devices and services 2009. p. 87.

We outline two new auditory interaction techniques which build upon existing visual techniques to display off-screen points of interest (POI) in map based mobile computing applications. SonicPie uses a pie menu and compass metaphor, allowing a user to scroll around the environment, hearing off-screen POIs in a spatialised auditory environment. EdgeTouch integrates with the Wedge technique of Gustafson et al. [2], sonifying the POIs as the user comes into contact with them when moving his or her finger around a "sonification border".

© All rights reserved McGookin and Brewster and/or their publisher

 

McAdam, Christopher and Brewster, Stephen A. (2009): Distal tactile feedback for text entry on tabletop computers. In: Proceedings of the HCI09 Conference on People and Computers XXIII 2009. pp. 504-511.

In this paper we present an initial study into the feasibility of using a mobile phone as a personal tactile display when interacting with a tabletop computer. There has been an increase in recent years in large touchscreen computers that use soft keyboards for text input. Text entry performance on such keyboards can be poor due to the lack of tactile feedback from the keys. Our approach is to use the vibration motor in a user's mobile phone to provide personal haptic feedback for interactions with the touchscreen computer. We ran an experiment to compare text entry on a touchscreen device with the tactile feedback being presented at different distal locations on the body (locations at which a user might keep a mobile device). The conditions were: no tactile feedback, feedback directly on the device, and feedback at the wrist, upper arm, chest, belt and trouser pocket. The results showed that distal tactile feedback significantly increased text entry rates when presented to the wrist and upper arm. This was not at the expense of a reduction in text entry accuracy. This shows that the concept of presenting tactile feedback on a user's phone is an effective one and can improve interaction and text entry on tabletop computers.

© All rights reserved McAdam and Brewster and/or their publisher

 

Crossan, Andrew, Murray-Smith, Roderick, Brewster, Stephen A. and Musizza, Bojan (2009): Instrumented Usability Analysis for Mobile Devices. In International Journal of Mobile Human Computer Interaction, 1 (1) pp. 1-19.

Instrumented usability analysis involves the use of sensors during a usability study which provide observations from which the evaluator can infer details of the context of use, specific activities or disturbances. This is particularly useful for the evaluation of mobile and wearable devices, which are currently difficult to test realistically without constraining users in unnatural ways. To illustrate the benefits of such an approach, we present a study of touch-screen selection of on-screen targets, whilst walking and sitting, using a PocketPC instrumented with an accelerometer. From the accelerometer data the user's gait behaviour is inferred, allowing us to link performance to gait phase angle, showing there were phase regions with significantly lower error and variability. The article provides examples of how information acquired via sensors gives us quantitatively measurable information about the detailed interactions taking place when mobile, allowing designers to test and revise design decisions, based on realistic user activity.

© All rights reserved Crossan et al. and/or their publisher

 

Hoggan, Eve, Raisamo, Roope and Brewster, Stephen A. (2009): Mapping information to audio and tactile icons. In: Proceedings of the 2009 International Conference on Multimodal Interfaces 2009. pp. 327-334.

We report the results of a study focusing on the meanings that can be conveyed by audio and tactile icons. Our research considers the following question: how can audio and tactile icons be designed to optimise congruence between crossmodal feedback and the type of information this feedback is intended to convey? For example, if we have a set of system warnings, confirmations, progress updates and errors: what audio and tactile representations best match the information or type of message? Is one modality more appropriate at presenting certain types of information than the other modality? The results of this study indicate that certain parameters of the audio and tactile modalities such as rhythm, texture and tempo play an important role in the creation of congruent sets of feedback when given a specific type of information to transmit. We argue that a combination of audio or tactile parameters derived from our results allows the same type of information to be derived through touch and sound with an intuitive match to the content of the message.

© All rights reserved Hoggan et al. and/or their publisher

2008
 

Hoggan, Eve, Brewster, Stephen A. and Johnston, Jody (2008): Investigating the effectiveness of tactile feedback for mobile touchscreens. In: Proceeding of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems April 5-10, 2008, Florence, Italy. pp. 1573-1582.

This paper presents a study of finger-based text entry for mobile devices with touchscreens. Many devices are now coming to market that have no physical keyboards (the Apple iPhone being a very popular example). Touchscreen keyboards lack any tactile feedback and this may cause problems for entering text and phone numbers. We ran an experiment to compare devices with a physical keyboard, a standard touchscreen and a touchscreen with tactile feedback added. We tested this in both static and mobile environments. The results showed that the addition of tactile feedback to the touchscreen significantly improved finger-based text entry, bringing it close to the performance of a real physical keyboard. A second experiment showed that higher specification tactile actuators could improve performance even further. The results suggest that manufacturers should use tactile feedback in their touchscreen devices to regain some of the feeling lost when interacting on a touchscreen with a finger.

© All rights reserved Hoggan et al. and/or ACM Press

 

Plimmer, Beryl, Crossan, Andrew, Brewster, Stephen A. and Blagojevic, Rachel (2008): Multimodal collaborative handwriting training for visually-impaired people. In: Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems April 5-10, 2008. pp. 393-402.

"McSig" is a multimodal teaching and learning environment for visually-impaired students to learn character shapes, handwriting and signatures collaboratively with their teachers. It combines haptic and audio output to realize the teacher's pen input in parallel non-visual modalities. McSig is intended for teaching visually-impaired children how to handwrite characters (and from that signatures), something that is very difficult without visual feedback. We conducted an evaluation with eight visually-impaired children with a pretest to assess their current skills with a set of character shapes, a training phase using McSig and then a post-test of the same character shapes to see if there were any improvements. The children could all use McSig and we saw significant improvements in the character shapes drawn, particularly by the completely blind children (many of whom could draw almost none of the characters before the test). In particular, the blind participants all expressed enjoyment and excitement about the system and using a computer to learn to handwrite.

© All rights reserved Plimmer et al. and/or ACM Press

 

Brewster, Stephen A. and Johnston, Jody (2008): Multimodal interfaces for camera phones. In: Hofte, G. Henri ter, Mulder, Ingrid and Ruyter, Boris E. R. de (eds.) Proceedings of the 10th Conference on Human-Computer Interaction with Mobile Devices and Services - Mobile HCI 2008 September 2-5, 2008, Amsterdam, the Netherlands. pp. 387-390.

 

Hall, Malcolm, Hoggan, Eve E. and Brewster, Stephen A. (2008): T-Bars: towards tactile user interfaces for mobile touchscreens. In: Hofte, G. Henri ter, Mulder, Ingrid and Ruyter, Boris E. R. de (eds.) Proceedings of the 10th Conference on Human-Computer Interaction with Mobile Devices and Services - Mobile HCI 2008 September 2-5, 2008, Amsterdam, the Netherlands. pp. 411-414.

 

Crossan, Andrew, Williamson, John, Brewster, Stephen A. and Murray-Smith, Roderick (2008): Wrist rotation for interaction in mobile contexts. In: Hofte, G. Henri ter, Mulder, Ingrid and Ruyter, Boris E. R. de (eds.) Proceedings of the 10th Conference on Human-Computer Interaction with Mobile Devices and Services - Mobile HCI 2008 September 2-5, 2008, Amsterdam, the Netherlands. pp. 435-438.

 

Hoggan, Eve E., Kaaresoja, Topi, Laitinen, Pauli and Brewster, Stephen A. (2008): Crossmodal congruence: the look, feel and sound of touchscreen widgets. In: Digalakis, Vassilios, Potamianos, Alexandros, Turk, Matthew, Pieraccini, Roberto and Ivanov, Yuri (eds.) Proceedings of the 10th International Conference on Multimodal Interfaces - ICMI 2008 October 20-22, 2008, Chania, Crete, Greece. pp. 157-164.

 

McGookin, David, Brewster, Stephen A. and Jiang, WeiWei (2008): Investigating touchscreen accessibility for people with visual impairments. In: Proceedings of the Fifth Nordic Conference on Human-Computer Interaction 2008. pp. 298-307.

Touchscreen computing devices such as the iPhone are becoming more common. However this technology is largely inaccessible to people with visual impairments. We present the results of a requirements capture study that illustrates the problems with touchscreen accessibility, and the choices visually impaired people make when choosing assistive technology. We investigate ways of overcoming touchscreen accessibility problems by comparing a raised paper overlay touchscreen based MP3 player, with a touchscreen gesture based player. Twelve blindfolded participants, and one visually impaired person, were able to operate both players, though there were problems with short impact related operations in the gesture player. From our results we provide guidelines for future designers, to help them exploit the potential of touchscreen technology for visually impaired people.

© All rights reserved McGookin et al. and/or their publisher

 

Crossan, Andrew and Brewster, Stephen A. (2008): Multimodal Trajectory Playback for Teaching Shape Information and Trajectories to Visually Impaired Computer Users. In ACM Transactions on Accessible Computing, 1 (2) p. 12.

There are difficulties in presenting nontextual or dynamic information to blind or visually impaired users through computers. This article examines the potential of haptic and auditory trajectory playback as a method of teaching shapes and gestures to visually impaired people. Two studies are described which test the success of teaching simple shapes. The first study examines haptic trajectory playback alone, played through a force-feedback device, and compares performance of visually impaired users with sighted users. It demonstrates that the task is significantly harder for visually impaired users. The second study builds on these results, combining force-feedback with audio to teach visually impaired users to recreate shapes. The results suggest that users performed significantly better when presented with multimodal haptic and audio playback of the shape, rather than haptic only. Finally, an initial test of these ideas in an application context is described, with sighted participants describing drawings to visually impaired participants through touch and sound. This study demonstrates in what situations trajectory playback can prove a useful role in a collaborative setting.

© All rights reserved Crossan and Brewster and/or ACM Press

 

Oulasvirta, Antti and Brewster, Stephen A. (2008): Mobile human-computer interaction. In International Journal of Human-Computer Studies, 66 (12) pp. 833-837.

 

Pirhonen, Antti and Brewster, Stephen A. (eds.) HAID 2008 - Haptic and Audio Interaction Design - Third International Workshop September 15-16, 2008, Jyväskylä, Finland.

2007
 

Brewster, Stephen A., Chohan, Faraz and Brown, Lorna (2007): Tactile feedback for mobile interactions. In: Proceedings of ACM CHI 2007 Conference on Human Factors in Computing Systems 2007. pp. 159-162.

We present a study investigating the use of vibrotactile feedback for touchscreen keyboards on PDAs. Such keyboards are hard to use when mobile as keys are very small. We conducted a laboratory study comparing standard buttons to ones with tactile feedback added. Results showed that with tactile feedback users entered significantly more text, made fewer errors and corrected more of the errors they did make. We ran the study again with users seated on an underground train to see if the positive effects transferred to realistic use. There were fewer beneficial effects, with only the number of errors corrected significantly improved by the tactile feedback. However, we found strong subjective feedback in favour of the tactile display. The results suggest that tactile feedback has a key role to play in improving interactions with touch screens.

© All rights reserved Brewster et al. and/or ACM Press

 

Hoggan, Eve E. and Brewster, Stephen A. (2007): Designing audio and tactile crossmodal icons for mobile devices. In: Massaro, Dominic W., Takeda, Kazuya, Roy, Deb and Potamianos, Alexandros (eds.) Proceedings of the 9th International Conference on Multimodal Interfaces - ICMI 2007 November 12-15, 2007, Nagoya, Aichi, Japan. pp. 162-169.

 

Oakley, Ian and Brewster, Stephen A. (eds.) HAID 2007 - Haptic and Audio Interaction Design - Second International Workshop November 29-30, 2007, Seoul, South Korea.

 

Hoggan, Eve E., Anwar, Sohail and Brewster, Stephen A. (2007): Mobile Multi-actuator Tactile Displays. In: Oakley, Ian and Brewster, Stephen A. (eds.) HAID 2007 - Haptic and Audio Interaction Design - Second International Workshop November 29-30, 2007, Seoul, South Korea. pp. 22-33.

 

McGookin, David K., Gibbs, Maya, Nivala, Annu-Maaria and Brewster, Stephen A. (2007): Initial Development of a PDA Mobility Aid for Visually Impaired People. In: Baranauskas, Maria Cecília Calani, Palanque, Philippe A., Abascal, Julio and Barbosa, Simone Diniz Junqueira (eds.) DEGAS 2007 - Proceedings of the 1st International Workshop on Design and Evaluation of e-Government Applications and Services September 11th, 2007, Rio de Janeiro, Brazil. pp. 665-668.

 

Kildal, Johan and Brewster, Stephen A. (2007): EMA-Tactons: Vibrotactile External Memory Aids in an Auditory Display. In: Baranauskas, Maria Cecília Calani, Palanque, Philippe A., Abascal, Julio and Barbosa, Simone Diniz Junqueira (eds.) DEGAS 2007 - Proceedings of the 1st International Workshop on Design and Evaluation of e-Government Applications and Services September 11th, 2007, Rio de Janeiro, Brazil. pp. 71-84.

2006
 

Marentakis, Georgios N. and Brewster, Stephen A. (2006): Effects of feedback, mobility and index of difficulty on deictic spatial audio target acquisition in the horizontal plane. In: Proceedings of ACM CHI 2006 Conference on Human Factors in Computing Systems 2006. pp. 359-368.

We present the results of an empirical study investigating the effect of feedback, mobility and index of difficulty on a deictic spatial audio target acquisition task in the horizontal plane in front of a user. With audio feedback, spatial audio display elements are found to enable usable deictic interaction that can be described using Fitts' law. Feedback does not affect perceived workload or preferred walking speed compared to interaction without feedback. Mobility is found to degrade interaction speed and accuracy by 20%. Participants were able to perform deictic spatial audio target acquisition when mobile while walking at 73% of their preferred walking speed. The proposed feedback design is examined in detail and the effects of variable target widths are quantified. Deictic interaction with a spatial audio display is found to be a feasible solution for future interface designs.

© All rights reserved Marentakis and Brewster and/or ACM Press

 

Brewster, Stephen A., McGookin, David and Miller, Christopher (2006): Olfoto: designing a smell-based interaction. In: Proceedings of ACM CHI 2006 Conference on Human Factors in Computing Systems 2006. pp. 653-662.

We present a study into the use of smell for searching digital photo collections. Many people now have large photo libraries on their computers and effective search tools are needed. Smell has a strong link to memory and emotion so may be a good way to cue recall when searching. Our study compared text and smell based tagging. For the first stage we generated a set of smell and tag names from user descriptions of photos, participants then used these to tag photos, returning two weeks later to answer questions on their photos. Results showed that participants could tag effectively with text labels, as this is a common and familiar task. Performance with smells was lower but participants performed significantly above chance, with some participants using smells well. This suggests that smell has potential. Results also showed that some smells were consistently identified and useful, but some were not and highlighted issues with smell delivery devices. We also discuss some practical issues of using smell for interaction.

© All rights reserved Brewster et al. and/or ACM Press

 

Wall, Steven and Brewster, Stephen A. (2006): Feeling what you hear: tactile feedback for navigation of audio graphs. In: Proceedings of ACM CHI 2006 Conference on Human Factors in Computing Systems 2006. pp. 1123-1132.

Access to digitally stored numerical data is currently very limited for sight impaired people. Graphs and visualizations are often used to analyze relationships between numerical data, but the current methods of accessing them are highly visually mediated. Representing data using audio feedback is a common method of making data more accessible, but methods of navigating and accessing the data are often serial in nature and laborious. Tactile or haptic displays could be used to provide additional feedback to support a point-and-click type interaction for the visually impaired. A requirements capture conducted with sight impaired computer users produced a review of current accessibility technologies, and guidelines were extracted for using tactile feedback to aid navigation. The results of a qualitative evaluation with a prototype interface are also presented. Providing an absolute position input device and tactile feedback allowed the users to explore the graph using tactile and proprioceptive cues in a manner analogous to point-and-click techniques.

© All rights reserved Wall and Brewster and/or ACM Press

 

Wall, Steven A. and Brewster, Stephen A. (2006): Tac-tiles: multimodal pie charts for visually impaired users. In: Proceedings of the Fourth Nordic Conference on Human-Computer Interaction 2006. pp. 9-18.

Tac-tiles is an accessible interface that allows visually impaired users to browse graphical information using tactile and audio feedback. The system uses a graphics tablet which is augmented with a tangible overlay tile to guide user exploration. Dynamic feedback is provided by a tactile pin-array at the fingertips, and through speech/non-speech audio cues. In designing the system, we seek to preserve the affordances and metaphors of traditional, low-tech teaching media for the blind, and combine this with the benefits of a digital representation. Traditional tangible media allow rapid, non-sequential access to data, promote easy and unambiguous access to resources such as axes and gridlines, allow the use of external memory, and preserve visual conventions, thus promoting collaboration with sighted colleagues. A prototype system was evaluated with visually impaired users, and recommendations for multimodal design were derived.

© All rights reserved Wall and Brewster and/or ACM Press

 

McGookin, David K. and Brewster, Stephen A. (2006): SoundBar: exploiting multiple views in multimodal graph browsing. In: Proceedings of the Fourth Nordic Conference on Human-Computer Interaction 2006. pp. 145-154.

In this paper we discuss why access to mathematical graphs is problematic for visually impaired people. Through a review of graph understanding theory and interviews with visually impaired users, we explain why current non-visual representations are unlikely to provide effective access to graphs. We propose the use of multiple views of the graph, each providing quick access to specific information, as a way to improve graph usability. We then introduce SoundBar, a multiple-view system that improves access to bar graphs by providing an additional quick audio overview of the graph. An evaluation of SoundBar revealed that the additional views significantly increased accuracy and reduced time taken in a question-answering task.

© All rights reserved McGookin and Brewster and/or ACM Press
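
For illustration of the kind of "quick audio overview" the SoundBar entry above describes, here is a minimal sketch of one plausible mapping from bar values to pitches. It is not the authors' implementation; the function name bar_graph_overview and the MIDI pitch range are assumptions made for the example.

```python
def bar_graph_overview(values, low_midi=48, high_midi=84):
    """Map each bar's value to a MIDI note number so that playing the notes
    left to right gives a quick pitch contour of the graph (an illustrative
    mapping only, not SoundBar's actual design)."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1  # avoid division by zero for a flat graph
    return [round(low_midi + (v - lo) / span * (high_midi - low_midi))
            for v in values]

# Example: bar_graph_overview([3, 7, 5, 9]) -> [48, 72, 60, 84]
```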

 

Hoggan, Eve and Brewster, Stephen A. (2006): Crossmodal spatial location: initial experiments. In: Proceedings of the Fourth Nordic Conference on Human-Computer Interaction 2006. pp. 469-472.

This paper describes an alternative form of interaction for mobile devices using crossmodal output. The aim of our work is to investigate the equivalence of audio and tactile displays so that the same messages can be presented in one form or another. Initial experiments show that spatial location can be perceived as equivalent in both the auditory and tactile modalities. Results show that participants are able to map presented 3D audio positions to tactile body positions on the waist most effectively when mobile, and that significantly more errors are made when using the ankle or wrist. This paper compares the results from both a static and a mobile experiment on crossmodal spatial location and outlines the most effective ways to use this crossmodal output in a mobile context.

© All rights reserved Hoggan and Brewster and/or ACM Press
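
To make the audio-to-tactile spatial mapping in the entry above concrete, here is a minimal sketch of how an audio azimuth might be routed to one of several actuators around the waist. This is an assumed mapping for illustration only; the function name, the actuator count and their even spacing are not taken from the study.

```python
def azimuth_to_waist_actuator(azimuth_deg, n_actuators=8):
    """Map an audio source azimuth (degrees clockwise from straight ahead) to
    the nearest of n actuators spaced evenly around the waist. Purely
    illustrative; the layout is an assumption, not the study's design."""
    step = 360 / n_actuators
    return int(round((azimuth_deg % 360) / step)) % n_actuators

# Example: a sound at 90 degrees (to the right) maps to actuator 2 of 8.
```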

 

Brown, Lorna M., Brewster, Stephen A. and Purchase, Helen C. (2006): Multidimensional tactons for non-visual information presentation in mobile devices. In: Proceedings of 8th conference on Human-computer interaction with mobile devices and services 2006. pp. 231-238.

Tactons are structured vibrotactile messages which can be used for non-visual information presentation when visual displays are limited, unavailable or inappropriate, such as in mobile phones and other mobile devices. Little is yet known about how to design them effectively. Previous studies have investigated the perception of Tactons which encode two dimensions of information using two different vibrotactile parameters (rhythm and roughness) and found recognition rates of around 70%. When more dimensions of information are required it may be necessary to extend the parameter space of these Tactons. This study therefore investigates recognition rates for Tactons which encode a third dimension of information using spatial location. The results show that the identification rate for three-parameter Tactons is just 48%, but that this can be increased to 81% by reducing the number of values of one of the parameters. These results will help designers select suitable Tactons when designing mobile displays.

© All rights reserved Brown et al. and/or ACM Press
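
The three Tacton parameters named in the entry above (rhythm, roughness, spatial location) can be pictured with a small synthesis sketch. The rendering below is a hypothetical illustration, not the authors' hardware or encoding; names such as render_tacton, the carrier frequency and the pulse length are invented for the example.

```python
import numpy as np

SAMPLE_RATE = 22050  # Hz

def render_tacton(rhythm, roughness_hz, actuator, pulse_s=0.12, carrier_hz=250.0):
    """Render a three-parameter Tacton as (actuator index, waveform).
    rhythm: tuple of 1s and 0s, one entry per beat (pulse or rest);
    roughness_hz: amplitude-modulation rate, where higher rates feel rougher;
    actuator: which body location plays the signal (the spatial parameter)."""
    t = np.arange(int(pulse_s * SAMPLE_RATE)) / SAMPLE_RATE
    carrier = np.sin(2 * np.pi * carrier_hz * t)
    roughness = 0.5 * (1 + np.sin(2 * np.pi * roughness_hz * t))  # AM envelope
    pulse, rest = carrier * roughness, np.zeros(len(t))
    waveform = np.concatenate([pulse if beat else rest for beat in rhythm])
    return actuator, waveform

# Example: a "long-short" rhythm, fairly rough, delivered to actuator 1 (e.g. the wrist).
loc, wave = render_tacton(rhythm=(1, 1, 0, 1), roughness_hz=30.0, actuator=1)
```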

 

McGookin, D. K. and Brewster, Stephen A. (2006): Graph Builder: Constructing Non-visual Visualizations. In: Proceedings of the HCI06 Conference on People and Computers XX 2006. pp. 263-278.

 

Brown, Lorna M., Brewster, Stephen A. and Purchase, Helen C. (2006): Multidimensional tactons for non-visual information presentation in mobile devices. In: Nieminen, Marko and Röykkee, Mika (eds.) Proceedings of the 8th Conference on Human-Computer Interaction with Mobile Devices and Services - Mobile HCI 2006 September 12-15, 2006, Helsinki, Finland. pp. 231-238.

 

McGookin, David K. and Brewster, Stephen A. (eds.) HAID 2006 - Haptic and Audio Interaction Design - First International Workshop August 31 - September 1, 2006, Glasgow, UK.

 

Hoggan, Eve E. and Brewster, Stephen A. (2006): Crossmodal Interaction with Mobile Devices. In: VL-HCC 2006 - IEEE Symposium on Visual Languages and Human-Centric Computing 4-8 September, 2006, Brighton, UK. pp. 234-235.

 

Wall, Steven A. and Brewster, Stephen A. (2006): Editorial: design of haptic user-interfaces and applications. In Virtual Reality, 9 (2) pp. 95-96.

2005
 

Crossan, Andrew, Murray-Smith, Roderick, Brewster, Stephen A., Kelly, James and Musizza, Bojan (2005): Gait phase effects in mobile interaction. In: Proceedings of ACM CHI 2005 Conference on Human Factors in Computing Systems 2005. pp. 1312-1315.

One problem evaluating mobile and wearable devices is that they are used in mobile settings, making it hard to collect usability data. We present a study of tap-based selection of on-screen targets whilst walking and sitting, using a PocketPC instrumented with an accelerometer to collect information about user activity at the time of each tap. From these data the user's gait can be derived, and this is then used to investigate preferred tapping behaviour relative to gait phase, and associated tap accuracy. Results showed that users were more accurate sitting than walking. When walking there were phase regions with significantly increased tap likelihood, and these regions had significantly lower error rates, and lower error variability. This work represents an example of accelerometer-instrumented mobile usability analysis, and the results give a quantitative understanding of the detailed interactions taking place when on the move, allowing us to develop better mobile interfaces.

© All rights reserved Crossan et al. and/or ACM Press
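
As a rough sketch of how a gait phase could be derived from accelerometer data of the kind described in the entry above, the code below detects acceleration peaks as approximate heel strikes and places each tap within its stride. This is an assumed, simplified analysis for illustration, not the pipeline used in the study; the function name, sampling rate and thresholds are invented.

```python
import numpy as np

def gait_phase_at(tap_times, accel_z, fs=100.0, min_stride_s=0.4):
    """Return an estimated gait phase in [0, 1) for each tap time (seconds).
    Peaks in the vertical acceleration stand in for heel strikes; a tap's
    phase is its position within the stride bounded by two strikes."""
    z = np.asarray(accel_z, dtype=float) - np.mean(accel_z)
    threshold, min_gap = np.std(z), int(min_stride_s * fs)
    strikes = []
    for i in range(1, len(z) - 1):
        if z[i] > z[i - 1] and z[i] >= z[i + 1] and z[i] > threshold:
            if not strikes or i - strikes[-1] >= min_gap:
                strikes.append(i)
    strike_t = np.array(strikes) / fs
    phases = []
    for t in tap_times:
        k = np.searchsorted(strike_t, t) - 1
        if 0 <= k < len(strike_t) - 1:
            phases.append((t - strike_t[k]) / (strike_t[k + 1] - strike_t[k]))
        else:
            phases.append(np.nan)  # tap fell outside any complete stride
    return phases

# Example with synthetic data: a 1 Hz "stride" in the vertical acceleration.
fs = 100.0
t = np.arange(0, 5, 1 / fs)
print(gait_phase_at([1.0, 2.6], np.sin(2 * np.pi * t), fs=fs))  # roughly [0.75, 0.35]
```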

 

Marentakis, Georgios N. and Brewster, Stephen A. (2005): Effects of reproduction equipment on interaction with a spatial audio interface. In: Proceedings of ACM CHI 2005 Conference on Human Factors in Computing Systems 2005. pp. 1625-1628.

Spatial audio displays have been criticized because the use of headphones may isolate users from their real-world audio environment. In this paper we study the effects of three types of audio reproduction equipment (standard headphones, bone-conductance headphones and monaural presentation using a single earphone) on time and accuracy during interaction with a deictic spatial audio display. Participants selected a target sound emanating from one of four different locations in the presence of distracters whilst wearing the different types of headphones. Target locations were marked with audio feedback. No significant differences were found for time and accuracy ratings between bone-conductance and standard headphones. Monaural reproduction significantly slowed interaction. The results show that alternative reproduction equipment can be used to overcome user isolation from the natural audio environment.

© All rights reserved Marentakis and Brewster and/or ACM Press

 

Wall, Steven and Brewster, Stephen A. (2005): Hands-on haptics: exploring non-visual visualization using the sense of touch. In: Proceedings of ACM CHI 2005 Conference on Human Factors in Computing Systems 2005. pp. 2140-2141.

 

Brewster, Stephen A. and King, A. (2005): An Investigation into the Use of Tactons to Present Progress Information. In: Proceedings of IFIP INTERACT05: Human-Computer Interaction 2005. pp. 6-17.

This paper presents an initial investigation into the use of Tactons, or tactile icons, to present progress information in desktop human-computer interfaces. Progress bars are very common in a wide range of interfaces but have problems. For example, they must compete for screen space and visual attention with other visual tasks such as document editing or web browsing. To address these problems we created a tactile progress indicator, encoding progress information into a series of vibrotactile cues. An experiment comparing the tactile progress indicator to a standard visual one showed a significant improvement in performance and an overall preference for the tactile display. These results suggest that a tactile display is a good way to present such information and this has many potential applications from computer desktops to mobile telephones.

© All rights reserved Brewster and King and/or Springer Verlag
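
One way to picture a vibrotactile progress encoding such as the one evaluated in the entry above is to let pulses arrive faster as the task nears completion. This is an assumed encoding for illustration only, not necessarily the cue design used in the paper; the function names are invented.

```python
import time

def pulse_interval(progress, slow_s=1.0, fast_s=0.2):
    """Map progress in [0, 1] to the gap between vibrotactile pulses:
    pulses speed up as the task approaches completion."""
    progress = min(max(progress, 0.0), 1.0)
    return slow_s - progress * (slow_s - fast_s)

def tactile_progress(steps, vibrate=lambda: print("bzz")):
    """Drive a stand-in 'vibrate' callback once per step of a background task."""
    for i in range(steps + 1):
        vibrate()
        time.sleep(pulse_interval(i / steps))
```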

 

Darroch, I., Goodman, J., Brewster, Stephen A. and Gray, P. (2005): The Effect of Age and Font Size on Reading Text on Handheld Computers. In: Proceedings of IFIP INTERACT05: Human-Computer Interaction 2005. pp. 253-266.

Though there have been many studies of computer-based text reading, only a few have considered the small screens of handheld computers. This paper presents an investigation into the effect of varying font size between 2 and 16 point on reading text on a handheld computer. By using both older and younger participants, the possible effects of age were examined. Reading speed and accuracy were measured and the subjective views of participants recorded. Objective results showed that there was little difference in reading performance above 6 point, but subjective comments from participants showed a preference for sizes in the middle range. We therefore suggest, for reading tasks, that designers of interfaces for mobile computers provide fonts in the range of 8-12 point to maximize readability for the widest range of users.

© All rights reserved Darroch et al. and/or Springer Verlag

 

Marentakis, Georgios and Brewster, Stephen A. (2005): A comparison of feedback cues for enhancing pointing efficiency in interaction with spatial audio displays. In: Proceedings of 7th conference on Human-computer interaction with mobile devices and services 2005. pp. 55-62.

An empirical study that compared six different feedback cue types to enhance pointing efficiency in deictic spatial audio displays is presented. Participants were asked to select a sound using a physical pointing gesture, with the help of a loudness cue, a timbre cue and an orientation update cue as well as with combinations of these cues. Display content was varied systematically to investigate the effect of increasing display population. Speed, accuracy and throughput ratings are provided as well as effective target widths that allow for minimal error rates. The results showed direct pointing to be the most efficient interaction technique; however large effective target widths reduce the applicability of this technique. Movement-coupled cues were found to significantly reduce display element size, but resulted in slower interaction and were affected by display content due to the requirement of continuous target attainment. The results show that, with appropriate design, it is possible to overcome interaction uncertainty and provide solutions that are effective in mobile human computer interaction.

© All rights reserved Marentakis and Brewster and/or ACM Press

 

Marentakis, Georgios N. and Brewster, Stephen A. (2005): A comparison of feedback cues for enhancing pointing efficiency in interaction with spatial audio displays. In: Tscheligi, Manfred, Bernhaupt, Regina and Mihalic, Kristijan (eds.) Proceedings of the 7th Conference on Human-Computer Interaction with Mobile Devices and Services - Mobile HCI 2005 September 19-22, 2005, Salzburg, Austria. pp. 55-62.

 

Baillie, Sarah, Brewster, Stephen A., Hall, Cordelia V. and O'Donnell, John T. (2005): Motion Space Reduction in a Haptic Model of Violin and Viola Bowing. In: WHC 2005 - World Haptics Conference 18-20 March, 2005, Pisa, Italy. pp. 525-526.

 

Brewster, Stephen A. and King, Alison (2005): The Design and Evaluation of a Vibrotactile Progress Bar. In: WHC 2005 - World Haptics Conference 18-20 March, 2005, Pisa, Italy. pp. 499-500.

 

Brown, Lorna M., Brewster, Stephen A. and Purchase, Helen C. (2005): A First Investigation into the Effectiveness of Tactons. In: WHC 2005 - World Haptics Conference 18-20 March, 2005, Pisa, Italy. pp. 167-176.

2004
 

Brewster, Stephen A. and Dunlop, Mark (2004): Mobile Human Computer Interaction. Springer

 

Zajicek, Mary and Brewster, Stephen A. (2004): Design principles to support older adults. In Universal Access in the Information Society, 3 (2) pp. 111-113.

 

Brewster, Stephen A. and Dunlop, Mark (eds.) Mobile HCI 2004 September 13-16, 2004, Glasgow, Scotland, UK.

 

Brewster, Stephen A. and Dunlop, Mark D. (eds.) Mobile Human-Computer Interaction - Mobile HCI 2004 - 6th International Symposium September 13-16, 2004, Glasgow, UK.

 

Goodman, Joy, Gray, Philip D., Khammampad, Kartik and Brewster, Stephen A. (2004): Using Landmarks to Support Older People in Navigation. In: Brewster, Stephen A. and Dunlop, Mark D. (eds.) Mobile Human-Computer Interaction - Mobile HCI 2004 - 6th International Symposium September 13-16, 2004, Glasgow, UK. pp. 38-48.

 

Marentakis, Georgios N. and Brewster, Stephen A. (2004): A Study on Gestural Interaction with a 3D Audio Display. In: Brewster, Stephen A. and Dunlop, Mark D. (eds.) Mobile Human-Computer Interaction - Mobile HCI 2004 - 6th International Symposium September 13-16, 2004, Glasgow, UK. pp. 180-191.

 

Brewster, Stephen A. and Brown, Lorna M. (2004): Tactons: Structured Tactile Messages for Non-Visual Information Display. In: Cockburn, Andy (ed.) AUIC2004 - User Interfaces 2004 - Fifth Australasian User Interface Conference 18-22 January, 2004, Dunedin, New Zealand. pp. 15-23.

2003
 

Brewster, Stephen A., Lumsden, Joanna, Bell, Marek, Hall, Malcolm and Tasker, Stuart (2003): Multimodal 'eyes-free' interaction techniques for wearable devices. In: Cockton, Gilbert and Korhonen, Panu (eds.) Proceedings of the ACM CHI 2003 Human Factors in Computing Systems Conference April 5-10, 2003, Ft. Lauderdale, Florida, USA. pp. 473-480.

 Cited in the following chapter:

Design 4 All: [/encyclopedia/design_4_all.html]


 
 

Yu, Wai and Brewster, Stephen A. (2003): Evaluation of multimodal graphs for blind people. In Universal Access in the Information Society, 2 (2) pp. 105-124.

This paper introduces the development of a multimodal data visualisation system and its evaluations. The system is designed to improve blind and visually impaired people's access to graphs and tables. Force feedback, synthesized speech and non-speech audio are utilised to present graphical data to blind people. Through the combination of haptic and audio representations, users can explore virtual graphs rendered by a computer. Various types of graphs and tables have been implemented, and a three-stage evaluation has been conducted. The experimental results have proven the usability of the system and the benefits of the multimodal approach. The paper presents the details of the development and the experimental findings, as well as the changing role of haptics across the evaluation.

© All rights reserved Yu and Brewster and/or Springer Verlag

 

Yu, Wai, Kangas, Katri and Brewster, Stephen A. (2003): Web-Based Haptic Applications for Blind People to Create Virtual Graphs. In: HAPTICS 2003 - 11th International Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems 22-23 March, 2003, Los Angeles, CA, USA. pp. 318-325.

2002
 

Pirhonen, Antti, Brewster, Stephen A. and Holguin, Christopher (2002): Gestural and audio metaphors as a means of control for mobile devices. In: Terveen, Loren (ed.) Proceedings of the ACM CHI 2002 Conference on Human Factors in Computing Systems Conference April 20-25, 2002, Minneapolis, Minnesota. pp. 291-298.

 

Yu, Wai and Brewster, Stephen A. (2002): Multimodal virtual reality versus printed medium in visualization for blind people. In: Fifth Annual ACM Conference on Assistive Technologies 2002. pp. 57-64.

In this paper, we describe a study comparing the strengths of a multimodal Virtual Reality (VR) interface against traditional tactile diagrams in conveying information to visually impaired and blind people. The multimodal VR interface consists of a force feedback device (SensAble PHANTOM), synthesized speech and non-speech audio. The potential advantages of VR technology are well known; however, its real usability in comparison with the conventional paper-based medium is seldom investigated. We have addressed this issue in our evaluation. The experimental results show benefits from the multimodal approach, with users obtaining more accurate information about the graphs.

© All rights reserved Yu and Brewster and/or ACM Press

 

Oakley, I., Adams, A., Brewster, Stephen A. and Gray, P. (2002): Guidelines for the Design of Haptic Widgets. In: Faulkner, Xristine, Finlay, Janet and Détienne, Françoise (eds.) Proceedings of the HCI02 Conference on People and Computers XVI September 18-20, 2002, Pisa, Italy. pp. 195-212.

 

Crossan, A., Brewster, Stephen A., Reid, S. and Mellor, D. (2002): Multi-session VR Medical Training: The HOPS Simulator. In: Faulkner, Xristine, Finlay, Janet and Détienne, Françoise (eds.) Proceedings of the HCI02 Conference on People and Computers XVI September 18-20, 2002, Pisa, Italy. pp. 213-226.

 

Dunlop, Mark D. and Brewster, Stephen A. (2002): editorial: The Challenge of Mobile Devices for Human Computer Interaction. In Personal and Ubiquitous Computing, 6 (4) pp. 235-236.

 

Brewster, Stephen A. (2002): Overcoming the Lack of Screen Space on Mobile Computers. In Personal and Ubiquitous Computing, 6 (3) pp. 188-205.

 Cited in the following chapter:

Mobile Computing: [/encyclopedia/mobile_computing.html]


 
 

Yu, Wai and Brewster, Stephen A. (2002): Comparing Two Haptic Interfaces for Multimodal Graph Rendering. In: HAPTICS 2002 - Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems 2002 2002. pp. 3-9.

 

Ramloll, Rameshsharma and Brewster, Stephen A. (2002): An Environment for Studying the Impact of Spatialising Sonified Graphs on Data Comprehension. In: IV 2002 2002. pp. 167-174.

2001
 

Brewster, Stephen A. and Murray-Smith, Roderick (eds.) (2001): Haptic human-computer interaction : first international workshop, Glasgow, UK, August 31-September 1, 2000. Springer-Verlag

 

Ramloll, R., Brewster, Stephen A., Yu, W. and Riedel, B. (2001): Using Non-speech Sounds to Improve Access to 2D Tabular Numerical Information for Visually Impaired Users. In: Proceedings of the HCI01 Conference on People and Computers XV 2001. pp. 515-530.

 

Walker, A., Brewster, Stephen A., McGookin, D. and Ng, A. (2001): Diary in the Sky: A Spatial Audio Display for a Mobile Calendar. In: Proceedings of the HCI01 Conference on People and Computers XV 2001. pp. 531-540.

2000
 

Oakley, Ian, McGee, Marilyn Rose, Brewster, Stephen A. and Gray, Philip D. (2000): Putting the Feel in 'Look and Feel'. In: Turner, Thea, Szwillus, Gerd, Czerwinski, Mary, Peterno, Fabio and Pemberton, Steven (eds.) Proceedings of the ACM CHI 2000 Human Factors in Computing Systems Conference April 1-6, 2000, The Hague, The Netherlands. pp. 415-422.

Haptic devices are now commercially available and thus touch has become a potentially realistic solution to a variety of interaction design challenges. We report on an investigation of the use of touch as a way of reducing visual overload in the conventional desktop. In a two-phase study, we investigated the use of the PHANTOM haptic device as a means of interacting with a conventional graphical user interface. The first experiment compared the effects of four different haptic augmentations on usability in a simple targeting task. The second experiment involved a more ecologically-oriented searching and scrolling task. Results indicated that the haptic effects did not improve users' performance in terms of task completion time. However, the number of errors made was significantly reduced. Subjective workload measures showed that participants perceived many aspects of workload as significantly lower with haptics. The results are described and the implications for the use of haptics in user interface design are discussed.

© All rights reserved Oakley et al. and/or ACM Press

 Cited in the following chapter:

Tactile Interaction: [/encyclopedia/tactile_interaction.html]


 
 

Ramloll, Rameshsharma, Yu, Wai, Brewster, Stephen A., Riedel, Beate, Burton, Mike and Dimigen, Gisela (2000): Constructing Sonified Haptic Line Graphs for the Blind Student: First Steps. In: Fourth Annual ACM Conference on Assistive Technologies 2000. pp. 17-25.

Line graphs stand as an established information visualisation and analysis technique taught at various levels of difficulty according to standard Mathematics curricula. It has been argued that blind individuals cannot use line graphs as a visualisation and analytic tool because they currently primarily exist in the visual medium. The research described in this paper aims at making line graphs accessible to blind students through auditory and haptic media. We describe (1) our design space for representing line graphs, (2) the technology we use to develop our prototypes and (3) the insights from our preliminary work.

© All rights reserved Ramloll et al. and/or ACM Press

 

Crease, M., Brewster, Stephen A. and Gray, P. (2000): Caring, Sharing Widgets: A Toolkit of Sensitive Widgets. In: Proceedings of the HCI00 Conference on People and Computers XIV 2000. pp. 257-270.

 

Brewster, Stephen A. and Dunlop, Mark D. (2000): editorial: Human Computer Interaction with Mobile Devices. In Personal and Ubiquitous Computing, 4 (2) .

 

Brewster, Stephen A. and Murray, Robin (2000): Presenting Dynamic Information on Mobile Computers. In Personal and Ubiquitous Computing, 4 (4) pp. 209-212.

 

Walker, Ashley and Brewster, Stephen A. (2000): Spatial Audio in Small Screen Device Displays. In Personal and Ubiquitous Computing, 4 (2) .

 

Crease, Murray, Gray, Philip D. and Brewster, Stephen A. (2000): A Toolkit of Mechanism and Context Independent Widgets. In: DSV-IS 2000 2000. pp. 121-133.

1999
 

Brewster, Stephen A. and Crease, Murray G. (1999): Correcting Menu Usability Problems with Sound. In Behaviour and Information Technology, 18 (3) pp. 165-177.

Future human-computer interfaces will use more than just graphical output to display information. In this paper we suggest that sound and graphics together can be used to improve interaction. We describe an experiment to improve the usability of standard graphical menus by the addition of sound. One common difficulty is slipping off a menu item by mistake when trying to select it. One of the causes of this is insufficient feedback. We designed and experimentally evaluated a new set of menus with much more salient audio feedback to solve this problem. The results from the experiment showed a significant reduction in the subjective effort required to use the new sonically-enhanced menus along with significantly reduced error recovery times. A significantly larger number of errors were also corrected with sound.

© All rights reserved Brewster and Crease and/or Taylor and Francis

 

Brewster, Stephen A. (1999): Sound in the interface to a mobile computer. In: 1999. pp. 43-47.

1998
 

Brewster, Stephen A. (1998): The Design of Sonically-Enhanced Widgets. In Interacting with Computers, 11 (2) pp. 211-235.

This paper describes the design of user-interface widgets that include non-speech sound. Previous research has shown that the addition of sound can improve the usability of human-computer interfaces. However, there is little research to show where the best places are to add sound to improve usability. The approach described here is to integrate sound into widgets, the basic components of the human-computer interface. An overall structure for the integration of sound is presented. There are many problems with current graphical widgets and many of these are difficult to correct by using more graphics. This paper presents many of the standard graphical widgets and describes how sound can be added. It describes in detail usability problems with the widgets and then the non-speech sounds to overcome them. The non-speech sounds used are earcons. These sonically-enhanced widgets allow designers who are not sound experts to create interfaces that effectively improve usability and have coherent and consistent sounds.

© All rights reserved Brewster and/or Elsevier Science

 

Brewster, Stephen A. (1998): Using Earcons to Improve the Usability of a Graphics Package. In: Johnson, Hilary, Nigay, Laurence and Roast, C. R. (eds.) Proceedings of the Thirteenth Conference of the British Computer Society Human Computer Interaction Specialist Group - People and Computers XIII August 1-4, 1998, Sheffield, UK. pp. 287-302.

This paper describes how non-speech sounds can be used to improve the usability of a graphics package. Sound was specifically used to address problems with tool palettes and with finding the current mouse coordinates when drawing. Tool palettes have usability problems because users need to see the information they present, but they are often outside the area of visual focus. An experiment was conducted to investigate the effectiveness of adding sound to tool palettes. Earcons were used to indicate the current tool and when tool changes occurred. Results showed a significant reduction in the number of tasks performed with the wrong tool. Therefore users knew what the current tool was and did not try to perform tasks with the wrong tool. All of this was achieved without making the interface any more annoying to use.

© All rights reserved Brewster and/or Springer Verlag

 

Brewster, Stephen A. (1998): Using Nonspeech Sounds to Provide Navigation Cues. In ACM Transactions on Computer-Human Interaction, 5 (3) pp. 224-259.

This article describes three experiments that investigate the possibility of using structured nonspeech audio messages called earcons to provide navigational cues in a menu hierarchy. A hierarchy of 27 nodes and four levels was created with an earcon for each node. Rules were defined for the creation of hierarchical earcons at each node. Participants had to identify their location in the hierarchy by listening to an earcon. Results of the first experiment showed that participants could identify their location with 81.5% accuracy, indicating that earcons were a powerful method of communicating hierarchy information. One proposed use for such navigation cues is in telephone-based interfaces (TBIs) where navigation is a problem. The first experiment did not address the particular problems of earcons in TBIs such as "does the lower quality of sound over the telephone lower recall rates," "can users remember earcons over a period of time," and "what effect does training type have on recall?" An experiment was conducted and results showed that sound quality did lower the recall of earcons. However, redesign of the earcons overcame this problem with 73% recalled correctly. Participants could still recall earcons at this level after a week had passed. Training type also affected recall. With "personal training" participants recalled 73% of the earcons, but with purely textual training results were significantly lower. These results show that earcons can provide good navigation cues for TBIs. The final experiment used compound, rather than hierarchical, earcons to represent the hierarchy from the first experiment. Results showed that with sounds constructed in this way participants could recall 97% of the earcons. These experiments have developed our general understanding of earcons. A hierarchy three times larger than any previously created was tested, and this was also the first test of the recall of earcons over time.

© All rights reserved Brewster and/or ACM Press
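
The rules that derive each node's earcon from its parent, mentioned in the entry above, can be sketched as follows. The particular attributes varied at each level here (timbre, rhythm, register, tempo) are only an assumed example in the spirit of hierarchical earcons, not the rule set used in the experiments; the class and function names are invented.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Earcon:
    timbre: str = "sine"
    rhythm: str = "x-x-"
    register: int = 4        # octave
    tempo_bpm: int = 120

def child_earcon(parent: Earcon, level: int, index: int) -> Earcon:
    """Derive a child node's earcon from its parent by changing one attribute
    per level of the hierarchy (an illustrative rule set only)."""
    if level == 1:
        return replace(parent, timbre=("piano", "organ", "brass")[index % 3])
    if level == 2:
        return replace(parent, rhythm=("x-x-", "xx--", "x--x")[index % 3])
    if level == 3:
        return replace(parent, register=parent.register + index)
    return replace(parent, tempo_bpm=parent.tempo_bpm + 20 * index)

# A node two levels down inherits its parent's timbre and adds its own rhythm.
root = Earcon()
node = child_earcon(child_earcon(root, level=1, index=2), level=2, index=1)
```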

1997
 

Brewster, Stephen A. (1997): Navigating Telephone-Based Interfaces with Earcons. In: Thimbleby, Harold, O'Conaill, Brid and Thomas, Peter J. (eds.) Proceedings of the Twelfth Conference of the British Computer Society Human Computer Interaction Specialist Group - People and Computers XII August, 1997, Bristol, England, UK. pp. 39-56.

Non-speech audio messages called earcons can provide powerful navigation cues in menu hierarchies. However, previous research on earcons has not addressed the particular problems of menus in telephone-based interfaces (TBIs), such as: does the lower quality of sound in TBIs lower recall rates, can users remember earcons over a period of time, and what effect does training type have on recall? An experiment was conducted and results showed that sound quality did lower the recall of earcons. However, redesign of the earcons overcame this problem, with 73% recalled correctly. Participants could still recall earcons at this level after a week had passed. Training type also affected recall. With 'personal training' participants recalled 73% of the earcons, but with purely textual training results were significantly lower. These results show that earcons can provide excellent navigation cues for telephone-based interfaces.

© All rights reserved Brewster and/or Springer Verlag

1996
 

Brewster, Stephen A., Raty, Veli-Pekka and Kortekangas, Atte (1996): Earcons as a Method of Providing Navigational Cues in a Menu Hierarchy. In: Sasse, Martina Angela, Cunningham, R. J. and Winder, R. L. (eds.) Proceedings of the Eleventh Conference of the British Computer Society Human Computer Interaction Specialist Group - People and Computers XI August, 1996, London, UK. pp. 169-183.

We describe an experiment to discover if structured audio messages, earcons, could provide navigational cues in a menu hierarchy. A hierarchy of 27 nodes and four levels was created with sounds for each node. Participants had to identify their location in the hierarchy by listening to an earcon. Results showed that participants could identify their location with over 80% accuracy, indicating that earcons are a powerful method of communicating hierarchy information. Participants were also tested to see if they could identify where previously unheard earcons would fit in the hierarchy. The results showed that they could do this with over 90% accuracy. These results show that earcons are a robust and extensible method of communicating hierarchy information in sound.

© All rights reserved Brewster et al. and/or Springer Verlag

 

Brewster, Stephen A., Raty, Veli-Pekka and Kortekangas, Atte (1996): Enhancing Scanning Input with Non-Speech Sounds. In: Second Annual ACM Conference on Assistive Technologies 1996. pp. 10-14.

This paper proposes the addition of non-speech sounds to aid people who use scanning as their method of input. Scanning input is a temporal task; users have to press a switch when a cursor is over the required target. However, it is usually presented as a spatial task with the items to be scanned laid-out in a grid. Research has shown that for temporal tasks the auditory modality is often better than the visual. This paper investigates this by adding non-speech sound to a visual scanning system. It also shows how our natural abilities to perceive rhythms can be supported so that they can be used to aid the scanning process. Structured audio messages called Earcons were used for the sound output. The results from a preliminary investigation were favourable, indicating that the idea is feasible and further research should be undertaken.

© All rights reserved Brewster et al. and/or ACM Press

1995
 

Brewster, Stephen A., Wright, Peter C. and Edwards, Alistair (1995): Parallel Earcons: Reducing the Length of Audio Messages. In International Journal of Human-Computer Studies, 43 (2) pp. 153-175.

This paper describes a method of presenting structured audio messages, earcons, in parallel so that they take less time to play and can better keep pace with interactions in a human-computer interface. The two component parts of a compound earcon are played in parallel so that the time taken is only that of a single part. An experiment was conducted to test the recall and recognition of parallel compound earcons as compared to serial compound earcons. Results showed that there are no differences in the rates of recognition between the two groups. Non-musicians are also shown to be equal in performance to musicians. Some extensions to the earcon creation guidelines of Brewster, Wright and Edwards are put forward based upon research into auditory stream segregation. Parallel earcons are shown to be an effective means of increasing the presentation rates of audio messages without compromising recognition rates.

© All rights reserved Brewster et al. and/or Academic Press
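
A minimal sketch of the parallel idea in the entry above, assuming both component earcons are sampled waveforms at the same rate (not the authors' code; the function name is invented), is simply to mix the two parts rather than concatenate them:

```python
import numpy as np

def parallel_compound(part_a, part_b):
    """Mix two earcon parts so they play simultaneously: the compound lasts
    only as long as the longer part, instead of the sum of both durations."""
    a, b = np.asarray(part_a, dtype=float), np.asarray(part_b, dtype=float)
    mix = np.zeros(max(len(a), len(b)))
    mix[:len(a)] += a
    mix[:len(b)] += b
    peak = np.max(np.abs(mix))
    return mix / peak if peak > 1.0 else mix  # keep the signal within [-1, 1]

# A serial compound would instead be np.concatenate([part_a, part_b]),
# taking the sum of the two durations rather than their maximum.
```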

1993
 

Brewster, Stephen A., Wright, Peter C. and Edwards, Alistair (1993): An Evaluation of Earcons for Use in Auditory Human-Computer Interfaces. In: Ashlund, Stacey, Mullet, Kevin, Henderson, Austin, Hollnagel, Erik and White, Ted (eds.) Proceedings of the ACM CHI 93 Human Factors in Computing Systems Conference April 24-29, 1993, Amsterdam, The Netherlands. pp. 222-227.

An evaluation of earcons was carried out to see whether they are an effective means of communicating information in sound. An initial experiment showed that earcons were better than unstructured bursts of sound and that musical timbres were more effective than simple tones. A second experiment was then carried out which improved upon some of the weaknesses shown up in Experiment 1 to give a significant improvement in recognition. From the results of these experiments some guidelines were drawn up for use in the creation of earcons. Earcons have been shown to be an effective method for communicating information in a human-computer interface.

© All rights reserved Brewster et al. and/or ACM Press

 

Page Information

Page maintainer: The Editorial Team
URL: http://www.interaction-design.org/references/authors/stephen_a__brewster.html
