


Proceedings of OZCHI07, the CHISIG Annual Conference on Human-Computer Interaction


 
Time and place:
Adelaide, Australia
November 28-30, 2007
Conference description:
OZCHI is Australia’s leading forum for research and development in all areas of Human-Computer Interaction. OZCHI attracts an international community of practitioners, researchers, academics and students from a wide range of disciplines including user experience designers, information architects, software engineers, human factors experts, information systems analysts, and social scientists.
Next conference:
02 Dec 2014 in Sydney, Australia
Series:
This is a preferred venue for people like Steve Howard, Toni Robertson, Jesper Kjeldskov, Ernest Edmonds, and Frank Vetere. Part of the OZCHI - CHISIG Conference on Human Computer Interaction conference series.
Publisher:
ACM Press

References from this conference (2007)

The following articles are from "Proceedings of OZCHI07, the CHISIG Annual Conference on Human-Computer Interaction":


Articles

p. 1-8

Dourish, Paul (2007): Seeing like an interface. In: Proceedings of OZCHI07, the CHISIG Annual Conference on Human-Computer Interaction November 28-30, 2007, Adelaide, Australia. pp. 1-8. Available online

Mobile and ubiquitous computing systems are increasingly of interest to HCI researchers. Often, this has meant considering the ways in which we might migrate desktop applications and everyday usage scenarios to mobile devices and mobile contexts. However, we do not just experience technologies in situ -- we also experience everyday settings through the technologies we have at our disposal. Drawing on anthropological research, I outline an alternative way of thinking about the relationship between technology and "seeing" everyday life and everyday space.

© All rights reserved Dourish and/or ACM Press

p. 103-110

Palmer, Doug, Adcock, Matt, Smith, Jocelyn, Hutchins, Matthew, Gunn, Chris, Stevenson, Duncan and Taylor, Ken (2007): Annotating with light for remote guidance. In: Proceedings of OZCHI07, the CHISIG Annual Conference on Human-Computer Interaction November 28-30, 2007, Adelaide, Australia. pp. 103-110. Available online

This paper presents a system that will support a remote guidance collaboration, in which a local expert guides a remotely located assistant to perform physical, three-dimensional tasks. The system supports this remote guidance by allowing the expert to annotate, point at and draw upon objects in the remote location using a pen and tablet-based interface to control a laser projection device. The specific design criteria for this system are drawn from a tele-health scenario involving remote medical examination of patients and the paper presents the software architecture and implementation details of the associated hardware. In particular, the algorithm for aligning the representation of the laser projection over the video display of the remote scene is described. Early evaluations by medical specialists are presented, the usability of the system in laboratory experiments is discussed and ideas for future developments are outlined.

© All rights reserved Palmer et al. and/or ACM Press

p. 111-114

Stitzlein, Cara and Wessels, Anja (2007): A descriptive screenshot analysis in a mixed presence setting. In: Proceedings of OZCHI07, the CHISIG Annual Conference on Human-Computer Interaction November 28-30, 2007, Adelaide, Australia. pp. 111-114. Available online

In mixed presence settings (MPS), where computer-mediated interactions involve both individuals and co-located groups, particular technical (video transmission of a large co-located group) and perceptual (ability to perceive video of remote people) challenges require attention. This paper presents results of a descriptive study within industry based on in vivo screen snapshots. The results illustrate how MPS are complicated by the size of co-located groups. We present considerations for using video in these settings, informing engineers, evaluators and users of such potentially complex videoconferencing spaces.

© All rights reserved Stitzlein and Wessels and/or ACM Press

p. 115-121

Strom, Georg (2007): Stories with emotions and conflicts drive development of better interactions in industrial software projects. In: Proceedings of OZCHI07, the CHISIG Annual Conference on Human-Computer Interaction November 28-30, 2007, Adelaide, Australia. pp. 115-121. Available online

An earlier study shows that stories with dialogue, emotions and conflicts -- similar to fiction writing -- give a better understanding of user needs and the situations in which an interface is used when compared to conventional scenarios. This paper describes how stories with emotions and conflicts were accepted as inputs to the definition of requirements in two industrial software projects, and how managers regarded stories as more credible than concise reports. The paper describes how it is possible to use stories with emotions and conflicts in industrial software projects, characteristics of the most useful stories, and how stories can be used to facilitate a dialogue between users and developers.

© All rights reserved Strom and/or ACM Press

p. 123-130

Nilsson, Susanna and Johansson, Björn (2007): Fun and usable: augmented reality instructions in a hospital setting. In: Proceedings of OZCHI07, the CHISIG Annual Conference on Human-Computer Interaction November 28-30, 2007, Adelaide, Australia. pp. 123-130. Available online

The differences between Augmented Reality (AR) systems and computer display based systems create a need for a different approach than "traditional" HCI to the design and development of AR systems. This paper presents theoretical and empirical work which uses a holistic approach rather than traditional human-computer interaction guidelines. The paper includes a usability study where AR was used for giving instructions to professional users in a hospital. The theoretical stance of Cognitive Systems Engineering is suggested as a possible approach to the design of AR systems. The analysis shows that the users in the context of medical care are positive towards AR systems as a technology and as a tool for instructions. This indicates that AR technology may become an accepted part of everyday work in such a context.

© All rights reserved Nilsson and Johansson and/or ACM Press

p. 131-137

Mason, Keith, Howard, Catherine, Sturm, Jeff and Keogh, Craig (2007): An assessment of the impact of ELEXSA visualizations on operator situation awareness. In: Proceedings of OZCHI07, the CHISIG Annual Conference on Human-Computer Interaction November 28-30, 2007, Adelaide, Australia. pp. 131-137. Available online

Decision making in the tactical military domain is a time-critical, high stress, high stake activity. This paper presents the results of a human-in-the-loop experiment conducted to assess the impact of novel visualizations of radar detection ranges and safe areas on operator situation awareness of the tactical environment. The experiment involved comparing the performance of two groups of eight operators attempting to achieve the same tactical goals within the same scenario using two different, but functionally equivalent, situation awareness environments; one environment incorporated the novel visualizations outlined in this paper and one did not. The experiment showed that the novel visualizations: (1) increased operator survivability; (2) shortened mission duration; and (3) reduced the time the operator spent vulnerable to radar detection compared to the standard environment. Analysis of operator perceptions elicited via post simulation interviews showed that operators using the visualizations experienced lower workload and stress levels and had more accurate perceptions of their vulnerability to radar detection than those operators using the standard environment.

© All rights reserved Mason et al. and/or ACM Press

p. 139-141

Phillips, J. G., Jory, M. and Mogford, N. (2007): Decisional style and eParticipation. In: Proceedings of OZCHI07, the CHISIG Annual Conference on Human-Computer Interaction November 28-30, 2007, Adelaide, Australia. pp. 139-141. Available online

Decisional style may predict eParticipation tendencies. The vigilance, procrastination, buck-passing, and hypervigilance of 77 undergraduates were measured using the Melbourne Decision Making Scale and related to use of WebCT. Decisional style predicted grades, participation in discussion groups and course evaluations.

© All rights reserved Phillips et al. and/or ACM Press

p. 143-150

Mueller, Florian and Gibbs, Martin R. (2007): Evaluating a distributed physical leisure game for three players. In: Proceedings of OZCHI07, the CHISIG Annual Conference on Human-Computer Interaction November 28-30, 2007, Adelaide, Australia. pp. 143-150. Available online

Physical leisure activities such as table tennis provide healthy exercise and can offer a means to connect with others socially; however, players have to be in the same physical location to play. We have developed a networked table tennis-like game that is played with a real paddle and ball, augmented with a large-scale videoconference. Unlike existing commercial console games that encourage physical activity, our system supports social interaction through an audio and video communication channel, offers a familiar gaming interface comparable to a traditional leisure game, provides non-virtual force feedback and can be enjoyed by players in three geographically separate locations simultaneously. We are presenting results from an empirical evaluation of "Table Tennis for Three" with 41 participants. The players reported that they had fun, used the game to build social rapport and experienced a sense of playing "together". Some participants did not enjoy the game, and we present informed opinions to explain their reactions. With our work, we provide other HCI researchers with a further example of an evaluation of a novel type of experience that lies in the realms of physical activity, fun and social interactions. We hope we can inspire designers to consider our results in their future game designs by looking at the characteristics of traditional physical leisure games to promote similar benefits such as exercise, enjoyment and bringing people together to socialize.

© All rights reserved Mueller and Gibbs and/or ACM Press

p. 151-159

Davis, Hilary, Skov, Mikael B., Stougaard, Malthe and Vetere, Frank (2007): Virtual box: supporting mediated family intimacy through virtual and physical play. In: Proceedings of OZCHI07, the CHISIG Annual Conference on Human-Computer Interaction November 28-30, 2007, Adelaide, Australia. pp. 151-159. Available online

Mediated intimacy is the phenomenon where humans use technologies to express, share, or communicate intimate feelings with each other. Typically, technologies supporting mediated intimacy encompass different characteristics than technologies designed to solve specific work-oriented tasks. This paper reports on the design, implementation and initial evaluation of Virtual Box. Virtual Box attempts to create a physical and engaging context in order to support reciprocal interactions with expressive content. An implemented version of Virtual Box is evaluated in a location-aware environment to evaluate the design ideas according to mediated family intimacy.

© All rights reserved Davis et al. and/or ACM Press

p. 161-166

Watkins, Jerry (2007): Social media, participatory design and cultural engagement. In: Proceedings of OZCHI07, the CHISIG Annual Conference on Human-Computer Interaction November 28-30, 2007, Adelaide, Australia. pp. 161-166. Available online

This paper reports on the application of Participatory Design methodology to an experiment in social media production. Staff at the Australian Museum are developing new content genres, creative tools and techniques in order to produce original cultural multimedia based on -- or inspired by -- the Museum's extensive collections. The ultimate aim of the project is for the Museum to act as a social media hub for external communities of interest to co-create their own narrative-based interpretations of the Museum's content, leading to an individualized cultural experience for physical and online visitors alike. A participatory content creation method has been developed for this project, which features iterative design cycles marked by social prototyping, evaluation and strategic formulation. These cycles are repeated until desired performance is achieved.

© All rights reserved Watkins and/or ACM Press

p. 167-170

Mueller, Florian, Agamanolis, Stefan, Vetere, Frank and Gibbs, Martin (2007): Brute force as input for networked gaming. In: Proceedings of OZCHI07, the CHISIG Annual Conference on Human-Computer Interaction November 28-30, 2007, Adelaide, Australia. pp. 167-170. Available online

Bodily activities such as sports have many physical and mental health benefits. The associated physical interactions are often of an exertion character and facilitate the use of brute force and intense physical actions. On the other hand, computer interfaces so far have mainly focused on interactions that use limited force and often ignored the existence of extreme brutal interactions that can be encountered in everyday life, in particular in contact sports. We present our initial investigations on the concept of "Brute Force" interfaces in HCI and describe work-in-progress on a prototype that aims to facilitate brute force interactions. We hope with our work we can aid designers who want to leverage the physical and mental health benefits of such physically intense behaviors that people do exhibit in their lives.

© All rights reserved Mueller et al. and/or ACM Press

p. 17-24

Taylor, Nick, Cheverst, Keith, Fitton, Dan, Race, Nicholas J. P., Rouncefield, Mark and Graham, Connor (2007): Probing communities: study of a village photo display. In: Proceedings of OZCHI07, the CHISIG Annual Conference on Human-Computer Interaction November 28-30, 2007, Adelaide, Australia. pp. 17-24. Available online

In this paper we describe a technology probe aiming to aid understanding of how digital displays can help support communities. Using a simple photo gallery application, deployed in a central social point in a small village and displaying user-generated photos and videos, we have been able to gain an understanding of this setting, field test our device and inspire new ideas directly from members of the community. We explore the process of deploying this display, the response from residents and how the display has taken a place within the community.

© All rights reserved Taylor et al. and/or ACM Press

p. 171-174

McEwan, Gregor, Rittenbruch, Markus and Mansfield, Tim (2007): Understanding awareness in mixed presence collaboration. In: Proceedings of OZCHI07, the CHISIG Annual Conference on Human-Computer Interaction November 28-30, 2007, Adelaide, Australia. pp. 171-174. Available online

Mixed presence collaboration combines distributed and collocated collaboration -- there are multiple distributed sites, each with a collocated group. While collocated collaboration and purely distributed collaboration are each the subject of rich bodies of research, the combination is less well explored. In this paper we present our initial concepts of awareness support in mixed presence collaboration. We present this as a first version model of awareness. The selected literature we have used to inform the model is drawn from collocated research and distributed research as well as the small body of work addressing mixed presence collaboration directly. In this paper we present a discussion of this relevant literature and use it to explain our model. We also offer a sample of applying the model through the use of a scenario.

© All rights reserved McEwan et al. and/or ACM Press

p. 175-178

Müller-Tomfelde, Christian, Schremmer, Claudia and Wessels, Anja (2007): Exploratory study on concurrent interaction in co-located collaboration. In: Proceedings of OZCHI07, the CHISIG Annual Conference on Human-Computer Interaction November 28-30, 2007, Adelaide, Australia. pp. 175-178. Available online

We present an exploratory lab study that provides observations and measures about the usage of interaction devices in co-located cooperative work situations at a tabletop display. We designed our experiment with the aim of providing a context for the collaboration that shares as many characteristics of real life as possible. Twenty-two participants were instructed to perform a shared goal task. They worked in co-located pairs on solving three sets of two jigsaw puzzles concurrently. They were allowed to use any combination of direct and indirect input device, i.e., touch and mouse, to achieve the goal. Additionally, a hidden task was imposed on the participants in the second and third puzzle task: They had to discover that pieces were mixed up between the two displayed puzzles. The role of the hidden task was to trigger spontaneous transitions from individual to collaborative work. Our observations focused on the participants' selection and usage of input devices during the task execution. Our study revealed amongst others that participants stuck to their preferred input device even when they got more engaged in coordination and communication with their partner. Our findings are based on log data, questionnaire data and video recordings.

© All rights reserved Müller-Tomfelde et al. and/or ACM Press

p. 179-182

Li, Jane, Wessels, Anja, Alem, Leila and Stitzlein, Cara (2007): Exploring interface with representation of gesture for remote collaboration. In: Proceedings of OZCHI07, the CHISIG Annual Conference on Human-Computer Interaction November 28-30, 2007, Adelaide, Australia. pp. 179-182. Available online

This paper reports on a laboratory study of a gesture representation interface for remote collaboration on physical tasks. Measured by task performance and users' perception of interaction, the experiment assessed two gesture representations (hands vs. cursor pointer) in the context of a video mediated interface which included a view of the remote partner. We did not find any significant difference between the hands condition and the pointer condition when measuring users' task performance. However, our results showed that participants reported an overall preference for the pointer functionality over the hands. We also found that participants perceived a significantly higher quality of interaction in the hands condition than in the pointer condition. Additionally, the majority of participants valued being able to see each other's faces during the collaboration. We conclude with a discussion of the importance of accounting for the user's perception of interaction in addition to the traditional task performance measure when evaluating gesture representation interfaces, and of considering these two factors when recommending the most suitable interface design with gesture representation for collaboration on physical tasks.

© All rights reserved Li et al. and/or ACM Press

p. 183-186

Jones, Christian Martyn and Deeming, Andrew (2007): Investigating emotional interaction with a robotic dog. In: Proceedings of OZCHI07, the CHISIG Annual Conference on Human-Computer Interaction November 28-30, 2007, Adelaide, Australia. pp. 183-186. Available online

The next generation of consumer-level entertainment robots should offer more natural, engaging interaction. This paper reports on the development and evaluation of a consumer-level robotic dog with acoustic emotion recognition capabilities. The dog can recognise the emotional state of its owner from affective cues in the owner's speech and respond with appropriate actions. The evaluation study shows that users recognise the new robotic dog as emotionally intelligent and report that this makes the dog appear more 'alive'.

© All rights reserved Jones and Deeming and/or ACM Press

p. 187-190

Willis, Matthew (2007): An emotionally intelligent user interface: modelling emotion for user engagement. In: Proceedings of OZCHI07, the CHISIG Annual Conference on Human-Computer Interaction November 28-30, 2007, Adelaide, Australia. pp. 187-190. Available online

This paper presents a model for simulating emotion and personality in interactive systems. This paper argues that by introducing simulated emotional responses and state dynamics to future systems, they will provide a more life-like, engaging and interactive experience to their users, and provide a more effective and efficient user interaction. Further, by simulating Emotional Intelligence in a system, a developer may be able to provide a more tailored user experience, and be provided with more control over the outcome of a user's interaction with their system. Emotion theory and expression are explored, and a model is presented based upon emotional states. Implementation of this model is then presented as an intelligent back end process that utilises dynamic video stream analysis that feeds into an interactive display. The proposed system, and its hardware implementation is presented, followed by a discussion of future areas of research.

© All rights reserved Willis and/or ACM Press

p. 191-194

Jones, Christian Martyn and Troen, Tommy (2007): Biometric valence and arousal recognition. In: Proceedings of OZCHI07, the CHISIG Annual Conference on Human-Computer Interaction November 28-30, 2007, Adelaide, Australia. pp. 191-194. Available online

A real-time user-independent emotion detection system using physiological signals has been developed. The system has the ability to classify affective states into 2-dimensions using valence and arousal. Each dimension ranges from 1 to 5 giving a total of 25 possible affective regions. Physiological signals were measured using 3 biometric sensors for Blood Volume Pulse (BVP), Skin Conductance (SC) and Respiration (RESP). Two emotion inducing experiments were conducted to acquire physiological data from 13 subjects. The data from 10 of these subjects were used to train the system, while the remaining 3 datasets were used to test the performance of the system. A recognition rate of 62% for valence and 67% for arousal was achieved within +/- 1 units of the valence and arousal rating.

© All rights reserved Jones and Troen and/or ACM Press

p. 195-198

Chik, Vincent, Plimmer, Beryl and Hosking, John (2007): Intelligent mind-mapping. In: Proceedings of OZCHI07, the CHISIG Annual Conference on Human-Computer Interaction November 28-30, 2007, Adelaide, Australia. pp. 195-198. Available online

Current computer based mind-mapping tools are much slower to use than pen and paper because users are distracted by tool operations such as finding and arranging widgets. The shift in focus from brainstorming to tool management interrupts the rapid brainstorming process that mind maps are intended to support. Our pen based mind-mapping software that includes intelligent ink recognition, editing and export alleviates these intrusions as the user only has to worry about writing on the canvas, yet usual digital document support is provided. The digital ink recognition and manipulation techniques described here will be of interest to others working with informal documents.

© All rights reserved Chik et al. and/or ACM Press

p. 199-202

Luca, Alexander De, Weiss, Roman and Drewes, Heiko (2007): Evaluation of eye-gaze interaction methods for security enhanced PIN-entry. In: Proceedings of OZCHI07, the CHISIG Annual Conference on Human-Computer Interaction November 28-30, 2007, Adelaide, Australia. pp. 199-202. Available online

Personal identification numbers (PINs) are one of the most common ways of electronic authentication these days and used in a wide variety of applications, especially in ATMs (cash machines). A non-marginal amount of tricks are used by criminals to spy on these numbers to gain access to the owners' valuables. Simply looking over the victims' shoulders to get in possession of their PINs is a common one. This effortless but effective trick is known as shoulder surfing. Thus, a less observable PIN entry method is desirable. In this work, we evaluate three different eye gaze interaction methods for PIN-entry, all resistant against these common attacks and thus providing enhanced security. Besides the classical eye input methods we also investigate a new approach of gaze gestures and compare it to the well known classical gaze-interactions. The evaluation considers both security and usability aspects. Finally we discuss possible enhancements for gaze gestures towards pattern based identification instead of number sequences.

© All rights reserved Luca et al. and/or ACM Press

p. 203-206

Reid, Peter, Hallett-Hook, Fred, Plimmer, Beryl and Purchase, Helen C. (2007): Applying layout algorithms to hand-drawn graphs. In: Proceedings of OZCHI07, the CHISIG Annual Conference on Human-Computer Interaction November 28-30, 2007, Adelaide, Australia. pp. 203-206. Available online

Hand-drawing a node-and-edge graph is a simple visual problem solving technique; however as the graph is built it can easily get untidy and confusing. It is more difficult to understand and interpret a confusing graph. By applying edge morphing techniques and a force-directed algorithm the hand-drawn graph can retain its informal appearance while its layout is improved. Graphs will be more readily understood, making the problem solving process easier.

© All rights reserved Reid et al. and/or ACM Press
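The Reid et al. abstract above applies a force-directed algorithm to tidy hand-drawn graphs. As general background only (this is not code from the paper), a minimal Fruchterman-Reingold-style layout can be sketched as follows; all function names and parameters here are illustrative:

```python
import math
import random

def force_directed_layout(nodes, edges, iterations=200, width=100.0, height=100.0):
    """Basic Fruchterman-Reingold-style layout.

    nodes: list of hashable node ids; edges: list of (u, v) pairs.
    Returns a dict mapping each node to an (x, y) position.
    """
    random.seed(42)  # deterministic starting positions for reproducibility
    pos = {n: (random.uniform(0, width), random.uniform(0, height)) for n in nodes}
    k = math.sqrt(width * height / max(len(nodes), 1))  # ideal edge length
    t = width / 10.0  # "temperature" caps per-step movement

    for _ in range(iterations):
        disp = {n: [0.0, 0.0] for n in nodes}
        # Repulsive force between every pair of nodes: f_r = k^2 / d
        for i, u in enumerate(nodes):
            for v in nodes[i + 1:]:
                dx = pos[u][0] - pos[v][0]
                dy = pos[u][1] - pos[v][1]
                d = max(math.hypot(dx, dy), 0.01)
                f = k * k / d
                disp[u][0] += dx / d * f; disp[u][1] += dy / d * f
                disp[v][0] -= dx / d * f; disp[v][1] -= dy / d * f
        # Attractive force along each edge: f_a = d^2 / k
        for u, v in edges:
            dx = pos[u][0] - pos[v][0]
            dy = pos[u][1] - pos[v][1]
            d = max(math.hypot(dx, dy), 0.01)
            f = d * d / k
            disp[u][0] -= dx / d * f; disp[u][1] -= dy / d * f
            disp[v][0] += dx / d * f; disp[v][1] += dy / d * f
        # Move each node at most distance t, clamp to the canvas, then cool
        for n in nodes:
            dx, dy = disp[n]
            d = max(math.hypot(dx, dy), 0.01)
            x = pos[n][0] + dx / d * min(d, t)
            y = pos[n][1] + dy / d * min(d, t)
            pos[n] = (min(width, max(0.0, x)), min(height, max(0.0, y)))
        t *= 0.95

    return pos
```

Applied after edge morphing, such a step would spread nodes apart while keeping connected nodes near each other, without altering the hand-drawn strokes themselves.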

p. 207-210

Kennedy, Patrick (2007): Mentoring collaborative user centred design. In: Proceedings of OZCHI07, the CHISIG Annual Conference on Human-Computer Interaction November 28-30, 2007, Adelaide, Australia. pp. 207-210. Available online

User-centred design is a key part of best practice website design, but becomes increasingly difficult when undertaken by an inexperienced, multidisciplinary team gathered from various parts of an organisation. Mentoring is one approach that can help alleviate the pressure and assist such a team in delivering solid results. This paper describes the experiences of a team and its mentor during the ground-up redesign of a website for an Australian government agency, referred to by the pseudonym 'ESA' in this paper.

© All rights reserved Kennedy and/or ACM Press

p. 211-214

Chen, Xiaofan and Plimmer, Beryl (2007): CodeAnnotator: digital ink annotation within Eclipse. In: Proceedings of OZCHI07, the CHISIG Annual Conference on Human-Computer Interaction November 28-30, 2007, Adelaide, Australia. pp. 211-214. Available online

Programming environments do not support ink annotation. Yet, annotation is the most effective way to actively read and review a document. This paper describes a tool, CodeAnnotator, which integrates annotation support inside an Integrated Development Environment (IDE). This tool is designed and developed to support direct annotation of program code with digital ink in the IDE. Programmers will benefit from a more intuitive interaction space to record notes and comments just as they would on paper documents.

© All rights reserved Chen and Plimmer and/or ACM Press

p. 215-218

Sun, Yong, Shi, Yu, Chen, Fang and Chung, Vera (2007): An efficient unification-based multimodal language processor in multimodal input fusion. In: Proceedings of OZCHI07, the CHISIG Annual Conference on Human-Computer Interaction November 28-30, 2007, Adelaide, Australia. pp. 215-218. Available online

A Multimodal User Interface (MMUI) allows a user to interact with a computer in a way similar to human-to-human communication, for example, through speech and gesture. Being an essential component in MMUIs, Multimodal Input Fusion should be able to find the semantic interpretation of a user's intention from recognized multimodal symbols which are semantically complementary. We enhanced our efficient unification-based multimodal parsing processor, which has the potential to achieve low polynomial computational complexity while parsing versatile multimodal inputs within a speech and gesture based MMUI, to handle multimodal inputs from more than two modalities. Its ability to disambiguate speech recognition results with gesture recognition results was verified with an experiment. The analysis of experiment results demonstrates that the improvement is significant after applying this technique.

© All rights reserved Sun et al. and/or ACM Press

p. 219-222

Young, Greg T., Foth, Marcus and Matthes, Natascha Y. (2007): Virtual fish: visual evidence of connectivity in a master-planned urban community. In: Proceedings of OZCHI07, the CHISIG Annual Conference on Human-Computer Interaction November 28-30, 2007, Adelaide, Australia. pp. 219-222. Available online

The rapid densification of urban areas around the world offers exciting opportunities for new place-based artworks and locative media that aim at engaging, informing and entertaining members of local communities. In this paper, we introduce a design competition for concepts of interaction design which display visual evidence of connectivity in a master-planned community. This competition is based in and focuses on one of Brisbane's newly built inner urban renewal developments. Furthermore, we introduce the conceptual interaction design of one of the competition's winning entries, as well as its potential and its challenges to engage local residents in participation and exploration of place-based information and community media.

© All rights reserved Young et al. and/or ACM Press

p. 223-226

Ryu, Han-Sol, Yoon, Yeo-Jin, Lim, Myeong-Eun, Park, Chan-Yong, Park, Soo-Jun and Choi, Soo-Mi (2007): Picture navigation using an ambient display and implicit interactions. In: Proceedings of OZCHI07, the CHISIG Annual Conference on Human-Computer Interaction November 28-30, 2007, Adelaide, Australia. pp. 223-226. Available online

There is increasing demand for ubiquitous displays that react to a user's actions. We propose a method of navigating pictures on an ambient display using implicit interactions. The ambient display can identify the user and measure how far away they are using an RFID reader and ultrasonic sensors. When the user is a long way from the display, it acts as a digital picture and does not attract attention. When the user comes within an appropriate range for interaction, the display shows pictures that are related to the user and provides quasi-3D navigation using the TIP (tour into the picture) method. In addition, menus can be manipulated directly on a touch-screen or remotely using an air mouse. In an emergency, LEDs around the display flash to alert the user.

© All rights reserved Ryu et al. and/or ACM Press

p. 227-230

Sreedharan, Sreeram, Zurita, Edmund S. and Plimmer, Beryl (2007): 3D input for 3D worlds. In: Proceedings of OZCHI07, the CHISIG Annual Conference on Human-Computer Interaction November 28-30, 2007, Adelaide, Australia. pp. 227-230. Available online

Virtual worlds present a 3D space to the user. However, input devices are typically 2D. This unnatural mapping reduces the engagement of the experience. We are exploring the use of Wii controllers to provide 3D gesture-based input to the 3D virtual world Second Life. In evaluating its usability, we found that gesture-based interfaces are appealing and natural for hand gestures such as a wave, but difficult to map to facial expressions.

© All rights reserved Sreedharan et al. and/or ACM Press

p. 231-234

Schou, Torben and Gardner, Henry J. (2007): A Wii remote, a game engine, five sensor bars and a virtual reality theatre. In: Proceedings of OZCHI07, the CHISIG Annual Conference on Human-Computer Interaction November 28-30, 2007, Adelaide, Australia. pp. 231-234. Available online

The Nintendo Wii Remote is having a huge impact on the computer games industry. This paper describes a project which is integrating this controller into a game environment in a multi-wall virtual reality theatre. Aspects considered include interaction taxonomies of the Wii controller, the extension of driver software to have the Wii controller deal with multiple Sensor Bars at once, and the porting of the game engine into the virtual reality theatre.

© All rights reserved Schou and Gardner and/or ACM Press

p. 235-238

McCord, Glenn, Plimmer, Beryl and Wuensche, Burkhard (2007): Surface manipulation using a paper sculpture metaphor. In: Proceedings of OZCHI07, the CHISIG Annual Conference on Human-Computer Interaction November 28-30, 2007, Adelaide, Australia. pp. 235-238. Available online

The creation of 3D computer models is essential for many applications in science, engineering and the arts, and is frequently performed by untrained users. However, creating an intuitive mapping between 2D input and 3D models is a non-trivial task, reflected in the difficulty novices have in using current 3D modelling software. Using metaphors of paper sculpture and pen sketching, our gesture-based modelling tool simplifies this interaction mapping. More intuitive object manipulation means that an otherwise complex model can be rapidly created by an inexperienced, non-artistic user. To demonstrate this, we have chosen to model orchid flowers, as they offer considerable challenges to the artist due to their complexity of shape and detail, especially the petal surfaces, which vary a great deal in curvature.

© All rights reserved McCord et al. and/or ACM Press

p. 239-240

Luca, Alexander De, Weiss, Roman and Hussmann, Heinrich (2007): PassShape: stroke based shape passwords. In: Proceedings of OZCHI07, the CHISIG Annual Conference on Human-Computer Interaction November 28-30, 2007, Adelaide, Australia. pp. 239-240. Available online

Authentication today mostly means using passwords or personal identification numbers (PINs). The average user has to remember an increasing number of PINs and passwords. Unfortunately, humans have limited capabilities for remembering abstract alphanumeric sequences. Thus, many people either forget them or use very simple ones that entail several security risks. In our previous work on PIN entry at ATMs (cash machines), we found that many people support their memory when recalling PINs by using an imaginary shape overlaid on the number pad. In this paper, we introduce PassShape, a shape-based authentication mechanism. We argue that using shapes will allow more complex and more secure authentication with a lower cognitive load. That is, it enables people to use easy-to-remember but complex authentication patterns.

© All rights reserved Luca et al. and/or ACM Press

p. 241-248

Martin, Dave, Procter, Rob, Mariani, John and Rouncefield, Mark (2007): Working the contract. In: Proceedings of OZCHI07, the CHISIG Annual Conference on Human-Computer Interaction November 28-30, 2007, Adelaide, Australia. pp. 241-248. Available online

This paper presents data and analysis from a long-term ethnographic study of the design and development of an electronic patient records system in a UK hospital Trust. The project is a public-private partnership (PPP) between the Trust and a US-based software house (OurComp) contracted to supply, configure and support its customizable-off-the-shelf (COTS) healthcare information system in cooperation with an in-hospital project team. Given this contractual relationship for system delivery and support (increasingly common, and 'standard', in UK healthcare), we focus on the ways in which issues to do with the 'contract' enter into and impinge on everyday design and deployment work as part of the process of delivering dependable systems.

© All rights reserved Martin et al. and/or ACM Press

p. 249-255

Yin, Bo, Ruiz, Natalie, Chen, Fang and Khawaja, M. Asif (2007): Automatic cognitive load detection from speech features. In: Proceedings of OZCHI07, the CHISIG Annual Conference on Human-Computer Interaction November 28-30, 2007, Adelaide, Australia. pp. 249-255. Available online

Variations in cognitive load have been found to affect multimodal behaviour, in particular features of spoken input. In this paper, we present the design and implementation of a user study aimed at eliciting natural speech at three different levels of cognitive load. Some of the speech data produced was then used to train a number of models to detect cognitive load automatically. We describe a classification approach in which cognitive load levels are detected and output as discrete level ranges. The final system achieved 71.1% accuracy for three-level classification in a speaker-independent setting. The ability to detect and manage a user's cognitive load can help us build intelligent interfaces that adapt to ensure optimal user performance.

© All rights reserved Yin et al. and/or ACM Press

p. 25-28

Kjeldskov, Jesper, Andersen, Eva and Hedegaard, Lars (2007): Designing and evaluating Buster: an indexical mobile travel planner for public transportation. In: Proceedings of OZCHI07, the CHISIG Annual Conference on Human-Computer Interaction November 28-30, 2007, Adelaide, Australia. pp. 25-28. Available online

This paper elaborates on previous research into the design and use of mobile information systems for supporting the use of public transportation. Contributing to this domain of HCI research, we describe the design and evaluation of a mobile travel planner, Buster, for the public city bus system of a large regional city in Denmark. Carrying on from earlier research activities, we used contextual interviews, acting out of future scenarios in situ, and iterative paper prototyping to extend previous design ideas and further explore the principle of indexicality in interface design for context-aware mobile systems. We then implemented a functional prototype application and evaluated it in the field.

© All rights reserved Kjeldskov et al. and/or ACM Press

p. 257-260

Kraal, Ben and Popovic, Vesna (2007): Looking for expertise in physical interactions. In: Proceedings of OZCHI07, the CHISIG Annual Conference on Human-Computer Interaction November 28-30, 2007, Adelaide, Australia. pp. 257-260. Available online

In this paper, we describe the methods we have used to investigate expertise in interaction with physical interfaces. The paper covers the background of the interfaces (compression bandages), describes the methods used, and presents findings on the use of tacit and explicit knowledge during interaction. Given the increasing interest in interfaces that cross between the physical and the digital, this method may be of interest to researchers involved in similar projects.

© All rights reserved Kraal and Popovic and/or ACM Press

p. 261-264

Bruno, Vince and Dick, Martin (2007): Making usability work in industry: an Australian practitioner perspective. In: Proceedings of OZCHI07, the CHISIG Annual Conference on Human-Computer Interaction November 28-30, 2007, Adelaide, Australia. pp. 261-264. Available online

The gap in usability knowledge between research and industry practice is an important one to bridge. This paper presents the findings of 12 interviews with usability practitioners. The interviews focused on eliciting stories about successful and unsuccessful usability outcomes. The analysis shows that an iterative usability process, ensuring stakeholder involvement, articulating usability goals and requirements, and avoiding technological constraints are critical to achieving a successful usability outcome in a project.

© All rights reserved Bruno and Dick and/or ACM Press

p. 265-272

Chatfield, Craig and Hexel, René (2007): Privacy and community connectedness: designing intelligent environments for our cities. In: Proceedings of OZCHI07, the CHISIG Annual Conference on Human-Computer Interaction November 28-30, 2007, Adelaide, Australia. pp. 265-272. Available online

This paper investigates the casual interactions that support and nourish a community, and seeks to provide a solution to the increasing detachment of modern society as community spaces become less and less engaging. We suggest the use of a ubiquitous computing (ubicomp) infrastructure to promote and support community connectedness by hosting virtual community environments and by providing local information and interaction possibilities. This infrastructure addresses our need as a society to communicate more effectively and to create loose bonds with familiar strangers within our community. We explore this idea with a use scenario and a user study of people interacting with the services in a developed intelligent environment.

© All rights reserved Chatfield and Hexel and/or ACM Press

p. 273-281

Christensen, Claus M., Kjeldskov, Jesper and Rasmussen, Klaus K. (2007): GeoHealth: a location-based service for nomadic home healthcare workers. In: Proceedings of OZCHI07, the CHISIG Annual Conference on Human-Computer Interaction November 28-30, 2007, Adelaide, Australia. pp. 273-281. Available online

In this paper, we describe GeoHealth -- a geographical information system prototype for home healthcare workers who, during a normal workday, have to attend to clients and patients physically distributed over a large geographical area. Informed by field studies of work activities and interviews with the healthcare workers, we have designed an interactive location-based service for supporting distributed mobile collaboration. The prototype explores a representational approach to context-awareness and represents live contextual information about clients/patients, co-workers, current and scheduled work activities, and alarms, adapted to the user's location. The prototype application is web-based and uses Google Maps, GPS positioning, and Web 2.0 technology to provide a lightweight, dynamic, and interactive representation of the work domain, supporting distributed collaboration, communication, and peripheral awareness among nomadic workers.

© All rights reserved Christensen et al. and/or ACM Press

p. 283-290

Paay, Jeni and Kjeldskov, Jesper (2007): A Gestalt theoretic perspective on the user experience of location-based services. In: Proceedings of OZCHI07, the CHISIG Annual Conference on Human-Computer Interaction November 28-30, 2007, Adelaide, Australia. pp. 283-290. Available online

Location-based services provide mobile users with information and functionality tailored to their geographical location. In recent years these kinds of context-aware mobile systems have received increasing attention from the software industry as well as from researchers within a wide range of computing disciplines. However, while a lot of research has been done on sensing, adapting to, and philosophizing over the complex concept of "context", little theoretically based knowledge exists about why, from a user-experience perspective, some context-aware system designs work well and others don't. Contributing to this discussion, this paper suggests Gestalt theory as a potential theoretical framework for understanding the use of this class of computer systems, and argues that describing the user experience of location-based services through Gestalt theory's principles of proximity, closure, symmetry, continuity, and similarity can help explain how people make sense of small and fragmented pieces of information on mobile devices in context.

© All rights reserved Paay and Kjeldskov and/or ACM Press

p. 29-37

Graham, Connor, Rouncefield, Mark, Gibbs, Martin, Vetere, Frank and Cheverst, Keith (2007): How probes work. In: Proceedings of OZCHI07, the CHISIG Annual Conference on Human-Computer Interaction November 28-30, 2007, Adelaide, Australia. pp. 29-37. Available online

'Cultural probes', since first being proposed and described by Bill Gaver and his colleagues, have been adapted and appropriated for a range of purposes within a variety of technology projects. In this paper we critically review different uses of Probes and discuss aspects common to different Probe variants. We also present and critique some of the debate around Probes by describing the detail of their use in two studies: the Digital Care project (Lancaster University) and the Mediating Intimacy project (University of Melbourne). We then reorient the discussion around Probes towards how they work: both as interpretative fodder for social scientists and as a resource for 'designers'. Finally, we discuss possible new directions for Probes and some of the challenges confronting the approach.

© All rights reserved Graham et al. and/or ACM Press

p. 291-294

Su, Daniel Kuen Seong and Yee, Victoria Siew Yen (2007): Designing usable interface for navigating mobile chat messages. In: Proceedings of OZCHI07, the CHISIG Annual Conference on Human-Computer Interaction November 28-30, 2007, Adelaide, Australia. pp. 291-294. Available online

Little design consideration has been given to easing navigation through a long chat archive on a limited screen display. By incorporating graphical and user-centered design, messages can be presented in logical groupings for ease of navigation and efficient tracking of specific messages in a long chat archive. This paper explores usable interface design for mobile group chat systems, using navigation and visualisation to track messages, resulting in minimal key presses and fast message retrieval. Additionally, we incorporate avatars and emoticons for user identification and human embodiment to facilitate understanding of the messages' contents.

© All rights reserved Su and Yee and/or ACM Press

p. 295-302

Loewenich, Frank and Maire, Frederic (2007): Hands-free mouse-pointer manipulation using motion-tracking and speech recognition. In: Proceedings of OZCHI07, the CHISIG Annual Conference on Human-Computer Interaction November 28-30, 2007, Adelaide, Australia. pp. 295-302. Available online

Technology is advancing at a rapid pace, automating many everyday chores in the process. Information technology (IT) is changing the way we work and providing society with a multitude of entertainment options. Unfortunately, designers of many software systems have in the past not considered the disabled as active users of technology, and this significant part of the world's population has thus often been neglected. A change in this mindset has been emerging in recent years, however, as private-sector organizations and governments have started to realize that including this user group is not only profitable but also beneficial to society as a whole. This paper introduces an alternative to the traditional mouse input device, using a modified Lucas-Kanade optical-flow algorithm to track head movements and speech recognition to activate mouse buttons.

© All rights reserved Loewenich and Maire and/or ACM Press

p. 303-310

Bezerianos, Anastasia (2007): Using alternative views for layout, comparison and context switching tasks in wall displays. In: Proceedings of OZCHI07, the CHISIG Annual Conference on Human-Computer Interaction November 28-30, 2007, Adelaide, Australia. pp. 303-310. Available online

In this paper we first present a set of tasks that are relevant to wall display interaction. Among these, layout management, context switching and comparison tasks could benefit from interactive shortcut views of remote areas of a wall display, presented close to the user. One such shortcut-view technique, ScaleView portals, is evaluated against a simple magnification lens and against walking when performing these tasks. We observed that for a layout and comparison task with frequent context switching, users preferred ScaleView portals. For simpler tasks, such as searching, regular magnification lenses and walking were preferred. General observations on how the display was used as a peripheral reference by different participants highlighted one of the benefits of wall-sized displays: users may visually refer to the large, spread-out content on the wall display, even if they prefer to interact with it close to their location.

© All rights reserved Bezerianos and/or ACM Press

p. 311-317

Yeung, Louise, Plimmer, Beryl, Lobb, Brenda and Elliffe, Douglas (2007): Levels of formality in diagram presentation. In: Proceedings of OZCHI07, the CHISIG Annual Conference on Human-Computer Interaction November 28-30, 2007, Adelaide, Australia. pp. 311-317. Available online

The incremental beautification of hand-drawn diagrams is a process that is poorly understood. Thus the implementation of beautification techniques in computer-based sketch tools is ad hoc, with most supporting only the ends of the spectrum: hand-drawn and fully formalized. Hand-drawn diagrams are more effective for early design and review, but users are more satisfied with formal designs. This suggests that there may be applications for intermediate levels of formality. By understanding the attributes of visual formality it is possible to beautify a diagram progressively, thereby achieving visually consistent intermediate levels of formality. Here we present a taxonomy of the attributes of visual formality and the implementation of this taxonomy in a sketch tool.

© All rights reserved Yeung et al. and/or ACM Press

p. 319-322

Enders, Jessica (2007): Zebra striping: does it really help? In: Proceedings of OZCHI07, the CHISIG Annual Conference on Human-Computer Interaction November 28-30, 2007, Adelaide, Australia. pp. 319-322. Available online

'Zebra striping' -- also known as half shadow -- is the application of a faint shadow to alternate lines or rows in data tables or forms to aid readability. Zebra striping has been in use on paper and in electronic media for almost half a century; however, there is practically no evidence that it actually assists users. We conducted an online experiment to measure the impact of zebra striping on accuracy and speed when answering a series of questions using a table of data. Surprisingly, zebra striping did not consistently deliver gains in either measure.

© All rights reserved Enders and/or ACM Press

p. 39-47

Pascoe, Jason and Thomson, Kirsten (2007): On the use of mobile tools in everyday life. In: Proceedings of OZCHI07, the CHISIG Annual Conference on Human-Computer Interaction November 28-30, 2007, Adelaide, Australia. pp. 39-47. Available online

This paper explores how mobile tools are used in everyday life and investigates the issues surrounding their usage, or indeed, their lack of usage. Personal computers support us in a wide range of our desk-bound activities, but there is still relatively little use of computer-based tools in other parts of our day-to-day lives. We hypothesise that this is because certain barriers-to-use exist that discourage their use in everyday environments, namely that the tools are not readily to hand at the time of need and/or that they distract the user's attention too much from the surrounding environment or main task at hand. We briefly present our concept of a Smartwatch -- a wrist-worn form of a general-purpose wearable computer -- that aims to overcome these barriers. However, we strongly believe that a prerequisite to the successful development of this, and other types of mobile devices, is a better understanding of the use, or disuse, of mobile tools in everyday life (including traditional and paper-based solutions). To gain this understanding we conducted a diary study in which a group of twelve volunteers recorded their usage of mobile tools over a period of 2 days. With the large volume of data that was collected we performed a qualitative analysis based on grounded theory techniques, resulting in a comprehensive and detailed picture of the use of mobile tools in everyday life. From this understanding we have drawn out nine key themes which we present in some detail in this paper, including: situational versus portability strategies, the mobility of information, new behaviours derived from mobile phone usage, the importance of creative expression, concern over privacy and security issues, and the demonstrated existence of the barriers-to-use.

© All rights reserved Pascoe and Thomson and/or ACM Press

p. 49-56

Kantola, Niina and Jokela, Timo (2007): SVSb: simple and visual storyboards: developing a visualisation method for depicting user scenarios. In: Proceedings of OZCHI07, the CHISIG Annual Conference on Human-Computer Interaction November 28-30, 2007, Adelaide, Australia. pp. 49-56. Available online

Presenting scenarios, or user tasks, in the form of storyboards is a common technique in the field of HCI. We propose a storyboarding approach called SVSb ('simple and visual storyboards'). The aim is to create storyboards that are simple and easy to construct and modify, yet still sufficiently communicative and descriptive. The storyboards include elements such as context, user, user goals, plans, evaluation, user actions and system events. The rationale of SVSb and experiences of using the approach in a case project are described.

© All rights reserved Kantola and Jokela and/or ACM Press

p. 57-60

Khawaja, M. Asif, Ruiz, Natalie and Chen, Fang (2007): Potential speech features for cognitive load measurement. In: Proceedings of OZCHI07, the CHISIG Annual Conference on Human-Computer Interaction November 28-30, 2007, Adelaide, Australia. pp. 57-60. Available online

Intelligent user interfaces with an awareness of a user's experienced level of cognitive load have the potential to change the way output strategies are implemented and executed. However, current methods of measuring cognitive load are intrusive and unsuitable for real-time scenarios. Certain speech features have been shown to change under high levels of load. We present a dual-task, speech-based user study in which we explore three speech features: pause length, pause frequency and latency to response. These features are evaluated for their diagnostic capacity. Pause length and latency to response are shown to be useful indicators of high-load versus low-load speech.

© All rights reserved Khawaja et al. and/or ACM Press

p. 61-68

Xu, Sheng and Shibata, Hirohito (2007): Writing blocks: a visualization to support global revising. In: Proceedings of OZCHI07, the CHISIG Annual Conference on Human-Computer Interaction November 28-30, 2007, Adelaide, Australia. pp. 61-68. Available online

This paper focuses on cognitive support for the revising activities of writing, especially global revising. Most previous writing-support systems emphasize support for content-generating activities. As for revising, some of these systems provide automatic spell-check and grammar-check features. However, revising should not be confined to such local, superficial changes. In this paper, we aim to support revising activities by providing a new visualization and interaction that encourages writers to revise globally. We have built a system called Writing Blocks to confirm the feasibility of our approach. Our preliminary user study, conducted with three subjects, helped us gain a perspective on the effectiveness of our approach, especially in supporting revising scenarios such as revising scripts for time-restricted presentations and revising the global usage of terms in long documents according to their contexts.

© All rights reserved Xu and Shibata and/or ACM Press

p. 69-76

Jakobsen, Mikkel Rønne and Hornbæk, Kasper (2007): Transient visualizations. In: Proceedings of OZCHI07, the CHISIG Annual Conference on Human-Computer Interaction November 28-30, 2007, Adelaide, Australia. pp. 69-76. Available online

Information visualizations often make permanent changes to the user interface with the aim of supporting specific tasks. However, a permanent visualization cannot support the variety of tasks found in realistic work settings equally well. We explore interaction techniques that transiently visualize information near the user's focus of attention. Transient visualizations support specific contexts of use without permanently changing the user interface, and aim to seamlessly integrate with existing tools and to decrease distraction. Examples of transient visualizations for document search, map zoom-outs, fisheye views of source code, and thesaurus access are presented. We provide an initial validation of transient visualizations by comparing a transient overview for maps to a permanent visualization. Among 20 users of these visualizations, all but four preferred the transient visualization. However, differences in time and error rates were insignificant. On this background, we discuss the potential of transient visualizations and future directions.

© All rights reserved Jakobsen and Hornbæk and/or ACM Press

p. 77-82

Pilgrim, C. J. (2007): The influence of spatial ability on the use of web sitemaps. In: Proceedings of OZCHI07, the CHISIG Annual Conference on Human-Computer Interaction November 28-30, 2007, Adelaide, Australia. pp. 77-82. Available online

One challenge confronting web site designers is to provide effective navigational support. Supplemental navigation tools such as sitemaps are frequently included on web sites to support navigation. However, there is a lack of empirically based guidelines for designers of such tools. This paper reports an empirical investigation into the factors influencing users' decisions to select sitemaps or search tools. The study establishes a relationship between a user's level of spatial ability and their tendency to select web sitemaps, providing a basis for the design of further investigations into the usability and design of such tools.

© All rights reserved Pilgrim and/or ACM Press

p. 83-86

Swift, Ben, Sheridan, James, Zhen, Yang and Gardner, Henry J. (2007): Mind-modulated music in the mind attention interface. In: Proceedings of OZCHI07, the CHISIG Annual Conference on Human-Computer Interaction November 28-30, 2007, Adelaide, Australia. pp. 83-86. Available online

A recent study of electroencephalogram (EEG) activity associated with musical cognition has suggested a correlate for the amount of active musical processing taking place in the brains of musicians. Using a version of this measure, we have built a new brain computer interface which harnesses the "natural" brain activity of musicians to mold and modulate music as it is being composed and played. This computer music instrument is part of a system, the Mind Attention Interface, which provides an interface to a virtual reality theatre using measures of a participant's EEG, eye-gaze and head position. The theatre itself, and its spatialised sound system, closes a feedback loop through the mind of the participant.

© All rights reserved Swift et al. and/or ACM Press

p. 87-93

Wickey, Aiden and Alem, Leila (2007): Analysis of hand gestures in remote collaboration: some design recommendations. In: Proceedings of OZCHI07, the CHISIG Annual Conference on Human-Computer Interaction November 28-30, 2007, Adelaide, Australia. pp. 87-93. Available online

This paper reports on a qualitative analysis of gestures performed during remote collaboration in which two people work together with physical objects. CSCW researchers have established the importance of supporting gesture when sharing and interacting at a distance. Recent work reports on a corpus of gesture phrases [3] and a set of gesture functions and roles [2] observed in remote collaboration on physical tasks. While advances are being made in identifying these gestures, to date the design implications for remote gesture systems are still unclear. In this paper we describe a set of gesture phrases that we have observed, each composed of a number of individual gestures. We describe a specific gesture performed by the helper that is indicative of participants' natural, intuitive and preferred interaction practices. We also describe gestures performed by the helper that go beyond pointing or showing a movement or a shape: gestures suggesting that the helper is acting as if manipulating the remote physical objects. We discuss the implications of supporting such elaborate gesture phrases and present a set of design recommendations for remote gesture systems.

© All rights reserved Wickey and Alem and/or ACM Press

p. 9-16

Phalip, Julien, Morphett, Matthew and Edmonds, Ernest (2007): Alleviating communication challenges in film scoring: an interaction design approach. In: Proceedings of OZCHI07, the CHISIG Annual Conference on Human-Computer Interaction November 28-30, 2007, Adelaide, Australia. pp. 9-16. Available online

Film scoring is a creative and collaborative activity that involves several practitioners, in particular music specialists (film composers) and non-specialists (filmmakers). These practitioners face recurrent communication challenges, primarily because they do not share the same musical language. In this paper we present the results of research into the communication process between filmmakers and composers, with particular focus on the challenges experienced by the two parties. We then propose and discuss an interaction design approach for progressing towards appropriate computer-based solutions.

© All rights reserved Phalip et al. and/or ACM Press

p. 95-102

Sun, Jian and Regenbrecht, Holger (2007): Implementing three-party desktop videoconferencing. In: Proceedings of OZCHI07, the CHISIG Annual Conference on Human-Computer Interaction November 28-30, 2007, Adelaide, Australia. pp. 95-102. Available online

We describe the implementation and testing of a novel three-party desktop videoconferencing system. To allow for gaze and workspace awareness between the participating partners, a special quasi-spatial arrangement of cameras and graphical user interface elements was chosen. We informally tested the setup in a usability evaluation presented at the end of this paper. Our prototypical solution is a customizable, affordable, off-the-shelf way of supporting mutual awareness in three-party videoconferencing.

© All rights reserved Sun and Regenbrecht and/or ACM Press




 

Page Information

Page maintainer: The Editorial Team
URL: http://www.interaction-design.org/references/conferences/proceedings_of_ozchi07%2C_the_chisig_annual_conference_on_human-computer_interaction.html
