Publication statistics

Publication period: 2006-2012
Publication count: 4
Number of co-authors: 14


Number of publications with 3 favourite co-authors:

Adam Sadilek:
Walter Lasecki:
Andrew Abumoussa:



Productive colleagues

Christopher Miller's 3 most productive colleagues in number of publications:

Stephen A. Brewster: 108
Emilie M. Roth: 21
David McGookin: 6




Christopher Miller


Publications by Christopher Miller (bibliography)


Roth, Emilie M., Cummings, Missy, Miller, Christopher, Smith, Philip, Schreckenghost, Debra and Scott, Ron (2012): Discussion Panel: Collaborative Automation Across Varying Time Scales of Interaction: What's the Same? What's Different? In: Proceedings of the Human Factors and Ergonomics Society 2012 Annual Meeting 2012. pp. 213-217.

Automated systems, ranging from robots to intelligent planning aids, are increasingly part of the cognitive landscape. A pressing question facing the human factors community is how to design the automation to foster effective collaboration between the human and automated agents. This panel brings together leading researchers actively engaged in design of decision-support systems that involve some element of 'cognitive work' automation. They represent a variety of domains and a variety of approaches to design of collaborative automation. Panel members will discuss unique challenges of their respective domains, including differences in temporal rhythm, and how these have shaped their perspective on 'collaborative aiding'.

© All rights reserved Roth et al. and/or Human Factors and Ergonomics Society


Lasecki, Walter, Miller, Christopher, Sadilek, Adam, Abumoussa, Andrew, Borrello, Donato, Kushalnagar, Raja and Bigham, Jeffrey (2012): Real-time captioning by groups of non-experts. In: Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 23-34.

Real-time captioning provides deaf and hard of hearing people immediate access to spoken language and enables participation in dialogue with others. Low latency is critical because it allows speech to be paired with relevant visual cues. Currently, the only reliable source of real-time captions are expensive stenographers who must be recruited in advance and who are trained to use specialized keyboards. Automatic speech recognition (ASR) is less expensive and available on-demand, but its low accuracy, high noise sensitivity, and need for training beforehand render it unusable in real-world situations. In this paper, we introduce a new approach in which groups of non-expert captionists (people who can hear and type) collectively caption speech in real-time on-demand. We present Legion:Scribe, an end-to-end system that allows deaf people to request captions at any time. We introduce an algorithm for merging partial captions into a single output stream in real-time, and a captioning interface designed to encourage coverage of the entire audio stream. Evaluation with 20 local participants and 18 crowd workers shows that non-experts can provide an effective solution for captioning, accurately covering an average of 93.2% of an audio stream with only 10 workers and an average per-word latency of 2.9 seconds. More generally, our model in which multiple workers contribute partial inputs that are automatically merged in real-time may be extended to allow dynamic groups to surpass constituent individuals (even experts) on a variety of human performance tasks.

© All rights reserved Lasecki et al. and/or ACM Press
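The paper's core idea, automatically merging partial captions from several non-expert workers into one output stream, can be illustrated with a deliberately simplified sketch. The function name, the 500 ms bucket width, and the majority-vote rule below are assumptions chosen for illustration only; Legion:Scribe's actual real-time alignment algorithm is more sophisticated than this:

```python
from collections import Counter, defaultdict

def merge_partial_captions(partials, bucket_ms=500):
    """Merge several workers' partial (timestamp_ms, word) streams into a
    single caption: bucket words into fixed time windows, then take the
    majority word in each window. A toy stand-in for real-time alignment."""
    buckets = defaultdict(list)
    for stream in partials:
        for ts, word in stream:
            buckets[ts // bucket_ms].append(word.lower())
    merged = []
    for slot in sorted(buckets):
        # Majority vote within the time window smooths individual typos.
        word, _ = Counter(buckets[slot]).most_common(1)[0]
        merged.append(word)
    return " ".join(merged)

# Three workers each caught overlapping fragments of the same audio.
partials = [
    [(0, "the"), (600, "quick"), (1200, "fox")],
    [(50, "the"), (620, "quick"), (1180, "fox")],
    [(40, "a"), (1210, "box")],
]
print(merge_partial_captions(partials))  # -> the quick fox
```

Even this crude merge shows the paper's general claim: combined partial inputs can recover a stream that no single worker captured completely.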


Wu, Peggy and Miller, Christopher (2010): The Current Bottleneck for Computer-based Culture Training: Who Cares about Etiquette? In: Proceedings of the Human Factors and Ergonomics Society 54th Annual Meeting 2010. pp. 2289-2293.

Computerized training systems may be the only viable solution to accommodate the quickly evolving learning needs, the short lead time, and the large number of students that must prepare for cross-cultural interactions. However, current systems place emphasis on language training, which is only part of the equation in effective communication. Knowledge about culturally dependent social norms directly impacts the interaction outcome. A dynamic, customizable, computationally tractable model for the code of conduct, or etiquette, can alert us to the specific communication protocols of groups of interest, and can prevent over-generalizations which can be detrimental to the end goal. We may not consciously care, or even be aware of etiquette, but it impacts our trust and affect for individuals, and ultimately shapes our subsequent actions. We have adapted a socio-linguistic model of human-human interaction for use in language and culture training. This computational model increases the social intelligence of virtual actors by allowing them to act and react to human students in ways that are more aligned with the humans they simulate. Capabilities for virtual agents to reason about and select actions not only enhance interaction fidelity, but can also result in time and cost savings in training scenario generation and maintenance.

© All rights reserved Wu and Miller and/or HFES
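A computationally tractable etiquette model of the kind the abstract describes can be loosely sketched as a face-threat calculation in the spirit of socio-linguistic politeness theory. Everything below, including the function names, the additive weighting, and the strategy thresholds, is a hypothetical illustration and is not taken from the paper:

```python
def face_threat(power, distance, imposition):
    """Toy face-threat weight: sum of the hearer's relative power over the
    speaker, their social distance, and the size of the imposition, each
    scored in [0, 1]. Additive weighting is an assumption for illustration."""
    return power + distance + imposition

def choose_strategy(weight):
    """Map a face-threat weight to a politeness strategy a virtual actor
    might select when phrasing a request. Thresholds are arbitrary."""
    if weight < 1.0:
        return "bald on-record"        # e.g. "Close the door."
    if weight < 2.0:
        return "positive politeness"   # e.g. "Could you close the door?"
    if weight < 3.0:
        return "negative politeness"   # e.g. "Sorry to bother you, but..."
    return "off-record"                # e.g. "It's a bit cold in here."

# A low-stakes request to a close peer vs. a large one to a superior.
print(choose_strategy(face_threat(0.2, 0.3, 0.4)))
print(choose_strategy(face_threat(1.0, 1.0, 1.0)))
```

The point of such a model in a training system is that the same underlying request can be rendered with culture-appropriate phrasing by adjusting a few interpretable parameters, rather than by authoring every dialogue variant by hand.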


Brewster, Stephen A., McGookin, David and Miller, Christopher (2006): Olfoto: designing a smell-based interaction. In: Proceedings of ACM CHI 2006 Conference on Human Factors in Computing Systems 2006. pp. 653-662.

We present a study into the use of smell for searching digital photo collections. Many people now have large photo libraries on their computers and effective search tools are needed. Smell has a strong link to memory and emotion, so it may be a good way to cue recall when searching. Our study compared text- and smell-based tagging. For the first stage we generated a set of smell and tag names from user descriptions of photos; participants then used these to tag photos, returning two weeks later to answer questions on their photos. Results showed that participants could tag effectively with text labels, as this is a common and familiar task. Performance with smells was lower but participants performed significantly above chance, with some participants using smells well. This suggests that smell has potential. Results also showed that some smells were consistently identified and useful, but some were not and highlighted issues with smell delivery devices. We also discuss some practical issues of using smell for interaction.

© All rights reserved Brewster et al. and/or ACM Press

