Publication statistics

Publication period: 2011-2012
Publication count: 4
Number of co-authors: 10


Number of publications with 3 favourite co-authors:

Scott Bateman:
Andre Doucette:
Chris Harrison:



Productive colleagues

Robert Xiao's 3 most productive colleagues in number of publications:

Carl Gutwin: 116
Stephen A. Brewster: 108
Andy Cockburn: 68


Robert Xiao


Publications by Robert Xiao (bibliography)


Harrison, Chris, Xiao, Robert and Hudson, Scott (2012): Acoustic barcodes: passive, durable and inexpensive notched identification tags. In: Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 563-568.

We present acoustic barcodes, structured patterns of physical notches that, when swiped with, e.g., a fingernail, produce a complex sound that can be resolved to a binary ID. A single, inexpensive contact microphone attached to a surface or object is used to capture the waveform. We present our method for decoding sounds into IDs, which handles variations in swipe velocity and other factors. Acoustic barcodes could be used for information retrieval or to trigger interactive functions. They are passive, durable and inexpensive to produce. Further, they can be applied to a wide range of materials and objects, including plastic, wood, glass and stone. We conclude with several example applications that highlight the utility of our approach, and a user study that explores its feasibility.

© All rights reserved Harrison et al. and/or ACM Press


Gutwin, Carl, Schneider, Oliver, Xiao, Robert and Brewster, Stephen A. (2011): Chalk sounds: the effects of dynamic synthesized audio on workspace awareness in distributed groupware. In: Proceedings of ACM CSCW11 Conference on Computer-Supported Cooperative Work 2011. pp. 85-94.

Awareness of other people's activity is an important part of shared-workspace collaboration, and is typically supported using visual awareness displays such as radar views. These visual presentations are limited in that the user must be able to see and attend to the view in order to gather awareness information. Using audio to convey awareness information does not suffer from these limitations, and previous research has shown that audio can provide valuable awareness in distributed settings. In this paper we evaluate the effectiveness of synthesized dynamic audio information, both on its own and as an adjunct to a visual radar view. We developed a granular-synthesis engine that produces realistic chalk sounds for off-screen activity in a groupware workspace, and tested the audio awareness in two ways. First, we measured people's ability to identify off-screen activities using only sound, and found that people are almost as accurate with synthesized sounds as with real sounds. Second, we tested dynamic audio awareness in a realistic groupware scenario, and found that adding audio to a radar view significantly improved awareness of off-screen activities in situations where it was difficult to see or attend to the visual display. Our work provides new empirical evidence about the value of dynamic synthesized audio in distributed groupware.

© All rights reserved Gutwin et al. and/or their publisher


Xiao, Robert, Nacenta, Miguel A., Mandryk, Regan L., Cockburn, Andy and Gutwin, Carl (2011): Ubiquitous cursor: a comparison of direct and indirect pointing feedback in multi-display environments. In: Proceedings of the 2011 Conference on Graphics Interface 2011. pp. 135-142.

Multi-display environments (MDEs) connect several displays into a single digital workspace. One of the main problems to be solved in an MDE's design is how to enable movement of objects from one display to another. When the real-world space between displays is modeled as part of the workspace (i.e., Mouse Ether), it becomes difficult for users to keep track of their cursors during a transition between displays. To address this problem, we developed the Ubiquitous Cursor system, which uses a projector and a hemispherical mirror to completely cover the interior of a room with usable low-resolution pixels. Ubiquitous Cursor allows us to provide direct feedback about the location of the cursor between displays. To assess the effectiveness of this direct-feedback approach, we carried out a study that compared Ubiquitous Cursor with two other standard approaches: Halos, which provide indirect feedback about the cursor's location; and Stitching, which warps the cursor between displays, similar to the way that current operating systems address multiple monitors. Our study tested simple cross-display pointing tasks in an MDE; the results showed that Ubiquitous Cursor was significantly faster than both other approaches. Our work shows the feasibility and the value of providing direct feedback for cross-display movement, and adds to our understanding of the principles underlying targeting performance in MDEs.

© All rights reserved Xiao et al. and/or their publisher


Bateman, Scott, Doucette, Andre, Xiao, Robert, Gutwin, Carl, Mandryk, Regan L. and Cockburn, Andy (2011): Effects of view, input device, and track width on video game driving. In: Proceedings of the 2011 Conference on Graphics Interface 2011. pp. 207-214.

Steering and driving tasks -- where the user controls a vehicle or other object along a path -- are common in many simulations and games. Racing video games have provided users with different views of the visual environment -- e.g., overhead, first-person, and third-person views. Although research has been done in understanding how people perform using a first-person view in virtual reality and driving simulators, little empirical work has been done to understand the factors that affect performance in video games. To establish a foundation for thinking about view in the design of driving games and simulations, we carried out three studies that explored the effects of different view types on driving performance. We also considered how view interacts with difficulty and input device. We found that although there were significant effects of view on performance, these were not in line with conventional wisdom about view. Our explorations provide designers with new empirical knowledge about view and performance, but also raise a number of new research questions about the principles underlying view differences.

© All rights reserved Bateman et al. and/or their publisher

Page Information

Page maintainer: The Editorial Team