Number of co-authors: 10
Number of publications with 3 most frequent co-authors: Carl Gutwin (3), Andy Cockburn (2), Regan L. Mandryk (2)
Robert Xiao's 3 most productive colleagues by number of publications: Carl Gutwin (116), Stephen A. Brewster (108), Andy Cockburn (68)
Publications by Robert Xiao (bibliography)
Harrison, Chris, Xiao, Robert and Hudson, Scott (2012): Acoustic barcodes: passive, durable and inexpensive notched identification tags. In: Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 563-568.
We present acoustic barcodes, structured patterns of physical notches that, when swiped with, e.g., a fingernail, produce a complex sound that can be resolved to a binary ID. A single, inexpensive contact microphone attached to a surface or object is used to capture the waveform. We present our method for decoding sounds into IDs, which handles variations in swipe velocity and other factors. Acoustic barcodes could be used for information retrieval or to trigger interactive functions. They are passive, durable and inexpensive to produce. Further, they can be applied to a wide range of materials and objects, including plastic, wood, glass and stone. We conclude with several example applications that highlight the utility of our approach, and a user study that explores its feasibility.
© All rights reserved Harrison et al. and/or ACM Press
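The velocity-invariant decoding idea described in the abstract can be illustrated with a toy sketch. This is not the paper's actual pipeline (which works on raw audio); here we assume notch impulse timestamps have already been extracted, and that the barcode encodes bits as narrow vs. wide gaps between notches. The function name and the 1.5× threshold are illustrative assumptions.

```python
def decode_intervals(impulse_times, threshold=1.5):
    """Toy decoder: map inter-impulse gaps to bits (narrow -> 0, wide -> 1).

    Normalizing each gap by the smallest gap makes the result independent
    of swipe velocity: swiping twice as fast halves every gap, but the
    ratios between gaps stay the same.
    """
    gaps = [b - a for a, b in zip(impulse_times, impulse_times[1:])]
    unit = min(gaps)  # narrowest gap serves as the time unit
    return [1 if g > threshold * unit else 0 for g in gaps]
```

For example, impulses at 0 ms, 10 ms, 30 ms, 40 ms and 60 ms decode to the same bit string as the same pattern swiped at double speed, since only gap ratios matter.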
Gutwin, Carl, Schneider, Oliver, Xiao, Robert and Brewster, Stephen A. (2011): Chalk sounds: the effects of dynamic synthesized audio on workspace awareness in distributed groupware. In: Proceedings of ACM CSCW11 Conference on Computer-Supported Cooperative Work 2011. pp. 85-94.
Awareness of other people's activity is an important part of shared-workspace collaboration, and is typically supported using visual awareness displays such as radar views. These visual presentations are limited in that the user must be able to see and attend to the view in order to gather awareness information. Using audio to convey awareness information does not suffer from these limitations, and previous research has shown that audio can provide valuable awareness in distributed settings. In this paper we evaluate the effectiveness of synthesized dynamic audio information, both on its own and as an adjunct to a visual radar view. We developed a granular-synthesis engine that produces realistic chalk sounds for off-screen activity in a groupware workspace, and tested the audio awareness in two ways. First, we measured people's ability to identify off-screen activities using only sound, and found that people are almost as accurate with synthesized sounds as with real sounds. Second, we tested dynamic audio awareness in a realistic groupware scenario, and found that adding audio to a radar view significantly improved awareness of off-screen activities in situations where it was difficult to see or attend to the visual display. Our work provides new empirical evidence about the value of dynamic synthesized audio in distributed groupware.
© All rights reserved Gutwin et al. and/or their publisher
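Granular synthesis, the technique the abstract names, builds a continuous texture from many short windowed bursts ("grains"). The following is a minimal self-contained sketch of that idea using noise grains, not the authors' chalk-sound engine; all parameter names and values are hypothetical.

```python
import math
import random

def chalk_grains(duration_s=0.5, sample_rate=8000, grain_len=80,
                 grains_per_s=200, seed=42):
    """Sum short Hann-windowed noise bursts (grains) into one audio buffer.

    In a groupware setting, grain density and amplitude could be driven by
    a remote user's stroke speed and pressure to convey off-screen activity.
    Returns samples normalized to the range [-1, 1].
    """
    random.seed(seed)
    n = int(duration_s * sample_rate)
    out = [0.0] * n
    for _ in range(int(duration_s * grains_per_s)):
        start = random.randrange(0, n - grain_len)
        for i in range(grain_len):
            # Hann window keeps each grain click-free at its edges
            env = 0.5 * (1 - math.cos(2 * math.pi * i / (grain_len - 1)))
            out[start + i] += env * random.uniform(-1, 1)
    peak = max(abs(s) for s in out) or 1.0
    return [s / peak for s in out]
```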
Xiao, Robert, Nacenta, Miguel A., Mandryk, Regan L., Cockburn, Andy and Gutwin, Carl (2011): Ubiquitous cursor: a comparison of direct and indirect pointing feedback in multi-display environments. In: Proceedings of the 2011 Conference on Graphics Interface 2011. pp. 135-142.
Multi-display environments (MDEs) connect several displays into a single digital workspace. One of the main problems to be solved in an MDE's design is how to enable movement of objects from one display to another. When the real-world space between displays is modeled as part of the workspace (i.e., Mouse Ether), it becomes difficult for users to keep track of their cursors during a transition between displays. To address this problem, we developed the Ubiquitous Cursor system, which uses a projector and a hemispherical mirror to completely cover the interior of a room with usable low-resolution pixels. Ubiquitous Cursor allows us to provide direct feedback about the location of the cursor between displays. To assess the effectiveness of this direct-feedback approach, we carried out a study that compared Ubiquitous Cursor with two other standard approaches: Halos, which provide indirect feedback about the cursor's location; and Stitching, which warps the cursor between displays, similar to the way that current operating systems address multiple monitors. Our study tested simple cross-display pointing tasks in an MDE; the results showed that Ubiquitous Cursor was significantly faster than both other approaches. Our work shows the feasibility and the value of providing direct feedback for cross-display movement, and adds to our understanding of the principles underlying targeting performance in MDEs.
© All rights reserved Xiao et al. and/or their publisher
Bateman, Scott, Doucette, Andre, Xiao, Robert, Gutwin, Carl, Mandryk, Regan L. and Cockburn, Andy (2011): Effects of view, input device, and track width on video game driving. In: Proceedings of the 2011 Conference on Graphics Interface 2011. pp. 207-214.
Steering and driving tasks -- where the user controls a vehicle or other object along a path -- are common in many simulations and games. Racing video games have provided users with different views of the visual environment -- e.g., overhead, first-person, and third-person views. Although research has been done in understanding how people perform using a first-person view in virtual reality and driving simulators, little empirical work has been done to understand the factors that affect performance in video games. To establish a foundation for thinking about view in the design of driving games and simulations, we carried out three studies that explored the effects of different view types on driving performance. We also considered how view interacts with difficulty and input device. We found that although there were significant effects of view on performance, these were not in line with conventional wisdom about view. Our explorations provide designers with new empirical knowledge about view and performance, but also raise a number of new research questions about the principles underlying view differences.
© All rights reserved Bateman et al. and/or their publisher
Changes to this page (author):
23 Nov 2012: Modified
04 Apr 2012: Modified
18 Apr 2011: Added
Page maintainer: The Editorial Team