Number of co-authors: 70
Number of publications with 3 favourite co-authors: Hyunyoung Song, Kori Inkpen, Chen Zhao
Xiang Cao's 3 most productive colleagues by number of publications: Scott E. Hudson (113), Ravin Balakrishnan (108), James A. Landay (91)
Publications by Xiang Cao (bibliography)
Li, Guang, Cao, Xiang, Paolantonio, Sergio and Tian, Feng (2012): SketchComm: a tool to support rich and flexible asynchronous communication of early design ideas. In: Proceedings of ACM CSCW12 Conference on Computer-Supported Cooperative Work 2012. pp. 359-368. http://dx.doi.org/10.1145/2145204.2145261
When designers explain their early design ideas to others, they usually use face-to-face communication along with sketches. In practice, however, face-to-face meetings are sometimes not possible, and designers have to rely on asynchronous communication. Important contextual information that is available in face-to-face meetings is often lost in such asynchronous communication, which can lead to confusion and misunderstanding. To address this challenge, we present SketchComm: an enhanced tool to support rich and flexible asynchronous communication of early design ideas. The key feature of the system is to allow designers to capture and communicate important contextual information to the audience in addition to sketches. A user study with designers and audience demonstrated the effectiveness of asynchronous early design communication using SketchComm.
© All rights reserved Li et al. and/or ACM Press
Kato, Jun, McDirmid, Sean and Cao, Xiang (2012): DejaVu: integrated support for developing interactive camera-based programs. In: Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 189-196. http://dx.doi.org/10.1145/2380116.2380142
The increasing popularity of interactive camera-based programs highlights the inadequacies of conventional IDEs in developing these programs given their distinctive attributes and workflows. We present DejaVu, an IDE enhancement that eases the development of these programs by enabling programmers to visually and continuously monitor program data in consistency with the frame-based pipeline of computer-vision programs; and to easily record, review, and reprocess temporal data to iteratively improve the processing of non-reproducible camera input. DejaVu was positively received by three experienced programmers of interactive camera-based programs in our preliminary user trial.
© All rights reserved Kato et al. and/or ACM Press
Schmidt, Dominik, Molyneaux, David and Cao, Xiang (2012): PICOntrol: using a handheld projector for direct control of physical devices through visible light. In: Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 379-388. http://dx.doi.org/10.1145/2380116.2380166
Today's environments are populated with a growing number of electric devices which come in diverse form factors and provide a plethora of functions. However, rich interaction with these devices can become challenging if they need to be controlled from a distance, or are too small to accommodate user interfaces on their own. In this work, we explore PICOntrol, a new approach using an off-the-shelf handheld pico projector for direct control of physical devices through visible light. The projected image serves a dual purpose by simultaneously presenting a visible interface to the user, and transmitting embedded control information to inexpensive sensor units integrated with the devices. To use PICOntrol, the user points the handheld projector at a target device, overlays a projected user interface on its sensor unit, and performs various GUI-style or gestural interactions. PICOntrol enables direct, visible, and rich interactions with various physical devices without requiring central infrastructure. We present our prototype implementation as well as explorations of its interaction space through various application examples.
© All rights reserved Schmidt et al. and/or ACM Press
Dai, Xiaowei, Gu, Jiawei, Cao, Xiang, Colgate, J. Edward and Tan, Hong (2012): SlickFeel: sliding and clicking haptic feedback on a touchscreen. In: Adjunct Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 21-22. http://dx.doi.org/10.1145/2380296.2380308
We present SlickFeel, a single haptic display setup that can deliver two distinct types of feedback to a finger on a touchscreen during typical operations of sliding and clicking. Sliding feedback enables the sliding finger to feel interactive objects on a touchscreen through variations in friction. Clicking feedback provides a key-click sensation for confirming a key or button click. Two scenarios have been developed to demonstrate the utility of the two haptic effects. In the first, simple button-click scenario, a user feels the positions of four buttons on a touchscreen by sliding a finger over them and feels a simulated key-click signal by pressing on any of the buttons. The second scenario demonstrates the advantage of haptic feedback for thumb typing: a user enters text on a touchscreen with two thumbs without having to monitor the thumbs' locations on the screen. By integrating SlickFeel with a Kindle Fire tablet, we show that it can be used with existing mobile touchscreen devices.
© All rights reserved Dai et al. and/or ACM Press
Xu, Yan, Cao, Xiang, Sellen, Abigail, Herbrich, Ralf and Graepel, Thore (2011): Sociable killers: understanding social relationships in an online first-person shooter game. In: Proceedings of ACM CSCW11 Conference on Computer-Supported Cooperative Work 2011. pp. 197-206. http://dx.doi.org/10.1145/1958824.1958854
Online video games can be seen as a medium for the formation and maintenance of social relationships. In this paper, we explore what social relationships mean in the context of online First-Person Shooter (FPS) games, how these relationships influence game experience, and how players manage them. We combine qualitative interview and quantitative game log data, and find that despite the gap between the non-persistent game world and potentially persistent social relationships, a diversity of social relationships emerge and play a central role in the enjoyment of online FPS games. We report the forms, development, and impact of such relationships, and discuss our findings in light of design implications and comparison with other game genres.
© All rights reserved Xu et al. and/or their publisher
Tang, John C., Zhao, Chen, Cao, Xiang and Inkpen, Kori (2011): Your time zone or mine?: a study of globally time zone-shifted collaboration. In: Proceedings of ACM CSCW11 Conference on Computer-Supported Cooperative Work 2011. pp. 235-244. http://dx.doi.org/10.1145/1958824.1958860
We conducted interviews with sixteen members of teams that worked across global time zone differences. Despite time zone differences of about eight hours, collaborators still found time to synchronously meet. The interviews identified the diverse strategies teams used to find time windows to interact, which often included times outside of the normal workday and connecting from home to participate. Recent trends in increased work connectivity from home and blurred boundaries between work and home enabled more scheduling flexibility. While email use was understandably prevalent, there was also general interest in video, although obstacles remain for widespread usage. We propose several design implications for supporting this growing population of workers that need to span global time zone differences.
© All rights reserved Tang et al. and/or their publisher
Song, Hyunyoung, Benko, Hrvoje, Guimbretiere, Francois, Izadi, Shahram, Cao, Xiang and Hinckley, Ken (2011): Grips and gestures on a multi-touch pen. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 1323-1332. http://dx.doi.org/10.1145/1978942.1979138
This paper explores the interaction possibilities enabled when the barrel of a digital pen is augmented with a multi-touch sensor. We present a novel multi-touch pen (MTPen) prototype and discuss its alternate uses beyond those of a standard stylus, such as allowing new touch gestures to be performed using the index finger or thumb and detecting how users grip the device as a mechanism for mode switching. We also discuss the hardware and software implementation challenges in realizing our prototype, and showcase how one can combine different grips (tripod, relaxed tripod, sketch, wrap) and gestures (swipe and double tap) to enable new interaction techniques with the MTPen in a prototype drawing application. One specific aim is the elimination of some of the comfort problems associated with existing auxiliary controls on digital pens. Mechanical controls such as barrel buttons and barrel scroll wheels work best in only a few specific hand grips and pen rotations. Comparatively, our gestures can be successfully and comfortably performed regardless of the rotation of the pen or how the user grips it, offering greater flexibility in use. We describe a formal evaluation comparing MTPen gestures against the use of a barrel button for mode switching. This study shows that both swipe and double tap gestures are comparable in performance to commonly employed barrel buttons without their disadvantages.
© All rights reserved Song et al. and/or their publisher
Lu, Fei, Tian, Feng, Jiang, Yingying, Cao, Xiang, Luo, Wencan, Li, Guang, Zhang, Xiaolong, Dai, Guozhong and Wang, Hongan (2011): ShadowStory: creative and collaborative digital storytelling inspired by cultural heritage. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 1919-1928. http://dx.doi.org/10.1145/1978942.1979221
With the fast economic growth and urbanization of many developing countries come concerns that their children now have fewer opportunities to express creativity and develop collaboration skills, or to experience their local cultural heritage. We propose to address these concerns by creating technologies inspired by traditional arts, and allowing children to create and collaborate through playing with them. ShadowStory is our first attempt in this direction, a digital storytelling system inspired by traditional Chinese shadow puppetry. We present the design and implementation of ShadowStory and a 7-day field trial in a primary school. Findings illustrated that ShadowStory promoted creativity, collaboration, and intimacy with traditional culture among children, as well as interleaved children's digital and physical playing experience.
© All rights reserved Lu et al. and/or their publisher
Sun, Minghui, Cao, Xiang, Song, Hyunyoung, Izadi, Shahram, Benko, Hrvoje, Guimbretiere, Francois, Ren, Xiangshi and Hinckley, Ken (2011): Enhancing naturalness of pen-and-tablet drawing through context sensing. In: Proceedings of the 2011 ACM International Conference on Interactive Tabletops and Surfaces 2011. pp. 83-86. http://dx.doi.org/10.1145/2076354.2076371
Among artists and designers, the pen-and-tablet combination is widely used for creating digital drawings, as digital pens outperform other input devices in replicating the experience of physical drawing tools. In this paper, we explore how contextual information such as the relationship between the hand, the pen, and the tablet can be leveraged in the digital drawing experience to further enhance its naturalness. By embedding sensors in the pen and the tablet to sense and interpret these contexts, we demonstrate how several physical drawing practices can be reflected and assisted in digital interaction scenarios.
© All rights reserved Sun et al. and/or ACM Press
Zhao, Yuhang, Xue, Chao, Cao, Xiang and Shi, Yuanchun (2011): PicoPet: "Real World" digital pet on a handheld projector. In: Proceedings of the 2011 ACM Symposium on User Interface Software and Technology 2011. pp. 1-2. http://dx.doi.org/10.1145/2046396.2046398
We created PicoPet, a digital pet game based on mobile handheld projectors. The player can project the pet into physical environments, and the pet behaves and evolves differently according to the physical surroundings. PicoPet creates a new form of gaming experience that is directly blended into the physical world, and could thus become incorporated into the player's daily life as well as reflect their lifestyle. Multiple pets projected by multiple players can also interact with each other, potentially triggering social interactions between players. In this paper, we present the design and implementation of PicoPet, as well as directions for future explorations.
© All rights reserved Zhao et al. and/or ACM Press
Li, Yang, Cao, Xiang, Everitt, Katherine, Dixon, Morgan and Landay, James A. (2010): FrameWire: a tool for automatically extracting interaction logic from paper prototyping tests. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 503-512. http://doi.acm.org/10.1145/1753326.1753401
Paper prototyping offers unique affordances for interface design. However, due to its spontaneous nature and the limitations of paper, it is difficult to distill and communicate a paper prototype design and its user test findings to a wide audience. To address these issues, we created FrameWire, a computer vision-based system that automatically extracts interaction flows from the video recording of paper prototype user tests. Based on the extracted logic, FrameWire offers two distinct benefits for designers: a structural view of the video recording that allows a designer or a stakeholder to easily distill and understand the design concept and user interaction behaviors, and automatic generation of interactive HTML-based prototypes that can be easily tested with a larger group of users as well as "walked through" by other stakeholders. The extraction is achieved by automatically aggregating video frame sequences into an interaction flow graph based on frame similarities and a designer-guided clustering process. The results of evaluating FrameWire with realistic paper prototyping tests show that our extraction approach is feasible and FrameWire is a promising tool for enhancing existing prototyping practice.
© All rights reserved Li et al. and/or their publisher
Cao, Xiang, Villar, Nicolas and Izadi, Shahram (2010): Comparing user performance with single-finger, whole-hand, and hybrid pointing devices. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 1643-1646. http://doi.acm.org/10.1145/1753326.1753571
Researchers have explored pointing devices operated by a single finger, but their advantage was not clear compared to conventional mice controlled by the whole hand. To incorporate the benefits of both, we prototyped hybrid pointing devices that combined both finger and hand movement to control the cursor, and experimentally compared their performance with single-finger and whole-hand devices. Results showed that such hybrid devices have the potential to improve pointing performance in terms of time, error, and bandwidth, especially for precise pointing.
© All rights reserved Cao et al. and/or their publisher
Yang, Xing-Dong, Mak, Edward, McCallum, David, Irani, Pourang, Cao, Xiang and Izadi, Shahram (2010): LensMouse: augmenting the mouse with an interactive touch display. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 2431-2440. http://doi.acm.org/10.1145/1753326.1753695
We introduce LensMouse, a novel device that embeds a touch-screen display -- or tangible 'lens' -- onto a mouse. Users interact with the display of the mouse using direct touch, whilst also performing regular cursor-based mouse interactions. We demonstrate some of the unique capabilities of such a device, in particular for interacting with auxiliary windows, such as toolbars, palettes, pop-ups and dialog-boxes. By migrating these windows onto LensMouse, challenges such as screen real-estate use and window management can be alleviated. In a controlled experiment, we evaluate the effectiveness of LensMouse in reducing cursor movements for interacting with auxiliary windows. We also consider the concerns involving the view separation that results from introducing such a display-based device. Our results reveal that overall users are more effective with LensMouse than with auxiliary application windows that are managed either in single or dual-monitor setups. We conclude by presenting other application scenarios that LensMouse could support.
© All rights reserved Yang et al. and/or their publisher
Kirk, David S., Sellen, Abigail and Cao, Xiang (2010): Home video communication: mediating 'closeness'. In: Proceedings of ACM CSCW10 Conference on Computer-Supported Cooperative Work 2010. pp. 135-144. http://doi.acm.org/10.1145/1718918.1718945
Video-mediated communication (VMC) technologies are rapidly being adopted by home users. Little research has previously been conducted into why home users would choose to use VMC or their practices surrounding its use. We present the results of an interview and diary-based study of 17 people about their uses of, and attitudes towards, VMC. We highlight the artful ways in which users appropriate VMC to reconcile a desire for closeness with those with whom they communicate, and we explore the rich ways in which VMC supports different expressions of this desire. We conclude with discussions of how next-generation VMC technologies might be designed to take advantage of this understanding of human values in communicative practice.
© All rights reserved Kirk et al. and/or their publisher
Cao, Xiang, Sellen, Abigail, Brush, A. J. Bernheim, Kirk, David, Edge, Darren and Ding, Xianghua (2010): Understanding family communication across time zones. In: Proceedings of ACM CSCW10 Conference on Computer-Supported Cooperative Work 2010. pp. 155-158. http://doi.acm.org/10.1145/1718918.1718947
Nowadays it has become increasingly common for family members to be distributed in different time zones. These time differences pose specific challenges for communication within the family and result in different communication practices to cope with them. To gain an understanding of current challenges and practices, we interviewed people who regularly communicate with immediate family members living in other time zones. We report primary findings from the interviews, and identify design opportunities for improving the experience of cross time zone family communication.
© All rights reserved Cao et al. and/or their publisher
Cao, Xiang, Lindley, Sian E., Helmes, John and Sellen, Abigail (2010): Telling the whole story: anticipation, inspiration and reputation in a field deployment of TellTable. In: Proceedings of ACM CSCW10 Conference on Computer-Supported Cooperative Work 2010. pp. 251-260. http://doi.acm.org/10.1145/1718918.1718967
We present a field study of TellTable, a new storytelling system designed to support creativity and collaboration amongst children. The application was deployed on a multi-touch interactive table in the library of a primary school, where children could use it to create characters and scenery based on elements of the physical world (captured through photography) as well as through drawing. These could then be used to record a story which could be played back. TellTable allowed children to collaborate in devising stories that mixed the physical and the digital in creative ways and that could include themselves as characters. Additionally, the field deployment illustrated how children took inspiration from one another's stories, how they planned elements of their own tales before using the technology, and how the fact that stories could be accessed in the library led some to become well-known and popular within the school community. The real story here, we argue, needs to take into account all that happens within the wider context of use of this system.
© All rights reserved Cao et al. and/or their publisher
Benko, Hrvoje, Izadi, Shahram, Wilson, Andrew D., Cao, Xiang, Rosenfeld, Dan and Hinckley, Ken (2010): Design and evaluation of interaction models for multi-touch mice. In: Proceedings of the 2010 Conference on Graphics Interface 2010. pp. 253-260. http://doi.acm.org/10.1145/4713060.1839259
Adding multi-touch sensing to the surface of a mouse has the potential to substantially increase the number of interactions available to the user. However, harnessing this increased bandwidth is challenging, since the user must perform multi-touch interactions while holding the device and using it as a regular mouse. In this paper we describe the design challenges and formalize the design space of multi-touch mice interactions. From our design space categories we synthesize four interaction models which enable the use of both multi-touch and mouse interactions on the same device. We describe the results of a controlled user experiment evaluating the performance of these models in a 2D spatial manipulation task typical of touch-based interfaces and compare them to interacting directly on a multi-touch screen and with a regular mouse. We observed that our multi-touch mouse interactions were overall slower than the chosen baselines; however, techniques providing a single focus of interaction and explicit touch activation yielded better performance and higher preferences from our participants. Our results expose the difficulties in designing multi-touch mice interactions and define the problem space for future research in making these devices effective.
© All rights reserved Benko et al. and/or their publisher
Neustaedter, Carman, Judge, Tejinder K., Harrison, Steve, Sellen, Abigail, Cao, Xiang, Kirk, David and Kaye, Joseph Jofish (2010): Connecting families: new technologies, family communication, and the impact on domestic space. In: GROUP10 International Conference on Supporting Group Work 2010. pp. 363-366. http://dx.doi.org/10.1145/1880071.1880152
Helmes, John, Cao, Xiang, Lindley, Sian E. and Sellen, Abigail (2009): Developing the story: designing an interactive storytelling application. In: Proceedings of the 2009 ACM International Conference on Interactive Tabletops and Surfaces 2009. pp. 49-52. http://doi.acm.org/10.1145/1731903.1731913
This paper describes the design of a tabletop storytelling application for children, called TellTable. The goal of the system was to stimulate creativity and collaboration by allowing children to develop their own story characters and scenery through photography and drawing, and record stories through direct manipulation and narration. Here we present the initial interface design and its iteration following the results of a preliminary trial. We also describe key findings from TellTable's deployment in a primary school that relate to its design, before concluding with a discussion of design implications from the process.
© All rights reserved Helmes et al. and/or their publisher
Wang, Feng, Cao, Xiang, Ren, Xiangshi and Irani, Pourang (2009): Detecting and leveraging finger orientation for interaction with direct-touch surfaces. In: Proceedings of the ACM Symposium on User Interface Software and Technology 2009. pp. 23-32. http://doi.acm.org/10.1145/1622176.1622182
Current interactions on direct-touch interactive surfaces are often modeled based on properties of the input channel that are common in traditional graphical user interfaces (GUI) such as x-y coordinate information. Leveraging additional information available on the surfaces could potentially result in richer and novel interactions. In this paper we specifically explore the role of finger orientation. This property is typically ignored in touch-based interactions partly because of the ambiguity in determining it solely from the contact shape. We present a simple algorithm that unambiguously detects the directed finger orientation vector in real-time from contact information only, by considering the dynamics of the finger landing process. Results of an experimental evaluation show that our algorithm is stable and accurate. We then demonstrate how finger orientation can be leveraged to enable novel interactions and to infer higher-level information such as hand occlusion or user position. We present a set of orientation-aware interaction techniques and widgets for direct-touch surfaces.
© All rights reserved Wang et al. and/or their publisher
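The landing-dynamics idea in the abstract above can be sketched in a few lines: fit the principal axis of the contact shape (which is ambiguous by 180 degrees) and resolve the direction from how the contact centroid drifts while the finger lands. This is an illustrative reconstruction, not the paper's algorithm; the function name, the input format, and the convention that the returned angle points toward the fingertip are all assumptions.

```python
import math

def finger_orientation(contact_points, landing_trail):
    """Estimate a directed finger orientation from a touch contact.

    contact_points: (x, y) pixels of the current contact region.
    landing_trail: successive contact centroids recorded while the
        finger was landing, earliest first.
    Returns an angle in radians, counter-clockwise from +x, assumed
    here to point from the contact toward the fingertip.
    """
    n = len(contact_points)
    cx = sum(p[0] for p in contact_points) / n
    cy = sum(p[1] for p in contact_points) / n
    # Principal axis of the contact shape via its covariance matrix;
    # this angle is inherently ambiguous by 180 degrees.
    sxx = sum((p[0] - cx) ** 2 for p in contact_points) / n
    syy = sum((p[1] - cy) ** 2 for p in contact_points) / n
    sxy = sum((p[0] - cx) * (p[1] - cy) for p in contact_points) / n
    axis = 0.5 * math.atan2(2 * sxy, sxx - syy)
    # Disambiguate using landing dynamics: as the finger rolls down,
    # the contact grows from the fingertip toward the finger body, so
    # the centroid drifts away from the tip.
    dx = landing_trail[-1][0] - landing_trail[0][0]
    dy = landing_trail[-1][1] - landing_trail[0][1]
    drift = math.atan2(dy, dx)
    # Choose the axis direction opposite the centroid drift.
    candidate = axis
    if math.cos(candidate - drift) > 0:
        candidate += math.pi
    return candidate % (2 * math.pi)
```

For a contact elongated along the x axis whose centroid drifted in +x while landing, the sketch reports an orientation near 180 degrees, i.e. the tip pointing in -x.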
Villar, Nicolas, Izadi, Shahram, Rosenfeld, Dan, Benko, Hrvoje, Helmes, John, Westhues, Jonathan, Hodges, Steve, Ofek, Eyal, Butler, Alex, Cao, Xiang and Chen, Billy (2009): Mouse 2.0: multi-touch meets the mouse. In: Proceedings of the ACM Symposium on User Interface Software and Technology 2009. pp. 33-42. http://doi.acm.org/10.1145/1622176.1622184
In this paper we present novel input devices that combine the standard capabilities of a computer mouse with multi-touch sensing. Our goal is to enrich traditional pointer-based desktop interactions with touch and gestures. To chart the design space, we present five different multi-touch mouse implementations. Each explores a different touch sensing strategy, which leads to differing form-factors and hence interactive possibilities. In addition to the detailed description of hardware and software implementations of our prototypes, we discuss the relative strengths, limitations and affordances of these novel input devices as informed by the results of a preliminary user study.
© All rights reserved Villar et al. and/or their publisher
Cao, Xiang, Li, Jacky Jie and Balakrishnan, Ravin (2008): Peephole pointing: modeling acquisition of dynamically revealed targets. In: Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems April 5-10, 2008. pp. 1699-1708. http://doi.acm.org/10.1145/1357054.1357320
Peephole interaction occurs when a spatially aware display is moved and acts as a viewport to reveal different parts of the virtual space that cannot all fit within the display at once. We investigate pointing within this peephole metaphor, where the targets may not be initially visible on the display, but are dynamically revealed by moving the display. We develop and experimentally validate a quantitative model for peephole pointing. Our results indicate that the model accurately accounts for peephole pointing for a variety of display sizes, both with and without users' having prior knowledge of the target location.
© All rights reserved Cao et al. and/or ACM Press
Cao, Xiang, Massimi, Michael and Balakrishnan, Ravin (2008): Flashlight jigsaw: an exploratory study of an ad-hoc multi-player game on public displays. In: Proceedings of ACM CSCW08 Conference on Computer-Supported Cooperative Work 2008. pp. 77-86. http://doi.acm.org/10.1145/1460563.1460577
As large displays become prevalent in public spaces, they could be employed to create novel game experiences for the public. We present an exploratory study of an ad-hoc multi-player game played on such public displays. The game, Flashlight Jigsaw, was deployed in a shared lab space and a public atrium for two weeks in total. Through interviews supported by observations and system logs we explored the experiences and behaviors of players and spectators. We also investigated the interrelationship between public display games and the spaces they are deployed in. The research resulted in findings regarding game play, communication, social interaction, spectatorship, and space and location around such a game. We use our findings to develop design implications for future public display games.
© All rights reserved Cao et al. and/or ACM Press
Cao, Xiang, Wilson, Andrew D., Balakrishnan, Ravin, Hinckley, Ken and Hudson, Scott E. (2008): ShapeTouch: Leveraging contact shape on interactive surfaces. In: Third IEEE International Workshop on Tabletops and Interactive Surfaces Tabletop 2008 October 1-3, 2008, Amsterdam, The Netherlands. pp. 129-136. http://dx.doi.org/10.1109/TABLETOP.2008.4660195
Cao, Xiang and Zhai, Shumin (2007): Modeling human performance of pen stroke gestures. In: Proceedings of ACM CHI 2007 Conference on Human Factors in Computing Systems 2007. pp. 1495-1504. http://doi.acm.org/10.1145/1240624.1240850
This paper presents a quantitative human performance model of making single-stroke pen gestures within certain error constraints in terms of production time. Computed from the properties of Curves, Line segments, and Corners (CLC) in a gesture stroke, the model may serve as a foundation for the design and evaluation of existing and future gesture-based user interfaces at the basic motor control efficiency level, similar to the role that previous "laws of action" played for pointing, crossing, or steering-based user interfaces. We report and discuss our experimental results on establishing and validating the CLC model, together with other basic empirical findings in stroke gesture production.
© All rights reserved Cao and Zhai and/or ACM Press
Cao, Xiang, Forlines, Clifton and Balakrishnan, Ravin (2007): Multi-user interaction using handheld projectors. In: Proceedings of the ACM Symposium on User Interface Software and Technology October 7-10, 2007, Newport, Rhode Island, USA. pp. 43-52. http://doi.acm.org/10.1145/1294211.1294220
Recent research on handheld projector interaction has expanded the display and interaction space of handheld devices by projecting information onto the physical environment around the user, but has mainly focused on single-user scenarios. We extend this prior single-user research to co-located multi-user interaction using multiple handheld projectors. We present a set of interaction techniques for supporting co-located collaboration with multiple handheld projectors, and discuss application scenarios enabled by them.
© All rights reserved Cao et al. and/or ACM Press
Cao, Xiang and Balakrishnan, Ravin (2006): Interacting with dynamically defined information spaces using a handheld projector and a pen. In: Proceedings of the ACM Symposium on User Interface Software and Technology 2006. pp. 225-234. http://doi.acm.org/10.1145/1166253.1166289
The recent trend towards miniaturization of projection technology indicates that handheld devices will soon have the ability to project information onto any surface, thus enabling interfaces that are not possible with current handhelds. We explore the design space of dynamically defining and interacting with multiple virtual information spaces embedded in a physical environment using a handheld projector and a passive pen tracked in 3D. We develop techniques for defining and interacting with these spaces, and explore usage scenarios.
© All rights reserved Cao and Balakrishnan and/or ACM Press
Cao, Xiang and Iverson, Lee (2006): Intentional access management: making access control usable for end-users. In: Proceedings of the 2006 Symposium on Usable Privacy and Security 2006. pp. 20-31. http://cups.cs.cmu.edu/soups/2006/proceedings/p20_cao.pdf
The usability of access control mechanisms in modern distributed systems has been widely criticized but little studied. In this paper, we carefully examine one such widely deployed access control mechanism, the one embedded in the WebDAV standard, from the point-of-view of an end-user trying to decide how to grant or deny access to some resource to a third party. This analysis points to problems with the conceptual usability of the system. Significant effort is required on the part of the user to determine how to implement the desired access rules; the user, however, has low interest and expertise in this task, given that such access management actions are almost always secondary to the collaborative task at hand. The analysis does however indicate a possible solution: to recast the access control puzzle as a decision support problem in which user intentions (i.e. the descriptions of desired system outputs) are interpreted by an access mediator that either automatically or semi-automatically decides how to achieve the designated goals and provides enough feedback to the user. We call such systems intentional access management (IAM) systems and describe them in both specific and general terms. To demonstrate the feasibility and usability of the proposed IAM models, we develop an intentional access management prototype for WebDAV. The results of a user study conducted on the system show its superior usability compared to traditional access management tools like the access control list editor.
© All rights reserved Cao and Iverson and/or ACM Press
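The core move of intentional access management - letting the user state a desired outcome and having a mediator decide the concrete access-control changes - can be illustrated with a minimal sketch. All names here are hypothetical and stand in for the paper's WebDAV prototype, which they do not reproduce:

```python
# Sketch of an intentional access mediator: the user expresses an
# intention ("bob should be able to READ /reports/q3.doc") and the
# mediator computes the ACL change needed to realize it, instead of
# asking the user to edit access control lists directly.
from dataclasses import dataclass, field

@dataclass
class Resource:
    path: str
    acl: dict = field(default_factory=dict)  # principal -> set of rights

def grant_intention(resource: Resource, principal: str, right: str) -> str:
    """Interpret the intention and report the change made (or that no
    change was needed), providing feedback to the user."""
    current = resource.acl.get(principal, set())
    if right in current:
        return f"no change: {principal} already has {right}"
    resource.acl[principal] = current | {right}
    return f"added {right} for {principal} on {resource.path}"

doc = Resource("/reports/q3.doc")
print(grant_intention(doc, "bob", "READ"))   # mediator applies the change
print(grant_intention(doc, "bob", "READ"))   # idempotent: reports no change
```

The point of the pattern is that the user reasons about outcomes ("who can do what"), while translation into ACL entries, and the feedback about what actually changed, is the mediator's job.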
Cao, Xiang and Balakrishnan, Ravin (2005): Evaluation of an on-line adaptive gesture interface with command prediction. In: Graphics Interface 2005 May 9-11, 2005, Victoria, British Columbia, Canada. pp. 187-194. http://portal.acm.org/citation.cfm?id=1089508.1089540
We present an evaluation of a hybrid gesture interface framework that combines on-line adaptive gesture recognition with a command predictor. Machine learning techniques enable on-line adaptation to differences in users' input patterns when making gestures, and exploit regularities in command sequences to improve recognition performance. A prototype using 2D single-stroke gestures was implemented with a minimally intrusive user interface for on-line re-training. Results of a controlled user experiment show that the hybrid adaptive system significantly improved overall gesture recognition performance, and reduced users' need to practice making the gestures before achieving good results.
© All rights reserved Cao and Balakrishnan and/or their publisher
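The hybrid idea in this paper - fusing a gesture recognizer's scores with a predictor that exploits regularities in command sequences - can be sketched as rescoring recognizer output by a sequence prior. The bigram model, command names, and numbers below are illustrative assumptions, not the paper's implementation:

```python
# Sketch of combining gesture-recognizer scores with a command
# predictor: the recognizer scores each candidate command for the
# observed gesture, a bigram model over past commands supplies a
# prior, and the product decides ambiguous cases.
from collections import defaultdict

COMMANDS = ["copy", "paste", "delete"]

class CommandPredictor:
    def __init__(self):
        self.bigram = defaultdict(lambda: defaultdict(int))

    def observe(self, prev: str, cmd: str) -> None:
        self.bigram[prev][cmd] += 1

    def prior(self, prev: str, cmd: str) -> float:
        total = sum(self.bigram[prev].values())
        # Additive smoothing keeps unseen commands recognizable
        return (self.bigram[prev][cmd] + 1) / (total + len(COMMANDS))

def rescore(recognizer_scores: dict, predictor: CommandPredictor,
            prev_cmd: str) -> str:
    """Pick the command maximizing recognizer score x sequence prior."""
    return max(COMMANDS,
               key=lambda c: recognizer_scores[c] * predictor.prior(prev_cmd, c))

p = CommandPredictor()
for prev, cmd in [("copy", "paste"), ("copy", "paste"), ("copy", "delete")]:
    p.observe(prev, cmd)

# An ambiguous gesture: the recognizer slightly favours "delete",
# but after "copy" the sequence prior favours "paste".
scores = {"copy": 0.1, "paste": 0.42, "delete": 0.48}
print(rescore(scores, p, "copy"))  # -> paste
```

On-line adaptation in the paper's sense would additionally retrain the recognizer from corrected inputs; here only the sequence prior is updated via `observe`.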
Cao, Xiang, Ofek, Eyal and Vronay, David (2005): Evaluation of alternative presentation control techniques. In: Proceedings of ACM CHI 2005 Conference on Human Factors in Computing Systems 2005. pp. 1248-1251. http://doi.acm.org/10.1145/1056808.1056888
Although slideshow presentation applications, such as PowerPoint, have been popular for years, the techniques commercially available to control them rely on mouse and keyboard, which can be restrictive for presenters. We evaluated two representative alternative designs of presentation control techniques - Bare Hand and Laser Pointer - through a Wizard-of-Oz user study. The results showed that Bare Hand was better than Laser Pointer and Standard (mouse/keyboard) control in terms of acceptance and preference from both presenters and audience. We also proposed design directions based on user feedback.
© All rights reserved Cao et al. and/or ACM Press
Cao, Xiang and Balakrishnan, Ravin (2003): VisionWand: interaction techniques for large displays using a passive wand tracked in 3D. In: Proceedings of the 16th annual ACM Symposium on User Interface Software and Technology November 2-5, 2003, Vancouver, Canada. pp. 173-182. http://doi.acm.org/10.1145/964696.964716
A passive wand tracked in 3D using computer vision techniques is explored as a new input mechanism for interacting with large displays. We demonstrate a variety of interaction techniques that exploit the affordances of the wand, resulting in an effective interface for large scale interaction. The lack of any buttons or other electronics on the wand presents a challenge that we address by developing a set of postures and gestures to track state and enable command input. We also describe the use of multiple wands, and posit designs for more complex wands in the future.
© All rights reserved Cao and Balakrishnan and/or ACM Press
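With no buttons on the wand, input states must be inferred from the tracked pose itself. A minimal sketch of posture classification by tilt angle, with thresholds and posture names invented for illustration (the paper's actual posture set differs):

```python
# Sketch: mapping a buttonless wand's 3D pose to an input state.
# The tracker reports the two wand endpoints; tilt from horizontal
# selects a posture that stands in for a button or mode switch.
import math

def wand_posture(tip, end):
    """Classify a wand pose by its tilt from horizontal.
    tip, end: (x, y, z) endpoint positions, y pointing up."""
    dx, dy, dz = (t - e for t, e in zip(tip, end))
    horiz = math.hypot(dx, dz)
    tilt = math.degrees(math.atan2(abs(dy), horiz))
    if tilt > 60:
        return "vertical"    # e.g. a mode-switch posture
    if tilt < 20:
        return "horizontal"  # e.g. a pointing/dragging posture
    return "tilted"          # e.g. a neutral posture

print(wand_posture((0.0, 1.0, 0.0), (0.0, 0.0, 0.0)))  # held straight up
```

Gestures would then be recognized as transitions between such postures over time, which is how state and command input can be tracked without any electronics on the wand.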