Publication statistics

Pub. period: 2003-2011
Pub. count: 14
Number of co-authors: 30



Co-authors

Number of publications with Chunyuan Liao's 3 most frequent co-authors:

Qiong Liu: 8
Francois Guimbretiere: 5
Lynn Wilcox: 4

 

 

Productive colleagues

Chunyuan Liao's 3 most productive colleagues, by number of publications:

Ken Hinckley: 54
James D. Hollan: 49
Andreas Paepcke: 43
 
 
 




Chunyuan Liao

 

Publications by Chunyuan Liao (bibliography)

2011
 

Huber, Jochen, Liao, Chunyuan, Steimle, Jürgen and Liu, Qiong (2011): Toward Bimanual Interactions with Mobile Projectors on Arbitrary Surfaces. In: Proceedings of MP Workshop on Mobile and Personal Projection in conjunction with CHI 2011 May, 2011, Vancouver, Canada. To appear.

 

Chiu, Patrick, Liao, Chunyuan and Chen, Francine (2011): Multi-touch document folding: gesture models, fold directions and symmetries. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 1591-1600. Available online

For document visualization, folding techniques provide a focus-plus-context approach with fairly high legibility on flat sections. To enable richer interaction, we explore the design space of multi-touch document folding. We discuss several design considerations for simple modeless gesturing and compatibility with standard Drag and Pinch gestures. We categorize gesture models along the characteristics of Symmetric/Asymmetric and Serial/Parallel, which yields three gesture models. We built a prototype document workspace application that integrates folding and standard gestures, and a system for testing the gesture models. A user study was conducted to compare the three models and to analyze the factors of fold direction, target symmetry, and target tolerance in user performance when folding a document to a specific shape. Our results indicate that all three factors were significant for task times, and parallelism was greater for symmetric targets.

© All rights reserved Chiu et al. and/or their publisher
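The abstract above organizes two-touch folding along two axes, Symmetric/Asymmetric and Serial/Parallel, but does not spell out the resulting three models. The sketch below is a minimal illustration of that categorization, not the authors' implementation; the class names, thresholds, and the assumption that the symmetric-serial combination is dropped are all mine.

```python
# Minimal sketch of the Symmetric/Asymmetric x Serial/Parallel categorization.
# All thresholds, names, and the dropped symmetric-serial case are assumptions.
from dataclasses import dataclass
from enum import Enum

class GestureModel(Enum):
    SYMMETRIC_PARALLEL = "both fingers fold at the same time"
    ASYMMETRIC_PARALLEL = "one finger anchors while the other folds, simultaneously"
    ASYMMETRIC_SERIAL = "the anchor is placed first, the folding stroke follows"

@dataclass
class TouchStroke:
    start_time: float    # seconds since the gesture began
    displacement: float  # total finger movement in pixels

def classify_fold(a: TouchStroke, b: TouchStroke,
                  move_thresh: float = 20.0,
                  sync_window: float = 0.15) -> GestureModel:
    """Classify a two-touch fold along the two axes named in the abstract."""
    symmetric = a.displacement > move_thresh and b.displacement > move_thresh
    parallel = abs(a.start_time - b.start_time) < sync_window
    if symmetric:
        # Assuming a symmetric-serial fold is not meaningful, which would
        # explain why the paper ends up with three models rather than four.
        return GestureModel.SYMMETRIC_PARALLEL
    return (GestureModel.ASYMMETRIC_PARALLEL if parallel
            else GestureModel.ASYMMETRIC_SERIAL)

if __name__ == "__main__":
    anchor = TouchStroke(start_time=0.0, displacement=3.0)
    fold = TouchStroke(start_time=0.4, displacement=120.0)
    print(classify_fold(anchor, fold).name)  # ASYMMETRIC_SERIAL
```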

 

Liao, Chunyuan and Liu, Qiong (2011): MixPad: augmenting interactive paper with mice & keyboards for fine-grained cross-media interaction with documents. In: Proceedings of the 2011 International Conference on Ubiquitous Computing 2011. pp. 539-540. Available online

This demo shows an interactive paper system called MixPad, which features using mice and keyboards to enhance the conventional pen-finger-gesture based interaction with paper documents. Similar to many interactive paper systems, MixPad adopts a mobile camera-projector unit to recognize paper documents, detect pen and finger gestures and provide visual feedback. Unlike these systems, MixPad allows using mice and keyboards to help users interact with fine-grained document content on paper (e.g. individual words and user-defined arbitrary regions), and to facilitate cross-media operations. For instance, to copy a document segment from paper to a laptop, one first points a finger of her non-dominant hand to the segment roughly, and then uses a mouse in her dominant hand to refine the selection and drag it to the laptop; she can also type text as a detailed comment on a paper document. This novel interaction paradigm combines the advantages of mice, keyboards, pens and fingers, and therefore enables rich digital functions on paper.

© All rights reserved Liao and Liu and/or ACM Press

2010
 

Liao, Chunyuan, Liu, Qiong, Liew, Bee and Wilcox, Lynn (2010): Pacer: fine-grained interactive paper via camera-touch hybrid gestures on a cell phone. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 2441-2450. Available online

PACER is a gesture-based interactive paper system that supports fine-grained paper document content manipulation through the touch screen of a cameraphone. Using the phone's camera, PACER links a paper document to its digital version based on visual features. It adopts camera-based phone motion detection for embodied gestures (e.g. marquees, underlines and lassos), with which users can flexibly select and interact with document details (e.g. individual words, symbols and pixels). The touch input is incorporated to facilitate target selection at fine granularity, and to address some limitations of the embodied interaction, such as hand jitter and low input sampling rate. This hybrid interaction is coupled with other techniques such as semi-real time document tracking and loose physical-digital document registration, offering a gesture-based command system. We demonstrate the use of PACER in various scenarios including work-related reading, maps and music score playing. A preliminary user study on the design has produced encouraging user feedback, and suggested future research for better understanding of embodied vs. touch interaction and one vs. two handed interaction.

© All rights reserved Liao et al. and/or their publisher
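The abstract above describes PACER's hybrid of camera-based phone motion (coarse, jittery, low sampling rate) and touch input (fine refinement). The sketch below only illustrates that coarse-plus-fine idea under my own assumptions; HybridPointer, snap_to_word, and the smoothing constant are invented, not PACER's actual design.

```python
# Minimal sketch of coarse camera-motion pointing refined by touch; the class,
# the smoothing constant, and the word-snapping step are assumptions, not PACER.

class HybridPointer:
    def __init__(self, alpha: float = 0.3):
        self.x, self.y = 0.0, 0.0
        self.alpha = alpha  # damping factor to absorb hand jitter

    def move_by_camera(self, dx: float, dy: float) -> None:
        """Coarse update from camera-estimated phone motion (jittery, low rate)."""
        self.x += self.alpha * dx
        self.y += self.alpha * dy

    def refine_by_touch(self, dx: float, dy: float) -> None:
        """Fine adjustment from the touch screen, applied without damping."""
        self.x += dx
        self.y += dy

def snap_to_word(pointer: HybridPointer, words):
    """Pick the (text, box) pair whose box centre is closest to the pointer."""
    def dist(item):
        _, (x0, y0, x1, y1) = item
        cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
        return (cx - pointer.x) ** 2 + (cy - pointer.y) ** 2
    return min(words, key=dist)

if __name__ == "__main__":
    p = HybridPointer()
    p.move_by_camera(100, 40)   # sweep the phone over the printed page
    p.refine_by_touch(3, -2)    # nudge the selection with a touch drag
    words = [("interactive", (0, 0, 60, 12)), ("paper", (30, 10, 60, 22))]
    print(snap_to_word(p, words)[0])  # -> "interactive"
```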

 

Liu, Qiong, Liao, Chunyuan, Wilcox, Lynn, Dunnigan, Anthony and Liew, Bee (2010): Embedded media markers: marks on paper that signify associated media. In: Proceedings of the 2010 International Conference on Intelligent User Interfaces 2010. pp. 149-158. Available online

Embedded Media Markers, or simply EMMs, are nearly transparent iconic marks printed on paper documents that signify the existence of media associated with that part of the document. EMMs also guide users' camera operations for media retrieval. Users take a picture of an EMM-signified document patch using a cell phone, and the media associated with the EMM-signified document location is displayed on the phone. Unlike bar codes, EMMs are nearly transparent and thus do not interfere with the document appearance. Retrieval of media associated with an EMM is based on image local features of the captured EMM-signified document patch. This paper describes a technique for semi-automatically placing an EMM at a location in a document, in such a way that it encompasses sufficient identification features with minimal disturbance to the original document.

© All rights reserved Liu et al. and/or their publisher
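The abstract above frames EMM placement as a trade-off: the marked patch must enclose enough identification features while disturbing the original document as little as possible. The sketch below scores candidate placements under that reading; score_patch, place_emm, the feature/ink representation, and the weight lambda_ are my assumptions, not the paper's technique.

```python
# Minimal sketch of the stated trade-off only: enclose enough identification
# features, overlap as little document ink as possible. The feature/ink
# representation, lambda_, and both function names are assumptions.

def score_patch(x, y, size, feature_points, ink_density, lambda_=0.5):
    """Higher is better: many features inside the patch, little ink covered."""
    features = sum(1 for (fx, fy) in feature_points
                   if x <= fx < x + size and y <= fy < y + size)
    ink = sum(ink_density.get((px, py), 0.0)
              for px in range(x, x + size)
              for py in range(y, y + size))
    return features - lambda_ * ink

def place_emm(candidates, size, feature_points, ink_density):
    """Return the candidate top-left corner with the best feature/ink trade-off."""
    return max(candidates,
               key=lambda c: score_patch(c[0], c[1], size,
                                         feature_points, ink_density))

if __name__ == "__main__":
    features = [(12, 14), (15, 18), (42, 40)]
    ink = {(40, 40): 1.0, (41, 40): 1.0}          # dense text near (40, 40)
    print(place_emm([(10, 10), (38, 36)], 10, features, ink))  # -> (10, 10)
```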

 

Liu, Qiong, Liao, Chunyuan, Wilcox, Lynn and Dunnigan, Anthony (2010): Embedded media barcode links: optimally blended barcode overlay on paper for linking to associated media. In: Proceedings of the 2010 International Conference on Multimodal Interfaces 2010. p. 23. Available online

Embedded Media Barcode Links, or simply EMBLs, are optimally blended iconic barcode marks, printed on paper documents, that signify the existence of multimedia associated with that part of the document content (Figure 1). EMBLs are used for multimedia retrieval with a camera phone. Users take a picture of an EMBL-signified document patch using a cell phone, and the multimedia associated with the EMBL-signified document location is displayed on the phone. Unlike a traditional barcode which requires an exclusive space, the EMBL construction algorithm acts as an agent to negotiate with a barcode reader for maximum user and document benefits. Because of this negotiation, EMBLs are optimally blended with content and thus have less interference with the original document layout and can be moved closer to a media associated location. Retrieval of media associated with an EMBL is based on the barcode identification of a captured EMBL. Therefore, EMBL retains nearly all barcode identification advantages, such as accuracy, speed, and scalability. Moreover, EMBL takes advantage of users' knowledge of a traditional barcode. Unlike Embedded Media Marker (EMM) which requires underlying document features for marker identification, EMBL has no requirement for the underlying features. This paper will discuss the procedures for EMBL construction and optimization. It will also give experimental results that strongly support the EMBL construction and optimization ideas.

© All rights reserved Liu et al. and/or ACM Press
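The abstract above describes the EMBL construction algorithm as "negotiating" with a barcode reader so the overlay stays as unobtrusive as possible while remaining decodable. The sketch below shows one simple reading of that idea: raise the overlay's opacity until a decoder succeeds. blend, choose_minimal_opacity, and the decode_barcode stand-in are hypothetical; the paper's actual optimization is more involved.

```python
# Minimal sketch of the "negotiation" idea only: raise the overlay's opacity
# until a decoder can still read it, keeping it as unobtrusive as possible.
# decode_barcode is a stand-in for a real reader, not a library call.

def blend(document_patch, barcode, alpha):
    """Alpha-blend a barcode image over a document patch (grey values 0..1)."""
    return [[(1 - alpha) * d + alpha * b for d, b in zip(drow, brow)]
            for drow, brow in zip(document_patch, barcode)]

def choose_minimal_opacity(document_patch, barcode, decode_barcode, steps=20):
    """Return the lowest opacity (in 1/steps increments) that still decodes."""
    for i in range(1, steps + 1):
        alpha = i / steps
        if decode_barcode(blend(document_patch, barcode, alpha)):
            return alpha
    return None  # even a fully opaque overlay could not be decoded
```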

 

Carter, Scott, Liao, Chunyuan, Denoue, Laurent, Golovchinsky, Gene and Liu, Qiong (2010): Linking Digital Media to Physical Documents: Comparing Content- and Marker-Based Tags. In IEEE Pervasive Computing, 9 (2) pp. 46-55. Available online

2007
 

Liao, Chunyuan, Guimbretiere, Francois, Hinckley, Ken and Hollan, James D. (2007): Papiercraft: A gesture-based command system for interactive paper. In ACM Transactions on Computer-Human Interaction, 14 (4) p. 18. Available online

Paper persists as an integral component of active reading and other knowledge-worker tasks because it provides ease of use unmatched by digital alternatives. Paper documents are light to carry, easy to annotate, rapid to navigate, flexible to manipulate, and robust to use in varied environments. Interactions with paper documents create rich webs of annotation, cross reference, and spatial organization. Unfortunately, the resulting webs are confined to the physical world of paper and, as they accumulate, become increasingly difficult to store, search, and access. XLibris [Schilit et al. 1998] and similar systems address these difficulties by simulating paper with tablet PCs. While this approach is promising, it suffers not only from limitations of current tablet computers (e.g., limited screen space) but also from loss of invaluable paper affordances. In this article, we describe PapierCraft, a gesture-based command system that allows users to manipulate digital documents using paper printouts as proxies. Using an Anoto [Anoto 2002] digital pen, users can draw command gestures on paper to tag a paragraph, e-mail a selected area, copy selections to a notepad, or create links to related documents. Upon pen synchronization, PapierCraft executes the commands and presents the results in a digital document viewer. Users can then search the tagged information and navigate the web of annotated digital documents resulting from interactions with the paper proxies. PapierCraft also supports real time interactions across mixed media, for example, letting users copy information from paper to a Tablet PC screen. This article presents the design and implementation of the PapierCraft system and describes user feedback from initial use.

© All rights reserved Liao et al. and/or ACM Press
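A central point in the abstract above is PapierCraft's batch model: command gestures are drawn on paper while working, and they only execute later, at pen synchronization. The sketch below illustrates that record-then-replay pattern; PenGesture, SyncProcessor, and the command names are illustrative assumptions, not the system's API.

```python
# Minimal sketch of the record-then-replay model; PenGesture, SyncProcessor,
# and the "tag" command are illustrative, not PapierCraft's actual API.
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class PenGesture:
    command: str                                # e.g. "tag", "email", "copy"
    page_id: str                                # which printout was marked
    region: Tuple[float, float, float, float]   # selected area on that page

class SyncProcessor:
    def __init__(self) -> None:
        self.pending: List[PenGesture] = []
        self.handlers: Dict[str, Callable[[PenGesture], None]] = {}

    def record(self, gesture: PenGesture) -> None:
        """Called while the user works on paper; nothing executes yet."""
        self.pending.append(gesture)

    def on(self, command: str, handler: Callable[[PenGesture], None]) -> None:
        self.handlers[command] = handler

    def synchronize(self) -> None:
        """Docking the pen replays every recorded gesture as a digital command."""
        for g in self.pending:
            handler = self.handlers.get(g.command)
            if handler:
                handler(g)
        self.pending.clear()

if __name__ == "__main__":
    proc = SyncProcessor()
    proc.on("tag", lambda g: print(f"tagging region {g.region} on {g.page_id}"))
    proc.record(PenGesture("tag", "printout-3", (10, 20, 200, 60)))
    proc.synchronize()  # only now does the command actually run
```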

 

Liao, Chunyuan, Guimbretiere, Francois, Anderson, Richard J., Linnell, Natalie, Prince, Craig and Razmov, Valentin (2007): PaperCP: Exploring the Integration of Physical and Digital Affordances for Active Learning. In: Baranauskas, Maria Ceclia Calani, Palanque, Philippe A., Abascal, Julio and Barbosa, Simone Diniz Junqueira (eds.) DEGAS 2007 - Proceedings of the 1st International Workshop on Design and Evaluation of e-Government Applications and Services September 11th, 2007, Rio de Janeiro, Brazil. pp. 15-28. Available online

2006
 

Yeh, Ron, Liao, Chunyuan, Klemmer, Scott R., Guimbretiere, Francois, Lee, Brian, Kakaradov, Boyko, Stamberger, Jeannie and Paepcke, Andreas (2006): ButterflyNet: a mobile capture and access system for field biology research. In: Proceedings of ACM CHI 2006 Conference on Human Factors in Computing Systems 2006. pp. 571-580. Available online

Through a study of field biology practices, we observed that biology fieldwork generates a wealth of heterogeneous information, requiring substantial labor to coordinate and distill. To manage this data, biologists leverage a diverse set of tools, organizing their effort in paper notebooks. These observations motivated ButterflyNet, a mobile capture and access system that integrates paper notes with digital photographs captured during field research. Through ButterflyNet, the activity of leafing through a notebook expands to browsing all associated digital photos. ButterflyNet also facilitates the transfer of captured content to spreadsheets, enabling biologists to share their work. A first-use study with 14 biologists found this system to offer rich data capture and transformation, in a manner felicitous with current practice.

© All rights reserved Yeh et al. and/or ACM Press
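The abstract above says that leafing through a ButterflyNet notebook surfaces the digital photos associated with it. The sketch below assumes a simple time-span association (photos taken while a page was being written belong to that page); the real system's linking may differ, and NotebookPage and associate_photos are invented names.

```python
# Minimal sketch of a time-span association between notebook pages and photos;
# the real ButterflyNet linking may differ, and these names are invented.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class NotebookPage:
    page_id: str
    start: float  # timestamp of the first pen stroke on this page
    end: float    # timestamp of the last pen stroke on this page

def associate_photos(pages: List[NotebookPage],
                     photo_times: Dict[str, float]) -> Dict[str, List[str]]:
    """Map each page to the photos captured while that page was being written."""
    links: Dict[str, List[str]] = {p.page_id: [] for p in pages}
    for photo, t in photo_times.items():
        for p in pages:
            if p.start <= t <= p.end:
                links[p.page_id].append(photo)
                break
    return links

if __name__ == "__main__":
    pages = [NotebookPage("p1", 0, 100), NotebookPage("p2", 100.1, 230)]
    photos = {"IMG_001.jpg": 42.0, "IMG_002.jpg": 150.0}
    print(associate_photos(pages, photos))  # p1 -> IMG_001, p2 -> IMG_002
```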

 

Liao, Chunyuan, Guimbretiere, Francois and Loeckenhoff, Corinna E. (2006): Pen-top feedback for paper-based interfaces. In: Proceedings of the ACM Symposium on User Interface Software and Technology 2006. pp. 201-210. Available online

Current paper-based interfaces, such as PapierCraft, provide very little feedback, and this limits the scope of possible interactions. So far, there has been little systematic exploration of the structure, constraints, and contingencies of feedback-mechanisms in paper-based interaction systems for paper-only environments. We identify three levels of feedback: discovery feedback (e.g., to aid with menu learning), status-indication feedback (e.g., for error detection), and task feedback (e.g., to aid in a search task). Using three modalities (visual, tactile, and auditory) which can be easily implemented on a pen-sized computer, we introduce a conceptual matrix to guide systematic research on pen-top feedback for paper-based interfaces. Using this matrix, we implemented a multimodal pen prototype demonstrating the potential of our approach. We conducted an experiment that confirmed the efficacy of our design in helping users discover a new interface and identify and correct their errors.

© All rights reserved Liao et al. and/or ACM Press
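The abstract above introduces a matrix of three feedback levels (discovery, status-indication, task) crossed with three pen-top modalities (visual, tactile, auditory). The sketch below only encodes that matrix as a lookup table; the concrete pen behaviours written in each cell are illustrative guesses, not the prototype's actual mappings.

```python
# Minimal sketch of the level x modality matrix; the pen behaviour written in
# each cell is an illustrative guess, not the prototype's actual mapping.
from enum import Enum

class Level(Enum):
    DISCOVERY = "discovery"        # e.g. learning the gesture set
    STATUS = "status-indication"   # e.g. detecting an error
    TASK = "task"                  # e.g. assisting a search task

class Modality(Enum):
    VISUAL = "visual"      # e.g. LEDs on the pen barrel
    TACTILE = "tactile"    # e.g. a vibration motor
    AUDITORY = "auditory"  # e.g. beeps from a small speaker

FEEDBACK_MATRIX = {
    (Level.DISCOVERY, Modality.VISUAL): "light the LED for the available menu direction",
    (Level.STATUS, Modality.TACTILE): "buzz briefly when a gesture is not recognized",
    (Level.TASK, Modality.AUDITORY): "beep faster as the pen approaches the search target",
}

def feedback_for(level: Level, modality: Modality) -> str:
    return FEEDBACK_MATRIX.get((level, modality), "no feedback defined for this cell")

if __name__ == "__main__":
    print(feedback_for(Level.STATUS, Modality.TACTILE))
```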

2005
 

Liao, Chunyuan, Guimbretiere, Francois and Hinckley, Ken (2005): PapierCraft: a command system for interactive paper. In: Proceedings of the 2005 ACM Symposium on User Interface Software and Technology 2005. pp. 241-244. Available online

Knowledge workers use paper extensively for document reviewing and note-taking due to its versatility and simplicity of use. As users annotate printed documents and gather notes, they create a rich web of annotations and cross references. Unfortunately, as paper is a static medium, this web often gets trapped in the physical world. While several digital solutions such as XLibris [15] and Digital Desk [18] have been proposed, they suffer from a small display size or onerous hardware requirements. To address these limitations, we propose PapierCraft, a gesture-based interface that allows users to manipulate digital documents directly using their printouts as proxies. Using a digital pen, users can annotate a printout or draw command gestures to indicate operations such as copying a document area, pasting an area previously copied, or creating a link. Upon pen synchronization, our infrastructure executes these commands and presents the result in a customized viewer. In this paper we describe the design and implementation of the PapierCraft command system, and report on early user feedback.

© All rights reserved Liao et al. and/or ACM Press

2003
 

Chiu, Patrick, Liu, Qiong, Boreczky, John, Foote, Jonathan, Fuse, Tohru, Kimber, Don, Lertsithichai, Surapong and Liao, Chunyuan (2003): Manipulating and Annotating Slides in a Multi-Display Environment. In: Proceedings of IFIP INTERACT03: Human-Computer Interaction 2003, Zurich, Switzerland. p. 583.

 

Liao, Chunyuan, Liu, Qiong, Kimber, Don, Chiu, Patrick, Foote, Jonathan and Wilcox, Lynn (2003): Shared interactive video for teleconferencing. In: Rowe, Lawrence A., Vin, Harrick M., Plagemann, Thomas, Shenoy, Prashant J. and Smith, John R. (eds.) Proceedings of the Eleventh ACM International Conference on Multimedia November 2-8, 2003, Berkeley, CA, USA. pp. 546-554. Available online

 

