Publication statistics

Pub. period: 2000-2012
Pub. count: 30
Number of co-authors: 45



Co-authors

Number of publications with his 3 most frequent co-authors:

James A. Landay: 10
Zhiwei Guan: 6
Guozhong Dai: 5

 

 

Productive colleagues

Yang Li's 3 most productive colleagues, by number of publications:

James A. Landay: 91
Jacob O. Wobbrock: 71
Ken Hinckley: 54




Yang Li

Personal Homepage: http://yangl.org/


Publications by Yang Li (bibliography)

2012
 

Li, Yang (2012): Gesture-based interaction: a new dimension for mobile user interfaces. In: Proceedings of the 2012 International Conference on Advanced Visual Interfaces 2012. p. 6.

Today, smart phones with touchscreens and sensors are the predominant, fastest growing class of consumer computing devices. However, because these devices are used in diverse situations, and have unique capabilities and form factors, they also raise new user interface challenges, and at the same time, offer great opportunities for impactful HCI research. In this talk, I will focus on gesture-based interaction, an important interaction behavior enabled by touchscreens and built-in sensors, which sets mobile interaction apart from traditional graphical user interfaces. I will first talk about gesture shortcuts in the context of Gesture Search [1], a tool that allows users to quickly access applications and data on the phone by simply drawing a few gestures (http://www.google.com/mobile/gesture-search). Gesture Search flattens mobile phones' UI hierarchy by alleviating the need for navigating the interface. Gesture Search has been released and is invoked hundreds of thousands of times per day by a large user population. I will then cover several related projects that furthered our investigation into gesture shortcuts, including using gestures for target acquisition [3], crowdsourcing-based gesture recognition [5] and our early exploration on motion gestures [4, 6, 7]. Finally, I will turn to discuss multi-touch gestures for direct manipulation of an interface, the dominant class of gesture-based interaction on existing commercial devices. Multi-touch gestures are intuitive and efficient to use, but can be difficult to implement. I will discuss tools to support developers, allowing them to more easily create multi-touch interaction behaviors by demonstration [2]. These projects investigated various aspects of gesture-based interaction on mobile devices. They help open a new dimension for mobile interaction.

© All rights reserved Li and/or ACM Press

 

Negulescu, Matei, Ruiz, Jaime, Li, Yang and Lank, Edward (2012): Tap, swipe, or move: attentional demands for distracted smartphone input. In: Proceedings of the 2012 International Conference on Advanced Visual Interfaces 2012. pp. 173-180.

Smartphones are frequently used in environments where the user is distracted by another task, for example by walking or by driving. While the typical interface for smartphones involves hardware and software buttons and surface gestures, researchers have recently posited that, for distracted environments, benefits may exist in using motion gestures to execute commands. In this paper, we examine the relative cognitive demands of motion gestures and surface taps and gestures in two specific distracted scenarios: a walking scenario, and an eyes-free seated scenario. We show, first, that there is no significant difference in reaction time for motion gestures, taps, or surface gestures on smartphones. We further show that motion gestures result in significantly less time looking at the smartphone during walking than does tapping on the screen, even with interfaces optimized for eyes-free input. Taken together, these results show that, despite somewhat lower throughput, there may be benefits to making use of motion gestures as a modality for distracted input on smartphones.

© All rights reserved Negulescu et al. and/or ACM Press

2011
 

Ruiz, Jaime, Li, Yang and Lank, Edward (2011): User-defined motion gestures for mobile interaction. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 197-206.

Modern smartphones contain sophisticated sensors to monitor three-dimensional movement of the device. These sensors permit devices to recognize motion gestures -- deliberate movements of the device by end-users to invoke commands. However, little is known about best-practices in motion gesture design for the mobile computing paradigm. To address this issue, we present the results of a guessability study that elicits end-user motion gestures to invoke commands on a smartphone device. We demonstrate that consensus exists among our participants on parameters of movement and on mappings of motion gestures onto commands. We use this consensus to develop a taxonomy for motion gestures and to specify an end-user inspired motion gesture set. We highlight the implications of this work to the design of smartphone applications and hardware. Finally, we argue that our results influence best practices in design for all gestural interfaces.

© All rights reserved Ruiz et al. and/or their publisher

 

Lü, Hao and Li, Yang (2011): Gesture avatar: a technique for operating mobile user interfaces using gestures. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 207-216.

Finger-based touch input has become a major interaction modality for mobile user interfaces. However, due to the low precision of finger input, small user interface components are often difficult to acquire and operate on a mobile device. It is even harder when the user is on the go and unable to pay close attention to the interface. In this paper, we present Gesture Avatar, a novel interaction technique that allows users to operate existing arbitrary user interfaces using gestures. It leverages the visibility of graphical user interfaces and the casual interaction of gestures. Gesture Avatar can be used to enhance a range of mobile interactions. A user study we conducted showed that compared to Shift (an alternative technique for target acquisition tasks), Gesture Avatar performed at a much lower error rate on various target sizes and significantly faster on small targets (1mm). It also showed that using Gesture Avatar while walking did not significantly impact its performance, which makes it suitable for mobile uses.

© All rights reserved Lü and Li and/or their publisher

 

Bragdon, Andrew, Nelson, Eugene, Li, Yang and Hinckley, Ken (2011): Experimental analysis of touch-screen gesture designs in mobile environments. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 403-412.

Direct-touch interaction on mobile phones revolves around screens that compete for visual attention with users' real-world tasks and activities. This paper investigates the impact of these situational impairments on touch-screen interaction. We probe several design factors for touch-screen gestures, under various levels of environmental demands on attention, in comparison to the status-quo approach of soft buttons. We find that in the presence of environmental distractions, gestures can offer significant performance gains and reduced attentional load, while performing as well as soft buttons when the user's attention is focused on the phone. In fact, the speed and accuracy of bezel gestures did not appear to be significantly affected by environment, and some gestures could be articulated eyes-free, with one hand. Bezel-initiated gestures offered the fastest performance, and mark-based gestures were the most accurate. Bezel-initiated marks therefore may offer a promising approach for mobile touch-screen interaction that is less demanding of the user's attention.

© All rights reserved Bragdon et al. and/or their publisher

 

Chang, Tsung-Hsiang and Li, Yang (2011): Deep shot: a framework for migrating tasks across devices using mobile phone cameras. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 2163-2172.

A user task often spans multiple heterogeneous devices, e.g., working on a PC in the office and continuing the work on a laptop or a mobile phone while commuting on a shuttle. However, there is a lack of support for users to easily migrate their tasks across devices. To address this problem, we created Deep Shot, a framework for capturing the user's work state that is needed for a task (e.g., the specific part of a webpage being viewed) and resuming it on a different device. In particular, Deep Shot supports two novel and intuitive interaction techniques, deep shooting and deep posting, for pulling and pushing work states, respectively, using a mobile phone camera. In addition, Deep Shot provides a concise API for developers to leverage its services and make their application states migratable. We demonstrated that Deep Shot can be used to support a range of everyday tasks migrating across devices. An evaluation consisting of a series of experiments showed that our framework and techniques are feasible.

© All rights reserved Chang and Li and/or their publisher

 Cited in the following chapter:

User Interface Design Adaptation: [/encyclopedia/user_interface_design_adaptation.html]


 
 

Ruiz, Jaime and Li, Yang (2011): DoubleFlip: a motion gesture delimiter for mobile interaction. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 2717-2720.

To make motion gestures more widely adopted on mobile devices it is important that devices be able to distinguish between motion intended for mobile interaction and every-day motion. In this paper, we present DoubleFlip, a unique motion gesture designed as an input delimiter for mobile motion-based interaction. The DoubleFlip gesture is distinct from regular motion of a mobile device. Based on a collection of 2,100 hours of motion data captured from 99 users, we found that our DoubleFlip recognizer is extremely resistant to false positive conditions, while still achieving a high recognition rate. Since DoubleFlip is easy to perform and unlikely to be accidentally invoked, it provides an always-active input event for mobile interaction.

© All rights reserved Ruiz and Li and/or their publisher

2010
 

Li, Yang, Cao, Xiang, Everitt, Katherine, Dixon, Morgan and Landay, James A. (2010): FrameWire: a tool for automatically extracting interaction logic from paper prototyping tests. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 503-512.

Paper prototyping offers unique affordances for interface design. However, due to its spontaneous nature and the limitations of paper, it is difficult to distill and communicate a paper prototype design and its user test findings to a wide audience. To address these issues, we created FrameWire, a computer vision-based system that automatically extracts interaction flows from the video recording of paper prototype user tests. Based on the extracted logic, FrameWire offers two distinct benefits for designers: a structural view of the video recording that allows a designer or a stakeholder to easily distill and understand the design concept and user interaction behaviors, and automatic generation of interactive HTML-based prototypes that can be easily tested with a larger group of users as well as "walked through" by other stakeholders. The extraction is achieved by automatically aggregating video frame sequences into an interaction flow graph based on frame similarities and a designer-guided clustering process. The results of evaluating FrameWire with realistic paper prototyping tests show that our extraction approach is feasible and FrameWire is a promising tool for enhancing existing prototyping practice.

© All rights reserved Li et al. and/or their publisher
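
The extraction step described above, aggregating video frame sequences into an interaction flow graph based on frame similarities, can be illustrated with a short sketch. The Python below is a hypothetical simplification: similar(f, g) stands in for FrameWire's computer-vision frame comparison, and the greedy single-pass clustering with a fixed threshold is an assumption of this sketch, not the paper's designer-guided process.

    def build_flow_graph(frames, similar, threshold=0.9):
        # Greedily assign each video frame to the first known screen state
        # whose similarity clears the threshold; otherwise open a new state.
        # `similar` is any frame-similarity metric returning a value in [0, 1].
        states, labels = [], []
        for frame in frames:
            for i, representative in enumerate(states):
                if similar(frame, representative) >= threshold:
                    labels.append(i)
                    break
            else:
                states.append(frame)
                labels.append(len(states) - 1)
        # A transition is recorded whenever consecutive frames fall into
        # different states; the states and edges together form the flow graph.
        edges = {(a, b) for a, b in zip(labels, labels[1:]) if a != b}
        return states, edges

The resulting state-and-transition structure is the kind of graph FrameWire exposes in its structural video view and uses to generate interactive HTML prototypes.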

 

Li, Yang (2010): Protractor: a fast and accurate gesture recognizer. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 2169-2172.

Protractor is a novel gesture recognizer that can be easily implemented and quickly customized for different users. Protractor uses a nearest neighbor approach, which recognizes an unknown gesture based on its similarity to each of the known gestures, e.g., training samples or examples given by a user. In particular, it employs a novel method to measure the similarity between gestures, by calculating a minimum angular distance between them with a closed-form solution. As a result, Protractor is more accurate, naturally covers more gesture variation, runs significantly faster and uses much less memory than its peers. This makes Protractor suitable for mobile computing, which is limited in processing power and memory. An evaluation on both a previously published gesture data set and a newly collected gesture data set indicates that Protractor outperforms its peers in many aspects.

© All rights reserved Li and/or his/her publisher
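
The closed-form similarity at the heart of Protractor is compact enough to sketch. The Python below is an illustrative reconstruction from the published description (16-point resampling, translation to the centroid, unit-length normalization, and an optimal rotation obtained in closed form rather than by iterative search); it sketches the fully rotation-invariant variant and is not the paper's reference implementation.

    import math

    def resample(points, n=16):
        # Resample a stroke (a list of (x, y) tuples) to n equidistant points.
        length = sum(math.dist(points[i - 1], points[i]) for i in range(1, len(points)))
        interval, acc = length / (n - 1), 0.0
        pts, out = list(points), [points[0]]
        i = 1
        while i < len(pts):
            seg = math.dist(pts[i - 1], pts[i])
            if seg > 0 and acc + seg >= interval:
                t = (interval - acc) / seg
                q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                     pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
                out.append(q)
                pts.insert(i, q)  # continue measuring from the new point
                acc = 0.0
            else:
                acc += seg
            i += 1
        return (out + [pts[-1]] * n)[:n]  # pad/trim against float rounding

    def vectorize(points):
        # Translate to the centroid, then scale to a unit-length vector.
        cx = sum(x for x, _ in points) / len(points)
        cy = sum(y for _, y in points) / len(points)
        v = [c for x, y in points for c in (x - cx, y - cy)]
        mag = math.hypot(*v)
        return [c / mag for c in v]

    def angular_distance(v1, v2):
        # Closed form: over all rotations of one gesture against the other,
        # the maximum dot product of the two unit vectors is hypot(a, b),
        # so the minimum angular distance is simply its arccos.
        a = sum(v1[i] * v2[i] + v1[i + 1] * v2[i + 1] for i in range(0, len(v1), 2))
        b = sum(v1[i] * v2[i + 1] - v1[i + 1] * v2[i] for i in range(0, len(v1), 2))
        return math.acos(min(1.0, math.hypot(a, b)))

    def recognize(stroke, templates):
        # Nearest neighbor over (name, points) template pairs; a smaller
        # angular distance means a higher similarity score (1 / distance).
        v = vectorize(resample(stroke))
        return min(templates,
                   key=lambda t: angular_distance(v, vectorize(resample(t[1]))))[0]

Because the best rotation needs no iterative search, each template comparison is a single pass over 16 points, which is consistent with the abstract's claims about speed and memory use on mobile hardware.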

 

Li, Yang (2010): Gesture search: a tool for fast mobile data access. In: Proceedings of the 2010 ACM Symposium on User Interface Software and Technology 2010. pp. 87-96.

Modern mobile phones can store a large amount of data, such as contacts, applications and music. However, it is difficult to access specific data items via existing mobile user interfaces. In this paper, we present Gesture Search, a tool that allows a user to quickly access various data items on a mobile phone by drawing gestures on its touch screen. Gesture Search contributes a unique way of combining gesture-based interaction and search for fast mobile data access. It also demonstrates a novel approach for coupling gestures with standard GUI interaction. A real world deployment with mobile phone users showed that Gesture Search enabled fast, easy access to mobile data in their day-to-day lives. Gesture Search has been released to the public and is currently in use by hundreds of thousands of mobile users. It was rated positively by users, with a mean of 4.5 out of 5 for over 5000 ratings.

© All rights reserved Li and/or his/her publisher

 

Ruiz, Jaime and Li, Yang (2010): DoubleFlip: a motion gesture delimiter for interaction. In: Proceedings of the 2010 ACM Symposium on User Interface Software and Technology 2010. pp. 449-450.

In order to use motion gestures with mobile devices it is imperative that the device be able to distinguish between input motion and everyday motion. In this abstract we present DoubleFlip, a unique motion gesture designed to act as an input delimiter for mobile motion gestures. We demonstrate that the DoubleFlip gesture is extremely resistant to false positive conditions, while still achieving high recognition accuracy. Since DoubleFlip is easy to perform and less likely to be accidentally invoked, it provides an always-active input event for mobile interaction.

© All rights reserved Ruiz and Li and/or their publisher

 

Pan, Zhigeng, Li, Yang, Zhang, Mingmin, Sun, Chao, Guo, Kangde, Tang, Xing and Zhou, Steven Zhiying (2010): A real-time multi-cue hand tracking algorithm based on computer vision. In: Lok, Benjamin, Klinker, Gudrun and Nakatsu, Ryohei (eds.) IEEE Virtual Reality Conference, VR 2010 March 20-24, 2010, Waltham, Massachusetts, USA. pp. 219-222.

2009
 

Li, Yang, Lu, Tian-Bo, Guo, Li, Tian, Zhi-Hong and Nie, Qin-Wu (2009): Towards lightweight and efficient DDOS attacks detection for web server. In: Proceedings of the 2009 International Conference on the World Wide Web 2009. pp. 1139-1140.

In this poster, based on our previous work in building a lightweight DDoS (Distributed Denial-of-Services) attacks detection mechanism for web server using TCM-KNN (Transductive Confidence Machines for K-Nearest Neighbors) and genetic algorithm based instance selection methods, we further propose a more efficient and effective instance selection method, named E-FCM (Extend Fuzzy C-Means). By using this method, we can obtain much cheaper training time for TCM-KNN while ensuring high detection performance. Therefore, the optimized mechanism is more suitable for lightweight DDoS attacks detection in real network environment.

© All rights reserved Li et al. and/or ACM Press

 

Li, Yang, Wu, Yi-Chuan, Zhang, Jian-Ying, Peng, Jin, Liao, Hong-Luan and Zhang, Yun-Fei (2009): A P2P based distributed services network for next generation mobile internet communications. In: Proceedings of the 2009 International Conference on the World Wide Web 2009. pp. 1177-1178.

In this poster, we present a novel P2P (Peer to Peer) based distributed services network (DSN), which is a next generation operable and manageable distributed core network architecture and functional structure, proposed by China Mobile for telecommunication services and wireless Internet. Our preliminary implementations of P2P VoIP (Voice over Internet Protocol) system over DSN platform demonstrate its effectiveness and promising future.

© All rights reserved Li et al. and/or ACM Press

2008
 

Li, Yang and Landay, James A. (2008): Activity-based prototyping of ubicomp applications for long-lived, everyday human activities. In: Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems April 5-10, 2008. pp. 1303-1312.

We designed an activity-based prototyping process realized in the ActivityDesigner system that combines the theoretical framework of Activity-Centered Design with traditional iterative design. This process allows designers to leverage human activities as first class objects for design and is supported in ActivityDesigner by three novel features. First, this tool allows designers to model activities based on concrete scenarios collected from everyday lives. The models form a context for design and computational constructs for creating functional prototypes. Second, it allows designers to prototype interaction behaviors based on activity streams spanning time. Third, it allows designers to easily test these prototypes with real users continuously, in situ. We have garnered positive feedback from a series of laboratory user studies and several case studies in which ActivityDesigner was used in realistic design situations. ActivityDesigner was able to effectively streamline a ubicomp design process, and it allowed creating realistic ubicomp application prototypes at a low cost and testing them in everyday lives over an extended period.

© All rights reserved Li and Landay and/or ACM Press

2007
 

Liu, Alan L. and Li, Yang (2007): BrickRoad: a light-weight tool for spontaneous design of location-enhanced applications. In: Proceedings of ACM CHI 2007 Conference on Human Factors in Computing Systems 2007. pp. 295-298.

It is difficult to design and test location-enhanced applications. A large part of this difficulty is due to the added complexity of supporting location. Wizard of Oz (WOz) has become an effective technique for the early stage design of location-enhanced applications because it allows designers to test an application prototype by simulating nonexistent components such as location sensing. However, existing WOz tools 1) require nontrivial effort from designers to specify how a prototype should behave before it can be tested with end users, and 2) support only limited control over application behavior during a test. BrickRoad is a WOz tool for spontaneous design of location-enhanced applications. It lowers the threshold to acquiring user feedback and exploring a design space. With BrickRoad, a designer does not need to specify any interaction logic and can experiment on-the-fly with different designs during testing. BrickRoad is a valuable complement to existing tool support for the early stage design of location-enhanced applications.

© All rights reserved Liu and Li and/or ACM Press

 

Wobbrock, Jacob O., Wilson, Andrew D. and Li, Yang (2007): Gestures without libraries, toolkits or training: a $1 recognizer for user interface prototypes. In: Proceedings of the ACM Symposium on User Interface Software and Technology October 7-10, 2007, Newport, Rhode Island, USA. pp. 159-168.

Although mobile, tablet, large display, and tabletop computers increasingly present opportunities for using pen, finger, and wand gestures in user interfaces, implementing gesture recognition largely has been the privilege of pattern matching experts, not user interface prototypers. Although some user interface libraries and toolkits offer gesture recognizers, such infrastructure is often unavailable in design-oriented environments like Flash, scripting environments like JavaScript, or brand new off-desktop prototyping environments. To enable novice programmers to incorporate gestures into their UI prototypes, we present a "$1 recognizer" that is easy, cheap, and usable almost anywhere in about 100 lines of code. In a study comparing our $1 recognizer, Dynamic Time Warping, and the Rubine classifier on user-supplied gestures, we found that $1 obtains over 97% accuracy with only 1 loaded template and 99% accuracy with 3+ loaded templates. These results were nearly identical to DTW and superior to Rubine. In addition, we found that medium-speed gestures, in which users balanced speed and accuracy, were recognized better than slow or fast gestures for all three recognizers. We also discuss the effect that the number of templates or training examples has on recognition, the score falloff along recognizers' N-best lists, and results for individual gestures. We include detailed pseudocode of the $1 recognizer to aid development, inspection, extension, and testing.

© All rights reserved Wobbrock et al. and/or ACM Press
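
The paper supplies detailed pseudocode for the whole pipeline; as a flavor of the matching core, the Python below sketches the golden-section search for the rotation that minimizes mean point-to-point path distance. It assumes both strokes have already been preprocessed $1-style (resampled to a fixed point count, rotated to the indicative angle, scaled to a reference square, and translated to the origin), and it is a hedged illustration rather than the published listing.

    import math

    PHI = 0.5 * (math.sqrt(5) - 1)  # golden ratio used by the search

    def path_distance(a, b):
        # Mean distance between corresponding points of two equal-length paths.
        return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

    def rotated(points, theta):
        # Rotate a path about its centroid by theta radians.
        cx = sum(x for x, _ in points) / len(points)
        cy = sum(y for _, y in points) / len(points)
        c, s = math.cos(theta), math.sin(theta)
        return [(cx + (x - cx) * c - (y - cy) * s,
                 cy + (x - cx) * s + (y - cy) * c) for x, y in points]

    def distance_at_best_angle(candidate, template,
                               lo=math.radians(-45.0), hi=math.radians(45.0),
                               tol=math.radians(2.0)):
        # Golden-section search over rotations of `candidate` for the minimum
        # path distance to `template` (the +/-45 degree bounds and 2 degree
        # tolerance follow the paper).
        x1 = PHI * lo + (1 - PHI) * hi
        x2 = (1 - PHI) * lo + PHI * hi
        f1 = path_distance(rotated(candidate, x1), template)
        f2 = path_distance(rotated(candidate, x2), template)
        while abs(hi - lo) > tol:
            if f1 < f2:
                hi, x2, f2 = x2, x1, f1
                x1 = PHI * lo + (1 - PHI) * hi
                f1 = path_distance(rotated(candidate, x1), template)
            else:
                lo, x1, f1 = x1, x2, f2
                x2 = (1 - PHI) * lo + PHI * hi
                f2 = path_distance(rotated(candidate, x2), template)
        return min(f1, f2)

Recognition then picks the template with the smallest best-angle distance across all loaded examples, which is the nearest-neighbor step the study's accuracy figures refer to.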

 

Li, Yang, Hong, Jason I. and Landay, James A. (2007): Design Challenges and Principles for Wizard of Oz Testing of Location-Enhanced Applications. In IEEE Pervasive Computing, 6 (2) pp. 70-75.

 

Ding, Li, Yang, Feng, Yang, Chunxin, Yuan, Xiugan and Li, Yang (2007): Study on the Appraisal Methods of Hand Fatigue. In: Duffy, Vincent G. (ed.) ICDHM 2007 - First International Conference on Digital Human Modeling July 22-27, 2007, Beijing, China. pp. 33-41.

 

Ren, Xiangshi, Yin, Jibin, Zhao, Shengdong and Li, Yang (2007): The Adaptive Hybrid Cursor: A Pressure-Based Target Selection Technique for Pen-Based User Interfaces. In: Baranauskas, Maria Cecília Calani, Palanque, Philippe A., Abascal, Julio and Barbosa, Simone Diniz Junqueira (eds.) INTERACT 2007 - 11th IFIP TC13 International Conference on Human-Computer Interaction September 10-14, 2007, Rio de Janeiro, Brazil. pp. 310-323.

2006
 

Li, Yang, Welbourne, Evan and Landay, James A. (2006): Design and experimental analysis of continuous location tracking techniques for Wizard of Oz testing. In: Proceedings of ACM CHI 2006 Conference on Human Factors in Computing Systems 2006. pp. 1019-1022.

Wizard of Oz (WOz) testing has shown promise as an effective way to test location-enhanced applications. However, it is challenging to conduct a location-based WOz test because of the dynamic nature of target settings in the field. In particular, continuous location tracking, a major task in such a test, requires a wizard to frequently update a user's location to simulate a location system. This imposes a heavy task load on a wizard. To ease wizards' tasks for location tracking, we designed two techniques, Directional Crossing and Steering, and conducted a field experiment to investigate the performance of the two techniques. A quantitative analysis shows that Directional Crossing and Steering significantly lowered a wizard's task load for location tracking without sacrificing accuracy.

© All rights reserved Li et al. and/or ACM Press

 

Dow, Steven, Saponas, T. Scott, Li, Yang and Landay, James A. (2006): External representations in ubiquitous computing design and the implications for design tools. In: Proceedings of DIS06: Designing Interactive Systems: Processes, Practices, Methods, & Techniques 2006. pp. 241-250.

One challenge for ubiquitous computing is providing appropriate tools for professional designers, thus leading to stronger user-valued applications. Unlike many previous tool-builders' attempts to support a specific technology, we take a designer-centered stance, asking the question: how do professional designers externalize ideas for off-the-desktop computing and how do these inform next generation design tools? We report on interviews with designers from various domains, including experience, interaction, industrial, and space designers. The study broadly reveals perceived challenges of moving into a non-traditional design medium, emphasizes the practice of storytelling for relating the context of interaction, and through two case studies, traces the use of various external representations during the design progression of ubicomp applications. Using paper-prototyped "walkthroughs" centered on two common design representations (storyboards and physical simulations), we formed a deeper understanding of issues influencing tool development. We offer guidelines for builders of future ubicomp tools, especially early-stage conceptual tools for professional designers to prototype applications across multiple sensors, displays, and physical environments.

© All rights reserved Dow et al. and/or ACM Press

2005
 

Li, Yang and Landay, James A. (2005): Informal prototyping of continuous graphical interactions by demonstration. In: Proceedings of the 2005 ACM Symposium on User Interface Software and Technology 2005. pp. 221-230.

Informal prototyping tools have shown great potential in facilitating the early stage design of user interfaces. However, continuous interactions, an important constituent of highly interactive interfaces, have not been well supported by previous tools. These interactions give continuous visual feedback, such as geometric changes of a graphical object, in response to continuous user input, such as the movement of a mouse. We built Monet, a sketch-based tool for prototyping continuous interactions by demonstration. In Monet, designers can prototype continuous widgets and their states of interest using examples. They can also demonstrate compound behaviors involving multiple widgets by direct manipulation. Monet allows continuous interactions to be easily integrated with event-based, discrete interactions. Continuous widgets can be embedded into storyboards and their states can condition or trigger storyboard transitions. Monet achieves these features by employing continuous function approximation and statistical classification techniques, without using any domain specific knowledge or assuming any application semantics. Informal feedback showed that Monet is a promising approach to enabling more complete tool support for early stage UI design.

© All rights reserved Li and Landay and/or ACM Press

 

Li, Yang, Hinckley, Ken, Guan, Zhiwei and Landay, James A. (2005): Experimental analysis of mode switching techniques in pen-based user interfaces. In: Proceedings of ACM CHI 2005 Conference on Human Factors in Computing Systems 2005. pp. 461-470.

Inking and gesturing are two central tasks in pen-based user interfaces. Switching between modes for entry of uninterpreted ink and entry of gestures is required by many pen-based user interfaces. Without an appropriate mode switching technique, pen-based interactions in such situations may be inefficient and cumbersome. In this paper, we investigate five techniques for switching between ink and gesture modes in pen interfaces, including a pen-pressure based mode switching technique that allows implicit mode transition. A quantitative experimental study was conducted to evaluate the performance of these techniques. The results suggest that pressing a button with the non-preferred hand offers the fastest performance, while the technique of holding the pen still is significantly slower and more prone to error than the other techniques. Pressure, while promising, did not perform as well as the non-preferred hand button with our current implementation.

© All rights reserved Li et al. and/or ACM Press

2004
 

Li, Yang, Hong, Jason I. and Landay, James A. (2004): Topiary: a tool for prototyping location-enhanced applications. In: Proceedings of the 2004 ACM Symposium on User Interface Software and Technology 2004. pp. 217-226.

Location-enhanced applications use the location of people, places, and things to augment or streamline interaction. Location-enhanced applications are just starting to emerge in several different domains, and many people believe that this type of application will experience tremendous growth in the near future. However, it currently requires a high level of technical expertise to build location-enhanced applications, making it hard to iterate on designs. To address this problem we introduce Topiary, a tool for rapidly prototyping location-enhanced applications. Topiary lets designers create a map that models the location of people, places, and things; use this active map to demonstrate scenarios depicting location contexts; use these scenarios in creating storyboards that describe interaction sequences; and then run these storyboards on mobile devices, with a wizard updating the location of people and things on a separate device. We performed an informal evaluation with seven researchers and interface designers and found that they reacted positively to the concept.

© All rights reserved Li et al. and/or ACM Press

2003
 

Li, Yang, Landay, James A., Guan, Zhiwei, Ren, Xiangshi and Dai, Guozhong (2003): Sketching informal presentations. In: Oviatt, Sharon L., Darrell, Trevor, Maybury, Mark T. and Wahlster, Wolfgang (eds.) Proceedings of the 5th International Conference on Multimodal Interfaces - ICMI 2003 November 5-7, 2003, Vancouver, British Columbia, Canada. pp. 234-241.

Informal presentations are a lightweight means for fast and convenient communication of ideas. People communicate their ideas to others on paper and whiteboards, which afford fluid sketching of graphs, words and other expressive symbols. Unlike existing authoring tools that are designed for formal presentations, we created SketchPoint to help presenters design informal presentations via freeform sketching. In SketchPoint, presenters can quickly author presentations by sketching slide content, overall hierarchical structures and hyperlinks. To facilitate the transition from idea capture to communication, a note-taking workspace was built for accumulating ideas and sketching presentation outlines. Informal feedback showed that SketchPoint is a promising tool for idea communication.

© All rights reserved Li et al. and/or their publisher

2000
 

Guan, Zhiwei, Li, Yang, Chen, Youdi and Dai, Guozhong (2000): Heuristic Walkthroughs Evaluation of Pen-Based Chinese Word Edit System (PCWES) Usability. In: Tan, Tieniu, Shi, Yuanchun and Gao, Wen (eds.) Advances in Multimodal Interfaces - ICMI 2000 - Third International Conference October 14-16, 2000, Beijing, China. pp. 488-495.

 

Li, Yang, Guan, Zhiwei, Chen, Youdi and Dai, Guozhong (2000): Penbuilder: Platform for the Development of Pen-Based User Interface. In: Tan, Tieniu, Shi, Yuanchun and Gao, Wen (eds.) Advances in Multimodal Interfaces - ICMI 2000 - Third International Conference October 14-16, 2000, Beijing, China. pp. 534-541.

 

Guan, Zhiwei, Li, Yang, Wang, Hongan and Dai, Guozhong (2000): A Comparison on Free Interaction Models for Mobile Layout System. In: Thomas, Peter J. and Gellersen, Hans-Werner (eds.) Handheld and Ubiquitous Computing - Second International Symposium - HUC 2000 September 25-27, 2000, Bristol, UK. pp. 63-71.

 



Page Information

Page maintainer: The Editorial Team
URL: http://www.interaction-design.org/references/authors/yang_li.html
