Number of co-authors: 10
Number of publications with 3 favourite co-authors: Khai N. Truong: 4, David Dearman: 3, Koji Yatani: 2
Frank Chun Yat Li's 3 most productive colleagues in number of publications: Khai N. Truong: 45, David Dearman: 22, Timothy Sohn: 20
Frank Chun Yat Li
Publications by Frank Chun Yat Li (bibliography)
Banovic, Nikola, Li, Frank Chun Yat, Dearman, David, Yatani, Koji and Truong, Khai N. (2011): Design of unimanual multi-finger pie menu interaction. In: Proceedings of the 2011 ACM International Conference on Interactive Tabletops and Surfaces 2011. pp. 120-129.
Context menus, most commonly the right-click menu, are a traditional method of interaction when using a keyboard and mouse. Context menus make a subset of commands in the application quickly available to the user. However, on tabletop touchscreen computers, context menus have all but disappeared. In this paper, we investigate how to design context menus for efficient unimanual multi-touch use. We investigate the limitations of the arm, wrist, and fingers and how they relate to human performance in multi-target selection tasks on multi-touch surfaces. We show that selecting targets with multiple fingers simultaneously improves the performance of target selection compared to traditional single-finger selection, but also increases errors. Informed by these results, we present our own context menu design for horizontal tabletop surfaces.
© All rights reserved Banovic et al. and/or ACM Press
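The core of a pie-menu interaction like the one above is mapping a touch point to an angular slice around the menu's center. The following sketch illustrates that mapping; the slice count, function name, and geometry are illustrative assumptions, not details from the paper.

```python
import math

def pie_slice(touch_x, touch_y, center_x, center_y, n_slices=8):
    """Map a touch point to a pie-menu slice index (0..n_slices-1).

    Slice 0 starts at the positive x-axis and indices increase
    counter-clockwise (mathematical convention; screen coordinates
    with y growing downward would mirror this). The slice count is
    a hypothetical choice, not taken from the paper.
    """
    # Angle of the touch relative to the menu center, in -pi..pi.
    angle = math.atan2(touch_y - center_y, touch_x - center_x)
    # Normalize to 0..2*pi so division yields a non-negative index.
    angle %= 2 * math.pi
    return int(angle / (2 * math.pi / n_slices)) % n_slices

print(pie_slice(0, 1, 0, 0))  # touch straight "up": slice 2 of 8
```

A multi-finger variant would run this mapping once per touch point, which is one way simultaneous selection of several menu items could be realized.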
Sohn, Timothy, Li, Frank Chun Yat, Battestini, Agathe, Setlur, Vidya, Mori, Koichi and Horii, Hiroshi (2011): Myngle: unifying and filtering web content for unplanned access between multiple personal devices. In: Proceedings of the 2011 International Conference on Ubiquitous Computing 2011. pp. 257-266.
Users often engage in tasks that span multiple personal devices. Although many current solutions exist to provide ubiquitous access to one's data, users continue to struggle with cross-device tasks. These solutions often require them to plan ahead for their information needs. In this paper, we present Myngle, a device-agnostic system that lets users quickly find the information they are looking for from previously visited web pages without having to plan ahead. Myngle provides a unified web history from multiple personal devices, and allows users to filter their history based on high-level categories influenced by common mobile information need categories (e.g., address, phone number). We evaluated Myngle with 32 users and found that our category-based method of filtering eases the burden of continuing cross-device tasks.
© All rights reserved Sohn et al. and/or ACM Press
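Category-based filtering of a unified web history, as described above, can be sketched as matching page content against per-category patterns. The category names, regular expressions, and data layout below are hypothetical illustrations, not Myngle's actual implementation.

```python
import re

# Hypothetical patterns for two of the mobile information-need
# categories mentioned in the abstract (address, phone number).
CATEGORY_PATTERNS = {
    "phone number": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "address": re.compile(
        r"\b\d+\s+\w+\s+(Street|St|Avenue|Ave|Road|Rd)\b", re.IGNORECASE
    ),
}

def filter_history(history, category):
    """Return the history entries whose text matches the category."""
    pattern = CATEGORY_PATTERNS[category]
    return [page for page in history if pattern.search(page["text"])]

# A unified history would merge entries from several devices;
# here it is just a list of dicts for illustration.
history = [
    {"url": "http://example.com/contact", "text": "Call us at 555-123-4567"},
    {"url": "http://example.com/about", "text": "Founded in 2004."},
]
print([p["url"] for p in filter_history(history, "phone number")])
# → ['http://example.com/contact']
```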
Li, Frank Chun Yat, Guy, Richard T., Yatani, Koji and Truong, Khai N. (2011): The 1Line keyboard: a QWERTY layout in a single line. In: Proceedings of the 2011 ACM Symposium on User Interface Software and Technology 2011. pp. 461-470.
Current soft QWERTY keyboards often consume a large portion of the screen space on portable touchscreens. This space consumption can diminish the overall user experience on these devices. In this paper, we present the 1Line keyboard, a soft QWERTY keyboard that is 140 pixels tall (in landscape mode) and 40% of the height of the native iPad QWERTY keyboard. Our keyboard condenses the three rows of keys in the normal QWERTY layout into a single line with eight keys. The sizing of the eight keys is based on users' mental layout of a QWERTY keyboard on an iPad. The system disambiguates the word the user types based on the sequence of keys pressed. The user can use flick gestures to perform backspace and enter, and tap on the bezel below the keyboard to input a space. Through an evaluation, we show that participants are able to quickly learn how to use the 1Line keyboard and type at a rate of over 30 WPM after just five 20-minute typing sessions. Using a keystroke level model, we predict the peak expert text entry rate with the 1Line keyboard to be 66-68 WPM.
© All rights reserved Li et al. and/or ACM Press
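Collapsing three QWERTY rows into eight keys makes several letters share a key, so a typed key sequence must be disambiguated against a dictionary. The sketch below shows that idea; the letter-to-key grouping is a hypothetical one based on QWERTY columns, not the paper's actual key boundaries.

```python
# Hypothetical grouping of QWERTY columns into 8 keys.
KEYS = ["qaz", "wsx", "edc", "rfv", "tgb", "yhn", "ujm", "ikolp"]
LETTER_TO_KEY = {c: i for i, key in enumerate(KEYS) for c in key}

def key_sequence(word):
    """Translate a word into the sequence of key indices it produces."""
    return tuple(LETTER_TO_KEY[c] for c in word.lower())

def disambiguate(sequence, dictionary):
    """Return all dictionary words that produce the given key sequence."""
    return [w for w in dictionary if key_sequence(w) == tuple(sequence)]

# "sin" and "win" collide under this grouping (s and w share a key),
# so the system would have to offer both candidates.
print(disambiguate(key_sequence("sin"), ["sin", "win", "dog"]))
# → ['sin', 'win']
```

A real system would rank the candidates, e.g. by word frequency, rather than return them unordered.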
Li, Frank Chun Yat, Dearman, David and Truong, Khai N. (2010): Leveraging proprioception to make mobile phones more accessible to users with visual impairments. In: Twelfth Annual ACM SIGACCESS Conference on Assistive Technologies 2010. pp. 187-194.
Accessing the advanced functions of a mobile phone is not a trivial task for users with visual impairments. They rely on screen readers and voice commands to discover and execute functions. In mobile situations, however, screen readers are not ideal because users may depend on their hearing for safety, and voice commands are difficult for a system to recognize in noisy environments. In this paper, we extend Virtual Shelves -- an interaction technique that leverages proprioception to access application shortcuts -- for visually impaired users. We measured the directional accuracy of visually impaired participants and found that they were less accurate than people with vision. We then built a functional prototype that uses an accelerometer and a gyroscope to sense its position and orientation. Finally, we evaluated the interaction and prototype by allowing participants to customize the placement of seven shortcuts within 15 regions. Participants were able to access shortcuts in their personal layout with 88.3% accuracy in an average of 1.74 seconds.
© All rights reserved Li et al. and/or their publisher
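The abstract notes the prototype combines an accelerometer and a gyroscope to sense orientation. One standard way to fuse those two sensors is a complementary filter; the sketch below shows a single update step as an illustration of the general technique, not the paper's actual sensing pipeline.

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One fusion step for a single orientation angle (radians).

    The gyroscope rate is integrated for responsiveness, while the
    accelerometer-derived angle slowly corrects drift. alpha weights
    the two sources; 0.98 is a common illustrative default.
    """
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

# With no rotation reported by the gyro, the estimate drifts slowly
# toward the accelerometer's reading.
angle = complementary_filter(0.0, gyro_rate=0.0, accel_angle=1.0, dt=0.01)
print(round(angle, 3))  # → 0.02
```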
Li, Frank Chun Yat, Dearman, David and Truong, Khai N. (2009): Virtual shelves: interactions with orientation aware devices. In: Proceedings of the ACM Symposium on User Interface Software and Technology 2009. pp. 125-128.
Triggering shortcuts or actions on a mobile device often requires a long sequence of key presses. Because the functions of buttons are highly dependent on the current application's context, users are required to look at the display during interaction, even in many mobile situations when eyes-free interactions may be preferable. We present Virtual Shelves, a technique to trigger programmable shortcuts that leverages the user's spatial awareness and kinesthetic memory. With Virtual Shelves, the user triggers shortcuts by orienting a spatially-aware mobile device within the circular hemisphere in front of her. This space is segmented into definable and selectable regions along the phi and theta planes. We show that users can accurately point to 7 regions on the theta and 4 regions on the phi plane using only their kinesthetic memory. Building upon these results, we then evaluate a proof-of-concept prototype of the Virtual Shelves using a Nokia N93. The results show that Virtual Shelves is faster than the N93's native interface for common mobile phone tasks.
© All rights reserved Li et al. and/or their publisher
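Segmenting the hemisphere in front of the user into regions along the theta and phi planes amounts to quantizing two angles into a grid cell. The sketch below uses the 7 theta and 4 phi divisions evaluated in the paper, but the exact angular ranges and region numbering are assumptions.

```python
import math

def shelf_region(theta, phi, n_theta=7, n_phi=4):
    """Map a device orientation to one of n_theta * n_phi regions.

    theta: horizontal angle in radians, assumed 0..pi across the
    hemisphere in front of the user; phi: vertical angle, assumed
    0..pi/2. The boundaries are illustrative, not from the paper.
    """
    # Quantize each angle into its band; clamp at the upper edge.
    t = min(int(theta / (math.pi / n_theta)), n_theta - 1)
    p = min(int(phi / ((math.pi / 2) / n_phi)), n_phi - 1)
    return p * n_theta + t

print(shelf_region(math.pi / 2, 0.0))  # straight ahead, lowest band: region 3
```

Each region index would then be bound to a user-assigned shortcut, so triggering one is a single point-and-confirm gesture.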
Changes to this page (author):
05 Apr 2012: Modified
04 Apr 2012: Modified
15 Jan 2011: Modified
03 Nov 2010: Added
Page maintainer: The Editorial Team