Publication statistics

Publication period: 2005-2012
Publication count: 16
Number of co-authors: 34



Co-authors

Number of publications with Koji Yatani's 3 most frequent co-authors:

Khai N. Truong: 9
Nicole Coddington: 2
Michel Pahud: 2

Productive colleagues

Koji Yatani's 3 most productive colleagues in number of publications:

Bill Buxton: 78
Ken Hinckley: 54
Khai N. Truong: 45




Koji Yatani

 

Publications by Koji Yatani (bibliography)

2012
 

Yatani, Koji, Gergle, Darren and Truong, Khai (2012): Investigating effects of visual and tactile feedback on spatial coordination in collaborative handheld systems. In: Proceedings of ACM CSCW 2012 Conference on Computer-Supported Cooperative Work 2012. pp. 661-670. Available online

Mobile and handheld devices have become platforms to support remote collaboration. However, their small form factor may impact the effectiveness of the visual feedback channel often used to help users maintain an awareness of their partner's activities during synchronous collaborative tasks. We investigated how visual and tactile feedback affects collaboration on mobile devices, with emphasis on spatial coordination in a shared workspace. From two user studies, our results highlight different benefits of each feedback channel in collaborative handheld systems. Visual feedback can provide precise spatial information for collaborators, but degrades collaboration when the feedback is occluded, and can sometimes distract the user's attention. Spatial tactile feedback can reduce the overload of information in visual space and gently guide the user's attention to an area of interest. Our results also show that visual and tactile feedback can complement each other, and systems using both feedback channels can support better spatial coordination than systems using only one form of feedback.

© All rights reserved Yatani et al. and/or ACM Press

 

Yatani, Koji and Truong, Khai N. (2012): BodyScope: a wearable acoustic sensor for activity recognition. In: Proceedings of the 2012 International Conference on Ubiquitous Computing 2012. pp. 341-350. Available online

Accurate activity recognition enables the development of a variety of ubiquitous computing applications, such as context-aware systems, lifelogging, and personal health systems. Wearable sensing technologies can be used to gather data for activity recognition without requiring sensors to be installed in the infrastructure. However, the user may need to wear multiple sensors for accurate recognition of a larger number of different activities. We developed a wearable acoustic sensor, called BodyScope, to record the sounds produced in the user's throat area and classify them into user activities, such as eating, drinking, speaking, laughing, and coughing. The F-measure of the Support Vector Machine classification of 12 activities using only our BodyScope sensor was

© All rights reserved Yatani and Truong and/or ACM Press
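
The abstract above describes classifying throat sounds into activities with a Support Vector Machine. The Python sketch below illustrates that kind of pipeline under stated assumptions: it is not the authors' implementation, and the mean-MFCC feature set and the librosa/scikit-learn calls are choices made here for illustration only.

import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Activity labels named in the abstract (subset of the 12 studied).
ACTIVITIES = ["eating", "drinking", "speaking", "laughing", "coughing"]

def clip_features(path):
    # Summarize one audio clip as its mean MFCC vector (assumed feature set).
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)

def mean_macro_f1(paths, labels):
    # Cross-validated macro F-measure of an RBF-kernel SVM over labeled clips.
    X = np.vstack([clip_features(p) for p in paths])
    y = np.array(labels)
    clf = SVC(kernel="rbf", C=10.0, gamma="scale")
    return cross_val_score(clf, X, y, cv=5, scoring="f1_macro").mean()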

2011
 

Yatani, Koji, Novati, Michael, Trusty, Andrew and Truong, Khai N. (2011): Review spotlight: a user interface for summarizing user-generated reviews using adjective-noun word pairs. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 1541-1550. Available online

Many people read online reviews written by other users to learn more about a product or venue. However, the overwhelming amount of user-generated reviews and variance in length, detail and quality across the reviews make it difficult to glean useful information. In this paper, we present the iterative design of our system, called Review Spotlight. It provides a brief overview of reviews using adjective-noun word pairs, and allows the user to quickly explore the reviews in greater detail. Through a laboratory user study which required participants to perform decision making tasks, we showed that participants could form detailed impressions about restaurants and decide between two options significantly faster with Review Spotlight than with traditional review webpages.

© All rights reserved Yatani et al. and/or their publisher
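
The summary unit described above, adjective-noun word pairs, can be approximated with off-the-shelf part-of-speech tagging. The sketch below is an illustrative assumption rather than the Review Spotlight implementation: it pairs adjacent adjective and noun tokens with NLTK and counts the most frequent pairs across reviews.

from collections import Counter
import nltk  # requires the 'punkt' tokenizer and POS tagger data packages

def adjective_noun_pairs(review_text):
    # Pair each adjective (JJ*) with an immediately following noun (NN*).
    tagged = nltk.pos_tag(nltk.word_tokenize(review_text.lower()))
    return [(w1, w2) for (w1, t1), (w2, t2) in zip(tagged, tagged[1:])
            if t1.startswith("JJ") and t2.startswith("NN")]

reviews = ["Great sushi but the service was slow.",
           "Slow service, amazing sushi, friendly staff."]
counts = Counter(p for r in reviews for p in adjective_noun_pairs(r))
print(counts.most_common(3))  # e.g. [(('great', 'sushi'), 1), ...]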

 

Banovic, Nikola, Li, Frank Chun Yat, Dearman, David, Yatani, Koji and Truong, Khai N. (2011): Design of unimanual multi-finger pie menu interaction. In: Proceedings of the 2011 ACM International Conference on Interactive Tabletops and Surfaces 2011. pp. 120-129. Available online

Context menus, most commonly the right click menu, are a traditional method of interaction when using a keyboard and mouse. Context menus make a subset of commands in the application quickly available to the user. However, on tabletop touchscreen computers, context menus have all but disappeared. In this paper, we investigate how to design context menus for efficient unimanual multi-touch use. We investigate the limitations of the arm, wrist, and fingers and how they relate to human performance in multi-target selection tasks on multi-touch surfaces. We show that selecting targets with multiple fingers simultaneously improves the performance of target selection compared to traditional single finger selection, but also increases errors. Informed by these results, we present our own context menu design for horizontal tabletop surfaces.

© All rights reserved Banovic et al. and/or ACM Press

 

Li, Frank Chun Yat, Guy, Richard T., Yatani, Koji and Truong, Khai N. (2011): The 1Line keyboard: a QWERTY layout in a single line. In: Proceedings of the 2011 ACM Symposium on User Interface Software and Technology 2011. pp. 461-470. Available online

Current soft QWERTY keyboards often consume a large portion of the screen space on portable touchscreens. This space consumption can diminish the overall user experience on these devices. In this paper, we present the 1Line keyboard, a soft QWERTY keyboard that is 140 pixels tall (in landscape mode) and 40% of the height of the native iPad QWERTY keyboard. Our keyboard condenses the three rows of keys in the normal QWERTY layout into a single line with eight keys. The sizing of the eight keys is based on users' mental layout of a QWERTY keyboard on an iPad. The system disambiguates the word the user types based on the sequence of keys pressed. The user can use flick gestures to perform backspace and enter, and tap on the bezel below the keyboard to input a space. Through an evaluation, we show that participants are able to quickly learn how to use the 1Line keyboard and type at a rate of over 30 WPM after just five 20-minute typing sessions. Using a keystroke level model, we predict the peak expert text entry rate with the 1Line keyboard to be 66-68 WPM.

© All rights reserved Li et al. and/or ACM Press
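
The core idea above, collapsing the three QWERTY rows into eight keys and disambiguating words from the tap sequence, can be sketched in a few lines. The letter grouping and the tiny word list below are illustrative assumptions, not the published layout or language model.

KEY_GROUPS = ["qaz", "wsx", "edc", "rfvtgb", "yhnujm", "ik", "ol", "p"]  # assumed grouping
LETTER_TO_KEY = {ch: i for i, grp in enumerate(KEY_GROUPS) for ch in grp}

def key_sequence(word):
    # Map a word to the sequence of key indices a user would tap.
    return tuple(LETTER_TO_KEY[ch] for ch in word.lower())

def disambiguate(taps, dictionary):
    # Return every dictionary word whose key sequence matches the taps.
    taps = tuple(taps)
    return [w for w in dictionary if key_sequence(w) == taps]

words = ["hello", "jello", "cat", "dog"]
print(disambiguate(key_sequence("hello"), words))  # ['hello', 'jello'] share the same keys

In a real system the matching candidates would then be ranked by word frequency or a language model, which is what makes an eight-key layout practical to type on.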

2010
 

Hinckley, Ken, Yatani, Koji, Pahud, Michel, Coddington, Nicole, Rodenhouse, Jenny, Wilson, Andy, Benko, Hrvoje and Buxton, Bill (2010): Manual deskterity: an exploration of simultaneous pen + touch direct input. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 2793-2802. Available online

Manual Deskterity is a prototype digital drafting table that supports both pen and touch input. We explore a division of labor between pen and touch that flows from natural human skill and differentiation of roles of the hands. We also explore the simultaneous use of pen and touch to support novel compound gestures.

© All rights reserved Hinckley et al. and/or their publisher

 

Hinckley, Ken, Yatani, Koji, Pahud, Michel, Coddington, Nicole, Rodenhouse, Jenny, Wilson, Andy, Benko, Hrvoje and Buxton, Bill (2010): Pen + touch = new tools. In: Proceedings of the 2010 ACM Symposium on User Interface Software and Technology 2010. pp. 27-36. Available online

We describe techniques for direct pen+touch input. We observe people's manual behaviors with physical paper and notebooks. These serve as the foundation for a prototype Microsoft Surface application, centered on note-taking and scrapbooking of materials. Based on our explorations we advocate a division of labor between pen and touch: the pen writes, touch manipulates, and the combination of pen + touch yields new tools. This articulates how our system interprets unimodal pen, unimodal touch, and multimodal pen+touch inputs, respectively. For example, the user can hold a photo and drag off with the pen to create and place a copy; hold a photo and cross it in a freeform path with the pen to slice it in two; or hold selected photos and tap one with the pen to staple them all together. Touch thus unifies object selection with mode switching of the pen, while the muscular tension of holding touch serves as the "glue" that phrases together all the inputs into a unitary multimodal gesture. This helps the UI designer to avoid encumbrances such as physical buttons, persistent modes, or widgets that detract from the user's focus on the workspace.

© All rights reserved Hinckley et al. and/or their publisher
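
As a rough sketch of the interpretation rule described above (the pen writes, touch manipulates, and a pen action performed while touch holds an object yields a tool), the snippet below dispatches a pen event depending on what touch currently holds. The tool names mirror the examples in the abstract; the event model itself is an assumption for illustration.

def interpret(pen_event, held_object):
    # Decide what one pen event means, given what the other hand's touch holds.
    if held_object is None:
        return "ink"        # unimodal pen: the pen writes
    if pen_event == "drag_off":
        return "copy"       # hold a photo + drag off with the pen -> place a copy
    if pen_event == "cross":
        return "slice"      # hold + cross in a freeform path -> cut it in two
    if pen_event == "tap":
        return "staple"     # hold selected items + tap one -> staple them together
    return "ink"

print(interpret("cross", held_object=None))     # ink
print(interpret("cross", held_object="photo"))  # slice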

 

Scott, Jeremy, Dearman, David, Yatani, Koji and Truong, Khai N. (2010): Sensing foot gestures from the pocket. In: Proceedings of the 2010 ACM Symposium on User Interface Software and Technology 2010. pp. 199-208. Available online

Visually demanding interfaces on a mobile phone can diminish the user experience by monopolizing the user's attention when they are focusing on another task and impede accessibility for visually impaired users. Because mobile devices are often located in pockets when users are mobile, explicit foot movements can be defined as eyes-and-hands-free input gestures for interacting with the device. In this work, we study the human capability associated with performing foot-based interactions which involve lifting and rotation of the foot when pivoting on the toe and heel. Building upon these results, we then developed a system to learn and recognize foot gestures using a single commodity mobile phone placed in the user's pocket or in a holster on their hip. Our system uses acceleration data recorded by a built-in accelerometer on the mobile device and a machine learning approach to recognizing gestures. Through a lab study, we demonstrate that our system can classify ten different foot gestures at approximately 86% accuracy.

© All rights reserved Scott et al. and/or their publisher
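
The recognition pipeline described above (accelerometer data plus a machine learning classifier) can be sketched as windowed statistics fed to a standard classifier. The feature set and the choice of a random forest below are assumptions made for illustration, not the authors' implementation.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(samples):
    # samples: (n, 3) array of x/y/z acceleration covering one foot gesture.
    a = np.asarray(samples, dtype=float)
    mag = np.linalg.norm(a, axis=1)
    return np.concatenate([a.mean(axis=0), a.std(axis=0),
                           [mag.mean(), mag.std(), mag.max() - mag.min()]])

def train_classifier(windows, labels):
    # Fit a classifier on per-gesture feature vectors; predict with clf.predict(...).
    X = np.vstack([window_features(w) for w in windows])
    return RandomForestClassifier(n_estimators=100).fit(X, np.array(labels))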

 

Chung, Eunyoung, Jensen, Carlos, Yatani, Koji, Kuechler, Victor and Truong, Khai N. (2010): Sketching and Drawing in the Design of Open Source Software. In: Hundhausen, Christopher D., Pietriga, Emmanuel, Díaz, Paloma and Rosson, Mary Beth (eds.) IEEE Symposium on Visual Languages and Human-Centric Computing, VL/HCC 2010, 21-25 September 2010, Leganés-Madrid, Spain. pp. 195-202. Available online

2009
 

Yatani, Koji, Chung, Eunyoung, Jensen, Carlos and Truong, Khai N. (2009): Understanding how and why open source contributors use diagrams in the development of Ubuntu. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 995-1004. Available online

Some of the most interesting differences between Open Source Software (OSS) development and commercial co-located software development lie in the communication and collaboration practices of these two groups of developers. One interesting practice is that of diagramming. Though well studied and important in many aspects of co-located software development (including communication and collaboration among developers), its role in OSS development has not been thoroughly studied. In this paper, we report our investigation of how and why Ubuntu contributors use diagrams in their work. Our study shows that diagrams are not actively used in many scenarios where they commonly would be in co-located software development efforts. We describe differences in the use and practices of diagramming, their possible reasons, and present design considerations for potential systems aimed at better supporting diagram use in OSS development.

© All rights reserved Yatani et al. and/or ACM Press

 

Yatani, Koji and Truong, Khai Nhut (2009): SemFeel: a user interface with semantic tactile feedback for mobile touch-screen devices. In: Proceedings of the ACM Symposium on User Interface Software and Technology 2009. pp. 111-120. Available online

One of the challenges with using mobile touch-screen devices is that they do not provide tactile feedback to the user. Thus, the user is required to look at the screen to interact with these devices. In this paper, we present SemFeel, a tactile feedback system which informs the user about the presence of an object where she touches the screen and can offer additional semantic information about that item. Through multiple vibration motors that we attached to the backside of a mobile touch-screen device, SemFeel can generate different patterns of vibration, such as ones that flow from right to left or from top to bottom, to help the user interact with a mobile device. Through two user studies, we show that users can distinguish ten different patterns, including linear patterns and a circular pattern, at approximately 90% accuracy, and that SemFeel supports accurate eyes-free interactions.

© All rights reserved Yatani and Truong and/or their publisher
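
The "flowing" vibration patterns mentioned above can be produced by pulsing a row of motors one after another so the sensation appears to move across the device. The motor layout, timings, and the drive_motor() stub in this sketch are illustrative assumptions, not the SemFeel hardware.

import time

MOTORS = ["left", "center", "right"]   # assumed motor positions on the device back

def drive_motor(name, on):
    # Stub for hardware control; replace with real actuator I/O.
    print(f"motor {name}: {'on' if on else 'off'}")

def flow(pattern=MOTORS, pulse_s=0.08, gap_s=0.02):
    # Pulse each motor in order to produce a flowing vibration pattern.
    for name in pattern:
        drive_motor(name, True)
        time.sleep(pulse_s)
        drive_motor(name, False)
        time.sleep(gap_s)

flow()                        # left-to-right flow
flow(list(reversed(MOTORS)))  # right-to-left flow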

 

Huang, Elaine M., Yatani, Koji, Truong, Khai N., Kientz, Julie A. and Patel, Shwetak N. (2009): Understanding Mobile Phone Situated Sustainability: The Influence of Local Constraints and Practices on Transferability. In IEEE Pervasive Computing, 8 (1) pp. 46-53. Available online

2008
 

Yatani, Koji, Partridge, Kurt, Bern, Marshall and Newman, Mark W. (2008): Escape: a target selection technique using visually-cued gestures. In: Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems April 5-10, 2008. pp. 285-294. Available online

Many mobile devices have touch-sensitive screens that people interact with using fingers or thumbs. However, such interaction is difficult because targets become occluded, and because fingers and thumbs have low input resolution. Recent research has addressed occlusion through visual techniques. However, the poor resolution of finger and thumb selection still limits selection speed. In this paper, we address the selection speed problem through a new target selection technique called Escape. In Escape, targets are selected by gestures cued by icon position and appearance. A user study shows that for targets six to twelve pixels wide, Escape performs at a similar error rate and at least 30% faster than Shift, an alternative technique, on a similar task. We evaluate Escape's performance in different circumstances, including different icon sizes, icon overlap, use of color, and gesture direction. We also describe an algorithm that assigns icons to targets, thereby improving Escape's performance.

© All rights reserved Yatani et al. and/or ACM Press
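
The selection rule implied above (each small target cues a flick direction, and the flick whose direction matches a nearby target's cue selects it) can be sketched as a nearest-direction match among icons close to the touch point. The distance and angle thresholds below are illustrative assumptions.

import math

def select(targets, touch, flick_angle, radius=40, tolerance=math.radians(45)):
    # targets: list of dicts with 'pos' (x, y) and a cued 'angle' in radians.
    tx, ty = touch
    best, best_err = None, tolerance
    for t in targets:
        x, y = t["pos"]
        if math.hypot(x - tx, y - ty) > radius:
            continue  # too far from the initial touch to be a candidate
        diff = flick_angle - t["angle"]
        err = abs(math.atan2(math.sin(diff), math.cos(diff)))  # wrapped angle error
        if err < best_err:
            best, best_err = t, err
    return best

targets = [{"name": "A", "pos": (100, 100), "angle": 0.0},
           {"name": "B", "pos": (108, 104), "angle": math.pi / 2}]
print(select(targets, touch=(103, 101), flick_angle=math.pi / 2)["name"])  # B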

2007
 

Yatani, Koji and Truong, Khai N. (2007): An evaluation of stylus-based text entry methods on handheld devices in stationary and mobile settings. In: Cheok, Adrian David and Chittaro, Luca (eds.) Proceedings of the 9th Conference on Human-Computer Interaction with Mobile Devices and Services - Mobile HCI 2007 September 9-12, 2007, Singapore. pp. 487-494. Available online

2005
 

Yatani, Koji, Tamura, Koiti, Hiroki, Keiichi, Sugimoto, Masanori and Hashizume, Hiromichi (2005): Toss-it: intuitive information transfer techniques for mobile devices. In: Proceedings of ACM CHI 2005 Conference on Human Factors in Computing Systems 2005. pp. 1881-1884. Available online

In recent years, mobile devices have rapidly penetrated our daily lives. However, several drawbacks of these devices have been identified. The proposed system, called Toss-It, provides intuitive information transfer techniques for mobile devices by fully utilizing their mobility. A user of Toss-It can send information from the user's PDA to other electronic devices with a toss or swing action, as the user would toss a ball or deal cards to others. This paper describes the current implementation of Toss-It and its user studies.

© All rights reserved Yatani et al. and/or ACM Press

 

Kusunoki, Fusako, Yamaguti, Takako, Nishimura, Takuichi, Yatani, Koji and Sugimoto, Masanori (2005): Interactive and enjoyable interface in museum. In: Lee, Newton (ed.) Proceedings of the International Conference on Advances in Computer Entertainment Technology - ACE 2005 June 15-15, 2005, Valencia, Spain. pp. 1-8. Available online

 
 
 

 

Page Information

Page maintainer: The Editorial Team
URL: http://www.interaction-design.org/references/authors/koji_yatani.html