Number of co-authors: 6
Number of publications with 3 favourite co-authors: Yuichi Itoh (4), Yoshifumi Kitamura (4), Kazuki Takashima (4)
Kazuyuki Fujita's 3 most productive colleagues by number of publications: Yoshifumi Kitamura (48), Fumio Kishino (44), Yuichi Itoh (14)
Publications by Kazuyuki Fujita (bibliography)
Takashima, Kazuki, Fujita, Kazuyuki, Itoh, Yuichi and Kitamura, Yoshifumi (2012): Elastic scroll for multi-focus interactions. In: Adjunct Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 19-20. Available online
This paper proposes a novel and efficient multi-focus scroll interface that consists of a two-step operation using a content distortion technique. The displayed content can be handled like an elastic material that can be shrunk and stretched by the user's fingers. In the first operation, the user's dragging temporarily shows the result of the viewport transition of the scroll by elastically distorting the content. This operation allows the user to see both the newly obtained focus and the original focus in the viewport. Then, three types of simple gestures can be used to perform the second operation, such as scrolling, restoring, and zooming out, to obtain the desired focus (or foci).
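The elastic distortion described above can be sketched as a simple position-mapping function: content between the drag origin and the drag point is compressed by a factor derived from the drag distance, so both the original and the newly revealed focus fit in the viewport. This is an assumed formulation for illustration only, not the authors' implementation; the function name and parameters are hypothetical.

```python
def elastic_positions(item_ys, drag_start_y, drag_dy, viewport_h):
    """Map content y-positions during an elastic drag so that both the
    original focus (at drag_start_y) and the newly revealed region fit
    in the viewport. Hypothetical sketch of the elastic-scroll idea."""
    # Compression factor: the further the drag, the more the content
    # around the drag origin is squeezed, like a stretched rubber sheet.
    stretch = viewport_h / (viewport_h + abs(drag_dy))
    mapped = []
    for y in item_ys:
        offset = y - drag_start_y          # distance from the drag origin
        mapped.append(drag_start_y + offset * stretch)
    return mapped
```

For example, with a 100-pixel viewport and a 100-pixel drag, content is compressed to half scale around the drag origin, so an item originally 100 pixels away lands 50 pixels away and stays visible.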
© All rights reserved Takashima et al. and/or ACM Press
Nakajima, Kosuke, Itoh, Yuichi, Tsukitani, Takayuki, Fujita, Kazuyuki, Takashima, Kazuki, Kitamura, Yoshifumi and Kishino, Fumio (2011): FuSA2 touch display: a furry and scalable multi-touch display. In: Proceedings of the 2011 ACM International Conference on Interactive Tabletops and Surfaces 2011. pp. 35-44. Available online
We propose a furry and scalable multi-touch display called the "FuSA2 Touch Display." The furry tactile sensation of this surface affords various interactions such as stroking or clawing. The system utilizes plastic optical fiber bundles to realize a fur-like texture. The system can show visual feedback by projection and detects multi-touch input using a diffused illumination technique. We employed the optical properties of the plastic fibers to integrate the input and output systems into such a simple configuration that the display becomes scalable. We implemented a 24-inch display, evaluated the visual feedback and touch detection features, and found that our implemented display encourages users to interact with it through various actions.
© All rights reserved Nakajima et al. and/or ACM Press
Nakajima, Kosuke, Itoh, Yuichi, Tsukitani, Takayuki, Fujita, Kazuyuki, Takashima, Kazuki, Kitamura, Yoshifumi and Kishino, Fumio (2011): FuSA2 touch display. In: Proceedings of the 2011 ACM International Conference on Interactive Tabletops and Surfaces 2011. p. D5. Available online
Fujita, Kazuyuki, Itoh, Yuichi, Takashima, Kazuki, Kitamura, Yoshifumi, Tsukitani, Takayuki and Kishino, Fumio (2010): Anchored navigation: coupling panning operation with zooming and tilting based on the anchor point on a map. In: Proceedings of the 2010 Conference on Graphics Interface 2010. pp. 233-240. Available online
We propose two novel map navigation techniques, called Anchored Zoom (AZ) and Anchored Zoom and Tilt (AZT). In these techniques, the zooming and tilting of a virtual camera are automatically coupled with users' panning displacements so that the anchor point determined by users always remains in the viewport. This allows users to manipulate a viewport without mode-switching among pan, zoom, and tilt while maintaining a sense of distance and direction from the anchor point. We conducted an experiment to evaluate AZ and AZT and compare them with Pan&Zoom (PZ) and Speed-dependent Automatic Zooming (SDAZ) in off-screen target acquisition tasks and spatial recognition tests. Results showed that our proposed techniques were more effective than the competing techniques at reducing time to reach off-screen objects while maintaining users' sense of distance and direction as well as PZ does.
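The coupling of zooming with panning described above can be sketched as follows: as the pan target moves away from the anchor, the camera is centred between the two points and raised just enough that the anchor stays inside the field of view. This is a minimal sketch under assumed geometry (a top-down pinhole camera), not the paper's exact model; all names and parameters are hypothetical.

```python
import math

def anchored_zoom_camera(anchor, pan_target, base_alt, fov_deg=90.0):
    """Given an anchor point and the user's pan target (map coordinates),
    return a camera centre and altitude such that the anchor remains in
    the viewport. Hypothetical sketch of the Anchored Zoom coupling."""
    ax, ay = anchor
    px, py = pan_target
    dist = math.hypot(px - ax, py - ay)
    # Centre the view midway between the anchor and the pan target.
    centre = ((ax + px) / 2.0, (ay + py) / 2.0)
    # Raise the camera so half the field of view spans half the anchor
    # distance: altitude = (dist / 2) / tan(fov / 2), floored at base_alt.
    half_fov = math.radians(fov_deg) / 2.0
    altitude = max(base_alt, (dist / 2.0) / math.tan(half_fov))
    return centre, altitude
```

With a 90° field of view, panning 100 units from the anchor centres the camera 50 units away and lifts it to altitude 50, so the viewport edge just reaches the anchor; panning back lowers the camera again, giving the mode-free zoom-while-pan behaviour the abstract describes.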
© All rights reserved Fujita et al. and/or their publisher
Page maintainer: The Editorial Team