Number of co-authors: 12
Publications with 3 favourite co-authors: Takayuki Tsukitani, Sriram Subramanian, Kazuyuki Fujita
Kazuki Takashima's 3 most productive colleagues (by number of publications): Yoshifumi Kitamura (48), Sriram Subramanian (46), Fumio Kishino (44)
Publications by Kazuki Takashima (bibliography)
Takashima, Kazuki, Fujita, Kazuyuki, Itoh, Yuichi and Kitamura, Yoshifumi (2012): Elastic scroll for multi-focus interactions. In: Adjunct Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 19-20. Available online
This paper proposes a novel and efficient multi-focus scroll interface that consists of a two-step operation using a contents distortion technique. The displayed content can be handled just like an elastic material that can be shrunk and stretched by a user's fingers. In the first operation, the user's dragging temporarily shows the results of the viewport transition of the scroll by elastically distorting the content. This operation allows the user to see both the newly obtained and the original focus in the viewport. Then, three types of simple gestures can be used to perform the second operation, such as scrolling, restoring and zooming out, to obtain the desired focus (or foci).
© All rights reserved Takashima et al. and/or ACM Press
Nakajima, Kosuke, Itoh, Yuichi, Tsukitani, Takayuki, Fujita, Kazuyuki, Takashima, Kazuki, Kitamura, Yoshifumi and Kishino, Fumio (2011): FuSA2 touch display: a furry and scalable multi-touch display. In: Proceedings of the 2011 ACM International Conference on Interactive Tabletops and Surfaces 2011. pp. 35-44. Available online
We propose a furry and scalable multi-touch display called the "FuSA2 Touch Display." The furry tactile sensation of this surface affords various interactions such as stroking or clawing. The system utilizes plastic fiber optic bundles to realize a fur-like texture. The system can show visual feedback by projection and detects multi-touch input using a diffused illumination technique. We employed the optical features of plastic fiber optics to integrate the input and output systems into such a simple configuration that the display becomes scalable. We implemented a 24-inch display, evaluated the visual feedback and touch detection features, and found that our implemented display encourages users to interact with it in various ways.
© All rights reserved Nakajima et al. and/or ACM Press
Nakajima, Kosuke, Itoh, Yuichi, Tsukitani, Takayuki, Fujita, Kazuyuki, Takashima, Kazuki, Kitamura, Yoshifumi and Kishino, Fumio (2011): FuSA2 touch display. In: Proceedings of the 2011 ACM International Conference on Interactive Tabletops and Surfaces 2011. p. D5. Available online
Fujita, Kazuyuki, Itoh, Yuichi, Takashima, Kazuki, Kitamura, Yoshifumi, Tsukitani, Takayuki and Kishino, Fumio (2010): Anchored navigation: coupling panning operation with zooming and tilting based on the anchor point on a map. In: Proceedings of the 2010 Conference on Graphics Interface 2010. pp. 233-240. Available online
We propose two novel map navigation techniques, called Anchored Zoom (AZ) and Anchored Zoom and Tilt (AZT). In these techniques, the zooming and tilting of a virtual camera are automatically coupled with users' panning displacements so that the anchor point determined by users always remains in the viewport. This allows users to manipulate the viewport without mode-switching among pan, zoom, and tilt while maintaining a sense of distance and direction from the anchor point. We conducted an experiment to evaluate AZ and AZT and compare them with Pan&Zoom (PZ) and Speed-dependent Automatic Zooming (SDAZ) in off-screen target acquisition tasks and spatial recognition tests. Results showed that the proposed techniques reduced the time needed to reach off-screen objects more effectively than the competing techniques, while preserving users' sense of distance and direction as well as PZ did.
© All rights reserved Fujita et al. and/or their publisher
Takashima, Kazuki, Omori, Yasuko, Yoshimoto, Yoshiharu, Itoh, Yuichi, Kitamura, Yoshifumi and Kishino, Fumio (2008): Effects of Avatar's Blinking Animation on Person Impressions. In: Proceedings of the 2008 Conference on Graphics Interface May 28-30, 2008, Windsor, Ontario, Canada. pp. 169-176.
Blinking is one of the most important cues for forming person impressions. We focus on the eye blinking rate of avatars and investigate its effect on viewers' subjective impressions. Two experiments were conducted. The stimulus avatars included realistic humans (male and female), cartoon-style humans (male and female), animals, and unidentified life forms, each presented as a 20-second animation with various blink rates: 9, 12, 18, 24 and 36 blinks/min. Subjects rated their impressions of the presented stimulus avatars on a seven-point semantic differential scale. The results showed a significant effect of the avatar's blinking on viewer impressions, and the effect was larger with the human-style avatars than with the others. The results also lead to several implications and guidelines for the design of avatar representation. A blink animation of 18 blinks/min with a human-style avatar produces the friendliest impression. Higher blink rates, i.e., 36 blinks/min, give inactive impressions, while lower blink rates, i.e., 9 blinks/min, give intelligent impressions. From these results, guidelines are derived for managing the attractiveness of an avatar by changing its blinking rate.
© All rights reserved Takashima et al. and/or their publisher
Takashima, Kazuki, Subramanian, Sriram, Tsukitani, Takayuki, Kitamura, Yoshifumi and Kishino, Fumio (2008): Acquisition of Off-Screen Object by Predictive Jumping. In: Lee, Seongil, Choo, Hyunseung, Ha, Sungdo and Shin, In Chul (eds.) Computer-Human Interaction 8th Asia-Pacific Conference - APCHI 2008 July 6-9, 2008, Seoul, Korea. pp. 301-310. Available online
Asano, Takeshi, Sharlin, Ehud, Kitamura, Yoshifumi, Takashima, Kazuki and Kishino, Fumio (2005): Predictive interaction using the delphian desktop. In: Proceedings of the 2005 ACM Symposium on User Interface Software and Technology 2005. pp. 133-141. Available online
This paper details the design and evaluation of the Delphian Desktop, a mechanism for online spatial prediction of cursor movements in a Windows-Icons-Menus-Pointers (WIMP) environment. Interaction with WIMP-based interfaces often becomes a spatially challenging task when the physical interaction mediators are the common mouse and a high-resolution, physically large display screen. These spatial challenges are especially evident on overly crowded Windows desktops. The Delphian Desktop integrates simple yet effective predictive spatial tracking and selection paradigms into ordinary WIMP environments in order to simplify and ease pointing tasks. Predictions are calculated by tracking cursor movements and estimating spatial intentions using a computationally inexpensive online algorithm based on estimating the movement direction and peak velocity. In testing, the Delphian Desktop effectively shortened pointing time to faraway icons and reduced the overall physical distance the mouse (and the user's hand) had to mechanically traverse.
© All rights reserved Asano et al. and/or ACM Press
Page maintainer: The Editorial Team