Number of co-authors: 23
3 favourite co-authors: Albert J. Causo, Norihiro Hagita, Hiroshi Ishiguro
3 most productive colleagues (by number of publications): Hiroshi Ishiguro (55), Norihiro Hagita (46), Takayuki Kanda (46)
Publications by Tsukasa Ogasawara (bibliography)
Kondo, Yutaka, Kawamura, Masato, Takemura, Kentaro, Takamatsu, Jun and Ogasawara, Tsukasa (2011): Gaze motion planning for android robot. In: Proceedings of the 6th International Conference on Human Robot Interaction 2011. pp. 171-172. http://dx.doi.org/10.1145/1957656.1957716
Androids are expected to show human-like behavior because their appearance resembles human physical features. We therefore propose a gaze motion planning method in which we control the convergence of the eyes and the ratio of eye angle to head angle, which leads to a more precise estimation of gaze direction. We implemented our method on the android Actroid-SIT and conducted experiments to evaluate its effects. Through these experiments, we derived common guidelines for planning more precise gaze motion on androids.
© All rights reserved Kondo et al. and/or their publisher
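To make the eye/head coordination described above concrete, here is a minimal Python sketch that splits the yaw toward a gaze target between head and eyes and adds a vergence term per eye. The ratio, interocular distance, and sign conventions are illustrative assumptions, not values from the paper or the Actroid-SIT controller.

import math

# Hypothetical parameters, not taken from the paper.
EYE_TO_HEAD_RATIO = 0.3   # fraction of the total yaw carried by the eyes
INTEROCULAR_DIST = 0.065  # metres between the eye centres

def plan_gaze(target_x, target_y, target_z):
    """Split the yaw toward a target (in head coordinates, metres)
    between head and eyes, and add a symmetric vergence per eye.

    Returns (head_yaw, left_eye_yaw, right_eye_yaw) in radians,
    assuming positive yaw turns toward the android's left.
    """
    total_yaw = math.atan2(target_x, target_z)  # yaw needed to face the target
    eye_yaw = total_yaw * EYE_TO_HEAD_RATIO     # eyes take a fixed share...
    head_yaw = total_yaw - eye_yaw              # ...the head takes the rest
    distance = math.hypot(target_x, target_y, target_z)
    # Symmetric vergence: each eye rotates inward toward the target line.
    vergence = math.atan2(INTEROCULAR_DIST / 2, distance)
    return head_yaw, eye_yaw - vergence, eye_yaw + vergence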
Takemura, Kentaro, Ito, Akihiro, Takamatsu, Jun and Ogasawara, Tsukasa (2011): Active bone-conducted sound sensing for wearable interfaces. In: Proceedings of the 2011 ACM Symposium on User Interface Software and Technology 2011. pp. 53-54. http://dx.doi.org/10.1145/2046396.2046419
In this paper, we propose a wearable sensor system that measures the elbow angle and the position tapped by a finger using bone-conducted sound. The system consists of two microphones and a speaker attached to the forearm. The novelty of this work is the use of active sensing to measure the elbow angle: the speaker emits sound into a bone, and a microphone receives the sound reflected at the elbow. The reflection of the sound depends on the elbow angle. Since the frequencies of the bone-conducted sound produced by tapping and emitted by the speaker differ, the two techniques can be used simultaneously. We confirmed the feasibility of the proposed system through experiments.
© All rights reserved Takemura et al. and/or ACM Press
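The simultaneous use of tap sensing and active probing rests on the two signals occupying different frequency bands. A minimal sketch of that separation follows, assuming SciPy; the band limits and sampling rate are invented for illustration, as the abstract gives no values.

import numpy as np
from scipy.signal import butter, sosfilt

FS = 44_100  # Hz, assumed microphone sampling rate

# Assumed bands: a low band for finger taps and a narrow high band
# around the speaker's probe tone.
TAP_BAND = (50.0, 1_000.0)
PROBE_BAND = (9_000.0, 11_000.0)

def split_channels(mic_signal):
    """Separate one bone-conduction microphone signal into the tap
    component and the reflected-probe component by band-pass filtering."""
    tap_sos = butter(4, TAP_BAND, btype="bandpass", fs=FS, output="sos")
    probe_sos = butter(4, PROBE_BAND, btype="bandpass", fs=FS, output="sos")
    taps = sosfilt(tap_sos, mic_signal)
    probe = sosfilt(probe_sos, mic_signal)
    # The probe band's energy varies with the reflection at the elbow,
    # so its RMS can be mapped to a joint angle after calibration.
    elbow_feature = np.sqrt(np.mean(probe ** 2))
    return taps, elbow_feature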
Causo, Albert J., Matsuo, Mai, Ueda, Etsuko, Matsumoto, Yoshio and Ogasawara, Tsukasa (2009): Individualization of voxel-based hand model. In: Proceedings of the 4th ACM/IEEE International Conference on Human Robot Interaction 2009. pp. 219-220. http://doi.acm.org/10.1145/1514095.1514145
Improvements in hand pose estimation, made possible by refining the model matching step, are necessary for creating a more natural human-robot interface. Individualizing the user's 3D hand model can result in better hand pose estimation. This paper presents a way to accomplish this individualization by estimating the lengths of the finger links (bones), which are unique to every user. The 3D model of the hand is built from voxel data derived from silhouette images captured by multiple cameras, and the finger link lengths are estimated by searching a set of models generated from a calibration motion of the fingers. Initial pose estimation results using the model show the feasibility of the system.
© All rights reserved Causo et al. and/or ACM Press
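The individualization step amounts to a search over candidate link lengths for the model that best explains the observed voxels. A toy Python sketch of such a search is below; the render_model callable is a hypothetical stand-in for the paper's model generation from calibration motions.

import numpy as np

def individualize_link_length(observed_voxels, render_model, candidates):
    """Pick the finger-link length whose generated hand model best
    overlaps the observed voxel data.

    observed_voxels: boolean 3D occupancy grid from the silhouettes.
    render_model:    callable length -> boolean grid of the same shape
                     (hypothetical; the real system poses a full hand model).
    candidates:      iterable of link lengths to test, e.g. np.linspace(...).
    """
    def overlap(a, b):
        inter = np.logical_and(a, b).sum()
        union = np.logical_or(a, b).sum()
        return inter / union if union else 0.0  # Jaccard similarity

    return max(candidates, key=lambda length: overlap(observed_voxels, render_model(length)))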
Ido, Junichi, Ueda, Etsuko, Matsumoto, Yoshio and Ogasawara, Tsukasa (2007): Robotic telecommunication system based on facial information measurement. In: Proceedings of the 2007 International Conference on Intelligent User Interfaces 2007. pp. 266-269. http://doi.acm.org/10.1145/1216295.1216343
This paper proposes a multi-modal telecommunication system using a facial expression robot. We developed a telecommunication system that projects the facial expression of an operator to a remote place through the facial expression robot "Infanoid2." The operator's facial information is measured by a stereo camera system and reproduced by the robot in order to communicate with a person in a remote location. An impression evaluation experiment was performed using this system. This paper discusses the effectiveness of robots as a telecommunication medium based on the experimental results.
© All rights reserved Ido et al. and/or ACM Press
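The measure-and-project loop described above can be pictured as retargeting measured facial parameters to robot actuator commands. The parameter names and servo ranges in this sketch are assumptions for illustration, not Infanoid2's actual interface.

def retarget_face(params):
    """Map normalised facial measurements (0..1) to servo angles in
    degrees. A toy retargeting, not the system's real mapping."""
    return {
        "brow": 20 * params["brow_raise"] - 10,       # -10..10 deg
        "mouth": 30 * params["mouth_open"],           # 0..30 deg
        "head_yaw": 60 * (params["head_yaw"] - 0.5),  # -30..30 deg
    }

# Operator side measures, remote side actuates:
commands = retarget_face({"brow_raise": 0.8, "mouth_open": 0.2, "head_yaw": 0.55})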
Hayashi, Kotaro, Sakamoto, Daisuke, Kanda, Takayuki, Shiomi, Masahiro, Koizumi, Satoshi, Ishiguro, Hiroshi, Ogasawara, Tsukasa and Hagita, Norihiro (2007): Humanoid robots as a passive-social medium: a field experiment at a train station. In: Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction 2007. pp. 137-144. http://doi.acm.org/10.1145/1228716.1228735
This paper reports a method that uses humanoid robots as a communication medium. There are many interactive robots under development, but due to their limited perception, their interactivity is still far poorer than that of humans. Our approach in this paper is to limit robots' purpose to a non-interactive medium and to look for a way to attract people's interest in the information that robots convey. We propose using robots as a passive-social medium, in which multiple robots converse with each other. We conducted a field experiment at a train station for eight days to investigate the effects of a passive-social medium.
© All rights reserved Hayashi et al. and/or ACM Press
Khiat, Abdelaziz, Toyota, Masataka, Matsumoto, Yoshio and Ogasawara, Tsukasa (2006): Investigating the relation between robot bodily expressions and their impression on the user. In: Proceedings of the 2006 International Conference on Intelligent User Interfaces 2006. pp. 339-341. http://doi.acm.org/10.1145/1111449.1111530
During an interaction process, people usually adapt their behavior according to their interpretation of their partner's bodily expressions. It is not known to what extent similar expressions performed by robots affect a human observer. This paper explores this issue. The study shows a correlation between the nature of the bodily expressions, as rated in questionnaires, and their effect on brain activity. It demonstrates that unpleasant bodily expressions of the robot elicit unpleasant impressions and vice versa. This was observed as brain activity in one area when the expression is pleasant, and in another area when it is unpleasant.
© All rights reserved Khiat et al. and/or ACM Press
Suenaga, Tsuyoshi, Matsumoto, Yoshio and Ogasawara, Tsukasa (2005): 3D display based on motion parallax using non-contact 3D measurement of head position. In: Proceedings of OZCHI05, the CHISIG Annual Conference on Human-Computer Interaction 2005. pp. 1-4. http://portal.acm.org/citation.cfm?id=1108368.1108424
In this paper, a novel non-contact 3D display based on motion parallax is proposed. The user's 3D viewpoint is measured by a real-time, non-contact measurement system. By moving their head while watching a CG image that corresponds to the measured viewpoint, the user can perceive 3D information on a normal flat display. Basic experiments on depth perception using a single eye and both eyes were conducted to show the feasibility of the system.
© All rights reserved Suenaga et al. and/or their publisher
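Head-coupled rendering of this kind typically computes an asymmetric (off-axis) view frustum from the measured head position, so the flat display behaves like a window onto the scene. A minimal sketch follows, with an assumed physical screen size; the function is illustrative, not code from the paper.

# Head position is given relative to the screen centre, in metres,
# with z the distance from the screen plane.
SCREEN_W, SCREEN_H = 0.40, 0.30  # assumed physical screen size

def head_coupled_frustum(head_x, head_y, head_z, near=0.01):
    """Return (left, right, bottom, top, near) for an off-axis
    perspective frustum, e.g. to pass to OpenGL's glFrustum each frame."""
    # Scale the screen-edge offsets seen from the head down to the near plane.
    s = near / head_z
    left = (-SCREEN_W / 2 - head_x) * s
    right = (SCREEN_W / 2 - head_x) * s
    bottom = (-SCREEN_H / 2 - head_y) * s
    top = (SCREEN_H / 2 - head_y) * s
    return left, right, bottom, top, near

As the head moves, the frustum skews accordingly and the rendered CG image shifts with motion parallax, which is what lets a single flat display convey depth.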
Kondo, Masahiro, Ueda, Jun, Matsumoto, Yoshio and Ogasawara, Tsukasa (2005): Evaluation of Manipulative Familiarization and Fatigue Based on Contact State Transition. In: WHC 2005 - World Haptics Conference 18-20 March, 2005, Pisa, Italy. pp. 102-107. http://csdl.computer.org/comp/proceedings/whc/2005/2310/00/23100102abs.htm
Khiat, Abdelaziz, Matsumoto, Yoshio and Ogasawara, Tsukasa (2004): Task specific eye movements understanding for a gaze-sensitive dictionary. In: Nunes, Nuno Jardim and Rich, Charles (eds.) International Conference on Intelligent User Interfaces 2004 January 13-16, 2004, Funchal, Madeira, Portugal. pp. 265-267. http://doi.acm.org/10.1145/964442.964500
In this paper, we study the relation between the user's degree of understanding and his/her eye movements, in an effort to realize a proactive interface that monitors the user and provides contextual support. The application is a gaze-sensitive dictionary that helps the user when reading text in a browser window. Not only is the user's gaze analyzed, but also the context, and thus the difficulty of the text being read. The experimental results suggest using regressions as an indicator to trigger the help process, along with a context-grounding approach.
© All rights reserved Khiat et al. and/or ACM Press
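Using regressions as a help trigger can be sketched as counting backward saccades in the fixation stream and combining that rate with a difficulty estimate of the text. The thresholds and names below are assumptions, not values from the paper.

def regression_rate(fixation_xs, min_jump=30):
    """Fraction of saccades that move backwards (right to left, for a
    left-to-right script) by more than min_jump pixels."""
    if len(fixation_xs) < 2:
        return 0.0
    saccades = [b - a for a, b in zip(fixation_xs, fixation_xs[1:])]
    regressions = sum(1 for dx in saccades if dx < -min_jump)
    return regressions / len(saccades)

def should_offer_help(fixation_xs, text_difficulty, rate_threshold=0.3):
    """Trigger the dictionary when the reader re-reads often AND the
    context suggests difficult text (the context-grounding part)."""
    return regression_rate(fixation_xs) > rate_threshold and text_difficulty > 0.5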
Koeda, Masanao, Matsumoto, Yoshio and Ogasawara, Tsukasa (2004): Annotation-Based Assistance System for Unmanned Helicopter with Wearable Augmented Reality Environment. In: 3rd IEEE and ACM International Symposium on Mixed and Augmented Reality ISMAR 2004 2-5 November, 2004, Arlington, VA, USA. pp. 288-289. http://doi.ieeecomputersociety.org/10.1109/ISMAR.2004.15