Publication statistics

Pub. period: 2003-2012
Pub. count: 14
Number of co-authors: 31



Co-authors

Number of publications with 3 favourite co-authors:

Hirotaka Osawa: 6
Takayuki Kanda: 6
Norihiro Hagita: 5

 

 

Productive colleagues

Michita Imai's 3 most productive colleagues in number of publications:

Hiroshi Ishiguro: 55
Norihiro Hagita: 46
Takayuki Kanda: 46
 
 
 




Michita Imai


Publications by Michita Imai (bibliography)

2012

Matsumoto, Takahiro, Satake, Satoru, Kanda, Takayuki, Imai, Michita and Hagita, Norihiro (2012): Do you remember that shop?: computational model of spatial memory for shopping companion robots. In: Proceedings of the 7th International Conference on Human-Robot Interaction 2012. pp. 447-454.

We aim to develop a shopping companion robot that can share experiences with users. In this study, we focused on the shared memory acquired when a robot walks together with a user. We developed a computational model of memory recall of visited locations in a shopping mall. The model was developed from data collected from 30 participants. We found that shop size, color intensity of the facade, relative visibility, and elapsed time are the features that influence recall. The model was used in a shopping companion robot scenario. The robot, Robovie, autonomously follows a user while inferring the user's memory recall of shops along the visited route. When the user asks for the location of another shop, Robovie replies with a destination description that refers to the known locations inferred with the model of the user's memory recall. With this scenario, we verified the effectiveness of the developed computational model of memory recall. The evaluation experiment revealed that the model outputs shops that participants are likely to recall, which makes the given directions easier to understand.

© All rights reserved Matsumoto et al. and/or their publisher
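
The abstract names the features that drive recall (shop size, facade color intensity, relative visibility, elapsed time) but not the model's functional form. A minimal sketch of how such a recall model could feed destination descriptions, assuming a logistic combination with placeholder weights; the coefficients and feature scales are illustrative, not the authors' published values:

```python
import math

# Hypothetical feature weights; the paper identifies the features, but the
# coefficients below are illustrative placeholders, not published values.
WEIGHTS = {
    "shop_size": 0.8,            # larger shops assumed easier to recall
    "facade_color": 0.5,         # more intense facade colors aid recall
    "relative_visibility": 0.6,  # how visible the shop was along the route
    "minutes_elapsed": -0.02,    # recall decays with time since the visit
}
BIAS = -1.0

def recall_probability(shop: dict) -> float:
    """Logistic model of whether a user recalls a visited shop."""
    z = BIAS + sum(WEIGHTS[k] * shop[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def known_landmarks(visited_shops: list, threshold: float = 0.5) -> list:
    """Shops the user is likely to recall; usable as reference points
    when the robot describes the way to another shop."""
    return [s["name"] for s in visited_shops
            if recall_probability(s) >= threshold]

shops = [
    {"name": "large toy store", "shop_size": 3.0, "facade_color": 2.0,
     "relative_visibility": 2.5, "minutes_elapsed": 10},
    {"name": "small kiosk", "shop_size": 0.5, "facade_color": 0.3,
     "relative_visibility": 0.4, "minutes_elapsed": 45},
]
print(known_landmarks(shops))  # e.g. ['large toy store']
```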

 

Ogata, Masa, Sugiura, Yuta, Osawa, Hirotaka and Imai, Michita (2012): iRing: intelligent ring using infrared reflection. In: Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 131-136.

We present the iRing, an intelligent input ring device developed for measuring finger gestures and external input. iRing recognizes rotation, finger bending, and external force via an infrared (IR) reflection sensor that leverages skin characteristics such as reflectance and softness. Furthermore, iRing supports the push and stroke input methods that are popular on touch displays. The ring design has potential as a wearable controller because its accessory shape is socially acceptable, easy to put on, and safe, and iRing requires no extra devices. We present examples of iRing applications and discuss its validity as an inexpensive wearable interface and as a human sensing device.

© All rights reserved Ogata et al. and/or ACM Press
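
The abstract describes sensing rotation, bending, and external force from IR reflectance off the skin, but not the decision logic. A minimal sketch assuming a single normalized reflectance channel and hand-picked thresholds; the levels below are invented for illustration, not calibrated values from the paper:

```python
# Hypothetical classification of iRing-style input from a normalized IR
# reflectance reading (0.0-1.0). Bending compresses the skin under the
# sensor and raises reflectance; an external press raises it further.
# The thresholds below are illustrative, not values from the paper.

REST_LEVEL = 0.30   # reflectance with the finger relaxed
BEND_LEVEL = 0.55   # reflectance while the finger is bent
PRESS_LEVEL = 0.80  # reflectance under external force on the ring

def classify(reflectance: float) -> str:
    if reflectance >= PRESS_LEVEL:
        return "external force (push)"
    if reflectance >= BEND_LEVEL:
        return "finger bending"
    if reflectance >= REST_LEVEL:
        return "rest"
    return "ring rotated off the skin"  # sensor no longer facing the finger

for sample in (0.25, 0.35, 0.60, 0.85):
    print(sample, "->", classify(sample))
```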

2011
 

Voisin, Thibault, Osawa, Hirotaka, Yamada, Seiji and Imai, Michita (2011): Between real-world and virtual agents: the disembodied robot. In: Proceedings of the 6th International Conference on Human Robot Interaction 2011. pp. 281-282.

In this study, we propose a disembodied real-world agent and study the influence of this disembodiment on the social separation between the user and the agent. To give the user a cue to the robot's presence and to enable visual feedback, we use independent robotic body parts that mimic human hands and eyes. This robot is also able to share real-world space with the user and react to the user's presence through 3D detection and oral communication. We thus obtain an agent with a strong presence while keeping good space efficiency, which removes the social barrier between user and agent.

© All rights reserved Voisin et al. and/or their publisher

2010
 

Osawa, Hirotaka, Matsuda, Yuji, Ohmura, Ren and Imai, Michita (2010): Toward the body image horizon: how do users recognize the body of a robot?. In: Proceedings of the 5th ACM/IEEE International Conference on Human Robot Interaction 2010. pp. 179-180.

In this study, we investigated the boundary for recognizing robots. Many anthropomorphic robots are used for interactions with users. These robots show various body forms and appearances, which are recognized by their users. This ability to recognize a variety of robotic appearances suggests that a user can recognize a wide range of imaginary body forms beyond the native human appearance. We attempted to determine the boundary for the recognition of robot appearances. On the basis of our previous studies, we hypothesized that the discrimination of robot appearances depends on the order of the parts. If the body parts of a robot are placed in order from top to bottom, the user can recognize the assembly as a robot body. We performed a human-robot experiment in which we compared the results for robots with ordered parts against those for robots with inverted parts. The results showed that the users' perception of the robot's body differed between the two groups, confirming our hypothesized boundary for the recognition of robot appearances.

© All rights reserved Osawa et al. and/or their publisher

 

Hato, Yasuhiko, Satake, Satoru, Kanda, Takayuki, Imai, Michita and Hagita, Norihiro (2010): Pointing to space: modeling of deictic interaction referring to regions. In: Proceedings of the 5th ACM/IEEE International Conference on Human Robot Interaction 2010. pp. 301-308.

In daily conversation, we sometimes observe a deictic interaction scene that refers to a region in a space, such as saying "please put it over there" while pointing. How can such an interaction be made possible with a robot? Is it enough to simulate people's behaviors, such as utterance and pointing? Instead, we highlight the importance of simulating human cognition. In the first part of our study, we empirically demonstrate the importance of simulating human cognition of regions when a robot engages in a deictic interaction that refers to a region in a space. The experiments indicate that a robot with simulated cognition of regions improves the efficiency of its deictic interaction. In the second part, we present a method for a robot to computationally simulate cognition of regions.

© All rights reserved Hato et al. and/or their publisher
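
The abstract states that the robot computationally simulates human cognition of regions without spelling out the method. A speculative minimal sketch of one ingredient such a system needs, resolving a pointing gesture to a named region; the region representation and scoring rule are assumptions, not the paper's model:

```python
import math

# Hypothetical named regions of a room, each with a centroid (x, y) in
# meters. The real paper models region cognition; this lookup is only a
# stand-in for illustration.
REGIONS = {
    "near the shelf": (2.0, 3.0),
    "by the door":    (5.0, 0.5),
    "table corner":   (1.0, 0.5),
}

def resolve_region(robot_xy, pointing_angle_rad, regions=REGIONS):
    """Pick the region whose centroid lies closest to the pointing ray,
    as a crude proxy for the listener's cognition of 'over there'."""
    best, best_err = None, math.inf
    for name, (x, y) in regions.items():
        bearing = math.atan2(y - robot_xy[1], x - robot_xy[0])
        # smallest signed angular difference between bearing and pointing
        err = abs(math.atan2(math.sin(bearing - pointing_angle_rad),
                             math.cos(bearing - pointing_angle_rad)))
        if err < best_err:
            best, best_err = name, err
    return best

print(resolve_region((0.0, 0.0), math.radians(55)))  # -> 'near the shelf'
```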

2009
 

Okuno, Yusuke, Kanda, Takayuki, Imai, Michita, Ishiguro, Hiroshi and Hagita, Norihiro (2009): Providing route directions: design of robot's utterance, gesture, and timing. In: Proceedings of the 4th ACM/IEEE International Conference on Human Robot Interaction 2009. pp. 53-60.

Providing route directions is a complicated interaction: utterances are combined with gestures and delivered with appropriate timing. This study proposes a model for a robot that generates route directions by integrating three crucial elements: utterances, gestures, and timing. Two research questions must be answered in this modeling process. First, is it useful to have the robot perform gestures even though the information they convey is also given by the utterance? Second, is it useful to implement the timing with which humans speak? Many previous studies of natural behavior in computers and robots have learned from human speakers, for example their gestures and speech timing. Our approach is different: we emphasized the listener's perspective. Gestures were designed based on their usefulness, although we were influenced by the basic structure of human gestures. Timing was not based on how humans speak but was modeled from how they listen. The experimental results demonstrated the effectiveness of our approach, not only for task efficiency but also for perceived naturalness.

© All rights reserved Okuno et al. and/or ACM Press

 

Satake, Satoru, Kanda, Takayuki, Glas, Dylan F., Imai, Michita, Ishiguro, Hiroshi and Hagita, Norihiro (2009): How to approach humans?: strategies for social robots to initiate interaction. In: Proceedings of the 4th ACM/IEEE International Conference on Human Robot Interaction 2009. pp. 109-116.

This paper proposes a model of approach behavior with which a robot can initiate conversation with people who are walking. We developed the model by learning from the failures of a simplistic approach behavior used in a real shopping mall. Sometimes people were unaware of the robot's presence, even when it spoke to them. At other times, people were not sure whether the robot was really trying to start a conversation, and they did not start talking with it even though they displayed interest. To prevent such failures, our model includes the following functions: predicting the walking behavior of people, choosing a target person, planning its approach path, and nonverbally indicating its intention to initiate a conversation. The approach model was implemented and used in a real shopping mall. The field trial demonstrated that our model significantly improves the robot's performance in initiating conversations.

© All rights reserved Satake et al. and/or ACM Press
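
The abstract lists the model's functions (predicting walking behavior, choosing a target person, planning an approach path, signaling intent) without implementation detail. A minimal sketch of the target-selection step, assuming constant-velocity prediction; the scoring rule and thresholds are illustrative:

```python
from dataclasses import dataclass
import math

@dataclass
class Pedestrian:
    name: str
    pos: tuple   # (x, y) position in meters
    vel: tuple   # (vx, vy) velocity in m/s

def predicted_pos(p, horizon_s=3.0):
    """Constant-velocity prediction of where the pedestrian will be."""
    return (p.pos[0] + p.vel[0] * horizon_s,
            p.pos[1] + p.vel[1] * horizon_s)

def choose_target(robot_xy, people, max_speed=1.2, reach=5.0):
    """Pick an approachable pedestrian: slow enough to intercept and
    predicted to remain within reach. The scoring is an illustrative
    stand-in, not the paper's learned model."""
    candidates = []
    for p in people:
        speed = math.hypot(*p.vel)
        fx, fy = predicted_pos(p)
        dist = math.hypot(fx - robot_xy[0], fy - robot_xy[1])
        if speed <= max_speed and dist <= reach:
            candidates.append((dist + speed, p))  # prefer near and slow
    if not candidates:
        return None                               # nobody approachable now
    return min(candidates, key=lambda c: c[0])[1].name

people = [Pedestrian("A", (2, 0), (0.5, 0)),    # strolling nearby
          Pedestrian("B", (3, 3), (1.5, 1.5))]  # hurrying away
print(choose_target((0, 0), people))            # -> 'A'
```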

 

Noda, Masato, Osumi, Toshihiro, Fujimoto, Kenta, Kuwayama, Yuki, Osawa, Hirotaka, Imai, Michita and Shinozawa, Kazuhiko (2009): Blog robot: a new style for accessing location-based contents. In: Proceedings of the 4th ACM/IEEE International Conference on Human Robot Interaction 2009. pp. 203-204.

We propose a portable robot named "Blog Robot" that presents blog contents using verbal and nonverbal expression. Blog Robot is a robotized smartphone with a head and arms for making hand gestures, eye contact, and joint attention. Blogs are widely used to express personal views or record daily occurrences, and much of the information posted on them relates to particular places, such as tourist sites or shops. Meanwhile, people usually sit in front of a PC and read blogs as text and images in a Web browser. However, this style of reading does not convey the authentic situations the blog writers describe. The user carries Blog Robot like a cellular phone and can browse blogs related to the user's current location, accessing each blog at the real scene its contents describe. Blog Robot delivers the blog content by reading it aloud with synthesized speech. In particular, the nonverbal information generated by Blog Robot enhances the spoken content, as if the blog writer were standing beside the user while telling it. This browsing method is expected to give the user more realistic information than a Web browser on a PC, and it enables the user to share the experience with the blog writer. In addition, since browsing through Blog Robot happens at the location the blog writer once visited, the blog writer receives more relevant feedback than is possible from readers sitting in front of their PCs, who are not there. We have also designed tags for generating Blog Robot's nonverbal expression; the tags are embedded in the blog text. The tags can be used not only by Blog Robot but also on a PC: when the user reads a blog containing the tags, they are displayed as icons in the Web browser.

© All rights reserved Noda et al. and/or ACM Press
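
The abstract mentions tags embedded in the blog text that drive Blog Robot's nonverbal expression and render as icons in a PC browser, but it does not give their syntax. A minimal sketch assuming a bracketed tag format; the markup and gesture names are invented for illustration:

```python
import re

# Hypothetical tag syntax: "[gesture:...]" markers embedded in blog text.
# The real tag format is not given in the abstract; this is illustrative.
TAG = re.compile(r"\[gesture:(\w+)\]")

GESTURES = {"point_left": "extend left arm", "nod": "nod head",
            "look_up": "raise gaze for joint attention"}

def parse_blog(text: str):
    """Split a tagged blog post into (speech_text, gesture_cues).
    Each cue is (character_offset_in_speech_text, gesture_name)."""
    cues, plain, last = [], [], 0
    for m in TAG.finditer(text):
        plain.append(text[last:m.start()])
        cues.append((len("".join(plain)), m.group(1)))
        last = m.end()
    plain.append(text[last:])
    return "".join(plain), cues

post = "The bakery[gesture:point_left] over there has great bread.[gesture:nod]"
speech, cues = parse_blog(post)
print(speech)  # clean text for the speech synthesizer
for offset, g in cues:
    print(f"at char {offset}: {GESTURES.get(g, g)}")
```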

 

Osawa, Hirotaka, Ohmura, Ren and Imai, Michita (2009): Anthropomorphization method using attachable humanoid parts. In: Proceedings of the 4th ACM/IEEE International Conference on Human Robot Interaction 2009. pp. 207-208.

With this video, we propose a new human-robot interaction that anthropomorphizes a common target object and transforms it into a communicative agent using attachable humanoid parts. Through the attached body parts, the user perceives the target as having its own intentions and body image. The video demonstrates the anthropomorphization method as follows. First, it shows the setup process, in which a demonstrator attaches parts such as eye-like parts, arm-like parts, and a camera to a common electric oven; the oven becomes a communicative robot through these parts. Second, the video explains three applications achieved by anthropomorphized objects: self-advertisement, self-presentation, and an interactive manual. In the self-advertisement situation, the anthropomorphized oven attracts customers and explains its functions by itself; this situation assumes such devices will be used in shops in the future. In the self-presentation situation, an anthropomorphized poster explains its contents by itself, with no other explainer; this situation assumes such devices are used in poster presentations. In the interactive-manual situation, an anthropomorphized printer explains its functions interactively; this explanation is intuitive and understandable for children and elderly people. The anthropomorphized printer is then compared, through gaze-direction analysis, with an explanation given by the humanoid robot Robovie. In the Robovie condition, the guidance fails because the robot draws attention away from the target itself, whereas in the anthropomorphized-printer condition users can concentrate on the interaction. Finally, we present an anthropomorphized shredder that uses eye-like parts, arm-like parts, and a skin sensor. The shredder presents its interactive manual as in the printer situation, but here the shredder detects the user's touch and proceeds with the interaction instead of waiting to detect the user's voice.

© All rights reserved Osawa et al. and/or ACM Press

 

Osawa, Hirotaka, Ohmura, Ren and Imai, Michita (2009): Self introducing poster using attachable humanoid parts. In: Proceedings of the 4th ACM/IEEE International Conference on Human Robot Interaction 2009. pp. 327-328.

In this paper, we propose a new robotic presentation method called the Self-Introducing Poster, which uses attachable humanoid parts and explains its contents in a self-introduction style. Presentation by a conventional robot sometimes fails because the robot presenter is often too attractive and distracts from the presentation itself. In our method, the poster is anthropomorphized and explains its own contents. Thanks to this self-presentation, users can more easily understand the poster's meaning because the information's contents and the information provider are strongly related. We designed and implemented our system and evaluated it in the field. The results suggest that the self-introducing system is useful for gaining users' attention and presenting information effectively.

© All rights reserved Osawa et al. and/or ACM Press

2008
 

Shiwa, Toshiyuki, Kanda, Takayuki, Imai, Michita, Ishiguro, Hiroshi and Hagita, Norihiro (2008): How quickly should communication robots respond?. In: Proceedings of the 3rd ACM/IEEE International Conference on Human Robot Interaction 2008. pp. 153-160.

This paper reports a study of system response time (SRT) in communication robots that utilize human-like social features, such as an anthropomorphic appearance and conversation in natural language. The purpose of our research was to establish a design guideline for SRT in communication robots. The first experiment observed user preferences toward different SRTs in interaction with a robot. In other existing user interfaces, a faster response is usually preferred. In contrast, our experimental results indicated that user preference for SRT in a communication robot is highest at one second, and preference ratings level off at two seconds. However, a robot cannot always respond within one or two seconds. Thus, the important question is: what should a robot do if it cannot respond quickly enough? The second experiment tested the effectiveness of a conversational filler, a behavior that notifies listeners that the robot is going to respond. In Japanese, "etto" is used to buy time to think and resembles "well..." and "uh..." in English. We used the same strategy in a communication robot to cover the system response time. Our results indicated that the robot's use of a conversational filler moderated users' impressions of a long SRT.

© All rights reserved Shiwa et al. and/or ACM Press
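
The guideline reported here (preference peaks at a one-second SRT, levels off at two seconds, and a filler such as "etto" softens longer waits) maps onto a simple response policy. A minimal sketch; the function names and the idea of a pre-computed SRT estimate are assumptions, not the paper's system:

```python
import time

PREFERRED_SRT = 1.0   # seconds; user preference peaks here per the study
TOLERATED_SRT = 2.0   # preference ratings level off at two seconds

def respond(robot_say, estimated_srt: float, compute_answer):
    """If the answer cannot be produced within ~2 s, emit a conversational
    filler first to soften the wait, then deliver the answer."""
    if estimated_srt > TOLERATED_SRT:
        robot_say("etto...")        # Japanese filler, like "well..." or "uh..."
    answer = compute_answer()       # the slow part (recognition, planning, ...)
    robot_say(answer)

# Illustrative usage with stand-in functions.
def slow_answer():
    time.sleep(3)                   # pretend planning takes 3 seconds
    return "The restroom is on the second floor."

respond(print, estimated_srt=3.0, compute_answer=slow_answer)
```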

2007
 

Ishii, Kentaro, Yamamoto, Yukiko, Imai, Michita and Nakadai, Kazuhiro (2007): A Navigation System Using Ultrasonic Directional Speaker with Rotating Base. In: Smith, Michael J. and Salvendy, Gavriel (eds.) Symposium on Human Interface 2007 - Part II July 22-27, 2007, Beijing, China. pp. 526-535.

2005
 

Sakamoto, Daisuke, Kanda, Takayuki, Ono, Tetsuo, Kamashima, Masayuki, Imai, Michita and Ishiguro, Hiroshi (2005): Cooperative embodied communication emerged by interactive humanoid robots. In International Journal of Human-Computer Studies, 62 (2) pp. 247-265.

Research on humanoid robots has produced various uses for their body properties in communication. In particular, mutual relationships of body movements between a robot and a human are considered important for smooth and natural communication, as they are in human-human communication. We have developed a semi-autonomous humanoid robot system that is capable of cooperative body movements with humans, using environment-based sensors and switching communicative units. Concretely, this system realizes natural communication by using typical behaviors such as "nodding," "eye contact," and "face-to-face" orientation. It is important to note that the robot parts are NOT operated directly; only the communicative units in the robot system are switched. We conducted an experiment using this robot system and verified the importance of cooperative behaviors in a route-guidance situation where a human gives directions to the robot. The task requires a human participant (the "speaker") to teach a route to a "hearer" that is (1) a human, (2) the developed robot performing cooperative movements, or (3) a robot that does not move at all. The experiment was evaluated subjectively through a questionnaire and through an analysis of body movements using three-dimensional data from a motion-capture system. The results indicate that cooperative body movements greatly enhance the emotional impressions of human speakers in a route-guidance situation. We believe these results will allow us to develop interactive humanoid robots that communicate sociably with humans.

© All rights reserved Sakamoto et al. and/or Academic Press
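
The system is described as switching "communicative units" such as nodding, eye contact, and face-to-face orientation rather than operating robot parts directly. A minimal sketch of that switching layer; the unit names come from the abstract, while the trigger conditions and priority order are illustrative assumptions:

```python
# Hypothetical communicative-unit switcher. The semi-autonomous system in
# the paper switches among units like these instead of teleoperating the
# robot's parts; the triggers and priority below are illustrative.

UNITS = {
    "eye-contact":  lambda s: s["user_gazing_at_robot"],
    "nodding":      lambda s: s["user_speaking"],
    "face-to-face": lambda s: s["user_facing_robot"],
}

PRIORITY = ["nodding", "eye-contact", "face-to-face"]

def active_unit(sensed: dict) -> str:
    """Return the first applicable unit; the robot's body movement is
    produced by whichever unit is active, not by direct operation."""
    for name in PRIORITY:
        if UNITS[name](sensed):
            return name
    return "idle"

state = {"user_speaking": True, "user_gazing_at_robot": True,
         "user_facing_robot": True}
print(active_unit(state))  # -> 'nodding' (listener backchannel wins)
```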

2003
 

Imai, Michita, Hiraki, Kazuo, Miyasato, Tsutomu, Nakatsu, Ryohei and Anzai, Yuichiro (2003): Interaction With Robots: Physical Constraints on the Interpretation of Demonstrative Pronouns. In International Journal of Human-Computer Interaction, 16 (2) pp. 367-384.

This study investigated the effect of physical constraints on the interpretation of demonstrative pronouns when a user navigates a robot. For this investigation, a robot navigation environment called Spondia-II was developed, and an experiment was conducted. It is known that the interpretation of demonstrative pronouns requires information not only about the situation (or context) but also about the speaker's viewpoint during a dialogue. The results of the experiment suggest that physical constraints do affect the user's viewpoint, especially when the user utters a demonstrative pronoun while navigating the robot. In fact, the user alters the use of demonstrative pronouns according to changes in the user's viewpoint. The results also suggest that the user and the robot share the same viewpoint during the physical interaction.

© All rights reserved Imai et al. and/or Lawrence Erlbaum Associates

 
 



Page Information

Page maintainer: The Editorial Team
URL: http://www.interaction-design.org/references/authors/michita_imai.html
