Publication statistics

Pub. period: 2000-2012
Pub. count: 47
Number of co-authors: 95



Co-authors

Number of publications with his 3 most frequent co-authors:

Maki Sugimoto: 19
Takeo Igarashi: 15
Hideaki Nii: 10

Productive colleagues

Masahiko Inami's 3 most productive colleagues, by number of publications:

Takeo Igarashi: 66
Bruce H. Thomas: 54
Susumu Tachi: 41

Masahiko Inami

Personal Homepage: inamilab.kmd.keio.ac.jp/index.php/people/faculty/2-masahiko-inami



Publications by Masahiko Inami (bibliography)

2012
 
Koizumi, Naoya, Sugimoto, Maki, Nagaya, Naohisa, Inami, Masahiko and Furukawa, Masahiro (2012): Stop motion goggle: augmented visual perception by subtraction method using high speed liquid crystal. In: Proceedings of the 2012 Augmented Human International Conference 2012. p. 14.

Stop Motion Goggle (SMG) expands visual perception by allowing users to perceive visual information selectively through a high-speed shutter. With this system, the user can easily observe not only periodic rotational motion, such as rotating fans or wheels, but also random motion like bouncing balls. In this research, we developed SMG and evaluated its effect on the visual perception of high-speed moving objects. Furthermore, this paper describes users' behaviors under the expanded visual experience.

© All rights reserved Koizumi et al. and/or ACM Press
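
The shutter's "subtraction" works stroboscopically: opening briefly at a rate near an object's rotation rate makes the object appear frozen or slowly drifting. A minimal sketch of that principle (not the authors' implementation; the frequencies are illustrative):

```python
def apparent_positions(rotation_hz, shutter_hz, openings=5):
    """Angular position (degrees) of a rotating object at each brief
    shutter opening. A small drift per opening makes it look frozen."""
    period = 1.0 / shutter_hz
    return [round((360.0 * rotation_hz * i * period) % 360.0, 1)
            for i in range(openings)]

# A fan spinning at 30 Hz looks static through a 30 Hz shutter,
# and appears to rotate slowly through a 29 Hz shutter.
print(apparent_positions(30.0, 30.0))  # [0.0, 0.0, 0.0, 0.0, 0.0]
print(apparent_positions(30.0, 29.0))  # [0.0, 12.4, 24.8, 37.2, 49.7]
```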

 
Matsumura, Kohei, Sakamoto, Daisuke, Inami, Masahiko and Igarashi, Takeo (2012): Universal earphones: earphones with automatic side and shared use detection. In: Proceedings of the 2012 International Conference on Intelligent User Interfaces 2012. pp. 305-306.

We present universal earphones that use both a proximity sensor and a skin conductance sensor, and we demonstrate several implicit interaction techniques they achieve by automatically detecting the context of use. The universal earphones have two main features. The first is detecting which ear each earphone is in, so that the correct audio channel is provided to each ear; the second is detecting shared use, which provides mixed stereo sound to both earphones. These features not only free users from having to check the left and right sides of the earphones, but also let them share stereo audio with other people.

© All rights reserved Matsumura et al. and/or ACM Press
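
A hedged sketch of the detection-and-routing logic the abstract implies; the sensor encodings and the threshold below are invented for illustration, not taken from the paper:

```python
def route_audio(bud_a_ear, bud_b_ear, inter_bud_conductance,
                shared_threshold=0.5):
    """Decide channel routing for two earbuds.

    bud_a_ear / bud_b_ear: 'left' or 'right', as classified from each
    bud's proximity sensor. inter_bud_conductance: skin conductance
    measured between the buds; a low value suggests two wearers.
    """
    if inter_bud_conductance < shared_threshold:
        # Shared use: send a mono mix to both buds.
        return {"bud_a": "mono_mix", "bud_b": "mono_mix"}
    # Single wearer: channels follow the ears, whichever bud is where.
    return {"bud_a": bud_a_ear, "bud_b": bud_b_ear}

print(route_audio("right", "left", inter_bud_conductance=0.9))
print(route_audio("left", "left", inter_bud_conductance=0.1))  # shared
```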

 
Sugiura, Yuta, Inami, Masahiko and Igarashi, Takeo (2012): A thin stretchable interface for tangential force measurement. In: Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 529-536.

We have developed a simple skin-like user interface that can be easily attached to curved as well as flat surfaces and used to measure the tangential force generated by pinching and dragging interactions. The interface consists of several photoreflectors, each comprising an IR LED and a phototransistor, and an elastic fabric such as a stocking or a rubber membrane. The sensing method is based on our observation that photoreflectors can measure the ratio of expansion and contraction of a stocking through the changes in transmissivity of IR light passing through it. Since a stocking is thin, stretchable, and nearly transparent, it can be easily attached to various types of objects, such as mobile devices, robots, and different parts of the body, as well as to various types of conventional pressure sensors, without altering the original shape of the object. It can also present natural haptic feedback in accordance with the amount of force exerted. A system using several such sensors can determine the direction of a two-dimensional force. A variety of example applications illustrate the utility of this sensing system.

© All rights reserved Sugiura et al. and/or ACM Press
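
One way to turn several stretch readings into a two-dimensional force estimate is to weight a unit vector per sensor by how far its reading departs from a resting baseline. A sketch under an assumed four-sensor layout (positions, baseline, and scaling are illustrative):

```python
import math

SENSOR_ANGLES_DEG = [0, 90, 180, 270]  # one photoreflector per side

def tangential_force(readings, baseline):
    """Sum unit vectors toward each sensor, weighted by how much the
    fabric above it has stretched; returns (direction_deg, magnitude)."""
    fx = fy = 0.0
    for angle, r in zip(SENSOR_ANGLES_DEG, readings):
        w = r - baseline
        fx += w * math.cos(math.radians(angle))
        fy += w * math.sin(math.radians(angle))
    return math.degrees(math.atan2(fy, fx)), math.hypot(fx, fy)

# Dragging toward the 0-degree sensor stretches it and relaxes the
# opposite one:
print(tangential_force([0.9, 0.5, 0.1, 0.5], baseline=0.5))  # (0.0, 0.8)
```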

 
Mi, Haipeng, Ishii, Kentaro, Ma, Lei, Laokulrat, Natsuda, Inami, Masahiko and Igarashi, Takeo (2012): Pebbles: an interactive configuration tool for indoor robot navigation. In: Adjunct Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 11-12.

This study presents an interactive configuration tool that assists non-expert users in designing a specific navigation route for a mobile robot in an indoor environment. The user places small active markers, called pebbles, on the floor along the desired route in order to guide the robot to the destination. The active markers establish a navigation network by communicating with each other via IR beacons, and the robot follows the markers to reach the designated goal. During installation, the user gets immediate feedback from LED indicators and voice prompts, showing whether the navigation route is configured as expected. With this tool, a novice user can easily customize a mobile robot for various indoor tasks.

© All rights reserved Mi et al. and/or ACM Press
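
Markers that can see each other over IR form a graph, so guiding the robot reduces to a shortest-hop search to the goal marker. A minimal sketch with made-up adjacency data (the paper's protocol and routing details are not reproduced here):

```python
from collections import deque

def route(adjacency, start, goal):
    """Breadth-first search over the marker network; returns the hop
    sequence the robot should follow, or None if the goal is unreachable."""
    frontier, came_from = deque([start]), {start: None}
    while frontier:
        node = frontier.popleft()
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        for nxt in adjacency.get(node, []):
            if nxt not in came_from:
                came_from[nxt] = node
                frontier.append(nxt)
    return None

pebbles = {"hall": ["kitchen"], "kitchen": ["hall", "sofa"], "sofa": ["kitchen"]}
print(route(pebbles, "hall", "sofa"))  # ['hall', 'kitchen', 'sofa']
```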

2011
 
Takahashi, Masato, Fernando, Charith Lasantha, Kumon, Yuto, Takeda, Shuhey, Nii, Hideaki, Tokiwa, Takuji, Sugimoto, Maki and Inami, Masahiko (2011): Earthlings Attack!: a ball game using human body communication. In: Proceedings of the 2011 Augmented Human International Conference 2011. p. 17.

In this paper, we describe a ball game, "Earthlings Attack!", that uses the contact between users and an active ball device as an information channel to the game content. When the ball device with a built-in transmitter comes in contact with a user wearing the receiver, the system transmits information from the ball device to the receiver through the user's body using human body communication. With this method, we aim to augment the interaction by presenting information on the user's body according to the contact between each ball device and each user. The system can also operate over a wide playing field within the same network by collectively managing the contact information of all balls and users.

© All rights reserved Takahashi et al. and/or ACM Press

 
Hashimoto, Sunao, Ostanin, Andrei, Inami, Masahiko and Igarashi, Takeo (2011): Snappy: snapshot-based robot interaction for arranging objects. In: Proceedings of the 6th International Conference on Human Robot Interaction 2011. pp. 411-412.

A photograph is a very useful tool for describing configurations of real-world objects to others. People immediately understand various pieces of information, such as what the target object is and where the target position is, by looking at a photograph, even without verbal descriptions. Our goal was to leverage these features of photographs to enrich human-robot interaction. We propose using photographs as a front-end between a human and a home robot system, a method we named "Snappy". The user takes a photo to record the target of a real-world task and shows it to the system to make it physically execute the task. We developed a prototype system in which the user took a photo of a dish layout on a table and showed it to the system later to have robots deliver and arrange the dishes in the same way.

© All rights reserved Hashimoto et al. and/or their publisher
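
The arrangement step, matching dishes found on the table to the positions captured in the photo, reduces to an assignment problem. A minimal greedy sketch (dish detection and robot delivery are assumed to happen elsewhere; the paper's matching may well be more robust):

```python
import math

def match_dishes(detected, photographed):
    """Pair each photographed goal position with the nearest unused
    detected dish; returns (dish_index, goal_position) pairs."""
    unused = list(range(len(detected)))
    plan = []
    for goal in photographed:
        best = min(unused, key=lambda i: math.dist(detected[i], goal))
        unused.remove(best)
        plan.append((best, goal))
    return plan

# Two dishes on the table, two goal spots read from the photo:
print(match_dishes([(0, 0), (5, 5)], [(4.5, 5.2), (0.3, 0.1)]))
```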

 
Yoshizaki, Wataru, Sugiura, Yuta, Chiou, Albert C., Hashimoto, Sunao, Inami, Masahiko, Igarashi, Takeo, Akazawa, Yoshiaki, Kawachi, Katsuaki, Kagami, Satoshi and Mochimaru, Masaaki (2011): An actuated physical puppet as an input device for controlling a digital manikin. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 637-646.

We present an actuated handheld puppet system for controlling the posture of a virtual character. Physical puppet devices have been used in the past to intuitively control character posture. In our research, an actuator is added to each joint of such an input device to provide physical feedback to the user. This enhancement offers many benefits. First, the user can upload pre-defined postures to the device to save time. Second, the system is capable of dynamically adjusting joint stiffness to counteract gravity, while allowing control to be maintained with relatively little force. Third, the system supports natural human body behaviors, such as whole-body reaching and joint coupling. This paper describes the user interface and implementation of the proposed technique and reports the results of an expert evaluation. We also conducted two user studies to evaluate the effectiveness of our method.

© All rights reserved Yoshizaki et al. and/or their publisher
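
One of the listed benefits, joint stiffness that counteracts gravity, boils down to a per-joint compensation torque. A hedged single-joint sketch with illustrative numbers (the paper's actual control loop is not reproduced here):

```python
import math

def gravity_torque(mass_kg, com_distance_m, joint_angle_rad):
    """Torque (N*m) the joint servo must add to hold a link whose centre
    of mass sits com_distance_m from the joint, with the angle measured
    from the horizontal."""
    g = 9.81  # gravitational acceleration, m/s^2
    return mass_kg * g * com_distance_m * math.cos(joint_angle_rad)

# A 50 g puppet limb with its centre of mass 4 cm from the joint,
# held horizontally, needs about 0.02 N*m of holding torque:
print(round(gravity_torque(0.05, 0.04, 0.0), 4))  # 0.0196
```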

 
Liu, Kexi, Sakamoto, Daisuke, Inami, Masahiko and Igarashi, Takeo (2011): Roboshop: multi-layered sketching interface for robot housework assignment and management. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 647-656.

As various robots come into homes, the need for efficient robot task management tools is growing. Current tools are designed for controlling individual robots independently, so they are not well suited to assigning coordinated actions among multiple robots. To address this problem, we developed a management tool for home robots with a graphical editing interface. The user assigns instructions by selecting a tool from a toolbox and sketching on a bird's-eye view of the environment. Layering supports the management of multiple tasks in the same room, and the layered graphical representation gives a quick overview of, and access to, rich information tied to the physical environment. This paper describes the prototype system and reports on our evaluation of it.

© All rights reserved Liu et al. and/or their publisher

 
Probst, Kathrin, Seifried, Thomas, Haller, Michael, Yasu, Kentaro, Sugimoto, Maki and Inami, Masahiko (2011): Move-it: interactive sticky notes actuated by shape memory alloys. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 1393-1398.

Many people still rely on pen and paper for taking short notes, and Post-Its® remain the most popular paper medium for informal note taking. In this paper, we present the design and implementation of Move-It, a system that combines the affordances of note taking on paper with the capabilities of computer systems. Furthermore, we show how common Post-It® notes can be actuated by shape memory alloys, becoming interactive sticky notes that give active physical feedback.

© All rights reserved Probst et al. and/or their publisher

 
Sugiura, Yuta, Kakehi, Gota, Withana, Anusha, Lee, Calista, Sakamoto, Daisuke, Sugimoto, Maki, Inami, Masahiko and Igarashi, Takeo (2011): Detecting shape deformation of soft objects using directional photoreflectivity measurement. In: Proceedings of the 2011 ACM Symposium on User Interface Software and Technology 2011. pp. 509-516.

We present the FuwaFuwa sensor module, a round, hand-size, wireless device for measuring the shape deformations of soft objects such as cushions and plush toys. It can be embedded in typical soft objects in the household without complex installation procedures and without spoiling the softness of the object because it requires no physical connection. Six LEDs in the module emit IR light in six orthogonal directions, and six corresponding photosensors measure the reflected light energy. One can easily convert almost any soft object into a touch-input device that can detect both touch position and surface displacement by embedding multiple FuwaFuwa sensor modules in the object. A variety of example applications illustrate the utility of the FuwaFuwa sensor module. An evaluation of the proposed deformation measurement technique confirms its effectiveness.

© All rights reserved Sugiura et al. and/or ACM Press
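
A rough sketch of how several such modules could yield a touch position and displacement, assuming each module reports a scalar reflectance that rises as the material above it is pressed; the module coordinates and thresholds are invented for illustration:

```python
MODULES = {"m1": (0.0, 0.0), "m2": (10.0, 0.0), "m3": (5.0, 8.0)}  # cm

def touch_estimate(readings, baselines, threshold=0.05):
    """Weighted centroid of modules whose readings exceed baseline:
    a rough touch position plus a total-displacement proxy."""
    wx = wy = total = 0.0
    for name, (x, y) in MODULES.items():
        press = max(0.0, readings[name] - baselines[name])
        if press > threshold:
            wx, wy, total = wx + press * x, wy + press * y, total + press
    if total == 0.0:
        return None, 0.0
    return (wx / total, wy / total), total

pos, depth = touch_estimate({"m1": 0.2, "m2": 0.6, "m3": 0.2},
                            {"m1": 0.2, "m2": 0.2, "m3": 0.2})
print(pos, depth)  # touch localized near module m2
```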

2010
 
Watanabe, Keita, Matsuda, Shota, Yasumura, Michiaki, Inami, Masahiko and Igarashi, Takeo (2010): CastOven: a microwave oven with just-in-time video clips. In: Proceedings of the 2010 International Conference on Ubiquitous Computing 2010. pp. 385-386.

In this paper, we propose a novel microwave oven called CastOven: a microwave oven with an LCD display that lets people enjoy videos while they wait for cooking to finish. Current media content forces us to adjust our schedules to enjoy it; movies in particular take a specific amount of time to watch, and it is not easy to squeeze that time into daily life. The system identifies idle time in daily life and delivers an appropriate amount of media content for the user to enjoy during it.

© All rights reserved Watanabe et al. and/or their publisher
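
The scheduling nugget, picking a clip whose length fits the cooking time, can be sketched in a few lines; the clip data and the selection rule are illustrative assumptions rather than the paper's algorithm:

```python
def pick_clip(clips, cook_seconds):
    """Return the longest clip that still fits within the cooking time,
    so the video and the microwave finish as close together as possible."""
    fitting = [c for c in clips if c["seconds"] <= cook_seconds]
    return max(fitting, key=lambda c: c["seconds"], default=None)

clips = [{"title": "news brief", "seconds": 60},
         {"title": "cartoon short", "seconds": 150},
         {"title": "music video", "seconds": 240}]
print(pick_clip(clips, cook_seconds=180))  # picks the 150 s cartoon
```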

 
Sugiura, Yuta, Sakamoto, Daisuke, Withana, Anusha, Inami, Masahiko and Igarashi, Takeo (2010): Cooking with robots: designing a household system working in open environments. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 2427-2430.

We propose a cooking system that operates in an open environment. The system cooks a meal by pouring various ingredients into a boiling pot on an induction heating cooker and adjusts the heating strength according to the user's instructions. We then describe how the system incorporates robotic- and human-specific elements in a shared workspace so as to achieve a cooperative rudimentary cooking capability. First, we use small mobile robots instead of built-in arms to save space, improve flexibility and increase safety. Second, we use detachable visual markers to allow the user to easily configure the real-world environment. Third, we provide a graphical user interface to display detailed cooking instructions to the user. We hope insights obtained in this experiment will be useful for the design of other household systems in the future.

© All rights reserved Sugiura et al. and/or their publisher

 
Ishii, Kentaro, Ishida, Akihiko, Saul, Greg, Inami, Masahiko and Igarashi, Takeo (2010): Active navigation landmarks for a service robot in a home environment. In: Proceedings of the 5th ACM/IEEE International Conference on Human Robot Interaction 2010. pp. 99-100.

This paper proposes a physical user interface that lets a user teach a robot to navigate a home environment. The user places small devices containing infrared-based communication functionality as landmarks in the environment. The robot follows these landmarks to navigate to a goal landmark. Active landmarks communicate with each other to map their spatial relationships. Our method allows the user to start using the system immediately after placing the landmarks, without installing any global position sensing system or prior mapping by the robot.

© All rights reserved Ishii et al. and/or their publisher

 
Hashimoto, Sunao, Ostanin, Andrei, Inami, Masahiko and Igarashi, Takeo (2010): Photograph-based interaction for teaching object delivery tasks to robots. In: Proceedings of the 5th ACM/IEEE International Conference on Human Robot Interaction 2010. pp. 153-154.

Personal photographs are important media for communication in our daily lives. People take photos to remember things about themselves and show them to others to share the experience. We expect that a photograph can be a useful tool for teaching a task to a robot, and we propose a novel human-robot interaction using photographs. The user takes a photo to record the target of a real-world task and shows it to the system to make it physically execute the task. We developed a prototype system in which the user took a photo of a dish arrangement on a table and showed it to the system later to have a small robot deliver and arrange the dishes in the same way.

© All rights reserved Hashimoto et al. and/or their publisher

 
Koizumi, Naoya, Yasu, Kentaro, Liu, Angela, Sugimoto, Maki and Inami, Masahiko (2010): Animated paper: a moving prototyping platform. In: Proceedings of the 2010 ACM Symposium on User Interface Software and Technology 2010. pp. 389-390.

We have developed a novel prototyping method that utilizes animated paper, a versatile platform created from paper and shape memory alloy (SMA), which is easy to control using a range of different energy sources from sunlight to lasers. We have further designed a laser point tracking system to improve the precision of the wireless control system by embedding retro-reflective material on the paper to act as light markers. It is possible to change the movement of paper prototypes by varying where to mount the SMA or how to heat it, creating a wide range of applications.

© All rights reserved Koizumi et al. and/or their publisher

 
Mistry, Pranav, Ishii, Kentaro, Inami, Masahiko and Igarashi, Takeo (2010): Blinkbot: look at, blink and move. In: Proceedings of the 2010 ACM Symposium on User Interface Software and Technology 2010. pp. 397-398.

In this paper we present BlinkBot, a hands-free input interface for controlling and commanding a robot. BlinkBot explores the natural modalities of gaze and blink to direct a robot to move an object from one location to another. The paper also details the hardware and software implementation of the prototype system.

© All rights reserved Mistry et al. and/or their publisher
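
The gaze-and-blink modality can be read as a two-state machine: a blink selects the gazed-at object, a second blink selects the destination. A hedged sketch (the event names are invented; the real pipeline involves gaze-tracking and blink-detection hardware):

```python
class BlinkBot:
    """Toy state machine for the gaze-then-blink interaction."""

    def __init__(self):
        self.picked = None

    def on_event(self, gaze_target, blinked):
        """A blink while gazing at an object picks it; a blink while
        gazing at a free location asks the robot to move it there."""
        if not blinked:
            return None
        if self.picked is None:
            self.picked = gaze_target
            return f"pick {gaze_target}"
        command = f"move {self.picked} to {gaze_target}"
        self.picked = None
        return command

bot = BlinkBot()
print(bot.on_event("red_block", blinked=True))     # pick red_block
print(bot.on_event("table_corner", blinked=True))  # move red_block to table_corner
```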

 
Shirokura, Takumi, Sakamoto, Daisuke, Sugiura, Yuta, Ono, Tetsuo, Inami, Masahiko and Igarashi, Takeo (2010): RoboJockey: real-time, simultaneous, and continuous creation of robot actions for everyone. In: Proceedings of the 2010 ACM Symposium on User Interface Software and Technology 2010. pp. 399-400.

We developed RoboJockey (Robot Jockey), an interface for coordinating robot actions, such as dancing, in the spirit of a disc jockey or video jockey. The system enables a user to choreograph a dance for a robot using a simple visual language: users coordinate a humanoid robot's actions through combinations of arm and leg movements. Every action is automatically performed to the background music and beat. RoboJockey gives end-users a new entertainment experience with robots.

© All rights reserved Shirokura et al. and/or their publisher

 
Withana, Anusha, Kondo, Makoto, Kakehi, Gota, Makino, Yasutoshi, Sugimoto, Maki and Inami, Masahiko (2010): ImpAct: enabling direct touch and manipulation for surface computing. In: Proceedings of the 2010 ACM Symposium on User Interface Software and Technology 2010. pp. 411-412.

This paper explores direct touch and manipulation techniques for surface computing platforms using a special force-feedback stylus named ImpAct (Immersive Haptic Augmentation for Direct Touch). The proposed haptic stylus changes its length when pushed against a display surface; correspondingly, a virtual stem is rendered inside the display area so that the user perceives the stylus as immersed in the digital space below the screen. We propose ImpAct as a tool to probe and manipulate digital objects in the shallow region beneath the display surface. ImpAct creates a direct touch interface by providing kinesthetic haptic sensations along with continuous visual contact with digital objects below the screen surface.

© All rights reserved Withana et al. and/or their publisher
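
The core rendering rule is simple: however much of the physical stylus retracts into its barrel, that much virtual stem is drawn below the screen. A sketch with illustrative numbers (the stylus length and spring-like force model are assumptions, not the paper's values):

```python
FULL_LENGTH_MM = 120.0  # resting stylus length; illustrative value

def virtual_stem(measured_length_mm, stiffness_n_per_mm=0.1):
    """Depth at which to render the virtual tip below the screen, plus
    a simple spring force for the actuator to resist the push with."""
    depth = max(0.0, FULL_LENGTH_MM - measured_length_mm)
    feedback_force = stiffness_n_per_mm * depth  # Hooke-style resistance
    return depth, feedback_force

# Pushing the stylus 20 mm into its barrel draws the tip 20 mm "into"
# the scene and pushes back with 2 N:
print(virtual_stem(100.0))  # (20.0, 2.0)
```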

 
Furukawa, Masahiro, Uema, Yuji, Sugimoto, Maki and Inami, Masahiko (2010): Fur interface with bristling effect induced by vibration. In: Proceedings of the 2010 Augmented Human International Conference 2010. p. 17.

Wearable computing technology is one way to augment the information processing ability of humans. In this area, however, a soft surface is often necessary to maximize the comfort and practicality of wearable devices. In this paper, we therefore propose a soft surface material with an organic bristling effect, achieved through mechanical vibration, as a new user interface. We used fur in order to exhibit the visually rich transformation induced by the bristling effect while retaining the full tactile experience and benefits of soft materials. Our method needs only a layer of fur and simple vibration motors: the hairs instantly bristle under horizontal mechanical vibration provided by a motor embedded below the fur. This technology has significant potential for garment textiles and as a general soft user interface.

© All rights reserved Furukawa et al. and/or ACM Press

2009
 
Sakamoto, Daisuke, Honda, Koichiro, Inami, Masahiko and Igarashi, Takeo (2009): Sketch and run: a stroke-based interface for home robots. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 197-200.

Numerous robots have been developed, and some of them are already being used in homes, institutions, and workplaces. Despite the development of useful robot functions, the focus so far has not been on robot user interfaces; general users find it hard to understand what robots are doing and what kind of work they can do. This paper presents an interface for commanding home robots using stroke gestures on a computer screen. The interface allows the user to control robots and design their behaviors by sketching them on a top-down view from ceiling cameras; to convey a feeling of directly controlling the robots, it employs the live camera view. In this study, we focused on a house-cleaning task typical of home robots and developed a sketch interface for designing the behaviors of vacuuming robots.

© All rights reserved Sakamoto et al. and/or ACM Press
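
Turning a stroke on the ceiling-camera view into a robot path amounts to projecting stroke samples onto floor coordinates and thinning them into waypoints. A sketch under the assumption of a calibrated camera-to-floor homography (the matrix below is a placeholder, not a real calibration):

```python
def apply_homography(h, pt):
    """Project one (x, y) image point to floor coordinates."""
    x, y = pt
    w = h[2][0] * x + h[2][1] * y + h[2][2]
    fx = (h[0][0] * x + h[0][1] * y + h[0][2]) / w
    fy = (h[1][0] * x + h[1][1] * y + h[1][2]) / w
    return fx, fy

def stroke_to_waypoints(h, stroke, min_gap=0.2):
    """Project each stroke sample, dropping points closer than min_gap
    metres to the previous waypoint so the robot gets a sparse path."""
    waypoints = []
    for pt in stroke:
        fx, fy = apply_homography(h, pt)
        if not waypoints or ((fx - waypoints[-1][0]) ** 2 +
                             (fy - waypoints[-1][1]) ** 2) ** 0.5 >= min_gap:
            waypoints.append((fx, fy))
    return waypoints

# Placeholder homography that simply scales pixels to metres (1 px = 1 cm).
H = [[0.01, 0, 0], [0, 0.01, 0], [0, 0, 1]]
print(stroke_to_waypoints(H, [(0, 0), (10, 0), (50, 0), (100, 0)]))
```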

 
Kato, Jun, Sakamoto, Daisuke, Inami, Masahiko and Igarashi, Takeo (2009): Multi-touch interface for controlling multiple mobile robots. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 3443-3448.

We must give robots some form of command in order to have them do a complex task; an initial instruction is required even if they perform their tasks autonomously. We therefore need interfaces for operating and teaching robots. Natural language, joysticks, and other pointing devices are currently used for this purpose, but these interfaces have difficulty operating multiple robots simultaneously. We developed a multi-touch interface, with a top-down view from a ceiling camera, for controlling multiple mobile robots: the user specifies a vector field on the view, which all robots then follow. This paper describes the user interface and its implementation, and the future work of the project.

© All rights reserved Kato et al. and/or ACM Press
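
One plausible reading of the vector-field control: build a field from the user's stroke segments and have every robot steer along the field at its own position. The inverse-squared-distance weighting below is an illustrative construction, not necessarily the paper's:

```python
import math

def field_at(strokes, x, y, eps=1e-6):
    """Heading (radians) at (x, y): stroke-segment directions blended
    by inverse squared distance to each segment's midpoint."""
    fx = fy = 0.0
    for (x0, y0), (x1, y1) in strokes:
        mx, my = (x0 + x1) / 2, (y0 + y1) / 2  # segment midpoint
        dx, dy = x1 - x0, y1 - y0              # segment direction
        w = 1.0 / ((x - mx) ** 2 + (y - my) ** 2 + eps)
        fx, fy = fx + w * dx, fy + w * dy
    return math.atan2(fy, fx)

strokes = [((0, 0), (1, 0)), ((1, 0), (2, 1))]    # user-drawn arrows
print(math.degrees(field_at(strokes, 0.4, 0.1)))  # ~0.8: follows first arrow
```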

 
Seifried, Thomas, Haller, Michael, Scott, Stacey D., Perteneder, Florian, Rendl, Christian, Sakamoto, Daisuke and Inami, Masahiko (2009): CRISTAL: a collaborative home media and device controller based on a multi-touch display. In: Proceedings of the 2009 ACM International Conference on Interactive Tabletops and Surfaces 2009. pp. 33-40.

While most homes are inherently social places, existing devices designed to control consumer electronics typically support only single-user interaction. Further, as the number of consumer electronics in modern homes increases, people are often forced to switch between many controllers to interact with these devices. To simplify interaction with these devices and to enable more collaborative forms of device control, we propose an integrated remote control system called CRISTAL (Control of Remotely Interfaced Systems using Touch-based Actions in Living spaces). CRISTAL enables people to control a wide variety of digital devices from a centralized, interactive tabletop system with an intuitive, gesture-based interface, through which multiple users control home media devices via a virtually augmented video image of the surrounding environment. A preliminary user study of the CRISTAL system is presented, along with a discussion of future research directions.

© All rights reserved Seifried et al. and/or their publisher

 
Imbe, Takuo, Ozaki, Fumitaka, Kiyasu, Shin, Mizukami, Yusuke, Ishibashi, Shuichi, Inakage, Masa, Okude, Naohito, Cheok, Adrian D., Inami, Masahiko and Sugimoto, Maki (2009): Myglobe: a navigation service based on cognitive maps. In: Proceedings of the 4th International Conference on Tangible and Embedded Interaction 2009. pp. 189-192.

Myglobe is a user-generated navigation service that enables users to share their cognitive maps with one another. A cognitive map is a personalized map whose shape is emphasized according to the user's preferences and activity in the city. It encourages users to look back on their own city and reach a new understanding of it, using a smartphone application and physical interaction with a globe-shaped device. In this paper, we present the Myglobe service, which offers users a new city experience through cognitive maps.

© All rights reserved Imbe et al. and/or their publisher

2008
 
Leitner, Jakob, Haller, Michael, Yun, Kyungdahm, Woo, Woontack, Sugimoto, Maki and Inami, Masahiko (2008): IncreTable, a mixed reality tabletop game experience. In: Inakage, Masa and Cheok, Adrian David (eds.) Proceedings of the International Conference on Advances in Computer Entertainment Technology - ACE 2008 December 3-5, 2008, Yokohama, Japan. pp. 9-16.

 
Shimizu, Noriyoshi, Sugimoto, Maki, Sekiguchi, Dairoku, Hasegawa, Shoichi and Inami, Masahiko (2008): Mixed reality robotic user interface: virtual kinematics to enhance robot motion. In: Inakage, Masa and Cheok, Adrian David (eds.) Proceedings of the International Conference on Advances in Computer Entertainment Technology - ACE 2008 December 3-5, 2008, Yokohama, Japan. pp. 166-169.

 
Fujii, Tomoko, Nii, Hideaki, Tokiwa, Takuji, Sugimoto, Maki and Inami, Masahiko (2008): Motion capture system using single-track gray code. In: Inakage, Masa and Cheok, Adrian David (eds.) Proceedings of the International Conference on Advances in Computer Entertainment Technology - ACE 2008 December 3-5, 2008, Yokohama, Japan. p. 426.

 
Hashimoto, Yuki, Inami, Masahiko and Kajimoto, Hiroyuki (2008): Straw-Like User Interface (II): A New Method of Presenting Auditory Sensations for a More Natural Experience. In: Ferre, Manuel (ed.) EuroHaptics 2008 - Haptics Perception, Devices and Scenarios - 6th International Conference June 10-13, 2008, Madrid, Spain. pp. 484-493.

2007
 
Richter, Jan, Thomas, Bruce H., Sugimoto, Maki and Inami, Masahiko (2007): Remote active tangible interactions. In: Proceedings of the 1st International Conference on Tangible and Embedded Interaction 2007. pp. 39-42.

This paper presents a new form of remote active tangible interaction built with the Display-based Measurement and Control System. A prototype system was constructed to demonstrate the concept of coupled remote tangible objects on rear-projected tabletop displays. A user evaluation measuring social presence was performed with pairs of users carrying out a furniture placement task, to determine the difference between this new system and a traditional mouse.

© All rights reserved Richter et al. and/or ACM Press

2006
 
Kojima, Minoru, Sugimoto, Maki, Nakamura, Akihiro, Tomita, Masahiro, Inami, Masahiko and Nii, Hideaki (2006): Augmented Coliseum: An Augmented Game Environment with Small Vehicles. In: First IEEE International Workshop on Horizontal Interactive Human-Computer Systems Tabletop 2006 5-7 January, 2006, Adelaide, Australia. pp. 3-8.

 
Hashimoto, Yuki, Nagaya, Naohisa, Kojima, Minoru, Miyajima, Satoru, Ohtaki, Junichiro, Yamamoto, Akio, Mitani, Tomoyasu and Inami, Masahiko (2006): Straw-like user interface: virtual experience of the sensation of drinking using a straw. In: Ishii, Hiroshi, Lee, Newton, Natkin, Stéphane and Tsushima, Katsuhide (eds.) Proceedings of the International Conference on Advances in Computer Entertainment Technology - ACE 2006 June 14-16, 2006, Hollywood, California, USA. p. 42.

 
Nagaya, Naohisa, Yoshidzumi, Masashi, Sugimoto, Maki, Nii, Hideaki, Maeda, Taro, Kitazaki, Michiteru and Inami, Masahiko (2006): Gravity Jockey: a novel music experience with galvanic vestibular stimulation. In: Ishii, Hiroshi, Lee, Newton, Natkin, Stéphane and Tsushima, Katsuhide (eds.) Proceedings of the International Conference on Advances in Computer Entertainment Technology - ACE 2006 June 14-16, 2006, Hollywood, California, USA. p. 49.

 
Watanabe, Junji, Hashimoto, Yuki, Nii, Hideaki and Inami, Masahiko (2006): Visual resonator: sight-based auditory experience. In: Ishii, Hiroshi, Lee, Newton, Natkin, Stéphane and Tsushima, Katsuhide (eds.) Proceedings of the International Conference on Advances in Computer Entertainment Technology - ACE 2006 June 14-16, 2006, Hollywood, California, USA. p. 37.

 
Hashimoto, Yuki, Nagaya, Naohisa, Kojima, Minoru, Miyajima, Satoru, Ohtaki, Junichiro, Yamamoto, Akio, Mitani, Tomoyasu and Inami, Masahiko (2006): Straw-like user interface: virtual experience of the sensation of drinking using a straw. In: Ishii, Hiroshi, Lee, Newton, Natkin, Stéphane and Tsushima, Katsuhide (eds.) Proceedings of the International Conference on Advances in Computer Entertainment Technology - ACE 2006 June 14-16, 2006, Hollywood, California, USA. p. 50.

 
Nagaya, Naohisa, Yoshidzumi, Masashi, Sugimoto, Maki, Nii, Hideaki, Maeda, Taro, Kitazaki, Michiteru and Inami, Masahiko (2006): Gravity jockey: a novel music experience with galvanic vestibular stimulation. In: Ishii, Hiroshi, Lee, Newton, Natkin, Stéphane and Tsushima, Katsuhide (eds.) Proceedings of the International Conference on Advances in Computer Entertainment Technology - ACE 2006 June 14-16, 2006, Hollywood, California, USA. p. 41.

 
Imai, Tomoko, Sekiguchi, Dairoku, Inami, Masahiko, Kawakami, Naoki and Tachi, Susumu (2006): Measuring Gaze Direction Perception Capability of Humans to Design Human Centered Communication Systems. In Presence: Teleoperators and Virtual Environments, 15 (2) pp. 123-138.

 
Shimizu, Noriyoshi, Koizumi, Naoya, Sugimoto, Maki, Nii, Hideaki, Sekiguchi, Dairoku and Inami, Masahiko (2006): A teddy-bear-based robotic user interface. In Computers in Entertainment, 4 (3).

2005
 
Shimizu, Noriyoshi, Koizumi, Naoya, Sugimoto, Maki, Nii, Hideaki, Sekiguchi, Dairoku and Inami, Masahiko (2005): Teddy-bear based robotic user interface. In: Lee, Newton (ed.) Proceedings of the International Conference on Advances in Computer Entertainment Technology - ACE 2005 June 15-15, 2005, Valencia, Spain. pp. 75-82.

 
Shimizu, Noriyoshi, Koizumi, Naoya, Sugimoto, Maki, Nii, Hideaki, Sekiguchi, Dairoku and Inami, Masahiko (2005): Teddy-bear based robotic user interface for interactive entertainment. In: Lee, Newton (ed.) Proceedings of the International Conference on Advances in Computer Entertainment Technology - ACE 2005 June 15-15, 2005, Valencia, Spain. pp. 389-390.

 
Sugimoto, Maki, Kagotani, Georges, Nii, Hideaki, Shiroma, Naoji, Inami, Masahiko and Matsuno, Fumitoshi (2005): Time Follower's Vision: A Teleoperation Interface with Past Images. In IEEE Computer Graphics and Applications, 25 (1) pp. 54-63.

2004
 
Tanaka, Kenji, Hayashi, Junya, Inami, Masahiko and Tachi, Susumu (2004): TWISTER: An Immersive Autostereoscopic Display. In: IEEE Virtual Reality Conference 2004 VR 2004 27-31 March, 2004, Chicago, IL, USA. pp. 59-66.

 
Tanaka, Kenji, Hayashi, Junya, Inami, Masahiko and Tachi, Susumu (2004): Colorplate: TWISTER: An Immersive Autostereoscopic Display. In: IEEE Virtual Reality Conference 2004 VR 2004 27-31 March, 2004, Chicago, IL, USA. p. 278.

 
Kajimoto, Hiroyuki, Kawakami, Naoki, Tachi, Susumu and Inami, Masahiko (2004): SmartTouch: Electric Skin to Touch the Untouchable. In IEEE Computer Graphics and Applications, 24 (1) pp. 36-43.

2003
 
Inami, Masahiko, Kawakami, Naoki and Tachi, Susumu (2003): Optical Camouflage Using Retro-Reflective Projection Technology. In: 2003 IEEE and ACM International Symposium on Mixed and Augmented Reality ISMAR 2003 7-10 October, 2003, Tokyo, Japan. pp. 348-349.

 
Kajimoto, Hiroyuki, Inami, Masahiko, Kawakami, Naoki and Tachi, Susumu (2003): SmartTouch - Augmentation of Skin Sensation with Electrocutaneous Display. In: HAPTICS 2003 - 11th International Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems 22-23 March, 2003, Los Angeles, CA, USA. pp. 40-46.

2002
 
Nojima, Takuya, Sekiguchi, Dairoku, Inami, Masahiko and Tachi, Susumu (2002): The SmartTool: A System for Augmented Reality of Haptics. In: VR 2002 2002. pp. 67-72.

2001
 
Kunita, Yutaka, Ogawa, Naoko, Sakuma, Atsushi, Inami, Masahiko, Maeda, Taro and Tachi, Susumu (2001): Immersive Autostereoscopic Display for Mutual Telexistence: TWISTER I (Telexistence Wide-Angle Immersive STEReoscope Model I). In: VR 2001 2001. pp. 31-36.

2000
 
Inami, Masahiko, Kawakami, Naoki, Sekiguchi, Dairoku, Yanagida, Yasuyuki, Maeda, Taro and Tachi, Susumu (2000): Visuo-Haptic Display Using Head-mounted Projector. In: VR 2000 2000. pp. 233-.

 


Page Information

Page maintainer: The Editorial Team
URL: http://www.interaction-design.org/references/authors/masahiko_inami.html
