Proceedings of the 2012 Augmented Human International Conference
The AH international conference focuses on scientific contributions towards augmenting human capabilities through technology for increased well-being and enjoyable human experiences. The topics of interest include, but are not limited to: Augmented and Mixed Reality, Internet of Things, Augmented Sport, Sensors and Hardware, Wearable Computing, Augmented Health, Augmented Well-being, Smart Artifacts & Smart Textiles, Augmented Tourism and Games, Ubiquitous Computing, Bionics and Biomechanics, Training/Rehabilitation Technology, Exoskeletons, Brain Computer Interface, Augmented Context-Awareness, Augmented Fashion, Safety, Ethics and Legal Aspects, Security and Privacy Aspects
The following articles are from "Proceedings of the 2012 Augmented Human International Conference":
Akiyama, Shotaro, Terada, Tsutomu and Tsukamoto, Masahiko (2012): An on-site programming environment for wearable computing. In: Proceedings of the 2012 Augmented Human International Conference 2012. p. 1. Available online
In wearable computing environments, it is difficult for users to prepare all the applications they will need beforehand, since situations and places vary; users therefore want to define new services by themselves. In this study, we present a development framework and several tools for developing services in wearable computing environments. The framework consists of an event-driven rule processing engine and service implementation tools, which enable users to program services easily and quickly. The proposed system shows the elements of event-driven rules as chips, and users can program services by selecting chips on graphical user interfaces. In addition, the proposed system has two functions that reflect the characteristics of programming for wearable computing: genetic-algorithm-based programming and social-network-based programming.
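The event-driven rule engine described in the abstract can be sketched minimally as below. The class names, the "step" event, and the example rule are illustrative assumptions, not the authors' actual framework:

```python
# Minimal sketch of an event-driven rule engine: each rule listens for an
# event type, checks a condition over the event payload, and runs an action.
# In the paper's system, rules are composed visually from "chips".

class Rule:
    def __init__(self, event_type, condition, action):
        self.event_type = event_type  # event name this rule listens for
        self.condition = condition    # predicate over the event payload
        self.action = action          # callback run when the rule fires

class Engine:
    def __init__(self):
        self.rules = []

    def add_rule(self, rule):
        self.rules.append(rule)

    def dispatch(self, event_type, payload):
        """Deliver one event; return the results of all rules that fired."""
        fired = []
        for rule in self.rules:
            if rule.event_type == event_type and rule.condition(payload):
                fired.append(rule.action(payload))
        return fired

# Example: a wearable service that reacts to every 100th "step" event.
engine = Engine()
engine.add_rule(Rule("step", lambda p: p["count"] % 100 == 0,
                     lambda p: f"milestone: {p['count']} steps"))
print(engine.dispatch("step", {"count": 200}))  # rule fires
print(engine.dispatch("step", {"count": 201}))  # rule does not fire
```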
Guilbourd, Roman, Yogev, Noam and Rojas, Raúl (2012): Stereo camera based wearable reading device. In: Proceedings of the 2012 Augmented Human International Conference 2012. p. 10. Available online
The ability to access textual information is crucial for visually impaired people in terms of achieving greater independence in their everyday life. Thus, there is a need for a mobile easy-to-use reading device, capable of dealing with the complexity of the outdoor environment. In this paper a wearable camera-based solution is presented, aiming at improving the performance of existing systems through the use of stereo vision. Specific aspects of the stereo matching problem in document images are discussed and an approach for its integration into the document processing procedure is introduced. We conclude with the presentation of experimental results from a prototype system, which demonstrate the practical benefits of the presented approach.
Hasegawa, Shoichi, Ishijima, Seiichiro, Kato, Fumihiro, Mitake, Hironori and Sato, Makoto (2012): Realtime sonification of the center of gravity for skiing. In: Proceedings of the 2012 Augmented Human International Conference 2012. p. 11. Available online
Control of body position is important in skiing. During turns, novice skiers often lean back and lose control. Leaning back is a natural reaction: people are afraid of the slope or the speed. We developed a device that provides realtime sonification feedback of the skier's center of gravity and guides the skier's position. A preliminary experiment shows the possibility of improvement: users quickly became able to control their position and even to overcome their fear of the slope and speed.
Horie, Tatsuya, Terada, Tsutomu, Katayama, Takuya and Tsukamoto, Masahiko (2012): A pointing method using accelerometers for graphical user interfaces. In: Proceedings of the 2012 Augmented Human International Conference 2012. p. 12. Available online
Graphical User Interfaces (GUIs) are widely used, and pointing devices are required to operate most of them. We have previously proposed Xangle, a pointing method using two accelerometers for wearable computing environments. The cursor is positioned at the intersection of two straight lines whose angles are synchronized with those of accelerometers on the fingers. However, Xangle is difficult to use in daily life, where the user frequently changes which part of the body they point with. Therefore, we propose a method of changing the body parts used for pointing according to the situation. Additionally, we propose a method to accelerate the pointer and a method to lay out menu items for Xangle, since these methods are suitable for using GUIs in wearable computing environments. The results of our evaluations confirmed that the proposed method is effective.
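The core geometry behind Xangle, a cursor placed at the intersection of two angle-controlled lines, can be sketched as follows. The anchor points (bottom-left and bottom-right of a unit-wide screen) are assumptions for illustration, not the paper's exact configuration:

```python
import math

def xangle_cursor(a1, a2, width=1.0):
    """Cursor position as the intersection of two lines: one anchored at
    (0, 0) with slope angle a1, one anchored at (width, 0) with slope
    angle a2 (both angles would come from the two accelerometers)."""
    t1, t2 = math.tan(a1), math.tan(a2)
    if math.isclose(t1, t2):
        return None  # parallel lines: no intersection, no cursor update
    # Solve t1*x = t2*(x - width) for x, then y = t1*x.
    x = t2 * width / (t2 - t1)
    return (x, t1 * x)

# Symmetric 45-degree tilts meet in the middle of the unit-wide screen.
print(xangle_cursor(math.radians(45), math.radians(135)))
```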
Kim, Junghyun, Hashida, Tomoko, Ohtani, Tomoko and Naemura, Takeshi (2012): Effects of auditory feedback for augmenting the act of writing. In: Proceedings of the 2012 Augmented Human International Conference 2012. p. 13. Available online
In this paper, focusing on the writing sound produced when using ordinary paper and pen, we explain how auditory feedback augments the act of writing. Specifically, we evaluated the effectiveness of auditory feedback by comparing writing tasks, which involved tracing Chinese characters, under three conditions: Without Feedback (No), with Monaural Feedback (MF), and with Stereo Feedback (SF). The results showed that auditory feedback (MF and SF) led to more written characters than no feedback (No) and to fewer negative impressions during the writing task.
Koizumi, Naoya, Sugimoto, Maki, Nagaya, Naohisa, Inami, Masahiko and Furukawa, Masahiro (2012): Stop motion goggle: augmented visual perception by subtraction method using high speed liquid crystal. In: Proceedings of the 2012 Augmented Human International Conference 2012. p. 14. Available online
Stop Motion Goggle (SMG) expands visual perception by allowing users to perceive visual information selectively through a high speed shutter. In this system, the user can easily observe not only periodic rotational motion, such as rotating fans or wheels, but also random motion like bouncing balls. In this research, we developed SMG and evaluated its effect on visual perception of high speed moving objects. Furthermore, this paper describes users' behaviors under the expanded visual experience.
Komatsu, Takanori (2012): Quantifying Japanese onomatopoeias: toward augmenting creative activities with onomatopoeias. In: Proceedings of the 2012 Augmented Human International Conference 2012. p. 15. Available online
Onomatopoeias are used when one cannot describe certain phenomena or events literally in the Japanese language, and it is said that one's ambiguous and intuitive feelings are embedded in them. Therefore, an interface system that can accept onomatopoeias as input could comprehend such feelings and, moreover, contribute to augmenting creative activities such as computer graphics, music, and choreography. The purpose of this study is to propose an objective quantification method for onomatopoeias, in the form of an expression vector, to be applied to an interface system in order to augment various creative activities.
Kuniyasu, Yuki, Sato, Michi, Fukushima, Shogo and Kajimoto, Hiroyuki (2012): Transmission of forearm motion by tangential deformation of the skin. In: Proceedings of the 2012 Augmented Human International Conference 2012. p. 16. Available online
When teaching device-handling skills such as those required in calligraphy, sports or surgery, it is important that appropriate arm motion is transmitted from the trainer to the trainee. In this study, we present a novel wearable haptic device that produces arm motion using force sensation. The device produces skin deformation and a pseudo-force sensation similar to the force produced when the arm is "pulled". The device generates skin deformation in four directions, and in this paper we evaluate it using a direction perception experiment.
Misawa, Kana, Ishiguro, Yoshio and Rekimoto, Jun (2012): Ma petite chérie: what are you looking at?: a small telepresence system to support remote collaborative work for intimate communication. In: Proceedings of the 2012 Augmented Human International Conference 2012. p. 17. Available online
We present a telepresence system with a reduced-scale face-shaped display for supporting intimate telecommunication. In our previous work, we developed a real-size face-shaped display that tracks and reproduces the remote user's head motion and face image. It can convey the user's nonverbal information, such as facial expression and gaze awareness. In this paper, we examine the value and effect of reducing the scale of such face-shaped displays. We expect that small face displays retain the benefit of real-size talking-head telecommunication systems while also giving a more intimate impression. A small display is easier to transport or put on a desk, and it can be worn on the shoulder of the local participant and carried like a small buddy. However, it is not clear how such a reduced-size face screen might change the quality of nonverbal communication. We thus conducted an experiment using a 1/14-scale face display, and found that critical nonverbal information, such as gaze direction, is still correctly transmitted even when the face size is reduced.
Normand, Jean-Marie, Servières, Myriam and Moreau, Guillaume (2012): A new typology of augmented reality applications. In: Proceedings of the 2012 Augmented Human International Conference 2012. p. 18. Available online
In recent years Augmented Reality (AR) has become more and more popular, especially since the availability of mobile devices such as smartphones and tablets brought AR into our everyday life. Although the AR community has not yet agreed on a formal definition of AR, some work has focused on proposing classifications of existing AR methods or applications. Such applications cover a wide variety of technologies, devices and goals; consequently, existing taxonomies rely on multiple classification criteria that try to take into account the diversity of AR applications. In this paper we review existing taxonomies of augmented reality applications and propose our own, based on (1) the number of degrees of freedom required by the application's tracking, (2) the visualization mode used, (3) the temporal base of the displayed content, and (4) the rendering modalities used in the application. Our taxonomy covers location-based services as well as more traditional vision-based AR applications. Although AR is mainly based on the visual sense, other rendering modalities are also covered by the same degree-of-freedom criterion in our classification.
Todd, D. A., McCullagh, P. J., Mulvenna, M. D. and Lightbody, G. (2012): Investigating the use of brain-computer interaction to facilitate creativity. In: Proceedings of the 2012 Augmented Human International Conference 2012. p. 19. Available online
Brain Computer Interaction (BCI) has mainly been utilized for communication and control, but it may also find application as a channel for creative expression, as part of an entertainment package. In this paper we provide an initial investigation of how creativity can be supported and assessed. An art-based approach was adopted to investigate the effects of achieving simple drawing and painting. Subjects were asked to complete three tasks using a Steady State Visual Evoked Potential (SSVEP) BCI: a drawing task called 'etch-a-sketch' (TASK 1), which relied entirely upon BCI control, and two painting tasks, the first (TASK 2) with a set goal and the second (TASK 3) with more potential for user expression. The tasks varied in the proportion of control to creativity required. Participants provided feedback on their perception of the control and creative aspects and their overall experience. The painting application (TASK 3), for which users perceived that they had more creativity, was well accepted; 50% of the users preferred this mode of interaction. The experimental approach described allows an initial assessment of the acceptance of BCI-mediated artistic expression.
Carrico, Luis, de Sá, Marco, Duarte, Luís and Antunes, Tiago (2012): Therapy: location-aware assessment and tasks. In: Proceedings of the 2012 Augmented Human International Conference 2012. p. 2. Available online
In this paper, we present a system that allows therapists to assess and engage patients in activities triggered by specific stressful contexts. The system is composed of: 1) a web application that the therapist uses to specify the activities and their triggering conditions; and 2) a mobile app that measures physiological characteristics and challenges the patient to perform the activities according to those conditions. This toolset is part of an extended cognitive behaviour therapy framework. The preliminary evaluation results are encouraging and indicate that the system can be useful and usable for direct application in therapy procedures.
Ozawa, Tomohiro, Kitani, Kris M. and Koike, Hideki (2012): Human-centric panoramic imaging stitching. In: Proceedings of the 2012 Augmented Human International Conference 2012. p. 20. Available online
We introduce a novel image mosaicing algorithm to generate 360° landscape images while also taking into account the presence of people at the boundaries between stitched images. Current image mosaicing techniques tend to fail when there is extreme parallax caused by nearby objects or moving objects at the boundary between images. This parallax causes ghosting or unnatural discontinuities in the image. To address this problem, we present an image mosaicing algorithm that is robust to parallax and misalignment, and is also able to preserve important human-centric content, specifically faces. In particular, we find an optimal path along the boundary of two images that preserves color continuity and people's faces in the scene. Preliminary experiments show promising results in preserving close-up faces with parallax while also generating a perceptually plausible 360° panoramic image.
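The color-continuity part of such seam finding is commonly solved with dynamic programming over the overlap region; the sketch below shows only that standard part (the paper's face-preserving term is omitted, and the cost function is an illustrative assumption):

```python
import numpy as np

def optimal_seam(overlap_a, overlap_b):
    """Find a top-to-bottom seam through the overlap of two images that
    minimizes color discontinuity. Per-pixel cost is the squared color
    difference; dynamic programming accumulates the cheapest connected
    path, then we backtrack from the cheapest bottom pixel."""
    cost = np.sum((overlap_a.astype(float) - overlap_b.astype(float)) ** 2,
                  axis=-1)
    h, w = cost.shape
    acc = cost.copy()
    for y in range(1, h):
        for x in range(w):
            lo, hi = max(0, x - 1), min(w, x + 2)
            acc[y, x] += acc[y - 1, lo:hi].min()
    seam = [int(np.argmin(acc[-1]))]
    for y in range(h - 2, -1, -1):
        x = seam[-1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam.append(lo + int(np.argmin(acc[y, lo:hi])))
    return seam[::-1]  # seam[y] = column of the cut in row y
```

Pixels left of the seam would then be taken from one image and pixels right of it from the other, hiding parallax ghosting along a low-difference path.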
Wang, Chieh-Chih, Healey, Jennifer and Zhao, Meiyuan (2012): Augmenting on-road perception: enabling smart and social driving with sensor fusion and cooperative localization. In: Proceedings of the 2012 Augmented Human International Conference 2012. p. 21. Available online
In many ways the car is the most common human augmentation: it increases our speed, renders us more powerful and enables us to reach distances that are otherwise impossible. In this paper, we show how advanced localization systems enable yet another dimension of human augmentation: allowing the driver to visually perceive data streams from other cars. These data streams may contain social messages from other drivers such as "Follow Me" or warnings from the sensor systems of the other cars themselves such as "Distracted Driver!" We describe both the technical work in progress that makes this system possible as well as the future vision of how this technology will enable smart and social driving through M2M communication with other vehicles that are encountered ad hoc on the road.
Yamano, Shingo, Hamajo, Takamitsu, Takahashi, Shunsuke and Higuchi, Keita (2012): EyeSound: single-modal mobile navigation using directionally annotated music. In: Proceedings of the 2012 Augmented Human International Conference 2012. p. 22. Available online
In this paper, we propose a mobile navigation system that uses only auditory information, i.e., music, to guide the user. The sophistication of mobile devices has introduced the use of contextual information in mobile navigation, such as the location and the direction of motion of a pedestrian. Typically in such systems, a map on the screen of the mobile device is required to show the current position and the destination. However, this restricts the movements of the pedestrian, because users must hold the device to observe the screen. We have, therefore, implemented a mobile navigation system that guides the pedestrian in a non-restricting manner by adding direction information to music. Based on measurements of the directional resolution that users can perceive, the phase of the musical sound is changed to guide the pedestrian. Experiments with this system verified the effectiveness of the proposed mobile navigation approach.
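One simple way to encode direction in music is an interaural time difference: delaying the channel on the far side of the virtual sound source shifts the perceived direction. The sketch below illustrates that general idea under assumed head-width and sample-shift parameters; it is not the authors' exact phase-rendering method:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s
HEAD_WIDTH = 0.18       # m, an illustrative assumption

def directional_stereo(mono, rate, azimuth_deg):
    """Pan a mono music signal toward azimuth_deg (0 = straight ahead,
    positive = right) by delaying the far-side channel by
    HEAD_WIDTH * sin(azimuth) / c seconds."""
    itd = HEAD_WIDTH * np.sin(np.radians(azimuth_deg)) / SPEED_OF_SOUND
    delay = int(round(abs(itd) * rate))  # delay in whole samples
    delayed = np.concatenate([np.zeros(delay), mono[:len(mono) - delay]])
    if azimuth_deg >= 0:   # source to the right: delay the left ear
        return np.stack([delayed, mono], axis=1)
    return np.stack([mono, delayed], axis=1)
```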
Yokoyama, Maki, Okano, Yu, Sato, Michi, Fukushima, Shogo, Furukawa, Masahiro and Kajimoto, Hiroyuki (2012): Looming silhouette: an approaching visual stimulus device for pedestrians to avoid collisions. In: Proceedings of the 2012 Augmented Human International Conference 2012. p. 23. Available online
We are exposed daily to the risk of collision at numerous blind intersections. To avoid this risk, we propose a system that elicits an "approaching sensation" by presenting a visual stimulus. Possible factors for the approaching sensation are the "expansion" and "motion" of a silhouette. We compared the effects of these two factors and found that the expansion factor is important for eliciting an approaching sensation, while the motion factor has a certain effect in alarming pedestrians. On the basis of this result, we produced a system that presents an expanding and moving silhouette of an approaching pedestrian to the pedestrian user.
Zhao, Shuyang, Hachisu, Taku, Ishii, Asuka, Kuniyasu, Yuki and Kajimoto, Hiroyuki (2012): Augmentation of obstacle sensation by enhancing low frequency component for horror game background sound. In: Proceedings of the 2012 Augmented Human International Conference 2012. p. 24. Available online
Computer games provide users with mental stimulation that the real world cannot. Horror games in particular are a popular category. Current horror games provide the user with a visible ghost and stereo background sound to thrill the user. Inspired by the obstacle sense -- the ability of blind people to localize obstacles using hearing alone -- this paper proposes a novel method to augment the sense of presence in the game background sound. We found that an effective sensation can be created by simultaneously decreasing the high-frequency component and increasing the low-frequency component.
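The reported manipulation, boosting low frequencies while attenuating high ones, can be sketched as a simple spectral tilt. The cutoff and gain values below are illustrative assumptions, not the authors' tuned parameters:

```python
import numpy as np

def obstacle_tilt(signal, rate, cutoff=500.0, low_gain=2.0, high_gain=0.5):
    """Boost the band below `cutoff` Hz and attenuate the band above it in
    a mono signal, via a hard spectral mask in the frequency domain (a
    real implementation would use smooth shelving filters)."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / rate)
    gains = np.where(freqs < cutoff, low_gain, high_gain)
    return np.fft.irfft(spectrum * gains, n=len(signal))
```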
Lafuente, Carlos Ballester and Seigneur, Jean-Marc (2012): Crowd augmented wireless access. In: Proceedings of the 2012 Augmented Human International Conference 2012. p. 25. Available online
Environments such as ski slopes are highly dynamic: users are constantly moving at high speeds and in different directions, and many users are not locals and must roam in order to connect through mobile data. These two reasons make connectivity through regular means difficult to attain. This demo paper presents the simulation and validation of crowd-augmented wireless access used to tackle this problem.
Kapeller, Christoph, Hintermüller, Christoph and Guger, Christoph (2012): Usability of video-overlaying SSVEP based BCIs. In: Proceedings of the 2012 Augmented Human International Conference 2012. p. 26. Available online
This work investigates the usability of a steady-state visual evoked potential (SSVEP) based brain-computer interface (BCI) with on-screen stimulation. The BCI controls were displayed over an underlying feedback video, each control with a unique flashing frequency. For classification, a combination of minimum energy (ME) and linear discriminant analysis (LDA) was used. Two experiments showed that the use of overlaying controls is possible, but decreases performance.
Kapeller, Christoph, Hintermüller, Christoph and Guger, Christoph (2012): Augmented control of an avatar using an SSVEP based BCI. In: Proceedings of the 2012 Augmented Human International Conference 2012. p. 27. Available online
This allows the user to play World of Warcraft (WoW) with the BCI system by thought alone.
In sports, dancing and playing music, it is important to achieve correct body movement as it greatly affects performance. However, matching one's movement with ideal movement is fundamentally difficult, because we do not have a detailed perception of our own body movement. In this study, we propose to present "rotary switch feeling" feedback as a new haptic cue. A periodical ticking sensation, like that of a rotary switch, can be presented at each joint so that the user vividly perceives his/her movement. This paper presents a simple mechanical prototype that is attached to the elbow.
Murase, Taichi, Moteki, Atsunori, Suzuki, Genta, Nakai, Takahiro, Hara, Nobuyuki and Matsuda, Takahiro (2012): Gesture keyboard with a machine learning requiring only one camera. In: Proceedings of the 2012 Augmented Human International Conference 2012. p. 29. Available online
In this paper, the authors propose a novel gesture-based virtual keyboard (Gesture Keyboard) that uses a standard QWERTY keyboard layout, requires only one camera, and employs a machine learning technique. Gesture Keyboard tracks the user's fingers and recognizes finger motions to determine key input in the horizontal direction. Real AdaBoost (Adaptive Boosting), a machine learning technique, uses HOG (Histograms of Oriented Gradients) features of an image of the user's hands to estimate keys in the depth direction. Each virtual key follows a corresponding finger, so it is possible to input characters at the user's preferred hand position even if the user moves his hands while typing. Additionally, because Gesture Keyboard requires only one camera, keyboard-less devices can implement this system easily. We show the effectiveness of utilizing a machine learning technique for estimating depth.
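The feature-plus-boosting pipeline can be sketched compactly. Two simplifications are assumed here: a single global gradient-orientation histogram stands in for the blockwise HOG descriptor, and discrete AdaBoost over threshold stumps stands in for Real AdaBoost; the structure of the pipeline is otherwise the same:

```python
import numpy as np

def hog_feature(img, bins=9):
    """Global histogram of gradient orientations, magnitude-weighted --
    a much-simplified stand-in for the blockwise HOG descriptor."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # unsigned orientation
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-9)

def adaboost_train(X, y, rounds=20):
    """Discrete AdaBoost over one-feature threshold stumps; labels in {-1, 1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)          # per-sample weights
    stumps = []
    for _ in range(rounds):
        best = None
        for j in range(d):           # exhaustive stump search
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] > thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
        err, j, thr, sign = best
        err = min(max(err, 1e-9), 1 - 1e-9)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = sign * np.where(X[:, j] > thr, 1, -1)
        w *= np.exp(-alpha * y * pred)   # upweight misclassified samples
        w /= w.sum()
        stumps.append((alpha, j, thr, sign))
    return stumps

def adaboost_predict(stumps, X):
    score = sum(a * s * np.where(X[:, j] > t, 1, -1) for a, j, t, s in stumps)
    return np.where(score >= 0, 1, -1)
```

In the paper's setting, the boosted classifier would consume hand-image features like `hog_feature` and output a depth-row estimate for the pressed key.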
Chemuturi, Radhika, Amirabdollahian, Farshid and Dautenhahn, Kerstin (2012): A study to understand lead-lag performance of subject vs rehabilitation system. In: Proceedings of the 2012 Augmented Human International Conference 2012. p. 3. Available online
Robotic assistance in stroke rehabilitation is rapidly advancing, based on recent developments in robotics, haptic interfaces and virtual reality. GENTLE/S is a rehabilitation system that utilized haptic and virtual reality technologies to deliver challenging and meaningful therapies to stroke subjects with upper-limb impairment. The current research is working towards designing the GENTLE/A system with a better adaptive human-robot interface, which allows automatic tuning of assistance and resistance based on the provided input. This paper presents the results of a preliminary study conducted with three healthy subjects as part of this research. The aim of the investigation is to explore whether it is possible to identify if the robot or the person is leading the interaction, by comparing the subject's actual performance with the minimum jerk model used to drive the robot. The final goal is to use these observations to probe various ways in which the contribution of the robot can be established and the adaptability of the robot during therapy can be enhanced.
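The minimum jerk model mentioned in the abstract has a standard closed form for a point-to-point reach, sketched below; comparing a subject's measured position against this reference is one way to read lead (subject ahead of the profile) versus lag (robot ahead):

```python
import numpy as np

def minimum_jerk(x0, xf, duration, t):
    """Minimum jerk position profile between x0 and xf over `duration`
    seconds: x(t) = x0 + (xf - x0) * (10*tau^3 - 15*tau^4 + 6*tau^5),
    with tau = t / duration clipped to [0, 1]. Velocity and acceleration
    are zero at both endpoints."""
    tau = np.clip(np.asarray(t, dtype=float) / duration, 0.0, 1.0)
    return x0 + (xf - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)
```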
Ochiai, Yoichi (2012): Kaleidoscopes for binocular rivalry. In: Proceedings of the 2012 Augmented Human International Conference 2012. p. 30. Available online
When you look into two kaleidoscopes at the same time, a wonderful and strange scene comes into sight. We developed a stereo electronic kaleidoscope with a high-definition display. It shows images as beautiful as those of classic kaleidoscopes. We tested and selected images that cause the binocular rivalry effect. This work creates augmented kaleidoscopes that give us a wonderful feeling for the structure and functions of our brain.
Ochiai, Yoichi and Toyoshima, Keisuke (2012): Invisible feet under the vehicle. In: Proceedings of the 2012 Augmented Human International Conference 2012. p. 31. Available online
When we drive a car, we have many blind spots, and the information from outside is almost limited to vision and sound. To address this gap in information between the outside and the inside of the car, we envision the driver and the car unified, moving as one. We call this unity of driver and car a Homunculus, which communicates with the outside of the vehicle. With this concept, we developed a new haptic system that assigns the sense of the driver's foot to the bottom of the car. It connects nine vibration motors on a grid to nine IR distance sensors on a grid. With this system, users can feel something passing under the bottom of the car, a bump, and so on, as if the sole of their foot were being touched -- as if an invisible foot (Figure 1) stuck out from the bottom of the car. We applied our prototype to several driving cases, found several interesting points, and discuss them in this paper.
Takase, Yutaka and Hasegawa, Shoichi (2012): Presentation of directional information by sound field control. In: Proceedings of the 2012 Augmented Human International Conference 2012. p. 32. Available online
We propose a novel method for the presentation of directional information. The system presents a perception of the presence of obstacles by controlling the environmental sound field. Visual maps and voice prompts are practical methods for presenting directional information and are used in car navigation systems; however, they occupy the senses of sight and hearing. By contrast, our method can present directional information naturally without occupying sensory channels, so users can benefit from directional information while enjoying the surrounding environment.
Duarte, Luís, Antunes, Tiago and Carrico, Luis (2012): Can you feel it?: sharing heart beats with Augmento. In: Proceedings of the 2012 Augmented Human International Conference 2012. p. 4. Available online
This paper presents Augmento, a system which aims at providing individuals with an asynchronous way of reinforcing bonds with their relatives by sharing emotions when they are in the vicinity of places which hold special memories of their lives. Augmento capitalizes on existing technologies to accomplish its goal, ranging from location-based services to the retrieval of the individual's physiological signals, to convey typically occluded information between individuals, particularly in long-distance relationships. The paper presents the general vision for the system, its workflow, architecture, scenarios and early prototypes. We performed an early assessment of the system; in particular, we were interested in whether vibrotactile feedback would be suited to conveying and mimicking an individual's heart rate to other users. The results of this testing period are presented and discussed in the paper.
Duarte, Luís and Carrico, Luis (2012): User performance tweaking in videogames: a physiological perspective of player reactions. In: Proceedings of the 2012 Augmented Human International Conference 2012. p. 5. Available online
The videogame industry has undergone significant changes in recent years, broadening its horizons towards a more casual market. This market expansion not only brings new opportunities from an interaction point of view, but also new challenges with the inclusion of users who are not accustomed to these games. This paper presents part of an ongoing study which aims at providing a better understanding of player behavior from both an interactive and a physiological standpoint. The experiment addressed here assesses how different gameplay mechanics influence not only a subset of the players' physiological signals, but also their performance and interactive behavior.
Enokibori, Yu and Mase, Kenji (2012): A method to evaluate metal filing skill level with wearable hybrid sensor. In: Proceedings of the 2012 Augmented Human International Conference 2012. p. 6. Available online
This paper presents a method to evaluate a person's skill level in metal filing. Metal filing by expert engineers is an important manufacturing skill that supports basic areas of industry, although most sequences are already automated with industrial robots. However, there is no effective training method for the skill: training has relied mostly on "coaching", which depends on the coaches' personal viewpoints, and skill levels have been assessed subjectively by the coaches. Because of these problems, learners have to spend several hundred hours acquiring this basic manufacturing skill. Therefore, to develop an effective skill training scheme and an objective skill level assessment, we analyzed metal filing and implemented a method to evaluate metal-filing skill. We used wearable hybrid sensors that combine an accelerometer and a gyroscope, and collected data from 4 expert coaches and 10 learners. The data were analyzed from the viewpoint of the mechanical structure of the body during metal filing. Our analysis yielded three effective measures for skill assessment: "Class 2 Lever-like Movement Measure", "Upper Body Rigidity Measure", and "Pre-Acceleration Measure". The weighted total measure succeeded in distinguishing the coach group and the learner group as separate skill-level groups at a 95% confidence level. The highest-level learner, the lowest-level learner, and the group of other learners could also be distinguished as separate skill-level groups at a 95% confidence level; this matches an expert coach's subjective scores.
Fujimoto, Yuichiro, Yamamoto, Goshiro, Kato, Hirokazu and Miyazaki, Jun (2012): Relation between location of information displayed by augmented reality and user's memorization. In: Proceedings of the 2012 Augmented Human International Conference 2012. p. 7. Available online
This study investigates the effectiveness of Augmented Reality (AR) for users' memory when it is used as an information display method. By definition, AR is a technology which displays virtual images in the real world, and these computer-generated images naturally carry location information in the real world. It is also known that humans can easily memorize and remember information if it is associated with locations in the real world. Thus, we hypothesize that displaying annotations using AR may benefit the user's memory more when they are associated with the location of the target object in the real world than when they are connected with an unrelated location. A user study was conducted with 30 participants to verify this hypothesis. A significant difference was found between the condition in which information was associated with the location of the target object in the real world and the condition in which it was connected with an unrelated location. In this paper, we present the test results and explain the verification based on them.
Fukushima, Shogo and Kajimoto, Hiroyuki (2012): Facilitating a surprised feeling by artificial control of piloerection on the forearm. In: Proceedings of the 2012 Augmented Human International Conference 2012. p. 8. Available online
There have been many proposals that add haptic stimulation to entertainment content such as music, games, and movies. These technologies enrich the quality of the experience by improving its realism. In contrast, we present a novel approach that enriches the quality of these experiences by facilitating the emotional feeling evoked by the content. In this paper, we focus on piloerection, a kind of involuntary emotional reaction. Our hypothesis is that it is not only an emotional "reaction", but can also work as an emotional "input" that enhances the emotion itself. We constructed a device that controls piloerection on the forearm through electrostatic force. Based on a psychophysical experiment, we confirm that the piloerection system enhances the feeling of surprise.
Furukawa, Masahiro, Kajimoto, Hiroyuki and Tachi, Susumu (2012): KUSUGURI: a shared tactile interface for bidirectional tickling. In: Proceedings of the 2012 Augmented Human International Conference 2012. p. 9. Available online
Tickling, a nonverbal form of communication, can provide entertainment. Therefore, tickling is a desirable addition to remote communication. However, tickling is difficult to realize remotely because it requires both body contact and bidirectionality. In this paper, we propose a "Shared Tactile Interface", which allows sharing of a body part with another user at a distance. The interface has three features: direct contact, transfer of the tickling sensation, and bidirectionality. The first allows users to view another person's finger as if it were directly contacting and moving on the user's own palm. The second delivers a vibration to the user's palm which generates the illusion and perception of a tickling sensation. The third enables bidirectional tickling, because one user can also tickle the other user's palm in the same manner. We built prototypes based on this design method and evaluated it through two technical exhibitions. The users were able to tickle each other, which confirmed that the "Shared Tactile Interface" design works as expected. However, we found issues, especially regarding the reliability of the tickling sensation.