
Proceedings of the 2011 Augmented Human International Conference


 
Time and place:

2011
Conf. description:
The AH international conference focuses on scientific contributions towards augmenting human capabilities through technology for increased well-being and enjoyable human experiences. The topics of interest include, but are not limited to: Augmented and Mixed Reality, Internet of Things, Augmented Sport, Sensors and Hardware, Wearable Computing, Augmented Health, Augmented Well-being, Smart Artifacts & Smart Textiles, Augmented Tourism and Games, Ubiquitous Computing, Bionics and Biomechanics, Training/Rehabilitation Technology, Exoskeletons, Brain-Computer Interfaces, Augmented Context-Awareness, Augmented Fashion, Safety, Ethics and Legal Aspects, Security and Privacy Aspects
Series:
Part of the AH - Augmented Human International Conference conference series.
Publisher:
ACM Press

References from this conference (2011)

The following articles are from "Proceedings of the 2011 Augmented Human International Conference":


Articles

p. 1

Heidrich, Felix, Ziefle, Martina, Röcker, Carsten and Borchers, Jan (2011): Interacting with smart walls: a multi-dimensional analysis of input technologies for augmented environments. In: Proceedings of the 2011 Augmented Human International Conference 2011. p. 1. Available online

This paper reports on a multi-dimensional evaluation of three typical interaction devices for wall-sized displays in augmented environments. Touch, trackpad and gesture input were evaluated regarding a variety of usability dimensions in order to understand the quality profile of each input device. Among the three interaction devices, the touch input showed the highest scores in performance and acceptance as well as hedonic value.

© All rights reserved Heidrich et al. and/or ACM Press

p. 10

Kuniyasu, Yuki, Fukushima, Shogo, Furukawa, Masahiro and Kajimoto, Hiroyuki (2011): Weight illusion by tangential deformation of forearm skin. In: Proceedings of the 2011 Augmented Human International Conference 2011. p. 10. Available online

When we perform exercise or undergo rehabilitation, it is helpful to be supported by another person. To get this support, we normally take hold of the person's arm and pull it. In this paper, we investigate the use of a special device to produce a "pulling arm" sensation on the forearm. Using a weight comparison task, we performed an experiment to confirm the sensation of illusory external force with our device. We concluded that our current device produces a perceived weight of about 10 g to 20 g.

© All rights reserved Kuniyasu et al. and/or ACM Press

p. 11

Yoshida, Tsubasa, Kitani, Kris M., Koike, Hideki, Belongie, Serge and Schlei, Kevin (2011): EdgeSonic: image feature sonification for the visually impaired. In: Proceedings of the 2011 Augmented Human International Conference 2011. p. 11. Available online

We propose a framework to aid a visually impaired user in recognizing objects in an image by sonifying image edge features and distance-to-edge maps. Visually impaired people usually touch objects to recognize their shape. However, it is difficult to recognize objects printed on flat surfaces, or objects that can only be viewed from a distance, solely with our haptic senses. Our ultimate goal is to aid a visually impaired user in recognizing basic object shapes by transposing them to aural information. Our proposed method provides two types of image sonification: (1) local edge gradient sonification and (2) sonification of the distance to the closest image edge. Our method was implemented on a touch-panel mobile device, which allows the user to aurally explore the image by sliding a finger across it on the touch screen. Preliminary experiments show that the combination of local edge gradient sonification and distance-to-edge sonification is effective for understanding basic line drawings. Furthermore, our tests show a significant improvement in image understanding with the introduction of proper user training.

© All rights reserved Yoshida et al. and/or ACM Press
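
The two sonification modes described above map naturally onto standard image-processing primitives. The following minimal sketch assumes OpenCV and NumPy; the edge detector, distance metric, and frequency constants are illustrative choices, not values from the paper.

    import cv2
    import numpy as np

    def edgesonic_maps(image_gray):
        # The two per-pixel maps EdgeSonic sonifies: a binary edge map and
        # the distance from each pixel to its nearest edge.
        edges = cv2.Canny(image_gray, 100, 200)
        # distanceTransform measures distance to the nearest zero pixel,
        # so invert the edge map first.
        dist = cv2.distanceTransform(255 - edges, cv2.DIST_L2, 5)
        return edges, dist

    def tone_for_touch(edges, dist, x, y, base_hz=220.0, span_hz=880.0):
        # Map the touched pixel to a tone: edges sound at the base pitch;
        # elsewhere the pitch rises with distance to the closest edge.
        if edges[y, x] > 0:
            return base_hz
        return base_hz + span_hz * (dist[y, x] / dist.max())

    # Usage (hypothetical file):
    #   img = cv2.imread("drawing.png", cv2.IMREAD_GRAYSCALE)
    #   edges, dist = edgesonic_maps(img)
    #   freq = tone_for_touch(edges, dist, x=120, y=80)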

p. 12

Chiang, Heien-Kun, Chou, Yin-Yu, Chang, Long-Chyr, Huang, Chun-Yen, Kuo, Feng-Lan and Chen, Hown-Wen (2011): An augmented reality learning space for PC DIY. In: Proceedings of the 2011 Augmented Human International Conference 2011. p. 12. Available online

Because of advances in computer hardware and software, Computer Aided Instruction (CAI) makes learning effective and interesting through the use of interactive multimedia technology. Recently, Augmented Reality (AR) technology has begun to surge as a new CAI tool because of its ability to create tangible and highly interactive user interfaces. In addition, recent studies have shown that the learning content, as well as the participation of learners in learning activities, can greatly affect learners' performance. However, studies on integrating PC DIY (Personal Computer Do It Yourself) learning with AR technology are still few in the current literature. Therefore, this study proposes an AR learning space for PC DIY, whose system architecture and implementation are detailed. To evaluate the usability of the proposed system, a questionnaire was given to twenty-six graduate students after their hands-on experience with the prototype. Results of the questionnaire show that the proposed AR learning space for PC DIY offers students a motivating, pleasant, and satisfying learning experience. Limitations, conclusions, and future studies are given.

© All rights reserved Chiang et al. and/or ACM Press

p. 13

Houri, Naoyuki, Arita, Hiroyuki and Sakaguchi, Yutaka (2011): Audiolizing body movement: its concept and application to motor skill learning. In: Proceedings of the 2011 Augmented Human International Conference 2011. p. 13. Available online

We propose the concept of "audiolization of body movement," which transforms the posture/movement of the human body, or of human-controlled tools, into acoustic signals and feeds them back to the user in real time. It aims at helping people be aware of their body/tool states, thereby assisting their motor skill learning. The present paper describes the features of the concept and introduces some demonstrative applications.

© All rights reserved Houri et al. and/or ACM Press

p. 14

Kajiwara, Yasuhiro, Tajimi, Keisuke, Uemura, Keiji, Sakata, Nobuchika and Nishida, Shogo (2011): ClippingLight: a method for easy snapshots with projection viewfinder and tilt-based zoom control. In: Proceedings of the 2011 Augmented Human International Conference 2011. p. 14. Available online

In this paper, we present a novel method for taking photos with a hand-held camera. Cameras are being used for new purposes in our daily lives these days, such as augmenting human memory or scanning visual markers (e.g. QR codes), and opportunities to take snapshots are increasing. However, taking snapshots with today's hand-held cameras is troublesome, because the viewfinder forces the user to see the real space through the device, and controlling the zoom level while pressing the shutter-release button requires complicated operation. Therefore, we propose ClippingLight, a method that combines a projection viewfinder with tilt-based zoom control and enables taking snapshots with little effort. We implemented this method in a prototype real-world projection camera. We conducted a user study to confirm the effect of ClippingLight in situations where photos are taken one after another. As a result, we found that ClippingLight is more comfortable and requires less effort than today's typical camera when a user takes a photo quickly.

© All rights reserved Kajiwara et al. and/or ACM Press
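
Tilt-based zoom control can be expressed as a simple mapping from device pitch to zoom factor. The sketch below assumes a linear mapping and a fixed usable tilt range; both are illustrative, not the authors' calibration.

    def zoom_from_tilt(tilt_deg, tilt_range=(-30.0, 30.0), zoom_range=(1.0, 4.0)):
        # Normalize the tilt angle into [0, 1], clamping at the limits,
        # then interpolate linearly between the zoom bounds.
        lo, hi = tilt_range
        t = min(max((tilt_deg - lo) / (hi - lo), 0.0), 1.0)
        z_lo, z_hi = zoom_range
        return z_lo + t * (z_hi - z_lo)

    # zoom_from_tilt(0.0) -> 2.5 (mid tilt gives mid zoom)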

p. 15

Yamanaka, Shunji, Tsuji, Yuki, Higaki, Mariko and Suzuki, Hideka (2011): Designing the sports prosthetic leg. In: Proceedings of the 2011 Augmented Human International Conference 2011. p. 15. Available online

From a prosthesis hidden under clothing to one that steps into the spotlight: our shared perception of prostheses is changing through sports. To give amputees a more beautiful running form, we have developed prostheses specially focused on usability, exterior, and safety. Here we introduce how we designed the lower-limb prosthesis, the knee joints, and the air stabilizer for the carbon fiber foot.

© All rights reserved Yamanaka et al. and/or ACM Press

p. 16

Takahashi, Keisuke, Kadone, Hideki and Suzuki, Kenji (2011): Head orientation sensing by a wearable device for assisted locomotion. In: Proceedings of the 2011 Augmented Human International Conference 2011. p. 16. Available online

In this paper, we propose a novel wearable sensor device for real-time measurement of head orientation and relative position with respect to the body trunk. It is known that in natural walking, human locomotion is preceded by changes in head orientation [1, 2, 3], and the walking direction can therefore be predicted by observing the head orientation. We have been developing a wearable sensing device for the measurement of head orientation, which enables real-time prediction of the future walking direction for assistive locomotion technologies such as exoskeleton robots and wheelchairs. Existing body posture measurement devices tend to be large and non-portable [4], so measurement in everyday spaces is still difficult. In contrast, the developed system enables wireless, location-independent measurement of head orientation and can be applied to assisted locomotion. In evaluating the accuracy of the developed device, we observed head anticipation during natural walking. Additionally, we compared head anticipation in natural walking and in electric wheelchair locomotion using the developed device, and we discuss a novel wheelchair control based on head orientation.

© All rights reserved Takahashi et al. and/or ACM Press
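
Because head orientation anticipates walking direction, one very simple predictor extrapolates the future heading from the head-relative-to-trunk yaw. The sketch below is an assumed illustration of that idea, not the authors' model; the gain would need fitting to gait data.

    def predicted_heading(trunk_yaw_deg, head_yaw_deg, gain=0.8):
        # Project the future walking direction as the trunk heading plus
        # a fraction of the head's yaw offset from the trunk.
        return trunk_yaw_deg + gain * (head_yaw_deg - trunk_yaw_deg)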

p. 17

Takahashi, Masato, Fernando, Charith Lasantha, Kumon, Yuto, Takeda, Shuhey, Nii, Hideaki, Tokiwa, Takuji, Sugimoto, Maki and Inami, Masahiko (2011): Earthlings Attack!: a ball game using human body communication. In: Proceedings of the 2011 Augmented Human International Conference 2011. p. 17. Available online

In this paper, we describe a ball game, "Earthlings Attack!", that uses the contact between users and an active ball device as an information channel to the game content. When the ball device, with its built-in transmitter, comes in contact with a user wearing the receiver, the system transmits information from the ball device to the receiver through the user's body via human body communication. With this method, we aim to augment the interaction by presenting information on the user's body according to the contact between each ball device and each user. The system can also be used over a wide playing field within the same network by collectively managing the contact information of both.

© All rights reserved Takahashi et al. and/or ACM Press

p. 18

Maeda, Taro, Ando, Hideyuki, Iizuka, Hiroyuki, Yonemura, Tomoko, Kondo, Daisuke and Niwa, Masataka (2011): Parasitic Humanoid: the wearable robotics as a behavioral assist interface like oneness between horse and rider. In: Proceedings of the 2011 Augmented Human International Conference 2011. p. 18. Available online

The Parasitic Humanoid (PH) is a wearable robotic human interface for sampling, modeling, and assisting nonverbal human behavior. This anthropomorphic robot senses the behavior of the wearer and has internal models to learn the process of human sensorimotor integration; thereafter, it begins to predict the next behavior of the wearer using the learned models. When the reliability of the prediction is sufficient, the PH outputs the difference from the actual behavior as a request for motion to the wearer, by motion induction using sensory illusion. Through this symbiotic interaction, the internal model and the process of human sensorimotor integration approximate each other asymptotically. This process can transmit modalities such as the senses of sight, hearing, touch, force, and balance with human embodiment. Such synergistic multimodal communication between distant people wearing PHs can realize experience sharing, skill transmission, and human behavior support.

© All rights reserved Maeda et al. and/or ACM Press

p. 19

Furukawa, Masahiro, Yoshikawa, Hiromi, Hachisu, Taku, Fukushima, Shogo and Kajimoto, Hiroyuki (2011): "Vection field" for pedestrian traffic control. In: Proceedings of the 2011 Augmented Human International Conference 2011. p. 19. Available online

Visual signs and audio cues are commonly used for pedestrian control in the field of general traffic research. Because pedestrians need to first acquire and then recognize such cues, time delays invariably occur between cognition and action. To better cope with this issue of delays, wearable devices have been proposed to control pedestrians more intuitively. However, attaching and removing the devices can be cumbersome and impractical. In this study, we propose a new visual navigation method for pedestrians using a "Vection Field," in which optical flow is presented on the ground. The optical flow is presented using a lenticular lens, a passive optical element that generates a visual stimulus based on a pedestrian's movement without an electrical power supply. In this paper we present a design for the fundamental visual stimulus and evaluate the principle of our proposed method for directional navigation. Results revealed that the optical flow of stripe and random-dot patterns displaced pedestrian pathways significantly, and that implementation with a lenticular lens is feasible.

© All rights reserved Furukawa et al. and/or ACM Press

p. 2

Matsushita, Kazuhiro, Iwai, Daisuke and Sato, Kosuke (2011): Interactive bookshelf surface for in situ book searching and storing support. In: Proceedings of the 2011 Augmented Human International Conference 2011. p. 2. Available online

We propose an interactive bookshelf surface that augments the human ability to search for and store books in situ. In book searching support, when a user touches the edge of the bookshelf, the cover image of the stored book located above the touched position is projected directly onto the book's spine. As a result, the user can search for a desired book by sliding his (or her) finger across the shelf edge. In book storing support, when a user brings a book close to the bookshelf, the place where the book should be stored is visually highlighted by a projection light. This paper also presents the sensing technologies that achieve the above-mentioned interaction techniques. In addition, by considering the properties of the human visual system, we propose a simple visual effect that reduces the legibility degradation of the projected image contents caused by the complex textures and geometric irregularities of the spines. We confirmed the feasibility of the system and the effectiveness of the proposed interaction techniques through user studies.

© All rights reserved Matsushita et al. and/or ACM Press
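
The searching interaction reduces to mapping a touch coordinate along the shelf edge to the spine stored above it. In the minimal sketch below, the calibrated spine extents are an assumed data structure, not a format from the paper.

    def book_at_touch(touch_x, spine_extents):
        # spine_extents: list of (book_id, x_left, x_right) tuples from
        # shelf calibration (assumed format). Return the book whose spine
        # spans the touched x position, or None for a gap.
        for book_id, x_left, x_right in spine_extents:
            if x_left <= touch_x < x_right:
                return book_id
        return None

    # book_at_touch(55.0, [("encyclopedia", 0, 40), ("ux-book", 40, 95)])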

p. 20

Kurosaki, Keitaro, Kawasaki, Hiroki, Kondo, Daisuke, Iizuka, Hiroyuki, Ando, Hideyuki and Maeda, Taro (2011): Skill transmission for hand positioning task through view-sharing system. In: Proceedings of the 2011 Augmented Human International Conference 2011. p. 20. Available online

In this paper, we describe skill transmission through our view-sharing system, which can mix or exchange first-person perspectives from the partner's exact viewpoint. Since a non-skilled person can see the first-person perspective of a skilled person, the motion of the non-skilled person is intuitively modified and supported. The task for skill transmission is playing the theremin, which requires precise hand motions. As a result, we show that skill transmission happens more effectively with our view-sharing system than with the conventional method of side-by-side teaching. We also discuss how to effectively augment human ability.

© All rights reserved Kurosaki et al. and/or ACM Press

p. 21

Nakagawa, Maki, Tsukada, Koji and Siio, Itiro (2011): Smart skincare system: remote skincare advice system using life logs. In: Proceedings of the 2011 Augmented Human International Conference 2011. p. 21. Available online

Many women find it difficult to maintain beautiful skin as skincare approaches require a great deal of effort, time, and special knowledge. Women often ask experts in cosmetic stores for skincare advice. However, this approach has limitations in terms of time, place, and privacy. To solve these problems, we propose a remote skincare advice system using life logs. This system helps users automatically log information related to their skin condition and share these data with skincare experts in order to obtain appropriate advice. First, we performed a feasibility study to select proper life log data for our system, and then we built prototype systems. Finally, we verified the effectiveness of our system through two studies.

© All rights reserved Nakagawa et al. and/or ACM Press

p. 22

Takahashi, Yuta, Toda, Masashi, Sakurazawa, Shigeru, Akita, Junichi, Kondo, Kazuaki and Nakamura, Yuichi (2011): Skill evaluation method based on variability of antagonism power of EMG. In: Proceedings of the 2011 Augmented Human International Conference 2011. p. 22. Available online

We can more effectively take the physical skills of individual people into consideration from various points of view when we focus on evaluating their skills while exercising. We can focus on their maximum levels of speed and power, their smoothness through a series of exercises, their instantaneous force, their repeatability, and their adjustability to agitation or obstacles such as surrounding people or nature. Many exercise skills can be evaluated quantitatively with relative ease by carefully analyzing results and performance. However, it is difficult to evaluate the "repeatability" aspect of exercise skill when judging its degree only from viewing the given exercise. A physical exercise process that contributes stable results, such as hitting a home run each time, is equivalent to a "skill," and we believe that the acquisition of such a skill is very useful in fields such as physical training. Therefore, we examined the repeatability aspect from this point of view. We used an antagonism power index calculated from EMG for this purpose. The index represents adjustments made in the output power of the muscles, and we expected this adjustment function to be very useful for evaluating the exercise skills of a given individual. The antagonism power was calculated using the quasi-muscular tension and a skeletal muscle model consisting of one joint and two muscles. We also made a comparison between an unskilled state and a skilled state. As a result, differences in exercise skill appeared as differences in antagonism power. Therefore, we conclude that antagonism power is effective for creating the new exercise skill evaluation index that we define in this paper.

© All rights reserved Takahashi et al. and/or ACM Press

p. 23

Iizuka, Hiroyuki, Kondo, Daisuke, Kawasaki, Hiroki, Ando, Hideyuki and Maeda, Taro (2011): Coordinated behavior between visually coupled dyads. In: Proceedings of the 2011 Augmented Human International Conference 2011. p. 23. Available online

We describe how visually coupled people start synchronized behavior under two visual coupling conditions: view-swapping and view-blending. In the view-swapping condition, the two people's views are exchanged from the first-person perspective, so that each sees the partner's view. The view-blending condition lets both people see a blend of the two views. We report the different coordination strategies for starting synchronization observed under the two conditions. In terms of the time required to start synchronization, view-swapping outperforms view-blending.

© All rights reserved Iizuka et al. and/or ACM Press
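
The two coupling conditions can be seen as one blending operation, with view-swapping as the extreme case. A sketch using OpenCV, assuming the two first-person frames are already captured, aligned, and transported between the users:

    import cv2

    def coupled_view(own_frame, partner_frame, alpha):
        # alpha = 1.0 -> view-swapping (each user sees only the partner);
        # alpha = 0.5 -> view-blending (an even mix of the two views).
        return cv2.addWeighted(partner_frame, alpha, own_frame, 1.0 - alpha, 0.0)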

p. 24

Seigneur, Jean-Marc (2011): The emotional economy for the augmented human. In: Proceedings of the 2011 Augmented Human International Conference 2011. p. 24. Available online

Happiness research findings are increasingly being taken into account in standard economics. However, most findings are based on a posteriori surveys that try to infer how happy people have been. In this paper, we argue that advances in wearable computing, especially brain-computer interfaces, can lead to real-time measurements of happiness. We then propose a new kind of economic model in which people pay depending on the emotions they have experienced. We have combined current commercial off-the-shelf software and hardware components to create a proof-of-concept of the model.

© All rights reserved Seigneur and/or ACM Press

p. 25

Okada, Tomonari, Yamamoto, Tetsuya, Terada, Tsutomu and Tsukamoto, Masahiko (2011): Wearable MC system a system for supporting MC performances using wearable computing technologies. In: Proceedings of the 2011 Augmented Human International Conference 2011. p. 25. Available online

A master of ceremonies (MC) plays an important role in ensuring that events progress smoothly, because unexpected interruptions can make them unsuccessful. MCs must have various abilities, such as memorizing the content of given scenarios and managing problems that occur unexpectedly. Moreover, since unskilled MCs cannot intuit the atmosphere of the audience during an event, they cannot control it smoothly. Therefore, we propose a wearable system that solves these problems for MCs through wearable computing technologies. Our system has functions to support MCs in carrying out their duties smoothly, such as a robust voice-tracking function for reading scripts, a user interface that does not interrupt other tasks, and a function that enables MCs to intuitively grasp the atmosphere of the audience. We implemented a prototype of the wearable MC system and used it at several events. The results confirmed that it worked well and helped MCs carry out their official duties smoothly.

© All rights reserved Okada et al. and/or ACM Press

p. 26

Kondo, Daisuke, Kurosaki, Keitaro, Iizuka, Hiroyuki, Ando, Hideyuki and Maeda, Taro (2011): View sharing system for motion transmission. In: Proceedings of the 2011 Augmented Human International Conference 2011. p. 26. Available online

We are developing a 'view sharing' system for supporting remote cooperative work. The view sharing system is built from video see-through head-mounted displays (VST-HMDs) and motion trackers. It allows two users in remote places to share their first-person views with each other. The users can share what the other user is seeing, and furthermore can align their spatial perception, motion, and head movement. By sharing those sensations, non-verbal skills can be transmitted from a skilled person to a non-skilled person. Using this system, an expert in a remote place can instruct a non-skilled person to improve task performance.

© All rights reserved Kondo et al. and/or ACM Press

p. 27

Kawaguchi, Nobuo, Ogawa, Nobuhiro, Iwasaki, Yohei, Kaji, Katsuhiko, Terada, Tsutomu, Murao, Kazuya, Inoue, Sozo, Kawahara, Yoshihiro, Sumi, Yasuyuki and Nishio, Nobuhiko (2011): HASC Challenge: gathering large scale human activity corpus for the real-world activity understandings. In: Proceedings of the 2011 Augmented Human International Conference 2011. p. 27. Available online

Understanding human activity through wearable sensors will enable the next generation of human-oriented computing. However, most research on activity recognition so far is based on small numbers of test subjects and is not well adapted to real-world applications. To overcome this situation, we have started a project named "HASC Challenge" to collect a large-scale human activity corpus. By the end of 2010, through the collaboration of 20 teams, more than 6,700 accelerometer recordings from 540 subjects had been collected through our project. We also developed a tool named "HASC Tool" for the management, evaluation, and collection of large numbers of activity sensor data.

© All rights reserved Kawaguchi et al. and/or ACM Press
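
Working with an accelerometer corpus of this kind typically starts by slicing the raw tri-axial stream into fixed-length windows and extracting simple statistics per window. The sketch below shows that generic first step; the window length and features are illustrative, not HASC Tool's.

    import numpy as np

    def windowed_features(accel, window=256, step=128):
        # accel: N x 3 array of tri-axial accelerometer samples. Slide an
        # overlapping window and compute per-axis mean and standard
        # deviation as a minimal feature vector for activity recognition.
        feats = []
        for start in range(0, len(accel) - window + 1, step):
            w = accel[start:start + window]
            feats.append(np.concatenate([w.mean(axis=0), w.std(axis=0)]))
        return np.asarray(feats)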

p. 28

Sugisaki, Aru, Hashimoto, Yuki, Yonemura, Tomoko, Iizuka, Hiroyuki, Ando, Hideyuki and Maeda, Taro (2011): Effective galvanic vestibular stimulation in synchronizing with ocular movement. In: Proceedings of the 2011 Augmented Human International Conference 2011. p. 28. Available online

It is known that galvanic vestibular stimulation (GVS) can cause ocular movement. Our final goal is to use GVS to support ocular movements. However, the effects of GVS on ocular movements have mostly been investigated while gazing at a fixed point, despite the fact that we have two different strategies for following a moving target: saccades and smooth pursuit. The effect might differ because the two strategies use different mechanisms. Therefore, this paper investigates the effects of GVS during saccades. As a result, we show that the effect of GVS depends on the timing at which GVS is applied after the target marker moves.

© All rights reserved Sugisaki et al. and/or ACM Press

p. 29

Nakamura, Mieko and Miyashita, Homei (2011): Catchy account: a system for acquiring a realistic sense of expenditures. In: Proceedings of the 2011 Augmented Human International Conference 2011. p. 29. Available online

In this paper, we propose a new household accounting system for realistically sensing expenditures. In 2D mode, expenditures are visualized through the placement of rectangles whose areas are proportional to the amount spent; thus, each item can be understood within the context of the total expenditure. In AR mode, spheres whose volumes are proportional to the amount spent appear to be floating in the camera image. The spheres fill the entire room and the size of expenditure can be realistically sensed. We designed this system in an attempt to "augment" the experience, so that the user can acquire a more realistic sense of expenditures.

© All rights reserved Nakamura and Miyashita and/or ACM Press
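
The core invariant of the 2D mode is that each rectangle's area is proportional to the amount spent, so every item reads against the total expenditure. The strip layout below preserves that invariant; it is an assumed stand-in for the paper's actual placement algorithm.

    def expense_rectangles(expenses, canvas_w=100.0, canvas_h=100.0):
        # Lay fixed-height strips side by side: each width (and hence
        # area) is proportional to the item's share of the total.
        total = sum(expenses.values())
        x, rects = 0.0, {}
        for item, amount in expenses.items():
            w = canvas_w * amount / total
            rects[item] = (x, 0.0, w, canvas_h)  # (x, y, width, height)
            x += w
        return rects

    # expense_rectangles({"rent": 800, "food": 300, "travel": 100})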

p. 3

Ochiai, Yoichi and Toyoshima, Keisuke (2011): Homunculus: the vehicle as augmented clothes. In: Proceedings of the 2011 Augmented Human International Conference 2011. p. 3. Available online

In this paper we propose adding a new system with valuable functionalities to vehicles. We call it "Homunculus". It is based on a new concept of interaction between humans and vehicles, and it promotes and augments the nonverbal communicability of humans in vehicles. It is difficult to communicate with drivers inside vehicles by eye contact, hand gestures, or touch; "Homunculus" is a system that solves these problems. The instruments of "Homunculus" are composed of three system modules. The first is the Robotic Eyes System, a set of robotic eyes that follows the driver's eye movements and head rotations. The second is the Projection System, which shows the driver's hand gestures on the road. The third is the Haptic Communication System, which consists of an IR distance sensor array on the vehicle and vibration motors attached to the driver; it gives drivers a haptic sense of objects approaching the vehicle. These three systems are set on the vehicle's hood or sides. We propose that humans and vehicles can be unified as one unit by Homunculus. The system works as a middleman for communication between people and their vehicles, people in other cars, or even people just walking on the street. We suggest that the new relationship between people and their vehicles could be like that between people and their clothes.

© All rights reserved Ochiai and Toyoshima and/or ACM Press

p. 30

Sekiguchi, Koji, Nakanishi, Yasuto, Kitahara, Soh, Ohmori, Takuro and Akatsuka, Daisuke (2011): Designing augmented environment with hybrid prototyping using virtual simulation and physical device. In: Proceedings of the 2011 Augmented Human International Conference 2011. p. 30. Available online

In this paper, we describe hybrid prototyping, which combines virtual simulation with physical devices, and we argue for the possibility of hybrid prototyping through a simulation of an augmented environment.

© All rights reserved Sekiguchi et al. and/or ACM Press

p. 31

Kurze, Martin and Roselius, Axel (2011): Smart glasses linking real live and social network's contacts by face recognition. In: Proceedings of the 2011 Augmented Human International Conference 2011. p. 31. Available online

Imagine you participate in a big meeting with several people remotely known to you. You remember their faces but not their names. This is where "Smart Glasses" supports you: Smart Glasses consist of a (wearable) display, a tiny camera, some local processing power, and an uplink to a backend service. The current implementation is based on Android and runs on smartphones; early research prototypes with different types of wearable displays have been evaluated as well. The system executes face detection and face tracking locally on the device (e.g. a smartphone) and then links to a service running in the cloud to perform the actual face recognition based on the user's personal contact list (gallery). Recognized and identified persons are then displayed with their names and latest social network activities. The approach is directed towards an AR ecosystem for mobile use; therefore, open interfaces are provided on the device as well as to the service backend. We intend to take today's location-based AR systems one step further towards computer-vision-based AR to really fit the needs of today's and tomorrow's users.

© All rights reserved Kurze and Roselius and/or ACM Press
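
The described split — cheap face detection and tracking on the device, recognition against the personal gallery in the backend — can be sketched as below. The Haar cascade is a generic stand-in for the local detector, and recognize_in_cloud is a hypothetical backend call, not an API from the paper.

    import cv2

    def detect_faces(frame_gray):
        # On-device half of the pipeline: detect face regions to track
        # and to crop for the recognition service.
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        return cascade.detectMultiScale(frame_gray, scaleFactor=1.1,
                                        minNeighbors=5)

    def annotate(frame_gray, recognize_in_cloud):
        # recognize_in_cloud(crop) -> contact name: a hypothetical service
        # that matches the crop against the user's contact gallery.
        return [recognize_in_cloud(frame_gray[y:y + h, x:x + w])
                for (x, y, w, h) in detect_faces(frame_gray)]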

p. 32

Okamoto, Shin, Kawasaki, Hiroki, Iizuka, Hiroyuki, Yokosaka, Takumi, Yonemura, Tomoko, Hashimoto, Yuki, Ando, Hideyuki and Maeda, Taro (2011): Inducing human motion by visual manipulation. In: Proceedings of the 2011 Augmented Human International Conference 2011. p. 32. Available online

This paper reports a study on augmenting human motions by manipulating the visual images displayed to users. The target motion is not only motion that can be seen in the subject's view (i.e. hand or foot motion) but also full-body motion that cannot be captured from the subject's own perspective. As a result, it is shown that the motions are modulated without any physical contact, only by the manipulated images.

© All rights reserved Okamoto et al. and/or ACM Press

p. 33

Tsetserukou, Dzmitry (2011): FlexTorque, FlexTensor, and HapticEye: exoskeleton haptic interfaces for augmented interaction. In: Proceedings of the 2011 Augmented Human International Conference 2011. p. 33. Available online

In order to realize haptic interaction (e.g., holding, pushing, and contacting objects) in a virtual environment, and mediated haptic communication with human beings (e.g., handshaking), force feedback is required. Recently there has been a substantial need for and interest in haptic displays that can provide realistic, high-fidelity physical interaction in virtual environments. The aim of our research is to implement wearable haptic displays for the presentation of realistic feedback (kinesthetic stimuli) to the human arm. We developed the wearable devices FlexTorque and FlexTensor, which induce forces on the human arm and do not require holding any additional haptic interface in the hand. This is a new technology for Virtual Reality that allows the user to explore their surroundings freely. Our concept of Karate (empty hand) Haptics is the opposite of conventional interfaces (e.g., Wii Remote, SensAble's PHANTOM, SPIDAR [1]) that require holding a haptic interface in the hand, thus restricting the motion of the fingers in midair. The HapticEye interface allows a blind person to explore an unknown environment in a natural and effective manner; the wearer can literally see the environment by hand.

© All rights reserved Tsetserukou and/or ACM Press

p. 34

Nakamura, Hiromi and Miyashita, Homei (2011): Augmented gustation using electricity. In: Proceedings of the 2011 Augmented Human International Conference 2011. p. 34. Available online

In this paper, we propose a method to augment gustation and increase the number of perceptible tastes. Electric taste is the sensation elicited upon stimulating the tongue with electric current. We used this phenomenon to convey information that humans cannot perceive with their tongue. Our method involves changing the taste of foods and drinks by using electric taste. First, we propose a system to drink beverages using straws that are connected to an electric circuit. Second, we propose a system to eat foods using a fork or chopsticks connected to an electric circuit. Finally, we discuss augmented gustation using various sensors.

© All rights reserved Nakamura and Miyashita and/or ACM Press

p. 35

Okamoto, Makoto, Komatsu, Takanori, Ito, Kiyohide, Akita, Junichi and Ono, Tetsuo (2011): FutureBody: design of perception using the human body. In: Proceedings of the 2011 Augmented Human International Conference 2011. p. 35. Available online

We created a new interactive design concept, "FutureBody," that generates or augments new perceptions for users. The concept of FutureBody consists of two elements, "active searching" and "embodiment": it allows users to search their environment actively and emits indirect feedback to activate users' embodiment. We believe this concept will form the basis of a new perception design methodology for people.

© All rights reserved Okamoto et al. and/or ACM Press

p. 36

Kimber, Don, Vaughan, Jim and Rieffel, Eleanor (2011): Augmented perception through mirror worlds. In: Proceedings of the 2011 Augmented Human International Conference 2011. p. 36. Available online

We describe a system that mirrors a public physical space into cyberspace to provide people with augmented awareness of that space. Through views on web pages, portable devices, or on 'Magic Window' displays located in the physical space, remote people may 'look in' to the space, while people within the space are provided information not apparent through unaided perception. For example, by looking at a mirror display, people can learn how long others have been present, where they have been, etc. People in one part of a building can get a sense of the activities in the rest of the building, see who is present in an office, look in on a talk in another room, etc. We describe a prototype of such a system developed in our research lab and office space.

© All rights reserved Kimber et al. and/or ACM Press

p. 37

Tanjo, Yuuki, Ogawa, Junichi, Ito, Sadanori, Sakamoto, Ryuuki, Umata, Ichiro and Ando, Hiroshi (2011): Training support system for violin bowing. In: Proceedings of the 2011 Augmented Human International Conference 2011. p. 37. Available online

The purpose of this paper is to propose a multimodal data viewer for teaching the violin. There are many studies on motor skills that use multimodal data captured from motion capture systems. Using normal motion capture data alone, however, it is difficult to give explanations when experts teach their skills to beginners. For example, not only the motion of the right arm and wrist but also shifting the pressure of the bow on the strings is a critical skill to master when playing the violin. The shifting pressures can be obtained with strain gauge sensors. In this paper, we propose a system designed to provide training support with multimodal data by jointly visualizing motion data and other sensor data, such as strain gauge readings. As an example, we show a violin teaching support system and experimental data.

© All rights reserved Tanjo et al. and/or ACM Press

p. 38

Yamada, Takashi and Watanabe, Tomio (2011): An arm wrestling robot system for human upper extremity wear. In: Proceedings of the 2011 Augmented Human International Conference 2011. p. 38. Available online

In this study, we develop a prototype arm wrestling robot system, called AssistRobot, worn on the human upper extremity. Further, we introduce into the system a force display response model based on the impact absorption of the human hand, proposed earlier by the authors. The effectiveness of the system for arm wrestling is demonstrated by sensory evaluation from the viewpoints of operability and enjoyment.

© All rights reserved Yamada and Watanabe and/or ACM Press

p. 4

Schnauer, Christian, Pintaric, Thomas and Kaufmann, Hannes (2011): Full body interaction for serious games in motor rehabilitation. In: Proceedings of the 2011 Augmented Human International Conference 2011. p. 4. Available online

Serious games, and especially their use in healthcare applications, are an active and rapidly growing area of research. A key aspect of games in rehabilitation is 3D input. In this paper we present our implementation of a full body motion capture (MoCap) system which, together with a biosignal acquisition device, has been integrated into a game engine. Furthermore, a workflow has been established that enables the use of acquired skeletal data for serious games in a medical environment. Finally, a serious game has been implemented, targeting rehabilitation of patients with chronic pain of the lower back and neck, a group that has previously been neglected by serious games. The focus of this work is on the full body MoCap system and its integration with biosignal devices and the game engine. A short overview of the application and preliminary results are provided.

© All rights reserved Schnauer et al. and/or ACM Press

p. 5

Richter, Hendrik, Hang, Alina and Blaha, Benedikt (2011): The PhantomStation: towards funneling remote tactile feedback on interactive surfaces. In: Proceedings of the 2011 Augmented Human International Conference 2011. p. 5. Available online

We present the PhantomStation, a novel interface that communicates tactile feedback to remote parts of the user's body, so that touch input on interactive surfaces can be augmented with synchronous tactile sensations. With the objective of reducing the number of tactile actuators on the user's body, we use the psychophysical Phantom Sensation (PhS) [1]. This illusion occurs when two or more tactile stimuli are presented simultaneously to the skin; the location of the pseudo-tactile sensation can be changed by modulating the intensity or the interstimulus time interval. We compare three different actuator technologies for recreating the PhS. Furthermore, we discuss how remote tactile feedback of this kind can improve interaction accuracy. We present our prototype and propose scenarios in conjunction with interactive surfaces.

© All rights reserved Richter et al. and/or ACM Press
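
The intensity-modulated variant of the Phantom Sensation amounts to amplitude panning between two actuators. The sketch below assumes a linear funneling law for simplicity; as the abstract notes, the illusion can also be steered by the interstimulus time interval.

    def phantom_amplitudes(position, max_amp=1.0):
        # position in [0, 1] between actuator A (0.0) and actuator B (1.0).
        # Splitting the intensity funnels the perceived stimulus to a
        # point between the two physical contact sites.
        position = min(max(position, 0.0), 1.0)
        return max_amp * (1.0 - position), max_amp * position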

p. 6

Shimizu, Shoichi and Fujiyoshi, Hironobu (2011): Acquisition of 3D gaze information from eyeball movements using inside-out camera. In: Proceedings of the 2011 Augmented Human International Conference 2011. p. 6. Available online

We propose a method for obtaining 3D gaze information using an inside-out camera. Information on 3D gaze points can be useful not only for clarifying higher cognitive processes in humans but also for reproducing the 3D shape of an object from eyeball movement, simply by gazing at the object, as an extension of the visual function. Using half-mirrors, an inside-out camera can capture a person's eyeball head-on and can capture the person's visual field from a position equivalent to that of the eyeball. The relationship between the gaze vector obtained from images of the eyeball and the gaze point in images of the visual field is expressed by a conversion equation. The 3D position of the gaze point can then be estimated by using stereo constraints between two scene cameras. In an evaluation experiment, the gaze point could be estimated with an average error of about 15 pixels, and we also show the 3D scan path obtained by the proposed method from eyeball movement while gazing at an object.

© All rights reserved Shimizu and Fujiyoshi and/or ACM Press
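
Estimating the 3D gaze point from "stereo constraints between two scene cameras" is, in its generic form, two-view triangulation once the gaze point has been located in each visual-field image. A sketch with OpenCV follows; the calibrated projection matrices are assumed inputs, and this is not necessarily the authors' exact pipeline.

    import cv2
    import numpy as np

    def gaze_point_3d(P1, P2, gaze1, gaze2):
        # P1, P2: 3x4 projection matrices of the two scene cameras.
        # gaze1, gaze2: (x, y) gaze locations in the two camera images.
        a = np.asarray(gaze1, dtype=float).reshape(2, 1)
        b = np.asarray(gaze2, dtype=float).reshape(2, 1)
        X = cv2.triangulatePoints(P1, P2, a, b)  # homogeneous 4x1 result
        return (X[:3] / X[3]).ravel()            # Euclidean (x, y, z)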

p. 7

Higuchi, Keita, Shimada, Tetsuro and Rekimoto, Jun (2011): Flying sports assistant: external visual imagery representation for sports training. In: Proceedings of the 2011 Augmented Human International Conference 2011. p. 7. Available online

Mental imagery is a quasi-perceptual experience emerging from past experiences. In sports psychology, mental imagery is used to improve athletes' cognition and motivation. Eminent athletes often create their mental imagery as if they themselves were external observers; this ability plays an important role in sports training and performance. External visual imagery refers to a representation containing one's own self from the perspective of others. However, without technological support, it is difficult to obtain accurate external visual imagery during sports. In this paper, we propose a system that uses an aerial vehicle (a quadcopter) to capture athletes' external visual imagery. The proposed system integrates various sensor data to autonomously track the target athlete and compute the camera angle and position. The athlete can see the captured image in real time through a head-mounted display or, more recently, through a hand-held device. We have applied this system to support soccer and other sports, and we discuss how the proposed system can be used during training.

© All rights reserved Higuchi et al. and/or ACM Press
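
In its simplest form, keeping the athlete framed is a proportional controller on the pixel error between the tracked target and the image center. The sketch below is an assumed illustration; the actual system fuses several sensors to plan the camera position and angle.

    def tracking_command(target_px, frame_size, gain=0.005):
        # Steer yaw against the horizontal pixel error and climb/descend
        # against the vertical error. The gain and the command interface
        # are illustrative assumptions.
        cx, cy = frame_size[0] / 2.0, frame_size[1] / 2.0
        ex, ey = target_px[0] - cx, target_px[1] - cy
        return {"yaw_rate": gain * ex, "climb_rate": -gain * ey}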

p. 8

Ishiguro, Yoshio and Rekimoto, Jun (2011): Peripheral vision annotation: noninterference information presentation method for mobile augmented reality. In: Proceedings of the 2011 Augmented Human International Conference 2011. p. 8. Available online

Augmented-reality (AR) systems present information about a user's surrounding environment by overlaying it on the user's real-world view. However, such overlaid information tends to obscure a user's field of view and thus impedes a user's real-world activities. This problem is especially critical when a user is wearing a head-mounted display. In this paper, we propose an information presentation mechanism for mobile AR systems by focusing on the user's gaze information and peripheral vision field. The gaze information is used to control the positions and the level-of-detail of the information overlaid on the user's field of view. We also propose a method for switching displayed information based on the difference in human visual perception between the peripheral and central visual fields. We develop a mobile AR system to test our proposed method consisting of a gaze-tracking system and a retinal imaging display. The eye-tracking system estimates whether the user's visual focus is on the information display area or not, and changes the information type from simple to detailed information accordingly.

© All rights reserved Ishiguro and Rekimoto and/or ACM Press
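
Once gaze is estimated, the level-of-detail switch is a small piece of logic: detailed information when the gaze rests on the information display area (central vision), simple information otherwise (peripheral vision). A sketch with an assumed rectangular display region:

    def annotation_level(gaze_xy, display_rect):
        # display_rect: (x, y, width, height) of the information area in
        # view coordinates (assumed format).
        gx, gy = gaze_xy
        x, y, w, h = display_rect
        inside = x <= gx <= x + w and y <= gy <= y + h
        return "detailed" if inside else "simple"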

p. 9

Kato, Kojiro, Kitani, Kris M. and Nojima, Takuya (2011): Ego-motion analysis using average image data intensity. In: Proceedings of the 2011 Augmented Human International Conference 2011. p. 9. Available online

In this paper, we present a new method for performing ego-motion analysis using intensity averaging of image data. The method can estimate general motion between two sequential images on the pixel plane by calculating cross-correlations. With distance information between the camera and objects, the method also enables estimation of camera motion. The method is sufficiently robust even for out-of-focus images, and its computational overhead is quite low because it uses simple averaging. In the future, this method could be used to measure fast motions such as human head tracking or robot movement. We present a detailed description of the proposed method and experimental results demonstrating its basic capability. With these results, we verify that our proposed system can detect camera motion even with blurred images. Furthermore, we confirm that it can operate at up to 714 FPS when calculating one-dimensional translational motion.

© All rights reserved Kato et al. and/or ACM Press
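
The intensity-averaging idea is cheap because it collapses each frame to one-dimensional profiles before correlating. A minimal NumPy sketch for the one-dimensional translation case reported above; the sign convention and normalization are illustrative.

    import numpy as np

    def horizontal_shift(frame_a, frame_b):
        # Collapse each grayscale frame to a 1-D profile by averaging
        # every column, then find the lag that best aligns the profiles.
        prof_a = frame_a.mean(axis=0) - frame_a.mean()
        prof_b = frame_b.mean(axis=0) - frame_b.mean()
        corr = np.correlate(prof_b, prof_a, mode="full")
        # The peak's offset from zero lag is the shift in pixels
        # (positive: frame_b's content moved right relative to frame_a).
        return int(np.argmax(corr)) - (len(prof_a) - 1)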




 



