Publication statistics

Publication period: 1993-2012
Publication count: 60
Number of co-authors: 37



Co-authors

Number of publications with 3 favourite co-authors:

Takashi Miyaki: 10
Yuji Ayatsuka: 9
Michimune Kohno: 7

 

 

Productive colleagues

Jun Rekimoto's 3 most productive colleagues by number of publications:

Gregory D. Abowd: 116
Ivan Poupyrev: 37
Shwetak N. Patel: 35
 
 
 


Jun Rekimoto

Ph.D.

Picture of Jun Rekimoto.
Has also published under the name of:
"J. Rekimoto"

Personal Homepage:
lab.rekimoto.org/members-2/rekimoto/


Current place of employment:
The University of Tokyo

Dr. Jun Rekimoto is an interaction researcher and designer. His work in the field of Human-Computer Interaction has had a lasting and highly significant impact, present in a multitude of interfaces and devices used by millions of people worldwide every day. Dr. Rekimoto received his Ph.D. in Information Science from the Tokyo Institute of Technology in 1996. Since 1994 he has conducted advanced research at Sony Computer Science Laboratories (CSL) in Tokyo. In 1999 he formed and directed the Sony CSL Interaction Laboratory, which was tasked with the invention of novel interaction paradigms. He has served as Deputy Director of Sony CSL since 2011, and since 2007 he has also directed his own research laboratory at The University of Tokyo.

Over the course of his research career, Jun Rekimoto has invented a range of highly innovative interactive systems and sensing technologies, including marker-based video see-through augmented reality (NaviCam), marker-based 3D tracking and registration for augmented reality applications (e.g., CyberCode), direct-manipulation techniques for multiple displays and devices (Pick-and-Drop), diffused-IR hand and object tracking for augmented surfaces (HoloWall), and some of the earliest projected capacitive multi-touch sensing techniques and interfaces (e.g., SmartSkin), to name a few. He has published hundreds of articles at some of the most prestigious international research conferences, brought his research to market in Sony products, and spun off entire companies.

He received the Multi-Media Grand Prix Technology Award from the Multi-Media Contents Association of Japan in 1998, the iF Interaction Design Award in 2000, the Japan Inter-Design Award in 2003, and the iF Communication Design Award in 2005. In 2007 he was elected to the ACM SIGCHI Academy. Jun Rekimoto's current research interests include human-computer interaction, computer-augmented environments, and computer-augmented humans (human-computer integration).


Publications by Jun Rekimoto (bibliography)

2012
 

Misawa, Kana, Ishiguro, Yoshio and Rekimoto, Jun (2012): Ma petite chérie: what are you looking at?: a small telepresence system to support remote collaborative work for intimate communication. In: Proceedings of the 2012 Augmented Human International Conference 2012. p. 17.

We present a telepresence system with a reduced-scale face-shaped display for supporting intimate telecommunication. In our previous work, we developed a real-size face-shaped display that tracks and reproduces the remote user's head motion and face image. It can convey the user's nonverbal information, such as facial expression and gaze awareness. In this paper, we examine the value and effect of scale reduction in such face-shaped displays. We expect small face displays to retain the benefits of real-size talking-head telecommunication systems while also giving a more intimate impression. A small display is easier to transport or put on a desk, and it can be worn on the shoulder of a local participant so that people can carry it like a small buddy. However, it is not clear how such a reduced-size face screen might change the quality of nonverbal communication. We thus conducted an experiment using a 1/14-scale face display, and found that critical nonverbal information, such as gaze direction, is still correctly transmitted even when the face size is reduced.

© All rights reserved Misawa et al. and/or ACM Press

 

Rekimoto, Jun (2012): Squama: modular visibility control of walls and windows for programmable physical architectures. In: Proceedings of the 2012 International Conference on Advanced Visual Interfaces 2012. pp. 168-171.

In this paper we present Squama, a programmable physical window or wall that can independently control the visibility of its elemental small square tiles. This is an example of programmable physical architecture, our vision for future architectures in which the physical features of architectural elements and facades can be dynamically changed and reprogrammed according to people's needs. When Squama is used as a wall, it dynamically controls the transparency through its surface and simultaneously satisfies the needs for openness and privacy. It can also control the amount of sunlight and create shadows, called programmable shadows, in order to afford indoor comfort without completely blocking the outer view. In this paper, we discuss how, in the future, architectural space can become dynamically changeable, and we introduce the Squama system as an initial instance exemplifying this concept.

© All rights reserved Rekimoto and/or ACM Press

 

Misawa, Kana, Ishiguro, Yoshio and Rekimoto, Jun (2012): LiveMask: a telepresence surrogate system with a face-shaped screen for supporting nonverbal communication. In: Proceedings of the 2012 International Conference on Advanced Visual Interfaces 2012. pp. 394-397.

We propose a telepresence system with a real-human-face-shaped screen. The system tracks the remote user's face and extracts the head motion and face image. The face-shaped screen moves with three degrees of freedom (DOF), reflecting the user's head gestures. Because the face-shaped screen is molded from 3D scan data of the user's face, the projected image is accurate even when seen from different angles. We expect this system to accurately convey the user's nonverbal communication, in particular the user's gaze direction in 3D space, which is not correctly transmitted by a 2D screen (a problem known as "the Mona Lisa effect"). To evaluate how this system contributes to communication, we conducted three experiments. The first examines the blind angle of a face-shaped screen versus a flat screen and compares the ease with which users can distinguish facial expressions. The second evaluates how correctly the direction in which the remote user's face points is transmitted. The third evaluates how correctly the gaze direction is transmitted. We found that the recognizable angles of the face-shaped screen were larger and that recognition of head directions was better than on a flat 2D screen. More importantly, we found that the face-shaped screen accurately conveyed the gaze direction, resolving the problem of the Mona Lisa effect.

© All rights reserved Misawa et al. and/or ACM Press

 

Rekimoto, Jun (2012): Squama: a programmable window and wall for future physical architectures. In: Proceedings of the 2012 International Conference on Ubiquitous Computing 2012. pp. 667-668.

In this video we present Squama, a programmable physical window or wall that can independently control the visibility of its elemental small square tiles. This is an example of programmable physical architecture, our vision for future architectures in which the physical features of architectural elements and facades can be dynamically changed and reprogrammed according to people's needs. When Squama is used as a wall, it dynamically controls the transparency through its surface and simultaneously satisfies the needs for openness and privacy. It can also control the amount of sunlight and create shadows, called programmable shadows, in order to afford indoor comfort without completely blocking the outer view. In this video, we show how, in the future, architectural space can become dynamically changeable, and we introduce the Squama system as an initial instance exemplifying this concept.

© All rights reserved Rekimoto and/or ACM Press

2011
 

Higuchi, Keita, Shimada, Tetsuro and Rekimoto, Jun (2011): Flying sports assistant: external visual imagery representation for sports training. In: Proceedings of the 2011 Augmented Human International Conference 2011. p. 7.

Mental imagery is a quasi-perceptual experience emerging from past experiences. In sports psychology, mental imagery is used to improve athletes' cognition and motivation. Eminent athletes often create their mental imagery as if they themselves were external observers; this ability plays an important role in sports training and performance. Mental image visualization refers to the representation of an external view containing one's own self from the perspective of others. However, without technological support, it is difficult to obtain accurate external visual imagery during sports. In this paper, we propose a system that uses an aerial vehicle (a quadcopter) to capture athletes' external visual imagery. The proposed system integrates various sensor data to autonomously track the target athlete and compute the camera angle and position. The athlete can see the captured image in real time through a head-mounted display or, more recently, through a hand-held device. We have applied this system to soccer and other sports, and we discuss how the proposed system can be used during training.

© All rights reserved Higuchi et al. and/or ACM Press

 

Ishiguro, Yoshio and Rekimoto, Jun (2011): Peripheral vision annotation: noninterference information presentation method for mobile augmented reality. In: Proceedings of the 2011 Augmented Human International Conference 2011. p. 8.

Augmented-reality (AR) systems present information about a user's surrounding environment by overlaying it on the user's real-world view. However, such overlaid information tends to obscure the user's field of view and thus impedes real-world activities. This problem is especially critical when the user is wearing a head-mounted display. In this paper, we propose an information presentation mechanism for mobile AR systems that focuses on the user's gaze information and peripheral vision field. The gaze information is used to control the positions and level of detail of the information overlaid on the user's field of view. We also propose a method for switching the displayed information based on the difference in human visual perception between the peripheral and central visual fields. We developed a mobile AR system, consisting of a gaze-tracking system and a retinal imaging display, to test the proposed method. The eye-tracking system estimates whether the user's visual focus is on the information display area, and changes the information type from simple to detailed accordingly.

© All rights reserved Ishiguro and Rekimoto and/or ACM Press

 

Tamaki, Emi, Miyaki, Takashi and Rekimoto, Jun (2011): PossessedHand: techniques for controlling human hands using electrical muscles stimuli. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 543-552.

If a device can control human hands, it can be useful for HCI and the output of tangible applications. To aid the control of finger movement, we present PossessedHand, a device with a forearm belt that can indicate when and which fingers should be moved. PossessedHand controls the user's fingers by applying electrical stimulus to the muscles around the forearm. Each muscle is stimulated via 28 electrode pads. Muscles at different depths in the forearm can be selected for stimulation by varying the stimulation level. PossessedHand can automatically calibrate the system for individuals; the automatic calibration system estimates the relations between each electrode pad, the stimulation level, and the resulting muscle movement. Experiments show that PossessedHand can control the motion of 16 joints in the hand. We also discuss an application of this device to aid in playing a musical instrument.

© All rights reserved Tamaki et al. and/or their publisher

 

Tsujita, Hitomi and Rekimoto, Jun (2011): HappinessCounter: smile-encouraging appliance to increase positive mood. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 117-126.

As William James stated, and as several psychological studies have confirmed, the act of smiling positively affects our mental state -- we become happier when we laugh. In this paper, we propose a new digital appliance that naturally encourages the act of smiling in our daily lives. This system is designed mainly for people living alone, who may have difficulty realizing when they are in low spirits and/or difficulty making themselves smile. Our HappinessCounter combines visual smile recognition, user feedback, and network communication. We installed this system in a home with a single occupant, and the system had positive effects on the user's mood.

© All rights reserved Tsujita and Rekimoto and/or their publisher

 

Higuchi, Keita, Ishiguro, Yoshio and Rekimoto, Jun (2011): Flying eyes: free-space content creation using autonomous aerial vehicles. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 561-570.

Highly effective 3D-camerawork techniques that do not have physical limitations have been developed for creating three-dimensional (3D) computer games. Recent techniques used for real-world visual content creation, such as those used for sports broadcasting and motion pictures, also incorporate cameras moving in 3D physical space to provide viewers with a more engaging experience. For this purpose, wired cameras or mechanically controlled cameras are used, but they require huge and expensive infrastructure, and their freedom of motion is limited. To realize more flexible free-space camerawork at reasonable cost, we propose a system called "Flying Eyes" based on autonomous aerial vehicles. Flying Eyes tracks target humans based on vision processing, and computes camera paths by controlling the camera position and orientation.

© All rights reserved Higuchi et al. and/or their publisher

 

Tsujita, Hitomi and Rekimoto, Jun (2011): Smiling makes us happier: enhancing positive mood and communication with smile-encouraging digital appliances. In: Proceedings of the 2011 International Conference on Ubiquitous Computing 2011. pp. 1-10.

William James, the noted psychologist and philosopher, believed that smiling has a positive effect on our mind. James' view, which was confirmed by several psychological studies, was that we become happier when we laugh. In this paper, we propose a new digital appliance that encourages the act of smiling in our daily lives. This system is designed for people who may not always realize when they are in low spirits and/or have difficulty with smiling. In addition, we believe that this system will foster casual conversation and prompt communication with other people. Our appliance, called the HappinessCounter, combines visual smile recognition, user feedback, and network communication. We conducted two trials of the HappinessCounter system, the first with a single occupant and the second with a couple living together. The system had positive effects on the users' moods and prompted communication among family members, thereby increasing their positive mood as well.

© All rights reserved Tsujita and Rekimoto and/or ACM Press

2010
 

Tamaki, Emi, Miyaki, Takashi and Rekimoto, Jun (2010): BrainyHand: a wearable computing device without HMD and its interaction techniques. In: Proceedings of the 2010 International Conference on Advanced Visual Interfaces 2010. pp. 387-388.

Existing wearable devices such as eyeglass-type head-mounted displays (HMDs) are very bulky and cannot be worn every day. The earphone, on the other hand, is a popular wearable device because it is small. However, an earphone cannot be used as an interaction device because it lacks input and visual feedback components. In this paper, we propose BrainyHand, an enhanced earphone device for use in interaction systems. BrainyHand consists of a color camera and a laser projector. The camera recognizes the user's hand gestures as input; for visual feedback, images are projected on the user's hand or other nearby objects or surfaces using the projector. Since laser microprojectors are becoming smaller, we expect this device configuration to eventually become as small as today's earphones. We introduce several interaction methods based on hand gesture recognition and object detection.

© All rights reserved Tamaki et al. and/or their publisher

 

Mujibiya, Adiyan, Miyaki, Takashi and Rekimoto, Jun (2010): Anywhere touchtyping: text input on arbitrary surface using depth sensing. In: Proceedings of the 2010 ACM Symposium on User Interface Software and Technology 2010. pp. 443-444.

In this paper, we propose a touch-typing-capable virtual keyboard system that uses depth sensing on an arbitrary surface. Keystroke detection is performed by matching against a database of 3D hand appearances, combined with sensing of fingertip contact with the surface. Our prototype acquires a depth map of the hand posture by implementing a phase-shift algorithm for Digital Light Processor (DLP) fringe projection on an arbitrary flat surface. The system robustly detects hand postures on the sensing surface without requiring the hands to be aligned to a virtual keyboard frame. The keystroke feedback is the physical touch on the surface, so no special hardware needs to be worn. The system runs in real time at an average of 20 frames per second.

© All rights reserved Mujibiya et al. and/or their publisher
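The touch-detection step described in the abstract above can be sketched in a few lines: a fingertip counts as a keystroke when its measured depth coincides with the surface plane. This is an illustrative sketch, not the authors' implementation; the function name, tolerance, and depth values are assumptions.

```python
# Illustrative sketch (not the paper's actual algorithm): deciding whether a
# fingertip is touching a flat surface from depth-sensor readings, as a
# depth-sensing virtual keyboard might. Tolerance and depths are hypothetical.

def is_touching(fingertip_depth_mm, surface_depth_mm, tolerance_mm=5.0):
    """A fingertip counts as touching when its depth is within
    `tolerance_mm` of the surface plane's depth at that point."""
    return abs(fingertip_depth_mm - surface_depth_mm) <= tolerance_mm

print(is_touching(498.0, 500.0))  # True: fingertip within 5 mm of the surface
print(is_touching(470.0, 500.0))  # False: fingertip hovering 30 mm above
```

In the actual system this test would be combined with the hand-posture matching step to decide which key the touching finger intends.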

 

Tamaki, Emi, Miyaki, Takashi and Rekimoto, Jun (2010): PossessedHand: a hand gesture manipulation system using electrical stimuli. In: Proceedings of the 2010 Augmented Human International Conference 2010. p. 2.

Acquiring knowledge about the timing and speed of hand gestures is important for learning physical skills such as playing musical instruments, performing arts, and making handicrafts. However, it is difficult to use devices that dynamically and mechanically control a user's hand for learning, because such devices are very large and hence unsuitable for daily use. In addition, since glove-type devices interfere with actions such as playing musical instruments, performing arts, and making handicrafts, users tend to avoid wearing them. To solve these problems, we propose PossessedHand, a device with a forearm belt, for controlling a user's hand by applying electrical stimulus to the muscles around the user's forearm. The dimensions of PossessedHand are 10 x 7.0 x 8.0 cm, and the device is portable and suited for daily use. The electrical stimuli are generated by an electronic pulse generator and transmitted from 14 electrode pads. Our experiments confirmed that PossessedHand can control the motion of 16 joints in the hand. We propose an application of this device to help a beginner learn how to play musical instruments such as the piano and koto.

© All rights reserved Tamaki et al. and/or ACM Press

 

Ishiguro, Yoshio, Mujibiya, Adiyan, Miyaki, Takashi and Rekimoto, Jun (2010): Aided eyes: eye activity sensing for daily life. In: Proceedings of the 2010 Augmented Human International Conference 2010. p. 25.

Our eyes collect a considerable amount of information when we use them to look at objects. In particular, eye movement allows us to gaze at an object and shows our level of interest in it. In this research, we propose a method that involves real-time measurement of eye movement for human memory enhancement; the method employs gaze-indexed images captured using a video camera attached to the user's glasses. We present a prototype system with an infrared-based corneal limbus tracking method. Although existing eye-tracker systems track eye movement with high accuracy, they are not suitable for daily use because the mobility of these systems is incompatible with a high sampling rate. Our prototype has small phototransistors, infrared LEDs, and a video camera, which make it possible to attach the entire system to a pair of glasses. Additionally, the accuracy of this method is compensated for by combining image processing methods and contextual information, such as eye direction, for information extraction. We developed an information extraction system with real-time object recognition in the user's visual attention area by using the eye-tracker prototype and a head-mounted camera. We apply this system to (1) fast object recognition using a SURF descriptor limited to the gaze area and (2) descriptor matching against a database of past images. Face recognition using Haar-like object features and text logging using OCR technology are also implemented. The combination of a low-resolution camera and a high-resolution, wide-angle camera is studied for high daily usability. The possibility of gaze-guided computer vision is discussed in this paper, as are communication via the phototransistor in the eye tracker and the development of a sensor system with high transparency.

© All rights reserved Ishiguro et al. and/or ACM Press

2009
 

Rekimoto, Jun (2009): SenseableRays: opto-haptic substitution for touch-enhanced interactive spaces. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 2519-2528.

This paper proposes a new haptic interaction system based on optical-haptic substitution. The system combines time-modulated structured light emitted into the workspace with a mobile or finger-mounted module consisting of a photo-detector and a tactile actuator. Unlike other tactile feedback systems, it does not require any complicated mechanism for position sensing and tactile actuation; instead, it directly converts the time-modulated structured light into haptic sensations, which users feel by sensing the light with the photo-detector. The system can easily add haptic feedback to a wide variety of applications, including surface computing systems and 3D interactive spaces.

© All rights reserved Rekimoto and/or ACM Press

 

Tamaki, Emi, Miyaki, Takashi and Rekimoto, Jun (2009): Brainy hand: an ear-worn hand gesture interaction device. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 4255-4260.

Existing wearable hand gesture interaction devices are bulky and cannot be worn in everyday life because of the presence of a large visual feedback device; in particular, an eyeglass-type head-mounted display is too large for constant use. To solve this problem, we propose Brainy Hand, a simple wearable device that adopts a laser line or, more specifically, a mini-projector as its visual feedback device. Brainy Hand consists of a color camera, an earphone, and a laser line or mini-projector. The device uses the camera to detect 3D hand gestures; the earphone is used for audio feedback. In this study, we introduce several user interfaces using Brainy Hand (e.g., a music player, a phone).

© All rights reserved Tamaki et al. and/or ACM Press

 

Iwasaki, Ken, Miyaki, Takashi and Rekimoto, Jun (2009): Expressive typing: a new way to sense typing pressure and its applications. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 4369-4374.

In this paper, we propose a new way to measure key typing pressure on off-the-shelf laptop computers. Accelerometers embedded in laptop computers to protect hard disks from sudden motion are becoming very common. This paper explores the concept of utilizing this accelerometer for sensing nonverbal aspects of key typing, such as typing pressure. This enables a wide variety of pressure-sensitive user interfaces through software alone, without requiring any additional hardware or sensors; such software can be distributed easily to a substantial number of potential users. To confirm the feasibility of this idea, we compared typing finger velocities (obtained from high-speed camera images) with sensor data from an accelerometer embedded in a laptop computer, and confirmed a clear correlation between the two sets of data. We also investigated differences in typing pressure patterns among different users. By combining keystroke speeds and typing pressure, we found that it is possible to distinguish among users. This feature can be used for security purposes, such as preventing a laptop computer from being used by non-owners. We also present possible application ideas, such as rich text expression, new types of user interface elements, and authentication.

© All rights reserved Iwasaki et al. and/or ACM Press
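The core idea above — inferring a per-keystroke intensity from the laptop's built-in accelerometer — can be sketched as follows. This is an illustrative sketch, not the authors' implementation; the function name, window size, and sample data are assumptions.

```python
# Illustrative sketch (not the paper's actual method): estimate a per-keystroke
# typing "pressure" as the peak accelerometer magnitude in a small time window
# around each keystroke. Timestamps, window, and magnitudes are hypothetical.

def keystroke_intensity(key_times, accel_samples, window=0.05):
    """For each keystroke timestamp, return the peak accelerometer
    magnitude within +/- `window` seconds of the keystroke.

    accel_samples: list of (timestamp_seconds, magnitude) pairs.
    """
    intensities = []
    for t in key_times:
        nearby = [m for (ts, m) in accel_samples if abs(ts - t) <= window]
        intensities.append(max(nearby) if nearby else 0.0)
    return intensities

# Simulated data: a soft keystroke at t = 1.0 s and a hard one at t = 2.0 s.
samples = [(0.98, 0.1), (1.00, 0.4), (1.02, 0.2),
           (1.98, 0.3), (2.00, 1.5), (2.02, 0.6)]
print(keystroke_intensity([1.0, 2.0], samples))  # [0.4, 1.5]
```

A per-user profile of such intensities, combined with keystroke timing, is the kind of feature the abstract suggests could distinguish one typist from another.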

 

Rekimoto, Jun (2009): Sensonomy: intelligence penetrating into the real space. In: Proceedings of the 2009 International Conference on Intelligent User Interfaces 2009. pp. 3-4.

The recent commoditization of mobile digital devices and networking has enabled their use as a very large-scale sensing platform. We call this possibility "Sensonomy", an integration of collective intelligence (also known as "folksonomy") and pervasive sensing. As many users own mobile devices with sensing facilities, collecting sensing data from these devices becomes quite important, and their integration can be used in very different ways. Such a feature could be a new way to create intelligent systems and interfaces. In this talk, I discuss the possibility of connecting a large number of simple devices to produce intelligent interactions. As a realistic example, I introduce a city-scale indoor and outdoor positioning system that we have developed, and show how its database can evolve using the idea of Sensonomy. I also discuss computer-augmented memory and lifelong computing based on our platform.

© All rights reserved Rekimoto and/or his/her publisher

 

Miyaki, Takashi and Rekimoto, Jun (2009): GraspZoom: zooming and scrolling control model for single-handed mobile interaction. In: Proceedings of 11th Conference on Human-computer interaction with mobile devices and services 2009. p. 11.

A pressure-sensing-based single-handed interaction model is presented in this paper. Unlike the traditional desktop GUI model, a mobile UI model has not yet been established. For example, Apple's iPhone introduced the "pinch" operation, which uses two fingers to zoom in and out on objects. However, in today's hand-held situations, manipulation methods using two fingers are not always a good solution, because in most cases they require a second hand to hold the device itself. We propose a single-handed UI scheme, "GraspZoom": a multi-state input model using pressure sensing. A force-sensitive resistor (FSR) attached to the backside of a mobile phone was employed to evaluate the effectiveness of the pressure-based control model. We also describe example applications that enable intuitive and continuous zooming and scrolling. By combining tiny thumb-gesture input with this pressure-sensing method, bi-directional operations (e.g., zoom-in and zoom-out) are also achieved.

© All rights reserved Miyaki and Rekimoto and/or their publisher
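The multi-state pressure model described above can be sketched as a simple classifier from a continuous pressure reading to discrete UI states. This is an illustrative sketch, not the authors' implementation; the state names, thresholds, and hysteresis value are assumptions.

```python
# Illustrative sketch (not the paper's actual model): map a normalized FSR
# pressure reading (0.0-1.0) to discrete input states, with a hysteresis band
# so the state does not flicker near a threshold. All values are hypothetical.

def pressure_state(pressure, prev_state, light=0.2, firm=0.6, hysteresis=0.05):
    """Classify pressure into 'idle', 'scroll' (light grasp), or 'zoom'
    (firm grasp), biased toward the previous state by `hysteresis`."""
    # Leaving a state requires dropping slightly below its entry threshold.
    light_t = light - hysteresis if prev_state in ("scroll", "zoom") else light
    firm_t = firm - hysteresis if prev_state == "zoom" else firm

    if pressure >= firm_t:
        return "zoom"      # firm grasp: continuous zooming
    if pressure >= light_t:
        return "scroll"    # light grasp: scrolling
    return "idle"

# A rising pressure sweep passes through all three states.
states = []
state = "idle"
for p in (0.0, 0.1, 0.3, 0.5, 0.7, 0.9):
    state = pressure_state(p, state)
    states.append(state)
print(states)  # ['idle', 'idle', 'scroll', 'scroll', 'zoom', 'zoom']
```

The hysteresis band reflects the usability issue a real grasp-based control must solve: a thumb cannot hold pressure exactly at a threshold, so entry and exit levels are separated.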

 

Kawauchi, Kensaku, Miyaki, Takashi and Rekimoto, Jun (2009): Directional Beaconing: A Robust WiFi Positioning Method Using Angle-of-Emission Information. In: Choudhury, Tanzeem, Quigley, Aaron J., Strang, Thomas and Suginuma, Koji (eds.) Location and Context Awareness - Fourth International Symposium - LoCA 2009 May 7-8, 2009, Tokyo, Japan. pp. 103-119.

2008
 

Rekimoto, Jun (2008): Brightshadow: shadow sensing with synchronous illuminations for robust gesture recognition. In: Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems April 5-10, 2008. pp. 2769-2774.

We introduce a new sensor architecture for robust gesture recognition that uses a combination of a high-speed camera and synchronous LED illumination. The sensor recognizes the position of a user's hand from the shadows it casts. The hand position can be robustly recognized by independently tracking multiple shadows, using multiple light sources time-synchronously modulated with the camera. We also developed a multi-finger tracking system that uses similar modulated illumination from multiple light positions. We expect that these sensing configurations can be naturally integrated into our daily environments as LED lighting becomes more commonplace.

© All rights reserved Rekimoto and/or ACM Press

 

Rekimoto, Jun (2008): Organic interaction technologies: from stone to skin. In Communications of the ACM, 51 (6) pp. 38-44.

2007
 

Rekimoto, Jun, Miyaki, Takashi and Ishizawa, Takaaki (2007): LifeTag: WiFi-Based Continuous Location Logging for Life Pattern Analysis. In: Hightower, Jeffrey, Schiele, Bernt and Strang, Thomas (eds.) Location- and Context-Awareness - Third International Symposium - LoCA 2007 September 20-21, 2007, Oberpfaffenhofen, Germany. pp. 35-49.

2006
 

Patel, Shwetak N., Rekimoto, Jun and Abowd, Gregory D. (2006): iCam: Precise at-a-Distance Interaction in the Physical Environment. In: Fishkin, Kenneth P., Schiele, Bernt, Nixon, Paddy and Quigley, Aaron J. (eds.) PERVASIVE 2006 - Pervasive Computing 4th International Conference May 7-10, 2006, Dublin, Ireland. pp. 272-287.

2005
 

Ayatsuka, Yuji and Rekimoto, Jun (2005): tranSticks: physically manipulatable virtual connections. In: Proceedings of ACM CHI 2005 Conference on Human Factors in Computing Systems 2005. pp. 251-260.

A virtually connected medium called tranStick is described that functions both as a "virtual wire" and as a "memory card" containing a shared space. A user can connect two networked devices by simply placing one of a pair of tranSticks with the same identifier into each device. The tranSticks provide feedback indicating that the devices are connected; the connection can be closed or changed in the same way it would be if the devices were connected by a physical cable. A user can also access a shared space on a network as if the space were in the tranStick. Since tranSticks contain long secret keys, the process of finding another tranStick with the same identifier can be encrypted. The tranStick approach differs from other approaches in that it provides feedback on the connection as well as serving as a medium for establishing it, and it enables disconnection and switchover to be done intuitively because the operations are reversible.

© All rights reserved Ayatsuka and Rekimoto and/or ACM Press

 

Kohno, Michimune and Rekimoto, Jun (2005): Searching common experience: a social communication tool based on mobile ad-hoc networking. In: Proceedings of 7th conference on Human-computer interaction with mobile devices and services 2005. pp. 15-22.

As small digital cameras become more popular, opportunities to take photos are rapidly increasing. Photo sharing is a great way to maintain and revitalize relationships between families and friends, and is a major motivator for content sharing. While photo sharing has been well studied, little work exists on sharing multiple photo sets contained in spontaneously connected handheld devices. This paper provides an algorithm that extracts photos based on common memories collected in an ad hoc group: it automatically searches for and presents photos that could become the starting point of a conversation. We found that our mechanism has more uses than simply organizing photos in chronological order. This paper describes our prototype system realized using the above algorithm. We also implemented a synchronized-shutter mechanism that provides a new photo sharing experience. Through subjective tests, we found that our method promotes conversation even when the users did not know each other beforehand.

© All rights reserved Kohno and Rekimoto and/or ACM Press
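One plausible core of the "common experience" search the abstract describes is matching photos from different devices whose timestamps fall close together. This sketch is an assumption about the approach, not the paper's actual algorithm; the window size and data shapes are invented:

```python
def common_experience(photos_a, photos_b, window=300):
    """Pair photos (given as Unix timestamps) from two devices taken
    within `window` seconds of each other -- a stand-in for photos
    captured during a shared moment."""
    shared = []
    for ta in photos_a:
        for tb in photos_b:
            if abs(ta - tb) <= window:
                shared.append((ta, tb))
    return shared

# Two devices each took two photos; only one pair overlaps in time.
print(common_experience([1000, 5000], [1100, 9000]))  # [(1000, 1100)]
```

Such matched pairs could then be presented as conversation starters, as the abstract suggests.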

 

Kohno, Michimune and Rekimoto, Jun (2005): Searching common experience: a social communication tool based on mobile ad-hoc networking. In: Tscheligi, Manfred, Bernhaupt, Regina and Mihalic, Kristijan (eds.) Proceedings of the 7th Conference on Human-Computer Interaction with Mobile Devices and Services - Mobile HCI 2005 September 19-22, 2005, Salzburg, Austria. pp. 15-22.

 

Beigl, Michael, Intille, Stephen S., Rekimoto, Jun and Tokuda, Hideyuki (eds.) UbiComp 2005 Ubiquitous Computing - 7th International Conference September 11-14, 2005, Tokyo, Japan.

2004
 

Ayatsuka, Yuji, Kohno, Michimune and Rekimoto, Jun (2004): Real-World Oriented Access Control Method with a Displayed Password. In: Masoodian, Masood, Jones, Steve and Rogers, Bill (eds.) Computer Human Interaction 6th Asia Pacific Conference - APCHI 2004 June 29 - July 2, 2004, Rotorua, New Zealand. pp. 19-29.

 

Rekimoto, Jun (2004): SyncTap: synchronous user operation for spontaneous network connection. In Personal and Ubiquitous Computing, 8 (2) pp. 126-134.

 

Rekimoto, Jun, Miyaki, Takashi and Kohno, Michimune (2004): ProxNet: Secure Dynamic Wireless Connection by Proximity Sensing. In: Ferscha, Alois and Mattern, Friedemann (eds.) PERVASIVE 2004 - Pervasive Computing, Second International Conference April 21-23, 2004, Vienna, Austria. pp. 213-218.

2003
 

Rekimoto, Jun, Ishizawa, Takaaki, Schwesig, Carsten and Oba, Haruo (2003): PreSense: interaction techniques for finger sensing input devices. In: Proceedings of the 16th annual ACM Symposium on User Interface Software and Technology November, 2-5, 2003, Vancouver, Canada. pp. 203-212.

Although graphical user interfaces started as imitations of the physical world, many interaction techniques have since been invented that are not available in the real world. This paper focuses on one of these, "previewing", and on how a sensory-enhanced input device called the "PreSense Keypad" can provide a preview for users before they actually execute a command. Preview is important in the real world because it is often not possible to undo an action; this previewable feature helps users see what will occur next. It is also helpful when the command assignment of the keypad changes dynamically, such as for universal commanders. We present several interaction techniques based on this input device, including menu and map browsing systems and a text input system. We also discuss finger gesture recognition for the PreSense Keypad.

© All rights reserved Rekimoto et al. and/or ACM Press

 

Rekimoto, Jun, Ayatsuka, Yuji, Kohno, Michimune and Oba, Haruo (2003): Proximal Interactions: A Direct Manipulation technique for wireless networking. In: Proceedings of IFIP INTERACT03: Human-Computer Interaction 2003, Zurich, Switzerland. p. 511.

 

Matsushita, Nobuyuki, Hihara, Daisuke, Ushiro, Teruyuki, Yoshimura, Shinichi, Rekimoto, Jun and Yamamoto, Yoshikazu (2003): ID CAM: A Smart Camera for Scene Capturing and ID Recognition. In: 2003 IEEE and ACM International Symposium on Mixed and Augmented Reality ISMAR 2003 7-10 October, 2003, Tokyo, Japan. pp. 227-236.

 

Rekimoto, Jun, Ayatsuka, Yuji and Kohno, Michimune (2003): SyncTap: An Interaction Technique for Mobile Networking. In: Chittaro, Luca (ed.) Human-Computer Interaction with Mobile Devices and Services - 5th International Symposium - Mobile HCI 2003 September 8-11, 2003, Udine, Italy. pp. 104-115.

2002
 

Rekimoto, Jun (2002): SmartSkin: an infrastructure for freehand manipulation on interactive surfaces. In: Terveen, Loren (ed.) Proceedings of the ACM CHI 2002 Conference on Human Factors in Computing Systems Conference April 20-25, 2002, Minneapolis, Minnesota. pp. 113-120.

 

Poupyrev, Ivan, Maruyama, Shigeaki and Rekimoto, Jun (2002): Ambient touch: designing tactile interfaces for handheld devices. In: Beaudouin-Lafon, Michel (ed.) Proceedings of the 15th annual ACM symposium on User interface software and technology October 27-30, 2002, Paris, France. pp. 51-60.

This paper investigates the sense of touch as a channel for communicating with miniature handheld devices. We embedded a PDA with a TouchEngine -- a thin, miniature, low-power tactile actuator that we designed specifically for use in mobile interfaces (Figure 1). Unlike previous tactile actuators, the TouchEngine is a universal tactile display that can produce a wide variety of tactile feelings, from simple clicks to complex vibrotactile patterns. Using the TouchEngine, we began exploring the design space of interactive tactile feedback for handheld computers. Here, we investigated only a subset of this space: using touch as the ambient, background channel of interaction. We propose a general approach to designing such tactile interfaces and describe several implemented prototypes. Finally, our user studies demonstrated 22% faster task completion when we enhanced handheld tilting interfaces with tactile feedback.

© All rights reserved Poupyrev et al. and/or ACM Press

 

Kohno, Michimune and Rekimoto, Jun (2002): New Generation of IP-Phone Enabled Mobile Devices. In: Paterno, Fabio (ed.) Mobile Human-Computer Interaction - 4th International Symposium - Mobile HCI 2002 September 18-20, 2002, Pisa, Italy. pp. 319-323.

2001
 

Rekimoto, Jun, Ullmer, Brygg and Oba, Haruo (2001): DataTiles: A Modular Platform for Mixed Physical and Graphical Interactions. In: Beaudouin-Lafon, Michel and Jacob, Robert J. K. (eds.) Proceedings of the ACM CHI 2001 Human Factors in Computing Systems Conference March 31 - April 5, 2001, Seattle, Washington, USA. pp. 269-276.

The DataTiles system integrates the benefits of two major interaction paradigms: graphical and physical user interfaces. Tagged transparent tiles are used as modular construction units. These tiles are augmented by dynamic graphical information when they are placed on a sensor-enhanced flat panel display. They can be used independently or can be combined into more complex configurations, similar to the way language can express complex concepts through a sequence of simple words. In this paper, we discuss our design principles for mixing physical and graphical interface techniques, and describe the system architecture and example applications of the DataTiles system.

© All rights reserved Rekimoto et al. and/or ACM Press

 

Rekimoto, Jun (2001): Interacting with a Computer Augmented Environment. In: Proceedings of IFIP INTERACT01: Human-Computer Interaction 2001, Tokyo, Japan. pp. 14-16.

 

Kohtake, Naohiko, Rekimoto, Jun and Anzai, Yuichiro (2001): InfoPoint: A Device that Provides a Uniform User Interface to Allow Appliances to Work Together over a Network. In Personal and Ubiquitous Computing, 5 (4) pp. 264-274.

2000
 

Rekimoto, Jun and Sciammarella, Eduardo (2000): ToolStone: Effective Use of the Physical Manipulation Vocabularies of Input Devices. In: Ackerman, Mark S. and Edwards, Keith (eds.) Proceedings of the 13th annual ACM symposium on User interface software and technology November 06 - 08, 2000, San Diego, California, United States. pp. 109-117.

 

Matsushita, Nobuyuki, Ayatsuka, Yuji and Rekimoto, Jun (2000): Dual Touch: A Two-Handed Interface for Pen-Based PDAs. In: Ackerman, Mark S. and Edwards, Keith (eds.) Proceedings of the 13th annual ACM symposium on User interface software and technology November 06 - 08, 2000, San Diego, California, United States. pp. 211-212.

 

Rekimoto, Jun and Ayatsuka, Yuji (2000): CyberCode: designing augmented reality environments with visual tags. In: Designing Augmented Reality Environments 2000 2000. pp. 1-10.

1999
 

Rekimoto, Jun and Saitoh, Masanori (1999): Augmented Surfaces: A Spatially Continuous Work Space for Hybrid Computing Environments. In: Altom, Mark W. and Williams, Marian G. (eds.) Proceedings of the ACM CHI 99 Human Factors in Computing Systems Conference May 15-20, 1999, Pittsburgh, Pennsylvania. pp. 378-385.

This paper describes our design and implementation of a computer augmented environment that allows users to smoothly interchange digital information among their portable computers, table and wall displays, and other physical objects. Supported by a camera-based object recognition system, users can easily integrate their portable computers with the pre-installed ones in the environment. Users can use displays projected on tables and walls as a spatially continuous extension of their portable computers. Using an interaction technique called hyperdragging, users can transfer information from one computer to another, by only knowing the physical relationship between them. We also provide a mechanism for attaching digital data to physical objects, such as a videotape or a document folder, to link physical and digital spaces.

© All rights reserved Rekimoto and Saitoh and/or ACM Press

 

Rekimoto, Jun (1999): Time-Machine Computing: A Time-Centric Approach for the Information Environment. In: Zanden, Brad Vander and Marks, Joe (eds.) Proceedings of the 12th annual ACM symposium on User interface software and technology November 07 - 10, 1999, Asheville, North Carolina, United States. pp. 45-54.

This paper describes the concept of Time-Machine Computing (TMC), a time-centric approach to organizing information on computers. A system based on Time-Machine Computing allows a user to visit the past and the future states of computers. When a user needs to refer to a document that he/she was working on at some other time, he/she can travel in the time dimension and the system restores the computer state at that time. Since the user's activities on the system are automatically archived, the user's daily workspace is seamlessly integrated into the information archive. The combination of spatial information management of the desktop metaphor and time traveling allows a user to organize and archive information without being bothered by folder hierarchies or the file classification problems that are common in today's desktop environments. TMC also provides a mechanism for linking multiple applications and external information sources by exchanging time information. This paper describes the key features of TMC, a time-machine desktop environment called "TimeScape," and several time-oriented application integration examples.

© All rights reserved Rekimoto and/or ACM Press
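The time-travel mechanism the abstract describes (every workspace state is archived automatically, and the system restores the state at any requested time) can be sketched as an append-only, timestamp-indexed archive. This is an illustrative reduction of the idea, not the TimeScape implementation; it assumes states are recorded in increasing time order:

```python
import bisect

class TimeMachineDesktop:
    """Minimal sketch of time-centric archiving: every change is appended
    with its timestamp, and 'traveling' to a time restores the latest
    archived state at or before that moment."""

    def __init__(self):
        self._times = []   # kept sorted: record() is called chronologically
        self._states = []

    def record(self, timestamp, state):
        self._times.append(timestamp)
        self._states.append(dict(state))  # snapshot, not a live reference

    def travel_to(self, timestamp):
        i = bisect.bisect_right(self._times, timestamp)
        return self._states[i - 1] if i else None  # None: before any record

tm = TimeMachineDesktop()
tm.record(100, {"open": ["draft.txt"]})
tm.record(200, {"open": ["draft.txt", "figure.png"]})
print(tm.travel_to(150))  # {'open': ['draft.txt']}
```

Because the archive is keyed by time rather than by folder, this kind of lookup is what lets TMC sidestep the file-classification problem the abstract mentions.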

 

Kohtake, Naohiko, Rekimoto, Jun and Anzai, Yuichiro (1999): InfoStick: An Interaction Device for Inter-Appliance Computing. In: Gellersen, Hans-Werner (ed.) Handheld and Ubiquitous Computing - First International Symposium - HUC99 September 27-29, 1999, Karlsruhe, Germany. pp. 246-258.

1998
 

Rekimoto, Jun (1998): A Multiple Device Approach for Supporting Whiteboard-Based Interactions. In: Karat, Clare-Marie, Lund, Arnold, Coutaz, Joëlle and Karat, John (eds.) Proceedings of the ACM CHI 98 Human Factors in Computing Systems Conference April 18-23, 1998, Los Angeles, California. pp. 344-351.

In this paper, we propose a multiple-device approach for supporting informal meetings using a digital whiteboard. Traditional digital whiteboard systems often suffer from limited capabilities for entering text and handling existing data. The large display surface of the whiteboard also makes traditional GUI design ineffective. Our proposed approach provides each participant with a hand-held computer that serves as a tool palette and data entry palette for the whiteboard. Just as an oil painter effectively uses a palette in his/her hand, this hand-held device offers an easy way to create a new text/stroke object, to select existing data from a network, to select pen attributes, and to control the whiteboard application. This paper also reports our experience with digital whiteboard systems built on the proposed multi-device architecture.

© All rights reserved Rekimoto and/or ACM Press

 

Ayatsuka, Yuji, Rekimoto, Jun and Matsuoka, Satoshi (1998): Popup Vernier: A Tool for Sub-Pixel-Pitch Dragging with Smooth Mode Transition. In: Mynatt, Elizabeth D. and Jacob, Robert J. K. (eds.) Proceedings of the 11th annual ACM symposium on User interface software and technology November 01 - 04, 1998, San Francisco, California, United States. pp. 39-48.

Dragging is one of the most useful and popular techniques in direct manipulation graphical user interfaces. However, dragging has inherent restrictions caused by the pixel resolution of a display. Although in some situations the restriction may be negligible, certain kinds of applications, e.g., real-world applications where the range of adjustable parameters vastly exceeds the screen resolution, require sub-pixel-pitch dragging. We propose a sub-pixel-pitch dragging tool, the popup vernier, plus a methodology for transitioning smoothly into 'vernier mode' during dragging. A popup vernier consists of locally zoomed grids and vernier scales displayed around them. Verniers provide intuitive manipulation and feedback for fine-grained dragging, in that pixel-pitch movements of the grids represent sub-pixel-pitch movements of a dragged object, and the vernier scales show the object's position at sub-pixel accuracy. The effectiveness of our technique is verified with a proposed evaluation measure that captures the smoothness of the transition from standard mode to vernier mode, based on Fitts' law.

© All rights reserved Ayatsuka et al. and/or ACM Press
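The key arithmetic the abstract describes is that, in vernier mode, pixel-pitch motion of the zoomed grid stands for sub-pixel-pitch motion of the dragged object. A minimal sketch, with the zoom factor as an illustrative assumption:

```python
def vernier_drag(pixel_delta, zoom=10):
    """In 'vernier mode', an on-screen drag of `pixel_delta` pixels over
    a grid zoomed by `zoom` moves the underlying object by only
    pixel_delta / zoom pixels, enabling sub-pixel-pitch adjustment."""
    return pixel_delta / zoom

print(vernier_drag(3))       # 0.3  -> a 3-pixel drag adjusts by 0.3 px
print(vernier_drag(7, 20))   # 0.35 -> finer control at higher zoom
```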

 

Ayatsuka, Yuji, Matsuoka, Satoshi and Rekimoto, Jun (1998): Layered Penumbrae: An Effective 3D Feedback Technique. In: Third Asian Pacific Computer and Human Interaction July 15-17, 1998, Kanagawa, Japan. pp. 202-209.

 

Rekimoto, Jun (1998): Matrix: A Realtime Object Identification and Registration Method for Augmented Reality. In: Third Asian Pacific Computer and Human Interaction July 15-17, 1998, Kanagawa, Japan. pp. 63-69.

 

Rekimoto, Jun (1998): Multiple-Computer User Interfaces: A Cooperative Environment Consisting of Multiple Digital Devices. In: Streitz, Norbert A., Konomi, Shin'ichi and Burkhardt, Heinz Jürgen (eds.) Cooperative Buildings, Integrating Information, Organization, and Architecture, First International Workshop, CoBuild98, Darmstadt, Germany, February 1998, Proceedings 1998. pp. 33-40.

 Cited in the following chapter:

3D User Interfaces: [/encyclopedia/3d_user_interfaces.html]


 
1997
 

Rekimoto, Jun (1997): Pick-and-Drop: A Direct Manipulation Technique for Multiple Computer Environments. In: Robertson, George G. and Schmandt, Chris (eds.) Proceedings of the 10th annual ACM symposium on User interface software and technology October 14 - 17, 1997, Banff, Alberta, Canada. pp. 31-39.

This paper proposes a new field of user interfaces called multi-computer direct manipulation and presents a pen-based direct manipulation technique that can be used for data transfer between different computers as well as within the same computer. The proposed Pick-and-Drop allows a user to pick up an object on a display and drop it on another display as if he/she were manipulating a physical object. Even though the pen itself does not have storage capabilities, a combination of Pen-ID and the pen manager on the network provides the illusion that the pen can physically pick up and move a computer object. Based on this concept, we have built several experimental applications using palm-sized, desk-top, and wall-sized pen computers. We also considered the importance of physical artifacts in designing user interfaces in a future computing environment.

© All rights reserved Rekimoto and/or ACM Press

 Cited in the following chapter:

User Interface Design Adaptation: [/encyclopedia/user_interface_design_adaptation.html]


 
 

Matsushita, Nobuyuki and Rekimoto, Jun (1997): HoloWall: Designing a Finger, Hand, Body, and Object Sensitive Wall. In: Robertson, George G. and Schmandt, Chris (eds.) Proceedings of the 10th annual ACM symposium on User interface software and technology October 14 - 17, 1997, Banff, Alberta, Canada. pp. 209-210.

This TechNote reports our initial results in realizing a computer-augmented wall called the HoloWall. Using an infrared camera located behind the wall, the system allows a user to interact with the computerized wall using fingers, hands, the body, or even a physical object such as a document folder.

© All rights reserved Matsushita and Rekimoto and/or ACM Press

 

Rekimoto, Jun (1997): A Magnifying Glass Approach to Augmented Reality Systems. In Presence: Teleoperators and Virtual Environments, 6 (4) pp. 399-412.

1996
 

Ayatsuka, Yuji, Matsuoka, Satoshi and Rekimoto, Jun (1996): Penumbrae for 3D Interactions. In: Kurlander, David, Brown, Marc and Rao, Ramana (eds.) Proceedings of the 9th annual ACM symposium on User interface software and technology November 06 - 08, 1996, Seattle, Washington, United States. pp. 165-166.

We propose a new feedback technique for 3D interaction using the penumbrae that objects cast. Rather than generating a real penumbra, which is computationally expensive, a fast, simplified algorithm is employed, which is also better suited for position-feedback purposes. User studies show that 1) compared to orthographic shadow projections, 3D spatial recognition and placement tasks are substantially faster with our penumbrae, and 2) users feel the feedback to be more natural.

© All rights reserved Ayatsuka et al. and/or ACM Press

 

Rekimoto, Jun (1996): Tilting Operations for Small Screen Interfaces. In: Kurlander, David, Brown, Marc and Rao, Ramana (eds.) Proceedings of the 9th annual ACM symposium on User interface software and technology November 06 - 08, 1996, Seattle, Washington, United States. pp. 167-168.

This TechNote introduces new interaction techniques for small-screen devices such as palmtop computers or handheld electronic devices, including pagers and cellular phones. Our proposed method uses the tilt of the device itself as input. Using both tilt and buttons, it is possible to build several interaction techniques, ranging from menus and scroll bars to more complicated examples such as a map browsing system and a 3D object viewer. During operation, only one hand is required both to hold and to control the device. This feature is especially useful for field workers.

© All rights reserved Rekimoto and/or ACM Press
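The tilt-as-input idea the abstract describes reduces, for scrolling, to mapping a tilt angle to a scroll velocity. This sketch is an assumption about one reasonable mapping, not the paper's implementation; the dead zone, gain, and clamp values are invented:

```python
def tilt_to_scroll(tilt_deg, dead_zone=5.0, gain=2.0, max_speed=40.0):
    """Map a tilt angle (degrees) to a signed scroll speed (pixels/frame).
    Tilts inside the dead zone are ignored so the view stays still while
    the device is merely held; beyond it, speed grows linearly with tilt
    and is clamped at max_speed."""
    magnitude = abs(tilt_deg)
    if magnitude <= dead_zone:
        return 0.0
    speed = min((magnitude - dead_zone) * gain, max_speed)
    return speed if tilt_deg > 0 else -speed

print(tilt_to_scroll(3.0))    # 0.0: inside the dead zone
print(tilt_to_scroll(15.0))   # 20.0: (15 - 5) * 2
print(tilt_to_scroll(-30.0))  # -40.0: clamped, opposite direction
```

The dead zone is what makes one-handed use practical: resting tilt while walking does not scroll the view.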

1995
 

Rekimoto, Jun and Nagao, Katashi (1995): The World through the Computer: Computer Augmented Interaction with Real World Environments. In: Robertson, George G. (ed.) Proceedings of the 8th annual ACM symposium on User interface and software technology November 15 - 17, 1995, Pittsburgh, Pennsylvania, United States. pp. 29-36.

Current user interface techniques such as WIMP or the desktop metaphor do not support real-world tasks, because the focus of these user interfaces is only on human-computer interaction, not on human-real world interaction. In this paper, we propose a method of building computer augmented environments using a situation-aware portable device. This device, called NaviCam, has the ability to recognize the user's situation by detecting color-code IDs in real-world environments. It displays situation-sensitive information by superimposing messages on its video see-through screen. The combination of ID awareness and a portable video see-through display solves several problems with current ubiquitous computing systems and augmented reality systems.

© All rights reserved Rekimoto and Nagao and/or ACM Press

 

Rekimoto, Jun (1995): Augmented Interaction: Interacting with the Real World through a Computer. In: Proceedings of the Sixth International Conference on Human-Computer Interaction July 9-14, 1995, Tokyo, Japan. pp. 255-260.

This paper discusses why the traditional GUI is not adequate for highly portable computers, and proposes a new HCI style called Augmented Interaction, which concentrates on the user's real-world activities. Situation awareness and implicit interaction are the two key ideas of this concept. We also report on a prototype system called NaviCam, which is based on the idea of Augmented Interaction.

© All rights reserved Rekimoto and/or Elsevier Science

1993
 

Hamakawa, Rei and Rekimoto, Jun (1993): Object Composition and Playback Models for Handling Multimedia Data. In: ACM Multimedia 1993 1993. pp. 273-281.

 


Page Information

Page maintainer: The Editorial Team
URL: http://www.interaction-design.org/references/authors/jun_rekimoto.html
