Publication statistics

Pub. period: 2003-2012
Pub. count: 17
Number of co-authors: 34



Co-authors

Number of publications with his 3 most frequent co-authors:

Steven K. Feiner: 5
Hrvoje Benko: 3
David McGee: 2

 

 

Productive colleagues

Alex Olwal's 3 most productive colleagues, by number of publications:

Hiroshi Ishii: 111
Steven K. Feiner: 76
Hrvoje Benko: 33
 
 
 


Alex Olwal


Publications by Alex Olwal (bibliography)

2012
 

Schonauer, Christian, Fukushi, Kenichiro, Olwal, Alex, Kaufmann, Hannes and Raskar, Ramesh (2012): Multimodal motion guidance: techniques for adaptive and dynamic feedback. In: Proceedings of the 2012 International Conference on Multimodal Interfaces 2012. pp. 133-140.

The ability to guide human motion through automatically generated feedback has significant potential for applications in areas such as motor learning, human-computer interaction, telepresence, and augmented reality. This paper focuses on the design and development of such systems from a human cognition and perception perspective. We analyze the dimensions of the design space for motion guidance systems, spanned by technologies and human information processing, and identify opportunities for new feedback techniques. We present a novel motion guidance system that was implemented based on these insights to enable feedback for position, direction and continuous velocities. It uses motion capture to track a user in space and guides using visual, vibrotactile and pneumatic actuation. Our system also introduces motion retargeting through time warping, motion dynamics and prediction, to allow more flexibility and adaptability to user performance.

© All rights reserved Schonauer et al. and/or ACM Press
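As a concrete illustration of the feedback mapping this abstract describes, the sketch below converts a tracked position error into a vibrotactile cue: the error direction selects an actuator and the error magnitude sets its intensity. It is a minimal reconstruction under assumed names (ACTUATORS, guidance_feedback, the four-tactor layout, the 0.3 m saturation distance), not the authors' implementation.

```python
import math

# Hypothetical actuator layout: four tactors around the wrist (+x, -x, +y, -y).
ACTUATORS = {(1, 0): "right", (-1, 0): "left", (0, 1): "up", (0, -1): "down"}

def guidance_feedback(tracked, target, max_error=0.3):
    """Return (actuator name, intensity in [0, 1]) steering the user toward target."""
    dx, dy = target[0] - tracked[0], target[1] - tracked[1]
    magnitude = math.hypot(dx, dy)
    if magnitude < 1e-6:
        return None, 0.0  # on target: no feedback
    # Fire the actuator whose axis best matches the error direction.
    axis = max(ACTUATORS, key=lambda a: a[0] * dx + a[1] * dy)
    intensity = min(magnitude / max_error, 1.0)  # saturate at max_error metres
    return ACTUATORS[axis], intensity

print(guidance_feedback(tracked=(0.0, 0.0), target=(0.1, 0.02)))  # ('right', ~0.34)
```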

 

Follmer, Sean, Leithinger, Daniel, Olwal, Alex, Cheng, Nadia and Ishii, Hiroshi (2012): Jamming user interfaces: programmable particle stiffness and sensing for malleable and shape-changing devices. In: Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 519-528.

Malleable and organic user interfaces have the potential to enable radically new forms of interactions and expressiveness through flexible, free-form and computationally controlled shapes and displays. This work specifically focuses on particle jamming as a simple, effective method for flexible, shape-changing user interfaces where programmatic control of material stiffness enables haptic feedback, deformation, tunable affordances and control gain. We introduce a compact, low-power pneumatic jamming system suitable for mobile devices, and a new hydraulic-based technique with fast, silent actuation and optical shape sensing. We enable jamming structures to sense input and function as interaction devices through two contributed methods for high-resolution shape sensing using: 1) index-matched particles and fluids, and 2) capacitive and electric field sensing. We explore the design space of malleable and organic user interfaces enabled by jamming through four motivational prototypes that highlight jamming's potential in HCI, including applications for tabletops, tablets and portable shape-changing mobile devices.

© All rights reserved Follmer et al. and/or ACM Press

2011
 

Olwal, Alex, Lachanas, Dimitris and Zacharouli, Ermioni (2011): OldGen: mobile phone personalization for older adults. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 3393-3396.

Mobile devices are currently difficult to customize for the usability needs of elderly users. The elderly are instead referred to specially designed "senior phones" or software add-ons. These tend to compromise on functionality as they attempt to address many disabilities in a single solution. We present OldGen, a prototype framework whose novel concept enables accessibility features on generic mobile devices by decoupling the software user interface from the phone's physical form factor. This allows better customization of the user interface, its functionality and behavior, and makes it possible to adapt it to the specific needs of each individual. OldGen makes the user interface portable, so that it can be moved between different phone hardware, regardless of model and brand. Preliminary observations and evaluations with elderly users indicate that this concept could address individual user interface related accessibility issues on general-purpose devices.

© All rights reserved Olwal et al. and/or their publisher
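The core idea, decoupling the interface description from the handset, can be sketched in a few lines. The names (UIProfile, render) and fields below are hypothetical, chosen only to illustrate how one portable profile could drive different phone hardware; OldGen's actual framework is not shown here.

```python
from dataclasses import dataclass

@dataclass
class UIProfile:
    """Portable per-user interface description, independent of the handset."""
    font_scale: float   # e.g. enlarged text for low vision
    actions: list       # reduced, reordered menu for this particular user

def render(profile, screen_width_px):
    """Device-specific shell: draws the same profile on any hardware."""
    size = int(16 * profile.font_scale)  # scale a 16 px base font
    print(f"{screen_width_px}px screen, {size}px text")
    for item in profile.actions:
        print(f"  [{item}]")

# The same profile "moves" between two different phones:
profile = UIProfile(font_scale=2.0, actions=["Call", "Messages", "Alarm"])
render(profile, screen_width_px=240)  # older feature phone
render(profile, screen_width_px=480)  # newer handset
```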

 

Ericsson, Finn and Olwal, Alex (2011): Interaction and rendering techniques for handheld phantograms. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 1339-1344.

We present a number of rendering and interaction techniques that exploit the user's viewpoint for improved realism and immersion in 3D applications on handheld devices. Unlike 3D graphics on stationary screens, graphics on handheld devices are seldom viewed from a fixed perspective. This is particularly true for recent mobile platforms, where it is increasingly popular to use device orientation for interaction. We describe a set of techniques for improved perception of rendered 3D content. Viewpoint-correct anamorphosis and stereoscopy are discussed along with ways to approximate the spatial relationship between the user and the device.

© All rights reserved Ericsson and Olwal and/or their publisher
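Viewpoint-correct anamorphosis of this kind is typically implemented with an off-axis (generalized) perspective frustum computed from the estimated eye position relative to the screen. The sketch below shows that standard construction; the function name, units and example dimensions are assumptions, not the paper's code.

```python
def off_axis_frustum(eye, screen_w, screen_h, near=0.01):
    """eye = (x, y, z) in metres, origin at the screen centre, z > 0 toward the
    viewer. Returns (left, right, bottom, top) at the near plane, suitable for
    an OpenGL-style glFrustum call."""
    ex, ey, ez = eye
    scale = near / ez  # project the screen edges onto the near plane
    left   = (-screen_w / 2 - ex) * scale
    right  = ( screen_w / 2 - ex) * scale
    bottom = (-screen_h / 2 - ey) * scale
    top    = ( screen_h / 2 - ey) * scale
    return left, right, bottom, top

# Eye 30 cm in front of a 9 x 5 cm phone screen, offset 5 cm to the right:
print(off_axis_frustum(eye=(0.05, 0.0, 0.30), screen_w=0.09, screen_h=0.05))
```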

 

Tollmar, Konrad, Bentley, Frank, Moore, John and Olwal, Alex (2011): Mobile wellness: collecting, visualizing and interacting with personal health data. In: Proceedings of 13th Conference on Human-computer interaction with mobile devices and services 2011. pp. 761-763.

Mobile devices are now able to connect to a variety of sensors and provide personalized information to help people reflect on and improve their health. For example, pedometers, heart-rate sensors, glucometers, and other sensors can all provide real-time data to a variety of devices. Collecting and interacting with personal health or well-being data is a growing research area. This workshop will focus on the ways in which our mobile devices can aggregate and visualize these types of data and how these data streams can be presented to encourage interaction, increased awareness and positive behavior change.

© All rights reserved Tollmar et al. and/or ACM Press

 

Zizka, Jan, Olwal, Alex and Raskar, Ramesh (2011): SpeckleSense: fast, precise, low-cost and compact motion sensing using laser speckle. In: Proceedings of the 2011 ACM Symposium on User Interface Software and Technology 2011. pp. 489-498.

Motion sensing is of fundamental importance for user interfaces and input devices. In applications where optical sensing is preferred, traditional camera-based approaches can be prohibitive due to limited resolution, low frame rates and the required computational power for image processing. We introduce a novel set of motion-sensing configurations based on laser speckle sensing that are particularly suitable for human-computer interaction. The underlying principles allow these configurations to be fast, precise, extremely compact and low cost. We provide an overview and design guidelines for laser speckle sensing for user interaction and introduce four general speckle projector/sensor configurations. We describe a set of prototypes and applications that demonstrate the versatility of our laser speckle sensing techniques.

© All rights reserved Zizka et al. and/or ACM Press
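The core estimation step in speckle-based motion sensing is a cross-correlation of successive sensor frames: the correlation peak gives the speckle shift, which is proportional to the motion. The sketch below illustrates this for an assumed 1-D sensor; it is a reconstruction of the principle, not the paper's implementation.

```python
import numpy as np

def speckle_shift(prev, curr, max_shift=20):
    """Return the integer pixel shift of frame curr relative to frame prev."""
    prev = (prev - prev.mean()) / (prev.std() + 1e-9)  # normalize intensity
    curr = (curr - curr.mean()) / (curr.std() + 1e-9)
    n = len(prev)
    # Correlate over candidate shifts; the best-matching shift wins.
    scores = [
        np.dot(prev[max(0, -s):n - max(0, s)], curr[max(0, s):n + min(0, s)])
        for s in range(-max_shift, max_shift + 1)
    ]
    return int(np.argmax(scores)) - max_shift

# Synthetic speckle pattern shifted by 7 pixels between frames:
rng = np.random.default_rng(0)
frame0 = rng.random(256)
frame1 = np.roll(frame0, 7)
print(speckle_shift(frame0, frame1))  # 7
```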

2009
 

Olwal, Alex and Feiner, Steven K. (2009): Spatially aware handhelds for high-precision tangible interaction with large displays. In: Villar, Nicolas, Izadi, Shahram, Fraser, Mike and Benford, Steve (eds.) TEI 2009 - Proceedings of the 3rd International Conference on Tangible and Embedded Interaction February 16-18, 2009, Cambridge, UK. pp. 181-188.

2008
 

Olwal, Alex, Feiner, Steven K. and Heyman, Susanna (2008): Rubbing and tapping for precise and rapid selection on touch-screen displays. In: Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems April 5-10, 2008. pp. 295-304.

We introduce two families of techniques, rubbing and tapping, that use zooming to make possible precise interaction on passive touch screens, and describe examples of each. Rub-Pointing uses a diagonal rubbing gesture to integrate pointing and zooming in a single-handed technique. In contrast, Zoom-Tapping is a two-handed technique in which the dominant hand points, while the non-dominant hand taps to zoom, simulating multi-touch functionality on a single-touch display. Rub-Tapping is a hybrid technique that integrates rubbing with the dominant hand to point and zoom, and tapping with the non-dominant hand to confirm selection. We describe the results of a formal user study comparing these techniques with each other and with the well-known Take-Off and Zoom-Pointing selection techniques. Rub-Pointing and Zoom-Tapping had significantly fewer errors than Take-Off for small targets, and were significantly faster than Take-Off and Zoom-Pointing. We show how the techniques can be used for fluid interaction in an image viewer and in Google Maps.

© All rights reserved Olwal et al. and/or ACM Press
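To make the rubbing idea concrete, the sketch below counts direction reversals along the screen diagonal in a stream of finger samples; each completed reversal would trigger one zoom step around the current finger position. This is a hypothetical recognizer (the names and the min_stroke threshold are assumptions), not the paper's implementation.

```python
def count_rub_reversals(points, min_stroke=10):
    """points: finger samples [(x, y), ...]. Returns the number of direction
    flips along the x+y diagonal, each of which would trigger one zoom step."""
    reversals, direction, travel = 0, 0, 0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        d = (x1 - x0) + (y1 - y0)  # motion component along the main diagonal
        step = 1 if d > 0 else -1 if d < 0 else 0
        if step and step == direction:
            travel += abs(d)  # stroke continues in the same direction
        elif step:  # direction changed
            if direction and travel >= min_stroke:
                reversals += 1  # a long-enough stroke just ended in a flip
            direction, travel = step, abs(d)
    return reversals

strokes = [(0, 0), (20, 20), (5, 5), (25, 25)]  # out-back-out diagonal rub
print(count_rub_reversals(strokes))  # 2 reversals -> zoom in twice
```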

 

Olwal, Alex and Wilson, Andrew D. (2008): SurfaceFusion: Unobtrusive Tracking of Everyday Objects in Tangible User Interfaces. In: Proceedings of the 2008 Conference on Graphics Interface May 28-30, 2008, Windsor, Ontario, Canada. pp. 235-242.

Interactive surfaces and related tangible user interfaces often involve everyday objects that are identified, tracked, and augmented with digital information. Traditional approaches for recognizing these objects typically rely on complex pattern recognition techniques, or the addition of active electronics or fiducials that alter the visual qualities of those objects, making them less practical for real-world use. Radio Frequency Identification (RFID) technology provides an unobtrusive method of sensing the presence of, and identifying, nearby tagged objects, but has no inherent means of determining their position. Computer vision, on the other hand, is an established approach to tracking objects with a camera. While shapes and movement on an interactive surface can be determined with classic image processing techniques, object recognition tends to be complex, computationally expensive and sensitive to environmental conditions. We present a set of techniques in which movement and shape information from the computer vision system is fused with RFID events that identify what objects are in the image. By synchronizing these two complementary sensing modalities, we can associate changes in the image with events in the RFID data, in order to recover the position, shape and identity of the objects on the surface, while avoiding complex computer vision processes and exotic RFID solutions.

© All rights reserved Olwal and Wilson and/or their publisher
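The fusion step can be sketched as event matching: an RFID "tag appeared" event carries identity but no position, while the vision system reports a new blob with position but no identity, so pairing events that are close in time yields identified, located objects. The function below is an illustrative reconstruction (names and the 0.5 s window are assumptions), not the SurfaceFusion source.

```python
def fuse(rfid_events, blob_events, window=0.5):
    """rfid_events: [(t, tag_id)]; blob_events: [(t, (x, y))], both time-sorted.
    Returns {tag_id: (x, y)} for event pairs within `window` seconds."""
    fused, used = {}, set()
    for t_tag, tag_id in rfid_events:
        best = min(
            (i for i, (t_blob, _) in enumerate(blob_events)
             if i not in used and abs(t_blob - t_tag) <= window),
            key=lambda i: abs(blob_events[i][0] - t_tag),
            default=None,
        )
        if best is not None:  # closest unclaimed blob in the time window
            used.add(best)
            fused[tag_id] = blob_events[best][1]
    return fused

print(fuse([(1.02, "mug"), (3.40, "book")],
           [(1.10, (120, 80)), (3.35, (400, 210))]))
# {'mug': (120, 80), 'book': (400, 210)}
```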

 

Olwal, Alex (2008): Unencumbered 3D interaction with see-through displays. In: Proceedings of the Fifth Nordic Conference on Human-Computer Interaction 2008. pp. 527-530.

Augmented Reality (AR) systems that employ user-worn display and sensor technology can be problematic for certain applications, as the technology might, for instance, encumber the user or limit the deployment options of the system. Spatial AR systems instead use stationary displays that provide augmentation to an onlooking user. They can avoid issues with damage, breakage and wear, while enabling ubiquitous installations in unmanned environments, through protected display and sensing technology. Our contribution is an exploration of compatible interfaces for public AR environments. We investigate interaction technologies, such as touch, gesture and head tracking, which are specifically appropriate for spatial optical see-through displays. A prototype system for a digital museum display was implemented and evaluated. We present the feedback from domain experts, and the results from a qualitative user study of seven interfaces for public spatial optical see-through displays.

© All rights reserved Olwal and/or his/her publisher

2006
 

Olwal, Alex (2006): LightSense: enabling spatially aware handheld interaction devices. In: Fifth IEEE and ACM International Symposium on Mixed and Augmented Reality - ISMAR 2006 October 22-25, 2006, Santa Barbara, CA, USA. pp. 119-122.

2005
 

Sandor, Christian, Olwal, Alex, Bell, Blaine and Feiner, Steven K. (2005): Immersive Mixed-Reality Configuration of Hybrid User Interfaces. In: Fourth IEEE and ACM International Symposium on Mixed and Augmented Reality ISMAR 2005 5-8 October, 2005, Vienna, Austria. pp. 110-113.

 

Olwal, Alex, Lindfors, Christoffer, Gustafsson, Jonny, Kjellberg, Torsten and Mattsson, Lars (2005): ASTOR: An Autostereoscopic Optical See-through Augmented Reality System. In: Fourth IEEE and ACM International Symposium on Mixed and Augmented Reality ISMAR 2005 5-8 October, 2005, Vienna, Austria. pp. 24-27.

 

Olwal, Alex and Hollerer, Tobias (2005): POLAR: portable, optical see-through, low-cost augmented reality. In: Singh, Gurminder, Lau, Rynson W. H., Chrysanthou, Yiorgos and Darken, Rudolph P. (eds.) VRST 2005 - Proceedings of the ACM Symposium on Virtual Reality Software and Technology November 7-9, 2005, Monterey, CA, USA. pp. 227-230.

2003
 

Olwal, Alex, Benko, Hrvoje and Feiner, Steven K. (2003): SenseShapes: Using Statistical Geometry for Object Selection in a Multimodal Augmented Reality System. In: 2003 IEEE and ACM International Symposium on Mixed and Augmented Reality ISMAR 2003 7-10 October, 2003, Tokyo, Japan. pp. 300-301.

 

Kaiser, Edward C., Olwal, Alex, McGee, David, Benko, Hrvoje, Corradini, Andrea, Li, Xiaoguang, Cohen, Philip R. and Feiner, Steven K. (2003): Mutual disambiguation of 3D multimodal interaction in augmented and virtual reality. In: Oviatt, Sharon L., Darrell, Trevor, Maybury, Mark T. and Wahlster, Wolfgang (eds.) Proceedings of the 5th International Conference on Multimodal Interfaces - ICMI 2003 November 5-7, 2003, Vancouver, British Columbia, Canada. pp. 12-19.

 

We describe an approach to 3D multimodal interaction in immersive augmented and virtual reality environments that accounts for the uncertain nature of the information sources. The resulting multimodal system fuses symbolic and statistical information from a set of 3D gesture, spoken language, and referential agents. The referential agents employ visible or invisible volumes that can be attached to 3D trackers in the environment, and which use a time-stamped history of the objects that intersect them to derive statistics for ranking potential referents. We discuss the means by which the system supports mutual disambiguation of these modalities and information sources, and show through a user study how mutual disambiguation accounts for over 45% of the successful 3D multimodal interpretations. An accompanying video demonstrates the system in action.

© All rights reserved Kaiser et al. and/or their publisher
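The ranking statistic the abstract mentions can be illustrated simply: candidate objects are scored by how long they stayed inside a referential volume during the spoken utterance. The sketch below is a reconstruction of that idea under assumed names, not the system's code.

```python
def rank_referents(history, t_start, t_end):
    """history: time-sorted [(timestamp, set of object ids inside the volume)].
    Returns object ids ranked by dwell fraction within [t_start, t_end]."""
    frames = [objs for t, objs in history if t_start <= t <= t_end]
    if not frames:
        return []
    counts = {}
    for objs in frames:
        for obj in objs:
            counts[obj] = counts.get(obj, 0) + 1
    # Higher dwell fraction -> more likely referent of the utterance.
    return sorted(counts, key=lambda o: counts[o] / len(frames), reverse=True)

hist = [(0.0, {"lamp"}), (0.1, {"lamp", "chair"}), (0.2, {"chair"}), (0.3, {"chair"})]
print(rank_referents(hist, 0.0, 0.3))  # ['chair', 'lamp'] -- chair dwelt longer
```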

 



Page Information

Page maintainer: The Editorial Team
URL: http://www.interaction-design.org/references/authors/alex_olwal.html
