Publication statistics

Pub. period: 2003-2012
Pub. count: 46
Number of co-authors: 61



Co-authors

Number of publications with 3 favourite co-authors:

Sven Kratz: 13
Antonio Krüger: 11
Johannes Schöning: 11


Productive colleagues

Michael Rohs's 3 most productive colleagues in number of publications:

Saul Greenberg: 140
Albrecht Schmidt: 111
Matt Jones: 63




Michael Rohs

Personal Homepage:
medien.ifi.lmu.de/team/michael.rohs/


 

Publications by Michael Rohs (bibliography)

2012
 

Wiethoff, Alexander, Schneider, Hanna, Rohs, Michael, Butz, Andreas and Greenberg, Saul (2012): Sketch-a-TUI: low cost prototyping of tangible interactions using cardboard and conductive ink. In: Proceedings of the 6th International Conference on Tangible and Embedded Interaction 2012. pp. 309-312. Available online

Graspable tangibles are now being explored on the current generation of capacitive touch surfaces, such as the iPad and the Android tablet. Because the size and form factor are relatively new, early and low fidelity prototyping of these TUIs is crucial in getting the right design. The problem is that it is difficult for the average interaction designer to develop such physical prototypes. They require a substantial amount of time and effort to physically model the tangibles, and expertise in electronics to instrument them. Thus prototyping is sometimes handed off to specialists, or is limited to only a few design iterations and alternative designs. Our solution contributes a low fidelity prototyping approach that is time and cost effective, and that requires no electronics knowledge. First, we supply non-specialists with cardboard forms to create tangibles. Second, we have them draw lines on them via conductive ink, which makes their objects recognizable by the capacitive touch screen. They can then apply routine programming to recognize these tangibles and thus iterate over various designs.

© All rights reserved Wiethoff et al. and/or ACM Press

 

Kratz, Sven, Rohs, Michael, Guse, Dennis, Müller, Jörg, Bailly, Gilles and Nischt, Michael (2012): PalmSpace: continuous around-device gestures vs. multitouch for 3D rotation tasks on mobile devices. In: Proceedings of the 2012 International Conference on Advanced Visual Interfaces 2012. pp. 181-188. Available online

Rotating 3D objects is a difficult task on mobile devices, because the task requires 3 degrees of freedom and (multi-)touch input only allows for an indirect mapping. We propose a novel style of mobile interaction based on mid-air gestures in proximity of the device to increase the number of DOFs and alleviate the limitations of touch interaction with mobile devices. While one hand holds the device, the other hand performs mid-air gestures in proximity of the device to control 3D objects on the mobile device's screen. A flat hand pose defines a virtual surface which we refer to as the PalmSpace for precise and intuitive 3D rotations. We constructed several hardware prototypes to test our interface and to simulate possible future mobile devices equipped with depth cameras. We conducted a user study to compare 3D rotation tasks using the most promising two designs for the hand location during interaction -- behind and beside the device -- with the virtual trackball, which is the current state-of-art technique for orientation manipulation on touch-screens. Our results show that both variants of PalmSpace have significantly lower task completion times in comparison to the virtual trackball.

© All rights reserved Kratz et al. and/or ACM Press

2011
 

Shirazi, Alireza Sahami, Rohs, Michael, Schleicher, Robert, Kratz, Sven, Müller, Alexander and Schmidt, Albrecht (2011): Real-time nonverbal opinion sharing through mobile phones during sports events. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 307-310. Available online

Even with the rise of the World Wide Web, TV has remained the most pervasive entertainment medium and is nowadays often used together with other media, which allow for active participation. The idea of connecting non-collocated TV viewers via telecommunication technologies, referred to as Social TV, has recently received considerable attention. Such systems typically include set-top boxes for supporting collaboration. In this research we investigate if real-time opinion sharing about TV shows through a nonverbal (non-textual) iconic UI on mobile phones is reasonable. For this purpose we developed a mobile app, made it available to a large number of users through the Android Market, and conducted an uncontrolled user study in the wild during the soccer world cup 2010. The results of the study indicate that TV viewers who used the app had more fun and felt more connected to other viewers. We also show that by monitoring this channel it is possible to collect sentiments relevant to the broadcasted content in real-time. The collected data exemplify that the aggregated sentiments correspond to important moments, and hence can be used to generate a summary of the event.

© All rights reserved Shirazi et al. and/or their publisher

 

Rohs, Michael, Oulasvirta, Antti and Suomalainen, Tiia (2011): Interaction with magic lenses: real-world validation of a Fitts' Law model. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 2725-2728. Available online

Rohs and Oulasvirta (2008) proposed a two-component Fitts' law model for target acquisition with magic lenses in mobile augmented reality (AR) with 1) a physical pointing phase, in which the target can be directly observed on the background surface, and 2) a virtual pointing phase, in which the target can only be observed through the device display. The model provides a good fit (R² = 0.88) with laboratory data, but it is not known if it generalizes to real-world AR tasks. In the present outdoor study, subjects (N=12) did building-selection tasks in an urban area. The differences in task characteristics to the laboratory study are drastic: targets are three-dimensional and they vary in shape, size, z-distance, and visual context. Nevertheless, the model yielded an R² of 0.80, and when using effective target width an R² of 0.88 was achieved.

© All rights reserved Rohs et al. and/or their publisher
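The entries above report Fitts' law model fits in terms of R². As a reminder of what that figure measures, here is a minimal sketch that fits the standard one-component Fitts' law, MT = a + b·log₂(D/W + 1), by least squares and computes R²; the data values are fabricated for illustration, and the papers' actual two-component model adds a separate physical-pointing term not shown here.

```python
import math

def fit_fitts(distances, widths, times):
    """Least-squares fit of MT = a + b * ID, where ID = log2(D/W + 1)
    (Shannon formulation). Returns (a, b, r_squared)."""
    ids = [math.log2(d / w + 1) for d, w in zip(distances, widths)]
    n = len(ids)
    mx, my = sum(ids) / n, sum(times) / n
    sxx = sum((x - mx) ** 2 for x in ids)
    sxy = sum((x - mx) * (y - my) for x, y in zip(ids, times))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(ids, times))
    ss_tot = sum((y - my) ** 2 for y in times)
    return a, b, 1 - ss_res / ss_tot

# Fabricated example data: acquisition times that grow with task difficulty.
D = [100, 200, 400, 800]   # target distances
W = [20, 20, 20, 20]       # target widths
MT = [0.55, 0.71, 0.92, 1.10]  # movement times in seconds
a, b, r2 = fit_fitts(D, W, MT)
```

An R² near 1 means the linear model explains almost all the variance in movement time, which is what the 0.80-0.88 values above indicate for the two-component model.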

 

Dachselt, Raimund, Jones, Matt, Häkkilä, Jonna, Löchtefeld, Markus, Rohs, Michael and Rukzio, Enrico (2011): Mobile and personal projection (MP2). In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 21-23. Available online

The emergence of mobile and personal projection devices promises new ways to display and interact with content while the user is mobile, and offer new opportunities and challenges for HCI. This workshop aims to formulate fundamental research questions around this emerging field and provides a venue for discussion for researchers and practitioners working in this area. We will focus on new interaction techniques, applications, personal projection devices, interaction design, multi-user aspects, multi-modal user interfaces and social implications. Our aim is to foster the evolution of a mobile and personal projection community.

© All rights reserved Dachselt et al. and/or their publisher

 

Kratz, Sven, Westermann, Tilo, Rohs, Michael and Essl, Georg (2011): CapWidgets: tangible widgets versus multi-touch controls on mobile devices. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 1351-1356. Available online

We present CapWidgets, passive tangible controls for capacitive touch screens. CapWidgets bring back physical controls to off-the-shelf multi-touch surfaces as found in mobile phones and tablet computers. While the user touches the widget, the surface detects the capacitive marker on the widget's underside. We study the relative performance of this tangible interaction with direct multi-touch interaction and our experimental results show that user performance and preferences are not automatically in favor of tangible widgets and careful design is necessary to validate their properties.

© All rights reserved Kratz et al. and/or their publisher

 

Kratz, Sven, Rohs, Michael, Wolf, Katrin, Müller, Jörg, Wilhelm, Mathias, Johansson, Carolina, Tholander, Jakob and Laaksolahti, Jarmo (2011): Body, movement, gesture & tactility in interaction with mobile devices. In: Proceedings of 13th Conference on Human-computer interaction with mobile devices and services 2011. pp. 757-759. Available online

In the search for novel and more expressive interaction techniques for mobile devices, bodily aspects such as movement, gesture, and touch-based interfaces are prominent. For instance, touch-screen gestures have found widespread application in mobile device interfaces while bodily gestures involving device movement are successfully applied in gaming scenarios. Research systems increasingly explore other modalities, like pressure, free-hand and on-body interaction in mobile settings. This has become possible through ongoing developments that have made sensing and actuating technologies cheaper and more easily integrated in mobile and handheld devices. The turn towards experiential, embodied, and enacted perspectives on cognition and action has also contributed to a shift in what aspects of interaction to focus upon in interaction design. This has led HCI researchers to explore not only how the whole human body can be taken into account in design, but also to explore new domains of application for instance in leisure, entertainment and public urban environments.

© All rights reserved Kratz et al. and/or ACM Press

 

Qin, Qian, Rohs, Michael and Kratz, Sven (2011): Dynamic ambient lighting for mobile devices. In: Proceedings of the 2011 ACM Symposium on User Interface Software and Technology 2011. pp. 51-52. Available online

The information a small mobile device can show via its display has always been limited by its size. In large information spaces, relevant information, such as important locations on a map, can get clipped when a user starts zooming and panning. Dynamic ambient lighting allows mobile devices to visualize off-screen objects by illuminating the background without compromising valuable display space. The lighted spots can be used to show the direction and distance of such objects by varying the spot's position and intensity. Dynamic ambient lighting also provides a new way of displaying the state of a mobile device. Illumination is provided by a prototype rear-of-device shell which contains LEDs and requires the device to be placed on a surface, such as a table or desk.

© All rights reserved Qin et al. and/or ACM Press

2010
 

Kratz, Sven, Brodien, Ivo and Rohs, Michael (2010): Semi-automatic zooming for mobile map navigation. In: Proceedings of 12th Conference on Human-computer interaction with mobile devices and services 2010. pp. 63-72. Available online

In this paper we present a novel interface for mobile map navigation based on Semi-Automatic Zooming (SAZ). SAZ gives the user the ability to manually control the zoom level of an SDAZ interface, while retaining the automatic zooming characteristics of that interface at times when the user is not explicitly controlling the zoom level. In a user study conducted using a realistic mobile map with a wide scale space, we compare SAZ with existing map interface techniques, multi-touch and Speed-Dependent Automatic Zooming (SDAZ). We extend a dynamic state-space model for Speed-Dependent Automatic Zooming (SDAZ) to accept 2D tilt input for scroll rate and zoom level control and implement a dynamically zoomable map view with access to high-resolution map material for use in our study. The study reveals that SAZ performs significantly better than SDAZ and that SAZ is comparable in performance and usability to a standard multi-touch map interface. Furthermore, the study shows that SAZ could serve as an alternative to multi-touch as input technique for mobile map interfaces.

© All rights reserved Kratz et al. and/or their publisher
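The SAZ entry above hinges on one idea: the zoom level follows scroll speed automatically, unless the user takes manual control. A minimal sketch of that control policy follows; the linear speed-to-zoom mapping, the gain `k`, and the zoom bounds are illustrative assumptions, not the paper's actual dynamic state-space model.

```python
def sdaz_zoom(scroll_speed, z_min=1.0, z_max=16.0, k=0.5):
    """Speed-Dependent Automatic Zooming: the faster the user scrolls,
    the further the view zooms out, so fast panning stays legible.
    Here zoom level grows linearly with speed, clamped to [z_min, z_max]."""
    return min(z_max, z_min + k * abs(scroll_speed))

def saz_zoom(scroll_speed, manual_zoom=None, **kw):
    """Semi-Automatic Zooming: when the user explicitly sets a zoom level,
    it overrides the automatic one; otherwise fall back to SDAZ."""
    if manual_zoom is not None:
        return manual_zoom
    return sdaz_zoom(scroll_speed, **kw)
```

The design point the study makes is exactly this override path: SAZ keeps SDAZ's hands-free behavior while letting the user reclaim the zoom axis at any moment.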

 

Kray, Christian, Nesbitt, Daniel, Dawson, John and Rohs, Michael (2010): User-defined gestures for connecting mobile phones, public displays, and tabletops. In: Proceedings of 12th Conference on Human-computer interaction with mobile devices and services 2010. pp. 239-248. Available online

Gestures can offer an intuitive way to interact with a computer. In this paper, we investigate the question whether gesturing with a mobile phone can help to perform complex tasks involving two devices. We present results from a user study, where we asked participants to spontaneously produce gestures with their phone to trigger a set of different activities. We investigated three conditions (device configurations): phone-to-phone, phone-to-tabletop, and phone to public display. We report on the kinds of gestures we observed as well as on feedback from the participants, and provide an initial assessment of which sensors might facilitate gesture recognition in a phone. The results suggest that phone gestures have the potential to be easily understood by end users and that certain device configurations and activities may be well suited for gesture control.

© All rights reserved Kray et al. and/or their publisher

 

Stewart, Craig, Rohs, Michael, Kratz, Sven and Essl, Georg (2010): Characteristics of pressure-based input for mobile devices. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 801-810. Available online

We conducted a series of user studies to understand and clarify the fundamental characteristics of pressure in user interfaces for mobile devices. We seek to provide insight to clarify a longstanding discussion on mapping functions for pressure input. Previous literature is conflicted about the correct transfer function to optimize user performance. Our study results suggest that the discrepancy can be explained by different signal conditioning circuitry and with improved signal conditioning the user-performed precision relationship is linear. We also explore the effects of hand pose when applying pressure to a mobile device from the front, the back, or simultaneously from both sides in a pinching movement. Our results indicate that grasping type input outperforms single-sided input and is competitive with pressure input against solid surfaces. Finally we provide an initial exploration of non-visual multimodal feedback, motivated by the desire for eyes-free use of mobile devices. The findings suggest that non-visual pressure input can be executed without degradation in selection time but suffers from accuracy problems.

© All rights reserved Stewart et al. and/or their publisher

 

Gehring, Sven, Löchtefeld, Markus, Schöning, Johannes, Gorecky, Dominic, Stephan, Peter, Krüger, Antonio and Rohs, Michael (2010): Mobile product customization. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 3463-3468. Available online

Many companies are using the web to enable customers to individually customize their products that range from automobiles and bicycles to CDs, cosmetics and shirts. In this paper we present a mobile application for product customization and production within a smart factory. This allows the ad hoc configuration of products at the point of sale (POS). We investigate human factors when customizing products while interacting with them. We focus on the concept of the mobile client that enables this ad hoc modification, but also present the production chain behind our product. We believe that this particular 3D interaction with a product and a mobile device helps to improve customer satisfaction as it allows for customizing a product in an easy and intuitive way. From a CHI perspective an important aspect is that our mobile augmented reality interface can help to match the customer's expectations with the final modified product and allows the most natural and intuitive interaction. As a use case of the system, we present the modification of a soap dispenser.

© All rights reserved Gehring et al. and/or their publisher

 

Kratz, Sven and Rohs, Michael (2010): A $3 gesture recognizer: simple gesture recognition for devices equipped with 3D acceleration sensors. In: Proceedings of the 2010 International Conference on Intelligent User Interfaces 2010. pp. 341-344. Available online

We present the $3 Gesture Recognizer, a simple but robust gesture recognition system for input devices featuring 3D acceleration sensors. The algorithm is designed to be implemented quickly in prototyping environments, is intended to be device-independent and does not require any special toolkits or frameworks. It relies solely on simple trigonometric and geometric calculations. A user evaluation of our system resulted in a correct gesture

© All rights reserved Kratz and Rohs and/or their publisher
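The abstract above describes a recognizer built from "simple trigonometric and geometric calculations" in the spirit of the $1 family. As a rough illustration of that style of algorithm, here is a sketch that resamples a 3D acceleration trace to a fixed number of points and scores it against stored templates by average pointwise distance. This is an assumption-laden simplification (it omits, for example, the rotation-invariance search), not the authors' exact $3 algorithm.

```python
import math

def resample(trace, n=32):
    """Resample a list of (x, y, z) points to n evenly spaced points
    along the path using linear interpolation, as in $1-family recognizers."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    pts = list(trace)
    total = sum(dist(pts[i - 1], pts[i]) for i in range(1, len(pts)))
    step = total / (n - 1)
    out, acc = [pts[0]], 0.0
    i = 1
    while i < len(pts):
        d = dist(pts[i - 1], pts[i])
        if d > 0 and acc + d >= step:
            t = (step - acc) / d
            q = tuple(p + t * (c - p) for p, c in zip(pts[i - 1], pts[i]))
            out.append(q)
            pts.insert(i, q)  # continue walking from the interpolated point
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(out) < n:  # guard against floating-point shortfall
        out.append(pts[-1])
    return out[:n]

def score(a, b):
    """Average pointwise Euclidean distance between two resampled traces."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def recognize(trace, templates):
    """Return the name of the template closest to the input trace."""
    t = resample(trace)
    return min(templates, key=lambda name: score(t, resample(templates[name])))
```

A usage sketch: record a few template traces per gesture from the accelerometer, then call `recognize(live_trace, templates)`; because matching is a plain nearest-template search, it needs only one or a few training examples per gesture, which matches the "significantly less training data" claim above.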

 

Kratz, Sven and Rohs, Michael (2010): The $3 recognizer: simple 3D gesture recognition on mobile devices. In: Proceedings of the 2010 International Conference on Intelligent User Interfaces 2010. pp. 419-420. Available online

We present the $3 Gesture Recognizer, a simple but robust gesture recognition system for input devices featuring 3D acceleration sensors. The algorithm is designed to be implemented quickly in prototyping environments, is intended to be device-independent and does not require any special toolkits or frameworks, but relies solely on simple trigonometric and geometric calculations. Our method requires significantly less training data than other gesture recognizers and is thus suited to be deployed and to deliver results rapidly.

© All rights reserved Kratz and Rohs and/or their publisher

 

Schöning, Johannes, Löchtefeld, Markus, Rohs, Michael and Krüger, Antonio (2010): Projector Phones: A New Class of Interfaces for Augmented Reality. In International Journal of Mobile Human Computer Interaction, 2 (3) pp. 1-14. Available online

With the miniaturization of projection technology, the integration of tiny projection units into mobile devices is no longer fiction; therefore, such integrated projectors in mobile devices could make mobile projection ubiquitous. These phones will have the ability to project large-scale information onto any surfaces in the real world, and by doing so, the interaction space of the mobile device can be considerably expanded. In addition, physical objects in the environment can be augmented with additional information, which can support interaction concepts that are not even possible on modern desktop computers today. The authors believe that mobile camera-projector units can form a promising interface type for mobile Augmented Reality (AR) applications, thus, this paper identifies different application classes of such interfaces. In addition, different spatial setups of camera and projector units will have an effect on the possible applications and the interaction space with the focus on the augmentation of real word objects in the environment. This paper presents two examples of applications for mobile camera-projector units and different hardware prototypes that allow augmentation of real world objects.

© All rights reserved Schöning et al. and/or their publisher

2009
 

Schöning, Johannes, Rohs, Michael, Kratz, Sven, Löchtefeld, Markus and Krüger, Antonio (2009): Map torchlight: a mobile augmented reality camera projector unit. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 3841-3846. Available online

The advantages of paper-based maps have been utilized in the field of mobile Augmented Reality (AR) in the last few years. Traditional paper-based maps provide high-resolution, large-scale information with zero power consumption. There are numerous implementations of magic lens interfaces that combine high-resolution paper maps with dynamic handheld displays. From an HCI perspective, the main challenge of magic lens interfaces is that users have to switch their attention between the magic lens and the information in the background. In this paper, we attempt to overcome this problem by using a lightweight mobile camera projector unit to augment the paper map directly with additional information. The "Map Torchlight" is tracked over a paper map and can precisely highlight points of interest, streets, and areas to give directions or other guidance for interacting with the map.

© All rights reserved Schöning et al. and/or ACM Press

 

Schöning, Johannes, Daiber, Florian, Krüger, Antonio and Rohs, Michael (2009): Using hands and feet to navigate and manipulate spatial data. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 4663-4668. Available online

We demonstrate how multi-touch hand gestures in combination with foot gestures can be used to perform navigation tasks in interactive systems. The geospatial domain is an interesting example to show the advantages of the combination of both modalities because the complex user interfaces of common Geographic Information Systems (GIS) require a high degree of expertise from their users. Recent developments in interactive surfaces that enable the construction of low cost multi-touch displays and relatively cheap sensor technology to detect foot gestures allow the deep exploration of these input modalities for GIS users with medium or low expertise. In this paper, we provide a categorization of multitouch hand and foot gestures for the interaction with spatial data on a large-scale interactive wall. In addition we show with an initial evaluation how these gestures can improve the overall interaction with spatial information.

© All rights reserved Schöning et al. and/or ACM Press

 

Kratz, Sven and Rohs, Michael (2009): HoverFlow: expanding the design space of around-device interaction. In: Proceedings of 11th Conference on Human-computer interaction with mobile devices and services 2009. p. 4. Available online

In this paper we explore the design space of around-device interaction (ADI). This approach seeks to expand the interaction possibilities of mobile and wearable devices beyond the confines of the physical device itself to include the space around it. This enables rich 3D input, comprising coarse movement-based gestures, as well as static position-based gestures. ADI can help to solve occlusion problems and scales down to very small devices. We present a novel around-device interaction interface that allows mobile devices to track coarse hand gestures performed above the device's screen. Our prototype uses infrared proximity sensors to track hand and finger positions in the device's proximity. We present an algorithm for detecting hand gestures and provide a rough overview of the design space of ADI-based interfaces.

© All rights reserved Kratz and Rohs and/or their publisher

 

Schöning, Johannes, Krüger, Antonio, Cheverst, Keith, Rohs, Michael, Löchtefeld, Markus and Taher, Faisal (2009): PhotoMap: using spontaneously taken images of public maps for pedestrian navigation tasks on mobile devices. In: Proceedings of 11th Conference on Human-computer interaction with mobile devices and services 2009. p. 14. Available online

In many mid- to large-sized cities public maps are ubiquitous. One can also find a great number of maps in parks or near hiking trails. Public maps help to facilitate orientation and provide special information not only to tourists but also to locals who just want to look up an unfamiliar place while on the go. These maps offer many advantages compared to mobile maps from services like Google Maps Mobile or Nokia Maps. They often show local landmarks and sights that are not shown on standard digital maps. Often these 'You are here' (YAH) maps are adapted to a special use case, e.g. a zoo map or a hiking map of a certain area. Being designed for a specific purpose these maps are often aesthetically well designed and their usage is therefore more pleasant. In this paper we present a novel technique and application called PhotoMap that uses images of 'You are here' maps taken with a GPS-enhanced mobile camera phone as background maps for on-the-fly navigation tasks. We discuss different implementations of the main challenge, namely helping the user to properly georeference the taken image with sufficient accuracy to support pedestrian navigation tasks. We present a study that discusses the suitability of various public maps for this task and we evaluate if these georeferenced photos can be used for navigation on GPS-enabled devices.

© All rights reserved Schöning et al. and/or their publisher

 

Rohs, Michael, Essl, Georg, Schöning, Johannes, Naumann, Anja, Schleicher, Robert and Krüger, Antonio (2009): Impact of item density on magic lens interactions. In: Proceedings of 11th Conference on Human-computer interaction with mobile devices and services 2009. p. 38. Available online

We conducted a user study to investigate the effect of visual context in handheld augmented reality interfaces. A dynamic peephole interface (without visual context beyond the device display) was compared to a magic lens interface (with video see-through augmentation of external visual context). The task was to explore objects on a map and look for a specific attribute shown on the display. We tested different sizes of visual context as well as different numbers of items per area, i.e. different item densities. We found that visual context is most effective for sparse item distributions and the performance benefit decreases with increasing density. User performance in the magic lens case approaches the performance of the dynamic peephole case the more densely spaced the items are. In all conditions, subjective feedback indicates that participants generally prefer visual context over the lack thereof. The insights gained from this study are relevant for designers of mobile AR and dynamic peephole interfaces by suggesting when external visual context is most beneficial.

© All rights reserved Rohs et al. and/or their publisher

 

Kratz, Sven and Rohs, Michael (2009): Hoverflow: exploring around-device interaction with IR distance sensors. In: Proceedings of 11th Conference on Human-computer interaction with mobile devices and services 2009. p. 42. Available online

By equipping a mobile device with distance sensing capabilities, we aim to expand the interaction possibilities of mobile and wearable devices beyond the confines of the physical device itself to include the space immediately around it. Our prototype, an Apple iPhone equipped with six IR distance sensors, allows for rich 3D input, comprising coarse movement-based hand gestures, as well as static position-based gestures. A demonstration application, HoverFlow, illustrates the use of coarse hand gestures for interaction with mobile applications. This type of interaction, which we call Around-Device Interaction (ADI) has the potential to help to solve occlusion problems on small-screen mobile devices and scales well to small device sizes.

© All rights reserved Kratz and Rohs and/or their publisher

 

Kray, Christian, Rohs, Michael, Hook, Jonathan and Kratz, Sven (2009): Bridging the gap between the Kodak and the Flickr generations: A novel interaction technique for collocated photo sharing. In International Journal of Human-Computer Studies, 67 (12) pp. 1060-1072. Available online

Passing around stacks of paper photographs while sitting around a table is one of the key social practices defining what is commonly referred to as the 'Kodak Generation'. Due to the way digital photographs are stored and handled, this practice does not translate well to the 'Flickr Generation', where collocated photo sharing often involves the (wireless) transmission of a photo from one mobile device to another. In order to facilitate 'cross-generation' sharing without enforcing either practice, it is desirable to bridge this gap in a way that incorporates familiar aspects of both. In this paper, we discuss a novel interaction technique that addresses some of the constraints introduced by current communication technology, and that enables photo sharing in a way, which resembles the passing of stacks of paper photographs. This technique is based on dynamically generated spatial regions around mobile devices and has been evaluated through two user studies. The results we obtained indicate that our technique is easy to learn and as fast, or faster than, current technology such as transmitting photos between devices using Bluetooth. In addition, we found evidence of different sharing techniques influencing social practice around photo sharing. The use of our technique resulted in a more inclusive and group-oriented behavior in contrast to Bluetooth photo sharing, which resulted in a more fractured setting composed of sub-groups.

© All rights reserved Kray et al. and/or Academic Press

2008
 

Rohs, Michael and Oulasvirta, Antti (2008): Target acquisition with camera phones when used as magic lenses. In: Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems April 5-10, 2008. pp. 1409-1418. Available online

When camera phones are used as magic lenses in handheld augmented reality applications involving wall maps or posters, pointing can be divided into two phases: (1) an initial coarse physical pointing phase, in which the target can be directly observed on the background surface, and (2) a fine-control virtual pointing phase, in which the target can only be observed through the device display. In two studies, we show that performance cannot be adequately modeled with standard Fitts' law, but can be adequately modeled with a two-component modification. We chart the performance space and analyze users' target acquisition strategies in varying conditions. Moreover, we show that the standard Fitts' law model does hold for dynamic peephole pointing where there is no guiding background surface and hence the physical pointing component of the extended model is not needed. Finally, implications for the design of magic lens interfaces are considered.

© All rights reserved Rohs and Oulasvirta and/or ACM Press

 

Schöning, Johannes, Hecht, Brent, Raubal, Martin, Krüger, Antonio, Marsh, Meredith and Rohs, Michael (2008): Improving interaction with virtual globes through spatial thinking: helping users ask "why?". In: Proceedings of the 2008 International Conference on Intelligent User Interfaces 2008. pp. 129-138. Available online

Virtual globes have progressed from little-known technology to broadly popular software in a mere few years. We investigated this phenomenon through a survey and discovered that, while virtual globes are en vogue, their use is restricted to a small set of tasks so simple that they do not involve any spatial thinking. Spatial thinking requires that users ask "what is where" and "why"; the most common virtual globe tasks only include the "what". Based on the results of this survey, we have developed a multi-touch virtual globe derived from an adapted virtual globe paradigm designed to widen the potential uses of the technology by helping its users to inquire about both the "what is where" and "why" of spatial distribution. We do not seek to provide users with full GIS (geographic information system) functionality, but rather we aim to facilitate the asking and answering of simple "why" questions about general topics that appeal to a wide virtual globe user base.

© All rights reserved Schöning et al. and/or ACM Press

 

Kray, Christian, Rohs, Michael, Hook, Jon and Kratz, Sven G. (2008): Group coordination and negotiation through spatial proximity regions around mobile devices on augmented tabletops. In: Third IEEE International Workshop on Tabletops and Interactive Surfaces Tabletop 2008 October 1-3, 2008, Amsterdam, The Netherlands. pp. 1-8. Available online

 

Henze, Niels, Broll, Gregor, Rukzio, Enrico, Rohs, Michael, Zimmermann, Andreas and Boll, Susanne (eds.) Mobile Interaction with the Real World 2008 - MIRW 2008 - Mobile HCI Workshop September 2, 2008, Amsterdam, The Netherlands.

 

Schöning, Johannes, Rohs, Michael and Krüger, Antonio (2008): Mobile Interaction with the real world. In: Henze, Niels, Broll, Gregor, Rukzio, Enrico, Rohs, Michael, Zimmermann, Andreas and Boll, Susanne (eds.) Mobile Interaction with the Real World 2008 - MIRW 2008 - Mobile HCI Workshop September 2, 2008, Amsterdam, The Netherlands. pp. 51-60. Available online

 

Cheverst, Keith, Schöning, Johannes, Krüger, Antonio and Rohs, Michael (2008): Photomap: Snap, Grab and Walk away with a "You Are Here" Map. In: Henze, Niels, Broll, Gregor, Rukzio, Enrico, Rohs, Michael, Zimmermann, Andreas and Boll, Susanne (eds.) Mobile Interaction with the Real World 2008 - MIRW 2008 - Mobile HCI Workshop September 2, 2008, Amsterdam, The Netherlands. pp. 73-82. Available online

 

Henze, Niels, Broll, Gregor, Rukzio, Enrico, Rohs, Michael and Zimmermann, Andreas (2008): Workshop on Mobile Interaction with the Real World. In: Henze, Niels, Broll, Gregor, Rukzio, Enrico, Rohs, Michael, Zimmermann, Andreas and Boll, Susanne (eds.) Mobile Interaction with the Real World 2008 - MIRW 2008 - Mobile HCI Workshop September 2, 2008, Amsterdam, The Netherlands. pp. 9-14. Available online

 

Henze, Niels, Broll, Gregor, Rukzio, Enrico, Rohs, Michael and Zimmermann, Andreas (2008): Mobile interaction with the real world. In: Hofte, G. Henri ter, Mulder, Ingrid and Ruyter, Boris E. R. de (eds.) Proceedings of the 10th Conference on Human-Computer Interaction with Mobile Devices and Services - Mobile HCI 2008 September 2-5, 2008, Amsterdam, the Netherlands. pp. 563-565. Available online

2007
 

Rohs, Michael and Essl, Georg (2007): Sensing-based interaction for information navigation on handheld displays. In: Cheok, Adrian David and Chittaro, Luca (eds.) Proceedings of the 9th Conference on Human-Computer Interaction with Mobile Devices and Services - Mobile HCI 2007 September 9-12, 2007, Singapore. pp. 387-394. Available online

 

Rohs, Michael and Essl, Georg (2007): CaMus2: collaborative music performance with mobile camera phones. In: Inakage, Masa, Lee, Newton, Tscheligi, Manfred, Bernhaupt, Regina and Natkin, Stéphane (eds.) Proceedings of the International Conference on Advances in Computer Entertainment Technology - ACE 2007 June 13-15, 2007, Salzburg, Austria. pp. 190-195. Available online

 

Rohs, Michael, Schöning, Johannes, Raubal, Martin, Essl, Georg and Krüger, Antonio (2007): Map navigation with mobile devices: virtual versus physical movement with and without visual context. In: Massaro, Dominic W., Takeda, Kazuya, Roy, Deb and Potamianos, Alexandros (eds.) Proceedings of the 9th International Conference on Multimodal Interfaces - ICMI 2007 November 12-15, 2007, Nagoya, Aichi, Japan. pp. 146-153. Available online

 
A user study was conducted to compare the performance of three methods for map navigation with mobile devices: joystick navigation, the dynamic peephole method without visual context, and the magic lens paradigm using external visual context. The joystick method is the familiar scrolling and panning of a virtual map while the device itself stays static. In the dynamic peephole method the device is moved and the map is fixed with respect to an external frame of reference, but no visual information is present outside the device's display. The magic lens method augments external content with graphical overlays, hence providing visual context outside the device display; here, too, motion of the device steers navigation. We compare these methods in a study measuring user performance, motion patterns, and subjective preference via questionnaires. The study demonstrates the advantage of dynamic peephole and magic lens interaction over joystick interaction in terms of search time and degree of exploration of the search space.

© All rights reserved Rohs et al. and/or their publisher

2006
 
 
 

Ballagas, Rafael, Borchers, Jan, Rohs, Michael and Sheridan, Jennifer G. (2006): The Smart Phone: A Ubiquitous Input Device. In IEEE Pervasive Computing, 5 (1) pp. 70-77. Available online

 

Rohs, Michael and Essl, Georg (2006): Which one is better?: information navigation techniques for spatially aware handheld displays. In: Proceedings of the 2006 International Conference on Multimodal Interfaces 2006. pp. 100-107. Available online

Information navigation techniques for handheld devices support interacting with large virtual spaces on small displays, for example finding targets on a large-scale map. Since only a small part of the virtual space can be shown on the screen at once, typical interfaces allow for scrolling and panning to reach off-screen content. Spatially aware handheld displays sense their position and orientation in physical space in order to provide a corresponding view in virtual space. We implemented various one-handed navigation techniques for camera-tracked spatially aware displays. The techniques are compared in a series of abstract selection tasks that require the investigation of different levels of detail. The tasks are relevant for interfaces that enable navigating large-scale maps and finding contextual information on them. The results show that halo is significantly faster than the other techniques. In complex situations zoom and halo show comparable performance. Surprisingly, the combination of halo and zooming is detrimental to user performance.

© All rights reserved Rohs and Essl and/or their publisher
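The "halo" technique compared above marks each off-screen target with a circular arc, centered on the target, whose radius encodes its distance. A minimal sketch of that geometry follows; this is not the authors' code, and the function name, viewport representation, and intrusion margin are assumptions for illustration:

```python
# Illustrative sketch of the core geometry behind halo-style off-screen
# visualization: a circle centered on the target is sized so that its arc
# just intrudes into the viewport, letting the visible arc's curvature
# convey the target's direction and distance.

def halo_radius(target, viewport, intrusion=20.0):
    """Radius of the halo circle for an off-screen target.

    target    -- (x, y) position of the target in map coordinates
    viewport  -- (left, top, right, bottom) of the visible screen region
    intrusion -- how far (in pixels) the arc should reach into the viewport
    """
    x, y = target
    left, top, right, bottom = viewport
    # Distance from the target to the nearest point of the viewport rectangle.
    dx = max(left - x, 0.0, x - right)
    dy = max(top - y, 0.0, y - bottom)
    dist = (dx * dx + dy * dy) ** 0.5
    if dist == 0.0:
        return None  # target is on-screen; no halo needed
    # A circle of this radius, centered on the target, pokes `intrusion`
    # pixels past the nearest border, so part of its arc is visible.
    return dist + intrusion
```

For example, with a 320x240 viewport, a target 100 px left of the screen edge gets a halo radius of 120 px, so its arc reaches 20 px into the view.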

 

Rath, Matthias and Rohs, Michael (2006): Explorations in sound for tilting-based interfaces. In: Proceedings of the 2006 International Conference on Multimodal Interfaces 2006. pp. 295-301. Available online

Everyday experience, as well as recent studies, suggests that the information contained in ecological sonic feedback can improve human control of, and interaction with, a system. This notion is particularly worthwhile to consider in the context of mobile, tilting-based interfaces, which have been proposed, developed, and studied extensively. Two interfaces are used for this purpose: the Ballancer, based on the metaphor of balancing a rolling ball on a track, and a more concretely application-oriented setup of a mobile phone with tilting-based input. First pilot studies have been conducted.

© All rights reserved Rath and Rohs and/or their publisher

2005
 

Ballagas, Rafael, Rohs, Michael and Sheridan, Jennifer G. (2005): Sweep and point and shoot: phonecam-based interactions for large public displays. In: Proceedings of ACM CHI 2005 Conference on Human Factors in Computing Systems 2005. pp. 1200-1203. Available online

This paper focuses on enabling interactions with large public displays using the most ubiquitous personal computing device, the mobile phone. Two new interaction techniques are introduced that use the embedded camera on mobile phones as an enabling technology. The "Point&Shoot" technique allows users to select objects using visual codes to set up an absolute coordinate system on the display surface instead of tagging individual objects on the screen. The "Sweep" technique enables users to use the phone like an optical mouse with multiple degrees of freedom and allows interaction without having to point the camera at the display. Prototypes of these interactions have been implemented and evaluated using modern mobile phone technologies. This proof of concept provides a performance baseline and gives valuable insights to guide future research and development. These techniques are intended to inspire and enable new classes of large public display applications.

© All rights reserved Ballagas et al. and/or ACM Press
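The "Sweep" technique described above treats the phone like an optical mouse: frame-to-frame camera displacement drives relative cursor motion on the large display. A minimal sketch of that integration step follows; it is not the authors' implementation, and the function name, gain value, and screen size are assumptions. The per-frame (dx, dy) displacement would come from an image-registration step on the phone and is passed in here directly:

```python
# Illustrative sketch of Sweep-style relative pointing: integrate per-frame
# camera displacements into a cursor position, applying a control-display
# gain and clamping to the display bounds.

def sweep_cursor(deltas, start=(0, 0), gain=2.0, screen=(1024, 768)):
    """Integrate relative (dx, dy) displacements into a cursor path."""
    x, y = start
    path = [(x, y)]
    for dx, dy in deltas:
        # Scale the phone-camera motion by the gain, then keep the cursor
        # inside the public display.
        x = min(max(x + gain * dx, 0), screen[0])
        y = min(max(y + gain * dy, 0), screen[1])
        path.append((x, y))
    return path
```

For example, a rightward camera displacement of 10 px with gain 2.0 moves the cursor 20 px right; a large leftward sweep is clamped at the display edge.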

 
 

Rukzio, Enrico, Rohs, Michael, Wagner, Daniel and Hamard, John (2005): Development of interactive applications for mobile devices. In: Tscheligi, Manfred, Bernhaupt, Regina and Mihalic, Kristijan (eds.) Proceedings of the 7th Conference on Human-Computer Interaction with Mobile Devices and Services - Mobile HCI 2005 September 19-22, 2005, Salzburg, Austria. pp. 365-366. Available online

 

Ballagas, Rafael, Rohs, Michael and Sheridan, Jennifer G. (2005): Mobile Phones as Pointing Devices. In: Rukzio, Enrico, Häkkilä, Jonna, Spasojevic, Mirjana, Mäntyjärvi, Jani and Ravi, Nishkam (eds.) PERMID 2005 - Pervasive Mobile Interaction Devices - Mobile Devices as Pervasive User Interfaces and Interaction Devices - Workshop in conjunction with The 3rd International Conference on Pervasive Computing PERVASIVE 2005 May 11, 2005, Munich, Germany. pp. 27-30. Available online

 

Rohs, Michael and Roduner, Christof (2005): Camera Phones with Pen Input as Annotation Devices. In: Rukzio, Enrico, Häkkilä, Jonna, Spasojevic, Mirjana, Mäntyjärvi, Jani and Ravi, Nishkam (eds.) PERMID 2005 - Pervasive Mobile Interaction Devices - Mobile Devices as Pervasive User Interfaces and Interaction Devices - Workshop in conjunction with The 3rd International Conference on Pervasive Computing PERVASIVE 2005 May 11, 2005, Munich, Germany. pp. 23-26. Available online

 

Rohs, Michael and Zweifel, Philipp (2005): A Conceptual Framework for Camera Phone-Based Interaction Techniques. In: Gellersen, Hans-Werner, Want, Roy and Schmidt, Albrecht (eds.) PERVASIVE 2005 - Pervasive Computing, Third International Conference May 8-13, 2005, Munich, Germany. pp. 171-189. Available online

2003
 

Siegemund, Frank and Rohs, Michael (2003): Rendezvous layer protocols for Bluetooth-enabled smart devices. In Personal and Ubiquitous Computing, 7 (2) pp. 91-101. Available online

 
 
 


Page Information

Page maintainer: The Editorial Team
URL: http://www.interaction-design.org/references/authors/michael_rohs.html