Publication statistics

Publication period: 2000-2012
Publication count: 44
Number of co-authors: 67



Co-authors

Number of publications with 3 favourite co-authors:

Sriram Subramanian: 8
Carl Gutwin: 7
Xing-Dong Yang: 5

Productive colleagues

Pourang Irani's 3 most productive colleagues, by number of publications:

Carl Gutwin: 116
Andy Cockburn: 68
Antonio Krüger: 59


Pourang Irani

Personal Homepage:
http://www.cs.umanitoba.ca/~irani/


Publications by Pourang Irani (bibliography)

2012
 

Rea, Daniel J., Young, James E. and Irani, Pourang (2012): The Roomba mood ring: an ambient-display robot. In: Proceedings of the 7th International Conference on Human-Robot Interaction 2012. pp. 217-218.

We present a robot augmented with an ambient display that communicates using a multi-color halo. We use this robot in a public café-style setting where people vote on which colors the robot will display: we ask people to select a color which "best represents their mood". People can vote from a mobile device (e.g., smart phone or laptop) through a web interface. Thus, the robot's display is an abstract aggregate of the current mood of the room. Our research investigates how a robot with an ambient display may integrate into a space. For example, how will the robot alter how people use or perceive the environment, or how people will interact with the robot itself? In this paper we describe our initial prototype, an iRobot Roomba augmented with lights, and highlight the research questions driving our exploration, including initial study design.

© All rights reserved Rea et al. and/or their publisher

 

Hossain, Zahid, Hasan, Khalad, Liang, Hai-Ning and Irani, Pourang (2012): EdgeSplit: facilitating the selection of off-screen objects. In: Proceedings of the 14th Conference on Human-computer interaction with mobile devices and services 2012. pp. 79-82.

Devices with small viewports (e.g., smartphones or GPS units) result in interfaces where objects of interest can easily reside outside the view, in off-screen space. Researchers have addressed this challenge and have proposed visual cues to assist users in perceptually locating off-screen objects. However, little attention has been paid to methods for selecting those objects. Current designs of off-screen cues can result in overlaps that make it difficult to use the cues as handles through which users can select the off-screen objects they represent. In this paper, we present EdgeSplit, a technique that facilitates both the visualization and selection of off-screen objects on small devices. EdgeSplit exploits the space around the device's borders to display proxies of off-screen objects and then partitions the border regions into non-overlapping areas that make selection of objects easier. We present an effective algorithm that provides such partitioning and demonstrate the effectiveness of EdgeSplit for selecting off-screen objects.

© All rights reserved Hossain et al. and/or ACM Press
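The border-partitioning idea lends itself to a small sketch: treat the screen border as a closed 1D loop, project each off-screen object onto it, and split the loop at midpoints between neighbouring proxies so each proxy receives a non-overlapping selectable region. This is a simplified reading of EdgeSplit, not the authors' algorithm; `partition_border` is a hypothetical name.

```python
def partition_border(projections, perimeter):
    """Given 1D positions of object proxies along the screen border
    (parameterized 0..perimeter, treated as a closed loop), return
    non-overlapping (start, end) regions split at the midpoints
    between neighbouring proxies."""
    order = sorted(range(len(projections)), key=lambda i: projections[i])
    n = len(order)
    regions = [None] * n
    for k, i in enumerate(order):
        p = projections[i]
        prev_p = projections[order[k - 1]]        # wraps around for k == 0
        next_p = projections[order[(k + 1) % n]]
        # midpoints on the closed border loop (distances taken modulo perimeter)
        start = (p - ((p - prev_p) % perimeter) / 2) % perimeter
        end = (p + ((next_p - p) % perimeter) / 2) % perimeter
        regions[i] = (start, end)
    return regions
```

A region may wrap past the perimeter origin (its end smaller than its start), which a hit-test would handle with the same modulo arithmetic.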

 

Hasan, Khalad, Yang, Xing-Dong, Liang, Hai-Ning and Irani, Pourang (2012): How to position the cursor?: an exploration of absolute and relative cursor positioning for back-of-device input. In: Proceedings of the 14th Conference on Human-computer interaction with mobile devices and services 2012. pp. 103-112.

Observational studies indicate that most people use one hand to interact with their mobile devices. Interaction on the back-of-device (BoD) has been proposed to enhance one-handed input for various tasks, including selection and gesturing. However, we do not possess a good understanding of some fundamental issues related to one-handed BoD input. In this paper, we attempt to fill this gap by conducting three studies. The first study explores suitable selection techniques; the second study investigates the performance and suitability of the two main modes of cursor movement: Relative and Absolute; and the last study examines solutions to the problem of reaching the lower part of the device. Our results indicate that for BoD interaction, relative input is more efficient and accurate for cursor positioning and target selection than absolute input. Based on these findings, we provide guidelines for designing BoD interactions for mobile devices.

© All rights reserved Hasan et al. and/or ACM Press

2011
 

Hasan, Khalad, Grossman, Tovi and Irani, Pourang (2011): Comet and target ghost: techniques for selecting moving targets. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 839-848.

Numerous applications such as simulations, air traffic control systems, and video surveillance systems are inherently composed of spatial objects that move in a scene. In many instances, users can benefit from tools that allow them to select these targets in real-time, without having to pause the dynamic display. However, selecting moving objects is considerably more difficult and error prone than selecting stationary targets. In this paper, we evaluate the effectiveness of several techniques that assist in selecting moving targets. We present Comet, a technique that enhances targets based on their speed and direction. We also introduce Target Ghost, which allows users to select a static proxy of the target, while leaving the motion uninterrupted. We found a speed benefit for the Comet in a 1D selection task in comparison to other cursor and target enhancements. For 2D selection, Comet outperformed Bubble cursor but only when Target Ghost was not available. We conclude with guidelines for design.

© All rights reserved Hasan et al. and/or their publisher

 

Williams, Cary, Yang, Xing Dong, Partridge, Grant, Millar-Usiskin, Joshua, Major, Arkady and Irani, Pourang (2011): TZee: exploiting the lighting properties of multi-touch tabletops for tangible 3d interactions. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 1363-1372.

Manipulating 3D objects on a tabletop is inherently problematic. Tabletops lack a third degree of freedom and thus require novel solutions to support even the simplest 3D manipulations. Our solution is TZee -- a passive tangible widget that enables natural interactions with 3D objects by exploiting the lighting properties of diffuse illumination (DI) multi-touch tabletops. TZee is assembled from stacked layers of acrylic glass to extend the tabletop's infrared light slightly above the surface without supplemental power. With TZee, users can intuitively scale, translate and rotate objects in all three dimensions, and also perform more sophisticated gestures, like "slicing" a volumetric object, that have not been possible with existing tabletop interaction schemes. TZee is built with affordable and accessible materials, and one tabletop surface can easily support multiple TZees. Moreover, since TZee is transparent, there are numerous possibilities to augment interactions with feedback, helpful hints, or other visual enhancements. We discuss several important design considerations and demonstrate the value of TZee with several applications.

© All rights reserved Williams et al. and/or their publisher

 

Yang, Xing-Dong, Grossman, Tovi, Irani, Pourang and Fitzmaurice, George (2011): TouchCuts and TouchZoom: enhanced target selection for touch displays using finger proximity sensing. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 2585-2594.

Although touch-screen laptops are increasing in popularity, users still do not comfortably rely on touch in these environments, as current software interfaces were not designed for finger use. In this paper, we first demonstrate the benefits of using touch as a complementary input modality along with the keyboard and mouse or touchpad in a laptop setting. To alleviate the frustration users experience with touch, we then design two techniques, TouchCuts, a single target expansion technique, and TouchZoom, a multiple target expansion technique. Both techniques facilitate the selection of small icons, by detecting the finger proximity above the display surface, and expanding the target as the finger approaches. In a controlled evaluation, we show that our techniques improve performance in comparison to both the computer mouse and a baseline touch-based target acquisition technique. We conclude by discussing other application scenarios that our techniques support.

© All rights reserved Yang et al. and/or their publisher

 

Han, Teng, Alexander, Jason, Karnik, Abhijit, Irani, Pourang and Subramanian, Sriram (2011): Kick: investigating the use of kick gestures for mobile interactions. In: Proceedings of 13th Conference on Human-computer interaction with mobile devices and services 2011. pp. 29-32.

In this paper we describe the use of kick gestures for interaction with mobile devices. Kicking is a well-studied leg action that can be harnessed in mobile contexts where the hands are busy or too dirty to interact with the phone. We examine the design space of kicking as an interaction technique through two user studies. The first study investigated how well users were able to control the direction of their kicks. Users aimed their kicks best when the movement range was divided into segments of at least 24°. In the second study we looked at the velocity of a kick. We found that users are able to kick with at least two distinct velocities. However, they also often undershoot the target velocity. Finally, we propose some specific applications in which kicks can prove beneficial.

© All rights reserved Han et al. and/or ACM Press

 

Ens, Barrett, Ahlström, David, Cockburn, Andy and Irani, Pourang (2011): Characterizing user performance with assisted direct off-screen pointing. In: Proceedings of 13th Conference on Human-computer interaction with mobile devices and services 2011. pp. 485-494.

The limited viewport size of mobile devices requires that users continuously acquire information that lies beyond the edge of the screen. Recent hardware solutions are capable of continually tracking a user's finger around the device. This has created new opportunities for interactive solutions, such as direct off-screen pointing: the ability to directly point at objects that are outside the viewport. We empirically characterize user performance with direct off-screen pointing when assisted by target cues. We predict time and accuracy outcomes for direct off-screen pointing with existing and derived models. We validate the models with good results (R² ≥ 0.9) and reveal that direct off-screen pointing takes up to four times longer than pointing at visible targets, depending on the desired accuracy tradeoff. Pointing accuracy degrades logarithmically with target distance. We discuss design implications in the context of several real-world applications.

© All rights reserved Ens et al. and/or ACM Press

 

Cauchard, Jessica R., Löchtefeld, Markus, Irani, Pourang, Schoening, Johannes, Krüger, Antonio, Fraser, Mike and Subramanian, Sriram (2011): Visual separation in mobile multi-display environments. In: Proceedings of the 2011 ACM Symposium on User Interface Software and Technology 2011. pp. 451-460.

Projector phones, handheld game consoles and many other mobile devices increasingly include more than one display, and therefore present a new breed of mobile Multi-Display Environments (MDEs) to users. Existing studies illustrate the effects of visual separation between displays in MDEs and suggest interaction techniques that mitigate these effects. Currently, mobile devices with heterogeneous displays such as projector phones are often designed without reference to visual separation issues; therefore it is critical to establish whether concerns and opportunities raised in the existing MDE literature apply to the emerging category of Mobile MDEs (MMDEs). This paper investigates the effects of visual separation in the context of MMDEs and contrasts these with fixed MDE results, and explores design factors for Mobile MDEs. Our study uses a novel eye-tracking methodology for measuring switches in visual context between displays and identifies that MMDEs offer increased design flexibility over traditional MDEs in terms of visual separation. We discuss these results and identify several design implications.

© All rights reserved Cauchard et al. and/or ACM Press

2010
 

Ahlström, David, Cockburn, Andy, Gutwin, Carl and Irani, Pourang (2010): Why it's quick to be square: modelling new and existing hierarchical menu designs. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 1371-1380.

We consider different hierarchical menu and toolbar-like interface designs from a theoretical perspective and show how a model based on visual search time, pointing time, decision time and expertise development can assist in understanding and predicting interaction performance. Three hierarchical menu designs are modelled -- a traditional pull-down menu, a pie menu and a novel Square Menu with its items arranged in a grid -- and the predictions are validated in an empirical study. The model correctly predicts the relative performance of the designs -- both the eventual dominance of Square Menus compared to traditional and pie designs and a performance crossover as users gain experience. Our work shows the value of modelling in HCI design, provides new insights about performance with different hierarchical menu designs, and demonstrates a new high-performance menu type.

© All rights reserved Ahlström et al. and/or their publisher
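A model of the kind the abstract describes can be sketched as an additive composition of visual search, decision, and Fitts'-law pointing times. The decomposition and constants below are illustrative only, not the paper's fitted model:

```python
import math

def selection_time(search_t, decision_t, distance, width, a=0.1, b=0.15):
    """Additive menu-performance model: total selection time is
    visual search time + decision time + Fitts'-law pointing time,
    where pointing time is a + b * log2(distance / width + 1).
    a and b are device-dependent constants (values here are made up)."""
    return search_t + decision_t + a + b * math.log2(distance / width + 1)
```

As expertise develops, search and decision times shrink toward zero, which is how such a model can predict a performance crossover between layouts.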

 

Jones, Eleanor, Alexander, Jason, Andreou, Andreas, Irani, Pourang and Subramanian, Sriram (2010): GesText: accelerometer-based gestural text-entry systems. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 2173-2182.

Accelerometers are common on many devices, including those that require text entry. We investigate how to enter text with devices that are solely enabled with accelerometers. The challenge of text entry with such devices can be overcome by carefully investigating the human limitations of gestural movement with accelerometers. Preliminary studies provide insight into two potential text-entry designs that purely use accelerometers for gesture recognition. In two experiments, we evaluate the effectiveness of each of the text-entry designs. The first experiment involves novice users over a 45 minute period while the second investigates the possible performance increases over a four day period. Our results reveal that a matrix-based text-entry system with a small set of simple gestures is the most efficient (5.4 wpm) and subjectively preferred by participants.

© All rights reserved Jones et al. and/or their publisher

 

Yang, Xing-Dong, Mak, Edward, McCallum, David, Irani, Pourang, Cao, Xiang and Izadi, Shahram (2010): LensMouse: augmenting the mouse with an interactive touch display. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 2431-2440.

We introduce LensMouse, a novel device that embeds a touch-screen display -- or tangible 'lens' -- onto a mouse. Users interact with the display of the mouse using direct touch, whilst also performing regular cursor-based mouse interactions. We demonstrate some of the unique capabilities of such a device, in particular for interacting with auxiliary windows, such as toolbars, palettes, pop-ups and dialog-boxes. By migrating these windows onto LensMouse, challenges such as screen real-estate use and window management can be alleviated. In a controlled experiment, we evaluate the effectiveness of LensMouse in reducing cursor movements for interacting with auxiliary windows. We also consider the concerns involving the view separation that results from introducing such a display-based device. Our results reveal that overall users are more effective with LensMouse than with auxiliary application windows that are managed either in single or dual-monitor setups. We conclude by presenting other application scenarios that LensMouse could support.

© All rights reserved Yang et al. and/or their publisher

2009
 

Rahman, Mahfuz, Gustafson, Sean, Irani, Pourang and Subramanian, Sriram (2009): Tilt techniques: investigating the dexterity of wrist-based input. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 1943-1952.

Most studies on tilt based interaction can be classified as point-designs that demonstrate the utility of wrist-tilt as an input medium; tilt parameters are tailored to suit the specific interaction at hand. In this paper, we systematically analyze the design space of wrist-based interactions and focus on the level of control possible with the wrist. In a first study, we investigate the various factors that can influence tilt control, separately along the three axes of wrist movement: flexion/extension, pronation/supination, and ulnar/radial deviation. Results show that users can comfortably control at least 16 levels on the pronation/supination axis and that using a quadratic mapping function for discretization of tilt space significantly improves user performance across all tilt axes. We discuss the findings of our results in the context of several interaction techniques and identify several general design recommendations.

© All rights reserved Rahman et al. and/or ACM Press
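The quadratic-discretization finding can be illustrated with a sketch: if level boundaries grow quadratically with tilt angle, the levels near the neutral wrist position are narrower, matching the finer control users have there. The boundary function below is an assumption for illustration, not the paper's fitted mapping:

```python
import math

def tilt_to_level(theta, theta_max, n_levels):
    """Map a tilt angle theta in [0, theta_max] to a discrete level.
    Level boundaries sit at theta_max * (i / n_levels) ** 2, so bins
    near the neutral position (small theta) are narrower than a
    linear division would give; invert the quadratic analytically."""
    if theta >= theta_max:
        return n_levels - 1
    return int(n_levels * math.sqrt(theta / theta_max))
```

With `theta_max = 90` and 16 levels, the first bin spans only about 0.35°, while the last spans over 10°.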

 

Gunn, Tyler J., Irani, Pourang and Anderson, John (2009): An evaluation of techniques for selecting moving targets. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 3329-3334.

Moving targets are found in numerous applications such as computer games, air traffic control systems, and video surveillance. The selection of moving targets is considerably more difficult and error prone than traditional stationary target selection. In this paper, we introduce Comet Tails and Target Lock, two techniques that support the selection of moving targets. Our goal is to facilitate accurate and fast selection of moving targets. We compare our two techniques to unassisted selection in a controlled experiment. The results show that for moving target selection, Comet Tails and Target Lock can outperform unassisted selection, and result in fewer errors. According to post-experiment questionnaires, participants indicate a stronger preference for assisted target selection with Comet Tails and Target Lock than unassisted selection.

© All rights reserved Gunn et al. and/or ACM Press

 

Gunn, Tyler J., Zhang, Hong, Mak, Ed and Irani, Pourang (2009): An evaluation of one-handed techniques for multiple-target selection. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 4189-4194.

Recent research has revealed that a large population of mobile users usually use one hand when interacting with mobile devices. However, very few techniques have been developed to support multiple-target selection. In this paper, we introduce Burst and ZoomTap, two techniques that aim to facilitate accurate and fast multiple-target acquisition with one-handed thumb operation on touch-based mobile devices. We compare our two techniques to Shift in a controlled experiment. The results show that for multiple-target selection, Burst and ZoomTap can outperform Shift; also according to the questionnaire, participants prefer Burst and ZoomTap to Shift.

© All rights reserved Gunn et al. and/or ACM Press

 

Yang, Xing-Dong, Irani, Pourang, Boulanger, Pierre and Bischof, Walter (2009): One-handed behind-the-display cursor input on mobile devices. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 4501-4506.

Behind-the-display interaction has gained popularity for interactions on handheld devices as researchers have demonstrated the viability of such interactions on small devices. However, most designs have investigated the use of direct input behind the screen. We demonstrate that behind-the-display interaction with cursor input is promising and can be a useful augmentation to handheld devices. We developed a prototypical system on a PDA to which we affixed a wireless mouse. The mouse is mounted on the rear of the PDA with the optical sensor facing outwards. The system is designed to be used with one hand, and avoids problems of occlusion and finger reach. Through several applications we illustrate the benefits of behind-the-display cursor interaction. A preliminary user evaluation indicates that users can benefit from such an interaction when operating a handheld using one hand.

© All rights reserved Yang et al. and/or ACM Press

 

McCallum, David C., Mak, Edward, Irani, Pourang and Subramanian, Sriram (2009): PressureText: pressure input for mobile phone text entry. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 4519-4524.

Pressure sensitive buttons are appealing for reducing repetitive tasks such as text entry on mobile phone keypads, where multiple key presses are currently necessary to record an action. We present PressureText, a text-entry technique for a pressure augmented mobile phone. In a study comparing PressureText to MultiTap, we found that despite limited visual feedback for pressure input, users overall performed equally well with PressureText as with MultiTap. Expertise was a determining factor for improved performance with PressureText. Expert users showed a 33.6% performance gain over novices. Additionally, expert users were 5% faster on average with PressureText than MultiTap, suggesting that pressure input is a valuable augmentation to mobile phone keypads.

© All rights reserved McCallum et al. and/or ACM Press

 

Yang, Xing-Dong, Mak, Edward, Irani, Pourang and Bischof, Walter F. (2009): Dual-Surface input: augmenting one-handed interaction with coordinated front and behind-the-screen input. In: Proceedings of 11th Conference on Human-computer interaction with mobile devices and services 2009. p. 5.

Interaction patterns with handheld mobile devices are constantly evolving. Researchers have observed that users prefer to interact with mobile devices using one hand. However, only a few interaction techniques support this mode of operation. We show that one-handed operation can be enhanced with coordinated input on the front and back of a mobile device, which we term Dual-Surface interaction. We present some of the design rationale for introducing coordinated Dual-Surface interactions. We demonstrate that several tasks, including target selection, benefit from Dual-Surface input, which allows users to rapidly select small targets in locations that are less accessible with one-handed thumb input. Furthermore, we demonstrate the benefits of virtual enhancements that are possible with behind-the-display relative input for complex tasks, such as steering. Our results show that Dual-Surface interactions offer numerous benefits that are not available with input on the front or the back alone.

© All rights reserved Yang et al. and/or their publisher

 

Wang, Feng, Cao, Xiang, Ren, Xiangshi and Irani, Pourang (2009): Detecting and leveraging finger orientation for interaction with direct-touch surfaces. In: Proceedings of the ACM Symposium on User Interface Software and Technology 2009. pp. 23-32.

Current interactions on direct-touch interactive surfaces are often modeled based on properties of the input channel that are common in traditional graphical user interfaces (GUI) such as x-y coordinate information. Leveraging additional information available on the surfaces could potentially result in richer and novel interactions. In this paper we specifically explore the role of finger orientation. This property is typically ignored in touch-based interactions partly because of the ambiguity in determining it solely from the contact shape. We present a simple algorithm that unambiguously detects the directed finger orientation vector in real-time from contact information only, by considering the dynamics of the finger landing process. Results of an experimental evaluation show that our algorithm is stable and accurate. We then demonstrate how finger orientation can be leveraged to enable novel interactions and to infer higher-level information such as hand occlusion or user position. We present a set of orientation-aware interaction techniques and widgets for direct-touch surfaces.

© All rights reserved Wang et al. and/or their publisher
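The landing-dynamics idea admits a minimal sketch: as the finger rolls down onto the surface, the contact centroid drifts from the fingertip toward the finger pad, so the drift over the first few frames disambiguates which of the two directions of the contact's major axis points along the finger. The paper's algorithm uses full contact-shape information; this sketch captures only that disambiguation step, and the function name is hypothetical.

```python
import math

def landing_orientation(centroids):
    """Estimate a directed finger-orientation angle (radians) from the
    contact centroids of the first few frames after touch-down.
    The centroid drifts from fingertip toward finger pad during landing,
    so the overall drift vector points 'along' the finger."""
    (x0, y0), (x1, y1) = centroids[0], centroids[-1]
    return math.atan2(y1 - y0, x1 - x0)
```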

 

McCallum, David C. and Irani, Pourang (2009): ARC-Pad: absolute+relative cursor positioning for large displays with a mobile touchscreen. In: Proceedings of the ACM Symposium on User Interface Software and Technology 2009. pp. 153-156.

We introduce ARC-Pad (Absolute+Relative Cursor pad), a novel technique for interacting with large displays using a mobile phone's touchscreen. In ARC-Pad we combine absolute and relative cursor positioning. Tapping with ARC-Pad causes the cursor to jump to the corresponding location on the screen, providing rapid movement across large distances. For fine position control, users can also clutch using relative mode. Unlike prior hybrid cursor positioning techniques, ARC-Pad does not require an explicit switch between relative and absolute modes. We compared ARC-Pad with the relative positioning commonly found on touchpads. Users were given a target acquisition task on a large display, and results showed that they were faster with ARC-Pad, without sacrificing accuracy. Users welcomed the benefits associated with ARC-Pad.

© All rights reserved McCallum and Irani and/or their publisher
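The absolute+relative behaviour can be sketched as a tiny state machine, assuming a touch-down jumps the cursor to the absolutely mapped display position and subsequent finger movement is applied relatively with a gain factor. Class and parameter names are illustrative, not from the paper:

```python
class ArcPad:
    """Sketch of ARC-Pad-style absolute+relative cursor positioning.

    A touch-down places the cursor at the absolutely mapped display
    position; subsequent finger movement is applied as relative motion
    scaled by `gain`. No explicit mode switch is needed: lifting and
    tapping elsewhere jumps, touching and dragging clutches."""

    def __init__(self, pad_size, display_size, gain=2.0):
        self.pad_w, self.pad_h = pad_size
        self.disp_w, self.disp_h = display_size
        self.gain = gain
        self.cursor = (0.0, 0.0)
        self._last = None

    def touch_down(self, x, y):
        # absolute jump: map pad coordinates to display coordinates
        self.cursor = (x / self.pad_w * self.disp_w,
                       y / self.pad_h * self.disp_h)
        self._last = (x, y)
        return self.cursor

    def touch_move(self, x, y):
        # relative refinement: apply the finger delta with a gain
        dx, dy = x - self._last[0], y - self._last[1]
        cx, cy = self.cursor
        self.cursor = (cx + dx * self.gain, cy + dy * self.gain)
        self._last = (x, y)
        return self.cursor
```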

2008
 

Gustafson, Sean, Baudisch, Patrick, Gutwin, Carl and Irani, Pourang (2008): Wedge: clutter-free visualization of off-screen locations. In: Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems April 5-10, 2008. pp. 787-796.

To overcome display limitations of small-screen devices, researchers have proposed techniques that point users to objects located off-screen. Arrow-based techniques such as City Lights convey only direction. Halo conveys direction and distance, but is susceptible to clutter resulting from overlapping halos. We present Wedge, a visualization technique that conveys direction and distance, yet avoids overlap and clutter. Wedge represents each off-screen location using an acute isosceles triangle: the tip coincides with the off-screen location, and the two corners are located on-screen. A wedge conveys location awareness primarily by means of its two legs pointing towards the target. Wedges avoid overlap programmatically by repelling each other, causing them to rotate until overlap is resolved. As a result, wedges can be applied to numbers and configurations of targets that would lead to clutter if visualized using halos. We report on a user study comparing Wedge and Halo for three off-screen tasks. Participants were significantly more accurate when using Wedge than when using Halo.

© All rights reserved Gustafson et al. and/or ACM Press
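The wedge geometry follows directly from the description: the tip sits at the off-screen point, and the two on-screen base corners are obtained by offsetting the tip-to-screen direction by half an aperture angle. Leg length and aperture are free parameters in this sketch; the actual technique additionally sizes legs by distance and rotates wedges to repel neighbours.

```python
import math

def wedge(tip, screen_center, leg_length, aperture_deg=20.0):
    """Return the two base corners of a wedge whose tip is at the
    off-screen point `tip`. Both legs point from the tip toward the
    screen, separated by `aperture_deg` degrees, forming an acute
    isosceles triangle whose base lies on-screen."""
    base = math.atan2(screen_center[1] - tip[1], screen_center[0] - tip[0])
    half = math.radians(aperture_deg) / 2
    return [(tip[0] + leg_length * math.cos(ang),
             tip[1] + leg_length * math.sin(ang))
            for ang in (base - half, base + half)]
```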

 

Shi, Kang, Irani, Pourang, Gustafson, Sean and Subramanian, Sriram (2008): PressureFish: a method to improve control of discrete pressure-based input. In: Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems April 5-10, 2008. pp. 1295-1298.

Studies investigating user control of pressure input have reported

© All rights reserved Shi et al. and/or ACM Press

 

Tanvir, Erum, Cullen, Jonathan, Irani, Pourang and Cockburn, Andy (2008): AAMU: adaptive activation area menus for improving selection in cascading pull-down menus. In: Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems April 5-10, 2008. pp. 1381-1384.

Selecting items in cascading pull-down menus is a frequent task in most GUIs. These selections involve two major components: steering and selection, with the steering component being the most time-consuming and error-prone. We describe a new technique, called Adaptive Activation-Area Menu (AAMU), that eliminates corner steering. AAMUs contain an enlarged activation area which dynamically resizes itself, providing a broader steering path for menu navigation. We also combined AAMUs with Force-field menus, to create Force-AAMUs. We empirically demonstrate that AAMUs and Force-AAMUs outperformed the current default menu. We also compared the performance of various other menus, including enlarged activation area menus (EMUs) and gesture-based selection with a mouse as the input device. Overall, users showed higher satisfaction with AAMUs than with other menu designs.

© All rights reserved Tanvir et al. and/or ACM Press

 

Aliakseyeu, Dzimitry, Irani, Pourang, Lucero, Andres and Subramanian, Sriram (2008): Multi-flick: an evaluation of flick-based scrolling techniques for pen interfaces. In: Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems April 5-10, 2008. pp. 1689-1698.

Multi-flick, which consists of repeated flick actions, has received popular media attention as an intuitive and natural document-scrolling technique for stylus-based systems. In this paper we put multi-flick to the test by designing several flick-based scrolling techniques. We first map out the design space of multi-flick and identify mapping functions that make multi-flick a natural and intuitive technique for document navigation. In the first experiment we compare several multi-flick variations for navigating lists on three different devices -- a PDA, a tablet PC, and a large table. Our study shows that compound-multi-flick (CMF) is the most preferred technique and it is at least as fast, if not faster, than the traditional scrollbar. In a follow-up experiment, we evaluate multi-flick for scrolling text-based documents. Results show that all implementations of multi-flick are as good as the scrollbar for short distances, while CMF is the most preferred. We discuss the implications of our findings and present several design guidelines.

© All rights reserved Aliakseyeu et al. and/or ACM Press
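A minimal flick-scrolling loop illustrates the idea: each flick injects velocity that decays with friction, and repeated flicks ("multi-flick") accumulate before the previous one dies out. The friction model and constants below are illustrative assumptions, not the paper's mapping functions:

```python
def simulate_scroll(flicks, frames, friction=0.95):
    """Simulate scroll position over `frames` time steps.
    `flicks` maps frame index -> injected velocity (px/frame);
    velocity decays geometrically by `friction` each frame, so a new
    flick adds on top of whatever velocity remains from earlier ones."""
    pos, vel = 0.0, 0.0
    for t in range(frames):
        vel += flicks.get(t, 0.0)   # a flick at frame t adds velocity
        pos += vel
        vel *= friction
    return pos
```

Two quick flicks in a row travel farther than one (velocities compound), which is the behaviour that makes multi-flick feel natural for long documents.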

 

Shanmugasundaram, Maruthappan and Irani, Pourang (2008): The effect of animated transitions in zooming interfaces. In: Levialdi, Stefano (ed.) AVI 2008 - Proceedings of the working conference on Advanced Visual Interfaces May 28-30, 2008, Napoli, Italy. pp. 396-399.

 

Hui, Bowen, Gustafson, Sean, Irani, Pourang and Boutilier, Craig (2008): The need for an interaction cost model in adaptive interfaces. In: Levialdi, Stefano (ed.) AVI 2008 - Proceedings of the working conference on Advanced Visual Interfaces May 28-30, 2008, Napoli, Italy. pp. 458-461.

2007
 

Stach, Tadeusz, Gutwin, Carl, Pinelle, David and Irani, Pourang (2007): Improving recognition and characterization in groupware with rich embodiments. In: Proceedings of ACM CHI 2007 Conference on Human Factors in Computing Systems 2007. pp. 11-20.

Embodiments are visual representations of people in a groupware system. Embodiments convey awareness information such as presence, location, and movement -- but they provide far less information than what is available from a real body in a face-to-face setting. As a result, it is often difficult to recognize and characterize other people in a groupware system without extensive communication. To address this problem, information-rich embodiments use ideas from multivariate information visualization to maximize the amount of information that is represented about a person. To investigate the feasibility of rich embodiments and their effects on group interaction, we carried out three studies. The first shows that users are able to recall and interpret a large set of variables that are graphically encoded on an embodiment. The second and third studies demonstrated rich embodiments in two groupware systems -- a multiplayer game and a drawing application -- and showed that the enhanced representations do improve recognition and characterization, and that they can enrich interaction in a variety of ways.

© All rights reserved Stach et al. and/or ACM Press

 

Cechanowicz, Jared, Irani, Pourang and Subramanian, Sriram (2007): Augmenting the mouse with pressure sensitive input. In: Proceedings of ACM CHI 2007 Conference on Human Factors in Computing Systems 2007. pp. 1385-1394.

In this paper we investigate the use of a uni-pressure and dual-pressure augmented mouse. With a pressure augmented mouse users can simultaneously control cursor positions as well as multiple levels of discrete selection modes for common desktop application tasks. Two or more independent pressure sensors can be mounted onto several locations on the body of the mouse. To highlight the design potential of a pressure augmented mouse we conducted a multi-part study. In the first part we identified the number of maximum discrete levels controllable with a uni-pressure augmented mouse, the most appropriate locations for installing pressure sensors on the mouse, and the design of new interaction techniques to support selection with pressure-based input. In a follow-up design we introduced an additional sensor and two different types of selection techniques to control a larger number of discrete levels with two pressure sensors. Our results show that users can comfortably control up to 64 modes with a dual-pressure augmented mouse. We discuss the findings of our results in the context of several desktop interaction techniques and identify several design recommendations.

© All rights reserved Cechanowicz et al. and/or ACM Press

 

Shanmugasundaram, Maruthappan, Irani, Pourang and Gutwin, Carl (2007): Can smooth view transitions facilitate perceptual constancy in node-link diagrams?. In: Proceedings of the 2007 Conference on Graphics Interface 2007. pp. 71-78.

Many visualizations use smoothly animated transitions to help the user interact with information structures. These transitions are intended to preserve perceptual constancy during viewpoint transformations. However, animated transitions also have costs -- they increase the transition time, and they can be complicated to implement -- and it is not clear whether the benefits of smooth transitions outweigh the costs. In order to quantify these benefits, we carried out two experiments that explore the effects of smooth transitions. In the first study, subjects were asked to determine whether graph nodes were connected, and navigated the graph either with or without smooth scene transitions. In the second study, participants were asked to identify the overall structure of a tree after navigating the tree through a viewport that either did or did not use smooth transitions for view changes. The results of both experiments show that smooth transitions can have dramatic benefits for user performance -- for example, participants in smooth transition conditions made half the errors of the discrete-movement conditions. In addition, short transitions were found to be as effective as long ones, suggesting that some of the costs of animations can be avoided. These studies give empirical evidence on the benefits of smooth transitions, and provide guidelines about when designers should use them in visualization systems.

© All rights reserved Shanmugasundaram et al. and/or Canadian Information Processing Society

 

Shi, Kang, Irani, Pourang and Li, Pak Ching (2007): Facilitating Visual Queries in the TreeMap Using Distortion Techniques. In: Smith, Michael J. and Salvendy, Gavriel (eds.) Symposium on Human Interface 2007 - Part I July 22-27, 2007, Beijing, China. pp. 345-353.

 

Xia, Xu, Irani, Pourang and Wang, Jing (2007): Evaluation of Guiard's Theory of Bimanual Control for Navigation and Selection. In: Dainoff, Marvin J. (ed.) EHAWC 2007 - Ergonomics and Health Aspects of Work with Computers July 22-27, 2007, Beijing, China. pp. 368-377.

 

Irani, Pourang, Gutwin, Carl, Partridge, Grant and Nezhadasl, Mahtab (2007): Techniques for Interacting with Off-Screen Content. In: Baranauskas, Maria Cecília Calani, Palanque, Philippe A., Abascal, Julio and Barbosa, Simone Diniz Junqueira (eds.) DEGAS 2007 - Proceedings of the 1st International Workshop on Design and Evaluation of e-Government Applications and Services September 11th, 2007, Rio de Janeiro, Brazil. pp. 234-249.

 

Partridge, Grant, Nezhadasl, Mahtab, Irani, Pourang and Gutwin, Carl (2007): A Comparison of Navigation Techniques Across Different Types of Off-Screen Navigation Tasks. In: Baranauskas, Maria Cecília Calani, Palanque, Philippe A., Abascal, Julio and Barbosa, Simone Diniz Junqueira (eds.) DEGAS 2007 - Proceedings of the 1st International Workshop on Design and Evaluation of e-Government Applications and Services September 11th, 2007, Rio de Janeiro, Brazil. pp. 716-721.

2006
 

Irani, Pourang, Gutwin, Carl and Yang, Xing Dong (2006): Improving selection of off-screen targets with hopping. In: Proceedings of ACM CHI 2006 Conference on Human Factors in Computing Systems 2006. pp. 299-308.

Many systems provide the user with a limited viewport of a larger graphical workspace. In these systems, the user often needs to find and select targets that are in the workspace, but not visible in the current view. Standard methods for navigating to the off-screen targets include scrolling, panning, and zooming; however, these are laborious when users cannot see a target's direction or distance. Techniques such as halos can provide awareness of targets, but actually getting to the target is still slow with standard navigation. To improve off-screen target selection, we developed a new technique called hop, which combines halos with a teleportation mechanism that shows proxies of distant objects. Hop provides both awareness of off-screen targets and fast navigation to the target context. A study showed that users are significantly faster at selecting off-screen targets with hopping than with two-level zooming or grab-and-drag panning, and it is clear that hop will be faster than either halos or proxy-based techniques (like drag-and-pop or vacuum filtering) by themselves. Hop both improves on halo-based navigation and extends the value of proxies to small-screen environments.

© All rights reserved Irani et al. and/or ACM Press

2005
 

Kadaba, Nivedita R., Irani, Pourang and Toulouse, Michel (2005): Visualizing Time Dependent Semantics: An Application to Quantum Algorithms. In: IV 2005 - 9th International Conference on Information Visualisation 6-8 July, 2005, London, UK. pp. 182-187.

 

Shi, Kang, Irani, Pourang and Li, Ben (2005): An Evaluation of Content Browsing Techniques for Hierarchical Space-Filling Visualizations. In: InfoVis 2005 - IEEE Symposium on Information Visualization 23-25 October, 2005, Minneapolis, MN, USA. p. 11.

2004
 

Irani, Pourang (2004): Notations for Software Engineering Class Structures. In: Blackwell, Alan, Marriott, Kim and Shimojima, Atsushi (eds.) Diagrams 2004 - Diagrammatic Representation and Inference - Third International Conference March 22-24, 2004, Cambridge, UK. pp. 441-445.

 

Irani, Pourang, Slonowsky, Dean and Shajahan, Peer (2004): The Effect of Shading in Extracting Structure from Space-Filling Visualizations. In: IV 2004 - 8th International Conference on Information Visualisation 14-16 July, 2004, London, UK. pp. 209-216.

 

Irani, Pourang and Ware, Colin (2004): The Effect of a Perceptual Syntax on the Learnability of Novel Concepts. In: IV 2004 - 8th International Conference on Information Visualisation 14-16 July, 2004, London, UK. pp. 308-314.

 

Shajahan, Peer and Irani, Pourang (2004): Representing Hierarchies Using Multiple Synthetic Voices. In: IV 2004 - 8th International Conference on Information Visualisation 14-16 July, 2004, London, UK. pp. 885-891.

2003
 

Irani, Pourang and Ware, Colin (2003): Diagramming information structures using 3D perceptual primitives. In ACM Transactions on Computer-Human Interaction, 10 (1) pp. 1-19.

The class of diagrams known collectively as node-link diagrams are used extensively for many applications, including planning, communications networks, and computer software. The defining features of these diagrams are nodes, represented by a circle or rectangle, connected by links, usually represented by some form of line or arrow. We investigate the proposition that drawing three-dimensional shaded elements instead of using simple lines and outlines will result in diagrams that are easier to interpret. A set of guidelines for such diagrams is derived from perception theory, and these collectively define the concept of the geon diagram. We also introduce a new substructure identification task for evaluating diagrams and use it to test the effectiveness of geon diagrams. The results from five experiments are reported. In the first three experiments geon diagrams are compared to Unified Modeling Language (UML) diagrams. The results show that substructures can be identified in geon diagrams with approximately half the errors and significantly faster. The results also show that geon diagrams can be recalled much more reliably than structurally equivalent UML diagrams. In the final two experiments geon diagrams are compared with diagrams having the same outline but not constructed with shaded solids. This is designed to specifically test the importance of using 3D shaded primitives. The results also show that substructures can be identified much more accurately with shaded components than with 2D outline equivalents, and remembered more reliably. Implications for the design of diagrams are discussed.

© All rights reserved Irani and Ware and/or ACM Press

2002
 

Irani, Pourang (2002): The Learnability of Diagram Semantics. In: Hegarty, Mary, Meyer, Bernd and Narayanan, N. Hari (eds.) Diagrams 2002 - Diagrammatic Representation and Inference - Second International Conference April 18-20, 2002, Callaway Gardens, GA, USA. pp. 335-337.

2001
 

Irani, Pourang, Tingley, Maureen and Ware, Colin (2001): Using Perceptual Syntax to Enhance Semantic Content in Diagrams. In IEEE Computer Graphics and Applications, 21 (5) pp. 76-85.

2000
 

Irani, Pourang and Ware, Colin (2000): Diagrams Based on Structural Object Perception. In: Advanced Visual Interfaces 2000. pp. 61-67.

 


Page Information

Page maintainer: The Editorial Team
URL: http://www.interaction-design.org/references/authors/pourang_irani.html
