Publication statistics

Pub. period: 2003-2012
Pub. count: 46
Number of co-authors: 70



Co-authors

Number of publications with 3 favourite co-authors:

Carl Gutwin: 14
Miguel A. Nacenta: 9
Pourang Irani: 8

 

 

Productive colleagues

Sriram Subramanian's 3 most productive colleagues, by number of publications:

Carl Gutwin: 116
Alan J. Dix: 107
Antonio Krüger: 59
 
 
 


Sriram Subramanian

Personal Homepage:
cs.bris.ac.uk/People/personal.jsp?key=5598


Current place of employment:
University of Bristol


Publications by Sriram Subramanian (bibliography)

2012
 

Alexander, Jason, Lucero, Andres and Subramanian, Sriram (2012): Tilt displays: designing display surfaces with multi-axis tilting and actuation. In: Proceedings of the 14th Conference on Human-computer interaction with mobile devices and services 2012. pp. 161-170.

We present a new type of actuatable display, called Tilt Displays, that provide visual feedback combined with multi-axis tilting and vertical actuation. Their ability to physically mutate provides users with an additional information channel that facilitates a range of new applications including collaboration and tangible entertainment, while enhancing familiar applications such as terrain modelling by allowing 3D scenes to be rendered in a physical-3D manner. Through a mobile 3x3 custom-built prototype, we examine the design space around Tilt Displays, categorise output modalities and conduct two user studies. The first, an exploratory study, examines users' initial impressions of Tilt Displays and probes potential interactions and uses. The second takes a quantitative approach to understand interaction possibilities with such displays, resulting in the production of two user-defined gesture sets: one for manipulating the surface of the Tilt Display, the second for conducting everyday interactions.

© All rights reserved Alexander et al. and/or ACM Press

 

Cauchard, Jessica, Lochtefeld, Markus, Fraser, Mike, Krüger, Antonio and Subramanian, Sriram (2012): m+pSpaces: virtual workspaces in the spatially-aware mobile environment. In: Proceedings of the 14th Conference on Human-computer interaction with mobile devices and services 2012. pp. 171-180.

We introduce spatially-aware virtual workspaces for the mobile environment. The notion of virtual workspaces was initially conceived to alleviate mental workload in desktop environments with limited display real-estate. Using spatial properties of mobile devices, we translate this approach and illustrate that mobile virtual workspaces greatly improve task performance for mobile devices. In a first study, we compare our spatially-aware prototype (mSpaces) to existing context switching methods for navigating amongst multiple tasks in the mobile environment. We show that users are faster, make more accurate decisions and require less mental and physical effort when using spatially-aware prototypes. We furthermore prototype pSpaces and m+pSpaces, two spatially-aware systems equipped with pico-projectors as auxiliary displays to provide dual-display capability to the handheld device. A final study reveals advantages of each of the different configurations and functionalities when comparing all three prototypes. Drawing on these findings, we identify design considerations to create, manipulate and manage spatially-aware virtual workspaces in the mobile environment.

© All rights reserved Cauchard et al. and/or ACM Press

 

Karnik, Abhijit, Plasencia, Diego Martinez, Mayol-Cuevas, Walterio and Subramanian, Sriram (2012): PiVOT: personalized view-overlays for tabletops. In: Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 271-280.

We present PiVOT, a tabletop system aimed at supporting mixed-focus collaborative tasks. Through two view-zones, PiVOT provides personalized views to individual users while presenting an unaffected and unobstructed shared view to all users. The system supports multiple personalized views which can be present at the same spatial location and yet be visible only to the users they belong to. The system also allows the creation of personal views that can be either 2D or (auto-stereoscopic) 3D images. We first discuss the motivation and the different implementation principles required for realizing such a system, before exploring different designs able to address the seemingly opposing challenges of shared and personalized views. We then implement and evaluate a sample prototype to validate our design ideas and present a set of sample applications to demonstrate the utility of the system.

© All rights reserved Karnik et al. and/or ACM Press

2011
 

Jamil, Izdihar, O'Hara, Kenton, Perry, Mark, Karnik, Abhijit and Subramanian, Sriram (2011): The effects of interaction techniques on talk patterns in collaborative peer learning around interactive tables. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 3043-3052.

This paper presents the findings of a user study investigating conversational patterns across three conditions of table-based interaction (direct touch interactive table, pantograph interactive table and non-digital table) for different types of educational activities. Findings demonstrate that communication style is significantly affected by interaction techniques. The direct touch technique stimulated conversations based around the topic and pedagogical method. The pantograph technique promoted playfulness and had a higher number of directive utterances between participants, with fewer task-based, group-oriented utterances. The non-digital table promoted reflective forms of task-orientated utterance, encouraged group communication and fostered more equitable participation between members. The findings provide insights into the design of interactive tables to support particular forms of social interaction.

© All rights reserved Jamil et al. and/or their publisher

 

Alexander, Jason, Marshall, Mark T. and Subramanian, Sriram (2011): Adding haptic feedback to mobile TV. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 1975-1980.

With the abundance of large-screen displays, mobile device users currently have little motivation to stream video content and TV broadcasts to their device -- the desire to watch content 'on the move' does not currently outweigh the necessity of viewing this content on a miniaturised screen. However, the value and appeal of mobile TV broadcasts can be increased by the addition of a haptic-feedback channel to supplement the traditional video and audio streams. This paper discusses the development of mobile haptic TV systems. It describes the design constraints for these systems and presents one concept implementation, UltraTV. UltraTV is a mobile device that provides mid-air, multi-point, back-of-device ultrasonic haptic feedback to enhance the mobile TV experience (see Figure 1). The paper concludes with a look at avenues for further exploration within the realm of mobile haptic TV.

© All rights reserved Alexander et al. and/or their publisher

 

Karnik, Abhijit, Henderson, Archie, Dean, Andrew, Pang, Howard, Campbell, Thomas, Sakurai, Satoshi, Herrmann, Guido, Izadi, Shahram, Kitamura, Yoshifumi and Subramanian, Sriram (2011): VORTEX: design and implementation of an interactive volumetric display. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 2017-2022.

True 3D display systems like volumetric displays allow generation of autostereoscopic, multi-view 3D content that has real physical dimensions. However, their uptake as a research tool within the HCI community is limited, largely due to difficulties in buying or building such displays. The choice of commercially available systems is limited and constrains the flexibility of their use in terms of interaction capabilities, display features and integration with multi-display environments (MDEs). In this paper we describe the steps involved in creating a custom volumetric display from easily available components. By building a touch-enabled volumetric display we walk through the steps involved in the process. This will enable us to explore various interactive systems, associated techniques and challenges related to integration of the device into an MDE.

© All rights reserved Karnik et al. and/or their publisher

 

Han, Teng, Alexander, Jason, Karnik, Abhijit, Irani, Pourang and Subramanian, Sriram (2011): Kick: investigating the use of kick gestures for mobile interactions. In: Proceedings of 13th Conference on Human-computer interaction with mobile devices and services 2011. pp. 29-32.

In this paper we describe the use of kick gestures for interaction with mobile devices. Kicking is a well-studied leg action that can be harnessed in mobile contexts where the hands are busy or too dirty to interact with the phone. We examine the design space of kicking as an interaction technique through two user studies. The first study investigated how well users were able to control the direction of their kicks. Users were able to aim their kicks best when the movement range was divided into segments of at least 24°. In the second study we looked at the velocity of a kick. We found that users are able to kick with at least two varying velocities. However, they also often undershoot the target velocity. Finally, we propose some specific applications in which kicks can prove beneficial.

© All rights reserved Han et al. and/or ACM Press

 

Cauchard, Jessica R., Lochtefeld, Markus, Irani, Pourang, Schoening, Johannes, Krüger, Antonio, Fraser, Mike and Subramanian, Sriram (2011): Visual separation in mobile multi-display environments. In: Proceedings of the 2011 ACM Symposium on User Interface Software and Technology 2011. pp. 451-460.

Projector phones, handheld game consoles and many other mobile devices increasingly include more than one display, and therefore present a new breed of mobile Multi-Display Environments (MDEs) to users. Existing studies illustrate the effects of visual separation between displays in MDEs and suggest interaction techniques that mitigate these effects. Currently, mobile devices with heterogeneous displays such as projector phones are often designed without reference to visual separation issues; therefore it is critical to establish whether concerns and opportunities raised in the existing MDE literature apply to the emerging category of Mobile MDEs (MMDEs). This paper investigates the effects of visual separation in the context of MMDEs and contrasts these with fixed MDE results, and explores design factors for Mobile MDEs. Our study uses a novel eye-tracking methodology for measuring switches in visual context between displays and identifies that MMDEs offer increased design flexibility over traditional MDEs in terms of visual separation. We discuss these results and identify several design implications.

© All rights reserved Cauchard et al. and/or ACM Press

 

Karnik, Abhijit, Mayol-Cuevas, Walterio and Subramanian, Sriram (2011): MUST-D: multi-user see through display. In: Proceedings of the 2011 ACM Symposium on User Interface Software and Technology 2011. pp. 77-78.

In this paper we present MUST-D, a multi-user see-through display that allows users to inspect objects behind a glass panel while projecting view-dependent information on the glass to the user. MUST-D uses liquid crystal panels to implement a multi-view see-through display space in front of physical objects.

© All rights reserved Karnik et al. and/or ACM Press

2010
 

Dix, Alan J., Quigley, Aaron, Subramanian, Sriram and Terrenghi, Lucia (2010): Workshop on coupled display visual interfaces. In: Proceedings of the 2010 International Conference on Advanced Visual Interfaces 2010. pp. 408-410.

Interactive displays are increasingly distributed in a broad spectrum of everyday life environments: They have very diverse form factors and portability characteristics, support a variety of interaction techniques, and can be used by a variable number of people. The coupling of multiple displays can thus create interactive "ecosystems" which mingle in the social context, and generate novel settings of communication, performance and ownership. The objective of this workshop is to focus on the range of research challenges and opportunities afforded by applications that rely on visual interfaces that can spread across multiple displays. Such displays are physically decoupled (i.e. connected to multiple computers) yet are visually coupled due to the interfaces and interactions they support. This can range from visual interfaces spread across multiple small private input displays (e.g. information exchange or game play) to small private displays coupled with larger public displays (e.g. public photo sharing).

© All rights reserved Dix et al. and/or their publisher

 

Montero, Calkin S., Alexander, Jason, Marshall, Mark T. and Subramanian, Sriram (2010): Would you do that?: understanding social acceptance of gestural interfaces. In: Proceedings of 12th Conference on Human-computer interaction with mobile devices and services 2010. pp. 275-278.

With gesture-based interactions in mobile settings becoming more popular, there is a growing concern regarding the social acceptance of these interaction techniques. In this paper we begin by examining the various definitions of social acceptance that have been proposed in the literature to synthesize a definition that is based on how the user feels about performing a particular interaction as well as how the bystanders perceive the user during this interaction. We then present the main factors that influence gestures' social acceptance, including culture, time, interaction type and the user's position on the innovation adoption curve. Through a user study we show that an important factor in determining social acceptance of gesture-based interaction techniques is the user's perception of others' ability to interpret the potential effect of a manipulation.

© All rights reserved Montero et al. and/or their publisher

 

Jones, Eleanor, Alexander, Jason, Andreou, Andreas, Irani, Pourang and Subramanian, Sriram (2010): GesText: accelerometer-based gestural text-entry systems. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 2173-2182.

Accelerometers are common on many devices, including those required for text-entry. We investigate how to enter text with devices that are solely enabled with accelerometers. The challenge of text-entry with such devices can be overcome by the careful investigation of the human limitations in gestural movements with accelerometers. Preliminary studies provide insight into two potential text-entry designs that purely use accelerometers for gesture recognition. In two experiments, we evaluate the effectiveness of each of the text-entry designs. The first experiment involves novice users over a 45-minute period while the second investigates the possible performance increases over a four-day period. Our results reveal that a matrix-based text-entry system with a small set of simple gestures is the most efficient (5.4 wpm) and subjectively preferred by participants.

© All rights reserved Jones et al. and/or their publisher

2009
 

Rahman, Mahfuz, Gustafson, Sean, Irani, Pourang and Subramanian, Sriram (2009): Tilt techniques: investigating the dexterity of wrist-based input. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 1943-1952.

Most studies on tilt-based interaction can be classified as point-designs that demonstrate the utility of wrist-tilt as an input medium; tilt parameters are tailored to suit the specific interaction at hand. In this paper, we systematically analyze the design space of wrist-based interactions and focus on the level of control possible with the wrist. In a first study, we investigate the various factors that can influence tilt control, separately along the three axes of wrist movement: flexion/extension, pronation/supination, and ulnar/radial deviation. Results show that users can comfortably control at least 16 levels on the pronation/supination axis and that using a quadratic mapping function for discretization of tilt space significantly improves user performance across all tilt axes. We discuss the findings of our results in the context of several interaction techniques and identify several general design recommendations.

© All rights reserved Rahman et al. and/or ACM Press
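The quadratic discretization mentioned above can be illustrated with a short sketch. This is not the authors' published code; it is a minimal Python example under assumed values (a 60° usable range split into 16 levels), showing one way a quadratic rather than linear function can map a continuous tilt angle onto discrete levels.

# Minimal sketch (not the authors' implementation): discretising a tilt
# angle into a fixed number of levels via a quadratic mapping of tilt space.
# 'max_angle' and 'num_levels' are illustrative assumptions.
def tilt_to_level(angle_deg, max_angle=60.0, num_levels=16):
    """Map a tilt angle in [0, max_angle] degrees to a level in [0, num_levels - 1]."""
    clamped = max(0.0, min(angle_deg, max_angle))
    t = clamped / max_angle              # normalise the angle to [0, 1]
    level = int((t ** 2) * num_levels)   # quadratic rather than linear mapping
    return min(level, num_levels - 1)    # keep the end of the range in the last level

# Example: half the angular range falls in level 4 of 16 under this quadratic
# map, whereas a linear map would place it in level 8.
print(tilt_to_level(30.0))   # -> 4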

 

Krol, Laurens R., Aliakseyeu, Dzmitry and Subramanian, Sriram (2009): Haptic feedback in remote pointing. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 3763-3768.

We investigate the use of haptic feedback for enhancing user performance with remote pointing devices. We present a number of concepts that use haptic feedback on such devices, together with the results of a first user study in which we compared the effects of different feedback types on users' performance and preference in remote pointing tasks. The study showed that the addition of haptic feedback significantly improves performance, while also revealing a seemingly low user acceptance of haptic feedback. We discuss the implications of our findings and outline future work.

© All rights reserved Krol et al. and/or ACM Press

 

McCallum, David C., Mak, Edward, Irani, Pourang and Subramanian, Sriram (2009): PressureText: pressure input for mobile phone text entry. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 4519-4524.

Pressure sensitive buttons are appealing for reducing repetitive tasks such as text entry on mobile phone keypads, where multiple key presses are currently necessary to record an action. We present PressureText, a text-entry technique for a pressure augmented mobile phone. In a study comparing PressureText to MultiTap, we found that despite limited visual feedback for pressure input, users overall performed equally well with PressureText as with MultiTap. Expertise was a determining factor for improved performance with PressureText. Expert users showed a 33.6% performance gain over novices. Additionally, expert users were 5% faster on average with PressureText than MultiTap, suggesting that pressure input is a valuable augmentation to mobile phone keypads.

© All rights reserved McCallum et al. and/or ACM Press

 

Nacenta, Miguel A., Gutwin, Carl, Aliakseyeu, Dzmitry and Subramanian, Sriram (2009): There and Back Again: Cross-Display Object Movement in Multi-Display Environments. In Human-Computer Interaction, 24 (1) pp. 170-229.

Multi-display environments (MDEs) are now becoming common, and are becoming more complex, with more displays and more types of display in the environment. One crucial requirement specific to MDEs is that users must be able to move objects from one display to another; this cross-display movement is a frequent and fundamental part of interaction in any application that spans two or more display surfaces. Although many cross-display movement techniques exist, the differences between MDEs -- the number, location, and mixed orientation of displays, and the characteristics of the task they are being designed for -- require that interaction techniques be chosen carefully to match the constraints of the particular environment. As a way to facilitate interaction design in MDEs, we present a taxonomy that classifies cross-display object movement techniques according to three dimensions: the referential domain that determines how displays are selected, the relationship of the input space to the display configuration, and the control paradigm for executing the movement. These dimensions are based on a descriptive model of the task of cross-display object movement. The taxonomy also provides an analysis of current research that designers and researchers can use to understand the differences between categories of interaction techniques.

© All rights reserved Nacenta et al. and/or Taylor and Francis

2008
 

Shi, Kang, Irani, Pourang, Gustafson, Sean and Subramanian, Sriram (2008): PressureFish: a method to improve control of discrete pressure-based input. In: Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems April 5-10, 2008. pp. 1295-1298.

Studies investigating user control of pressure input have reported

© All rights reserved Shi et al. and/or ACM Press

 

Aliakseyeu, Dzmitry, Irani, Pourang, Lucero, Andres and Subramanian, Sriram (2008): Multi-flick: an evaluation of flick-based scrolling techniques for pen interfaces. In: Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems April 5-10, 2008. pp. 1689-1698.

Multi-flick, which consists of repeated flick actions, has received popular media attention as an intuitive and natural document-scrolling technique for stylus-based systems. In this paper we put multi-flick to the test by designing several flick-based scrolling techniques. We first map out the design space of multi-flick and identify mapping functions that make multi-flick a natural and intuitive technique for document navigation. In the first experiment we compare several multi-flick variations for navigating lists on three different devices -- a PDA, a tabletPC, and a large table. Our study shows that compound-multi-flick (CMF) is the most preferred technique and that it is at least as fast as, if not faster than, the traditional scrollbar. In a follow-up experiment, we evaluate multi-flick for scrolling text-based documents. Results show that all implementations of multi-flick are as good as the scrollbar for short distances, while CMF is the most preferred. We discuss the implications of our findings and present several design guidelines.

© All rights reserved Aliakseyeu et al. and/or ACM Press
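As background for the mapping functions discussed above, the sketch below shows the general shape of a flick-to-scroll mapping: release velocity is turned into a decaying scroll motion. It is a generic illustration, not the paper's compound-multi-flick (CMF) technique, and the friction constant and frame time are assumed values.

# Generic flick-scrolling sketch (illustrative only, not the paper's CMF):
# after pen release, the document keeps scrolling while the flick velocity
# decays under an assumed constant friction, until it falls below a threshold.
FRICTION = 2000.0          # assumed deceleration, pixels per second^2
FRAME_TIME = 1.0 / 60.0    # assumed frame interval, seconds

def flick_offsets(release_velocity):
    """Yield per-frame scroll offsets (pixels) produced by one flick."""
    v = release_velocity
    while abs(v) > 1.0:                      # stop when the motion is negligible
        yield v * FRAME_TIME                 # distance scrolled this frame
        step = FRICTION * FRAME_TIME
        v = v - step if v > 0 else v + step  # decay towards zero

# Example: total distance scrolled by a single 1500 px/s flick.
print(round(sum(flick_offsets(1500.0))))   # about 575 px with these constants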

 

Liu, Jun, Pinelle, David, Gutwin, Carl and Subramanian, Sriram (2008): Improving digital handoff in shared tabletop workspaces. In: Third IEEE International Workshop on Tabletops and Interactive Surfaces Tabletop 2008 October 1-3, 2008, Amsterdam, The Netherlands. pp. 9-16.

 

Sakurai, Satoshi, Kitamura, Yoshifumi, Subramanian, Sriram and Kishino, Fumio (2008): Visibility control using revolving polarizer. In: Third IEEE International Workshop on Tabletops and Interactive Surfaces Tabletop 2008 October 1-3, 2008, Amsterdam, The Netherlands. pp. 161-168.

 

Takashima, Kazuki, Subramanian, Sriram, Tsukitani, Takayuki, Kitamura, Yoshifumi and Kishino, Fumio (2008): Acquisition of Off-Screen Object by Predictive Jumping. In: Lee, Seongil, Choo, Hyunseung, Ha, Sungdo and Shin, In Chul (eds.) Computer-Human Interaction 8th Asia-Pacific Conference - APCHI 2008 July 6-9, 2008, Seoul, Korea. pp. 301-310.

 

Terken, Jacques M. B., Subramanian, Sriram and Zancanaro, Massimo (2008): Special issue on user-centred design and evaluation of ubiquitous groupware. In Personal and Ubiquitous Computing, 12 (3) pp. 179-180.

 

Sakurai, Satoshi, Itoh, Yuichi, Kitamura, Yoshifumi, Nacenta, Miguel A., Yamaguchi, Tokuo, Subramanian, Sriram and Kishino, Fumio (2008): A Middleware for Seamless Use of Multiple Displays. In: Graham, T. C. Nicholas and Palanque, Philippe A. (eds.) DSV-IS 2008 - Interactive Systems. Design, Specification, and Verification, 15th International Workshop July 16-18, 2008, Kingston, Canada. pp. 252-266.

 

Sakurai, Satoshi, Yamaguchi, Tokuo, Kitamura, Yoshifumi, Itoh, Yuichi, Fukazawa, Ryo, Kishino, Fumio, Nacenta, Miguel A. and Subramanian, Sriram (2008): M3: multi-modal interface in multi-display environment for multi-users. In: ACM SIGGRAPH ASIA artgallery emerging technologies 2008, Singapore. p. 45.

2007
 

Kattinakere, Raghavendra S., Grossman, Tovi and Subramanian, Sriram (2007): Modeling steering within above-the-surface interaction layers. In: Proceedings of ACM CHI 2007 Conference on Human Factors in Computing Systems 2007. pp. 317-326.

Interaction techniques that utilize the space above the display surface to extend the functionalities of digitized surfaces continue to emerge. In such techniques, movements are constrained by the bounds of a layer. In addition, constraints imposed on the direction of movement within the layer may be present. Despite the presence of such techniques, there is limited understanding of human capabilities for performing the required steering task. In this paper we study and model user performance when steering through constrained and unconstrained paths in above-the-surface layers. Through a series of experiments we validate the derivation and applicability of our proposed models.

© All rights reserved Kattinakere et al. and/or ACM Press
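For readers unfamiliar with steering models, the baseline that work in this area usually builds on is the Accot-Zhai steering law, shown below in its standard form; this is background only, not necessarily the authors' extended above-the-surface model.

% Standard steering law: time T to steer along a path C with local width W(s)
T = a + b \int_{C} \frac{ds}{W(s)}
% which, for a straight tunnel of length A and constant width W, reduces to
T = a + b \, \frac{A}{W}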

 

Cechanowicz, Jared, Irani, Pourang and Subramanian, Sriram (2007): Augmenting the mouse with pressure sensitive input. In: Proceedings of ACM CHI 2007 Conference on Human Factors in Computing Systems 2007. pp. 1385-1394.

In this paper we investigate the use of a uni-pressure and dual-pressure augmented mouse. With a pressure augmented mouse users can simultaneously control cursor positions as well as multiple levels of discrete selection modes for common desktop application tasks. Two or more independent pressure sensors can be mounted onto several locations on the body of the mouse. To highlight the design potential of a pressure augmented mouse we conducted a multi-part study. In the first part we identified the number of maximum discrete levels controllable with a uni-pressure augmented mouse, the most appropriate locations for installing pressure sensors on the mouse, and the design of new interaction techniques to support selection with pressure-based input. In a follow-up design we introduced an additional sensor and two different types of selection techniques to control a larger number of discrete levels with two pressure sensors. Our results show that users can comfortably control up to 64 modes with a dual-pressure augmented mouse. We discuss the findings of our results in the context of several desktop interaction techniques and identify several design recommendations.

© All rights reserved Cechanowicz et al. and/or ACM Press

 

Kim, Hyun Hoi James, Gutwin, Carl and Subramanian, Sriram (2007): The magic window: lessons from a year in the life of a co-present media space. In: GROUP07: International Conference on Supporting Group Work 2007. pp. 107-116.

The windows and doorways that connect offices to public spaces are a site for people to gather awareness information and initiate interaction. However, these portals often reveal more information to the public area than the office occupant would like. As a result, people often keep doors and window blinds closed, which means that nobody can gather awareness information, even those with whom the occupant would be willing to share. One solution to this problem is a co-present media space -- a computer-mediated video connection at the boundary between an office and a public area. These systems can provide both greater privacy control to the occupant and greater overall awareness information to observers. To see how co-present media spaces would work in real world settings, we built what we believe are the first ever co-present media spaces, and deployed them in two offices. From observations gathered over fifteen months, it is clear that the systems do a better job than a standard window of balancing the occupant's need for privacy with the observers' need for awareness. However, we also identified a number of issues that affected the use and the success of the systems: the existence of alternate information sources, confusion with existing social norms, disparities between effort and need, and reduced interactional subtlety for observers in the public area. Our work contributes both a novel arrangement of a media space for co-present collaborators, and the first investigation into the design factors that affect the use and acceptance of these systems.

© All rights reserved Kim et al. and/or ACM Press

 

Nacenta, Miguel A., Sakurai, Satoshi, Yamaguchi, Tokuo, Miki, Yohei, Itoh, Yuichi, Kitamura, Yoshifumi, Subramanian, Sriram and Gutwin, Carl (2007): E-conic: a perspective-aware interface for multi-display environments. In: Proceedings of the ACM Symposium on User Interface Software and Technology October 7-10, 2007, Newport, Rhode Island, USA. pp. 279-288.

Multi-display environments compose displays that can be at different locations from and different angles to the user; as a result, it can become very difficult to manage windows, read text, and manipulate objects. We investigate the idea of perspective as a way to solve these problems in multi-display environments. We first identify basic display and control factors that are affected by perspective, such as visibility, fracture, and sharing. We then present the design and implementation of E-conic, a multi-display multi-user environment that uses location data about displays and users to dynamically correct perspective. We carried out a controlled experiment to test the benefits of perspective correction in basic interaction tasks like targeting, steering, aligning, pattern-matching and reading. Our results show that perspective correction significantly and substantially improves user performance in all these tasks.

© All rights reserved Nacenta et al. and/or ACM Press

 

Yamaguchi, Tokuo, Subramanian, Sriram, Kitamura, Yoshifumi and Kishino, Fumio (2007): Strategic Tabletop Negotiations. In: Baranauskas, Maria Cecília Calani, Palanque, Philippe A., Abascal, Julio and Barbosa, Simone Diniz Junqueira (eds.) DEGAS 2007 - Proceedings of the 1st International Workshop on Design and Evaluation of e-Government Applications and Services September 11th, 2007, Rio de Janeiro, Brazil. pp. 169-182.

 

Mao, Jinzi, Mould, David and Subramanian, Sriram (2007): Background subtraction for realtime tracking of a tennis ball. In: Ranchordas, Alpesh, Araújo, Helder and Vitrià, Jordi (eds.) VISAPP 2007 - Proceedings of the Second International Conference on Computer Vision Theory and Applications - Volume 2 March 8-11, 2007, Barcelona, Spain. pp. 427-434.

2006
 

Nacenta, Miguel A., Sallam, Samer, Champoux, Bernard, Subramanian, Sriram and Gutwin, Carl (2006): Perspective cursor: perspective-based interaction for multi-display environments. In: Proceedings of ACM CHI 2006 Conference on Human Factors in Computing Systems 2006. pp. 289-298.

Multi-display environments and smart meeting rooms are now becoming more common. These environments build a shared display space from a variety of devices: tablets, projected surfaces, tabletops, and traditional monitors. Since the different display surfaces are usually not organized in a single plane, traditional schemes for stitching the displays together can cause problems for interaction. However, there is a more natural way to compose display space -- using perspective. In this paper, we develop interaction techniques for multi-display environments that are based on the user's perspective on the room. We designed the Perspective Cursor, a mapping of cursor to display space that appears natural and logical from wherever the user is located. We conducted an experiment to compare two perspective-based techniques, the Perspective Cursor and a beam-based technique, with traditional stitched displays. We found that both perspective techniques were significantly faster for targeting tasks than the traditional technique, and that Perspective Cursor was the most preferred method. Our results show that integrating perspective into the design of multi-display environments can substantially improve performance.

© All rights reserved Nacenta et al. and/or ACM Press

 

Gates, Carrie, Subramanian, Sriram and Gutwin, Carl (2006): DJs' perspectives on interaction and awareness in nightclubs. In: Proceedings of DIS06: Designing Interactive Systems: Processes, Practices, Methods, & Techniques 2006. pp. 70-79.

Several researchers have recently proposed technology for crowd-and-DJ interactions in nightclub environments. However, these attempts have not always met with success. In order to design better technologies and systems in this area, it is important to start with an understanding of how nightclub interaction currently happens. To build this understanding, we carried out an interview study focusing on DJ-audience interactions. We interviewed eleven DJs from several different cities, and asked them to discuss the ways that they interact with the audience, and the ways that they maintain and use awareness of the audience. We found that DJs gather a wide variety of information about their audiences, and that this information is important to them as they plan and shape the evening's musical experience. DJs are adept at gathering visual information about the audience, despite poor lighting conditions and a heavy workload of selecting and mixing music. Despite the difficulties, DJs took a dim view of technology designed to let crowds exert more control over the music. This study is one of the first to look closely at the interactive relationship between the DJ and the nightclub audience through the lens of HCI, and our findings provide a number of guidelines for the design of new DJ-focused nightclub technologies.

© All rights reserved Gates et al. and/or ACM Press

 

Liu, Jun, Pinelle, David, Sallam, Samer, Subramanian, Sriram and Gutwin, Carl (2006): TNT: improved rotation and translation on digital tables. In: Proceedings of the 2006 Conference on Graphics Interface 2006. pp. 25-32.

Digital tabletop systems allow users to work on computational objects in a flexible and natural setting. Since users can easily move to different positions around a table, systems must allow people to orient artifacts to their current position. However, it is only recently that rotation and translation techniques have been specifically designed for tabletops, and existing techniques still do not feel as simple and efficient as their real-world counterparts. To address this problem, we studied the ways that people move and reorient sheets of paper on real-world tabletops. We found that in almost all cases, rotation and translation are carried out simultaneously, and that an open-palm hand position was the most common way to carry out the motion. Based on our observations, we designed a new set of reorientation techniques that more closely parallel real-world motions. The new techniques, collectively called TNT, use three-degree-of-freedom (3DOF) input to allow simultaneous rotation and translation. A user study showed that all three variants of TNT were faster than a recent technique called RNT; in addition, participants strongly preferred TNT.

© All rights reserved Liu et al. and/or Canadian Information Processing Society

 

Reetz, Adrian, Gutwin, Carl, Stach, Tadeusz, Nacenta, Miguel A. and Subramanian, Sriram (2006): Superflick: a natural and efficient technique for long-distance object placement on digital tables. In: Proceedings of the 2006 Conference on Graphics Interface 2006. pp. 163-170.

Moving objects past arms' reach is a common action in both real-world and digital tabletops. In the real world, the most common way to accomplish this task is by throwing or sliding the object across the table. Sliding is natural, easy to do, and fast; however, in digital tabletops, few existing techniques for long-distance movement bear any resemblance to these real-world motions. We have designed and evaluated two tabletop interaction techniques that closely mimic the action of sliding an object across the table. Flick is an open-loop technique that is extremely fast. Superflick is based on Flick, but adds a correction step to improve accuracy for small targets. We carried out two user studies to compare these techniques to a fast and accurate proxy-based technique, the radar view. In the first study, we found that Flick is significantly faster than the radar for large targets, but is inaccurate for small targets. In the second study, we found no differences between Superflick and radar for either time or accuracy. Given the simplicity and learnability of flicking, our results suggest that throwing-based techniques have promise for improving the usability of digital tables.

© All rights reserved Reetz et al. and/or Canadian Information Processing Society

 

Subramanian, Sriram, Aliakseyeu, Dzmitry and Lucero, Andres (2006): Multi-layer interaction for digital tables. In: Proceedings of the ACM Symposium on User Interface Software and Technology 2006. pp. 269-272.

Interaction on digital tables has been restricted to a single layer on the table's active work-surface. We extend the design space of digital tables to include multiple layers of interaction. We leverage 3D position information of a pointing device to support interaction in the space above the active work-surface by creating multiple layers with drift-correction in which the user can interact with an application. We also illustrate through a point-design that designers can use multiple-layers to create a rich and clutter free application. A subjective evaluation showed that users liked the interaction techniques and found that, because of the drift correction we use, they could control the pointer when working in any layer.

© All rights reserved Subramanian et al. and/or ACM Press

 

Cechanowicz, Jared, Dawson, Steven, Victor, Matt and Subramanian, Sriram (2006): Stylus based text input using expanding CIRRIN. In: Celentano, Augusto (ed.) AVI 2006 - Proceedings of the working conference on Advanced visual interfaces May 23-26, 2006, Venezia, Italy. pp. 163-166.

CIRRIN [3] is a stylus based text input technique for mobile devices with a touch sensitive display. In this paper we explore the benefit of expanding the letters of CIRRIN to reduce the overall difficulty of selecting a letter. We adapted the existing CIRRIN to expand the characters as the stylus approached it to create a new text entry technique called expanding CIRRIN. In a small user study we compared the standard CIRRIN and expanding CIRRIN for different sentences. Our results indicate that expanding CIRRIN increases error rates and text input times. We observed that expanding the letters often made the stylus enter the CIRRIN ring adjacent to the intended letter, thereby increasing error rates. We discuss the implications of these results, and possible applications of expanding targets with other text input techniques such as the Metropolis [7] soft keyboard.

© All rights reserved Cechanowicz et al. and/or ACM Press, New York, NY

 

 

Aliakseyeu, Dzmitry, Nacenta, Miguel A., Subramanian, Sriram and Gutwin, Carl (2006): Bubble radar: efficient pen-based interaction. In: Celentano, Augusto (ed.) AVI 2006 - Proceedings of the working conference on Advanced visual interfaces May 23-26, 2006, Venezia, Italy. pp. 19-26.

 

Aliakseyeu, Dzmitry, Subramanian, Sriram, Lucero, Andres and Gutwin, Carl (2006): Interacting with piles of artifacts on digital tables. In: Celentano, Augusto (ed.) AVI 2006 - Proceedings of the working conference on Advanced visual interfaces May 23-26, 2006, Venezia, Italy. pp. 159-162.

 

2005
 

Nacenta, Miguel A., Aliakseyeu, Dzmitry, Subramanian, Sriram and Gutwin, Carl (2005): A comparison of techniques for multi-display reaching. In: Proceedings of ACM CHI 2005 Conference on Human Factors in Computing Systems 2005. pp. 371-380.

Recent advances in multi-user collaboration have seen a proliferation of interaction techniques for moving digital objects from one device to another. However, little is known about how these techniques work in realistic situations, or how they compare to one another. We conducted a study to compare the efficiency of six techniques for moving objects from a tablet to a tabletop display. We compared the techniques in four different distance ranges and with three movement directions. We found that techniques like the Radar View and Pick-and-Drop, that have a control-to-display ratio of 1, are significantly faster for object movement than techniques that have smaller control-to-display ratios. We also found that using spatial manipulation of objects was faster than pressure-based manipulation.

© All rights reserved Nacenta et al. and/or ACM Press

2004
 

Dyck, Jeff, Gutwin, Carl, Subramanian, Sriram and Fedak, Christopher (2004): High-performance telepointers. In: Proceedings of ACM CSCW04 Conference on Computer-Supported Cooperative Work 2004. pp. 172-181.

Although telepointers are valuable for supporting real-time collaboration, they are rarely seen in commercial groupware applications that run on the Internet. One reason for their absence is that current telepointer implementations perform poorly on real-world networks with varying traffic, congestion, and loss. In this paper, we report on a new implementation of telepointers (HPT) that is designed to provide smooth, timely, and accurate telepointers in real-world groupware: on busy networks, on cable and dialup connections, and on wireless channels. HPT maintains performance at usable levels with a combination of techniques from multimedia and distributed systems research, including UDP transport, message compression, motion prediction, adaptive rate control, and adaptive forward error correction. Although these techniques have been seen before, they have never been combined and tailored to the specific requirements of telepointers. Tests of the new implementation show that HPT provides good performance in a number of network situations where other implementations do not work at all - we can provide usable telepointers even over a lossy 28K modem connection. HPT sets a new standard for telepointers, and allows designers to greatly improve the support that groupware provides for real-time interaction over distance.

© All rights reserved Dyck et al. and/or ACM Press
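Among the techniques combined in HPT, motion prediction is the simplest to illustrate. The sketch below is a generic linear dead-reckoning predictor over timestamped (x, y) samples; it is an assumption-laden illustration of the general idea, not the HPT implementation.

# Generic dead-reckoning sketch (illustrative, not the HPT code): when a
# telepointer update is late or lost, extrapolate the remote cursor from its
# last known position and estimated velocity.
class TelepointerPredictor:
    def __init__(self):
        self.last_pos = None        # (x, y) of the most recent sample
        self.last_time = None       # timestamp of that sample, seconds
        self.velocity = (0.0, 0.0)  # estimated (vx, vy), pixels per second

    def update(self, x, y, t):
        """Record a received telepointer sample at time t."""
        if self.last_pos is not None and t > self.last_time:
            dt = t - self.last_time
            self.velocity = ((x - self.last_pos[0]) / dt,
                             (y - self.last_pos[1]) / dt)
        self.last_pos = (x, y)
        self.last_time = t

    def predict(self, t):
        """Extrapolate the cursor position at time t from the last sample."""
        if self.last_pos is None:
            return None
        dt = t - self.last_time
        return (self.last_pos[0] + self.velocity[0] * dt,
                self.last_pos[1] + self.velocity[1] * dt)

# Example: two samples 50 ms apart, then a prediction 30 ms after the last one.
p = TelepointerPredictor()
p.update(100.0, 100.0, 0.00)
p.update(110.0, 104.0, 0.05)
print(p.predict(0.08))   # approximately (116.0, 106.4)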

2003
 

Subramanian, Sriram, Aliakseyeu, Dzmitry and Martens, Jean-Bernard (2003): Measuring the Coordination in 2D Positioning Tasks. In: Proceedings of IFIP INTERACT03: Human-Computer Interaction 2003, Zurich, Switzerland. p. 295.

 

Subramanian, Sriram, Aliakseyeu, Dzmitry and Martens, Jean-Bernard (2003): Empirical Evaluation of Performance in Hybrid 3D and 2D Interfaces. In: Proceedings of IFIP INTERACT03: Human-Computer Interaction 2003, Zurich, Switzerland. p. 916.

 

Aliakseyeu, Dzmitry, Champoux, Bernard, Martens, Jean-Bernard, Rauterberg, Matthias and Subramanian, Sriram (2003): The Visual Interaction Platform. In: Proceedings of IFIP INTERACT03: Human-Computer Interaction 2003, Zurich, Switzerland. p. 1101.

 


Page Information

Page maintainer: The Editorial Team
URL: http://www.interaction-design.org/references/authors/sriram_subramanian.html
