Publication statistics

Publication period: 1995-2012
Publication count: 36
Number of co-authors: 79



Co-authors

Number of publications with the three most frequent co-authors:

Ken Hinckley: 6
Jeffrey Heer: 5
Andrew C. Beers: 4

Productive colleagues

Maneesh Agrawala's three most productive colleagues, by number of publications:

Ravin Balakrishnan: 108
Mary Czerwinski: 80
George G. Robertson: 61




Maneesh Agrawala

Personal Homepage: http://vis.berkeley.edu/~maneesh/

Publications by Maneesh Agrawala (bibliography)

2012
 

Rubin, Steve, Berthouzoz, Floraine, Mysore, Gautham, Li, Wilmot and Agrawala, Maneesh (2012): UnderScore: musical underlays for audio stories. In: Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 359-366. Available online

Audio producers often use musical underlays to emphasize key moments in spoken content and give listeners time to reflect on what was said. Yet, creating such underlays is time-consuming, as producers must carefully (1) mark an emphasis point in the speech, (2) select music with the appropriate style, (3) align the music with the emphasis point, and (4) adjust dynamics to produce a harmonious composition. We present UnderScore, a set of semi-automated tools designed to facilitate the creation of such underlays. The producer simply marks an emphasis point in the speech and selects a music track. UnderScore automatically refines, aligns and adjusts the speech and music to generate a high-quality underlay. UnderScore allows producers to focus on the high-level design of the underlay; they can quickly try out a variety of music and test different points of emphasis in the story. Amateur producers, who may lack the time or skills necessary to author underlays, can quickly add music to their stories. An informal evaluation of UnderScore suggests that it can produce high-quality underlays for a variety of examples while significantly reducing the time and effort required of radio producers.

© All rights reserved Rubin et al. and/or ACM Press
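
The dynamics adjustment in step (4) is essentially ducking: the music sits at a low gain while the speech plays and returns to full volume once the speech ends. A minimal sketch of that envelope, assuming raw sample arrays; the gain and ramp values are illustrative, not the system's:

    import numpy as np

    def duck_music(speech, music, speech_end, sr=44100,
                   duck_gain=0.2, ramp_s=0.5):
        """Mix speech over music, holding the music at a low gain
        while the speech plays and ramping it back up afterwards."""
        n = min(len(speech), len(music))
        gain = np.full(n, duck_gain)
        end = min(speech_end, n)
        up = min(end + int(ramp_s * sr), n)
        gain[end:up] = np.linspace(duck_gain, 1.0, up - end)  # ramp up
        gain[up:] = 1.0                                       # full volume
        return speech[:n] + gain * music[:n]

    sr = 44100
    speech, music = np.zeros(3 * sr), np.ones(3 * sr) * 0.1
    mix = duck_music(speech, music, speech_end=2 * sr, sr=sr)

The harder parts of UnderScore, such as finding musical change points and aligning them with the marked emphasis point, are what the semi-automated tools add on top of an envelope like this.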

 

Held, Robert, Gupta, Ankit, Curless, Brian and Agrawala, Maneesh (2012): 3D puppetry: a Kinect-based interface for 3D animation. In: Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 423-434. Available online

We present a system for producing 3D animations using physical objects (i.e., puppets) as input. Puppeteers can load 3D models of familiar rigid objects, including toys, into our system and use them as puppets for an animation. During a performance, the puppeteer physically manipulates these puppets in front of a Kinect depth sensor. Our system uses a combination of image-feature matching and 3D shape matching to identify and track the physical puppets. It then renders the corresponding 3D models into a virtual set. Our system operates in real time so that the puppeteer can immediately see the resulting animation and make adjustments on the fly. It also provides 6D virtual camera and lighting controls, which the puppeteer can adjust before, during, or after a performance. Finally, our system supports layered animations to help puppeteers produce animations in which several characters move at the same time. We demonstrate the accessibility of our system with a variety of animations created by puppeteers with no prior animation experience.

© All rights reserved Held et al. and/or ACM Press

 

Kin, Kenrick, Hartmann, Björn, DeRose, Tony and Agrawala, Maneesh (2012): Proton++: a customizable declarative multitouch framework. In: Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 477-486. Available online

Proton++ is a declarative multitouch framework that allows developers to describe multitouch gestures as regular expressions of touch event symbols. It builds on the Proton framework by allowing developers to incorporate custom touch attributes directly into the gesture description. These custom attributes increase the expressivity of the gestures, while preserving the benefits of Proton: automatic gesture matching, static analysis of conflict detection, and graphical gesture creation. We demonstrate Proton++'s flexibility with several examples: a direction attribute for describing trajectory, a pinch attribute for detecting when touches move towards one another, a touch area attribute for simulating pressure, an orientation attribute for selecting menu items, and a screen location attribute for simulating hand ID. We also use screen location to simulate user ID and enable simultaneous recognition of gestures by multiple users. In addition, we show how to incorporate timing into Proton++ gestures by reporting touch events at a regular time interval. Finally, we present a user study that suggests that users are roughly four times faster at interpreting gestures written using Proton++ than those written in procedural event-handling code commonly used today.

© All rights reserved Kin et al. and/or ACM Press
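
Since the framework's central idea is gestures as regular expressions over touch event symbols, it can be illustrated directly with Python's re module. The symbol encoding below (D/M/U for down/move/up, a touch ID, and a direction attribute) is an assumed stand-in, not Proton++'s actual syntax:

    import re

    # A gesture is a regular expression over touch event symbols:
    # D/M/U = touch down/move/up, then a touch ID, then a custom
    # attribute value (here, a movement direction such as 'e').
    SWIPE_RIGHT = re.compile(r"D1:\w+ (M1:e )+U1:\w+")

    def matches(event_stream, gesture):
        """Join the stream into a symbol string and match the gesture."""
        return gesture.fullmatch(" ".join(event_stream)) is not None

    # A rightward swipe: touch down, eastward moves, touch up.
    print(matches(["D1:e", "M1:e", "M1:e", "U1:e"], SWIPE_RIGHT))  # True
    print(matches(["D1:n", "M1:n", "U1:n"], SWIPE_RIGHT))          # False

One payoff of this representation is that conflict detection becomes a static question: roughly, whether two gesture expressions can match the same event sequence.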

2011
 

Kin, Kenrick, Miller, Tom, Bollensdorff, Björn, DeRose, Tony, Hartmann, Björn and Agrawala, Maneesh (2011): Eden: a professional multitouch tool for constructing virtual organic environments. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 1343-1352. Available online

Set construction is the process of selecting and positioning virtual geometric objects to create a virtual environment used in a computer-animated film. Set construction artists often have a clear mental image of the set composition, but find it tedious to build their intended sets with current mouse and keyboard interfaces. We investigate whether multitouch input can ease the process of set construction. Working with a professional set construction artist at Pixar Animation Studios, we designed and developed Eden, a fully functional multitouch set construction application. In this paper, we describe our design process and how we balanced the advantages and disadvantages of multitouch input to develop usable gestures for set construction. Based on our design process and the user experiences of two set construction artists, we present a general set of lessons we learned regarding the design of a multitouch interface.

© All rights reserved Kin et al. and/or their publisher

 

Käser, Dominik P., Agrawala, Maneesh and Pauly, Mark (2011): FingerGlass: efficient multiscale interaction on multitouch screens. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 1601-1610. Available online

Many tasks in graphical user interfaces require users to interact with elements at various levels of precision. We present FingerGlass, a bimanual technique designed to improve the precision of graphical tasks on multitouch screens. It enables users to quickly navigate to different locations and across multiple scales of a scene using a single hand. The other hand can simultaneously interact with objects in the scene. Unlike traditional pan-zoom interfaces, FingerGlass retains contextual information during the interaction. We evaluated our technique in the context of precise object selection and translation and found that FingerGlass significantly outperforms three state-of-the-art baseline techniques in both objective and subjective measurements: users acquired and translated targets more than 50% faster than with the second-best technique in our experiment.

© All rights reserved Käser et al. and/or their publisher

 

Willett, Wesley, Heer, Jeffrey, Hellerstein, Joseph and Agrawala, Maneesh (2011): CommentSpace: structured support for collaborative visual analysis. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 3131-3140. Available online

Collaborative visual analysis tools can enhance sensemaking by facilitating social interpretation and parallelization of effort. These systems enable distributed exploration and evidence gathering, allowing many users to pool their effort as they discuss and analyze the data. We explore how adding lightweight tag and link structure to comments can aid this analysis process. We present CommentSpace, a collaborative system in which analysts comment on visualizations and websites and then use tags and links to organize findings and identify others' contributions. In a pair of studies comparing CommentSpace to a system without support for tags and links, we find that a small, fixed vocabulary of tags (question, hypothesis, to-do) and links (evidence-for, evidence-against) helps analysts more consistently and accurately classify evidence and establish common ground. We also find that managing and incentivizing participation is important for analysts to progress from exploratory analysis to deeper analytical tasks. Finally, we demonstrate that tags and links can help teams complete evidence gathering and synthesis tasks and that organizing comments using tags and links improves analytic results.

© All rights reserved Willett et al. and/or their publisher
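
The small, fixed vocabulary of tags and typed links suggests a correspondingly small data model. A sketch with illustrative class and field names (only the two vocabularies come from the paper):

    from dataclasses import dataclass, field

    TAGS = {"question", "hypothesis", "to-do"}          # from the paper
    LINK_TYPES = {"evidence-for", "evidence-against"}   # from the paper

    @dataclass
    class Comment:
        author: str
        text: str
        tags: set = field(default_factory=set)
        links: list = field(default_factory=list)   # (link_type, target)

        def tag(self, t):
            assert t in TAGS, f"unknown tag: {t}"
            self.tags.add(t)

        def link_to(self, link_type, target):
            assert link_type in LINK_TYPES, f"unknown link type: {link_type}"
            self.links.append((link_type, target))

    h = Comment("ana", "Unemployment tracks the 2008 crash.")
    h.tag("hypothesis")
    e = Comment("ben", "See the spike in the Midwest chart.")
    e.link_to("evidence-for", h)

Keeping the vocabulary closed is what makes the study's consistency finding possible: every analyst classifies evidence with the same handful of terms.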

 

Ramesh, Vidya, Hsu, Charlie, Agrawala, Maneesh and Hartmann, Björn (2011): ShowMeHow: translating user interface instructions between applications. In: Proceedings of the 2011 ACM Symposium on User Interface Software and Technology 2011. pp. 127-134. Available online

Many people learn how to use complex authoring applications through tutorials. However, user interfaces for authoring tools differ between versions, platforms, and competing products, limiting the utility of tutorials. Our goal is to make tutorials more useful by enabling users to repurpose tutorials between similar applications. We introduce UI translation interfaces which enable users to locate commands in one application using the interface language of another application. Our end-user tool, ShowMeHow, demonstrates two interaction techniques to accomplish translations: 1) direct manipulation of interface facades and 2) text search for commands using the vocabulary of another application. We discuss tools needed to construct the translation maps that enable these techniques. An initial study (n=12) shows that users can locate unfamiliar commands twice as fast with interface facades. A second study showed that users can work through tutorials written for one application in another application.

© All rights reserved Ramesh et al. and/or ACM Press
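
A translation map of the kind described can be sketched as a dictionary keyed by (application, command), with text search over the source vocabulary. The application and command names below are made-up examples:

    # Maps a command in one application to its location in another.
    TRANSLATION_MAP = {
        ("gimp", "Hue-Saturation"): ("photoshop", "Image > Adjustments > Hue/Saturation"),
        ("gimp", "Layers Dialog"):  ("photoshop", "Window > Layers"),
    }

    def translate(app, command):
        """Locate a command using the interface language of another app."""
        return TRANSLATION_MAP.get((app, command))

    def search(query):
        """Text search for commands using another app's vocabulary."""
        q = query.lower()
        return [dst for (app, cmd), dst in TRANSLATION_MAP.items()
                if q in cmd.lower()]

    print(translate("gimp", "Layers Dialog"))
    print(search("hue"))

Interface facades would add a visual layer on top of the same map, letting users click the familiar control instead of typing its name.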

 

Savva, Manolis, Kong, Nicholas, Chhajta, Arti, Fei-Fei, Li, Agrawala, Maneesh and Heer, Jeffrey (2011): ReVision: automated classification, analysis and redesign of chart images. In: Proceedings of the 2011 ACM Symposium on User Interface Software and Technology 2011. pp. 393-402. Available online

Poorly designed charts are prevalent in reports, magazines, books and on the Web. Most of these charts are only available as bitmap images; without access to the underlying data it is prohibitively difficult for viewers to create more effective visual representations. In response we present ReVision, a system that automatically redesigns visualizations to improve graphical perception. Given a bitmap image of a chart as input, ReVision applies computer vision and machine learning techniques to identify the chart type (e.g., pie chart, bar chart, scatterplot, etc.). It then extracts the graphical marks and infers the underlying data. Using a corpus of images drawn from the web, ReVision achieves image classification accuracy of 96% across ten chart categories. It also accurately extracts marks from 79% of bar charts and 62% of pie charts, and from these charts it successfully extracts data from 71% of bar charts and 64% of pie charts. ReVision then applies perceptually-based design principles to populate an interactive gallery of redesigned charts. With this interface, users can view alternative chart designs and retarget content to different visual styles.

© All rights reserved Savva et al. and/or ACM Press
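
After mark extraction, inferring the underlying data from a bar chart is, at its core, a linear mapping from pixel coordinates to data units calibrated from the axis. A sketch of just that step, with assumed calibration inputs:

    def pixel_to_value(y_pixel, tick0, tick1):
        """Map a pixel y-coordinate to a data value given two axis
        ticks as (pixel, value) pairs, e.g. recovered from axis labels."""
        (p0, v0), (p1, v1) = tick0, tick1
        return v0 + (y_pixel - p0) * (v1 - v0) / (p1 - p0)

    # Axis ticks: value 0 at pixel row 400, value 100 at pixel row 100.
    bars_px = [380, 250, 160]          # top edge of each extracted bar
    data = [pixel_to_value(p, (400, 0), (100, 100)) for p in bars_px]
    print(data)   # roughly [6.7, 50.0, 80.0]

The reported accuracy gap between mark extraction (79% of bar charts) and data extraction (71%) reflects how sensitive this mapping is to correctly locating the axis ticks.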

2009
 

Heer, Jeffrey, Kong, Nicholas and Agrawala, Maneesh (2009): Sizing the horizon: the effects of chart size and layering on the graphical perception of time series visualizations. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 1303-1312. Available online

We investigate techniques for visualizing time series data and evaluate their effect in value comparison tasks. We compare line charts with horizon graphs -- a space-efficient time series visualization technique -- across a range of chart sizes, measuring the speed and accuracy of subjects' estimates of value differences between charts. We identify transition points at which reducing the chart height results in significantly differing drops in estimation accuracy across the compared chart types, and we find optimal positions in the speed-accuracy tradeoff curve at which viewers performed quickly without attendant drops in accuracy. Based on these results, we propose approaches for increasing data density that optimize graphical perception.

© All rights reserved Heer et al. and/or ACM Press
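
For readers unfamiliar with the technique: a horizon graph mirrors negative values, slices the value range into bands, and overlays the bands to save vertical space. A sketch that computes the layers (rendering and coloring left out; band count is a free parameter):

    import numpy as np

    def horizon_layers(values, n_bands=2):
        """Compute the stacked layers of a horizon graph: mirror
        negatives, then slice the magnitude range into equal bands."""
        v = np.asarray(values, dtype=float)
        sign = np.where(v < 0, -1.0, 1.0)    # polarity, shown by color
        mag = np.abs(v)
        band_h = mag.max() / n_bands
        layers = [np.clip(mag - i * band_h, 0.0, band_h)
                  for i in range(n_bands)]
        return sign, layers                  # layers are drawn overlaid

    sign, layers = horizon_layers([1, 3, -2, 4], n_bands=2)
    for layer in layers:
        print(layer)   # [1. 2. 2. 2.] then [0. 1. 0. 2.]

Each added band halves the chart height needed for a given value resolution, which is exactly the density/accuracy trade-off the study measures.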

 

Zheng, Ke Colin, Colburn, Alex, Agarwala, Aseem, Agrawala, Maneesh, Salesin, David, Curless, Brian and Cohen, Michael F. (2009): Parallax photography: creating 3D cinematic effects from stills. In: Proceedings of the 2009 Conference on Graphics Interface 2009. pp. 111-118. Available online

We present an approach to convert a small portion of a light field with extracted depth information into a cinematic effect with simulated, smooth camera motion that exhibits a sense of 3D parallax. We develop a taxonomy of the cinematic conventions of these effects, distilled from observations of documentary film footage and organized by the number of subjects of interest in the scene. We present an automatic, content-aware approach to apply these cinematic conventions to an input light field. A face detector identifies subjects of interest. We then optimize for a camera path that conforms to a cinematic convention, maximizes apparent parallax, and avoids missing information in the input. We describe a GPU-accelerated, temporally coherent rendering algorithm that allows users to create more complex camera moves interactively, while experimenting with effects such as focal length, depth of field, and selective, depth-based desaturation or brightening. We evaluate and demonstrate our approach on a wide variety of scenes and present a user study that compares our 3D cinematic effects to their 2D counterparts.

© All rights reserved Zheng et al. and/or their publisher

 

Kin, Kenrick, Agrawala, Maneesh and DeRose, Tony (2009): Determining the benefits of direct-touch, bimanual, and multifinger input on a multitouch workstation. In: Proceedings of the 2009 Conference on Graphics Interface 2009. pp. 119-124. Available online

Multitouch workstations support direct-touch, bimanual, and multifinger interaction. Previous studies have separately examined the benefits of these three interaction attributes over mouse-based interactions. In contrast, we present an empirical user study that considers these three interaction attributes together for a single task, such that we can quantify and compare the performances of each attribute. In our experiment users select multiple targets using either a mouse-based workstation equipped with one mouse, or a multitouch workstation using either one finger, two fingers (one from each hand), or multiple fingers. We find that the fastest multitouch condition is about twice as fast as the mouse-based workstation, independent of the number of targets. Direct-touch with one finger accounts for an average of 83% of the reduction in selection time. Bimanual interaction, using at least two fingers, one on each hand, accounts for the remaining reduction in selection time. Further, we find that for novice multitouch users there is no significant difference in selection time between using one finger on each hand and using any number of fingers for this task. Based on these observations we conclude with several design guidelines for developing multitouch user interfaces.

© All rights reserved Kin et al. and/or their publisher

 

Kong, Nicholas and Agrawala, Maneesh (2009): Perceptual interpretation of ink annotations on line charts. In: Proceedings of the ACM Symposium on User Interface Software and Technology 2009. pp. 233-236. Available online

Asynchronous collaborators often use freeform ink annotations to point to visually salient perceptual features of line charts such as peaks or humps, valleys, rising slopes and declining slopes. We present a set of techniques for interpreting such annotations to algorithmically identify the corresponding perceptual parts. Our approach is to first apply a parts-based segmentation algorithm that identifies the visually salient perceptual parts in the chart. Our system then analyzes the freeform annotations to infer the corresponding peaks, valleys or sloping segments. Once the system has identified the perceptual parts it can highlight them to draw further attention and reduce ambiguity of interpretation in asynchronous collaborative discussions.

© All rights reserved Kong and Agrawala and/or their publisher
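
The visually salient parts that annotations point to (rising and declining slopes, with peaks and valleys at their boundaries) can be approximated by segmenting the series at sign changes of its first difference. An illustrative sketch, not the parts-based algorithm the paper uses:

    import numpy as np

    def perceptual_parts(y):
        """Segment a line chart into rising/falling runs; the boundary
        between a rise and a fall is a peak, the reverse is a valley."""
        d = np.sign(np.diff(y))
        parts, start = [], 0
        for i in range(1, len(d)):
            if d[i] != d[i - 1]:
                parts.append(("rising" if d[i - 1] > 0 else "falling",
                              start, i))
                start = i
        parts.append(("rising" if d[-1] > 0 else "falling",
                      start, len(d)))
        return parts

    print(perceptual_parts([0, 2, 5, 3, 1, 4]))
    # [('rising', 0, 2), ('falling', 2, 4), ('rising', 4, 5)]

Interpreting an ink annotation then becomes a matter of finding which of these segments the stroke overlaps most.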

2008
 

Heer, Jeffrey, Agrawala, Maneesh and Willett, Wesley (2008): Generalized selection via interactive query relaxation. In: Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems April 5-10, 2008. pp. 959-968. Available online

Selection is a fundamental task in interactive applications, typically performed by clicking or lassoing items of interest. However, users may require more nuanced forms of selection. Selecting regions or attributes may be more important than selecting individual items. Selections may be over dynamic items and selections might be more easily created by relaxing simpler selections (e.g., "select all items like this one"). Creating such selections requires that interfaces model the declarative structure of the selection, not just individually selected items. We present direct manipulation techniques that couple declarative selection queries with a query relaxation engine that enables users to interactively generalize their selections. We apply our selection techniques in both information visualization and graphics editing applications, enabling generalized selection over both static and dynamic interface objects. A controlled study finds that users create more accurate selection queries when using our generalization techniques.

© All rights reserved Heer et al. and/or ACM Press
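
The coupling of declarative queries with relaxation can be sketched as a conjunction of attribute predicates that generalizes by dropping one predicate at a time. The attributes below are invented for illustration:

    def select(items, query):
        """Items matching every attribute predicate in the query."""
        return [it for it in items
                if all(it.get(k) == v for k, v in query.items())]

    def relax(query, attribute):
        """Generalize a selection by dropping one predicate,
        e.g. 'select all items like this one, regardless of size'."""
        return {k: v for k, v in query.items() if k != attribute}

    items = [{"shape": "circle", "color": "red", "size": 3},
             {"shape": "circle", "color": "red", "size": 5},
             {"shape": "square", "color": "red", "size": 3}]

    q = {"shape": "circle", "color": "red", "size": 3}   # one clicked item
    print(len(select(items, q)))                  # 1
    print(len(select(items, relax(q, "size"))))   # 2: all red circles

Because the selection is a query rather than a list of item IDs, it stays correct as dynamic items enter and leave the scene.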

 

Chen, Nicholas, Guimbretière, François, Dixon, Morgan, Lewis, Cassandra and Agrawala, Maneesh (2008): Navigation techniques for dual-display e-book readers. In: Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems April 5-10, 2008. pp. 1779-1788. Available online

Existing e-book readers do a poor job of supporting the many reading tasks people perform; ethnographers report that when reading, people frequently read from multiple display surfaces. In this paper we present our design of a dual-display e-book reader and explore how it can be used to interact with electronic documents. Our design supports embodied interactions like folding, flipping, and fanning for local/lightweight navigation. We also show how mechanisms like Space Filling Thumbnails can use the increased display space to aid global navigation. Lastly, the detachable faces in our design can facilitate inter-document operations and flexible layout of documents in the workspace. Semi-directed interviews with seven users found that dual displays have the potential to improve the reading experience by supporting several local navigation tasks better than a single-display device. Users also identified many reading tasks for which the device would be valuable. However, users did not find the embodied interface particularly useful when reading in our controlled lab setting.

© All rights reserved Chen et al. and/or ACM Press

 

Viégas, Fernanda B., Wattenberg, Martin, Heer, Jeffrey and Agrawala, Maneesh (2008): Social data analysis workshop. In: Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems April 5-10, 2008. pp. 3977-3980. Available online

This workshop addresses a new online phenomenon: social data analysis, that is, collective analysis of data supported by social interaction. The recent democratization of data sources on the Internet -- from mandated publication of government-generated data to scientific repositories of experimental data sets -- has enabled a new kind of web site where users upload and collaboratively analyze the most varied sorts of data. So far, most of these sites have relied on visualization as an intrinsic part of their analytical arsenal. The goals of this workshop are to:

* Bring together, for the first time, the social data analysis community
* Examine the design of social data analysis sites today
* Discuss the role that visualizations play in social data analysis
* Explore how users are utilizing the various sites that allow them to exchange data-based insights

© All rights reserved Viégas et al. and/or ACM Press

2007
 

Vollick, Ian, Vogel, Daniel, Agrawala, Maneesh and Hertzmann, Aaron (2007): Specifying label layout style by example. In: Proceedings of the ACM Symposium on User Interface Software and Technology October 7-10, 2007, Newport, Rhode Island, USA. pp. 221-230. Available online

Creating high-quality label layouts in a particular visual style is a time-consuming process. Although automated labeling algorithms can aid the layout process, expert design knowledge is required to tune these algorithms so that they produce layouts which meet the designer's expectations. We propose a system which can learn a label layout style from a single example layout and then apply this style to new labeling problems. Because designers find it much easier to create example layouts than tune algorithmic parameters, our system provides a more natural workflow for graphic designers. We demonstrate that our system is capable of learning a variety of label layout styles from examples.

© All rights reserved Vollick et al. and/or ACM Press

2006
 

Hinckley, Ken, Guimbretière, François, Baudisch, Patrick, Sarin, Raman, Agrawala, Maneesh and Cutrell, Edward (2006): The springboard: multiple modes in one spring-loaded control. In: Proceedings of ACM CHI 2006 Conference on Human Factors in Computing Systems 2006. pp. 181-190. Available online

Modes allow a few inputs to invoke many operations, yet if a user misclassifies or forgets the state of a system, modes can result in errors. Spring-loaded modes (quasimodes) maintain a mode while the user holds a control such as a button or key. The Springboard is an interaction technique for tablet computers that extends quasimodes to encompass multiple tool modes in a single spring-loaded control. The Springboard allows the user to continue holding down a nonpreferred-hand command button after selecting a tool from a menu as a way to repeatedly apply the same tool. We find the Springboard improves performance for both a local marking menu and for a non-local marking menu ("lagoon") at the lower left corner of the screen. Despite the round-trip costs incurred to move the pen to a tool lagoon, a keystroke-level analysis of the true cost of each technique reveals the local marking menu is not significantly faster.

© All rights reserved Hinckley et al. and/or ACM Press

 

Wilson, Andrew D. and Agrawala, Maneesh (2006): Text entry using a dual joystick game controller. In: Proceedings of ACM CHI 2006 Conference on Human Factors in Computing Systems 2006. pp. 475-478. Available online

We present a new bimanual text entry technique designed for today's dual-joystick game controllers. The left and right joysticks are used to independently select characters from the corresponding (left/right) half of an on-screen selection keyboard. Our dual-stick approach is analogous to typing on a standard keyboard, where each hand (left/right) presses keys on the corresponding side of the keyboard. We conducted a user study showing that our technique supports keyboarding skills transfer and is thereby readily learnable. Our technique increases entry speed significantly compared to the status quo single stick selection keyboard technique.

© All rights reserved Wilson and Agrawala and/or ACM Press
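
The character selection itself reduces to mapping a stick's deflection to a character on that hand's half of the on-screen keyboard. A deliberately simplified sketch that treats each half as an angular ring (the real layout splits a keyboard spatially between the sticks):

    import math

    # Characters assigned to the left stick (illustrative, not the
    # paper's layout): one angular sector per character.
    LEFT_RING = "qwertasdfgzxcvb"

    def pick(dx, dy, ring=LEFT_RING):
        """Map a joystick deflection (dx, dy) to a character by
        dividing the full circle into one sector per character."""
        angle = math.atan2(dy, dx) % (2 * math.pi)
        sector = int(angle / (2 * math.pi) * len(ring))
        return ring[sector]

    print(pick(1.0, 0.0))   # sector 0 -> 'q'
    print(pick(0.0, 1.0))   # a quarter turn around the ring

The analogy to two-handed typing comes from the assignment itself: each stick only ever selects characters from its own side, so keyboarding habits about which hand types which letter carry over.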

 

Santella, Anthony, Agrawala, Maneesh, DeCarlo, Doug, Salesin, David and Cohen, Michael (2006): Gaze-based interaction for semi-automatic photo cropping. In: Proceedings of ACM CHI 2006 Conference on Human Factors in Computing Systems 2006. pp. 771-780. Available online

We present an interactive method for cropping photographs given minimal information about important content location, provided by eye tracking. Cropping is formulated in a general optimization framework that facilitates adding new composition rules, and adapting the system to particular applications. Our system uses fixation data to identify important image content and compute the best crop for any given aspect ratio or size, enabling applications such as automatic snapshot recomposition, adaptive documents, and thumbnailing. We validate our approach with studies in which users compare our crops to ones produced by hand and by a completely automatic approach. Experiments show that viewers prefer our gaze-based crops to uncropped images and fully automatic crops.

© All rights reserved Santella et al. and/or ACM Press
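
The optimization framework can be pictured, in its simplest form, as scoring candidate crop windows by the fixation weight they contain. A brute-force sketch at one fixed crop size; the paper's objective also encodes composition rules that this omits:

    def best_crop(fixations, img_w, img_h, crop_w, crop_h, step=10):
        """Exhaustively score fixed-size crop windows by the total
        fixation weight they contain and return the best one."""
        best, best_score = None, -1.0
        for x in range(0, img_w - crop_w + 1, step):
            for y in range(0, img_h - crop_h + 1, step):
                score = sum(w for (fx, fy, w) in fixations
                            if x <= fx < x + crop_w and y <= fy < y + crop_h)
                if score > best_score:
                    best, best_score = (x, y, crop_w, crop_h), score
        return best

    # Fixations as (x, y, duration) triples from an eye tracker.
    fixations = [(320, 180, 0.9), (350, 200, 0.6), (80, 400, 0.1)]
    print(best_crop(fixations, 640, 480, 320, 240))

Formulating cropping as an objective over windows is what lets the same system serve different aspect ratios and applications just by changing the constraints.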

 

Grossman, Tovi, Hinckley, Ken, Baudisch, Patrick, Agrawala, Maneesh and Balakrishnan, Ravin (2006): Hover widgets: using the tracking state to extend the capabilities of pen-operated devices. In: Proceedings of ACM CHI 2006 Conference on Human Factors in Computing Systems 2006. pp. 861-870. Available online

We present Hover Widgets, a new technique for increasing the capabilities of pen-based interfaces. Hover Widgets are implemented by using the pen movements above the display surface, in the tracking state. Short gestures while hovering, followed by a pen down, access the Hover Widgets, which can be used to activate localized interface widgets. By using the tracking state movements, Hover Widgets create a new command layer which is clearly distinct from the input layer of a pen interface. In a formal experiment Hover Widgets were found to be faster than a more traditional command activation technique, and also reduced errors due to divided attention.

© All rights reserved Grossman et al. and/or ACM Press

 

Zhao, Shengdong, Agrawala, Maneesh and Hinckley, Ken (2006): Zone and polygon menus: using relative position to increase the breadth of multi-stroke marking menus. In: Proceedings of ACM CHI 2006 Conference on Human Factors in Computing Systems 2006. pp. 1077-1086. Available online

We present Zone and Polygon menus, two new variants of multi-stroke marking menus that consider both the relative position and orientation of strokes. Our menus are designed to increase menu breadth over the 8 item limit of status quo orientation-based marking menus. An experiment shows that Zone and Polygon menus can successfully increase breadth by a factor of 2 or more over orientation-based marking menus, while maintaining high selection speed and accuracy. We also discuss hybrid techniques that may further increase menu breadth and performance. Our techniques offer UI designers new options for balancing menu breadth and depth against selection speed and accuracy.

© All rights reserved Zhao et al. and/or ACM Press

 

Hinckley, Ken, Guimbretière, François, Agrawala, Maneesh, Apitz, Georg and Chen, Nicholas (2006): Phrasing techniques for multi-stroke selection gestures. In: Proceedings of the 2006 Conference on Graphics Interface 2006. pp. 147-154. Available online

Pen gesture interfaces have difficulty supporting arbitrary multiple-stroke selections because lifting the pen introduces ambiguity as to whether the next stroke should add to the existing selection, or begin a new one. We explore and evaluate techniques that use a non-preferred-hand button or touchpad to phrase together one or more independent pen strokes into a unitary multi-stroke gesture. We then illustrate how such phrasing techniques can support multiple-stroke selection gestures with tapping, crossing, lassoing, disjoint selection, circles of exclusion, selection decorations, and implicit grouping operations. These capabilities extend the expressiveness of pen gesture interfaces and suggest new directions for multiple-stroke pen input techniques.

© All rights reserved Hinckley et al. and/or Canadian Information Processing Society
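
The phrasing mechanism is a small state machine: strokes made while the non-preferred-hand button is held belong to one gesture, and releasing the button commits it. A sketch over an assumed event representation:

    def phrase_strokes(events):
        """Group pen strokes into gestures: strokes made while the
        non-preferred-hand button is held are phrased into one
        multi-stroke gesture; releasing the button commits it."""
        gestures, current, held = [], [], False
        for kind, data in events:
            if kind == "button":
                held = data                      # True = pressed
                if not held and current:
                    gestures.append(current)     # commit on release
                    current = []
            elif kind == "stroke":
                if held:
                    current.append(data)         # add to open gesture
                else:
                    gestures.append([data])      # single-stroke gesture
        return gestures

    events = [("button", True), ("stroke", "lasso1"),
              ("stroke", "lasso2"), ("button", False),
              ("stroke", "tap")]
    print(phrase_strokes(events))   # [['lasso1', 'lasso2'], ['tap']]

Holding the button resolves the ambiguity the abstract describes: a pen lift inside the phrase adds to the selection, while a pen lift outside it starts a new one.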

 

Drucker, Steven M., Petschnigg, Georg and Agrawala, Maneesh (2006): Comparing and managing multiple versions of slide presentations. In: Proceedings of the ACM Symposium on User Interface Software and Technology 2006. pp. 47-56. Available online

Despite the ubiquity of slide presentations, managing multiple presentations remains a challenge. Understanding how multiple versions of a presentation are related to one another, assembling new presentations from existing presentations, and collaborating to create and edit presentations are difficult tasks. In this paper, we explore techniques for comparing and managing multiple slide presentations. We propose a general comparison framework for computing similarities and differences between slides. Based on this framework we develop an interactive tool for visually comparing multiple presentations. The interactive visualization facilitates understanding how presentations have evolved over time. We show how the interactive tool can be used to assemble new presentations from a collection of older ones and to merge changes from multiple presentation authors.

© All rights reserved Drucker et al. and/or ACM Press
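
A minimal instance of the comparison framework is a pairwise similarity matrix over slide text; Jaccard overlap of word sets is the assumed measure here, chosen only for brevity:

    def jaccard(a, b):
        """Similarity of two slides as the overlap of their word sets."""
        ta, tb = set(a.lower().split()), set(b.lower().split())
        return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

    def similarity_matrix(deck_a, deck_b):
        """Pairwise slide similarities between two presentations."""
        return [[jaccard(sa, sb) for sb in deck_b] for sa in deck_a]

    v1 = ["Project goals", "Results Q1", "Next steps"]
    v2 = ["Project goals and scope", "Results Q1", "Budget"]
    for row in similarity_matrix(v1, v2):
        print([round(s, 2) for s in row])

A visualization like the paper's can then render this matrix directly, with near-1.0 cells indicating slides carried over between versions.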

 

Baudisch, Patrick, Tan, Desney S., Collomb, Maxime, Robbins, Dan, Hinckley, Ken, Agrawala, Maneesh, Zhao, Shengdong and Ramos, Gonzalo (2006): Phosphor: explaining transitions in the user interface using afterglow effects. In: Proceedings of the ACM Symposium on User Interface Software and Technology 2006. pp. 169-178. Available online

Sometimes users fail to notice a change that just took place on their display. For example, the user may have accidentally deleted an icon or a remote collaborator may have changed settings in a control panel. Animated transitions can help, but they force users to wait for the animation to complete. This can be cumbersome, especially in situations where users did not need an explanation. We propose a different approach. Phosphor objects show the outcome of their transition instantly; at the same time they explain their change in retrospect. Manipulating a phosphor slider, for example, leaves an afterglow that illustrates how the knob moved. The parallelism of instant outcome and explanation supports both types of users. Users who already understood the transition can continue interacting without delay, while those who are inexperienced or may have been distracted can take time to view the effects at their own pace. We present a framework of transition designs for widgets, icons, and objects in drawing programs. We evaluate phosphor objects in two user studies and report significant performance benefits for phosphor objects.

© All rights reserved Baudisch et al. and/or ACM Press

 

Ramos, Gonzalo, Robertson, George G., Czerwinski, Mary, Tan, Desney S., Baudisch, Patrick, Hinckley, Ken and Agrawala, Maneesh (2006): Tumble! Splat! helping users access and manipulate occluded content in 2D drawings. In: Celentano, Augusto (ed.) AVI 2006 - Proceedings of the working conference on Advanced visual interfaces May 23-26, 2006, Venezia, Italy. pp. 428-435. Available online

2005
 

Agrawala, Maneesh and Durand, Frédo (2005): Guest Editors' Introduction: Smart Depiction for Visual Communication. In IEEE Computer Graphics and Applications, 25 (3) pp. 20-21. Available online

2004
 

Kim, Jiwon, Seitz, Steven M. and Agrawala, Maneesh (2004): Video-based document tracking: unifying your physical and electronic desktops. In: Proceedings of the 2004 ACM Symposium on User Interface Software and Technology 2004. pp. 99-107. Available online

This paper presents an approach for tracking paper documents on the desk over time and automatically linking them to the corresponding electronic documents using an overhead video camera. We demonstrate our system in the context of two scenarios, paper tracking and photo sorting. In the paper tracking scenario, the system tracks changes in the stacks of printed documents and books on the desk and builds a complete representation of the spatial structure of the desktop. When users want to find a printed document buried in the stacks, they can query the system based on appearance, keywords, or access time. The system also provides a remote desktop interface for directly browsing the physical desktop from a remote location. In the photo sorting scenario, users sort printed photographs into physical stacks on the desk. The system automatically recognizes the photographs and organizes the corresponding digital photographs into separate folders according to the physical arrangement. Our framework provides a way to unify the physical and electronic desktops without the need for a specialized physical infrastructure except for a video camera.

© All rights reserved Kim et al. and/or ACM Press

 

Li, Wilmot, Agrawala, Maneesh and Salesin, David (2004): Interactive image-based exploded view diagrams. In: Graphics Interface 2004 May 17-19, 2004, London, Ontario, Canada. pp. 203-212. Available online

We present a system for creating interactive exploded view diagrams using 2D images as input. This image-based approach enables us to directly support arbitrary rendering styles, eliminates the need for building 3D models, and allows us to leverage the abundance of existing static diagrams of complex objects. We have developed a set of semi-automatic authoring tools for quickly creating layered diagrams that allow the user to specify how the parts of an object expand, collapse, and occlude one another. We also present a viewing system that lets users dynamically filter the information presented in the diagram by directly expanding and collapsing the exploded view and searching for individual parts. Our results demonstrate that a simple 2.5D diagram representation is powerful enough to enable a useful set of interactions and that, with the right authoring tools, effective interactive diagrams in this format can be created from existing static illustrations with a small amount of effort.

© All rights reserved Li et al. and/or their publisher

 

Hoobler, Nate, Humphreys, Greg and Agrawala, Maneesh (2004): Visualizing Competitive Behaviors in Multi-User Virtual Environments. In: VIS 2004 - 15th IEEE Visualization 2004 Conference 10-15 October, 2004, Austin, TX, USA. pp. 163-170. Available online

 

Heiser, Julie, Phan, Doantam, Agrawala, Maneesh, Tversky, Barbara and Hanrahan, Pat (2004): Identification and validation of cognitive design principles for automated generation of assembly instructions. In: Costabile, Maria Francesca (ed.) AVI 2004 - Proceedings of the working conference on Advanced visual interfaces May 25-28, 2004, Gallipoli, Italy. pp. 311-319. Available online

2003
 

Akers, David, Losasso, Frank, Klingner, Jeff, Agrawala, Maneesh, Rick, John and Hanrahan, Pat (2003): Conveying Shape and Features with Image-Based Relighting. In: Turk, Greg, Wijk, Jarke J. van and Moorhead II, Robert J. (eds.) 14th IEEE Visualization 2003 Conference VIS 2003 19-24 October, 2003, Seattle, WA, USA. pp. 349-354.

 

Niederauer, Christopher, Houston, Mike, Agrawala, Maneesh and Humphreys, Greg (2003): Non-invasive interactive visualization of dynamic architectural environments. In: SI3D 2003 2003. pp. 55-58. Available online

2000
 

Fröhlich, Bernd, Tramberend, Henrik, Beers, Andrew C., Agrawala, Maneesh and Baraff, David (2000): Physically-based Manipulation on the Responsive Workbench. In: VR 2000 2000. pp. 5-12. Available online

1997
 

Fröhlich, Bernd, Fischer, Martin, Agrawala, Maneesh, Beers, Andrew C. and Hanrahan, Pat (1997): Collaborative Production Modeling and Planning. In IEEE Computer Graphics and Applications, 17 (4) pp. 13-15. Available online

1995
 

Agrawala, Maneesh, Beers, Andrew C. and Chaddha, Navin (1995): Model-Based Motion Estimation for Synthetic Animations. In: ACM Multimedia 1995 1995. pp. 477-488. Available online

 

Agrawala, Maneesh, Beers, Andrew C. and Levoy, Marc (1995): 3D Painting on Scanned Surfaces. In: SI3D 1995 1995. pp. 145-150, 215. Available online

 
 
 

 
 
 
