Publication statistics

Pub. period: 2003-2010
Pub. count: 15
Number of co-authors: 41

Co-authors

Number of publications with 3 favourite co-authors:

Ravin Balakrishnan: 6
Ken Hinckley: 5
Patrick Baudisch: 4

Productive colleagues

Gonzalo Ramos's 3 most productive colleagues in number of publications:

Carl Gutwin: 116
Ravin Balakrishnan: 108
Mary Czerwinski: 80


Gonzalo Ramos


Publications by Gonzalo Ramos (bibliography)

2010
 

Karlson, Amy K., Iqbal, Shamsi T., Meyers, Brian, Ramos, Gonzalo, Lee, Kathy and Tang, John C. (2010): Mobile taskflow in context: a screenshot study of smartphone usage. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 2009-2018.

The impact of interruptions on workflow and productivity has been extensively studied in the PC domain, but while fragmented user attention is recognized as an inherent aspect of mobile phone usage, little formal evidence exists of its effect on mobile productivity. Using a survey and a screenshot-based diary study we investigated the types of barriers people face when performing tasks on their mobile phones, the ways they follow up with such suspended tasks, and how frustrating the experience of task disruption is for mobile users. From 386 situated samples provided by 12 iPhone and 12 Pocket PC users, we distill a classification of barriers to the completion of mobile tasks. Our data suggest that moving to a PC to complete a phone task is common, yet not inherently problematic, depending on the task. Finally, we relate our findings to prior design guidelines for desktop workflow, and discuss how the guidelines can be extended to mitigate disruptions to mobile taskflow.

© All rights reserved Karlson et al. and/or their publisher

 

Pongnumkul, Suporn, Wang, Jue, Ramos, Gonzalo and Cohen, Michael (2010): Content-aware dynamic timeline for video browsing. In: Proceedings of the 2010 ACM Symposium on User Interface Software and Technology 2010. pp. 139-142.

When a long video is browsed with a traditional timeline slider control, the slider's effectiveness and precision degrade as the video's length grows. When browsing videos with more frames than pixels in the slider, aside from some frames being inaccessible, scrolling actions cause sudden jumps in the video's continuity and make video frames flash by too fast for one to assess the content. We propose a content-aware dynamic timeline control that is designed to overcome these limitations. Our timeline control decouples video speed and playback speed, and leverages video content analysis to allow salient shots to be presented at an intelligible speed. Our control also takes advantage of previous work on elastic sliders, which allows us to produce an accurate navigation control.

© All rights reserved Pongnumkul et al. and/or their publisher
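
The decoupling of video time from scrub time described above can be illustrated with a minimal sketch. Assuming each shot already has a saliency score in [0, 1] (the paper derives such scores from content analysis; here they are simply given), salient shots are stretched so they occupy more slider travel and therefore scrub more slowly. The function name and constants are illustrative, not from the paper.

```python
def build_timeline(shot_lengths, saliencies, min_rate=0.25):
    """Return a mapping from slider position [0, 1] to a frame index.

    shot_lengths: frames per shot.
    saliencies:   score in [0, 1] per shot (assumed to come from prior
                  content analysis).
    min_rate:     scrub rate of the most salient shots relative to a
                  uniform timeline (illustrative constant).
    """
    # A salient shot advances fewer frames per unit of slider travel,
    # i.e. it receives proportionally more slider travel.
    weights = [n / (min_rate + (1 - min_rate) * (1 - s))
               for n, s in zip(shot_lengths, saliencies)]
    total = sum(weights)

    def slider_to_frame(t):
        target, frame = t * total, 0
        for n, w in zip(shot_lengths, weights):
            if target <= w:
                return frame + min(n - 1, int(n * target / w))
            target -= w
            frame += n
        return frame - 1  # clamp to the last frame

    return slider_to_frame

# Example: three shots, the middle one highly salient, so it gets a
# disproportionate share of the slider's travel.
to_frame = build_timeline([100, 50, 100], [0.1, 0.9, 0.1])
print(to_frame(0.0), to_frame(0.5), to_frame(1.0))
```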

2009
 

Teevan, Jaime, Cutrell, Edward, Fisher, Danyel, Drucker, Steven M., Ramos, Gonzalo, André, Paul and Hu, Chang (2009): Visual snippets: summarizing web pages for search and revisitation. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 2023-2032.

People regularly interact with different representations of Web pages. A person looking for new information may initially find a Web page represented as a short snippet rendered by a search engine. When he wants to return to the same page the next day, the page may instead be represented by a link in his browser history. Previous research has explored how to best represent Web pages in support of specific task types, but, as we find in this paper, consistency in representation across tasks is also important. We explore how different representations are used in a variety of contexts and present a compact representation that supports both the identification of new, relevant Web pages and the re-finding of previously viewed pages.

© All rights reserved Teevan et al. and/or ACM Press

 

Ramos, Gonzalo, Hinckley, Kenneth, Wilson, Andy and Sarin, Raman (2009): Synchronous Gestures in Multi-Display Environments. In Human-Computer Interaction, 24 (1) pp. 117-169.

Synchronous gestures are patterns of sensed user or users' activity, spanning a distributed system that take on a new meaning when they occur together in time. Synchronous gestures draw inspiration from real-world social rituals such as toasting by tapping two drinking glasses together. In this article, we explore several interactions based on synchronous gestures, including bumping devices together, drawing corresponding pen gestures on touch-sensitive displays, simultaneously pressing a button on multiple smart-phones, or placing one or more devices on the sensing surface of a tabletop computer. These interactions focus on wireless composition of physically colocated devices, where users perceive one another and coordinate their actions through social protocol. We demonstrate how synchronous gestures may be phrased together with surrounding interactions. Such connection-action phrases afford a rich syntax of cross-device commands, operands, and one-to-one or one-to-many associations with a flexible physical arrangement of devices. Synchronous gestures enable colocated users to combine multiple devices into a heterogeneous display environment, where the users may establish a transient network connection with other select colocated users to facilitate the pooling of input capabilities, display resources, and the digital contents of each device. For example, participants at a meeting may bring mobile devices including tablet computers, PDAs, and smart-phones, and the meeting room infrastructure may include fixed interactive displays, such as a tabletop computer. Our techniques facilitate creation of an ad hoc display environment for tasks such as viewing a large document across multiple devices, presenting information to another user, or offering files to others. The interactions necessary to establish such ad hoc display environments must be rapid and minimally demanding of attention: during face-to-face communication, a pause of even 5 sec is socially awkward and disrupts collaboration. Current devices may associate using a direct transport such as Infrared Data Association ports, or the emerging Near Field Communication standard. However, such transports can only support one-to-one associations between devices and require close physical proximity as well as a specific relative orientation to connect the devices (e.g., the devices may be linked when touching head-to-head but not side-to-side). By contrast, sociology research in proxemics (the study of how people use the "personal space" surrounding their bodies) demonstrates that people carefully select physical distance as well as relative body orientation to suit the task, mood, and social relationship with other persons. Wireless networking can free device-to-device connections from the limitations of direct transports but results in a potentially large number of candidate devices. Synchronous gestures address these problems by allowing users to express naturally a spontaneous wireless connection between specific proximal (collocated) interactive displays.

© All rights reserved Ramos et al. and/or Taylor and Francis
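
The time-correlation idea behind synchronous gestures can be sketched in a few lines. The example below is purely illustrative and assumes each device reports a timestamped "bump" event (e.g., an accelerometer spike); two reports from different devices that land within a small window of one another are treated as one synchronous gesture. Real recognition, as the article discusses, also considers signal shape, device orientation, and false-positive rejection.

```python
from dataclasses import dataclass

@dataclass
class BumpEvent:
    device: str   # device identifier
    time: float   # time of the sensed spike, in seconds

def pair_bumps(events, window=0.2):
    """Pair events from different devices that occur within `window`
    seconds of each other (the window value is an illustrative choice)."""
    events = sorted(events, key=lambda e: e.time)
    pairs, used = [], set()
    for i, a in enumerate(events):
        if i in used:
            continue
        for j in range(i + 1, len(events)):
            b = events[j]
            if b.time - a.time > window:
                break
            if j not in used and b.device != a.device:
                pairs.append((a.device, b.device))
                used.update({i, j})
                break
    return pairs

print(pair_bumps([BumpEvent("tablet-A", 10.02),
                  BumpEvent("phone-B", 10.09),
                  BumpEvent("tablet-C", 14.50)]))
# -> [('tablet-A', 'phone-B')]; tablet-C has no partner close enough in time
```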

2008
 

Dragicevic, Pierre, Ramos, Gonzalo, Bibliowitcz, Jacobo, Nowrouzezahrai, Derek, Balakrishnan, Ravin and Singh, Karan (2008): Video browsing by direct manipulation. In: Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems April 5-10, 2008. pp. 237-246.

We present a method for browsing videos by directly dragging their content. This method brings the benefits of direct manipulation to an activity typically mediated by widgets. We support this new type of interactivity by: 1) automatically extracting motion data from videos; and 2) a new technique called relative flow dragging that lets users control video playback by moving objects of interest along their visual trajectory. We show that this method can outperform the traditional seeker bar in video browsing tasks that focus on visual content rather than time.

© All rights reserved Dragicevic et al. and/or ACM Press
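
Once per-frame motion data exists, the core of this direct-manipulation browsing reduces to a nearest-point lookup along the grabbed object's trajectory: dragging seeks to the frame whose tracked object position is closest to the cursor. The sketch below is a deliberately simplified illustration (the paper's relative flow dragging handles ambiguity and trajectory parameterization more carefully); the function and its parameters are assumptions for the example.

```python
import math

def frame_under_cursor(trajectory, cursor, current_frame, max_jump=30):
    """trajectory:    (x, y) position of the dragged object in each frame,
                      assumed to come from prior motion extraction.
       cursor:        (x, y) pointer position during the drag.
       current_frame: frame shown before this drag event.
       max_jump:      search radius in frames, keeping the seek local
                      (an illustrative simplification)."""
    lo = max(0, current_frame - max_jump)
    hi = min(len(trajectory), current_frame + max_jump + 1)
    cx, cy = cursor
    return min(range(lo, hi),
               key=lambda f: math.hypot(trajectory[f][0] - cx,
                                        trajectory[f][1] - cy))

# A ball moving left to right at 10 px per frame; dragging it to x = 57
# seeks to the frame in which it was nearest that point.
traj = [(x * 10, 100) for x in range(20)]
print(frame_under_cursor(traj, (57, 100), current_frame=3))  # -> 6
```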

 

Birnholtz, Jeremy P., Gutwin, Carl, Ramos, Gonzalo and Watson, Mark (2008): OpenMessenger: gradual initiation of interaction for distributed workgroups. In: Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems April 5-10, 2008. pp. 1661-1664.

The initiation of interaction in face-to-face environments is a gradual process, and takes place in a rich information landscape of awareness, attention, and social signals. One of the main benefits of this process is that people can be more sensitive to issues of privacy and interruption while they are moving towards interaction. However, on-line communication tools do not provide this subtlety, and often lead to unwanted interruptions. We have developed a prototype message system called OpenMessenger (OM) that adds the idea of gradual initiation of interaction to on-line communication. OpenMessenger provides multiple levels of awareness about people, and provides notification to those about whom information is being gathered. OpenMessenger allows people to negotiate interaction in a richer fashion than is possible with any other current messaging system. Preliminary evaluation data suggest the utility of the approach, but also show that there are a number of issues yet to be resolved in this area.

© All rights reserved Birnholtz et al. and/or ACM Press

 

Bi, Xiaojun, Moscovich, Tomer, Ramos, Gonzalo, Balakrishnan, Ravin and Hinckley, Ken (2008): An exploration of pen rolling for pen-based interaction. In: Cousins, Steve B. and Beaudouin-Lafon, Michel (eds.) Proceedings of the 21st Annual ACM Symposium on User Interface Software and Technology October 19-22, 2008, Monterey, CA, USA. pp. 191-200.

2007
 

Ramos, Gonzalo, Cockburn, Andy, Balakrishnan, Ravin and Beaudouin-Lafon, Michel (2007): Pointing lenses: facilitating stylus input through visual- and motor-space magnification. In: Proceedings of ACM CHI 2007 Conference on Human Factors in Computing Systems 2007. pp. 757-766.

Using a stylus on a tablet computer to acquire small targets can be challenging. In this paper we present pointing lenses -- interaction techniques that help users acquire and select targets by presenting them with an enlarged visual and interaction area. We present and study three pointing lenses for pen-based systems and find that our proposed Pressure-Activated Lens is the top overall performer in terms of speed, accuracy and user preference. In addition, our experimental results not only show that participants find all pointing lenses beneficial for targets smaller than 5 pixels, but they also suggest that this benefit may extend to larger targets as well.

© All rights reserved Ramos et al. and/or ACM Press

2006
 

Baudisch, Patrick, Tan, Desney S., Collomb, Maxime, Robbins, Dan, Hinckley, Ken, Agrawala, Maneesh, Zhao, Shengdong and Ramos, Gonzalo (2006): Phosphor: explaining transitions in the user interface using afterglow effects. In: Proceedings of the ACM Symposium on User Interface Software and Technology 2006. pp. 169-178.

Sometimes users fail to notice a change that just took place on their display. For example, the user may have accidentally deleted an icon or a remote collaborator may have changed settings in a control panel. Animated transitions can help, but they force users to wait for the animation to complete. This can be cumbersome, especially in situations where users did not need an explanation. We propose a different approach. Phosphor objects show the outcome of their transition instantly; at the same time they explain their change in retrospect. Manipulating a phosphor slider, for example, leaves an afterglow that illustrates how the knob moved. The parallelism of instant outcome and explanation supports both types of users. Users who already understood the transition can continue interacting without delay, while those who are inexperienced or may have been distracted can take time to view the effects at their own pace. We present a framework of transition designs for widgets, icons, and objects in drawing programs. We evaluate phosphor objects in two user studies and report significant performance benefits for phosphor objects.

© All rights reserved Baudisch et al. and/or ACM Press

 

Ramos, Gonzalo, Robertson, George G., Czerwinski, Mary, Tan, Desney S., Baudisch, Patrick, Hinckley, Ken and Agrawala, Maneesh (2006): Tumble! Splat! helping users access and manipulate occluded content in 2D drawings. In: Celentano, Augusto (ed.) AVI 2006 - Proceedings of the working conference on Advanced visual interfaces May 23-26, 2006, Venezia, Italy. pp. 428-435.

2005
 

Ramos, Gonzalo and Balakrishnan, Ravin (2005): Zliding: fluid zooming and sliding for high precision parameter manipulation. In: Proceedings of the 2005 ACM Symposium on User Interface Software and Technology 2005. pp. 143-152.

High precision parameter manipulation tasks typically require adjustment of the scale of manipulation in addition to the parameter itself. This paper introduces the notion of Zoom Sliding, or Zliding, for fluid integrated manipulation of scale (zooming) via pressure input while parameter manipulation within that scale is achieved via x-y cursor movement (sliding). We also present the Zlider (Figure 1), a widget that instantiates the Zliding concept. We experimentally evaluate three different input techniques for use with the Zlider in conjunction with a stylus for x-y cursor positioning, in a high accuracy zoom and select task. Our results marginally favor the stylus with integrated isometric pressure sensing tip over bimanual techniques which separate zooming and sliding controls over the two hands. We discuss the implications of our results and present further designs that make use of Zliding.

© All rights reserved Ramos and Balakrishnan and/or ACM Press
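
The integration of zooming (pressure) and sliding (x-movement) comes down to a simple update rule: pressure selects how many parameter units one pixel of horizontal motion is worth. A minimal sketch with made-up constants follows; the direction of the mapping (harder pressure zooms in for finer control) is an assumption for illustration, not a detail taken from the paper.

```python
def zlide(value, dx_pixels, pressure, coarse=1.0, max_zoom=100.0):
    """One Zliding-style update step (illustrative, not the paper's code).

    value:     current parameter value.
    dx_pixels: horizontal stylus movement since the last input event.
    pressure:  normalized stylus pressure in [0, 1]; here harder pressure
               zooms in, i.e. gives finer control (assumed mapping).
    coarse:    parameter units per pixel at zero pressure.
    max_zoom:  magnification at full pressure.
    """
    zoom = 1.0 + pressure * (max_zoom - 1.0)   # 1x .. max_zoom
    return value + dx_pixels * coarse / zoom

v = 500.0
v = zlide(v, dx_pixels=40, pressure=0.0)   # coarse: +40.0 units
v = zlide(v, dx_pixels=40, pressure=1.0)   # fine:   +0.4 units
print(round(v, 1))                         # -> 540.4
```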

 

Hinckley, Ken, Baudisch, Patrick, Ramos, Gonzalo and Guimbretiere, Francois (2005): Design and analysis of delimiters for selection-action pen gesture phrases in scriboli. In: Proceedings of ACM CHI 2005 Conference on Human Factors in Computing Systems 2005. pp. 451-460.

We present a quantitative analysis of delimiters for pen gestures. A delimiter is "something different" in the input stream that a computer can use to determine the structure of input phrases. We study four techniques for delimiting a selection-action gesture phrase consisting of lasso selection plus marking-menu-based command activation. Pigtail is a new technique that uses a small loop to delimit lasso selection from marking (Fig. 1). Handle adds a box to the end of the lasso, from which the user makes a second stroke for marking. Timeout uses dwelling with the pen to delimit the lasso from the mark. Button uses a button press to signal when to delimit the gesture. We describe the role of delimiters in our Scriboli pen interaction testbed, and show how Pigtail supports scope selection, command activation, and direct manipulation all in a single fluid pen gesture.

© All rights reserved Hinckley et al. and/or ACM Press
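
At its core, the pigtail delimiter is a self-intersection test on the recent portion of the pen stroke: when the newest segment crosses an earlier one, forming a small loop, the crossing delimits the lasso from the mark. The rough sketch below assumes the stroke arrives as a list of points and ignores the loop-size and timing constraints the paper also applies.

```python
def _segments_cross(p1, p2, p3, p4):
    """True if segment p1-p2 crosses segment p3-p4."""
    def orient(a, b, c):
        v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
        return (v > 0) - (v < 0)
    return (orient(p1, p2, p3) != orient(p1, p2, p4) and
            orient(p3, p4, p1) != orient(p3, p4, p2))

def find_pigtail(stroke, lookback=40):
    """Return the index of the earlier segment crossed by the newest
    segment, or None. `lookback` restricts the search to the recent part
    of the stroke so only small, deliberate loops count (illustrative)."""
    if len(stroke) < 4:
        return None
    a, b = stroke[-2], stroke[-1]              # newest segment
    start = max(0, len(stroke) - lookback)
    for i in range(start, len(stroke) - 3):    # skip the adjacent segment
        if _segments_cross(a, b, stroke[i], stroke[i + 1]):
            return i
    return None

# A lasso whose tail curls back and crosses segment 1-2: a pigtail.
stroke = [(0, 0), (10, 0), (20, 5), (18, 12), (12, 10), (14, 1)]
print(find_pigtail(stroke))  # -> 1
```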

2004
 

Ramos, Gonzalo, Boulos, Matthew and Balakrishnan, Ravin (2004): Pressure widgets. In: Dykstra-Erickson, Elizabeth and Tscheligi, Manfred (eds.) Proceedings of ACM CHI 2004 Conference on Human Factors in Computing Systems April 24-29, 2004, Vienna, Austria. pp. 487-494.

Current user interface widgets typically assume that the input device can only provide x-y position and binary button press information. Other inputs such as the continuous pressure data provided by styluses on tablets are rarely used. We explore the design space of using the continuous pressure sensing capabilities of styluses to operate multi-state widgets. We present the results of a controlled experiment that investigates human ability to perform discrete target selection tasks by varying a stylus' pressure, with full or partial visual feedback. The experiment also considers different techniques for confirming selection once the target is acquired. Based on the experimental results, we discuss implications for the design of pressure sensitive widgets. A taxonomy of pressure widgets is presented, along with a set of initial concept sketches of various pressure widget designs.

© All rights reserved Ramos et al. and/or ACM Press
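
One common way to realize such multi-state pressure widgets is to quantize the continuous pressure signal into a small number of discrete levels and add a hysteresis band so that sensor noise near a boundary does not make the selection flicker. The sketch below follows that idea; the class, thresholding scheme, and constants are illustrative rather than taken from the paper.

```python
class PressureWidget:
    """Quantizes a continuous pressure signal in [0, 1] into `levels`
    discrete states, with hysteresis around the current state."""

    def __init__(self, levels=4, hysteresis=0.03):
        self.levels = levels
        self.hysteresis = hysteresis
        self.state = 0

    def update(self, pressure):
        width = 1.0 / self.levels
        raw = min(self.levels - 1, int(pressure / width))
        if raw != self.state:
            # Switch only once pressure clearly leaves the current band.
            lo = self.state * width - self.hysteresis
            hi = (self.state + 1) * width + self.hysteresis
            if not (lo <= pressure <= hi):
                self.state = raw
        return self.state

w = PressureWidget(levels=4)
for p in (0.05, 0.30, 0.26, 0.23, 0.80):
    print(p, "->", w.update(p))
# 0.30 crosses into state 1; the dips to 0.26 and 0.23 stay in state 1
# thanks to hysteresis; the hard press at 0.80 jumps to state 3.
```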

 

Hinckley, Ken, Ramos, Gonzalo, Guimbretiere, Francois, Baudisch, Patrick and Smith, Marc (2004): Stitching: pen gestures that span multiple displays. In: Costabile, Maria Francesca (ed.) AVI 2004 - Proceedings of the working conference on Advanced visual interfaces May 25-28, 2004, Gallipoli, Italy. pp. 23-31.

2003
 

Ramos, Gonzalo and Balakrishnan, Ravin (2003): Fluid interaction techniques for the control and annotation of digital video. In: Proceedings of the 16th annual ACM Symposium on User Interface Software and Technology November 2-5, 2003, Vancouver, Canada. pp. 105-114.

We explore a variety of interaction and visualization techniques for fluid navigation, segmentation, linking, and annotation of digital videos. These techniques are developed within a concept prototype called LEAN that is designed for use with pressure-sensitive digitizer tablets. These techniques include a transient position+velocity widget that allows users not only to move around a point of interest on a video, but also to rewind or fast forward at a controlled variable speed. We also present a new variation of fish-eye views called twist-lens, and incorporate this into a position control slider designed for the effective navigation and viewing of large sequences of video frames. We also explore a new style of widgets that exploit the use of the pen's pressure-sensing capability, increasing the input vocabulary available to the user. Finally, we elaborate on how annotations referring to objects that are temporal in nature, such as video, may be thought of as links, and fluidly constructed, visualized and navigated.

© All rights reserved Ramos and Balakrishnan and/or ACM Press

 
 



Page Information

Page maintainer: The Editorial Team
URL: http://www.interaction-design.org/references/authors/gonzalo_ramos.html
