Publication statistics

Pub. period: 1999-2012
Pub. count: 33
Number of co-authors: 59



Co-authors

Number of publications with 3 favourite co-authors:

Ken Hinckley: 10
Hyunyoung Song: 7
Chunyuan Liao: 5

 

 

Productive colleagues

Francois Guimbretiere's 3 most productive colleagues in number of publications:

Ravin Balakrishnan: 108
Allison Druin: 81
Benjamin B. Beders..: 70
 
 
 




Francois Guimbretiere

Personal Homepage:
http://www.cs.umd.edu/~francois/

Current place of employment:
University of Maryland


Publications by Francois Guimbretiere (bibliography)

2012
 

Zeng, Xiaolu, Hedge, Alan and Guimbretiere, Francois (2012): Fitts' Law in 3D Space with Coordinated Hand Movements. In: Proceedings of the Human Factors and Ergonomics Society 2012 Annual Meeting 2012. pp. 990-994.

The study tested the applicability of Fitts' law to coordinated hand movements in a 3D response space. An experiment was conducted in which 20 participants performed Fitts' pointing tasks with varying target distances, target sizes, and approach angles from a home position. Results confirmed that Fitts' law applies to coordinated hand movements in 3D space.

© All rights reserved Zeng et al. and/or Human Factors and Ergonomics Society
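The Fitts' law relationship tested above can be sketched in Python using the Shannon formulation ID = log2(D/W + 1); the intercept and slope below are hypothetical placeholder values, since real coefficients are fit to measured timing data such as this experiment's:

```python
import math

def index_of_difficulty(distance, width):
    # Shannon formulation of Fitts' index of difficulty, in bits.
    return math.log2(distance / width + 1)

def movement_time(distance, width, a=0.1, b=0.15):
    # Predicted movement time MT = a + b * ID, in seconds.
    # a and b are hypothetical example values; in practice they are
    # fit to observed movement times by linear regression.
    return a + b * index_of_difficulty(distance, width)

# A far, small target has a higher ID, and thus a longer predicted
# time, than a near, large one.
easy = movement_time(distance=100, width=50)  # ID = log2(3)  ~ 1.58 bits
hard = movement_time(distance=400, width=10)  # ID = log2(41) ~ 5.36 bits
```

The study's contribution is showing that measured times for coordinated hand movements in 3D still fall on such a line.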

 

Liu, Shenwei and Guimbretiere, Francois (2012): FlexAura: a flexible near-surface range sensor. In: Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 327-330.

The availability of flexible capacitive sensors that can be fitted around mice, smartphones, and pens carries great potential in leveraging grasp as a new interaction modality. Unfortunately, most capacitive sensors only track interaction directly on the surface, making it harder to differentiate among grips and constraining user movements. We present a new optical range sensor design based on high power infrared LEDs and photo-transistors, which can be fabricated on a flexible PCB and wrapped around a wide variety of graspable objects including pens, mice, smartphones, and slates. Our sensor offers a native resolution of 10 dpi with a sensing range of up to 30mm (1.2") and sampling speed of 50Hz. Based on our prototype wrapped around the barrel of a pen, we present a summary of the characteristics of the sensor and describe the sensor output in several typical pen grips. Our design is versatile enough to apply not only to pens but to a wide variety of graspable objects including smartphones and slates.

© All rights reserved Liu and Guimbretiere and/or ACM Press

2011
 

Song, Hyunyoung, Benko, Hrvoje, Guimbretiere, Francois, Izadi, Shahram, Cao, Xiang and Hinckley, Ken (2011): Grips and gestures on a multi-touch pen. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 1323-1332.

This paper explores the interaction possibilities enabled when the barrel of a digital pen is augmented with a multi-touch sensor. We present a novel multi-touch pen (MTPen) prototype and discuss its alternate uses beyond those of a standard stylus, such as allowing new touch gestures to be performed using the index finger or thumb and detecting how users grip the device as a mechanism for mode switching. We also discuss the hardware and software implementation challenges in realizing our prototype, and showcase how one can combine different grips (tripod, relaxed tripod, sketch, wrap) and gestures (swipe and double tap) to enable new interaction techniques with the MTPen in a prototype drawing application. One specific aim is the elimination of some of the comfort problems associated with existing auxiliary controls on digital pens. Mechanical controls such as barrel buttons and barrel scroll wheels work best in only a few specific hand grips and pen rotations. Comparatively, our gestures can be successfully and comfortably performed regardless of the rotation of the pen or how the user grips it, offering greater flexibility in use. We describe a formal evaluation comparing MTPen gestures against the use of a barrel button for mode switching. This study shows that both swipe and double tap gestures are comparable in performance to commonly employed barrel buttons, without their disadvantages.

© All rights reserved Song et al. and/or their publisher

 

Sun, Minghui, Cao, Xiang, Song, Hyunyoung, Izadi, Shahram, Benko, Hrvoje, Guimbretiere, Francois, Ren, Xiangshi and Hinckley, Ken (2011): Enhancing naturalness of pen-and-tablet drawing through context sensing. In: Proceedings of the 2011 ACM International Conference on Interactive Tabletops and Surfaces 2011. pp. 83-86.

Among artists and designers, the pen-and-tablet combination is widely used for creating digital drawings, as digital pens outperform other input devices in replicating the experience of physical drawing tools. In this paper, we explore how contextual information such as the relationship between the hand, the pen, and the tablet can be leveraged in the digital drawing experience to further enhance its naturalness. By embedding sensors in the pen and the tablet to sense and interpret these contexts, we demonstrate how several physical drawing practices can be reflected and assisted in digital interaction scenarios.

© All rights reserved Sun et al. and/or ACM Press

2010
 

Tsukada, Yujin and Guimbretiere, Francois (2010): TouchFace: the interaction between cursors and live video images for casual videoconferencing. In: Proceedings of DIS10 Designing Interactive Systems 2010. pp. 39-42.

We present a set of interaction techniques called "TouchFace" for casual videoconferencing among people in close relationships. It enables users to experience the sense of "touching" without the need for any special devices using interactions between cursors and live video images. After presenting the design guidelines underlying TouchFace, we describe a prototype following these guidelines. We conclude by presenting the results of a preliminary study.

© All rights reserved Tsukada and Guimbretiere and/or their publisher

 

Song, Hyunyoung, Guimbretiere, Francois, Grossman, Tovi and Fitzmaurice, George (2010): MouseLight: bimanual interactions on digital paper using a pen and a spatially-aware mobile projector. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 2451-2460.

MouseLight is a spatially-aware standalone mobile projector with the form factor of a mouse that can be used in combination with digital pens on paper. By interacting with the projector and the pen bimanually, users can visualize and modify the virtually augmented contents on top of the paper, and seamlessly transition between virtual and physical information. We present a high fidelity hardware prototype of the system and demonstrate a set of novel interactions specifically tailored to the unique properties of MouseLight. MouseLight differentiates itself from related systems such as PenLight in two aspects. First, MouseLight presents a rich set of bimanual interactions inspired by the ToolGlass interaction metaphor, but applied to physical paper. Second, our system explores novel displaced interactions that take advantage of independent input and output, each spatially aware of the paper underneath. These properties enable users to issue remote commands such as copy and paste or search. We also report on a preliminary evaluation of the system which produced encouraging observations and feedback.

© All rights reserved Song et al. and/or their publisher

 

Apitz, Georg, Guimbretiere, Francois and Zhai, Shumin (2010): Foundations for designing and evaluating user interfaces based on the crossing paradigm. In ACM Transactions on Computer-Human Interaction, 17 (2) p. 9.

Traditional graphical user interfaces have been designed with the desktop mouse in mind, a device well characterized by Fitts' law. Yet in recent years, hand-held devices and tablet personal computers using a pen (or fingers) as the primary means of interaction have become more and more popular. These new interaction modalities have pushed the traditional focus on pointing to its limit. In this paper we explore whether a different paradigm -- goal crossing based on pen strokes -- may substitute for or complement pointing as another fundamental interaction method. First we describe a study in which we establish that goal crossing is dependent on an index of difficulty analogous to Fitts' law, and that in some settings, goal crossing completion time is shorter than or comparable to pointing performance under the same index of difficulty. We then demonstrate the expressiveness of the crossing-based interaction paradigm by implementing CrossY, an application which only uses crossing for selecting commands. CrossY demonstrates that crossing-based interactions can be more expressive than the standard point and click approach. We also show how crossing-based interactions encourage the fluid composition of commands. Finally, after observing that users' performance could be influenced by the general direction of travel, we report on the results of a study characterizing this effect. These latter results led us to propose a general guideline for dialog box interaction. Together, these results provide the foundation for the design of effective crossing-based interactions.

© All rights reserved Apitz et al. and/or ACM Press

2009
 

Song, Hyunyoung, Grossman, Tovi, Fitzmaurice, George W., Guimbretiere, Francois, Khan, Azam, Attar, Ramtin and Kurtenbach, Gordon (2009): PenLight: combining a mobile projector and a digital pen for dynamic visual overlay. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 143-152.

Digital pen systems, originally designed to digitize annotations made on physical paper, are evolving to permit a wider variety of applications. Although the type and quality of pen feedback (e.g., haptic, audio, and visual) have a huge impact on advancing digital pen technology, dynamic visual feedback has yet to be fully investigated. In parallel, miniature projectors are an emerging technology with the potential to enhance visual feedback for small mobile computing devices. In this paper we present the PenLight system, a testbed to explore the interaction design space and its accompanying interaction techniques for a digital pen embedded with a spatially-aware miniature projector. Using our prototype, which simulates a miniature projector (via a standard video projector), we visually augment paper documents, giving the user immediate access to additional information and computational tools. We also show how virtual ink can be managed in single and multi-user environments to aid collaboration and data management. User evaluation with professional architects indicated promise of our proposed techniques and their potential utility in the paper-intensive domain of architecture.

© All rights reserved Song et al. and/or ACM Press

 

Hinckley, Ken, Dixon, Morgan, Sarin, Raman, Guimbretiere, Francois and Balakrishnan, Ravin (2009): Codex: a dual screen tablet computer. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 1933-1942.

The Codex is a dual-screen tablet computer, about the size of a 4" x 6" day planner, with a self-supporting binding and embedded sensors. The device can be oriented in a variety of postures to support different nuances of individual work, ambient display, or collaboration with another user. In the context of a pen-operated note taking application, we demonstrate interaction techniques that support a fluid division of labor for tasks and information across the two displays while minimizing disruption to the primary experience of authoring notes.

© All rights reserved Hinckley et al. and/or ACM Press

 

Song, Hyunyoung, Guimbretiere, Francois and Lipson, Hod (2009): The ModelCraft framework: Capturing freehand annotations and edits to facilitate the 3D model design process using a digital pen. In ACM Transactions on Computer-Human Interaction, 16 (3) p. 14.

Recent advancements in rapid prototyping techniques such as 3D printing and laser cutting are changing the perception of physical 3D models in architecture and industrial design. Physical models are frequently created not only to finalize a project but also to demonstrate an idea in early design stages. For such tasks, models can easily be annotated to capture comments, edits, and other forms of feedback. Unfortunately, these annotations remain in the physical world and cannot easily be transferred back to the digital world. Our system, ModelCraft, addresses this problem by augmenting the surface of a model with a traceable pattern. Any sketch drawn on the surface of the model using a digital pen is recovered as part of a digital representation. Sketches can also be interpreted as edit marks that trigger the corresponding operations on the CAD model. ModelCraft supports a wide range of operations on complex models, from editing a model to assembling multiple models, and offers physical tools to capture free-space input. Several interviews and a formal study with the potential users of our system proved the ModelCraft system useful. Our system is inexpensive, requires no tracking infrastructure or per object calibration, and we show how it could be extended seamlessly to use current 3D printing technology.

© All rights reserved Song et al. and/or ACM Press

2008
 

Dixon, Morgan, Guimbretiere, Francois and Chen, Nicholas (2008): Optimal parameters for efficient crossing-based dialog boxes. In: Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems April 5-10, 2008. pp. 1623-1632.

We present an empirical analysis of crossing-based dialog boxes. First, we study the spatial constraints required for efficient crossing-based interactions in the case of a simple multi-parameter dialog box. Through a series of 3 tasks, we establish the minimal value of the landing margin, the takeoff margin, and the column width. We also offer an estimation of the role of stroke shape on user performance. After studying the reasons for errors during our experiment, we propose a relaxed crossing semantic that combines aspects of pointing and crossing-based interfaces. To test our design, we compare a naïve dialog box implementation with our new implementation, as well as a standard point-and-click dialog box. Our results reveal that there is not a significant difference between the naïve crossing implementation and the standard point-and-click interface, and that the new crossing semantic is faster than both the naïve crossing implementation and the point-and-click interface, despite a higher error rate. Together these two experiments establish that crossing-based dialog boxes can be as spatially efficient as, and faster than, their point-and-click counterparts. Our new semantic provides the first step towards a smooth transition from point-and-click interfaces to crossing-based interfaces.

© All rights reserved Dixon et al. and/or ACM Press

 

Chen, Nicholas, Guimbretiere, Francois, Dixon, Morgan, Lewis, Cassandra and Agrawala, Maneesh (2008): Navigation techniques for dual-display e-book readers. In: Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems April 5-10, 2008. pp. 1779-1788.

Existing e-book readers do a poor job of supporting many common reading tasks: ethnographers report that, when reading, people frequently read from multiple display surfaces. In this paper we present our design of a dual display e-book reader and explore how it can be used to interact with electronic documents. Our design supports embodied interactions like folding, flipping, and fanning for local/lightweight navigation. We also show how mechanisms like Space Filling Thumbnails can use the increased display space to aid global navigation. Lastly, the detachable faces in our design can facilitate inter-document operations and flexible layout of documents in the workspace. Semi-directed interviews with seven users found that dual displays have the potential to improve the reading experience by supporting several local navigation tasks better than a single display device. Users also identified many reading tasks for which the device would be valuable. Users did not find the embodied interface particularly useful when reading in our controlled lab setting, however.

© All rights reserved Chen et al. and/or ACM Press

 

Chen, Nicholas Y., Guimbretiere, Francois and Löckenhoff, Corinna E. (2008): Relative role of merging and two-handed operation on command selection speed. In International Journal of Human-Computer Studies, 20 (10) pp. 729-740.

This paper examines the influence of two interface characteristics on command selection speed: the integration of command selection with direct manipulation (merging), and two-handed operation. We compared four interaction techniques representing combinations of these characteristics (Marking Menu, Two-handed Tool Palette, Toolglass, and Control Menu). Results suggest that the one-handed techniques selected for the present study produced a speed advantage over two-handed techniques, whereas the influence of merging was task dependent. A follow-up study examining Bimanual Marking Menu suggests that the performance of two-handed techniques may be reduced due to a split in visual attention required for certain techniques. Taken together, these findings have important implications for the design of command selection mechanisms for pen-based interfaces.

© All rights reserved Chen et al. and/or Academic Press

2007
 

Guimbretiere, Francois, Dixon, Morgan and Hinckley, Ken (2007): ExperiScope: an analysis tool for interaction data. In: Proceedings of ACM CHI 2007 Conference on Human Factors in Computing Systems 2007. pp. 1333-1342.

We present ExperiScope, an analytical tool to help designers and experimenters explore the results of quantitative evaluations of interaction techniques. ExperiScope combines a new visualization incorporating aspects of the KLM and the three-state model with an interface helping users to rapidly cluster similar patterns of interactions. The tool makes it easy to identify and compare key patterns of use encountered during data collection. This promotes a deeper understanding of the results of a given evaluation. We illustrate the advantages of this tool by revisiting the data collected for an experiment conducted by Hinckley et al. [19] which compared different mode switching techniques. Our results show that our tool complements the previously reported results by offering insights about error behavior and the impact of mode switching on user performance. By providing a more fine-grained analysis of the data gathered during empirical evaluations, we hope that our tool will improve researchers' understanding of existing and newly developed interaction techniques.

© All rights reserved Guimbretiere et al. and/or ACM Press
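The Keystroke-Level Model (KLM) that ExperiScope's visualization incorporates predicts task execution time as a sum of primitive operator times. A minimal sketch, using the standard published operator estimates; the operator sequences compared below are hypothetical examples, not data from the paper:

```python
# Standard Keystroke-Level Model operator time estimates, in seconds.
KLM_SECONDS = {
    "K": 0.28,  # press a key (average skilled typist)
    "P": 1.10,  # point at a target with the mouse
    "B": 0.10,  # press or release a mouse button
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def predict(operators):
    # Total predicted execution time for a sequence of KLM operators.
    return sum(KLM_SECONDS[op] for op in operators)

# Hypothetical comparison: menu selection (think, point, click)
# versus a two-key keyboard shortcut (think, two keystrokes).
menu = predict("MPBB")      # 1.35 + 1.10 + 0.10 + 0.10 = 2.65 s
shortcut = predict("MKK")   # 1.35 + 0.28 + 0.28 = 1.91 s
```

Tools like ExperiScope go beyond such aggregate predictions by visualizing the actual operator-level traces collected during an experiment.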

 

Liao, Chunyuan, Guimbretiere, Francois, Hinckley, Ken and Hollan, James D. (2007): Papiercraft: A gesture-based command system for interactive paper. In ACM Transactions on Computer-Human Interaction, 14 (4) p. 18.

Paper persists as an integral component of active reading and other knowledge-worker tasks because it provides ease of use unmatched by digital alternatives. Paper documents are light to carry, easy to annotate, rapid to navigate, flexible to manipulate, and robust to use in varied environments. Interactions with paper documents create rich webs of annotation, cross reference, and spatial organization. Unfortunately, the resulting webs are confined to the physical world of paper and, as they accumulate, become increasingly difficult to store, search, and access. XLibris [Schilit et al. 1998] and similar systems address these difficulties by simulating paper with tablet PCs. While this approach is promising, it suffers not only from limitations of current tablet computers (e.g., limited screen space) but also from loss of invaluable paper affordances. In this article, we describe PapierCraft, a gesture-based command system that allows users to manipulate digital documents using paper printouts as proxies. Using an Anoto [Anoto 2002] digital pen, users can draw command gestures on paper to tag a paragraph, e-mail a selected area, copy selections to a notepad, or create links to related documents. Upon pen synchronization, PapierCraft executes the commands and presents the results in a digital document viewer. Users can then search the tagged information and navigate the web of annotated digital documents resulting from interactions with the paper proxies. PapierCraft also supports real time interactions across mixed media, for example, letting users copy information from paper to a Tablet PC screen. This article presents the design and implementation of the PapierCraft system and describes user feedback from initial use.

© All rights reserved Liao et al. and/or ACM Press

 

Song, Hyunyoung, Guimbretiere, Francois, Ambrose, Michael A. and Lostritto, Carl (2007): CubeExplorer: An Evaluation of Interaction Techniques in Architectural Education. In: Baranauskas, Maria Cecília Calani, Palanque, Philippe A., Abascal, Julio and Barbosa, Simone Diniz Junqueira (eds.) DEGAS 2007 - Proceedings of the 1st International Workshop on Design and Evaluation of e-Government Applications and Services September 11th, 2007, Rio de Janeiro, Brazil. pp. 43-56.

 

Liao, Chunyuan, Guimbretiere, Francois, Anderson, Richard J., Linnell, Natalie, Prince, Craig and Razmov, Valentin (2007): PaperCP: Exploring the Integration of Physical and Digital Affordances for Active Learning. In: Baranauskas, Maria Cecília Calani, Palanque, Philippe A., Abascal, Julio and Barbosa, Simone Diniz Junqueira (eds.) DEGAS 2007 - Proceedings of the 1st International Workshop on Design and Evaluation of e-Government Applications and Services September 11th, 2007, Rio de Janeiro, Brazil. pp. 15-28.

2006
 

Nekrasovski, Dmitry, Bodnar, Adam, McGrenere, Joanna, Guimbretiere, Francois and Munzner, Tamara (2006): An evaluation of pan & zoom and rubber sheet navigation with and without an overview. In: Proceedings of ACM CHI 2006 Conference on Human Factors in Computing Systems 2006. pp. 11-20.

We present a study that evaluates conventional Pan and Zoom Navigation and Rubber Sheet Navigation, a rectilinear Focus+Context technique. Each of the two navigation techniques was evaluated both with and without an overview. All interfaces guaranteed that regions of interest would remain visible, at least as a compressed landmark, independent of navigation actions. Interfaces implementing these techniques were used by 40 subjects to perform a task that involved navigating a large hierarchical tree dataset and making topological comparisons between nodes in the tree. Our results show that Pan and Zoom Navigation was significantly faster and required less mental effort than Rubber Sheet Navigation, independent of the presence or absence of an overview. Also, overviews did not appear to improve performance, but were still perceived as beneficial by users. We discuss the implications of our task and guaranteed visibility on the results and the limitations of our study, and we propose preliminary design guidelines and recommendations for future work.

© All rights reserved Nekrasovski et al. and/or ACM Press

 

Hinckley, Ken, Guimbretiere, Francois, Baudisch, Patrick, Sarin, Raman, Agrawala, Maneesh and Cutrell, Edward (2006): The springboard: multiple modes in one spring-loaded control. In: Proceedings of ACM CHI 2006 Conference on Human Factors in Computing Systems 2006. pp. 181-190.

Modes allow a few inputs to invoke many operations, yet if a user misclassifies or forgets the state of a system, modes can result in errors. Spring-loaded modes (quasimodes) maintain a mode while the user holds a control such as a button or key. The Springboard is an interaction technique for tablet computers that extends quasimodes to encompass multiple tool modes in a single spring-loaded control. The Springboard allows the user to continue holding down a nonpreferred-hand command button after selecting a tool from a menu as a way to repeatedly apply the same tool. We find the Springboard improves performance for both a local marking menu and for a non-local marking menu ("lagoon") at the lower left corner of the screen. Despite the round-trip costs incurred to move the pen to a tool lagoon, a keystroke-level analysis of the true cost of each technique reveals the local marking menu is not significantly faster.

© All rights reserved Hinckley et al. and/or ACM Press

 

Yeh, Ron, Liao, Chunyuan, Klemmer, Scott R., Guimbretiere, Francois, Lee, Brian, Kakaradov, Boyko, Stamberger, Jeannie and Paepcke, Andreas (2006): ButterflyNet: a mobile capture and access system for field biology research. In: Proceedings of ACM CHI 2006 Conference on Human Factors in Computing Systems 2006. pp. 571-580.

Through a study of field biology practices, we observed that biology fieldwork generates a wealth of heterogeneous information, requiring substantial labor to coordinate and distill. To manage this data, biologists leverage a diverse set of tools, organizing their effort in paper notebooks. These observations motivated ButterflyNet, a mobile capture and access system that integrates paper notes with digital photographs captured during field research. Through ButterflyNet, the activity of leafing through a notebook expands to browsing all associated digital photos. ButterflyNet also facilitates the transfer of captured content to spreadsheets, enabling biologists to share their work. A first-use study with 14 biologists found this system to offer rich data capture and transformation, in a manner felicitous with current practice.

© All rights reserved Yeh et al. and/or ACM Press

 

Hinckley, Ken, Guimbretiere, Francois, Agrawala, Maneesh, Apitz, Georg and Chen, Nicholas (2006): Phrasing techniques for multi-stroke selection gestures. In: Proceedings of the 2006 Conference on Graphics Interface 2006. pp. 147-154.

Pen gesture interfaces have difficulty supporting arbitrary multiple-stroke selections because lifting the pen introduces ambiguity as to whether the next stroke should add to the existing selection, or begin a new one. We explore and evaluate techniques that use a non-preferred-hand button or touchpad to phrase together one or more independent pen strokes into a unitary multi-stroke gesture. We then illustrate how such phrasing techniques can support multiple-stroke selection gestures with tapping, crossing, lassoing, disjoint selection, circles of exclusion, selection decorations, and implicit grouping operations. These capabilities extend the expressiveness of pen gesture interfaces and suggest new directions for multiple-stroke pen input techniques.

© All rights reserved Hinckley et al. and/or Canadian Information Processing Society

 

Song, Hyunyoung, Guimbretiere, Francois, Hu, Chang and Lipson, Hod (2006): ModelCraft: capturing freehand annotations and edits on physical 3D models. In: Proceedings of the ACM Symposium on User Interface Software and Technology 2006. pp. 13-22.

With the availability of affordable new desktop fabrication techniques such as 3D printing and laser cutting, physical models are used increasingly often during the architectural and industrial design cycle. Models can easily be annotated to capture comments, edits and other forms of feedback. Unfortunately, these annotations remain in the physical world and cannot be easily transferred back to the digital world. Here we present a simple solution to this problem based on a tracking pattern printed on the surface of each model. Our solution is inexpensive, requires no tracking infrastructure or per object calibration, and can be used in the field without a computer nearby. It lets users not only capture annotations, but also edit the model using a simple yet versatile command system. Once captured, annotations and edits are merged into the original CAD models. There they can be easily edited or further refined. We present the design of a SolidWorks plug-in implementing this concept, and report initial feedback from potential users using our prototype. We also present how this prototype could be extended seamlessly to a fully functional system using current 3D printing technology.

© All rights reserved Song et al. and/or ACM Press

 

Liao, Chunyuan, Guimbretiere, Francois and Loeckenhoff, Corinna E. (2006): Pen-top feedback for paper-based interfaces. In: Proceedings of the ACM Symposium on User Interface Software and Technology 2006. pp. 201-210.

Current paper-based interfaces, such as PapierCraft, provide very little feedback, and this limits the scope of possible interactions. So far, there has been little systematic exploration of the structure, constraints, and contingencies of feedback mechanisms in paper-based interaction systems for paper-only environments. We identify three levels of feedback: discovery feedback (e.g., to aid with menu learning), status-indication feedback (e.g., for error detection), and task feedback (e.g., to aid in a search task). Using three modalities (visual, tactile, and auditory) which can be easily implemented on a pen-sized computer, we introduce a conceptual matrix to guide systematic research on pen-top feedback for paper-based interfaces. Using this matrix, we implemented a multimodal pen prototype demonstrating the potential of our approach. We conducted an experiment that confirmed the efficacy of our design in helping users discover a new interface and identify and correct their errors.

© All rights reserved Liao et al. and/or ACM Press

2005
 

Liao, Chunyuan, Guimbretiere, Francois and Hinckley, Ken (2005): PapierCraft: a command system for interactive paper. In: Proceedings of the 2005 ACM Symposium on User Interface Software and Technology 2005. pp. 241-244.

Knowledge workers use paper extensively for document reviewing and note-taking due to its versatility and simplicity of use. As users annotate printed documents and gather notes, they create a rich web of annotations and cross references. Unfortunately, as paper is a static medium, this web often gets trapped in the physical world. While several digital solutions such as XLibris [15] and Digital Desk [18] have been proposed, they suffer from a small display size or onerous hardware requirements. To address these limitations, we propose PapierCraft, a gesture-based interface that allows users to manipulate digital documents directly using their printouts as proxies. Using a digital pen, users can annotate a printout or draw command gestures to indicate operations such as copying a document area, pasting an area previously copied, or creating a link. Upon pen synchronization, our infrastructure executes these commands and presents the result in a customized viewer. In this paper we describe the design and implementation of the PapierCraft command system, and report on early user feedback.

© All rights reserved Liao et al. and/or ACM Press

 

Hinckley, Ken, Baudisch, Patrick, Ramos, Gonzalo and Guimbretiere, Francois (2005): Design and analysis of delimiters for selection-action pen gesture phrases in scriboli. In: Proceedings of ACM CHI 2005 Conference on Human Factors in Computing Systems 2005. pp. 451-460.

We present a quantitative analysis of delimiters for pen gestures. A delimiter is "something different" in the input stream that a computer can use to determine the structure of input phrases. We study four techniques for delimiting a selection-action gesture phrase consisting of lasso selection plus marking-menu-based command activation. Pigtail is a new technique that uses a small loop to delimit lasso selection from marking (Fig. 1). Handle adds a box to the end of the lasso, from which the user makes a second stroke for marking. Timeout uses dwelling with the pen to delimit the lasso from the mark. Button uses a button press to signal when to delimit the gesture. We describe the role of delimiters in our Scriboli pen interaction testbed, and show how Pigtail supports scope selection, command activation, and direct manipulation all in a single fluid pen gesture.

© All rights reserved Hinckley et al. and/or ACM Press

 

Sun, Liyang and Guimbretiere, Francois (2005): Flipper: a new method of digital document navigation. In: Proceedings of ACM CHI 2005 Conference on Human Factors in Computing Systems 2005. pp. 2001-2004.

Page flipping is an important part of paper-based document navigation. However, this affordance of paper documents has not been fully transferred to digital documents. In this paper we present Flipper, a new digital document navigation technique inspired by paper document flipping. Flipper combines speed-dependent automatic zooming (SDAZ) [6] and rapid serial visual presentation (RSVP) [3] to let users navigate through documents at a wide range of speeds. It is particularly well adapted to rapid visual search. User studies show Flipper is faster than both conventional scrolling and SDAZ, and is well received by users.

© All rights reserved Sun and Guimbretiere and/or ACM Press
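The speed-dependent automatic zooming that Flipper builds on ties zoom level to scroll speed: the faster the user scrolls, the further the view zooms out, keeping visual flow manageable. A minimal sketch of one such mapping; the speed thresholds and zoom bounds are illustrative assumptions, not values from the paper:

```python
# Sketch of an SDAZ-style speed-to-zoom mapping (constants are assumptions).

def sdaz_zoom(scroll_speed, v_min=100.0, v_max=2000.0,
              zoom_in=1.0, zoom_out=0.25):
    """Map scroll speed (px/s) to a zoom factor in [zoom_out, zoom_in]."""
    if scroll_speed <= v_min:
        return zoom_in          # slow scrolling: stay fully zoomed in
    if scroll_speed >= v_max:
        return zoom_out         # fast scrolling: fully zoomed out
    t = (scroll_speed - v_min) / (v_max - v_min)  # normalized speed, 0..1
    return zoom_in + t * (zoom_out - zoom_in)     # linear interpolation

# Slow scrolling keeps full zoom; fast scrolling zooms far out.
```

Real SDAZ implementations typically smooth this mapping over time so the zoom does not jump with momentary speed changes.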

2004
 

Apitz, Georg and Guimbretiere, Francois (2004): CrossY: a crossing-based drawing application. In: Proceedings of the 2004 ACM Symposium on User Interface Software and Technology 2004. pp. 3-12.

We introduce CrossY, a simple drawing application developed as a benchmark to demonstrate the feasibility of goal crossing as the basis for a graphical user interface. We show that crossing is not only as expressive as the current point-and-click interface, but also offers more flexibility in interaction design. In particular, crossing encourages the fluid composition of commands, which supports the development of more fluid interfaces. While crossing was previously identified as a potential substitute for the classic point-and-click interaction, this work is the first to report on the practical aspects of implementing an interface based on goal crossing as the fundamental building block.

© All rights reserved Apitz and Guimbretiere and/or ACM Press
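The goal-crossing primitive underlying CrossY can be sketched as a stroke-segment intersection test: a command fires when the pen stroke crosses a target's activation segment, rather than when the pen clicks inside a widget. The helper names below are hypothetical, not from the paper:

```python
# Sketch of goal-crossing detection via 2D segment intersection.

def _cross(o, a, b):
    """2D cross product of vectors OA and OB."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def segments_cross(p1, p2, q1, q2):
    """True if segment p1-p2 strictly crosses segment q1-q2."""
    d1 = _cross(q1, q2, p1)
    d2 = _cross(q1, q2, p2)
    d3 = _cross(p1, p2, q1)
    d4 = _cross(p1, p2, q2)
    # Endpoints of each segment must lie on opposite sides of the other.
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def stroke_crosses_goal(stroke, goal_a, goal_b):
    """Check each consecutive pair of pen samples against the goal segment."""
    return any(segments_cross(stroke[i], stroke[i + 1], goal_a, goal_b)
               for i in range(len(stroke) - 1))
```

A crossing-based widget would invoke its command the moment `stroke_crosses_goal` becomes true for its activation segment, which is what lets several commands be composed into one continuous stroke.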

 

Hourcade, Juan Pablo, Bederson, Benjamin B., Druin, Allison and Guimbretiere, Francois (2004): Differences in pointing task performance between preschool children and adults using mice. In ACM Transactions on Computer-Human Interaction, 11 (4) pp. 357-386.

Several experiments by psychologists and human factors researchers have shown that when young children execute pointing tasks, they perform at levels below older children and adults. However, these experiments have not provided user interface designers with an understanding of the severity or the nature of the difficulties young children have when using input devices. To address this need, we conducted a study to gain a better understanding of 4- and 5-year-old children's use of mice. We compared the performance of thirteen 4-year-olds, thirteen 5-year-olds, and thirteen young adults in point-and-click tasks. Plots of the paths taken by the participants show severe differences between adults' and preschool children's ability to control the mouse. We were not surprised, then, to find that age had a significant effect on accuracy, target reentry, and efficiency. We also found that target size had a significant effect on accuracy and target reentry. Measuring movement time at four different moments (first entering the target, last entering the target, pressing the button, releasing the button) yielded the result that Fitts' law models children well only up to the time they first enter the target. Overall, we found that the difference between the performance of children and adults was large enough to warrant user interface interactions designed specifically for preschool children. The results additionally suggest that children need the most help once they get close to targets.

© All rights reserved Hourcade et al. and/or ACM Press
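The Fitts' law modeling referred to above fits movement time as a linear function of the index of difficulty: MT = a + b * log2(D/W + 1), where D is the distance to the target and W its width. A minimal sketch of such a fit; the movement times below are hypothetical, not data from the study:

```python
# Sketch: least-squares fit of Fitts' law, MT = a + b * ID,
# with ID = log2(D/W + 1) (Shannon formulation).
import math

def index_of_difficulty(distance, width):
    """Fitts' index of difficulty in bits."""
    return math.log2(distance / width + 1)

def fit_fitts(ids, times):
    """Ordinary least-squares fit of MT = a + b * ID; returns (a, b)."""
    n = len(ids)
    mean_id = sum(ids) / n
    mean_mt = sum(times) / n
    b = (sum((i - mean_id) * (t - mean_mt) for i, t in zip(ids, times))
         / sum((i - mean_id) ** 2 for i in ids))
    a = mean_mt - b * mean_id
    return a, b

# Illustrative conditions: targets 32 px wide at increasing distances.
conds = [(64, 32), (128, 32), (256, 32), (512, 32)]
ids = [index_of_difficulty(d, w) for d, w in conds]
times = [0.45, 0.58, 0.71, 0.84]  # hypothetical mean movement times (s)
a, b = fit_fitts(ids, times)
```

A poor fit (low correlation between ID and MT) over the later time points is the kind of evidence behind the finding that Fitts' law models children well only up to first target entry.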

 

Hinckley, Ken, Ramos, Gonzalo, Guimbretiere, Francois, Baudisch, Patrick and Smith, Marc (2004): Stitching: pen gestures that span multiple displays. In: Costabile, Maria Francesca (ed.) AVI 2004 - Proceedings of the working conference on Advanced visual interfaces May 25-28, 2004, Gallipoli, Italy. pp. 23-31.

2003
 

Guimbretiere, Francois (2003): Paper augmented digital documents. In: Proceedings of the 16th annual ACM Symposium on User Interface Software and Technology November 2-5, 2003, Vancouver, Canada. pp. 51-60.

Paper Augmented Digital Documents (PADDs) are digital documents that can be manipulated either on a computer screen or on paper. PADDs, and the infrastructure supporting them, can be seen as a bridge between the digital and the paper worlds. As digital documents, PADDs are easy to edit, distribute, and archive; as paper documents, PADDs are easy to navigate and annotate, and are well accepted in social settings. The chimeric nature of PADDs makes them well suited for many tasks such as proofreading, editing, and annotation of large-format documents like blueprints. We present an architecture that supports the seamless manipulation of PADDs using today's technologies and report on the lessons we learned while implementing the first PADD system.

© All rights reserved Guimbretiere and/or ACM Press

2001
 

Guimbretiere, Francois, Stone, Maureen C. and Winograd, Terry (2001): Fluid interaction with high-resolution wall-size displays. In: Marks, Joe and Mynatt, Elizabeth D. (eds.) Proceedings of the 14th annual ACM symposium on User interface software and technology November 11 - 14, 2001, Orlando, Florida. pp. 21-30.

This paper describes new interaction techniques for direct pen-based interaction on the Interactive Mural, a large (6'x3.5') high-resolution (64 dpi) display. They have been tested in a digital brainstorming tool that has been used by groups of professional product designers. Our "interactive wall" metaphor for interaction has been guided by several goals: to support both free-hand sketching and high-resolution materials, such as images, 3D models, and GUI application windows; to present a visual appearance that does not clutter the content with control devices; and to support fluid interaction, which minimizes the amount of attention demanded and interruption due to the mechanics of the interface. We have adapted and extended techniques that were developed for electronic whiteboards and generalized the use of the FlowMenu to execute a wide variety of actions in a single pen stroke. While these techniques were designed for a brainstorming tool, they are very general and can be used in a wide variety of application domains using interactive surfaces.

© All rights reserved Guimbretiere et al. and/or ACM Press

2000
 

Guimbretiere, Francois and Winograd, Terry (2000): FlowMenu: Combining Command, Text, and Data Entry. In: Ackerman, Mark S. and Edwards, Keith (eds.) Proceedings of the 13th annual ACM symposium on User interface software and technology November 06 - 08, 2000, San Diego, California, United States. pp. 213-216.

1999
 

Munzner, Tamara, Guimbretiere, Francois and Robertson, George G. (1999): Constellation: A Visualization Tool for Linguistic Queries from MindNet. In: InfoVis 1999. pp. 132-.

 
 



Page Information

Page maintainer: The Editorial Team
URL: http://www.interaction-design.org/references/authors/francois_guimbretiere.html
