Publication statistics

Publication period: 2001-2012
Publication count: 18
Number of co-authors: 28


Number of publications with 3 favourite co-authors:

Julie Wagner:
Wendy Mackay:
Mathieu Nancel:



Productive colleagues

Emmanuel Pietriga's 3 most productive colleagues in number of publications:

Mary Beth Rosson: 142
Michel Beaudouin-Lafon: 53
Chris Johnson: 43


Emmanuel Pietriga


Publications by Emmanuel Pietriga (bibliography)


Almeida, Rodrigo A. de, Pillias, Clément, Pietriga, Emmanuel and Cubaud, Pierre (2012): Looking behind bezels: French windows for wall displays. In: Proceedings of the 2012 International Conference on Advanced Visual Interfaces 2012. pp. 124-131.

Using tiled monitors to build wall-sized displays has multiple advantages: higher pixel density, simpler setup and easier calibration. However, the resulting display walls suffer from the visual discontinuity caused by the bezels that frame each monitor. To avoid introducing distortion, the image has to be rendered as if some pixels were drawn behind the bezels. In turn, this raises the issue that a non-negligible part of the rendered image, which might contain important information, is visually occluded. We propose to draw upon the analogy to French windows that is often used to describe this approach, and make the display really behave as if the visualization were observed through a French window. We present and evaluate two interaction techniques that let users reveal content hidden behind bezels. ePan enables users to offset the entire image through explicit touch gestures. GridScape adopts a more implicit approach: it makes the grid formed by bezels act like a true French window, using head tracking to simulate motion parallax and adapting to users' physical movements in front of the display. The two techniques work for both single- and multiple-user contexts.

© All rights reserved Almeida et al. and/or ACM Press
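The motion-parallax behaviour GridScape relies on can be sketched as a simple perspective projection; the geometry below (a tracked head position in front of the screen, content rendered at a fixed depth behind the bezel plane) is a hypothetical simplification, not the paper's implementation.

```python
def project_to_screen(point_xy, depth, head_xy, head_dist):
    """Project a point rendered `depth` units behind the screen plane onto
    the screen, as seen from a tracked head `head_dist` units in front of
    it. As the head moves, the projection shifts, revealing content that
    was hidden behind a bezel -- the motion-parallax effect GridScape
    simulates. All coordinates and distances share the same unit."""
    t = head_dist / (head_dist + depth)  # ray interpolation factor (1 when depth == 0)
    px, py = point_xy
    hx, hy = head_xy
    return (hx + (px - hx) * t, hy + (py - hy) * t)
```

With the head centred, a point 60 units behind the plane viewed from 60 units away projects halfway toward the head; moving the head sideways shifts the projection in the same direction, as when looking through a window frame.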

Pindat, Cyprien, Pietriga, Emmanuel, Chapuis, Olivier and Puech, Claude (2012): JellyLens: content-aware adaptive lenses. In: Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 261-270.

Focus+context lens-based techniques smoothly integrate two levels of detail, using spatial distortion to connect the magnified region and the context. Distortion guarantees visual continuity, but causes problems of interpretation and focus targeting, partly because most techniques are based on statically defined, regular lens shapes that result in far-from-optimal magnification and distortion. JellyLenses dynamically adapt to the shape of the objects of interest, providing detail-in-context visualizations of higher relevance by optimizing which regions fall into the focus, context and spatially distorted transition regions. This both improves the visibility of content in the focus region and preserves a larger part of the context region. We describe the approach and its implementation, and report on a controlled experiment that evaluates the usability of JellyLenses compared to regular fisheye lenses, showing clear performance improvements with the new technique for a multi-scale visual search task.

© All rights reserved Pindat et al. and/or ACM Press

Bach, Benjamin, Pietriga, Emmanuel, Liccardi, Ilaria and Legostaev, Gennady (2011): OntoTrix: a hybrid visualization for populated ontologies. In: Proceedings of the 2011 International Conference on the World Wide Web 2011. pp. 177-180.

Most Semantic Web data visualization tools structure the representation according to the concept definitions and interrelations that constitute the ontology's vocabulary. Instances are often treated as somewhat peripheral information, when considered at all. These instances, which populate ontologies, represent an essential part of any knowledge base, and are often orders of magnitude more numerous than the concept definitions that give them machine-processable meaning. We present a visualization technique designed to enable users to visualize large instance sets and the relations that connect them. This hybrid visualization uses both node-link and adjacency matrix representations of graphs to visualize different parts of the data depending on their semantic and local structural properties, exploiting ontological knowledge to drive the graph layout. The representation is embedded in an environment that features advanced interaction techniques for easy navigation, including support for smooth continuous zooming and coordinated views.

© All rights reserved Bach et al. and/or ACM Press

Nancel, Mathieu, Wagner, Julie, Pietriga, Emmanuel, Chapuis, Olivier and Mackay, Wendy (2011): Mid-air pan-and-zoom on wall-sized displays. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 177-186.

Very-high-resolution wall-sized displays offer new opportunities for interacting with large data sets. While pointing on this type of display has been studied extensively, higher-level, more complex tasks such as pan-zoom navigation have received little attention. It thus remains unclear which techniques are best suited to perform multiscale navigation in these environments. Building upon empirical data gathered from studies of pan-and-zoom on desktop computers and studies of remote pointing, we identified three key factors for the design of mid-air pan-and-zoom techniques: uni- vs. bimanual interaction, linear vs. circular movements, and level of guidance to accomplish the gestures in mid-air. After an extensive phase of iterative design and pilot testing, we ran a controlled experiment aimed at better understanding the influence of these factors on task performance. Significant effects were obtained for all three factors: bimanual interaction, linear gestures and a high level of guidance resulted in significantly improved performance. Moreover, the interaction effects among some of the dimensions suggest possible combinations for more complex, real-world tasks.

© All rights reserved Nancel et al. and/or their publisher

Pietriga, Emmanuel, Huot, Stephane, Nancel, Mathieu and Primet, Romain (2011): Rapid development of user interfaces on cluster-driven wall displays with jBricks. In: ACM SIGCHI 2011 Symposium on Engineering Interactive Computing Systems 2011. pp. 185-190.

Research on cluster-driven wall displays has mostly focused on techniques for parallel rendering of complex 3D models. There has been comparatively little research effort dedicated to other types of graphics and to the software engineering issues that arise when prototyping novel interaction techniques or developing full-featured applications for such displays. We present jBricks, a Java toolkit that integrates a high-quality 2D graphics rendering engine and a versatile input configuration module into a coherent framework, enabling the exploratory prototyping of interaction techniques and rapid development of post-WIMP applications running on cluster-driven interactive visualization platforms.

© All rights reserved Pietriga et al. and/or ACM Press

Appert, Caroline, Chapuis, Olivier and Pietriga, Emmanuel (2010): High-precision magnification lenses. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 273-282.

Focus+context interfaces provide in-place magnification of a region of the display, smoothly integrating the focus of attention into its surroundings. Two representations of the data exist simultaneously at two different scales, providing an alternative to classical pan&zoom for navigating multi-scale interfaces. For many practical applications however, the magnification range of focus+context techniques is too limited. This paper addresses this limitation by exploring the quantization problem: the mismatch between visual and motor precision in the magnified region. We introduce three new interaction techniques that solve this problem by integrating fast navigation and high-precision interaction in the magnified region. Speed couples precision to navigation speed. Key and Ring use a discrete switch between precision levels, the former using a keyboard modifier, the latter by decoupling the cursor from the lens' center. We report on three experiments showing that our techniques make interacting with lenses easier while increasing the range of practical magnification factors, and that performance can be further improved by integrating speed-dependent visual behaviors.

© All rights reserved Appert et al. and/or their publisher

Hundhausen, Christopher D., Pietriga, Emmanuel, Diaz, Paloma and Rosson, Mary Beth (eds.) IEEE Symposium on Visual Languages and Human-Centric Computing, VL/HCC 2010, 21-25 September 2010, Leganés-Madrid, Spain.

Chapuis, Olivier, Labrune, Jean-Baptiste and Pietriga, Emmanuel (2009): DynaSpot: speed-dependent area cursor. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 1391-1400.

We present DynaSpot, a new technique for acquiring targets based on the area cursor. DynaSpot couples the cursor's activation area with its speed, behaving like a point cursor at low speed or when motionless. This technique minimizes visual distraction and allows pointing anywhere in empty space without requiring an explicit mode switch, thus enabling users to perform common interactions such as region selections seamlessly. The results of our controlled experiments show that the performance of DynaSpot can be modeled by Fitts' law, and that DynaSpot significantly outperforms the point cursor and achieves, in most conditions, the same level of performance as one of the most promising techniques to date, the Bubble cursor.

© All rights reserved Chapuis et al. and/or ACM Press
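The speed-coupling DynaSpot describes can be sketched as a mapping from cursor speed to activation-area radius; the thresholds and linear ramp below are illustrative values, not the calibrated behaviour from the paper.

```python
def dynaspot_radius(speed, min_speed=100.0, max_speed=1000.0, max_radius=16.0):
    """Illustrative speed-to-radius mapping for a speed-dependent area
    cursor: behaves as a point cursor (radius 0) at low speed or when
    motionless, grows linearly with speed, and is capped at max_radius.
    Speeds are in px/s, radii in px; all values are assumptions."""
    if speed <= min_speed:
        return 0.0  # point cursor: no visual distraction when slow
    if speed >= max_speed:
        return max_radius
    return max_radius * (speed - min_speed) / (max_speed - min_speed)
```

A real implementation would also lag the reduction back to a point cursor, so that the enlarged activation area survives the deceleration phase at the end of a pointing gesture.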

Moscovich, Tomer, Chevalier, Fanny, Henry, Nathalie, Pietriga, Emmanuel and Fekete, Jean-Daniel (2009): Topology-aware navigation in large networks. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 2319-2328.

Applications supporting navigation in large networks are used every day by millions of people. They include road map navigators, flight route visualization systems, and network visualization systems using node-link diagrams. These applications currently provide generic interaction methods for navigation: pan-and-zoom and sometimes bird's eye views. This article explores the idea of exploiting the connection information provided by the network to help navigate these large spaces. We visually augment two traditional navigation methods, and develop two special-purpose techniques. The first new technique, called "Link Sliding", provides guided panning when continuously dragging along a visible link. The second technique, called "Bring&Go", brings adjacent nodes nearby when pointing to a node. We compare the performance of these techniques in both an adjacency exploration task and a node revisiting task. This comparison illustrates the various advantages of content-aware network navigation techniques. A significant speed advantage is found for the Bring&Go technique over other methods.

© All rights reserved Moscovich et al. and/or ACM Press
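The Bring&Go idea, bringing a node's neighbors within reach while preserving the direction of the links to them, can be sketched as follows; the ring layout and fixed radius are assumptions for illustration, not the paper's exact placement strategy.

```python
import math

def bring_and_go(center, neighbors, ring_radius=80.0):
    """Illustrative Bring&Go layout: place each neighbor of the selected
    node on a circle of fixed radius around it, preserving the original
    angular direction to each neighbor so that links remain readable and
    distant nodes become reachable with a short pointing movement."""
    cx, cy = center
    placed = {}
    for name, (nx, ny) in neighbors.items():
        angle = math.atan2(ny - cy, nx - cx)  # direction of the original link
        placed[name] = (cx + ring_radius * math.cos(angle),
                        cy + ring_radius * math.sin(angle))
    return placed
```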

Pietriga, Emmanuel and Appert, Caroline (2008): Sigma lenses: focus-context transitions combining space, time and translucence. In: Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems April 5-10, 2008. pp. 1343-1352.

Focus + context techniques such as fisheye lenses are used to navigate and manipulate objects in multi-scale worlds. They provide in-place magnification of a region without requiring users to zoom the whole representation and consequently lose context. Their adoption is however hindered by usability problems mostly due to the nature of the transition between focus and context. Existing transitions are often based on a physical metaphor (magnifying glass, fisheye, rubber sheet), and are almost always achieved through a single dimension: space. We investigate how other dimensions, namely time and translucence, can be used to achieve more efficient transitions. We present an extension to Carpendale's framework for unifying presentation space accommodating these new dimensions. We define new lenses in that space, called Sigma lenses, and compare them to existing lenses through experiments based on a generic task: focus targeting. Results show that one new lens, the Speed-coupled flattening lens, significantly outperforms all others.

© All rights reserved Pietriga and Appert and/or ACM Press
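The best-performing Sigma lens couples the focus-context transition to time via cursor speed: the lens flattens while moving fast and restores full magnification at rest. A minimal sketch of such a speed-to-magnification mapping, with an assumed linear ramp and threshold (not the paper's actual parameters):

```python
def flattened_magnification(base_mag, speed, flatten_speed=400.0):
    """Illustrative speed-coupled flattening: the lens's effective
    magnification interpolates from base_mag (lens at rest) down to 1.0
    (fully flat, no distortion) as cursor speed approaches flatten_speed.
    Speed is in px/s; both parameter values are assumptions."""
    k = min(speed / flatten_speed, 1.0)  # 0 at rest, 1 at or above flatten_speed
    return base_mag + (1.0 - base_mag) * k
```

Flattening during fast movement eases focus targeting (the lens distorts less while the user is still aiming), while the full magnification returns once the target is reached.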

Pietriga, Emmanuel, Appert, Caroline and Beaudouin-Lafon, Michel (2007): Pointing and beyond: an operationalization and preliminary evaluation of multi-scale searching. In: Proceedings of ACM CHI 2007 Conference on Human Factors in Computing Systems 2007. pp. 1215-1224.

A number of experimental studies based on domain-specific tasks have evaluated the efficiency of navigation techniques for searching multi-scale worlds. The discrepancies among their results call for a more generic framework similar in spirit to Fitts' reciprocal pointing task, but adapted to a task that significantly differs from pure pointing. We introduce such a framework based on an abstract task and evaluate how four multi-scale navigation techniques perform in one particular multi-scale world configuration. Experimental findings indicate that, in this context, pan&zoom combined with an overview is the most efficient technique of all four, and that focus + context techniques perform better than classical pan&zoom. We relate these findings to more realistic situations, discuss their applicability, and how the framework can be used to cover a broad range of situations.

© All rights reserved Pietriga et al. and/or ACM Press

Pietriga, Emmanuel, Johnson, Chris, Karger, David and Lee, Ryan (2006): Fresnel: A Browser-Independent Presentation Vocabulary for RDF. In: Proceedings of the 5th International Semantic Web Conference ISWC 2006, Athens, Greece. pp. 158-171.


© All rights reserved Pietriga et al. and/or Springer

Pietriga, Emmanuel (2006): Semantic web data visualization with graph style sheets. In: Proceedings of the ACM Symposium on Software Visualization 2006. pp. 177-178.

Visual paradigms such as node-link diagrams are well suited to the representation of Semantic Web data encoded with the Resource Description Framework (RDF), whose data model can be seen as a directed labeled graph. However, these representations are not fully satisfying: diagrams can quickly become big and over-cluttered, making them hard to understand. This problem can be partly addressed by visually transforming the graphs: filtering information, providing alternative layouts for specific elements, and using all available visual variables to encode information, so as to better take advantage of human perceptual abilities. Graph Style Sheets have been designed for that purpose, allowing the filtering, grouping and styling of information elements through the specification of declarative transformation rules.

© All rights reserved Pietriga and/or ACM Press
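A Graph Style Sheet associates declarative rules with graph elements; the sketch below mimics that idea on RDF-like triples with a first-match-wins cascade. The rule format (predicate pattern mapped to visual properties) is hypothetical, not the actual GSS syntax.

```python
def apply_style_rules(triples, rules):
    """Minimal sketch of declarative graph styling: each rule pairs a
    predicate pattern ("*" matches anything) with visual properties; the
    first matching rule wins, mirroring the cascade of stylesheet
    languages. Returns each triple with its resolved style."""
    styled = []
    for s, p, o in triples:
        style = next((props for pattern, props in rules
                      if pattern == p or pattern == "*"), {})
        styled.append(((s, p, o), style))
    return styled
```

In the real system, rules can also filter elements out entirely or group them, not just restyle them.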

Pietriga, Emmanuel (2005): A Toolkit for Addressing HCI Issues in Visual Language Environments. In: Proceedings of the IEEE Symposium on Visual Languages and Human-Centric Computing VL-HCC 2005, Dallas, Texas. pp. 145-152.


© All rights reserved Pietriga and/or IEEE

Pietriga, Emmanuel and Vion-Dury, Jean-Yves (2001): VXT: Visual XML Transformer. In: HCC 2001 - IEEE CS International Symposium on Human-Centric Computing Languages and Environments September 5-7, 2001, Stresa, Italy. pp. 404-405.

Vion-Dury, Jean-Yves and Pietriga, Emmanuel (2001): A Formal Study of a Visual Language for the Visualization of Document Type Definition. In: HCC 2001 - IEEE CS International Symposium on Human-Centric Computing Languages and Environments September 5-7, 2001, Stresa, Italy. pp. 52-59.


Page Information

Page maintainer: The Editorial Team