Number of co-authors: 21
Number of publications with 3 favourite co-authors: Francois Guimbretiere (4), James Fogarty (2), Kristen Shinohara (2)
Morgan Dixon's 3 most productive colleagues by number of publications: Ravin Balakrishnan (108), James A. Landay (91), Jacob O. Wobbrock (70)
... there are no simple 'right' answers for most web design questions (at least not for the important ones). What works is good, integrated design that fills a need--carefully thought out, well executed, and tested.
-- Steve Krug, Don't Make Me Think, p. 136
Publications by Morgan Dixon (bibliography)
Dixon, Morgan, Leventhal, Daniel and Fogarty, James (2011): Content and hierarchy in pixel-based methods for reverse engineering interface structure. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 969-978.
The rigidity and fragmentation of GUI toolkits are fundamentally limiting the progress and impact of interaction research. Pixel-based methods offer unique potential for addressing these challenges independent of the implementation of any particular interface or toolkit. This work builds upon Prefab, which enables the modification of existing interfaces. We present new methods for hierarchical models of complex widgets, real-time interpretation of interface content, and real-time interpretation of content and hierarchy throughout an entire interface. We validate our new methods through implementations of four applications: stencil-based tutorials, ephemeral adaptation, interface translation, and end-user interface customization. We demonstrate these enhancements in complex existing applications created from different user interface toolkits running on different operating systems.
© All rights reserved Dixon et al. and/or their publisher
Li, Yang, Cao, Xiang, Everitt, Katherine, Dixon, Morgan and Landay, James A. (2010): FrameWire: a tool for automatically extracting interaction logic from paper prototyping tests. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 503-512.
Paper prototyping offers unique affordances for interface design. However, due to its spontaneous nature and the limitations of paper, it is difficult to distill and communicate a paper prototype design and its user test findings to a wide audience. To address these issues, we created FrameWire, a computer vision-based system that automatically extracts interaction flows from the video recording of paper prototype user tests. Based on the extracted logic, FrameWire offers two distinct benefits for designers: a structural view of the video recording that allows a designer or a stakeholder to easily distill and understand the design concept and user interaction behaviors, and automatic generation of interactive HTML-based prototypes that can be easily tested with a larger group of users as well as "walked through" by other stakeholders. The extraction is achieved by automatically aggregating video frame sequences into an interaction flow graph based on frame similarities and a designer-guided clustering process. The results of evaluating FrameWire with realistic paper prototyping tests show that our extraction approach is feasible and FrameWire is a promising tool for enhancing existing prototyping practice.
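The paper does not include its implementation, but the core aggregation idea, collapsing near-identical consecutive video frames into states and recording transitions between them as edges of a flow graph, can be sketched minimally. All names below are illustrative, not from FrameWire itself:

```python
# Hypothetical sketch of FrameWire-style aggregation (not the authors' code):
# collapse consecutive, near-identical video frames into "states" and record
# the transitions between states, yielding a simple interaction flow graph.

def frame_distance(a, b):
    """Mean absolute pixel difference between two equal-length frames."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def build_flow_graph(frames, threshold=10.0):
    """Cluster similar frames into states; return (states, edges).

    states: list of representative frames, one per cluster
    edges:  set of (from_state, to_state) transitions observed in the video
    """
    states, labels = [], []
    for f in frames:
        # Greedy assignment: reuse an existing state if this frame is close
        # enough to its representative; otherwise open a new state.
        for i, rep in enumerate(states):
            if frame_distance(f, rep) <= threshold:
                labels.append(i)
                break
        else:
            states.append(f)
            labels.append(len(states) - 1)
    edges = {(a, b) for a, b in zip(labels, labels[1:]) if a != b}
    return states, edges
```

FrameWire refines such clusters interactively (the designer-guided clustering the abstract mentions); this sketch shows only the automatic similarity-based pass.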
© All rights reserved Li et al. and/or their publisher
Dixon, Morgan and Fogarty, James (2010): Prefab: implementing advanced behaviors using pixel-based reverse engineering of interface structure. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 1525-1534.
Current chasms between applications implemented with different user interface toolkits make it difficult to implement and explore potentially important interaction techniques in new and existing applications, limiting the progress and impact of human-computer interaction research. We examine an approach based in the single most common characteristic of all graphical user interface toolkits, that they ultimately paint pixels to a display. We present Prefab, a system for implementing advanced behaviors through the reverse engineering of the pixels in graphical interfaces. Informed by how user interface toolkits paint interfaces, Prefab features a separation of the modeling of widget layout from the recognition of widget appearance. We validate Prefab in implementations of three applications: target-aware pointing techniques, Phosphor transitions, and Side Views parameter spectrums. Working only from pixels, we demonstrate a single implementation of these enhancements in complex existing applications created in different user interface toolkits running on different windowing systems.
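Prefab's actual models are considerably richer (it separates widget layout from appearance), but the base case of pixel-based recognition can be illustrated with a small, hypothetical exact-match search for a widget "prototype" patch in a screenshot. Function names here are ours, not the authors':

```python
# Hypothetical illustration of the pixel-matching idea behind Prefab
# (a sketch, not the published implementation): find every location in a
# screenshot where a small widget prototype patch appears exactly.

def find_prototype(screen, proto):
    """Return (row, col) positions where proto occurs exactly in screen.

    screen, proto: 2D lists of pixel values (e.g., packed RGB ints).
    """
    H, W = len(screen), len(screen[0])
    h, w = len(proto), len(proto[0])
    hits = []
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            # Exact pixel equality over the whole patch.
            if all(screen[r + i][c + j] == proto[i][j]
                   for i in range(h) for j in range(w)):
                hits.append((r, c))
    return hits
```

Exact equality works for toolkit-painted widgets because the same widget is painted with identical pixels each time; handling variable-size widgets is where Prefab's layout/appearance separation comes in.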
© All rights reserved Dixon and Fogarty and/or their publisher
Findlater, Leah, Jansen, Alex, Shinohara, Kristen, Dixon, Morgan, Kamb, Peter, Rakita, Joshua and Wobbrock, Jacob O. (2010): Enhanced area cursors: reducing fine pointing demands for people with motor impairments. In: Proceedings of the 2010 ACM Symposium on User Interface Software and Technology 2010. pp. 153-162.
Computer users with motor impairments face major challenges with conventional mouse pointing. These challenges are mostly due to fine pointing corrections at the final stages of target acquisition. To reduce the need for correction-phase pointing and to lessen the effects of small target size on acquisition difficulty, we introduce four enhanced area cursors, two of which rely on magnification and two of which use goal crossing. In a study with motor-impaired and able-bodied users, we compared the new designs to the point and Bubble cursors, the latter of which had not been evaluated for users with motor impairments. Two enhanced area cursors, the Visual-Motor-Magnifier and Click-and-Cross, were the most successful new designs for users with motor impairments.
© All rights reserved Findlater et al. and/or their publisher
Hinckley, Ken, Dixon, Morgan, Sarin, Raman, Guimbretiere, Francois and Balakrishnan, Ravin (2009): Codex: a dual screen tablet computer. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 1933-1942.
The Codex is a dual-screen tablet computer, about the size of a 4"x6" day planner, with a self-supporting binding and embedded sensors. The device can be oriented in a variety of postures to support different nuances of individual work, ambient display, or collaboration with another user. In the context of a pen-operated note taking application, we demonstrate interaction techniques that support a fluid division of labor for tasks and information across the two displays while minimizing disruption to the primary experience of authoring notes.
© All rights reserved Hinckley et al. and/or ACM Press
Choe, Eun Kyoung, Shinohara, Kristen, Chilana, Parmit K., Dixon, Morgan and Wobbrock, Jacob O. (2009): Exploring the design of accessible goal crossing desktop widgets. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 3733-3738.
Prior work has shown that goal crossing may be a more accessible interaction technique than conventional pointing-and-clicking for motor-impaired users. Although goal crossing with pen-based input devices has been studied, pen-based designs have limited applicability on the desktop because the pen can "fly in," cross, and "fly out," whereas a persistent mouse cursor cannot. We therefore explore possible designs for accessible mouse-based goal crossing widgets that avoid triggering unwanted goals by using secondary goals, gestures, and corners and edges. We identify four design principles for accessible desktop goal crossing widgets: ease of use for motor-impaired users, safety from false selections, efficiency, and scalability.
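The interaction primitive underlying such widgets is simple: a goal "fires" when the segment from the cursor's previous position to its current position intersects the goal's line segment. A minimal, hypothetical sketch of that crossing test (illustrative names, not code from the paper):

```python
# Hedged sketch of the crossing test at the heart of goal-crossing widgets:
# a goal fires when the cursor's movement segment prev_pos -> cur_pos
# intersects the goal's line segment goal_a -> goal_b.

def _orient(p, q, r):
    """Sign of cross product (q-p) x (r-p): 1 left turn, -1 right, 0 collinear."""
    v = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (v > 0) - (v < 0)

def crossed(goal_a, goal_b, prev_pos, cur_pos):
    """True if the move prev_pos -> cur_pos crosses segment goal_a -> goal_b.

    Standard segment-intersection test; degenerate collinear cases are
    ignored for brevity.
    """
    return (_orient(goal_a, goal_b, prev_pos) != _orient(goal_a, goal_b, cur_pos)
            and _orient(prev_pos, cur_pos, goal_a) != _orient(prev_pos, cur_pos, goal_b))
```

The design problem the abstract describes is that a persistent mouse cursor makes this test fire on unintended goals, hence the secondary goals, gestures, and corner/edge placements the authors explore.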
© All rights reserved Choe et al. and/or ACM Press
Dixon, Morgan, Guimbretiere, Francois and Chen, Nicholas (2008): Optimal parameters for efficient crossing-based dialog boxes. In: Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems April 5-10, 2008. pp. 1623-1632.
We present an empirical analysis of crossing-based dialog boxes. First, we study the spatial constraints required for efficient crossing-based interactions in the case of a simple multi-parameter dialog box. Through a series of 3 tasks, we establish the minimal value of the landing margin, the takeoff margin, and the column width. We also offer an estimation of the role of stroke shape on user performance. After studying the reasons for errors during our experiment, we propose a relaxed crossing semantic that combines aspects of pointing and crossing-based interfaces. To test our design, we compare a naïve dialog box implementation with our new implementation, as well as a standard point-and-click dialog box. Our results reveal that there is not a significant difference between the naïve crossing implementation and the standard point-and-click interface and that the new crossing semantic is faster than both the naïve crossing implementation and the point-and-click interface, despite a higher error rate. Together these two experiments establish that crossing-based dialog boxes can be as spatially efficient and faster than their point-and-click counterpart. Our new semantic provides the first step towards a smooth transition from point-and-click interfaces to crossing-based interfaces.
© All rights reserved Dixon et al. and/or ACM Press
Chen, Nicholas, Guimbretiere, Francois, Dixon, Morgan, Lewis, Cassandra and Agrawala, Maneesh (2008): Navigation techniques for dual-display e-book readers. In: Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems April 5-10, 2008. pp. 1779-1788.
Existing e-book readers do not adequately support many common reading tasks: ethnographers report that, when reading, people frequently read from multiple display surfaces. In this paper we present our design of a dual display e-book reader and explore how it can be used to interact with electronic documents. Our design supports embodied interactions like folding, flipping, and fanning for local/lightweight navigation. We also show how mechanisms like Space Filling Thumbnails can use the increased display space to aid global navigation. Lastly, the detachable faces in our design can facilitate inter-document operations and flexible layout of documents in the workspace. Semi-directed interviews with seven users found that dual displays have the potential to improve the reading experience by supporting several local navigation tasks better than a single display device. Users also identified many reading tasks for which the device would be valuable. Users did not find the embodied interface particularly useful when reading in our controlled lab setting, however.
© All rights reserved Chen et al. and/or ACM Press
Guimbretiere, Francois, Dixon, Morgan and Hinckley, Ken (2007): ExperiScope: an analysis tool for interaction data. In: Proceedings of ACM CHI 2007 Conference on Human Factors in Computing Systems 2007. pp. 1333-1342.
We present ExperiScope, an analytical tool to help designers and experimenters explore the results of quantitative evaluations of interaction techniques. ExperiScope combines a new visualization incorporating aspects of the KLM and the three-state model with an interface helping users to rapidly cluster similar patterns of interactions. The tool makes it easy to identify and compare key patterns of use encountered during data collection. This promotes a deeper understanding of the results of a given evaluation. We illustrate the advantages of this tool by revisiting the data collected for an experiment conducted by Hinckley et al., which compared different mode switching techniques. Our results show that our tool complements the previously reported results by offering insights about error behavior and the impact of mode switching on user performance. By providing a more fine-grained analysis of the data gathered during empirical evaluations, we hope that our tool will improve researchers' understanding of existing and newly developed interaction techniques.
© All rights reserved Guimbretiere et al. and/or ACM Press
Changes to this page (author):
05 Jul 2011: Modified
03 Nov 2010: Modified
02 Nov 2010: Modified
02 Nov 2010: Modified
09 May 2009: Modified
09 May 2009: Modified
12 May 2008: Modified
12 May 2008: Modified
19 Jun 2007: Added
Page maintainer: The Editorial Team