Number of co-authors: 14
Number of publications with 3 favourite co-authors:
James Mahoney: 3
Edward Lank: 2
Thomas P. Moran: 2
Eric Saund's 3 most productive colleagues in number of publications:
James A. Landay: 91
Thomas P. Moran: 65
Robert C. Miller: 42
Publications by Eric Saund (bibliography)
Panne, Michiel van de and Saund, Eric (2008): Computers and Graphics special issue on EG SBIM 2007. In Computers & Graphics, 32 (5) p. 485.
Lank, Edward and Saund, Eric (2005): Sloppy selection: Providing an accurate interpretation of imprecise selection gestures. In Computers & Graphics, 29 (4) pp. 490-500.
Stahovich, Thomas F., Davis, Randall, Miller, Robert C., Landay, James A. and Saund, Eric (2005): Pen-based computing. In Computers & Graphics, 29 (4) pp. 477-479.
Saund, Eric and Mahoney, James (2004): Perceptual Support of Diagram Creation and Editing. In: Blackwell, Alan, Marriott, Kim and Shimojima, Atsushi (eds.) Diagrams 2004 - Diagrammatic Representation and Inference - Third International Conference March 22-24, 2004, Cambridge, UK. pp. 424-427.
Saund, Eric and Mahoney, James (2004): ScanScribe: Perceptually Supported Diagram Image Editing. In: Blackwell, Alan, Marriott, Kim and Shimojima, Atsushi (eds.) Diagrams 2004 - Diagrammatic Representation and Inference - Third International Conference March 22-24, 2004, Cambridge, UK. pp. 428-432.
Saund, Eric, Fleet, David, Larner, Daniel and Mahoney, James (2003): Perceptually-supported image editing of text and graphics. In: Proceedings of the 16th annual ACM Symposium on User Interface Software and Technology November 2-5, 2003, Vancouver, Canada. pp. 183-192.
This paper presents a novel image editing program emphasizing easy selection
and manipulation of material found in informal, casual documents such as
sketches, handwritten notes, whiteboard images, screen snapshots, and scanned
documents. The program, called ScanScribe, offers four significant advances.
First, it presents a new, intuitive model for maintaining image objects and
groups, along with underlying logic for updating these in the course of an
editing session. Second, ScanScribe takes advantage of newly developed image
processing algorithms to separate foreground markings from a white or light
background, and thus can automatically render the background transparent so
that image material can be rearranged without occlusion by background pixels.
Third, ScanScribe introduces new interface techniques for selecting image
objects with a pointing device without resorting to a palette of tool modes.
Fourth, ScanScribe presents a platform for exploiting image analysis and
recognition methods to make perceptually significant structure readily
available to the user. As a research prototype, ScanScribe has proven useful in
the work of members of our laboratory, and has been released on a limited basis
for user testing and evaluation.
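The abstract's second advance, separating foreground markings from a white or light background and rendering the background transparent, can be illustrated with a minimal sketch. This is a simple global-threshold simplification, not ScanScribe's actual algorithm; the function name and the `threshold` value are hypothetical.

```python
import numpy as np

def make_background_transparent(gray, threshold=200):
    """Split dark foreground markings from a light background by
    global thresholding, returning an RGBA image whose background
    pixels are fully transparent so moved material does not occlude.

    gray: (H, W) uint8 grayscale image.
    threshold: hypothetical tuning parameter, not from the paper.
    """
    h, w = gray.shape
    rgba = np.zeros((h, w, 4), dtype=np.uint8)
    rgba[..., 0] = rgba[..., 1] = rgba[..., 2] = gray  # keep intensities
    foreground = gray < threshold                      # dark marks
    rgba[..., 3] = np.where(foreground, 255, 0)        # alpha channel
    return rgba
```

In practice a scanned page has uneven lighting, so a real system would use adaptive rather than global thresholding; the point here is only that transparency lets selected image material be dragged over other content without a white halo.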
© All rights reserved Saund et al. and/or ACM Press
Saund, Eric and Lank, Edward (2003): Stylus input and editing without prior selection of mode. In: Proceedings of the 16th annual ACM Symposium on User Interface Software and Technology November 2-5, 2003, Vancouver, Canada. pp. 213-216.
This paper offers a solution to the mode problem in computer
sketch/notetaking programs. Conventionally, the user must specify the intended
"draw" or "command" mode prior to performing a stroke. This necessity has
proven to be a barrier to the usability of pen/stylus systems. We offer a novel
Inferred-Mode interaction protocol that avoids the mode hassles of conventional
sketch systems. The system infers the user's intent, if possible, from the
properties of the pen trajectory and the context of the trajectory. If the
intent is ambiguous, the user is offered a choice mediator in the form of a
pop-up button. To maximize the fluidity of drawing, the user is entitled to
ignore the mediator and continue drawing. We present decision logic for the
inferred mode protocol, and discuss subtleties learned in the course of its
development. We also present results of initial user trials validating the
usability of this interaction design.
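The decision logic the abstract describes, inferring draw versus command intent from a stroke's properties and context, and deferring to a mediator when ambiguous, can be sketched as a toy classifier. The three boolean features and the rules below are hypothetical simplifications, not the paper's actual protocol.

```python
def infer_mode(stroke_closed, encloses_objects, starts_on_object):
    """Toy inferred-mode decision: return 'draw', 'command', or
    'mediate' (show the pop-up choice button) for a pen stroke.

    stroke_closed:    the trajectory roughly closes on itself.
    encloses_objects: the closed region contains existing objects.
    starts_on_object: the pen-down landed on an existing object.
    """
    if starts_on_object and not stroke_closed:
        return "command"   # likely a drag or gesture on that object
    if stroke_closed and encloses_objects:
        return "mediate"   # ambiguous: lasso selection or drawn loop?
    return "draw"          # default: treat the stroke as ink
```

A key property of the real design survives even in this sketch: the "mediate" outcome is non-blocking, so the user is entitled to ignore the pop-up and keep drawing, which keeps the default path fluid.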
© All rights reserved Saund and Lank and/or ACM Press
Moran, Thomas P., Saund, Eric, Melle, William van, Gujar, Anuj, Fishkin, Kenneth P. and Harrison, Beverly L. (1999): Design and Technology for Collaborage: Collaborative Collages of Information on Physical Walls. In: Zanden, Brad Vander and Marks, Joe (eds.) Proceedings of the 12th annual ACM symposium on User interface software and technology November 07 - 10, 1999, Asheville, North Carolina, United States. pp. 197-206.
A Collaborage is a collaborative collage of physically represented information on a surface that is connected with electronic information, such as a physical In/Out board connected to a people-locator database. The physical surface (board) contains items that are tracked by camera and computer vision technology. Events on the board trigger electronic services. This paper motivates this concept, presents three different applications, describes the system architecture and component technologies, and discusses several design issues.
© All rights reserved Moran et al. and/or ACM Press
Saund, Eric and Moran, Thomas P. (1994): A Perceptually-Supported Sketch Editor. In: Szekely, Pedro (ed.) Proceedings of the 7th annual ACM symposium on User interface software and technology November 02 - 04, 1994, Marina del Rey, California, United States. pp. 175-184.
The human visual system makes a great deal more of images than the elemental marks on a surface. In the course of viewing, creating, or editing a picture, we actively construct a host of visual structures and relationships as components of sensible interpretations. This paper shows how some of these computational processes can be incorporated into perceptually-supported image editing tools, enabling machines to better engage users at the level of their own percepts. We focus on the domain of freehand sketch editors, such as an electronic whiteboard application for a pen-based computer. By using computer vision techniques to perform covert recognition of visual structure as it emerges during the course of a drawing/editing session, a perceptually supported image editor gives users access to visual objects as they are perceived by the human visual system. We present a flexible image interpretation architecture based on token grouping in a multistate blackboard data structure. This organization supports multiple perceptual interpretations of line drawing data, domain-specific knowledge bases for interpretable visual structures, and gesture-based selection of visual objects. A system implementing these ideas, called PerSketch, begins to explore a new space of WYPIWYG (What Your Perceive Is What You Get) image editing tools.
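The token-grouping blackboard the abstract mentions can be sketched minimally: primitive stroke tokens are posted to a shared store, and knowledge sources read them and post composite tokens, so multiple perceptual interpretations of the same strokes coexist. The class names and the pairwise grouping rule are illustrative inventions, not PerSketch's actual structures.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Token:
    kind: str        # e.g. "stroke", "group"
    members: tuple   # constituent tokens (empty for primitives)

class Blackboard:
    """Minimal token blackboard: tokens are only ever added, never
    replaced, so alternative groupings of the same strokes coexist."""
    def __init__(self):
        self.tokens = []

    def post(self, token):
        self.tokens.append(token)

    def of_kind(self, kind):
        return [t for t in self.tokens if t.kind == kind]

def group_pairs(board):
    """Hypothetical knowledge source: combine stroke tokens pairwise
    into composite 'group' tokens, leaving the strokes in place."""
    strokes = board.of_kind("stroke")
    for a, b in zip(strokes[0::2], strokes[1::2]):
        board.post(Token("group", (a, b)))
```

A real knowledge source would test a perceptual relation such as collinearity or closure before grouping; the sketch only shows the additive posting discipline that lets gesture-based selection later pick whichever interpretation matches the user's percept.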
© All rights reserved Saund and Moran and/or ACM Press
Page maintainer: The Editorial Team