Number of co-authors: 15
Number of publications with 3 favourite co-authors: T. Scott Saponas, Richard C. Davis, James A. Landay
Michael Shilman's 3 most productive colleagues in number of publications: Hiroshi Ishii (111), James A. Landay (91), Patrick Baudisch (57)
Publications by Michael Shilman (bibliography)
Chung, Keywon, Shilman, Michael, Merrill, Chris and Ishii, Hiroshi (2010): OnObject: gestural play with tagged everyday objects. In: Proceedings of the 2010 ACM Symposium on User Interface Software and Technology. pp. 379-380. http://doi.acm.org/10.1145/1866218.1866229
Many Tangible User Interface (TUI) systems employ sensor-equipped physical objects. However, they do not easily scale to users' actual environments: most everyday objects lack the necessary hardware, and modification requires hardware and software development by skilled individuals. This limits TUI creation by end users, resulting in inflexible interfaces in which the mapping of sensor input and output events cannot easily be modified to reflect the end user's wishes and circumstances. We introduce OnObject, a small device worn on the hand, which can program physical objects to respond to a set of gestural triggers. Users attach RFID tags to situated objects, grab them by the tag, and program their responses to grab, release, shake, swing, and thrust gestures using a built-in button and a microphone. In this paper, we demonstrate how novice end users, including preschool children, can instantly create engaging gestural object interfaces with sound feedback from toys, drawings, or clay.
© All rights reserved Chung et al. and/or their publisher
Hinckley, Ken, Zhao, Shengdong, Sarin, Raman, Baudisch, Patrick, Cutrell, Edward, Shilman, Michael and Tan, Desney S. (2007): InkSeine: In Situ search for active note taking. In: Proceedings of the ACM CHI 2007 Conference on Human Factors in Computing Systems. pp. 251-260. http://doi.acm.org/10.1145/1240624.1240666
Using a notebook to sketch designs, reflect on a topic, or capture and extend creative ideas is an example of an active note taking task. Optimal experience for such tasks demands concentration without interruption. Yet active note taking may also require reference documents or emails from team members. InkSeine is a Tablet PC application that supports active note taking by coupling a pen-and-ink interface with an in situ search facility that flows directly from a user's ink notes. InkSeine integrates four key concepts: it leverages preexisting ink to initiate a search; it provides tight coupling of search queries with application content; it persists search queries as first-class objects that can be commingled with ink notes; and it enables a quick and flexible workflow in which the user may freely interleave inking, searching, and gathering content. InkSeine offers these capabilities in an interface that is tailored to the unique demands of pen input, and that maintains the primacy of inking above all other tasks.
© All rights reserved Hinckley et al. and/or ACM Press
Davis, Richard C., Saponas, T. Scott, Shilman, Michael and Landay, James A. (2007): SketchWizard: Wizard of Oz prototyping of pen-based user interfaces. In: Proceedings of the ACM Symposium on User Interface Software and Technology October 7-10, 2007, Newport, Rhode Island, USA. pp. 119-128. http://doi.acm.org/10.1145/1294211.1294233
SketchWizard allows designers to create Wizard of Oz prototypes of pen-based user interfaces in the early stages of design. In the past, designers have been inhibited from participating in the design of pen-based interfaces because of the inadequacy of paper prototypes and the difficulty of developing functional prototypes. In SketchWizard, designers and end users share a drawing canvas between two computers, allowing the designer to simulate the behavior of recognition or other technologies. Special editing features are provided to help designers respond quickly to end-user input. This paper describes the SketchWizard system and presents two evaluations of our approach. The first is an early feasibility study in which Wizard of Oz was used to prototype a pen-based user interface. The second is a laboratory study in which designers used SketchWizard to simulate existing pen-based interfaces. Both showed that end users gave valuable feedback in spite of delays between end-user actions and wizard updates.
© All rights reserved Davis et al. and/or ACM Press
Shilman, Michael, Tan, Desney S. and Simard, Patrice (2006): CueTIP: a mixed-initiative interface for correcting handwriting errors. In: Proceedings of the ACM Symposium on User Interface Software and Technology. pp. 323-332. http://doi.acm.org/10.1145/1166253.1166304
With advances in pen-based computing devices, handwriting has become an increasingly popular input modality. Researchers have put considerable effort into building intelligent recognition systems that can translate handwriting to text with increasing accuracy. However, handwritten input is inherently ambiguous, and these systems will always make errors. Unfortunately, work on error recovery mechanisms has mainly focused on interface innovations that allow users to manually transform the erroneous recognition result into the intended one. In our work, we propose a mixed-initiative approach to error correction. We describe CueTIP, a novel correction interface that takes advantage of the recognizer to continually evolve its results using the additional information from user corrections. This significantly reduces the number of actions required to reach the intended result. We present a user study showing that CueTIP is more efficient and preferred for correcting handwriting recognition errors. Grounded in the discussion of CueTIP, we also present design principles that may be applied to mixed-initiative correction interfaces in other domains.
© All rights reserved Shilman et al. and/or ACM Press
Hse, Heloise, Shilman, Michael and Newton, A. Richard (2004): Robust sketched symbol fragmentation using templates. In: Nunes, Nuno Jardim and Rich, Charles (eds.) International Conference on Intelligent User Interfaces, January 13-16, 2004, Funchal, Madeira, Portugal. pp. 156-160. http://doi.acm.org/10.1145/964442.964472
Analysis of sketched digital ink is often aided by the division of stroke points into perceptually salient fragments based on geometric features. Fragmentation has many applications in intelligent interfaces for digital ink capture and manipulation, as well as in higher-level symbolic and structural analyses. It is our intuitive belief that the most robust fragmentations closely match a user's natural perception of the ink, thus leading to more effective recognition and useful user feedback. We present two optimal fragmentation algorithms that fragment common geometries into a basis set of line segments and elliptical arcs. The first algorithm uses an explicit template in which the order and types of bases are specified. The other requires only the number of fragments of each basis type. For the set of symbols under test, both algorithms achieved 100% fragmentation accuracy for symbols with line bases, >99% accuracy for symbols with elliptical bases, and >90% accuracy for symbols with mixed line and elliptical bases.
© All rights reserved Hse et al. and/or ACM Press
Page maintainer: The Editorial Team