Number of co-authors: 30
Publications with 3 favourite co-authors: Nobuo Koizumi: 3, Owen Noel Newton Fernando: 2, Lester F. Ludwig: 2
Michael Cohen's 3 most productive colleagues by number of publications: Adrian David Cheok: 76, Maneesh Agrawala: 36, Gonzalo Ramos: 15
Publications by Michael Cohen (bibliography)
Wears, Robert L., Parker, Sarah Henrickson, Cohen, Michael, Carroll, John, Perry, Shawna and Manser, Tanja (2012): A wealth of information creates a poverty of attention?: Understanding information requirements at handovers. In: Proceedings of the Human Factors and Ergonomics Society 2012 Annual Meeting 2012. pp. 860-862.
Given the growing interest in understanding and improving handovers, a deeper understanding of exactly how to create the most effective handover is necessary. This unique symposium will emphasize the distinction between salience and comprehensiveness in handover communications across multiple acute care settings. The discussion panel brings together individuals with experience spanning human factors and system safety research, complex systems, information utilization, public policy, organizational and health communication, and clinical practice to provide a set of diverse but complementary viewpoints on what it will take to successfully, safely, and meaningfully design and implement communication structures for salient information exchange.
© All rights reserved Wears et al. and/or Human Factors and Ergonomics Society
Joshi, Neel, Mehta, Sisil, Drucker, Steven, Stollnitz, Eric, Hoppe, Hugues, Uyttendaele, Matt and Cohen, Michael (2012): Cliplets: juxtaposing still and dynamic imagery. In: Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 251-260.
We explore creating "cliplets", a form of visual media that juxtaposes still image and video segments, both spatially and temporally, to expressively abstract a moment. Much as in "cinemagraphs", the tension between static and dynamic elements in a cliplet reinforces both aspects, strongly focusing the viewer's attention. Creating this type of imagery is challenging without professional tools and training. We develop a set of idioms, essentially spatiotemporal mappings, that characterize cliplet elements, and use these idioms in an interactive system to quickly compose a cliplet from ordinary handheld video. One difficulty is to avoid artifacts in the cliplet composition without resorting to extensive manual input. We address this with automatic alignment, looping optimization and feathering, simultaneous matting and compositing, and Laplacian blending. A key user-interface challenge is to provide affordances to define the parameters of the mappings from input time to output time while maintaining a focus on the cliplet being created. We demonstrate the creation of a variety of cliplet types. We also report on informal feedback as well as a more structured survey of users.
© All rights reserved Joshi et al. and/or ACM Press
Gupta, Ankit, Fox, Dieter, Curless, Brian and Cohen, Michael (2012): DuploTrack: a real-time system for authoring and guiding Duplo block assembly. In: Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 389-402.
We demonstrate a realtime system which infers and tracks the assembly process of a snap-together block model using a Kinect® sensor. The inference enables us to build a virtual replica of the model at every step. Tracking enables us to provide context specific visual feedback on a screen by augmenting the rendered virtual model aligned with the physical model. The system allows users to author a new model and uses the inferred assembly process to guide its recreation by others. We propose a novel way of assembly guidance where the next block to be added is rendered in blinking mode with the tracked virtual model on screen. The system is also able to detect any mistakes made and helps correct them by providing appropriate feedback. We focus on assemblies of Duplo® blocks. We discuss the shortcomings of existing methods of guidance -- static figures or recorded videos -- and demonstrate how our method avoids those shortcomings. We also report on a user study to compare our system with standard figure-based guidance methods found in user manuals. The results of the user study suggest that our method is able to aid users' structural perception of the model better, leads to fewer assembly errors, and reduces model construction time.
© All rights reserved Gupta et al. and/or ACM Press
Pongnumkul, Suporn, Wang, Jue, Ramos, Gonzalo and Cohen, Michael (2010): Content-aware dynamic timeline for video browsing. In: Proceedings of the 2010 ACM Symposium on User Interface Software and Technology 2010. pp. 139-142.
When browsing a long video using a traditional timeline slider control, its effectiveness and precision degrade as a video's length grows. When browsing videos with more frames than pixels in the slider, aside from some frames being inaccessible, scrolling actions cause sudden jumps in a video's continuity as well as video frames to flash by too fast for one to assess the content. We propose a content-aware dynamic timeline control that is designed to overcome these limitations. Our timeline control decouples video speed and playback speed, and leverages video content analysis to allow salient shots to be presented at an intelligible speed. Our control also takes advantage of previous work on elastic sliders, which allows us to produce an accurate navigation control.
© All rights reserved Pongnumkul et al. and/or their publisher
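The decoupling of video speed from playback speed described in the abstract above can be illustrated with a minimal sketch. This is not the authors' implementation; it simply assumes a hypothetical per-frame `saliency` score in [0, 1] (produced by some content analysis) and maps it to a playback-speed multiplier, so salient shots play slowly enough to be intelligible while the rest is skimmed quickly.

```python
def playback_speed(saliency, slow=0.5, fast=8.0):
    """Map a frame's content saliency to a playback-speed multiplier.

    Highly salient frames (saliency near 1.0) play near `slow`x speed;
    unsalient frames (saliency near 0.0) fly by at `fast`x speed.
    Linear interpolation is an illustrative choice, not the paper's.
    """
    return fast - (fast - slow) * saliency

# Example: a dull frame, a middling frame, and a salient frame.
speeds = [playback_speed(s) for s in (0.0, 0.5, 1.0)]
```

In this sketch the timeline control would feed each frame's saliency through such a mapping while scrubbing, rather than advancing frames at a fixed rate.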
Fernando, Owen Noel Newton, Cohen, Michael and Cheok, Adrian David (2007): Mobile spatial audio interfaces. In: Cheok, Adrian David and Chittaro, Luca (eds.) Proceedings of the 9th Conference on Human-Computer Interaction with Mobile Devices and Services - Mobile HCI 2007 September 9-12, 2007, Singapore. pp. 345-347.
Cohen, Michael, Bolhassan, Noor Alamshah and Fernando, Owen Noel Newton (2007): A Multiuser Multiperspective Stereographic QTVR Browser Complemented by Java3D Visualizer and Emulator. In Presence: Teleoperators and Virtual Environments, 16 (4) pp. 414-438.
Santella, Anthony, Agrawala, Maneesh, DeCarlo, Doug, Salesin, David and Cohen, Michael (2006): Gaze-based interaction for semi-automatic photo cropping. In: Proceedings of ACM CHI 2006 Conference on Human Factors in Computing Systems 2006. pp. 771-780.
We present an interactive method for cropping photographs given minimal information about important content location, provided by eye tracking. Cropping is formulated in a general optimization framework that facilitates adding new composition rules, and adapting the system to particular applications. Our system uses fixation data to identify important image content and compute the best crop for any given aspect ratio or size, enabling applications such as automatic snapshot recomposition, adaptive documents, and thumbnailing. We validate our approach with studies in which users compare our crops to ones produced by hand and by a completely automatic approach. Experiments show that viewers prefer our gaze-based crops to uncropped images and fully automatic crops.
© All rights reserved Santella et al. and/or ACM Press
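The optimization framing in the abstract above (find the best crop of a given size that captures important content identified by fixations) can be sketched minimally. This is an illustrative brute-force stand-in, not the authors' system: it uses a hypothetical fixation-count score instead of their composition rules.

```python
def best_crop(fixations, img_w, img_h, crop_w, crop_h, step=10):
    """Exhaustively score candidate crop windows of size crop_w x crop_h
    by how many eye-tracking fixations (x, y) they contain, and return
    the top-left corner of the highest-scoring window."""
    best, best_score = (0, 0), -1
    for x in range(0, img_w - crop_w + 1, step):
        for y in range(0, img_h - crop_h + 1, step):
            score = sum(1 for (fx, fy) in fixations
                        if x <= fx < x + crop_w and y <= fy < y + crop_h)
            if score > best_score:
                best, best_score = (x, y), score
    return best
```

A real system would replace the fixation count with a weighted objective (composition rules, fixation durations) and a smarter search, but the structure (score every candidate crop, keep the argmax) is the same.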
Cohen, Michael (2003): The Internet Chair. In International Journal of Human-Computer Interaction, 15 (2) pp. 297-311.
A pivot (swivel, rotating) chair is considered as an input/output device, an information appliance. The input modality is orientation tracking, which can dynamically select transfer functions used to spatialize audio in a rotation-invariant soundscape. In groupware situations, like teleconferencing or chat spaces, such orientation tracking can also be used to twist iconic representations of a seated user, avatars in a virtual world, enabling social situation awareness via coupled visual displays, soundscape-stabilized virtual source locations, and direction-dependent projection of non-omnidirectional sources. Using its audio output modality, the system can present unencumbered binaural sound with soundscape stabilization for multichannel sound image
© All rights reserved Cohen and/or Lawrence Erlbaum Associates
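The rotation-invariant soundscape in the abstract above has a simple core: as the chair (and listener) rotates, each virtual source's azimuth relative to the head is counter-rotated so the source stays fixed in room coordinates. A minimal sketch, not the paper's implementation, with angles in degrees:

```python
def relative_azimuth(source_az_deg, chair_yaw_deg):
    """Soundscape stabilization: given a source's azimuth in room
    coordinates and the chair's tracked yaw, return the azimuth of the
    source relative to the listener's head. Rotating the chair shifts
    every source the opposite way, so sources stay put in the room."""
    return (source_az_deg - chair_yaw_deg) % 360.0
```

The head-relative azimuth would then select the head-related transfer function used to spatialize that source binaurally.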
Balogh, Jennifer, Leduc, Nicole and Cohen, Michael (2001): Navigating the voice Web. In: Stephanidis, Constantine (ed.) HCI International 2001 - Proceedings of the 9th International Conference on Human-Computer Interaction August 5-10, 2001, New Orleans, USA. pp. 242-246.
Cohen, Michael (2000): Exclude and Include for Audio Sources and Sinks: Analogs of mute & solo are deafen & attend. In Presence: Teleoperators and Virtual Environments, 9 (1) pp. 84-96.
Cohen, Michael and Koizumi, Nobuo (1998): Virtual Gain for Audio Windows. In Presence: Teleoperators and Virtual Environments, 7 (1) pp. 53-66.
Aoki, Shigeaki, Cohen, Michael and Koizumi, Nobuo (1994): Design and Control of Shared Conferencing Environments for Audio Telecommunication Using Individually Measured HRTFs. In Presence: Teleoperators and Virtual Environments, 3 (1) pp. 60-72.
Cohen, Michael (1994): Cybertokyo: A Survey of Public VRtractions. In Presence: Teleoperators and Virtual Environments, 3 (1) pp. 87-93.
Cohen, Michael (1993): Throwing, Pitching and Catching Sound: Audio Windowing Models and Modes. In International Journal of Man-Machine Studies, 39 (2) pp. 269-304.
After surveying the concepts of audio windowing, this paper elaborates taxonomies of three sets of its dimensions -- spatial audio ("throwing sound"), timbre ("pitching sound"), and gain ("catching sound") -- establishing matrices of variability for each, drawing similes, and citing applications. Two audio windowing systems are examined across these three operations: repositioning, distortion/blending, and gain control (i.e. state transitions in virtual space, timbre space, and volume space). Handy Sound is a purely auditory system with gestural control, while MAW exploits egocentric graphical control. These two systems motivated the development of special user interface features. (Sonic) piggyback-channels are introduced as filtear manifestations of changing cursors, used to track control state. A variable control/response ratio can be used to map a near-field work envelope into perceptual space. Clusters can be used to hierarchically collapse groups of spatial sound objects. WIMP idioms are reinterpreted for audio windowing functions. Reflexive operations are cast as an instance of general manipulation when all the modified entities, including an iconification of the user, are projected into an egalitarian control/response system. Other taxonomies include a spectrum of directness of manipulation, and sensitivity to current position crossed with dependency on some target position.
© All rights reserved Cohen and/or Academic Press
Cohen, Michael and Koizumi, Nobuo (1993): Virtual Gain for Audio Windows. In: Proceedings of the Fifth International Conference on Human-Computer Interaction - Poster Sessions: Abridged Proceedings 1993. p. 283.
Cohen, Michael (1992): Integrating Graphic and Audio Windows. In Presence: Teleoperators and Virtual Environments, 1 (4) pp. 468-481.
Cohen, Michael (1992): Blush and Zebrackets: Large- and Small-Scale Typographical Representation of Nested Associativity. In: Proceedings of the 1992 IEEE Workshop on Visual Languages September 15-18, 1992, Seattle, Washington, USA. pp. 264-266.
Cohen, Michael and Ludwig, Lester F. (1991): Multidimensional Audio Window Management. In International Journal of Man-Machine Studies, 34 (3) pp. 319-336.
This paper proposes an organization of presentation and control that implements a flexible audio management system we call "audio windows". The result is a new user interface integrating an enhanced spatial sound presentation system, an audio emphasis system, and a gestural input recognition system. We have implemented these ideas in a modest prototype, also described, designed as an audio server appropriate for a teleconferencing system. Our system combines a gestural front end (currently based on a DataGlove, but whose concepts are appropriate for other devices as well) with an enhanced spatial sound system, a digital signal processing separation of multiple sound sources, augmented with "filtears", audio feedback cues that convey added information without distraction or loss of intelligibility. Our prototype employs a manual front end (requiring no keyboard or mouse) driving an auditory back end (requiring no CRT or visual display).
© All rights reserved Cohen and Ludwig and/or Academic Press
Ludwig, Lester F., Pincever, Natalio and Cohen, Michael (1990): Extending the Notion of a Window System to Audio. In IEEE Computer, 23 (8) pp. 66-72.
Changes to this page (author)
23 Nov 2012: Modified
10 Nov 2012: Modified
03 Nov 2010: Modified
16 Jun 2009: Modified
04 Jun 2009: Modified
01 Jun 2009: Modified
29 May 2009: Modified
29 Jun 2007: Modified
19 Jun 2007: Modified
28 Apr 2003: Added
Page maintainer: The Editorial Team