Number of co-authors: 22
Number of publications with 3 favourite co-authors: Shahram Izadi: 2, Otmar Hilliges: 2, Pushmeet Kohli: 1
Dustin Freeman's 3 most productive colleagues in number of publications: Ravin Balakrishnan: 108, Abigail Sellen: 81, Shahram Izadi: 50
Publications by Dustin Freeman (bibliography)
Freeman, Dustin, Chevalier, Fanny, Westecott, Emma, Duffield, Kyle, Hartman, Kate and Reilly, Derek (2012): Tweetris: play with me. In: Proceedings of the 6th International Conference on Tangible and Embedded Interaction 2012. pp. 319-320.
We present Tweetris, a full-body interactive Tetris game with extended audience participation. Snapshots of players making correct tetrominos are tweeted, and this feed is used by a mobile and web-based Tetris game, which can be played from anywhere in real-time.
© All rights reserved Freeman et al. and/or ACM Press
Freeman, Dustin, Hilliges, Otmar, Sellen, Abigail, O'Hara, Kenton, Izadi, Shahram and Wood, Kenneth (2012): The role of physical controllers in motion video gaming. In: Proceedings of the 2012 Designing Interactive Systems Conference (DIS 2012). pp. 701-710.
Systems that detect the unaugmented human body allow players to interact without using a physical controller. But how is interaction altered by the absence of a physical input device? What is the impact on game performance, on a player's expectation of their ability to control the game, and on their game experience? In this study, we investigate these issues in the context of a table tennis video game. The results show that the impact of holding a physical controller, or indeed of the fidelity of that controller, does not appear in simple measures of performance. Rather, the difference between controllers is a function of the responsiveness of the game being controlled, as well as other factors to do with expectations, real world game experience and social context.
© All rights reserved Freeman et al. and/or ACM Press
Freeman, Dustin and Balakrishnan, Ravin (2011): Tangible actions. In: Proceedings of the 2011 ACM International Conference on Interactive Tabletops and Surfaces 2011. pp. 87-96.
We present Tangible Actions, an ad-hoc, just-in-time, visual programming by example language designed for large multitouch interfaces. With the design of Tangible Actions, we contribute a continually-created system of programming tokens that occupy the same space as the objects they act on. Tangible Actions are created by the gestural actions of the user, and they allow the user to reuse and modify their own gestures with a lower interaction cost than the original gesture. We implemented Tangible Actions in three different tabletop applications, and ran an informal evaluation. While we found that study participants generally liked and understood Tangible Actions, having the objects and the actions co-located can lead to visual and interaction clutter.
© All rights reserved Freeman and Balakrishnan and/or ACM Press
Izadi, Shahram, Kim, David, Hilliges, Otmar, Molyneaux, David, Newcombe, Richard, Kohli, Pushmeet, Shotton, Jamie, Hodges, Steve, Freeman, Dustin, Davison, Andrew and Fitzgibbon, Andrew (2011): KinectFusion: real-time 3D reconstruction and interaction using a moving depth camera. In: Proceedings of the 2011 ACM Symposium on User Interface Software and Technology 2011. pp. 559-568.
KinectFusion enables a user holding and moving a standard Kinect camera to rapidly create detailed 3D reconstructions of an indoor scene. Only the depth data from Kinect is used to track the 3D pose of the sensor and reconstruct geometrically precise 3D models of the physical scene in real-time. The capabilities of KinectFusion, as well as the novel GPU-based pipeline, are described in full. Uses of the core system for low-cost handheld scanning, and geometry-aware augmented reality and physics-based interactions are shown. Novel extensions to the core GPU pipeline demonstrate object segmentation and user interaction directly in front of the sensor, without degrading camera tracking or reconstruction. These extensions are used to enable real-time multi-touch interactions anywhere, allowing any planar or non-planar reconstructed physical surface to be appropriated for touch.
© All rights reserved Izadi et al. and/or ACM Press
Freeman, Dustin, Benko, Hrvoje, Morris, Meredith Ringel and Wigdor, Daniel (2009): ShadowGuides: visualizations for in-situ learning of multi-touch and whole-hand gestures. In: Proceedings of the 2009 ACM International Conference on Interactive Tabletops and Surfaces 2009. pp. 165-172.
We present ShadowGuides, a system for in-situ learning of multi-touch and whole-hand gestures on interactive surfaces. ShadowGuides provides on-demand assistance to the user by combining visualizations of the user's current hand posture as interpreted by the system (feedback) and available postures and completion paths necessary to finish the gesture (feedforward). Our experiment compared participants learning gestures with ShadowGuides to those learning with video-based instruction. We found that participants learning with ShadowGuides remembered more gestures and expressed significantly higher preference for the help system.
© All rights reserved Freeman et al. and/or their publisher
Changes to this page (author):
09 Nov 2012: Modified
05 Apr 2012: Modified
04 Apr 2012: Modified
03 Nov 2010: Added
Page maintainer: The Editorial Team