Number of co-authors: 19
Publications with 3 favourite co-authors: Andrea H. Mason: 3, Evan D. Graham: 2, Shahram Payandeh: 2
Christine L. MacKenzie's 3 most productive colleagues in number of publications: Kellogg S. Booth: 56, John Dill: 23, Shahram Payandeh: 19
Christine L. MacKenzie
Publications by Christine L. MacKenzie (bibliography)
Zheng, Bin and MacKenzie, Christine L. (2009): A Comparison of Human Performance in Grasping Virtual Objects by Hand and with Tools of Different Length Ratios. In: Proceedings of the Human Factors and Ergonomics Society 53rd Annual Meeting 2009. pp. 1156-1160.
Tool use brings challenges to human movement control. Mental calibrations are constantly needed to incorporate tool properties into the movement system. This project examines the accuracy of mental calibrations and studies the impact of tool use on movement control. Eight university students were instructed to perform a matching task using a hand-held grasper. The grasper had a changeable hinge that alters the length ratios of the tool for different trials. Throughout the matching, visual feedback regarding the hand position and tool was not available. The matching accuracy was significantly reduced when using the grasper compared to using the hand directly. This indicates that mental calibration is not as accurate as vision or proprioception for guiding movement. No significant matching difference was observed as a function of length ratios of the grasper, suggesting similar steps were involved in the mental calibration process for one kind of tool property.
© All rights reserved Zheng and MacKenzie and/or their publisher
Mason, Andrea H. and MacKenzie, Christine L. (2004): The Role of Graphical Feedback About Self-Movement when Receiving Objects in an Augmented Environment. In Presence: Teleoperators and Virtual Environments, 13 (5) pp. 507-519.
Kuang, Alex B., Payandeh, Shahram, Zheng, Bin, Henigman, Frank and MacKenzie, Christine L. (2004): Assembling Virtual Fixtures for Guidance in Training Environments. In: HAPTICS 2004 - 12th International Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems 27-28 March, 2004, Chicago, IL, USA. pp. 367-374.
Payandeh, Shahram, Dill, John, Wilson, Graham, Zhang, Hui, Shi, Lilong, Lomax, Alan J. and MacKenzie, Christine L. (2003): Demo: a multi-modal training environment for surgeons. In: Oviatt, Sharon L., Darrell, Trevor, Maybury, Mark T. and Wahlster, Wolfgang (eds.) Proceedings of the 5th International Conference on Multimodal Interfaces - ICMI 2003 November 5-7, 2003, Vancouver, British Columbia, Canada. pp. 301-302.
Zahariev, Mihaela A. and MacKenzie, Christine L. (2003): Auditory, graphical and haptic contact cues for a reach, grasp, and place task in an augmented environment. In: Oviatt, Sharon L., Darrell, Trevor, Maybury, Mark T. and Wahlster, Wolfgang (eds.) Proceedings of the 5th International Conference on Multimodal Interfaces - ICMI 2003 November 5-7, 2003, Vancouver, British Columbia, Canada. pp. 273-276.
An experiment was conducted to investigate how performance of a reach, grasp and place task was influenced by added auditory and graphical cues. The cues were presented at points in the task, specifically when making contact for grasping or placing the object, and were presented in single or in combined modalities. Haptic feedback was present always during physical interaction with the object. The auditory and graphical cues provided enhanced feedback about making contact between hand and object and between object and table. Also, the task was performed with or without vision of hand. Movements were slower without vision of hand. Providing auditory cues clearly facilitated performance, while graphical contact cues had no additional effect. Implications are discussed for various uses of auditory displays in virtual environments.
© All rights reserved Zahariev and MacKenzie and/or their publisher
Mason, Andrea H. and MacKenzie, Christine L. (2002): The Effects of Visual Information about Self-Movement on Grasp Forces When Receiving Objects in an Augmented Environment. In: HAPTICS 2002 - Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems 2002. pp. 105-112.
Mason, Andrea H., Walji, Masuma A., Lee, Elaine J. and MacKenzie, Christine L. (2001): Reaching Movements to Augmented and Graphic Objects in Virtual Environments. In: Beaudouin-Lafon, Michel and Jacob, Robert J. K. (eds.) Proceedings of the ACM CHI 2001 Human Factors in Computing Systems Conference March 31 - April 5, 2001, Seattle, Washington, USA. pp. 426-433.
This work explores how the availability of visual and haptic feedback affects the kinematics of reaching performance in a tabletop virtual environment. Eight subjects performed reach-to-grasp movements toward target objects of various sizes in conditions where visual and haptic feedback were either present or absent. It was found that movement time (MT) was slower when visual feedback of the moving limb was not available. Further, MT varied systematically with target size when haptic feedback was available (i.e. augmented targets), and thus followed Fitts' law. However, movement times were constant regardless of target size when haptic feedback was removed. In-depth analysis of the reaching kinematics revealed that subjects spent longer decelerating toward smaller targets in conditions where haptic feedback was available. In contrast, deceleration time was constant when haptic feedback was absent. These results suggest that visual feedback about the moving limb and veridical haptic feedback about object contact are extremely important for humans to effectively work in virtual environments.
© All rights reserved Mason et al. and/or ACM Press
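The Fitts' law relationship noted in the abstract above can be sketched numerically. The Shannon formulation and the coefficient values below are illustrative assumptions for demonstration only, not parameters fitted by the authors:

```python
import math

def fitts_mt(distance, width, a=0.1, b=0.15):
    """Predicted movement time (s) from Fitts' law (Shannon formulation).

    distance, width: target distance and size in the same units.
    a, b: intercept (s) and slope (s/bit); values here are illustrative.
    """
    index_of_difficulty = math.log2(distance / width + 1.0)  # bits
    return a + b * index_of_difficulty

# Predicted MT grows as targets get smaller at a fixed distance of 200 mm:
for width in (40, 20, 10):
    print(f"W={width} mm -> MT={fitts_mt(200, width):.3f} s")
```

This captures the pattern reported for the haptic-feedback condition: smaller targets yield a higher index of difficulty and therefore longer predicted movement times.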
Wang, Yanqing and MacKenzie, Christine L. (2000): The Role of Contextual Haptic and Visual Constraints on Object Manipulation in Virtual Environments. In: Turner, Thea, Szwillus, Gerd, Czerwinski, Mary, Paternò, Fabio and Pemberton, Steven (eds.) Proceedings of the ACM CHI 2000 Human Factors in Computing Systems Conference April 1-6, 2000, The Hague, The Netherlands. pp. 532-539.
An experiment was conducted to investigate the role of surrounding haptic and visual information on object manipulation in a virtual environment. The contextual haptic constraints were implemented with a physical table and the contextual visual constraints included a checkerboard background ("virtual table"). It was found that the contextual haptic constraints (the physical table surface) dramatically increased object manipulation speed, but slightly reduced spatial accuracy, compared to free space. The contextual visual constraints (presence of the checkerboard) actually showed detrimental effects on both object manipulation speed and accuracy. Implications of these findings for human-computer interaction design are discussed.
© All rights reserved Wang and MacKenzie and/or ACM Press
Wang, Yanqing and MacKenzie, Christine L. (1999): Object Manipulation in Virtual Environments: Relative Size Matters. In: Altom, Mark W. and Williams, Marian G. (eds.) Proceedings of the ACM CHI 99 Human Factors in Computing Systems Conference May 15-20, 1999, Pittsburgh, Pennsylvania. pp. 48-55.
An experiment was conducted to systematically investigate combined effects of controller, cursor and target size on multidimensional object manipulation in a virtual environment. It was found that it was the relative size of controller, cursor and target that significantly affected object transportation and orientation processes. There were significant interactions between controller size and cursor size as well as between cursor size and target size on the total task completion time, transportation time, orientation time and spatial errors. The same size of controller and cursor improved object manipulation speed, and the same size of cursor and target generally facilitated object manipulation accuracy, regardless of their absolute sizes. Implications of these findings for human-computer interaction design are discussed.
© All rights reserved Wang and MacKenzie and/or ACM Press
Summers, Valerie A., Booth, Kellogg S., Calvert, Thomas W., Graham, Evan D. and MacKenzie, Christine L. (1999): Calibration for augmented reality experimental testbeds. In: SI3D 1999. pp. 155-162.
Wang, Yanqing, MacKenzie, Christine L., Summers, Valerie A. and Booth, Kellogg S. (1998): The Structure of Object Transportation and Orientation in Human-Computer Interaction. In: Karat, Clare-Marie, Lund, Arnold, Coutaz, Joëlle and Karat, John (eds.) Proceedings of the ACM CHI 98 Human Factors in Computing Systems Conference April 18-23, 1998, Los Angeles, California. pp. 312-319.
An experiment was conducted to investigate the relationship between object transportation and object orientation by the human hand in the context of human-computer interaction (HCI). This work merges two streams of research: the structure of interactive manipulation in HCI and the natural hand prehension in human motor control. It was found that object transportation and object orientation have a parallel, interdependent structure which is generally persistent over different visual feedback conditions. The notion of concurrency and interdependence of multidimensional visuomotor control structure can provide a new framework for human-computer interface evaluation and design.
© All rights reserved Wang et al. and/or ACM Press
Graham, Evan D. and MacKenzie, Christine L. (1996): Physical Versus Virtual Pointing. In: Tauber, Michael J., Bellotti, Victoria, Jeffries, Robin, Mackinlay, Jock D. and Nielsen, Jakob (eds.) Proceedings of the ACM CHI 96 Human Factors in Computing Systems Conference April 14-18, 1996, Vancouver, Canada. pp. 292-299.
An experiment was conducted to investigate differences in performance between virtual pointing, where a 2-D computer image representing the hand and targets was superimposed on the workspace, and physical pointing with vision of the hand and targets painted on the work surface. A detailed examination of movement kinematics revealed no differences in the initial phase of the movement, but that the final phase of homing in on smaller targets was more difficult in the virtual condition. These differences are summarised by a two-part model of movement time which also captures the effects of scaling distances to, and sizes of, targets. The implications of this model for design, analysis, and classification of pointing devices and positioning tasks are discussed.
© All rights reserved Graham and MacKenzie and/or ACM Press
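The two-part movement-time model mentioned in the abstract above separates an initial, distance-dominated transport phase from a final, precision-dominated homing phase. A minimal sketch of such a decomposition follows, using a Welford-style split assumed for illustration; the exact functional form and the coefficients are placeholders, not those fitted by Graham and MacKenzie:

```python
import math

def two_part_mt(distance, width, a=0.2, b_dist=0.1, b_size=0.12):
    """Two-phase movement time with separate distance and size terms.

    A Welford-style decomposition used only for illustration; the
    coefficients are placeholders, not values from the paper.
    """
    # Initial (ballistic) phase: scales with movement distance.
    initial_phase = b_dist * math.log2(distance)
    # Homing phase: scales with required precision; -log2(width) grows
    # (becomes less negative) as targets shrink, lengthening total MT.
    homing_phase = -b_size * math.log2(width)
    return a + initial_phase + homing_phase
```

Separating the two terms lets the distance and target-size effects be scaled independently, which is how the model accounts for virtual pointing penalising the homing phase more than the initial phase.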
Changes to this page (author)
07 Nov 2012: Modified
20 Apr 2011: Modified
03 Nov 2010: Modified
12 Jun 2009: Modified
12 Jun 2009: Modified
01 Jun 2009: Modified
30 May 2009: Modified
30 May 2009: Modified
28 Apr 2003: Added
Page maintainer: The Editorial Team