Number of co-authors: 8
Number of publications with 3 favourite co-authors: Tim Halverson (5), Yunfeng Zhang (2), David E. Kieras (2)
Anthony J. Hornof's 3 most productive colleagues in number of publications: David E. Kieras (25), Scott D. Wood (7), Tim Halverson (7)
Anthony J. Hornof
Publications by Anthony J. Hornof (bibliography)
Zhang, Yunfeng and Hornof, Anthony J. (2012): A Discrete Movement Model for Cursor Tracking Validated in the Context of a Dual-Task Experiment. In: Proceedings of the Human Factors and Ergonomics Society 2012 Annual Meeting 2012. pp. 1000-1004.
Understanding human cursor tracking behavior is essential in understanding human motor control. Though tracking has been hypothesized as a sequence of discrete movements, better data is needed to support the theory. By analyzing moment-to-moment tracking data, this paper shows that discrete, non-ballistic movements exist throughout a tracking task, and that these short submovements can be characterized by either Fitts' law or a linear model. A cognitive model was built to incorporate the characteristics of these discrete movements into a dual task. Using parameters estimated through linear regression of the movement data, the model achieves a good fit to the overall performance measures of the dual-task experiment. This research investigates the characteristics of human motor control in tracking tasks, improves modeling techniques by providing a new method for estimating tracking parameters, and advances the science of motor control with new evidence for the discrete movement tracking hypothesis. The discrete movement model presented here offers an excellent alternative to established control theory models that are used to simulate steering in cognitive models of driving.
© All rights reserved Zhang and Hornof and/or Human Factors and Ergonomics Society
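The abstract above notes that short submovements can be characterized by either Fitts' law or a linear model, with parameters estimated through linear regression of the movement data. A minimal sketch of that kind of fit, using hypothetical submovement data and illustrative coefficients (none of the numbers below are from the paper):

```python
import math

def fit_linear(xs, ys):
    """Ordinary least squares for y = a + b*x (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b  # intercept a, slope b

# Hypothetical submovements: (distance px, target width px, duration ms).
submovements = [(40, 10, 180), (80, 10, 250), (120, 10, 300),
                (40, 20, 140), (80, 20, 210), (120, 20, 260)]

dists = [d for d, w, t in submovements]
ids = [math.log2(d / w + 1) for d, w, t in submovements]  # Fitts index of difficulty
times = [t for d, w, t in submovements]

a_lin, b_lin = fit_linear(dists, times)  # linear model:  MT = a + b*D
a_fit, b_fit = fit_linear(ids, times)    # Fitts' law:    MT = a + b*log2(D/W + 1)
```

Comparing the fit quality of the two regressions over real tracking data is one way to decide which characterization a given set of submovements supports.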
Hornof, Anthony J. and Vessey, Kyle E. V. (2011): The Sound of One Eye Clapping: Tapping an Accurate Rhythm With Eye Movements. In: Proceedings of the Human Factors and Ergonomics Society 55th Annual Meeting 2011. pp. 1225-1229.
As eye-controlled interfaces become increasingly viable, there is a need to better understand fundamental human-machine interaction capabilities between a human and a computer via an eye tracking device. Prior research has explored the maximum rate of input from a human to a computer, such as key-entry rates in eye-typing tasks, but there has been little or no work to determine capabilities and limitations with regard to delivering gaze-mediated commands at precise moments in time. This paper evaluates four different methods for converting real-time eye movement data into control signals -- two fixation-based methods and two saccade-based methods. An experiment compares musicians' ability to use each method to trigger the playing of sounds at precise times, and examines how quickly musicians are able to move their eyes to trigger correctly-timed, evenly-paced rhythms. The results indicate that fixation-based eye-control algorithms provide better timing control than saccade-based algorithms, and that people have a fundamental performance limitation for tapping out eye-controlled rhythms that lies somewhere between two and four beats per second.
© All rights reserved Hornof and Vessey and/or HFES
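The fixation-based triggering methods compared above convert real-time gaze samples into discrete control signals. The paper's own algorithms are not reproduced here; the sketch below shows one standard approach to the same problem, a dispersion-threshold fixation detector, with made-up threshold values:

```python
def detect_fixation(samples, dispersion_px=30, min_samples=6):
    """Return the index at which a fixation is first confirmed, or None.

    A fixation fires when `min_samples` consecutive gaze points stay
    within a small bounding box (the dispersion-threshold idea).
    Thresholds here are illustrative, not values from the paper.
    """
    window = []
    for i, (x, y) in enumerate(samples):
        window.append((x, y))
        if len(window) > min_samples:
            window.pop(0)
        if len(window) == min_samples:
            xs = [p[0] for p in window]
            ys = [p[1] for p in window]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) <= dispersion_px:
                return i  # trigger: fixation confirmed at this sample
    return None
```

In a real-time eye-controlled instrument, the trigger latency of a detector like this (here, `min_samples` worth of gaze data) directly limits how precisely a sound can be timed.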
Halverson, Tim and Hornof, Anthony J. (2011): A Computational Model of "Active Vision" for Visual Search in Human-Computer Interaction. In Human-Computer Interaction, 26 (4) pp. 285-314.
Human visual search plays an important role in many human-computer interaction (HCI) tasks. Better models of visual search are needed not just to predict overall performance outcomes, such as whether people will be able to find the information needed to complete an HCI task, but to understand the many human processes that interact in visual search, which will in turn inform the detailed design of better user interfaces. This article describes a detailed instantiation, in the form of a computational cognitive model, of a comprehensive theory of human visual processing known as "active vision" (Findlay & Gilchrist, 2003). The computational model is built using the Executive Process-Interactive Control cognitive architecture. Eye-tracking data from three experiments inform the development and validation of the model. The modeling asks -- and at least partially answers -- the four questions of active vision: (a) What can be perceived in a fixation? (b) When do the eyes move? (c) Where do the eyes move? (d) What information is integrated between eye movements? Answers include: (a) Items nearer the point of gaze are more likely to be perceived, and the visual features of objects are sometimes misidentified. (b) The eyes move after the fixated visual stimulus has been processed (i.e., has entered working memory). (c) The eyes tend to go to nearby objects. (d) Only the coarse spatial information of what has been fixated is likely maintained between fixations. The model developed to answer these questions has both scientific and practical value in that the model gives HCI researchers and practitioners a better understanding of how people visually interact with computers, and provides a theoretical foundation for predictive analysis tools that can predict aspects of that interaction.
© All rights reserved Halverson and Hornof and/or Lawrence Erlbaum
Hornof, Anthony J., Zhang, Yunfeng and Halverson, Tim (2010): Knowing where and when to look in a time-critical multimodal dual task. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 2103-2112.
Human-computer systems intended for time-critical multitasking need to be designed with an understanding of how humans can coordinate and interleave perceptual, memory, and motor processes. This paper presents human performance data for a highly-practiced time-critical dual task. In the first of the two interleaved tasks, participants tracked a target with a joystick. In the second, participants keyed-in responses to objects moving across a radar display. Task manipulations include the peripheral visibility of the secondary display (visible or not) and the presence or absence of auditory cues to assist with the radar task. Eye movement analyses reveal extensive coordination and overlapping of human information processes and the extent to which task manipulations helped or hindered dual task performance. For example, auditory cues helped only a little when the secondary display was peripherally visible, but they helped a lot when it was not peripherally visible.
© All rights reserved Hornof et al. and/or their publisher
Hornof, Anthony J. (2009): Designing with children with severe motor impairments. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 2177-2180.
Children with severe motor impairments, such as those resulting from severe cerebral palsy, benefit greatly from assistive technology, but very little guidance is available on how to collaborate with this population as partners in the design of such technology. To explore how to facilitate such collaborations, a field-based participant observation study, as well as structured and unstructured interviews, were conducted at a home for children with severe disabilities. Team-building collaborative design activities were pursued. Guidelines are proposed for how to collaborate with children with severe motor impairments.
© All rights reserved Hornof and/or ACM Press
Halverson, Tim and Hornof, Anthony J. (2008): The effects of semantic grouping on visual search. In: Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems April 5-10, 2008. pp. 3471-3476.
This paper reports on work-in-progress to better understand how users visually interact with hierarchically organized semantic information. Experimental reaction time and eye movement data are reported that give insight into strategies people employ while searching visual layouts containing words that are either grouped by category (i.e., semantically cohesive) or randomly grouped. Additionally, sometimes the category labels of the cohesive groups are displayed as part of the group. Preliminary results suggest that: (a) When groups are cohesive, people tend to search labeled and unlabeled layouts similarly. (b) People seem to trust the categorical information of labels more than non-labels. This work will be used to extend current computational models of visual search to better predict users' visual interaction with interfaces.
© All rights reserved Halverson and Hornof and/or ACM Press
Halverson, Tim and Hornof, Anthony J. (2007): A minimal model for predicting visual search in human-computer interaction. In: Proceedings of ACM CHI 2007 Conference on Human Factors in Computing Systems 2007. pp. 431-434.
Visual search is an important part of human-computer interaction. It is critical that we build theory about how people visually search displays in order to better support the users' visual capabilities and limitations in everyday tasks. One way of building such theory is through computational cognitive modeling. The ultimate promise for cognitive modeling in HCI is to provide the science base needed for predictive interface analysis tools. This paper discusses computational cognitive modeling of the perceptual, strategic, and oculomotor processes people used in a visual search task. This work refines and rounds out previously reported cognitive modeling and eye tracking analysis. A revised "minimal model" of visual search is presented that explains a variety of eye movement data better than the original model. The revised model uses a parsimonious strategy that is not tied to a particular visual structure or feature beyond the location of objects. Three characteristics of the minimal strategy are discussed in detail.
© All rights reserved Halverson and Hornof and/or ACM Press
Kaur, Ishwinder and Hornof, Anthony J. (2005): A comparison of LSA, WordNet and PMI-IR for predicting user click behavior. In: Proceedings of ACM CHI 2005 Conference on Human Factors in Computing Systems 2005. pp. 51-60.
A predictive tool to simulate human visual search behavior would help interface designers inform and validate their design. Such a tool would benefit from a semantic component that would help predict search behavior even in the absence of exact textual matches between goal and target. This paper discusses a comparison of three semantic systems -- LSA, WordNet, and PMI-IR -- to evaluate their performance in predicting the link that people would select given an information goal and a webpage. PMI-IR best predicted human performance as observed in a user study.
© All rights reserved Kaur and Hornof and/or ACM Press
Hornof, Anthony J. and Cavender, Anna (2005): EyeDraw: enabling children with severe motor impairments to draw with their eyes. In: Proceedings of ACM CHI 2005 Conference on Human Factors in Computing Systems 2005. pp. 161-170.
EyeDraw is a software program that, when run on a computer with an eye tracking device, enables children with severe motor disabilities to draw pictures by just moving their eyes. This paper discusses the motivation for building the software, how the program works, the iterative development of two versions of the software, user testing of the two versions by people with and without disabilities, and modifications to the software based on user testing. Feedback from both children and adults with disabilities, and from their caregivers, was especially helpful in the design process. The project identifies challenges that are unique to controlling a computer with the eyes, and unique to writing software for children with severe motor impairments.
© All rights reserved Hornof and Cavender and/or ACM Press
Hornof, Anthony J. (2004): Cognitive Strategies for the Visual Search of Hierarchical Computer Displays. In Human-Computer Interaction, 19 (3) pp. 183-223.
This article investigates the cognitive strategies that people use to search computer displays. Several different visual layouts are examined: unlabeled layouts that contain multiple groups of items but no group headings, labeled layouts in which items are grouped and each group has a useful heading, and a target-only layout that contains just one item. A number of plausible strategies were proposed for each layout. Each strategy was programmed into the EPIC cognitive architecture, producing models that simulate the human visual-perceptual, oculomotor, and cognitive processing required for the task. The models generate search time predictions. For unlabeled layouts, the mean layout search times are predicted by a purely random search strategy, and the more detailed positional search times are predicted by a noisy systematic strategy. The labeled layout search times are predicted by a hierarchical strategy in which first the group labels are systematically searched, and then the contents of the target group. The target-only layout search times are predicted by a strategy in which the eyes move directly to the sudden appearance of the target. The models demonstrate that human visual search performance can be explained largely in terms of the cognitive strategy that is used to coordinate the relevant perceptual and motor processes, that a clear and useful visual hierarchy triggers a fundamentally different visual search strategy and effectively gives the user greater control over the visual navigation, and that cognitive strategies will be an important component of a predictive visual search tool. The models provide insights pertaining to the visual-perceptual and oculomotor processes involved in visual search and contribute to the science base needed for predictive interface analysis.
© All rights reserved Hornof and/or Taylor and Francis
Hornof, Anthony J. and Halverson, Tim (2003): Cognitive strategies and eye movements for searching hierarchical computer displays. In: Cockton, Gilbert and Korhonen, Panu (eds.) Proceedings of the ACM CHI 2003 Human Factors in Computing Systems Conference April 5-10, 2003, Ft. Lauderdale, Florida, USA. pp. 249-256.
Hornof, Anthony J. (2001): Visual search and mouse-pointing in labeled versus unlabeled two-dimensional visual hierarchies. In ACM Transactions on Computer-Human Interaction, 8 (3) pp. 171-197.
An experiment investigates (1) how the physical structure of a computer screen layout affects visual search and (2) how people select a found target object with a mouse. Two structures are examined -- labeled visual hierarchies (groups of objects with one label per group) and unlabeled visual hierarchies (groups without labels). Search and selection times were separated by imposing a point-completion deadline that discouraged participants from moving the mouse until they found the target. The observed search times indicate that labeled visual hierarchies can be searched much more efficiently than unlabeled visual hierarchies, and suggest that people use a fundamentally different strategy for each of the two structures. The results have implications for screen layout design and cognitive modeling of visual search. The observed mouse-pointing times suggest that people use a slower and more accurate speed-accuracy operating characteristic to select a target with a mouse when visual distractors are present, which suggests that Fitts' law coefficients derived from standard mouse-pointing experiments may under-predict mouse-pointing times for typical human-computer interactions. The observed mouse-pointing times also demonstrate that mouse movement times for a two-dimensional pointing task can be most accurately predicted by setting the w in Fitts' law to the width of the target along the line of approach.
© All rights reserved Hornof and/or ACM Press
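The abstract above reports that 2D mouse-pointing times are best predicted by setting the w in Fitts' law to the target's width along the line of approach. A hedged sketch of that idea for a rectangular target; the coefficients a and b and the geometry helper below are illustrative assumptions, not values or code from the paper:

```python
import math

def fitts_time(distance, width_along, a=0.1, b=0.12):
    """Fitts' law MT = a + b * log2(D/W + 1), in seconds.
    a and b are illustrative coefficients, not from the paper."""
    return a + b * math.log2(distance / width_along + 1)

def width_along_approach(rect_w, rect_h, dx, dy):
    """Extent of a rect_w x rect_h rectangle along the (dx, dy)
    approach direction, measured through the target center
    (a simplification of 'width along the line of approach')."""
    norm = math.hypot(dx, dy)
    ux, uy = abs(dx) / norm, abs(dy) / norm
    half = min(rect_w / 2 / ux if ux else float("inf"),
               rect_h / 2 / uy if uy else float("inf"))
    return 2 * half
```

For a wide, short target (e.g. a menu item), a horizontal approach sees the full width while a vertical approach sees only the height, so the predicted movement times differ even at equal distance -- which is the point of making w direction-dependent.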
Hornof, Anthony J. and Kieras, David E. (1999): Cognitive Modeling Demonstrates How People Use Anticipated Location Knowledge of Menu Items. In: Altom, Mark W. and Williams, Marian G. (eds.) Proceedings of the ACM CHI 99 Human Factors in Computing Systems Conference May 15-20, 1999, Pittsburgh, Pennsylvania. pp. 410-417.
This research presents cognitive models of a person selecting an item from a familiar, ordered, pull-down menu. Two different models provide a good fit with human data and thus two different possible explanations for the low-level cognitive processes involved in the task. Both models assert that people make an initial eye and hand movement to an anticipated target location without waiting for the menu to appear. The first model asserts that a person knows the exact location of the target item before the menu appears, but the model uses nonstandard Fitts' law coefficients to predict mouse pointing time. The second model asserts that a person would only know the approximate location of the target item, and the model uses Fitts' law coefficients better supported by the literature. This research demonstrates that people can develop considerable knowledge of locations in a visual task environment, and that more work regarding Fitts' law is needed.
© All rights reserved Hornof and Kieras and/or ACM Press
Kieras, David E., Wood, Scott D., Abotel, Kasem and Hornof, Anthony J. (1995): GLEAN: A Computer-Based Tool for Rapid GOMS Model Usability Evaluation of User Interface Designs. In: Robertson, George G. (ed.) Proceedings of the 8th annual ACM symposium on User interface and software technology November 15 - 17, 1995, Pittsburgh, Pennsylvania, United States. pp. 91-100.
Engineering models of human performance permit some aspects of the usability of interface designs to be predicted from an analysis of the task, and thus can, to some extent, replace expensive user testing. The best developed such tools are GOMS models, which have been shown to be accurate and effective in predicting usability of the procedural aspects of interface designs. This paper describes a computer-based tool, GLEAN, that generates quantitative predictions from a supplied GOMS model and a set of benchmark tasks. GLEAN is demonstrated to reproduce the results of a case study of GOMS model application with considerable time savings over both manual modeling and empirical testing.
© All rights reserved Kieras et al. and/or ACM Press
Changes to this page (author):
10 Nov 2012: Modified
04 Apr 2012: Modified
02 Nov 2010: Modified
09 May 2009: Modified
12 May 2008: Modified
29 Jun 2007: Modified
29 Jun 2007: Modified
29 Jun 2007: Modified
19 Jun 2007: Modified
28 Apr 2003: Added
Page maintainer: The Editorial Team