Number of co-authors: 10

Number of publications with 3 favourite co-authors:
Charlotte Wiberg: 2
Michael E. Atwood: 2
John C. Thomas: 1

Heather Desurvire's 3 most productive colleagues in number of publications:
Michael E. Atwood: 48
John C. Thomas: 27
Robin Jeffries: 21
Knowledge is commonly socially constructed, through collaborative efforts towards shared objectives or by dialogues and challenges brought about by different persons' perspectives.
-- G. Salomon (in "Distributed Cognitions: Psychological and Educational Considerations")
Publications by Heather Desurvire (bibliography)
Wiberg, Charlotte, Jegers, Kalle and Desurvire, Heather (2009): How Applicable is Your Evaluation Methods -- Really? Analysis and Re-design of Evaluation Methods for Fun and Entertainment. In: Proceedings of the 2009 International Conference on Advances in Computer-Human Interactions 2009. pp. 324-328.
This paper presents an approach to analyzing and re-designing evaluation methods. The domain explored in this paper is evaluation methods for evaluating fun and entertainment; however, the approach presented may be applied in other domains as well. The approach is conceptually described, and two examples of processes where the approach was used in practice are further discussed. As the map of IT applications and digital media is continuously re-designed, there is a constant need to re-design evaluation methods.
© All rights reserved Wiberg et al. and/or IEEE
Desurvire, Heather and Wiberg, Charlotte (2008): Master of the game: assessing approachability in future game design. In: Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems April 5-10, 2008. pp. 3177-3182.
Game Approachability Principles (GAP) is proposed as a set of useful guidelines for game designers to create better tutorials, or first learning levels, especially for the casual gamer. Developing better first learning levels can be a key step to ease the casual gamer into play, and to do so proactively, before it is too costly or cumbersome to restructure the tutorials to be more effective. Thus, Game Approachability in the context of game development is defined as making games initially more friendly and accessible for players who have the desire to play, yet do not always follow through to actually play. GAP has evolved through a series of stages assessing its applicability as a stand-alone, heuristic-based approach versus one-on-one usability testing. Outcomes suggest potential for GAP as (1) an effective Heuristic Evaluation, (2) an adjunct to Usability Testing, and (3) a proactive filter in beginning conceptual and first-learning-level tutorial design to increase Game Approachability, for all levels of gamers.
© All rights reserved Desurvire and Wiberg and/or ACM Press
Desurvire, Heather and Schwartz, Lauren (1996): The HCI Professional as Consultant. In ACM SIGCHI Bulletin, 28 (4) pp. 39-41.
Desurvire, Heather and Thomas, John C. (1993): Enhancing the Performance of Interface Evaluators Using Non-Empirical Usability Methods. In: Proceedings of the Human Factors and Ergonomics Society 37th Annual Meeting 1993. pp. 1132-1136.
Heuristic Evaluation has been shown to be a quick, cost-effective methodology that can lead to early identification of many of the same user interface errors as laboratory usability studies. In this paper, we describe a method designed to enhance the performance of expert, system developer, and non-expert evaluators. The evaluators most proficient at Heuristic Evaluation are Human-Factors Experts (Desurvire, Lawrence and Atwood, 1991; Desurvire, Kondziela and Atwood, 1992; Jeffries, Miller, Wharton and Uyeda, 1991) and, most notably, "double experts" (Nielsen, 1992). Similar results were obtained for the Cognitive Walkthrough developed by Lewis, Polson, Wharton and Rieman, 1990 (Desurvire et al., 1992; Jeffries et al., 1991). We were interested in whether a non-empirical method could be developed in which evaluators other than Human Factors Experts can perform nearly as well as Experts. Desurvire et al. (1992) found that Heuristic Evaluation and Cognitive Walkthroughs not only predicted problems observed in laboratory studies but also encouraged evaluators to suggest improvements. In addition, non-empirical methods stimulated evaluators to point out problems that would be likely to occur in actual use, but would not be observed in laboratory studies. We were interested in expanding this finding by developing a method that encouraged a broader scope of thinking, and thus a broader evaluation. In this paper, we describe the method Programmed Amplification of Valuable Experts (PAVE) and how it enhanced the performance of System Developers and Non-Human-Factors-Expert evaluators. Future work is discussed in which results from real users in the field will be compared to these results.
© All rights reserved Desurvire and Thomas and/or Human Factors Society
Desurvire, Heather, Kondziela, Jim M. and Atwood, Michael E. (1992): What is Gained and Lost when Using Evaluation Methods Other than Empirical Testing. In: Monk, Andrew, Diaper, Dan and Harrison, Michael D. (eds.) Proceedings of the Seventh Conference of the British Computer Society Human Computer Interaction Specialist Group - People and Computers VII August 15-18, 1992, University of York, UK. pp. 89-102.
There is increasing interest in finding usability testing methods that are easier and cheaper to implement than traditional laboratory usability testing. Recent research has looked at a few of these methods. The current study uses three groups of evaluators with different types of expertise to evaluate a telephone-based interface using two different evaluation methods, the Cognitive Walkthrough and Heuristic Evaluation. These data are compared to laboratory results. Specific problems named in the laboratory and by the evaluator groups are analyzed for what contributions are made by each evaluator group under each method, and what is lost when traditional usability testing cannot be implemented. Future research directions are also discussed.
© All rights reserved Desurvire et al. and/or Cambridge University Press
Jeffries, Robin and Desurvire, Heather (1992): The Interactive Matrix Chart. In ACM SIGCHI Bulletin, 24 (4) pp. 39-41.
Recent research comparing usability assessment methods has been interpreted by some to imply that usability testing is no longer necessary, because other techniques, such as heuristic evaluation, can find some usability problems more cost-effectively. Such an interpretation grossly overstates the actual results of the studies. In this article, we, as authors of studies that compared inspection methods to usability testing, point out the rather severe limitations to using inspection methods as a substitute for usability testing and argue for a more balanced repertoire of usability assessment techniques.
© All rights reserved Jeffries and Desurvire and/or ACM Press
Desurvire, Heather, Lawrence, Debbie and Atwood, Michael E. (1991): Empiricism versus Judgement: Comparing User Interface Evaluation Methods on a New Telephone-Based Interface. In ACM SIGCHI Bulletin, 23 (4) pp. 58-59.
Stuart, Rory, Desurvire, Heather and Dews, Shelly (1991): The Truncation of Prompts in Phone Based Interfaces: Using TOTT in Evaluations. In: Proceedings of the Human Factors Society 35th Annual Meeting 1991. pp. 230-234.
The Intelligent Interfaces Group at NYNEX Science and Technology has evaluated numerous phone based interfaces (PBIs) during the course of iterative design. Many of the PBIs studied allow users to truncate spoken prompts by pressing keys on their touch-tone telephones. We have found that mistaken assumptions about how and when users will truncate spoken prompts may lead to large discrepancies between the expectations of system designers and the behavior of users. In order to study truncation behavior, we created the Task Oriented Taxonomy of Truncation (TOTT). This taxonomy can be used to describe the behavior of users in truncating spoken prompts in PBIs. TOTT was found to facilitate our understanding of users' truncation behavior and allowed us to change the PBI prompts to better fit this behavior. We found that many users did not interrupt the spoken prompts and we speculate that they may be using a model of turn-taking from human conversation. Future areas of research and applications of TOTT are discussed.
© All rights reserved Stuart et al. and/or Human Factors Society
Changes to this page (author):
09 Nov 2012: Added
19 Feb 2010: Modified
12 May 2008: Added
26 Jun 2007: Added
26 Jun 2007: Added
24 Jun 2007: Added
24 Jun 2007: Added
23 Jun 2007: Added
28 Apr 2003: Added
Page maintainer: The Editorial Team