Publication statistics

Pub. period: 1973-2012
Pub. count: 14
Number of co-authors: 23


Number of publications with 3 favourite co-authors:

Niklas Röber:
Guang-Zhong Yang:
Marios Nicolaou:



Productive colleagues

M. Stella Atkins's 3 most productive colleagues by number of publications:

Kori Inkpen: 70
Regan L. Mandryk: 29
M. Sheelagh T. Carpendale: 27





M. Stella Atkins


Publications by M. Stella Atkins (bibliography)


Tien, Geoffrey, Atkins, M. Stella and Zheng, Bin (2012): Measuring gaze overlap on videos between multiple observers. In: Proceedings of the 2012 Symposium on Eye Tracking Research & Applications 2012. pp. 309-312.

For gaze-based training in surgery to be meaningful, the similarity between a trainee's gaze and an expert's gaze during performance of surgical tasks must be assessed. As it is difficult to record two people's gaze simultaneously, we produced task videos made by experts, and measured the amount of overlap between the gaze path of the expert surgeon and third-party observers while watching the videos. For this investigation, we developed a new, simple method for displaying and summarizing the proportion of time during which two observers' points of gaze on a common stimulus were separated by no more than a specified visual angle. In a study of single-observer self-review and multiple-observer initial view of a laparoscopic training task, we predicted that self-review would produce the highest overlap. We found relatively low overlap between watchers and the task performer; even operators with detailed task knowledge produce low overlap when watching their own videos. Conversely, there was a high overlap among all watchers. Results indicate that it may be insufficient to improve trainees' eye-hand coordination by just watching a video. Gaze training will need to be integrated with other teaching methods to be effective.

© All rights reserved Tien et al. and/or ACM Press
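The overlap measure described in this abstract — the proportion of time during which two observers' points of gaze are separated by no more than a specified visual angle — can be sketched in a few lines. The thresholding idea is from the abstract; the function name, the pixel-to-centimetre conversion, and the per-sample pairing are illustrative assumptions, not the paper's actual implementation.

```python
import math

def gaze_overlap(gaze_a, gaze_b, max_angle_deg, screen_dist_cm, px_per_cm):
    """Proportion of paired gaze samples whose points of gaze on a common
    stimulus are separated by no more than max_angle_deg of visual angle.

    gaze_a, gaze_b: equal-length sequences of (x, y) screen positions in pixels.
    screen_dist_cm: viewing distance; px_per_cm: screen resolution (assumed).
    """
    within = 0
    for (xa, ya), (xb, yb) in zip(gaze_a, gaze_b):
        sep_px = math.hypot(xa - xb, ya - yb)          # on-screen separation
        sep_cm = sep_px / px_per_cm
        # Visual angle subtended by the separation at the viewing distance.
        angle_deg = math.degrees(2 * math.atan2(sep_cm / 2, screen_dist_cm))
        if angle_deg <= max_angle_deg:
            within += 1
    return within / len(gaze_a)
```

A high return value would correspond to the "high overlap among all watchers" the study reports; a low one to the doer-vs-watcher mismatch.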


Atkins, M. Stella, Jiang, Xianta, Tien, Geoffrey and Zheng, Bin (2012): Saccadic delays on targets while watching videos. In: Proceedings of the 2012 Symposium on Eye Tracking Research & Applications 2012. pp. 405-408.

To observe whether there is a difference in eye gaze between doing a task, and watching a video of the task, we recorded the gaze of 17 subjects performing a simple surgical eye-hand coordination task. We also recorded eye gaze of the same subjects later while they were watching videos of their performance. We divided the task into 9 or more sub-tasks, each of which involved a large hand movement to a new target location. We analyzed the videos manually and located the video frame for each sub-task where the operator's saccadic movement began, and the frame where the watcher's eye movement began. We found a consistent delay of about 600 ms between initial eye movement when doing the task, and initial eye movement when watching the task, observed in 96.3% of the sub-tasks. For the first time, we have quantified the differences between doing and watching a manual task. This will help develop gaze-based training strategies for manual tasks.

© All rights reserved Atkins et al. and/or ACM Press
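The doing-vs-watching delay reported above is simple frame arithmetic: for each sub-task, the difference between the operator's and the watcher's saccade-onset frames, converted to milliseconds at the video frame rate. A minimal sketch, assuming a hypothetical function name and a 30 fps recording (the paper does not state its frame rate here):

```python
def mean_saccade_delay_ms(doer_onsets, watcher_onsets, fps=30.0):
    """Mean per-sub-task delay (ms) from the operator's saccade-onset frame
    to the watcher's saccade-onset frame, given a video frame rate."""
    delays = [(w - d) * 1000.0 / fps
              for d, w in zip(doer_onsets, watcher_onsets)]
    return sum(delays) / len(delays)
```

With onset frames located manually per sub-task, as in the study, a consistent positive result on the order of 600 ms would match the reported finding.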


Tien, Geoffrey and Atkins, M. Stella (2008): Improving hands-free menu selection using eyegaze glances and fixations. In: Räihä, Kari-Jouko and Duchowski, Andrew T. (eds.) ETRA 2008 - Proceedings of the Eye Tracking Research and Application Symposium March 26-28, 2008, Savannah, Georgia, USA. pp. 47-50.


Mandryk, Regan L. and Atkins, M. Stella (2007): A fuzzy physiological approach for continuously modeling emotion during interaction with play technologies. In International Journal of Human-Computer Studies, 65 (4) pp. 329-347.

The popularity of computer games has exploded in recent years, yet methods of evaluating user emotional state during play experiences lag far behind. There are few methods of assessing emotional state, and even fewer methods of quantifying emotion during play. This paper presents a novel method for continuously modeling emotion using physiological data. A fuzzy logic model transformed four physiological signals into arousal and valence. A second fuzzy logic model transformed arousal and valence into five emotional states relevant to computer game play: boredom, challenge, excitement, frustration, and fun. Modeled emotions compared favorably with a manual approach, and the means were also evaluated with subjective self-reports, exhibiting the same trends as reported emotions for fun, boredom, and excitement. This approach provides a method for quantifying emotional states continuously during a play experience.

© All rights reserved Mandryk and Atkins and/or Academic Press
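The two-stage mapping described in this abstract — physiological signals to arousal/valence, then arousal/valence to the five play-related states — can be illustrated with the second stage. The actual membership functions and rule base are the paper's; the triangular memberships and rules below are toy assumptions purely to show the structure.

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def emotion_memberships(arousal, valence):
    """Toy second-stage rule base: degrees of membership in the five
    play-related states, from arousal and valence in [0, 1]."""
    low  = lambda x: tri(x, -0.5, 0.0, 0.5)
    mid  = lambda x: tri(x, 0.0, 0.5, 1.0)
    high = lambda x: tri(x, 0.5, 1.0, 1.5)
    return {
        "boredom":     min(low(arousal), low(valence)),    # calm, unpleasant
        "frustration": min(high(arousal), low(valence)),   # aroused, unpleasant
        "excitement":  min(high(arousal), high(valence)),  # aroused, pleasant
        "fun":         min(mid(arousal), high(valence)),   # moderate, pleasant
        "challenge":   min(high(arousal), mid(valence)),   # aroused, neutral
    }
```

Evaluated once per sample, such a model yields the continuous emotion traces over a play session that the paper compares against subjective self-reports.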


Mandryk, Regan L., Atkins, M. Stella and Inkpen, Kori (2006): A continuous and objective evaluation of emotional experience with interactive play environments. In: Proceedings of ACM CHI 2006 Conference on Human Factors in Computing Systems 2006. pp. 1027-1036.

Researchers are using emerging technologies to develop novel play environments, while established computer and console game markets continue to grow rapidly. Even so, evaluating the success of interactive play environments is still an open research challenge. Both subjective and objective techniques fall short due to limited evaluative bandwidth; there remains no corollary in play environments to task performance with productivity systems. This paper presents a method of modeling user emotional state, based on a user's physiology, for users interacting with play technologies. Modeled emotions are powerful because they capture usability and playability through metrics relevant to ludic experience; account for user emotion; are quantitative and objective; and are represented continuously over a session. Furthermore, our modeled emotions show the same trends as reported emotions for fun, boredom, and excitement; however, the modeled emotions revealed differences between three play conditions, while the differences between the subjective reports failed to reach significance.

© All rights reserved Mandryk et al. and/or ACM Press


Lam, Heidi, Kirkpatrick, Arthur E., Dill, John and Atkins, M. Stella (2006): Effective Display of Medical Laboratory Report Results on Small Screens: Evaluation of Linear and Hierarchical Displays. In International Journal of Human-Computer Interaction, 21 (1) pp. 73-89.

Two studies evaluated linear and hierarchy+elision small-screen display formats for clinical reasoning tasks. A controlled, quantitative study with 28 medically naive participants using a task abstracted from clinical use of laboratory results found that both display formats supported rapid and accurate decision making. Distribution of the search targets significantly affected speed, with decisions in linear format made 13% faster (4.7 sec) when all targets could be viewed on a single screen than when targets required scrolling between several screens and in hierarchical format 15% faster (5.1 sec) when all the targets were confined within one category. Performance was equivalent regardless of the relative order of the target results and data in the laboratory report. In a qualitative study, 7 physicians used the displays to perform a realistic diagnosis. Physicians were comfortable with both display formats, but preference varied with clinical experience. The 5 less experienced clinicians favored hierarchy+elision, whereas the 2 highly experienced clinicians tended to prefer the linear display.

© All rights reserved Lam et al. and/or Lawrence Erlbaum Associates


Tory, Melanie, Atkins, M. Stella, Kirkpatrick, Arthur E., Nicolaou, Marios and Yang, Guang-Zhong (2005): Eyegaze Analysis of Displays With Combined 2D and 3D Views. In: 16th IEEE Visualization Conference VIS 2005 23-28 October, 2005, Minneapolis, MN, USA. p. 66.


Tory, Melanie, Möller, Torsten, Atkins, M. Stella and Kirkpatrick, Arthur E. (2004): Combining 2D and 3D views for orientation and relative position tasks. In: Dykstra-Erickson, Elizabeth and Tscheligi, Manfred (eds.) Proceedings of ACM CHI 2004 Conference on Human Factors in Computing Systems April 24-29, 2004, Vienna, Austria. pp. 73-80.

We compare 2D/3D combination displays to displays with 2D and 3D views alone. Combination displays we consider are: orientation icon (i.e., side-by-side), in-place methods (e.g., clip planes), and a new method called ExoVis. We specifically analyze performance differences (i.e., time and accuracy) for 3D orientation and relative position tasks. Empirical results show that 3D displays are effective for approximate navigation and relative positioning whereas 2D/3D combination displays (orientation icon and ExoVis) are useful for precise orientation and position tasks. Combination 2D/3D displays had as good or better performance as 2D displays. Clip planes were not effective for a 3D orientation task, but may be useful when only one slice is needed.

© All rights reserved Tory et al. and/or ACM Press


Law, Benjamin, Atkins, M. Stella, Kirkpatrick, Arthur E. and Lomax, Alan J. (2004): Eye gaze patterns differentiate novice and experts in a virtual laparoscopic surgery training environment. In: Duchowski, Andrew T. and Vertegaal, Roel (eds.) ETRA 2004 - Proceedings of the Eye Tracking Research and Application Symposium March 22-24, 2004, San Antonio, Texas, USA. pp. 41-48.


Tory, Melanie, Röber, Niklas, Möller, Torsten, Celler, Anna and Atkins, M. Stella (2001): 4D Space-Time Techniques: A Medical Imaging Case Study. In: Ertl, Thomas, Joy, Kenneth I. and Varshney, Amitabh (eds.) IEEE Visualization 2001 October 24-26, 2001, San Diego, CA, USA.


Heyden, Johanna E. van der, Inkpen, Kori, Atkins, M. Stella and Carpendale, M. S. T. (1999): A User Centered Task Analysis of Interface Requirements for MRI Viewing. In: Graphics Interface 99 June 2-4, 1999, Kingston, Ontario, Canada. pp. 18-26.


Heyden, Johanna E. van der, Carpendale, M. Sheelagh T., Inkpen, Kevin B. and Atkins, M. Stella (1998): Visual presentation of magnetic resonance images. In: IEEE Visualization 1998 1998. pp. 423-426.


Atkins, M. Stella, Zuk, Torre, Johnston, B. and Arden, T. (1994): Role of Visual Languages in Developing Image Analysis Algorithms. In: VL 1994 1994. pp. 262-269.


Atkins, M. Stella (1973): Mutual Recursion in Algol 60 Using Restricted Compilers. In Communications of the ACM, 16 (1) pp. 47-48.


Page Information

Page maintainer: The Editorial Team