Publication statistics

Publication period: 1983-2008
Publication count: 5
Number of co-authors: 11


Number of publications with 3 favourite co-authors:

Gregorio Convertino:
Dorrit Billman:
Peter Pirolli:



Productive colleagues

Jeff Shrager's 3 most productive colleagues in number of publications:

Thomas P. Moran: 66
Terry Winograd: 59
Peter Pirolli: 46




Jeff Shrager


Publications by Jeff Shrager (bibliography)


Bernstein, Michael S., Shrager, Jeff and Winograd, Terry (2008): Taskposé: exploring fluid boundaries in an associative window visualization. In: Cousins, Steve B. and Beaudouin-Lafon, Michel (eds.) Proceedings of the 21st Annual ACM Symposium on User Interface Software and Technology October 19-22, 2008, Monterey, CA, USA. pp. 231-234.


Convertino, Gregorio, Billman, Dorrit, Pirolli, Peter, Massar, J. and Shrager, Jeff (2008): The CACHE Study: Group Effects in Computer-supported Collaborative Analysis. In Computer Supported Cooperative Work, 17 (4) pp. 353-393.

The present experiment investigates effects of group composition in computer-supported collaborative intelligence analysis. Human cognition, though highly adaptive, is also quite limited, leading to systematic errors and limitations in performance -- that is, biases. We experimentally investigated the impact of group composition on an individual's bias, by composing groups that differ in whether their members' initial beliefs are diverse (heterogeneous group) or similar (homogeneous group). We study three-member, distributed, computer-supported teams in heterogeneous, homogeneous, and solo (or nominal) groups. We measured bias in final judgment, and also in the selection and evaluation of the evidence that contributed to the final beliefs. The distributed teams collaborated via CACHE-A, a web-based software environment that supports a collaborative version of Analysis of Competing Hypotheses (or ACH, a method used by intelligence analysts). Individuals in heterogeneous groups showed no net process cost, relative to noninteracting individuals. Both heterogeneous and solo (noninteracting) groups debiased strongly, given a stream of balanced evidence. In contrast, individuals in homogeneous groups did worst, accentuating their initial bias rather than debiasing. We offer suggestions about how CACHE-A supports collaborative analysis, and how experimental investigation in this research area can contribute to the design of CSCW systems.

© All rights reserved Convertino et al. and/or Kluwer Academic Publishers


Shrager, Jeff, Jordan, Daniel S., Moran, Thomas P., Kiczales, Gregor and Russell, Daniel M. (1987): Issues in the Pragmatics of Qualitative Modeling: Lessons Learned from a Xerographics Project. In Communications of the ACM, 30 (12) pp. 1036-1047.


Shrager, Jeff and Klahr, David (1986): Instructionless Learning about a Complex Device: The Paradigm and Observations. In International Journal of Man-Machine Studies, 25 (2) pp. 153-189.

In order to study the mechanisms that underlie "intuitive" scientific reasoning, verbal protocols were collected from seven computer-naive college students asked to "figure out" a Big Trak programmable toy, without a user's guide or other assistance. We call this paradigm Instructionless Learning. The present paper presents a detailed account of how people learn about a complex device in an instructionless-learning context. Subjects' behavior is divided into an orientation phase and a systematic phase. We attend most carefully to the systematic phase. Learners form hypotheses about various aspects of the Big Trak: the syntax of interaction, the semantics of operators, and the device model -- which includes objects such as memories, switches, etc. Subjects attempt to confirm hypotheses from which predictions can be made, to refine hypotheses that do not immediately yield predictions, and to verify their total knowledge of the device. Hypotheses are formulated from observation. If an initial hypothesis is incorrect, it will yield incorrect predictions in interactions. When such failures occur, learners change their theory to account for the currently perceived behavior of the device. These changes are often based upon little evidence and may even be contradicted by available information. Thus, the new hypotheses may also be incorrect, and lead to further errors and changes.

© All rights reserved Shrager and Klahr and/or Academic Press


Shrager, Jeff and Klahr, David (1983): Learning in an Instructionless Environment: Observation and Analysis. In: Smith, Raoul N., Pew, Richard W. and Janda, Ann (eds.) Proceedings of the ACM CHI 83 Human Factors in Computing Systems Conference December 12-15, 1983, Boston, Massachusetts, United States. pp. 226-229.

In an instructionless environment, there are neither teachers nor books. The only feedback comes from interaction with the target. All information appears from within the subject or from observation of the environment. In this setting, subjects rely upon experimentation to develop an understanding of the target. They form hypotheses by analogy or inference and test these hypotheses via experiments of calibration, replication, confirmation, exploration, and discrimination. This paper describes subjects' performance in a particular instructionless environment. The target object is a programmable toy robot tank. We use the hypotheses formed by subjects and the form of the experiments performed to assess subjects' knowledge of the system. This knowledge falls into distinguishable categories: syntactic knowledge of the programming language, semantic knowledge of the actions of the device, and model knowledge, which addresses the structure of the device. Exemplary selections of our protocols are used to support the various aspects of the learning model.

© All rights reserved Shrager and Klahr and/or ACM Press


