
The MIT Press

No description of The MIT Press is available.

 
 
 

Examples of published books

Landauer, Thomas K. (1996): The Trouble with Computers: Usefulness, Usability, and Productivity. The MIT Press

Despite enormous investments in computers over the last twenty years, productivity in the very service industries at which they were aimed virtually stagnated everywhere in the world. If computers are not making businesses, organizations, or countries more productive, then why are we spending so much time and money on them? Cutting through a raft of technical data, Thomas Landauer explains and illustrates why computers are in trouble and why massive outlays for computing since 1973 have not resulted in comparable productivity payoffs. Citing some of his own successful research programs, as well as many others, Landauer offers solutions to the problems he describes.

While acknowledging that mismanagement, organizational barriers, learning curves, and hardware and software incompatibilities can play a part in the productivity paradox, Landauer targets individual utility and usability as the main culprits. He marshals overwhelming evidence that computers rarely improve the efficiency of the information work they are designed for because they are too hard to use and do too little that is sufficiently useful. Their many features, designed to make them more marketable, merely increase cost and complexity. Landauer proposes that emerging techniques for user-centered development can turn the situation around. Through task analysis, iterative design, trial use, and evaluation, computer systems can be made into powerful tools for the service economy.

Landauer estimates that the application of these methods would make computers have the same enormous impact on productivity and standard of living that were the historical results of technological advances in energy use (the steam engine, electric motors), automation in textiles and other manufacture, and in agriculture. He presents solid evidence for this claim, and for a huge benefit-to-cost ratio for user-centered design activities, backed by descriptions of how to do these necessary things, of promising applications for better computer software designs in business, and of the relation of user-centered design to business process reengineering, quality, and management.

© All rights reserved Landauer and/or The MIT Press

Picard, Rosalind W. (1997): Affective Computing. Cambridge, MA, The MIT Press

Cypher, Allen (1993): Watch What I Do: Programming by Demonstration. Cambridge, MA, The MIT Press

Moggridge, Bill (2007): Designing Interactions. The MIT Press

Digital technology has changed the way we interact with everything from the games we play to the tools we use at work. Designers of digital technology products no longer regard their job as designing a physical object--beautiful or utilitarian--but as designing our interactions with it. In Designing Interactions, award-winning designer Bill Moggridge introduces us to forty influential designers who have shaped our interaction with technology. Moggridge, designer of the first laptop computer (the GRiD Compass, 1981) and a founder of the design firm IDEO, tells us these stories from an industry insider's viewpoint, tracing the evolution of ideas from inspiration to outcome. The innovators he interviews--including Will Wright, creator of The Sims, Larry Page and Sergey Brin, the founders of Google, and Doug Engelbart, Bill Atkinson, and others involved in the invention and development of the mouse and the desktop--have been instrumental in making a difference in the design of interactions. Their stories chart the history of entrepreneurial design development for technology.

Moggridge and his interviewees discuss such questions as why a personal computer has a window in a desktop, what made Palm's handheld organizers so successful, what turns a game into a hobby, why Google is the search engine of choice, and why 30 million people in Japan choose the i-mode service for their cell phones. And Moggridge tells the story of his own design process and explains the focus on people and prototypes that has been successful at IDEO--how the needs and desires of people can inspire innovative designs and how prototyping methods are evolving for the design of digital technology.

Designing Interactions is illustrated with more than 700 images, with color throughout. Accompanying the book is a DVD that contains segments from all the interviews intercut with examples of the interactions under discussion.

Interviews with: Bill Atkinson • Durrell Bishop • Brendan Boyle • Dennis Boyle • Paul Bradley • Duane Bray • Sergey Brin • Stu Card • Gillian Crampton Smith • Chris Downs • Tony Dunne • John Ellenby • Doug Engelbart • Jane Fulton Suri • Bill Gaver • Bing Gordon • Rob Haitani • Jeff Hawkins • Matt Hunter • Hiroshi Ishii • Bert Keely • David Kelley • Rikako Kojima • Brenda Laurel • David Liddle • Lavrans Løvlie • John Maeda • Paul Mercer • Tim Mott • Joy Mountford • Takeshi Natsuno • Larry Page • Mark Podlaseck • Fiona Raby • Cordell Ratzlaff • Ben Reason • Jun Rekimoto • Steve Rogers • Fran Samalionis • Larry Tesler • Bill Verplank • Terry Winograd • Will Wright

© All rights reserved Moggridge and/or The MIT Press

Rheingold, Howard (2000): The Virtual Community: Homesteading on the Electronic Frontier. Cambridge, MA, The MIT Press

Howard Rheingold has been called the First Citizen of the Internet. In this book he tours the "virtual community" of online networking. He describes a community that is as real and as much a mixed bag as any physical community -- one where people talk, argue, seek information, organize politically, fall in love, and dupe others. At the same time that he tells moving stories about people who have received online emotional support during devastating illnesses, he acknowledges a darker side to people's behavior in cyberspace. Indeed, contends Rheingold, people relate to each other online much the same as they do in physical communities.

Originally published in 1993, The Virtual Community is more timely than ever. This edition contains a new chapter, in which the author revisits his ideas about online social communication now that so much more of the world's population is wired. It also contains an extended bibliography.

© All rights reserved Rheingold and/or The MIT Press

McCarthy, John and Wright, Peter (2004): Technology as Experience. The MIT Press

In Technology as Experience, John McCarthy and Peter Wright argue that any account of what is often called the user experience must take into consideration the emotional, intellectual, and sensual aspects of our interactions with technology. We don't just use technology, they point out; we live with it. They offer a new approach to understanding human-computer interaction through examining the felt experience of technology. Drawing on the pragmatism of such philosophers as John Dewey and Mikhail Bakhtin, they provide a framework for a clearer analysis of technology as experience. Just as Dewey, in Art as Experience, argued that art is part of everyday lived experience and not isolated in a museum, McCarthy and Wright show how technology is deeply embedded in everyday life. The "zestful integration" or transcendent nature of the aesthetic experience, they say, is a model of what human experience with technology might become. McCarthy and Wright illustrate their theoretical framework with real-world examples that range from online shopping to ambulance dispatch. Their approach to understanding human-computer interaction—seeing it as creative, open, and relational, part of felt experience—is a measure of the fullness of technology's potential to be more than merely functional.

© All rights reserved McCarthy and Wright and/or The MIT Press

Kaptelinin, Victor and Nardi, Bonnie A. (2006): Acting with Technology: Activity Theory and Interaction Design. The MIT Press

Kaptelinin, Victor and Czerwinski, Mary (2007): Beyond the Desktop Metaphor: Designing Integrated Digital Work Environments. The MIT Press

Spinuzzi, Clay (2003): Tracing Genres through Organizations: A Sociocultural Approach to Information Design (Acting with Technology). The MIT Press

In Tracing Genres through Organizations, Clay Spinuzzi examines the everyday improvisations by workers who deal with designed information and shows how understanding this impromptu creation can improve information design. He argues that the traditional user-centered approach to design does not take into consideration the unofficial genres that spring up as workers write notes, jot down ideas, and read aloud from an officially designed text. These often ephemeral innovations in information design are vital components in a genre ecology (the complex of artifacts mediating a given activity). When these innovations are recognized for what they are, they can be traced and their evolution as solutions to recurrent design problems can be studied. Spinuzzi proposes a sociocultural method for studying these improvised innovations that draws on genre theory (which provides the unit of analysis, the genre) and activity theory (which provides a theory of mediation and a way to study the different levels of activity in an organization).

After defining terms and describing the method of genre tracing, the book shows the methodology at work in four interrelated studies of traffic workers in Iowa and their use of a database of traffic accidents. These workers developed an ingenious array of ad hoc innovations to make the database better serve their needs. Spinuzzi argues that these inspired improvisations by workers can tell us a great deal about how designed information fails or succeeds in meeting workers' needs. He concludes by considering how the insights reached in studying genre innovation can guide information design itself.

© All rights reserved Spinuzzi and/or The MIT Press

Nardi, Bonnie A. (1995): Context and Consciousness: Activity Theory and Human-Computer Interaction. The MIT Press

Hippel, Eric von (2005): Democratizing Innovation. The MIT Press

Innovation is rapidly becoming democratized. Users, aided by improvements in computer and communications technology, increasingly can develop their own new products and services. These innovating users—both individuals and firms—often freely share their innovations with others, creating user-innovation communities and a rich intellectual commons. In Democratizing Innovation, Eric von Hippel looks closely at this emerging system of user-centered innovation. He explains why and when users find it profitable to develop new products and services for themselves, and why it often pays users to reveal their innovations freely for the use of all. The trend toward democratized innovation can be seen in software and information products—most notably in the free and open-source software movement—but also in physical products. Von Hippel's many examples of user innovation in action range from surgical equipment to surfboards to software security features. He shows that product and service development is concentrated among "lead users," who are ahead on marketplace trends and whose innovations are often commercially attractive. Von Hippel argues that manufacturers should redesign their innovation processes and that they should systematically seek out innovations developed by users. He points to businesses—the custom semiconductor industry is one example—that have learned to assist user-innovators by providing them with toolkits for developing new products. User innovation has a positive impact on social welfare, and von Hippel proposes that government policies, including R&D subsidies and tax credits, should be realigned to eliminate biases against it. The goal of a democratized user-centered innovation system, says von Hippel, is well worth striving for. An electronic version of this book is available under a Creative Commons license.

© All rights reserved Hippel and/or The MIT Press

Foray, Dominique (2004): The Economics of Knowledge. The MIT Press

The economics of knowledge is a rapidly emerging subdiscipline of economics that has never before been given the comprehensive and cohesive treatment found in this book. Dominique Foray analyzes the deep conceptual and structural transformation of our economic activities that has led to a gradual shift to knowledge-intensive activities. This transformation is the result of the collision of a longstanding trend—the expansion of knowledge-based investments and activities—with a technological revolution that radically altered the production and transmission of knowledge and information. The book focuses on the dual nature of the economics of knowledge: its emergence as a discipline (which Foray calls "the economics of knowledge") and the historical development of a particular period in the growth and organization of economic activities ("the knowledge-based economy"). The book, which alternates between analysis of the economic transformation and examination of the tools and concepts of the discipline, begins by discussing "knowledge" as an economic good and the historical development of the knowledge-based economies. It then develops a conceptual framework for considering the issues raised. Topics considered in the remaining chapters include forms of knowledge production, codification and diffusion, incentives and institutions for the efficient production of knowledge (including discussions of private markets and "open" sources), and knowledge management as a new organizational capability. Finally, the book addresses policy concerns suggested by the uneven development of knowledge across different sectors and by the need to find ways of reclaiming the public dimension of knowledge from an essentially privatized knowledge revolution.

© All rights reserved Foray and/or The MIT Press

Jaffe, Adam B., Lerner, Josh and Stern, Scott (2001): Innovation Policy and the Economy, Vol. 1. The MIT Press

Enos, John L. (1962): Petroleum Progress and Profits: A History of Process Innovation. The MIT Press

Feller, Joseph, Fitzgerald, Brian, Hissam, Scott A., Lakhani, Karim R., Shirky, Clay and Cusumano, Michael (2005): Perspectives on Free and Open Source Software. The MIT Press

Erickson, Thomas and McDonald, David W. (2007): HCI Remixed: Reflections on Works That Have Influenced the HCI Community. The MIT Press

Fishwick, Paul A. (ed.) (2006): Aesthetic Computing. The MIT Press

In Aesthetic Computing, key scholars and practitioners from art, design, computer science, and mathematics lay the foundations for a discipline that applies the theory and practice of art to computing. Aesthetic computing explores the way art and aesthetics can play a role in different areas of computer science. One of its goals is to modify computer science by the application of the wide range of definitions and categories normally associated with making art. For example, structures in computing might be represented using the style of Gaudi or the Bauhaus school. This goes beyond the usual definition of aesthetics in computing, which most often refers to the formal, abstract qualities of such structures—a beautiful proof, or an elegant diagram. The contributors to this book discuss the broader spectrum of aesthetics—from abstract qualities of symmetry and form to ideas of creative expression and pleasure—in the context of computer science. The assumption behind aesthetic computing is that the field of computing will be enriched if it embraces all of aesthetics. Human-computer interaction will benefit—"usability," for example, could refer to improving a user's emotional state—and new models of learning will emerge.

Aesthetic Computing approaches its subject from a variety of perspectives. After defining the field and placing it in its historical context, the book looks at art and design, mathematics and computing, and interface and interaction. Contributions range from essays on the art of visualization and "the poesy of programming" to discussions of the aesthetics of mathematics throughout history and transparency and reflectivity in interface design.

Contributors: James Alty, Olav W. Bertelsen, Jay David Bolter, Donna Cox, Stephan Diehl, Mark d'Inverno, Michele Emmer, Paul Fishwick, Monika Fleischmann, Ben Fry, Carsten Görg, Susanne Grabowski, Diane Gromala, Kenneth A. Huff, John Lee, Frederic Fol Leymarie, Michael Leyton, Jonas Löwgren, Roger F. Malina, Laurent Mignonneau, Frieder Nake, Ray Paton, Jane Prophet, Aaron Quigley, Casey Reas, Christa Sommerer, Wolfgang Strauss, Noam Tractinsky, Paul Vickers, Dror Zmiri

© All rights reserved Fishwick and/or The MIT Press

Turkle, Sherry (2005): The Second Self: Computers and the Human Spirit - Twentieth Anniversary Edition. The MIT Press

In The Second Self, Sherry Turkle looks at the computer not as a "tool," but as part of our social and psychological lives; she looks beyond how we use computer games and spreadsheets to explore how the computer affects our awareness of ourselves, of one another, and of our relationship with the world. "Technology," she writes, "catalyzes changes not only in what we do but in how we think." First published in 1984, The Second Self is still essential reading as a primer in the psychology of computation. This twentieth anniversary edition allows us to reconsider two decades of computer culture--to (re)experience what was and is most novel in our new media culture and to view our own contemporary relationship with technology with fresh eyes. Turkle frames this classic work with a new introduction, a new epilogue, and extensive notes added to the original text.

Turkle talks to children, college students, engineers, AI scientists, hackers, and personal computer owners--people confronting machines that seem to think and at the same time suggest a new way for us to think--about human thought, emotion, memory, and understanding. Her interviews reveal that we experience computers as being on the border between inanimate and animate, as both an extension of the self and part of the external world. Their special place betwixt and between traditional categories is part of what makes them compelling and evocative. (In the introduction to this edition, Turkle quotes a PDA user as saying, "When my Palm crashed, it was like a death. I thought I had lost my mind.") Why we think of the workings of a machine in psychological terms--how this happens, and what it means for all of us--is the ever more timely subject of The Second Self.

© All rights reserved Turkle and/or The MIT Press

Thimbleby, Harold (2007): Press On: Principles of Interaction Programming. The MIT Press

How to understand and program interactive devices so that they are reliable and easy to use; includes wide-ranging programming insights, tools, and code.

© All rights reserved Thimbleby and/or The MIT Press

Kaptelinin, Victor and Nardi, Bonnie A. (2009): Acting with Technology: Activity Theory and Interaction Design. The MIT Press

Gay, Geraldine and Hembrooke, Helene (2004): Activity-Centered Design: An Ecological Approach to Designing Smart Tools and Usable Systems (Acting with Technology). The MIT Press

The shift in the practice of human-computer interaction (HCI) design from user-centered to context-based design marks a significant change in focus. With context-based design, designers start not with a preconceived idea of what users should do, but with an understanding of what users actually do. Context-based design focuses on the situation in which the technology will be used -- the activities relating to it and their social contexts. Designers must also realize that introduction of the technology itself changes the situation; in order to design workable systems, the design process must become flexible and adaptive. In Activity-Centered Design, Geri Gay and Helene Hembrooke argue that it is time to develop new models for HCI design that support not only research and development but also investigations into the context and motivation of user behavior.

Gay and Hembrooke examine the ongoing interaction of computer systems use, design practice, and design evaluation, using the concepts of activity theory and related methods as a theoretical framework. Among the topics they discuss are the reciprocal relationship between the tool and the task, how activities shape the requirements of particular tools and how the application of the tools begins to reshape the activity; differing needs and expectations of participants when new technology is introduced, examining in particular the integration of wireless handheld devices into museums and learning environments; and the effect of the layout of the computing space on movement, function, and social interaction. Gay and Hembrooke then apply their findings on the use of technology in everyday contexts to inform future HCI design practice.

© All rights reserved Gay and Hembrooke and/or The MIT Press

Carroll, John M. (1987): Interfacing Thought: Cognitive Aspects of Human-Computer Interaction (Bradford Books). The MIT Press

Souza, Clarisse Sieckenius de (2005): The Semiotic Engineering of Human-Computer Interaction (Acting with Technology). The MIT Press

In The Semiotic Engineering of Human-Computer Interaction, Clarisse Sieckenius de Souza proposes an account of HCI that draws on concepts from semiotics and computer science to investigate the relationship between user and designer. Semiotics is the study of signs, and the essence of semiotic engineering is the communication between designers and users at interaction time; designers must somehow be present in the interface to tell users how to use the signs that make up a system or program. This approach, which builds on -- but goes further than -- the currently dominant user-centered approach, allows designers to communicate their overall vision and therefore helps users understand designs -- rather than simply which icon to click.

According to de Souza's account, both designers and users are interlocutors in an overall communication process that takes place through an interface of words, graphics, and behavior. Designers must tell users what they mean by the artifact they have created, and users must understand and respond to what they are being told. By coupling semiotic theory and engineering, de Souza's approach to HCI design encompasses the principles, the materials, the processes, and the possibilities for producing meaningful interactive computer system discourse and achieves a broader perspective than cognitive, ethnographic, or ergonomic approaches.

De Souza begins with a theoretical overview and detailed exposition of the semiotic engineering account of HCI. She then shows how this approach can be applied specifically to HCI evaluation and design of online help systems, customization and end-user programming, and multiuser applications. Finally, she reflects on the potential and opportunities for research in semiotic engineering.

© All rights reserved Souza and/or The MIT Press

Norman, Donald A. (2010): Living with Complexity. The MIT Press

If only today's technology were simpler! It's the universal lament, but it's wrong. We don't want simplicity. Simple tools are not up to the task. The world is complex; our tools need to match that complexity. Simplicity turns out to be more complex than we thought. In this provocative and informative book, Don Norman writes that the complexity of our technology must mirror the complexity and richness of our lives. It's not complexity that's the problem, it's bad design. Bad design complicates things unnecessarily and confuses us. Good design can tame complexity.

Norman gives us a crash course in the virtues of complexity. But even such simple things as salt and pepper shakers, doors, and light switches become complicated when we have to deal with many of them, each somewhat different. Managing complexity, says Norman, is a partnership. Designers have to produce things that tame complexity. But we too have to do our part: we have to take the time to learn the structure and practice the skills. This is how we mastered reading and writing, driving a car, and playing sports, and this is how we can master our complex tools. Complexity is good. Simplicity is misleading. The good life is complex, rich, and rewarding--but only if it is understandable, sensible, and meaningful.

© All rights reserved Norman and/or The MIT Press

Stasko, John T., Domingue, John B., Brown, Marc H. and Price, Blaine A. (eds.) (1998): Software Visualization. The MIT Press

Foreword by Jim Foley

In the past decade, high quality interfaces have become standard in a growing number of areas such as games and CD-ROM-based encyclopedias. Yet the overwhelming majority of programmers edit their code using a single font within a single window and view code execution via the hand insertion of print statements.

Software Visualization (SV) redresses this imbalance by using typography, graphics, and animation techniques to show program code, data, and control flow. This book describes the history of SV, techniques and frameworks for its construction, its use in education and program debugging, and recent attempts to evaluate its effectiveness. In making programming a multimedia experience, SV leaves programmers and computer science researchers free to explore more interesting issues and to tackle more challenging problems.

Contributors: Ronald Baecker, John Bazik, Alan Blackwell, Mike Brayshaw, Marc H. Brown, Wim De Pauw, John B. Domingue, Stephen Eick, Marc Eisenstadt, Christopher Fry, Peter Gloor, Thomas Green, Michael Heath, John Hershberger, Clinton L. Jeffery, Doug Kimelman, Eileen Kraemer, Andrea Lawrence, Henry Lieberman, Allen Malony, Aaron Marcus, Paul Mulholland, Marc Najork, Stephen North, Marian Petre, Blaine A. Price, Steven Reiss, Gruia-Catalin Roman, Diane Rover, Bryan Rosenburg, Tova Roth, Robert Sedgewick, Ian Small, John T. Stasko, Roberto Tamassia, Andries van Dam, John Vlissides.

© All rights reserved Stasko et al. and/or The MIT Press

Reas, Casey and Fry, Ben (eds.) (2007): Processing: A Programming Handbook for Visual Designers and Artists. The MIT Press

An introduction to the ideas of computer programming within the context of the visual arts that also serves as a reference and text for Processing, an open-source programming language designed for creating images, animation, and interactivity.

© All rights reserved Reas and Fry and/or The MIT Press

McLuhan, Marshall (1964): Understanding Media: The Extensions of Man. The MIT Press

Marshall McLuhan's classic exposé on the state of the then-emerging phenomenon of mass media. Terms and phrases such as "the global village" and "the medium is the message" are now part of the lexicon, and McLuhan's theories continue to challenge our sensibilities and our assumptions about how and what we communicate.

There has been a notable resurgence of interest in McLuhan's work in the last few years, fueled by the recent and continuing conjunctions between the cable companies and the regional phone companies, the appearance of magazines such as Wired, and the development of new media models and information ecologies, many of which were spawned from MIT's Media Lab. In effect, media now begs to be redefined. In a new introduction to this edition of Understanding Media, Harper's editor Lewis Lapham reevaluates McLuhan's work in the light of the technological as well as the political and social changes that have occurred in the last part of this century.

© All rights reserved McLuhan and/or The MIT Press

Manovich, Lev (2001): The Language of New Media. The MIT Press

In this book Lev Manovich offers the first systematic and rigorous theory of new media. He places new media within the histories of visual and media cultures of the last few centuries. He discusses new media's reliance on conventions of old media, such as the rectangular frame and mobile camera, and shows how new media works create the illusion of reality, address the viewer, and represent space. He also analyzes categories and forms unique to new media, such as interface and database. Manovich uses concepts from film theory, art history, literary theory, and computer science and also develops new theoretical constructs, such as cultural interface, spatial montage, and cinegratography. The theory and history of cinema play a particularly important role in the book. Among other topics, Manovich discusses parallels between the histories of cinema and of new media, digital cinema, screen and montage in cinema and in new media, and historical ties between avant-garde film and new media.

© All rights reserved Manovich and/or The MIT Press

Grau, Oliver (2004): Virtual Art: From Illusion to Immersion (Leonardo Books). The MIT Press

Although many people view virtual reality as a totally new phenomenon, it has its foundations in an unrecognized history of immersive images. Indeed, the search for illusionary visual space can be traced back to antiquity. In this book, Oliver Grau shows how virtual art fits into the art history of illusion and immersion. He describes the metamorphosis of the concepts of art and the image and relates those concepts to interactive art, interface design, agents, telepresence, and image evolution. Grau retells art history as media history, helping us to understand the phenomenon of virtual reality beyond the hype.

Grau shows how each epoch used the technical means available to produce maximum illusion. He discusses frescoes such as those in the Villa dei Misteri in Pompeii and the gardens of the Villa Livia near Primaporta, Renaissance and Baroque illusion spaces, and panoramas, which were the most developed form of illusion achieved through traditional methods of painting and the mass image medium before film. Through a detailed analysis of perhaps the most important German panorama, Anton von Werner's 1883 The Battle of Sedan, Grau shows how immersion produced emotional responses. He traces immersive cinema through Cinerama, Sensorama, Expanded Cinema, 3-D, Omnimax and IMAX, and the head-mounted display with its military origins. He also examines those characteristics of virtual reality that distinguish it from earlier forms of illusionary art. His analysis draws on the work of contemporary artists and groups ART+COM, Maurice Benayoun, Charlotte Davies, Monika Fleischmann, Ken Goldberg, Agnes Hegedues, Eduardo Kac, Knowbotic Research, Laurent Mignonneau, Michael Naimark, Simon Penny, Daniela Plewe, Paul Sermon, Jeffrey Shaw, Karl Sims, Christa Sommerer, and Wolfgang Strauss. Grau offers not just a history of illusionary space but also a theoretical framework for analyzing its phenomenologies, functions, and strategies throughout history and into the future.

© All rights reserved Grau and/or The MIT Press

Fuller, Matthew (ed.) (2008): Software Studies: A Lexicon (Leonardo Book Series). The MIT Press

A cultural field guide to software: artists, computer scientists, designers, cultural theorists, programmers, and others define a new field of study and practice.

© All rights reserved Fuller and/or The MIT Press

Bolter, Jay David and Grusin, Richard (2000): Remediation: Understanding New Media. The MIT Press

Media critics remain captivated by the modernist myth of the new: they assume that digital technologies such as the World Wide Web, virtual reality, and computer graphics must divorce themselves from earlier media for a new set of aesthetic and cultural principles. In this richly illustrated study, Jay David Bolter and Richard Grusin offer a theory of mediation for our digital age that challenges this assumption. They argue that new visual media achieve their cultural significance precisely by paying homage to, rivaling, and refashioning such earlier media as perspective painting, photography, film, and television. They call this process of refashioning "remediation," and they note that earlier media have also refashioned one another: photography remediated painting, film remediated stage production and photography, and television remediated film, vaudeville, and radio.

© All rights reserved Bolter and Grusin and/or The MIT Press

Mitchell, William J. (1995): City of Bits: Space, Place, and the Infobahn (On Architecture). The MIT Press

Entertaining, concise, and relentlessly probing, City of Bits is a comprehensive introduction to a new type of city, an increasingly important system of virtual spaces interconnected by the information superhighway. William Mitchell makes extensive use of practical examples and illustrations in a technically well-grounded yet accessible examination of architecture and urbanism in the context of the digital telecommunications revolution, the ongoing miniaturization of electronics, the commodification of bits, and the growing domination of software over materialized form.

© All rights reserved Mitchell and/or The MIT Press

Dreyfus, Hubert L. (1992): What Computers Still Can't Do: A Critique of Artificial Reason. The MIT Press

When it was first published in 1972, Hubert Dreyfus's manifesto on the inherent inability of disembodied machines to mimic higher mental functions caused an uproar in the artificial intelligence community. The world has changed since then. Today it is clear that "good old-fashioned AI," based on the idea of using symbolic representations to produce general intelligence, is in decline (although several believers still pursue its pot of gold), and the focus of the AI community has shifted to more complex models of the mind. It has also become more common for AI researchers to seek out and study philosophy. For this edition of his now classic book, Dreyfus has added a lengthy new introduction outlining these changes and assessing the paradigms of connectionism and neural networks that have transformed the field.

At a time when researchers were proposing grand plans for general problem solvers and automatic translation machines, Dreyfus predicted that they would fail because their conception of mental functioning was naive, and he suggested that they would do well to acquaint themselves with modern philosophical approaches to human beings. What Computers Can't Do was widely attacked but quietly studied. Dreyfus's arguments are still provocative and focus our attention once again on what it is that makes human beings unique.

Hubert L. Dreyfus, who is Professor of Philosophy at the University of California, Berkeley, is also the author of Being-in-the-World: A Commentary on Heidegger's Being and Time, Division I.

© All rights reserved Dreyfus and/or The MIT Press

Wheeler, Michael (2005): Reconstructing the Cognitive World: The Next Step (Bradford Books). The MIT Press

In Reconstructing the Cognitive World, Michael Wheeler argues that we should turn away from the generically Cartesian philosophical foundations of much contemporary cognitive science research and proposes instead a Heideggerian approach. Wheeler begins with an interpretation of Descartes. He defines Cartesian psychology as a conceptual framework of explanatory principles and shows how each of these principles is part of the deep assumptions of orthodox cognitive science (both classical and connectionist). Wheeler then turns to Heidegger's radically non-Cartesian account of everyday cognition, which, he argues, can be used to articulate the philosophical foundations of a genuinely non-Cartesian cognitive science. Finding that Heidegger's critique of Cartesian thinking falls short, even when supported by Hubert Dreyfus's influential critique of orthodox artificial intelligence, Wheeler suggests a new Heideggerian approach. He points to recent research in "embodied-embedded" cognitive science and proposes a Heideggerian framework to identify, amplify, and clarify the underlying philosophical foundations of this new work. He focuses much of his investigation on recent work in artificial intelligence-oriented robotics, discussing, among other topics, the nature and status of representational explanation, and whether (and to what extent) cognition is computation rather than a noncomputational phenomenon best described in the language of dynamical systems theory. Wheeler's argument draws on analytic philosophy, continental philosophy, and empirical work to "reconstruct" the philosophical foundations of cognitive science in a time of a fundamental shift away from a generically Cartesian approach. His analysis demonstrates that Heideggerian continental philosophy and naturalistic cognitive science need not be mutually exclusive and shows further that a Heideggerian framework can act as the "conceptual glue" for new work in cognitive science.

© All rights reserved Wheeler and/or The MIT Press

Lynch, Kevin (1960): The Image of the City (Harvard-MIT Joint Center for Urban Studies Series). The MIT Press

What does the city's form actually mean to the people who live there? What can the city planner do to make the city's image more vivid and memorable to the city dweller? To answer these questions, Mr. Lynch, supported by studies of Los Angeles, Boston, and Jersey City, formulates a new criterion--imageability--and shows its potential value as a guide for the building and rebuilding of cities.

The wide scope of this study leads to an original and vital method for the evaluation of city form. The architect, the planner, and certainly the city dweller will all want to read this book.

© All rights reserved Lynch and/or The MIT Press

Bijker, Wiebe E. and Law, John (eds.) (1994): Shaping Technology / Building Society: Studies in Sociotechnical Change (Inside Technology). The MIT Press

Technology is everywhere, yet a theory of technology and its social dimension remains to be fully developed. Building on the influential book The Social Construction of Technological Systems, this volume carries forward the project of creating a theory of technological development and implementation that is strongly grounded in both sociology and history. The 12 essays address the central question of how technologies become stabilized, how they attain a final form and use that is generally accepted. The essays are tied together by a general introduction, part introductions, and a theoretical conclusion.

The first part of the book examines and criticizes the idea that technologies have common life cycles; three case studies cover the history of a successful but never produced British jet fighter, the manipulation of patents by a French R&D company to gain a market foothold, and the managed development of high-intensity fluorescent lighting to serve the interests of electricity suppliers as well as the producing company.

The second part looks at broader interactions shaping technology and its social context: the question of who was to define "steel," the determination of what constitutes radioactive waste and its proper disposal, and the social construction of motion pictures as exemplified by Thomas Edison's successful development of the medium and its commercial failure.

The last part offers theoretical studies suggesting alternative approaches to sociotechnologies; two studies argue for a strong sociotechnology in which artifact and social context are viewed as a single seamless web, while the third looks at the ways in which a social program is a technology.

Wiebe E. Bijker is Associate Professor at the University of Limburg, The Netherlands. John Law is Professor in Sociology at the University of Keele, Staffordshire, England.

© All rights reserved Bijker and Law and/or The MIT Press

Maeda, John (ed.) (1999): Design by Numbers. The MIT Press

Most art and technology projects pair artists with engineers or scientists: the artist has the conception, and the technical person provides the know-how. John Maeda is an artist and a computer scientist, and he views the computer not as a substitute for brush and paint but as an artistic medium in its own right. Design By Numbers is a reader-friendly tutorial on both the philosophy and nuts-and-bolts techniques of programming for artists.

Practicing what he preaches, Maeda composed Design By Numbers using a computational process he developed specifically for the book. He introduces a programming language and development environment, available on the Web, which can be freely downloaded or run directly within any Java-enabled Web browser. Appropriately, the new language is called DBN (for "design by numbers"). Designed for "visual" people -- artists, designers, anyone who likes to pick up a pencil and doodle -- DBN has very few commands and consists of elements resembling those of many other languages, such as LISP, LOGO, C/Java, and BASIC.

Throughout the book, Maeda emphasizes the importance -- and delights -- of understanding the motivation behind computer programming, as well as the many wonders that emerge from well-written programs. Sympathetic to the "mathematically challenged," he places minimal emphasis on mathematics in the first half of the book. Because computation is inherently mathematical, the book's second half uses intermediate mathematical concepts that generally do not go beyond high-school algebra. The reader who masters the skills so clearly set out by Maeda will be ready to exploit the true character of digital media design.

© All rights reserved Maeda and/or The MIT Press
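
As a concrete aside (an illustration of mine, not anything from the book): the blurb's point that DBN "has very few commands" is easy to picture. DBN is usually described as drawing on a 101 x 101 grayscale canvas with shades from 0 (white) to 100 (black); the rough Python sketch below mimics the spirit of three of its core commands (paper, pen, line) under that assumption, while the real DBN syntax and semantics differ.

    # Illustration only: DBN is its own small language, not Python. This sketch
    # approximates the flavor of three core DBN commands (paper, pen, line) on
    # an assumed 101 x 101 grayscale canvas, 0 = white through 100 = black.

    class Canvas:
        def __init__(self):
            self.grid = [[0] * 101 for _ in range(101)]  # like "paper 0": all white
            self.ink = 100                               # like "pen 100": black ink

        def paper(self, shade):
            self.grid = [[shade] * 101 for _ in range(101)]

        def pen(self, shade):
            self.ink = shade

        def line(self, x0, y0, x1, y1):
            # Step along the longer axis, interpolating the other coordinate.
            steps = max(abs(x1 - x0), abs(y1 - y0), 1)
            for i in range(steps + 1):
                x = round(x0 + (x1 - x0) * i / steps)
                y = round(y0 + (y1 - y0) * i / steps)
                self.grid[y][x] = self.ink

    c = Canvas()
    c.paper(0)               # blank white page
    c.pen(100)               # black pen
    c.line(0, 0, 100, 100)   # one diagonal stroke, corner to corner

A three-line program that draws something is the pedagogical idea the blurb describes: the language stays out of the way so the visual thinking can start.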

McCullough, Malcolm (2004): Digital Ground: Architecture, Pervasive Computing, and Environmental Knowing. The MIT Press

Digital Ground is an architect's response to the design challenge posed by pervasive computing. One century into the electronic age, people have become accustomed to interacting indirectly, mediated through networks. But now as digital technology becomes invisibly embedded in everyday things, even more activities become mediated, and networks extend rather than replace architecture. The young field of interaction design reflects not only how people deal with machine interfaces but also how people deal with each other in situations where interactivity has become ambient. It shifts previously utilitarian digital design concerns to a cultural level, adding notions of premise, appropriateness, and appreciation.

Malcolm McCullough offers an account of the intersections of architecture and interaction design, arguing that the ubiquitous technology does not obviate the human need for place. His concept of "digital ground" expresses an alternative to anytime-anyplace sameness in computing; he shows that context not only shapes usability but ideally becomes the subject matter of interaction design and that "environmental knowing" is a process that technology may serve and not erode.

Drawing on arguments from architecture, psychology, software engineering, and geography, writing for practicing interaction designers, pervasive computing researchers, architects, and the general reader on digital culture, McCullough gives us a theory of place for interaction design. Part I, "Expectations," explores our technological predispositions -- many of which ("situated interactions") arise from our embodiment in architectural settings. Part II, "Technologies," discusses hardware, software, and applications, including embedded technology ("bashing the desktop"), and building technology genres around life situations. Part III, "Practices," argues for design as a liberal art, seeing interactivity as a cultural -- not only technological -- challenge and a practical notion of place as essential. Part IV, "Epilogue," acknowledges the epochal changes occurring today, and argues for the role of "digital ground" in the necessary adaptation.

© All rights reserved McCullough and/or The MIT Press

Harsanyi, John C. and Selten, Reinhard (1988): A General Theory of Equilibrium Selection in Games. Cambridge, MA, The MIT Press

The authors, two of the most prominent game theorists of this generation, have devoted a number of years to the development of the theory presented here, and to its economic applications. They propose rational criteria for selecting one particular uniformly perfect equilibrium point as the solution of any noncooperative game. And, because any cooperative game can be remodelled as a noncooperative bargaining game, their theory defines a one-point solution for any cooperative game as well.

By providing solutions - based on the same principles of rational behavior - for all classes of games, both cooperative and noncooperative, both those with complete and with incomplete information, Harsanyi and Selten's approach achieves a remarkable degree of theoretical unification for game theory as a whole and provides a deeper insight into the nature of game-theoretic rationality.

The book applies this theory to a number of specific game classes, such as unanimity games; bargaining with transaction costs; trade involving one seller and several buyers; two-person bargaining with incomplete information on one side, and on both sides. The last chapter discusses the relationship of the authors' theory to other recently proposed solution concepts, particularly the Kohlberg-Mertens stability theory.

John C. Harsanyi is Flood Research Professor in Business Administration and Professor of Economics, University of California, Berkeley. Reinhard Selten is Professor of Economics at the Institute of Social and Economic Sciences, University of Bonn, Federal Republic of Germany.

© All rights reserved Harsanyi and Selten and/or The MIT Press
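
To make "selecting one particular uniformly perfect equilibrium point as the solution" slightly more tangible, here is a toy sketch (mine, not the authors'): it enumerates the pure-strategy Nash equilibria of a small unanimity game, one of the game classes the blurb mentions, and then applies payoff dominance, one ingredient of the Harsanyi-Selten criteria. The full theory is far more elaborate than this.

    # Toy illustration, not the Harsanyi-Selten procedure itself: find the
    # pure-strategy Nash equilibria of a 2x2 unanimity game, then pick the
    # payoff-dominant one (the equilibrium better for both players).

    from itertools import product

    moves = ("A", "B")
    # payoffs[(row move, column move)] = (row payoff, column payoff);
    # coordinating on A pays (2, 2), coordinating on B pays (1, 1).
    payoffs = {
        ("A", "A"): (2, 2), ("A", "B"): (0, 0),
        ("B", "A"): (0, 0), ("B", "B"): (1, 1),
    }

    def is_nash(r, c):
        u_r, u_c = payoffs[(r, c)]
        return (all(payoffs[(r2, c)][0] <= u_r for r2 in moves) and
                all(payoffs[(r, c2)][1] <= u_c for c2 in moves))

    equilibria = [rc for rc in product(moves, moves) if is_nash(*rc)]
    print(equilibria)                                   # [('A', 'A'), ('B', 'B')]
    print(max(equilibria, key=lambda rc: payoffs[rc]))  # ('A', 'A')

Both coordination outcomes are self-enforcing, which is exactly why a further selection criterion is needed; payoff dominance breaks the tie in favor of the outcome both players prefer.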

Marr, David (2010): Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. The MIT Press

David Marr's posthumously published Vision (1982) influenced a generation of brain and cognitive scientists, inspiring many to enter the field. In Vision, Marr describes a general framework for understanding visual perception and touches on broader questions about how the brain and its functions can be studied and understood. Researchers from a range of brain and cognitive sciences have long valued Marr's creativity, intellectual power, and ability to integrate insights and data from neuroscience, psychology, and computation. This MIT Press edition makes Marr's influential work available to a new generation of students and scientists. In Marr's framework, the process of vision constructs a set of representations, starting from a description of the input image and culminating with a description of three-dimensional objects in the surrounding environment. A central theme, and one that has had far-reaching influence in both neuroscience and cognitive science, is the notion of different levels of analysis--in Marr's framework, the computational level, the algorithmic level, and the hardware implementation level. Now, thirty years later, the main problems that occupied Marr remain fundamental open problems in the study of perception. Vision provides inspiration for the continuing efforts to integrate knowledge from cognition and computation to understand vision and the brain.

© All rights reserved Marr and/or The MIT Press

Ensmenger, Nathan L. (2010): The Computer Boys Take Over: Computers, Programmers, and the Politics of Technical Expertise (History of Computing). The MIT Press

This is a book about the computer revolution of the mid-20th century and the people who made it possible. Unlike most histories of computing, it is not a book about machines, inventors, or entrepreneurs. Instead, it tells the story of the vast but largely anonymous legions of computer specialists—programmers, systems analysts, and other software developers—who transformed the electronic computer from a scientific curiosity into the defining technology of the modern era. Known alternatively as "whiz kids," "hackers," and "gurus," this new breed of technical specialists were alternately admired for their technical prowess and despised for their eccentric mannerisms and the disruptive potential of the technologies they developed. As the systems that they built became ever more powerful and ubiquitous, these specialists became the focus of a series of critiques of the social and organizational impact of electronic computing. To many of their contemporaries, it seemed the "computer boys" were taking over, not just in the corporate setting, but also in government, politics, and society in general.

In The Computer Boys Take Over, Nathan Ensmenger traces the rise to power of the computer expert in modern American society. He follows the history of computer programming from its origins as low-status, largely feminized labor in the secret wartime computing projects through its reinvention as a glamorous "black art" practiced by "computer cowboys" in the 1950s through its rationalization in the 1960s as the academic discipline of computer science and the software engineering profession. His rich and nuanced portrayal of the men and women (a surprising number of the "computer boys" were, in fact, female) who built their careers around the novel technology of electronic computing explores issues of power, identity, and expertise that have only become more significant to our increasingly computerized society. His detailed analysis of the pervasive "software crisis" rhetoric of the late 1960s shows how seemingly technical debates about how to manage large-scale software development projects reflected deeper concerns about the growing power and influence of technical specialists in corporate, academic, and governmental organizations. In his recasting of the drama of the computer revolution through the eyes of its principal revolutionaries, Ensmenger reminds us that the computerization of modern society was not an inevitable process driven by impersonal technological or economic imperatives, but was rather a creative, contentious, and above all, fundamentally human development.

© All rights reserved Ensmenger and/or The MIT Press

Picard, Rosalind W. (2000): Affective Computing. The MIT Press

The latest scientific findings indicate that emotions play an essential role in decision making, perception, learning, and more -- that is, they influence the very mechanisms of rational thinking. Not only too much, but too little emotion can impair decision making. According to Rosalind Picard, if we want computers to be genuinely intelligent and to interact naturally with us, we must give computers the ability to recognize, understand, even to have and express emotions.

Part 1 of this book provides the intellectual framework for affective computing. It includes background on human emotions, requirements for emotionally intelligent computers, applications of affective computing, and moral and social questions raised by the technology. Part 2 discusses the design and construction of affective computers. Although this material is more technical than that in Part 1, the author has kept it less technical than typical scientific publications in order to make it accessible to newcomers. Topics in Part 2 include signal-based representations of emotions, human affect recognition as a pattern recognition and learning problem, recent and ongoing efforts to build models of emotion for synthesizing emotions in computers, and the new application area of affective wearable computers.

© All rights reserved Picard and/or The MIT Press

Thimbleby, Harold (2010): Press On: Principles of Interaction Programming. The MIT Press

Interactive systems and devices, from mobile phones to office copiers, do not fulfill their potential for a wide variety of reasons--not all of them technical. Press On shows that we can design better interactive systems and devices if we draw on sound computer science principles. It uses state machines and graph theory as a powerful and insightful way to analyze and design better interfaces and examines specific designs and creative solutions to design problems. Programmers--who have the technical knowledge that designers and users often lack--can be more creative and more central to interaction design than we might think. Sound programming concepts improve device design. Press On provides the insights, concepts and programming tools to improve usability. Knowing the computer science is fundamental, but Press On also shows how essential it is to have the right approaches to manage the design of systems that people use. Particularly for complex systems, the social, psychological and ethical concerns--the wider design issues--are crucial, and these are covered in depth. Press On highlights key principles throughout the text and provides cross-topic linkages between chapters and suggestions for further reading. Additional material, including all the program code used in the book, is available on an interactive web site. Press On is an essential textbook and reference for computer science students, programmers, and anyone interested in the design of interactive technologies.

Harold Thimbleby is Professor of Computer Science at Swansea University, Wales. He is the author or editor of a number of books, including User Interface Design, and nearly 400 other publications.

© All rights reserved Thimbleby and/or The MIT Press
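
The blurb's claim that state machines and graph theory yield insight into interfaces can be made concrete with a minimal sketch (an invented toy device, not code from the book): model the interface as a directed graph of states and button presses, then use plain reachability to check an interaction property such as "can the user always get back to off?".

    # Minimal sketch, assuming a made-up three-state device: represent the
    # interface as a state machine and use graph reachability (BFS) to flag
    # states from which the user can never return to "off".

    from collections import deque

    # state -> {button: next state}
    device = {
        "off":      {"power": "on"},
        "on":       {"power": "off", "menu": "settings"},
        "settings": {"menu": "settings"},  # a modal screen with no way out
    }

    def reachable(graph, start):
        seen, todo = {start}, deque([start])
        while todo:
            for nxt in graph[todo.popleft()].values():
                if nxt not in seen:
                    seen.add(nxt)
                    todo.append(nxt)
        return seen

    stuck = [s for s in device if "off" not in reachable(device, s)]
    print(stuck)  # ['settings']: the graph makes this design flaw visible

Properties like this become checkable automatically once the interface is a graph, which is the kind of leverage the book argues programmers can bring to interaction design.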


 



Page Information

Page maintainer: The Editorial Team
URL: http://www.interaction-design.org/references/publishers/the_mit_press.html
