Publication statistics

Pub. period: 1982-2011
Pub. count: 64
Number of co-authors: 79



Co-authors

Number of publications with 3 favourite co-authors:

Catherine Havasi: 5
Ted Selker: 4
Hugo Liu: 4

 

 

Productive colleagues

Henry Lieberman's 3 most productive colleagues in number of publications:

Brad A. Myers: 154
Mary Beth Rosson: 142
Fabio Paterno: 127
 
 
 


Henry Lieberman

Picture of Henry Lieberman.
Personal Homepage:
http://web.media.mit.edu/~lieber/

I'm a Research Scientist at the MIT Media Laboratory. I work with the Agents Group. I'm especially interested in using Artificial Intelligence techniques to improve all kinds of user interfaces. Much of my work is centered on the use of Common Sense knowledge, simple facts about people and everyday life. I also work on bringing the full procedural power of computers to non-expert users through Programming by Example, natural language and visual programming. I'm also interested in a wide variety of other topics in AI and interactive computing.

 

Publications by Henry Lieberman (bibliography)

2011
 

Chi, Pei-Yu and Lieberman, Henry (2011): Raconteur: integrating authored and real-time social media. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 3165-3168. Available online

Social media enables people to share personal experiences, often through real-time media such as chat. People also record their life experiences in media collections, with photos and video. However, today's social media force a choice between real-time communication, and authoring a coherent story illustrated with digital media. There is simply not enough time in real-time communication to select and compose coherent multimedia stories. We present Raconteur, which introduces a new style of social media combining aspects of the real-time and authored styles of communication. It is structured around a text chat, augmented by an agent that continuously interprets the chat text to suggest appropriate media elements to illustrate the story. A small experiment shows that storytellers find Raconteur's suggestions helpful in presenting their experiences, and audiences find the interaction engaging.

© All rights reserved Chi and Lieberman and/or their publisher
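
To make the suggestion step concrete, one can picture the agent matching the words of the live chat against the natural-language annotations on the media collection. The sketch below only illustrates that matching idea with plain word overlap; the media items, annotations, and stopword list are invented, and Raconteur's actual commonsense interpretation is not reproduced.

    # Minimal sketch: suggest annotated media items whose descriptions
    # overlap most with the words of the current chat message.
    # Plain word overlap only; the real system reasons over
    # commonsense knowledge, which is not reproduced here.

    STOPWORDS = {"the", "a", "an", "of", "to", "we", "and", "in", "on"}

    def tokenize(text):
        return {w for w in text.lower().split() if w not in STOPWORDS}

    # Hypothetical media library: id -> free-text annotation.
    MEDIA = {
        "IMG_041.jpg": "hiking up the volcano at sunrise",
        "VID_007.mov": "tasting street food at the night market",
        "IMG_112.jpg": "getting lost on the way to the hotel",
    }

    def suggest(chat_message, k=2):
        """Return the k media items whose annotations best match the chat text."""
        words = tokenize(chat_message)
        scored = []
        for item, annotation in MEDIA.items():
            overlap = len(words & tokenize(annotation))
            if overlap:
                scored.append((overlap, item))
        scored.sort(reverse=True)
        return [item for _, item in scored[:k]]

    if __name__ == "__main__":
        print(suggest("we got so lost trying to find our hotel that night"))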

 

Ko, Andrew J., Abraham, Robin, Beckwith, Laura, Blackwell, Alan, Burnett, Margaret M., Erwig, Martin, Scaffidi, Christopher, Lawrance, Joseph, Lieberman, Henry, Myers, Brad A., Rosson, Mary Beth, Rothermel, Gregg, Shaw, Mary and Wiedenbeck, Susan (2011): The State of the Art in End-User Software Engineering. In ACM Computing Surveys, 43 (3) pp. 1-44.

Most programs today are written not by professional software developers, but by people with expertise in other domains working towards goals for which they need computational support. For example, a teacher might write a grading spreadsheet to save time grading, or an interaction designer might use an interface builder to test some user interface design ideas. Although these end-user programmers may not have the same goals as professional developers, they do face many of the same software engineering challenges, including understanding their requirements, as well as making decisions about design, reuse, integration, testing, and debugging. This article summarizes and classifies research on these activities, defining the area of End-User Software Engineering (EUSE) and related terminology. The article then discusses empirical research about end-user software engineering activities and the technologies designed to support them. The article also addresses several crosscutting issues in the design of EUSE tools, including the roles of risk, reward, and domain complexity, and self-efficacy in the design of EUSE tools and the potential of educating users about software engineering principles.

© All rights reserved Ko et al. and/or ACM Press

2010
 

Chi, Pei-Yu and Lieberman, Henry (2010): Raconteur: from intent to stories. In: Proceedings of the 2010 International Conference on Intelligent User Interfaces 2010. pp. 301-304. Available online

When editing a story from a large collection of media, such as photos and video clips captured from daily life, it is not always easy to understand how particular scenes fit into the intent for the overall story. Especially for novice editors, there is often a lack of coherent connections between scenes, making it difficult for the viewers to follow the story. In this paper, we present Raconteur, a story editing system that helps users assemble coherent stories from media elements, each annotated with a sentence or two in unrestricted natural language. It uses a Commonsense knowledge base, and the AnalogySpace Commonsense reasoning technique. Raconteur focuses on finding story analogies -- different elements illustrating the same overall "point", or independent stories exhibiting similar narrative structures.

© All rights reserved Chi and Lieberman and/or their publisher

 

Smith, Dustin A. and Lieberman, Henry (2010): The why UI: using goal networks to improve user interfaces. In: Proceedings of the 2010 International Conference on Intelligent User Interfaces 2010. pp. 377-380. Available online

People interact with interfaces to accomplish goals, and knowledge about human goals can be useful for building intelligent user interfaces. We suggest that modeling high, human-level goals like "repair my credit score", is especially useful for coordinating workflows between interfaces, automated planning, and building introspective applications. We analyzed data from 43Things.com, a website where users share and discuss goals and plans in natural language, and constructed a goal network that relates what goals people have with how people solve them. We then label goals with specific details, such as where the goal typically is met and how long it takes to achieve, facilitating plan and goal recognition. Lastly, we demonstrate a simple application of goal networks, deploying it in a mobile, location-aware to-do list application, ToDoGo, which uses goal networks to help users plan where and when to accomplish their desired goals.

© All rights reserved Smith and Lieberman and/or their publisher
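
The goal network idea can be illustrated with a toy structure that links goals to reported sub-goals and tags them with coarse attributes such as a typical location. Everything below (goals, locations, function names) is invented for illustration and is not the 43Things-derived data the paper describes.

    # Toy sketch of a goal network: goals link to the sub-goals or plans
    # people reported for achieving them, plus coarse attributes such as a
    # typical location.  All entries here are invented for illustration.

    GOAL_NETWORK = {
        "repair my credit score": ["pay off a credit card", "check my credit report"],
        "pay off a credit card": ["make a monthly budget"],
        "learn to cook": ["take a cooking class", "buy a cookbook"],
    }

    GOAL_LOCATION = {
        "check my credit report": "home",
        "take a cooking class": "community center",
        "buy a cookbook": "bookstore",
    }

    def expand(goal, depth=2):
        """Recursively list ways of working toward a goal."""
        if depth == 0:
            return []
        steps = []
        for sub in GOAL_NETWORK.get(goal, []):
            steps.append(sub)
            steps.extend(expand(sub, depth - 1))
        return steps

    def suggest_here(goal, location):
        """Sub-goals of `goal` typically doable at `location` -- the kind of
        lookup a location-aware to-do list could make."""
        return [s for s in expand(goal) if GOAL_LOCATION.get(s) == location]

    if __name__ == "__main__":
        print(expand("repair my credit score"))
        print(suggest_here("learn to cook", "bookstore"))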

 

Speer, Robert H., Havasi, Catherine, Treadway, K. Nichole and Lieberman, Henry (2010): Finding your way in a multi-dimensional semantic space with luminoso. In: Proceedings of the 2010 International Conference on Intelligent User Interfaces 2010. pp. 385-388. Available online

In AI, we often need to make sense of data that can be measured in many different dimensions -- thousands of dimensions or more -- especially when this data represents natural language semantics. Dimensionality reduction techniques can make this kind of data more understandable and more powerful, by projecting the data into a space of many fewer dimensions, which are suggested by the computer. Still, frequently, these results require more dimensions than the human mind can grasp at once to represent all the meaningful distinctions in the data. We present Luminoso, a tool that helps researchers to visualize and understand a multi-dimensional semantic space by exploring it interactively. It also streamlines the process of creating such a space, by inputting text documents and optionally including common-sense background information. This interface is based on the fundamental operation of "grabbing" a point, which simultaneously allows a user to rotate their view using that data point, view associated text and statistics, and compare it to other data points. This also highlights the point's neighborhood of semantically-associated points, providing clues for reasons as to why the points were classified along the dimensions they were. We show how this interface can be used to discover trends in a text corpus, such as free-text responses to a survey.

© All rights reserved Speer et al. and/or their publisher
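
The core operations mentioned above, reducing a term-document matrix to a few dimensions and inspecting a grabbed point's neighborhood, can be sketched as follows. The tiny matrix and vocabulary are invented, and the common-sense background blending that Luminoso performs is omitted.

    # Minimal sketch: reduce a term-document matrix with a truncated SVD,
    # then find a point's semantically nearby neighbors by cosine
    # similarity in the reduced space.  The matrix below is invented.
    import numpy as np

    terms = ["helpful", "slow", "friendly", "crash"]
    docs = ["survey_1", "survey_2", "survey_3"]
    counts = np.array([
        [2, 0, 1],   # helpful
        [0, 3, 0],   # slow
        [1, 0, 2],   # friendly
        [0, 2, 0],   # crash
    ], dtype=float)

    # Truncated SVD: keep the 2 strongest dimensions.
    U, S, Vt = np.linalg.svd(counts, full_matrices=False)
    term_vecs = U[:, :2] * S[:2]          # each term as a 2-D point

    def neighbors(term):
        """Terms ranked by cosine similarity to the grabbed term."""
        i = terms.index(term)
        v = term_vecs[i]
        sims = term_vecs @ v / (
            np.linalg.norm(term_vecs, axis=1) * np.linalg.norm(v) + 1e-12)
        order = np.argsort(-sims)
        return [(terms[j], round(float(sims[j]), 3)) for j in order if j != i]

    if __name__ == "__main__":
        print(neighbors("slow"))   # expect "crash" to be closest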

2009
 

Shen, Edward Yu-Te, Lieberman, Henry and Davenport, Glorianna (2009): What's next?: emergent storytelling from video collection. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 809-818. Available online

In the world of visual storytelling, narrative development relies on a particular temporal ordering of shots and sequences and scenes. Rarely is this ordering cast in stone. Rather, the particular ordering of a story reflects a myriad of interdependent decisions about the interplay of structure, narrative arc and character development. For storytellers, particularly those developing their narratives from large documentary archives, it would be helpful to have a visualization system partnered with them to present suggestions for the most compelling story path. We present Storied Navigation, a video editing system that helps authors compose a sequence of scenes that tell a story, by selecting from a corpus of annotated clips. The clips are annotated in unrestricted natural language. Authors can also type a story in unrestricted English, and the system finds possibilities for clips that best match high-level elements of the story. Beyond simple keyword matching, these elements can include the characters, emotions, themes, and story structure. Authors can also interactively replace existing scenes or predict the next scene to continue a story, based on these characteristics. Storied Navigation gives the author the feel of brainstorming about the story rather than simply editing the media.

© All rights reserved Shen et al. and/or ACM Press

 

Villalon, Rachelle, Lieberman, Henry and Sass, Larry (2009): Breaking down brick walls: design, construction, and prototype fabrication knowledge in architecture. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 4261-4266. Available online

Architectural designs are not just collections of 3D objects. Architects have both high-level aesthetic design intent, and intent for the functionality of the building; these must eventually translate into real-world construction materials and processes. Physical prototypes are still essential for the architect and their clients to get a feel for whether designs "work". An exciting recent development in architecture is the use of industrial robots to automatically construct 3D prototype architectural models. But programming the robots requires tedious procedures of low-level commands, far removed from the designer's intent. Adeon is a system that integrates high-level architectural design knowledge, including aesthetic and stylistic intent, with knowledge about materials and construction processes, and robot programming code for constructing prototype 3D physical models. It centers around collecting and associating "common sense" knowledge, expressed in English and converted to a knowledge representation about the various levels. It provides a graphic editor that allows architects to draw high-level aesthetic designs, perhaps referencing known styles or historical examples, and retrieving relevant construction, materials, and cost information. It automatically produces a robot program for constructing the prototype. We present examples detailing the design of various styles of brick walls. Adeon is an interesting example of how to provide an interface for creative work that spans both high-level and low-level concerns.

© All rights reserved Villalon et al. and/or ACM Press

 

Smith, Dustin Arthur and Lieberman, Henry (2009): Recognizing and using goals in event management. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 4525-4530. Available online

Personal event management involves planning when, where and how events should occur, making sure the event's prerequisites are satisfied, and developing contingencies for when things go wrong. Conventional calendar and project management tools, however, only record and visualize explicit human decisions regarding event specifics. We present Event Minder, a calendar program that takes into account the goals for which the events are scheduled. Users can input descriptions of events in natural language, mixing high-level objectives, concrete time and place decisions, and omit "obvious" common sense details. A commonsense knowledge base provides sensible defaults, and machine learning refines these defaults with experience. We can make recommendations for alternative plans, including alternatives that satisfy higher-level goals in different ways as well as those that meet immediate constraints. Our current system covers dining-related events, integrating commonsense with domain knowledge about specific restaurants, bars and hotels.

© All rights reserved Smith and Lieberman and/or ACM Press

 

Speer, Robert, Krishnamurthy, Jayant, Havasi, Catherine, Smith, Dustin, Lieberman, Henry and Arnold, Kenneth (2009): An interface for targeted collection of common sense knowledge using a mixture model. In: Proceedings of the 2009 International Conference on Intelligent User Interfaces 2009. pp. 137-146. Available online

We present a game-based interface for acquiring common sense knowledge. In addition to being interactive and entertaining, our interface guides the knowledge acquisition process to learn about the most salient characteristics of a particular concept. We use statistical classification methods to discover the most informative characteristics in the Open Mind Common Sense knowledge base, and use these characteristics to play a game of 20 Questions with the user. Our interface also allows users to enter knowledge more quickly than a more traditional knowledge-acquisition interface. An evaluation showed that users enjoyed the game and that it increased the speed of knowledge acquisition.

© All rights reserved Speer et al. and/or their publisher
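
The notion of asking the most informative question can be sketched with a toy candidate set: pick the yes/no feature whose answer splits the remaining concepts most evenly. The concepts and features below are invented, and this is not the statistical classifier the authors run against Open Mind Common Sense.

    # Sketch of "ask the most informative question": pick the feature whose
    # yes/no split of the remaining candidates has maximum entropy
    # (most even split, for a uniform prior).  Data invented.
    import math

    CANDIDATES = {
        "dog":     {"is alive": True,  "can fly": False, "is a machine": False},
        "sparrow": {"is alive": True,  "can fly": True,  "is a machine": False},
        "kettle":  {"is alive": False, "can fly": False, "is a machine": True},
        "kite":    {"is alive": False, "can fly": True,  "is a machine": False},
    }

    def entropy(p):
        return 0.0 if p in (0.0, 1.0) else -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

    def best_question(candidates):
        """Feature whose yes/no split has the highest entropy (most even)."""
        features = next(iter(candidates.values())).keys()
        def split_entropy(f):
            yes = sum(1 for c in candidates.values() if c[f])
            return entropy(yes / len(candidates))
        return max(features, key=split_entropy)

    def answer(candidates, feature, value):
        """Filter the candidate set after the user answers a question."""
        return {n: f for n, f in candidates.items() if f[feature] == value}

    if __name__ == "__main__":
        q = best_question(CANDIDATES)
        print("Ask:", q)
        print(answer(CANDIDATES, q, True))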

 

Havasi, Catherine, Lieberman, Henry and Mueller, Erik T. (2009): CSIUI 2009: story understanding and generation for aware and interactive interface design. In: Proceedings of the 2009 International Conference on Intelligent User Interfaces 2009. pp. 491-492. Available online

In order to be helpful to people, the intelligent interfaces of the future will have to acquire, represent, and infer simple knowledge about everyday life and activities. While much work in AI has represented this knowledge at the word, sentence, and logical assertion level, we see a growing need to understand it at a larger granularity, that of stories. The workshop, like its predecessors, had the goal of bringing together researchers in common sense reasoning with researchers in intelligent interfaces. Each year our workshop has a different focus in addition to these two areas and this year's workshop focused on the acquisition, understanding and creation of stories.

© All rights reserved Havasi et al. and/or their publisher

 

Alonso, Jason B., Havasi, Catherine and Lieberman, Henry (2009): PerspectiveSpace: Opinion Modeling with Dimensionality Reduction. In: Proceedings of the 2009 Conference on User Modeling, Adaptation and Personalization 2009. pp. 162-172. Available online

Words mean different things to different people, and capturing these differences is often a subtle art. These differences are often "a matter of perspective". Perspective can be taken to be the set of beliefs held by a person as a result of their background, culture, tastes, and experience. But how can we represent perspective computationally? In this paper, we present PerspectiveSpace, a new technique for modeling spaces of users and their beliefs. PerspectiveSpace represents these spaces as a matrix of users, and data on how people agree or disagree on assertions that they themselves have expressed. It uses Principal Component Analysis (PCA) to reduce the dimensionality of that matrix, discovering the most important axes that best characterize the space. It can then express user perspectives and opinions in terms of these axes. For recommender systems, because it discovers patterns in the beliefs about items, rather than similarity of the items or users themselves, it can perform more nuanced categorization and recommendation. It integrates with our more general common sense reasoning technique, AnalogySpace, which can reason over the content of expressed opinions. An application of PerspectiveSpace to movie recommendation, 2-wit, is presented. A leave-one-out test shows that PerspectiveSpace captures the consistency of users' opinions very well. The technique also has applications ranging from discovering subcultures in a larger society, to building community-driven web sites.

© All rights reserved Alonso et al. and/or their publisher
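
The central computation, reducing a users-by-assertions agreement matrix with PCA to find the main axes of opinion, can be sketched as follows; the ratings, users, and assertions are invented examples, not data from the paper.

    # Sketch of the core step: a users-by-assertions matrix of agreement
    # (+1), disagreement (-1) or no opinion (0), reduced with PCA to find
    # the main axes along which opinions vary.  The ratings are invented.
    import numpy as np

    users = ["ana", "bob", "cho", "dee"]
    assertions = ["subtitles beat dubbing", "sequels are worse", "musicals are fun"]
    ratings = np.array([
        [ 1,  1, -1],   # ana
        [ 1,  1, -1],   # bob
        [-1, -1,  1],   # cho
        [-1,  1,  1],   # dee
    ], dtype=float)

    # PCA: center the matrix, then take the top singular directions.
    centered = ratings - ratings.mean(axis=0)
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    user_coords = U[:, :2] * S[:2]     # each user placed on the top 2 opinion axes

    def most_similar(user):
        """Other users ordered by closeness on the principal opinion axes."""
        i = users.index(user)
        d = np.linalg.norm(user_coords - user_coords[i], axis=1)
        order = np.argsort(d)
        return [users[j] for j in order if j != i]

    if __name__ == "__main__":
        print("Axis 1 loadings per assertion:", np.round(Vt[0], 2))
        print("Most like ana:", most_similar("ana"))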

2008
 

Kim, Kyunghee, Picard, Rosalind W. and Lieberman, Henry (2008): Common sense assistant for writing stories that teach social skills. In: Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems April 5-10, 2008. pp. 2805-2810. Available online

People on the autistic spectrum often have difficulties with social interaction, and these difficulties are compounded when a person faces the uncertainty of not knowing what to expect in a new social setting. Detailed, step-by-step explanations of people's intentions and plausible actions can often help autistic people make sense of the situation, adapt to the social rules, and reduce stress associated with the social encounter. Carol Gray's Social Stories are carefully structured stories designed to prepare autistic people for everyday situations such as smiling at friends, waiting in a line, and staying calm in an audience when the speaker's slides don't match the handouts. Teachers or parents writing these stories often forget to include explanations of simple, "common sense" facts and simple variations of the story that might occur in different circumstances. We present a new tool that helps the writer explain salient points and think of more variations of the story. It uses a knowledge base of Common Sense sentences, Open Mind Common Sense, and inference in a semantic network, ConceptNet. We are investigating whether this new tool's suggestions are useful by examining how often the writers choose and use the suggestions that it generates.

© All rights reserved Kim et al. and/or ACM Press

2007
 

Havasi, Catherine and Lieberman, Henry (2007): Common sense and intelligent user interfaces. In: Proceedings of the 2007 International Conference on Intelligent User Interfaces 2007. p. 7. Available online

There is a mutually beneficial relationship between user interfaces and common sense reasoning and acquisition. Common sense knowledge enables interfaces to better understand and to be more grounded in the world of the user, thus improving the user's overall experience with the interface. This would not be possible without large sources of common sense knowledge, which likewise benefit from intelligent interfaces designed to make the knowledge acquisition processes more productive and enjoyable for the contributor. These two complementary interface types and their interaction are explored in this workshop.

© All rights reserved Havasi and Lieberman and/or ACM Press

 

Shen, Edward, Lieberman, Henry and Lam, Francis (2007): What am I gonna wear?: scenario-oriented recommendation. In: Proceedings of the 2007 International Conference on Intelligent User Interfaces 2007. pp. 365-368. Available online

Electronic Commerce on the Web is thriving, but consumers still have trouble finding products that will meet their needs and desires. AI has offered many kinds of Recommender Systems [11], but they are all oriented toward searching based on concrete attributes of the product (e.g. price, color) or the user (as in Collaborative Filtering). Based on commonsense reasoning technology, we introduce a novel recommendation technique, Scenario-Oriented Recommendation, which helps users by mapping their daily scenarios to product attributes, and works even when users don't know exactly what products they are looking for.

© All rights reserved Shen et al. and/or ACM Press

 

Silva, Júnia Coutinho Anacleto, Godoi, Muriel de Souza, Carvalho, Aparecido Fabiano Pinatti de and Lieberman, Henry (2007): A Common Sense-Based On-Line Assistant for Training Employees. In: Baranauskas, Maria Cecília Calani, Palanque, Philippe A., Abascal, Julio and Barbosa, Simone Diniz Junqueira (eds.) DEGAS 2007 - Proceedings of the 1st International Workshop on Design and Evaluation of e-Government Applications and Services September 11th, 2007, Rio de Janeiro, Brazil. pp. 243-254. Available online

2006
 

Faaborg, Alexander and Lieberman, Henry (2006): A goal-oriented web browser. In: Proceedings of ACM CHI 2006 Conference on Human Factors in Computing Systems 2006. pp. 751-760. Available online

Many users are familiar with the interesting but limited functionality of Data Detector interfaces like Microsoft's Smart Tags and Google's AutoLink. In this paper we significantly expand the breadth and functionality of this type of user interface through the use of large-scale knowledge bases of semantic information. The result is a Web browser that is able to generate personalized semantic hypertext, providing a goal-oriented browsing experience. We present (1) Creo, a Programming by Example system for the Web that allows users to create a general-purpose procedure with a single example, and (2) Miro, a Data Detector that matches the content of a page to high-level user goals. An evaluation with 34 subjects found that they were more efficient using our system, and that the subjects would use features like these if they were integrated into their Web browser.

© All rights reserved Faaborg and Lieberman and/or ACM Press

 

Lieberman, Henry and Espinosa, Jose (2006): A goal-oriented interface to consumer electronics using planning and commonsense reasoning. In: Proceedings of the 2006 International Conference on Intelligent User Interfaces 2006. pp. 226-233. Available online

We are reaching a crisis with design of user interfaces for consumer electronics. Flashing 12:00 time indicators, push-and-hold buttons, and interminable modes and menus are all symptoms of trying to maintain a one-to-one correspondence between functions and physical controls, which becomes hopeless as the number of capabilities of devices grows. We propose instead to orient interfaces around the goals that users have for the use of devices. We present Roadie, a user interface agent that provides intelligent context-sensitive help and assistance for a network of consumer devices. Roadie uses a Commonsense knowledge base to map between user goals and functions of the devices, and an AI partial-order planner to provide mixed-initiative assistance with executing multi-step procedures and debugging help when things go wrong.

© All rights reserved Lieberman and Espinosa and/or ACM Press

 

Lee, Chia-Hsun Jackie, Bonanni, Leonardo, Espinosa, Jose H., Lieberman, Henry and Selker, Ted (2006): Augmenting kitchen appliances with a shared context using knowledge about daily events. In: Proceedings of the 2006 International Conference on Intelligent User Interfaces 2006. pp. 348-350. Available online

Networked appliances might make them aware of each other, but interacting with a complex network can be difficult in itself. KitchenSense is a sensor rich networked kitchen research platform that uses Common Sense reasoning to simplify control interfaces and augment interaction. The system's sensor net attempts to interpret people's intentions to create fail-soft support for safe, efficient and aesthetic activity. By considering embedded sensor data together with daily-event knowledge, a centrally-controlled system can develop a shared context across various appliances. The system is a research platform that is used to evaluate augmented intelligent support of work scenarios in physical spaces.

© All rights reserved Lee et al. and/or ACM Press

 

Lieberman, Henry, Paterno, Fabio and Wulf, Volker (eds.) (2006): End-User Development. Springer-Verlag

 

Lieberman, Henry, Paterno, Fabio and Wulf, Volker (2006): End User Development (Human-Computer Interaction Series). Springer

 

Lieberman, Henry, Paterno, Fabio, Klann, Markus and Wulf, Volker (2006): End-user development: An emerging paradigm. End User Development. In: Lieberman, Henry, Paterno, Fabio and Wulf, Volker (eds.). "End User Development (Human-Computer Interaction Series)". Springer. pp. 1-8

 

Anacleto, Junia Coutinho, Carvalho, Aparecido Fabiano P. de, Néris, Vânia Paula de Almeida, Godoi, Muriel de Souza, Zem-Mascarenhas, Sílvia, Neto, Americo Talarico and Lieberman, Henry (2006): Applying common sense to distance learning: the case of home care education. In: Proceedings of the 2006 Brazilian Symposium on Human Factors in Computing Systems 2006. pp. 1-4. Available online

One of the challenges of Distance Learning (DL) over the Web is that it imposes physical distance between teachers and students, and thus reduces opportunities for teachers to fully understand the state of student knowledge and adapt instructional material to their needs. While the knowledge to be taught is usually well expressed in the curriculum material, the knowledge of a typical student is harder to codify. In this paper, it is shown how a large knowledge base of common sense statements can be used to help model student knowledge and consequently to help teachers plan learning actions by:
* Identifying topics that need coverage by the curriculum;
* Identifying what a student is already likely to know;
* Locating possible misconceptions that need to be corrected; and
* Assuring that the student understands the vocabulary needed to express relevant subject material.

© All rights reserved Anacleto et al. and/or Sociedade Brasileira de Computação

2005
 

Chung, Jae-woo, Kern, Rachel and Lieberman, Henry (2005): Topic spotting common sense translation assistant. In: Proceedings of ACM CHI 2005 Conference on Human Factors in Computing Systems 2005. pp. 1280-1283. Available online

Our Translation Assistant applies common sense logic to the problem of translating speech in real time from one language to another. Using speech recognition combined with a software translator to do word-by-word translation is not feasible because speech recognition is notorious for poor results. Word-by-word translation requires grammatically correct input to translate accurately. Therefore, translation of speech that is potentially already fraught with errors is not expected to be good. Our Translation Assistant works around these problems by using the context of the conversation as a basis for translation. It takes the location and the speaker as input to establish the circumstances. Then it uses a common sense knowledge network to do topic-spotting using key words from the conversation. It only translates the most likely topics of conversation into the target language. This system does not require perfect speech recognition, yet enables end-users to have a sense of the conversation.

© All rights reserved Chung et al. and/or ACM Press
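
The topic-spotting step can be sketched as scoring candidate topics by keyword overlap with the (noisy) recognized words, plus a small boost for topics typical of the current location. The keyword lists and locations below are invented stand-ins for the common sense knowledge network the paper uses.

    # Sketch of topic spotting: score candidate conversation topics by how
    # many recognized words relate to them, with a small boost for topics
    # typical of the current location.  Keyword lists and locations are
    # invented stand-ins for a common-sense knowledge network.

    TOPIC_KEYWORDS = {
        "ordering food": {"menu", "order", "water", "check", "spicy"},
        "asking directions": {"street", "left", "right", "map", "far"},
        "shopping": {"price", "size", "color", "cash", "receipt"},
    }

    TOPICS_AT_LOCATION = {
        "restaurant": {"ordering food"},
        "station": {"asking directions"},
    }

    def spot_topic(recognized_words, location=None):
        """Most likely topic given noisy recognized words and the location."""
        words = {w.lower() for w in recognized_words}
        def score(topic):
            s = len(words & TOPIC_KEYWORDS[topic])
            if location and topic in TOPICS_AT_LOCATION.get(location, set()):
                s += 1  # location prior
            return s
        return max(TOPIC_KEYWORDS, key=score)

    if __name__ == "__main__":
        # Imperfect recognition output: only a few words come through.
        print(spot_topic(["uh", "menu", "water", "please"], location="restaurant"))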

 

Liu, Hugo and Lieberman, Henry (2005): Programmatic semantics for natural language interfaces. In: Proceedings of ACM CHI 2005 Conference on Human Factors in Computing Systems 2005. pp. 1597-1600. Available online

An important way of making interfaces usable by non-expert users is to enable the use of natural language input, as in natural language query interfaces to databases, or MUDs and MOOs. When the subject matter is about procedures, however, we have discovered that interfaces can take advantage of what we call Programmatic Semantics, procedural relations that can be inferred from the linguistic structure. Roughly, nouns can be interpreted as data structures; verbs are functions; adjectives are properties. Some linguistic forms imply conditionals, loops, and recursive structures. We illustrate the principles of Programmatic Semantics with a description of Metafor, a "brainstorming" editor for programs, analogous to an outlining tool for prose writing. Metafor interactively converts English sentences to partially specified program code, to be used as "scaffolding" for a more detailed program. A user study showed that Metafor is capable of capturing enough Programmatic Semantics to facilitate non-programming users and beginners' conceptualization of programming problems.

© All rights reserved Liu and Lieberman and/or ACM Press

 

Lieberman, Henry and Kumar, Ashwani (2005): Providing Expert Advice by Analogy for On-Line Help. In: Skowron, Andrzej, Agrawal, Rakesh, Luck, Michael, Yamaguchi, Takahira, Morizet-Mahoudeaux, Pierre, Liu, Jiming and Zhong, Ning (eds.) 2005 IEEE / WIC / ACM International Conference on Web Intelligence WI 2005 19-22 September, 2005, Compiegne, France. pp. 26-32. Available online

 

Liu, Hugo and Lieberman, Henry (2005): Metafor: visualizing stories as code. In: Amant, Robert St., Riedl, John and Jameson, Anthony (eds.) Proceedings of the 2005 International Conference on Intelligent User Interfaces January 10-13, 2005, San Diego, California, USA. pp. 305-307. Available online

2004
 

Wagner, Earl J. and Lieberman, Henry (2004): Supporting user hypotheses in problem diagnosis. In: Nunes, Nuno Jardim and Rich, Charles (eds.) International Conference on Intelligent User Interfaces 2004 January 13-16, 2004, Funchal, Madeira, Portugal. pp. 30-37. Available online

People are performing increasingly complicated actions on the web, such as automated purchases involving multiple sites. Things often go wrong, however, and it can be difficult to diagnose a problem in a complex process. Information must be integrated from multiple sites before relations among processes and data can be visualized and understood. Once the source of a problem has been diagnosed, it can be tedious to explain the process of diagnosis to others, and difficult to review the steps later. We present a web interface agent, Woodstein, that monitors user actions on the web and retrieves related information to assemble an integrated view of an action. It manages user hypotheses during problem diagnosis by capturing users' judgments of the correctness of data and processes. These hypotheses can be shared with others, including customer service representatives, or accessed later. We will see this feature in the context of diagnosing problems on the web, and discuss its broader applicability to system interfaces in general.

© All rights reserved Wagner and Lieberman and/or ACM Press

 

Wagner, Earl J. and Lieberman, Henry (2004): Demonstration of agent support for user hypotheses in problem diagnosis. In: Nunes, Nuno Jardim and Rich, Charles (eds.) International Conference on Intelligent User Interfaces 2004 January 13-16, 2004, Funchal, Madeira, Portugal. pp. 368-369. Available online

We present a web interface agent, Woodstein, that monitors user actions on the web and retrieves related information to assemble an integrated view of a transaction. It manages user hypotheses during diagnosis by capturing users' judgments of the correctness of data and processes. These hypotheses can be shared with others, such as customer service representatives, or saved for later. We will see this feature in the context of diagnosing problems on the web.

© All rights reserved Wagner and Lieberman and/or ACM Press

 

Liu, Hugo and Lieberman, Henry (2004): Toward a Programmatic Semantics of Natural Language. In: VL-HCC 2004 - IEEE Symposium on Visual Languages and Human-Centric Computing 26-29 September, 2004, Rome, Italy. pp. 281-282. Available online

2003
 

Liu, Hugo, Lieberman, Henry and Selker, Ted (2003): A model of textual affect sensing using real-world knowledge. In: Johnson, Lewis and Andre, Elisabeth (eds.) International Conference on Intelligent User Interfaces 2003 January 12-15, 2003, Miami, Florida, USA. pp. 125-132. Available online

This paper presents a novel way for assessing the affective qualities of natural language and a scenario for its use. Previous approaches to textual affect sensing have employed keyword spotting, lexical affinity, statistical methods, and hand-crafted models. This paper demonstrates a new approach, using large-scale real-world knowledge about the inherent affective nature of everyday situations (such as "getting into a car accident") to classify sentences into "basic" emotion categories. This commonsense approach has new robustness implications. Open Mind Commonsense was used as a real world corpus of 400,000 facts about the everyday world. Four linguistic models are combined for robustness as a society of commonsense-based affect recognition. These models cooperate and compete to classify the affect of text. Such a system that analyzes affective qualities sentence by sentence is of practical value when people want to evaluate the text they are writing. As such, the system is tested in an email writing application. The results suggest that the approach is robust enough to enable plausible affective text user interfaces.

© All rights reserved Liu et al. and/or ACM Press
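
Of the models the paper combines, the simplest, keyword spotting, can be sketched as below. The tiny emotion lexicon is invented, and the commonsense-based models that give the approach its robustness are not reproduced.

    # Sketch of the simplest model mentioned above -- keyword spotting --
    # which the paper combines with commonsense-based models not shown
    # here.  The emotion lexicon is invented.

    EMOTION_LEXICON = {
        "happy":   {"great", "glad", "wonderful", "party"},
        "sad":     {"sorry", "miss", "lost", "funeral"},
        "fearful": {"afraid", "worried", "accident", "scared"},
        "angry":   {"furious", "unfair", "hate", "annoyed"},
    }

    def classify_sentence(sentence):
        """Return (emotion, score) for the best-matching basic emotion,
        or ('neutral', 0) when no affective keyword is found."""
        words = {w.strip(".,!?").lower() for w in sentence.split()}
        best, best_score = "neutral", 0
        for emotion, cues in EMOTION_LEXICON.items():
            score = len(words & cues)
            if score > best_score:
                best, best_score = emotion, score
        return best, best_score

    if __name__ == "__main__":
        for s in ["I was so worried after the car accident.",
                  "We threw a wonderful party!"]:
            print(s, "->", classify_sentence(s))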

 

Lieberman, Henry and Wagner, Earl J. (2003): End-user debugging for e-commerce. In: Johnson, Lewis and Andre, Elisabeth (eds.) International Conference on Intelligent User Interfaces 2003 January 12-15, 2003, Miami, Florida, USA. pp. 257-259. Available online

One of the biggest unaddressed challenges for the digital economy is what to do when electronic transactions go wrong. Consumers are frustrated by interminable phone menus, and long delays to problem resolution. Businesses are frustrated by the high cost of providing quality customer service. We believe that many simple problems, such as mistyped numbers or lost orders, could be easily diagnosed if users were supplied with end-user debugging tools, analogous to tools for software debugging. These tools can show the history of actions and data, and provide assistance for keeping track of and testing hypotheses. These tools would benefit not only users, but businesses as well by decreasing the need for customer service.

© All rights reserved Lieberman and Wagner and/or ACM Press

 

Sharon, Taly, Lieberman, Henry and Selker, Ted (2003): A zero-input interface for leveraging group experience in web browsing. In: Johnson, Lewis and Andre, Elisabeth (eds.) International Conference on Intelligent User Interfaces 2003 January 12-15, 2003, Miami, Florida, USA. pp. 290-292. Available online

The experience of a trusted group of colleagues can help users improve the quality and focus of their browsing and searching activities. How could a system provide such help, when and where the users need it, without disrupting their normal work activities? This paper describes Context-Aware Proxy based System (CAPS), an agent that recommends pages and annotates links to reveal their relative popularity among the user's colleagues, matched with their automatically computed interest profiles. A Web proxy tracks browsing habits, so CAPS requires no explicit input from the user. We review here CAPS' design principles and implementation. We tested user satisfaction with the interface and the accuracy of the ranking algorithm. These experiments indicate that CAPS has high potential to support effective ranking for quality judgment -- by users.

© All rights reserved Sharon et al. and/or ACM Press
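
The ranking idea can be sketched as scoring each link by the colleagues who visited it, weighted by how closely each colleague's interest profile matches the current user's. The profiles, visit logs, and URLs below are invented; the real system derives its data from proxy logs.

    # Sketch of the ranking idea: a link is scored by how many trusted
    # colleagues visited it, weighted by how well each colleague's interest
    # profile matches the current user's.  All data below are invented.

    # Interest profiles as term -> weight.
    PROFILES = {
        "me":    {"hci": 0.9, "agents": 0.8, "sailing": 0.1},
        "alice": {"hci": 0.8, "agents": 0.6},
        "bora":  {"sailing": 0.9, "hci": 0.2},
    }

    VISITS = {
        "alice": ["http://example.org/iui-paper", "http://example.org/agents-survey"],
        "bora":  ["http://example.org/regatta", "http://example.org/iui-paper"],
    }

    def profile_match(a, b):
        """Simple overlap score between two interest profiles."""
        return sum(PROFILES[a].get(t, 0) * w for t, w in PROFILES[b].items())

    def rank_links(user):
        scores = {}
        for colleague, urls in VISITS.items():
            weight = profile_match(user, colleague)
            for url in urls:
                scores[url] = scores.get(url, 0.0) + weight
        return sorted(scores.items(), key=lambda kv: -kv[1])

    if __name__ == "__main__":
        for url, score in rank_links("me"):
            print(round(score, 2), url)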

2002
 

Lieberman, Henry (2002): Les "Spirales de contextes" dans l'interaction homme-machine. In: Proceedings of the 2002 Conference of the Association Francophone d'Interaction Homme-Machine 2002. p. 15. Available online

Context-sensitive and context-aware interfaces have become increasingly important in human-computer interaction, as we seek to make interfaces more adaptable and personalized. Context can include, for example, time, place, history of interaction, user preferences, and any relevant information external to the application. But a fundamental problem remains: just how much context should you use? If you use too little, the application remains inflexible; if you use too much, you risk slowing down the application, hurting interactivity. I propose the idea of Spiral Contexts, a dynamic approach that consists of starting out with a small, restricted context, and gradually expanding it (in time and space), interleaved with user interaction.

© All rights reserved Lieberman and/or ACM Press
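
The spiral itself can be sketched as a loop that starts from a small context and widens it step by step, handing control back to the user between steps. The event data and window sizes below are invented purely for illustration.

    # Sketch of the "spiral" idea: start with a small context (here, a time
    # window around "now"), and widen it step by step, yielding control back
    # to the interface between steps so the user can stop as soon as the
    # results are good enough.  The event data are invented.

    EVENTS = {  # minutes-ago -> event description
        2: "saved draft",
        15: "opened budget.xls",
        90: "emailed Alice",
        300: "installed printer driver",
    }

    def spiral_context(windows=(5, 30, 120, 600)):
        """Yield progressively larger slices of recent history."""
        for limit in windows:
            yield limit, [e for t, e in sorted(EVENTS.items()) if t <= limit]

    if __name__ == "__main__":
        for limit, events in spiral_context():
            print(f"last {limit} min:", events)
            if len(events) >= 3:        # stand-in for "user is satisfied"
                break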

2001
 

Shearin, Sybil and Lieberman, Henry (2001): Intelligent Profiling by Example. In: International Conference on Intelligent User Interfaces 2001 January 14-17, 2001, Santa Fe, New Mexico, USA. pp. 145-151. Available online

The Apt Decision agent learns user preferences in the domain of rental real estate by observing the user's critique of apartment features. Users provide a small number of criteria in the initial interaction, receive a display of sample apartments, and then react to any feature of any apartment independently, in any order. Users learn which features are important to them as they discover the details of specific apartments. The agent uses interactive learning techniques to build a profile of user preferences, which can then be saved and used in further retrievals. Because the user's actions in specifying preferences are also used by the agent to create a profile, the result is an agent that builds a profile without redundant or unnecessary effort on the user's part.

© All rights reserved Shearin and Lieberman and/or ACM Press

 

Lieberman, Henry (2001): Your Wish Is My Command: Programming by Example. San Francisco, CA, Morgan Kaufmann Publishers

 

Lieberman, Henry, Rosenzweig, Elizabeth and Singh, Push (2001): Aria: An Agent for Annotating and Retrieving Images. In IEEE Computer, 34 (7) pp. 57-62. Available online

 

Lieberman, Henry and Fry, Christopher (2001): Will software ever work?. In Communications of the ACM, 44 (3) pp. 122-124. Available online

 

Lieberman, Henry, Fry, Christopher and Weitzman, Louis (2001): Exploring the Web with reconnaissance agents. In Communications of the ACM, 44 (8) pp. 69-75. Available online

2000
 

Vivacqua, Adriana and Lieberman, Henry (2000): Agents to Assist in Finding Help. In: Turner, Thea, Szwillus, Gerd, Czerwinski, Mary, Paterno, Fabio and Pemberton, Steven (eds.) Proceedings of the ACM CHI 2000 Human Factors in Computing Systems Conference April 1-6, 2000, The Hague, The Netherlands. pp. 65-72. Available online

When a novice needs help, often the best solution is to find a human expert who is capable of answering the novice's questions. But often, novices have difficulty characterizing their own questions and expertise and finding appropriate experts. Previous attempts to assist expertise location have provided matchmaking services, but leave the task of classifying knowledge and queries to be performed manually by the participants. We introduce Expert Finder, an agent that automatically classifies both novice and expert knowledge by autonomously analyzing documents created in the course of routine work. Expert Finder works in the domain of Java programming, where it relates a user's Java class usage to an independent domain model. User models are automatically generated that allow accurate matching of query to expert without either the novice or expert filling out skill questionnaires. Testing showed that automatically generated profiles matched well with experts' own evaluation of their skills, and we achieved a high rate of matching novice questions with appropriate experts.

© All rights reserved Vivacqua and Lieberman and/or ACM Press
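
The matching step can be sketched as comparing a novice question, mapped to the Java classes it mentions, against expert profiles built from class-usage counts, using cosine similarity. The profiles and the crude question-to-class mapping below are invented simplifications, not Expert Finder's actual domain model.

    # Sketch of the matching step: expert profiles are usage counts over
    # Java classes extracted from their own code; a novice question is
    # mapped to the classes it mentions, and experts are ranked by cosine
    # similarity.  All counts and mappings are invented.
    import math

    EXPERT_PROFILES = {
        "maria": {"java.net.Socket": 40, "java.io.InputStream": 25},
        "ken":   {"javax.swing.JButton": 60, "java.awt.Graphics": 30},
    }

    def cosine(a, b):
        dot = sum(a.get(k, 0) * v for k, v in b.items())
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def classes_in(question):
        """Very rough stand-in for relating a question to domain classes."""
        hints = {"socket": "java.net.Socket", "button": "javax.swing.JButton",
                 "stream": "java.io.InputStream", "drawing": "java.awt.Graphics"}
        return {cls: 1 for word, cls in hints.items() if word in question.lower()}

    def find_experts(question):
        query = classes_in(question)
        return sorted(EXPERT_PROFILES, key=lambda e: -cosine(EXPERT_PROFILES[e], query))

    if __name__ == "__main__":
        print(find_experts("Why does my socket close while reading the stream?"))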

 

Lieberman, Henry (ed.) International Conference on Intelligent User Interfaces 2000 January 9-12, 2000, New Orleans, Louisiana, USA.

 

Amant, Robert St., Lieberman, Henry, Potter, Richard and Zettlemoyer, Luke S. (2000): Visual Generalization in Programming by Example. In Communications of the ACM, 43 (3) pp. 107-114. Available online

 

Lieberman, Henry (2000): Programming by Example: Introduction. In Communications of the ACM, 43 (3) pp. 72-74. Available online

1999
 

Lieberman, Henry (1999): Intelligent Interface Agents. In: Maybury, Mark T. (ed.) International Conference on Intelligent User Interfaces 1999 January 5-8, 1999, Redondo Beach, California, USA. p. 7. Available online

 

Dyke, Neil W. Van, Lieberman, Henry and Maes, Pattie (1999): Butterfly: A Conversation-Finding Agent for Internet Relay Chat. In: Maybury, Mark T. (ed.) International Conference on Intelligent User Interfaces 1999 January 5-8, 1999, Redondo Beach, California, USA. pp. 39-41. Available online

The Internet enables groups of people throughout the world to interact to discuss issues, get assistance, learn, and socialize. However, when there are thousands of loosely defined groups in which a user could potentially participate, the problem becomes finding the groups of most interest. In this paper we focus on the domain of Internet Relay Chat real-time text messaging, and describe a "social butterfly" agent called Butterfly that samples available conversational groups and recommends ones of interest. We discuss Butterfly's motivation, usage, real-world design constraints, implementation, and results. Finally, we introduce work in progress on a multi-agent approach that has grown out of our experience with Butterfly.

© All rights reserved Dyke et al. and/or ACM Press

 

Lieberman, Henry, Dyke, Neil W. Van and Vivacqua, Adriana S. (1999): Let's Browse: A Collaborative Web Browsing Agent. In: Maybury, Mark T. (ed.) International Conference on Intelligent User Interfaces 1999 January 5-8, 1999, Redondo Beach, California, USA. pp. 65-68. Available online

Web browsing, like most of today's desktop applications, is usually a solitary activity. Other forms of media, such as watching television, are often done by groups of people, such as families or friends. What would it be like to do collaborative Web browsing? Could the computer provide assistance to group browsing by trying to help find mutual interests among the participants? Let's Browse is an experiment in building an agent to assist a group of people in browsing, by suggesting new material likely to be of common interest. It is built as an extension to the single-user Web browsing agent Letizia. Let's Browse features automatic detection of the presence of users, automated "channel surfing" browsing, and dynamic display of the user profiles and explanation of recommendations.

© All rights reserved Lieberman et al. and/or ACM Press

 

Lieberman, Henry, Bradshaw, Jeffrey M., Gil, Yolanda and Selker, Ted (1999): IUI and Agents for the New Millennium. In: Maybury, Mark T. (ed.) International Conference on Intelligent User Interfaces 1999 January 5-8, 1999, Redondo Beach, California, USA. pp. 93-94. Available online

Advocates of intelligent user interfaces are used to fighting an uphill battle against more conventional approaches. Skeptics have been reluctant to accept intelligent tutoring systems, adaptive user interfaces, machine learning, predictive user models, anthropomorphic interaction, etc. as part of everyday interfaces because they have been suspicious of the feasibility of such techniques and fearful of the risk of possible mistakes. The good news is that we seem to be making progress in gaining acceptance. Past IUI conferences abound with examples of intelligent interface experiments that clearly demonstrate their feasibility. Limited examples of intelligent interfaces are actually starting to make their ways into commercial products. There is considerable evidence that opposition is softening.

However, we're not out of the woods yet. Many of the early examples of commercial IUI and agent software are positioned as "add-ons" to the more familiar direct-manipulation interfaces, rather than playing a central role. We haven't yet reached the point where a new application is simply assumed, as a matter of course, to require all the representation, reasoning and learning features that IUI attendees advocate.

But suppose we do? Suppose intelligence becomes such an integral part of the interface in the 21st century that we couldn't imagine applications without it? How will our software environment and the software industry change as a result? Will knowledge bases, inference engines, and learning algorithms become as much a part of the operating system as windows and menus? Will the idea of an "application", as a standalone, shrink-wrapped single-purpose interface, disappear? Once the interface is intelligent, is there any point to having present-day concepts like "files" or "directories"? Will all interfaces become personalized to the extent that there won't be any more "generic" interfaces that remain the same across millions of users? Will all information sources be interactive and customized, obsoleting paper books and linear movies? Will that lead to a loss of shared context among users? How will different intelligent user interfaces interoperate and co-operate? What, if anything, will be the next step beyond IUIs and agents?

The panel will ask participants to speculate on how the widespread acceptance of intelligent user interfaces that we expect for the next millennium will transform our computing environments.

© All rights reserved Lieberman et al. and/or ACM Press

1998
 

Lieberman, Henry (1998): Intelligent Interface Agents. In: Marks, Joe (ed.) International Conference on Intelligent User Interfaces 1998 January 6-9, 1998, San Francisco, California, USA. p. 3. Available online

Attendees will come away with a real, no-nonsense understanding of "the agent phenomenon": why people are excited about agents, what the range of applicability of interface agent systems is, what is good and bad about agents, how to learn more about agents, and what is necessary to build them. Features:
* What's an Agent?
* Perspectives from Artificial Intelligence, and from Human-Computer Interaction
* Controversies about Agents
* Architectural Considerations for Agent Interfaces
* Learning Techniques for Interface Agents
* Examples of Agent Systems
* Programming and User Interface Design for Agent Systems
* Resources for learning about the Agents field

© All rights reserved Lieberman and/or ACM Press

 

Lieberman, Henry (1998): Integrating User Interface Agents with Conventional Applications. In: Marks, Joe (ed.) International Conference on Intelligent User Interfaces 1998 January 6-9, 1998, San Francisco, California, USA. pp. 39-46. Available online

In most experiments with user interface agents to date, it has been necessary either to implement both the agent and the application from scratch, or to modify the code of an existing application to enable the necessary communication. Instead, we would like to be able to "attach" an agent to an existing application, while requiring only a minimum of advance planning on the part of the application developer. Commercial applications are increasingly supporting the use of "application programmers' interfaces" and scripting languages as a means of achieving external control of applications. Are these mechanisms sufficient for software agents to achieve communication with applications? This paper reports some preliminary experiments in developing agent software that works with existing, unmodified commercial applications and agents that work across multiple applications. We describe a programming by example agent, ScriptAgent, that uses a scripting language, Applescript, to record example procedures that are generalized by the agent. Another approach is examinability, where the application grants to the agent the right to examine internal data structures. We present another kind of learning agent, Tatlin, that compares successive application states to infer interface operations. Finally, we discuss broader systems issues such as parallelism, interface sharing between agent and application, and access to objects.

© All rights reserved Lieberman and/or ACM Press

1997
 

Marks, Joe, Birnbaum, Larry, Horvitz, Eric, Kurlander, David, Lieberman, Henry and Roth, Steve (1997): Compelling Intelligent User Interfaces: How Much AI?. In: Moore, Johanna D., Edmonds, Ernest and Puerta, Angel R. (eds.) International Conference on Intelligent User Interfaces 1997 January 6-9, 1997, Orlando, Florida, USA. pp. 173-175. Available online

Efforts to incorporate intelligence into the user interface have been underway for decades, but the commercial impact of this work has not lived up to early expectations, and is not immediately apparent. This situation appears to be changing. However, so far the most interesting intelligent user interfaces (IUIs) have tended to use minimal or simplistic AI. In this panel we consider whether more or less AI is the key to the development of compelling IUIs. The panelists will present examples of compelling IUIs that use a selection of AI techniques, mostly simple, but some complex. Each panelist will then comment on the merits of different kinds and quantities of AI in the development of pragmatic interface technology.

© All rights reserved Marks et al. and/or ACM Press

 

Ungar, David, Lieberman, Henry and Fry, Christopher (1997): Debugging and the Experience of Immediacy. In Communications of the ACM, 40 (4) pp. 38-43.

1996
 

Lieberman, Henry (1996): Intelligent Graphics. In Communications of the ACM, 39 (8) pp. 38-48.

1995
 

Lieberman, Henry and Fry, Christopher (1995): Bridging the Gulf Between Code and Behavior in Programming. In: Katz, Irvin R., Mack, Robert L., Marks, Linn, Rosson, Mary Beth and Nielsen, Jakob (eds.) Proceedings of the ACM CHI 95 Human Factors in Computing Systems Conference May 7-11, 1995, Denver, Colorado. pp. 480-486. Available online

Program debugging can be an expensive, complex and frustrating process. Conventional programming environments provide little explicit support for the cognitive tasks of diagnosis and visualization faced by the programmer. ZStep 94 is a program debugging environment designed to help the programmer understand the correspondence between static program code and dynamic program execution. Some of ZStep 94's innovations include:
* An animated view of program execution, using the very same display used to edit the source code
* A window that displays values which follows the stepper's focus
* An incrementally-generated complete history of program execution and output
* "Video recorder" controls to run the program in forward and reverse directions and control the level of detail displayed
* One-click access from graphical objects to the code that drew them
* One-click access from expressions in the code to their values and graphical output

© All rights reserved Lieberman and Fry and/or ACM Press
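
One of the listed ideas, keeping a complete step-by-step execution history that can be reviewed forward and backward, can be sketched in Python with sys.settrace. This records history only and is not ZStep 94 itself, which works on Lisp and animates the source display.

    # Sketch of one ZStep-like idea: record every executed line together
    # with the local variables at that moment, so execution can later be
    # stepped forward and backward over the recorded history.
    import sys

    history = []  # list of (line number, snapshot of locals)

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code.co_name == "factorial":
            history.append((frame.f_lineno, dict(frame.f_locals)))
        return tracer

    def factorial(n):
        result = 1
        while n > 1:
            result *= n
            n -= 1
        return result

    if __name__ == "__main__":
        sys.settrace(tracer)
        factorial(4)
        sys.settrace(None)
        # "Video recorder" style review: step backward through the history.
        for lineno, snapshot in reversed(history):
            print(f"line {lineno}: {snapshot}")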

 

Lieberman, Henry (1995): A Demonstrational Interface for Recording Technical Procedures by Annotation of Videotaped Examples. In International Journal of Human-Computer Studies, 43 (3) pp. 383-417.

In conventional knowledge acquisition, a domain expert interacts with a knowledge engineer, who interviews the expert, and codes knowledge about the domain objects and procedures in a rule-based language, or other textual representation language. This indirect methodology can be tedious and error-prone, since the domain expert's verbal descriptions can be inaccurate or incomplete, and the knowledge engineer may not correctly interpret the expert's intent. We describe a user interface that allows a domain expert who is not a programmer to construct representations of objects and procedures directly from a video of a human performing an example procedure. The domain expert need not be fluent in the underlying representation language, since all interaction is through direct manipulation. Starting from digitized video, the user selects significant frames that illustrate before- and after- states of important operations. Then the user graphically annotates the contents of each selected frame, selecting portions of the image to represent each part, labeling the parts, and indicating part/whole relationships. The actions that represent the transition between frames are described using the technique of programming by demonstration (also called programming by example). The user performs operations on concrete visual objects in the graphical interface, and the system records the user's actions. Explanation-based learning techniques are used to synthesize a generalized program that can be used on subsequent examples. The knowledge acquisition and video annotation facilities are implemented as part of the graphical editor Mondrian, which incorporates a programming by demonstration facility. We explain the operation of Mondrian's interface in its base domain of graphical editing as well as for the video annotation and knowledge acquisition application. The result of the knowledge acquisition process is object descriptions for each object in the domain, generalized procedural descriptions, and visual and natural language documentation of the procedure. We illustrate the system in the domain of documentation of operational and maintenance procedures for electrical devices.

© All rights reserved Lieberman and/or Academic Press

 

Lieberman, Henry (1995): The Visual Language of Experts in Graphic Design. In: VL 1995 1995. pp. 5-12. Available online

 

Stoehr, Elizabeth and Lieberman, Henry (1995): Hearing Aid: Adding Verbal Hints to a Learning Interface. In: ACM Multimedia 1995 1995. pp. 223-230. Available online

1994
 

Lieberman, Henry (1994): Powers of Ten Thousand: Navigating in Large Information Spaces. In: Szekely, Pedro (ed.) Proceedings of the 7th annual ACM symposium on User interface software and technology November 02 - 04, 1994, Marina del Rey, California, United States. pp. 15-16. Available online

How would you interactively browse a very large display space, for example, a street map of the entire United States? The traditional solution is zoom and pan. But each time a zoom-in operation takes place, the context from which it came is visually lost. Sequential applications of the zoom-in and zoom-out operations may become tedious. This paper proposes an alternative technique, the microscope, based on zooming and panning in multiple translucent layers. A microscope display should comfortably permit browsing continuously on a single image, or set of images in multiple resolutions, on a scale of at least 1 to 10,000.

© All rights reserved Lieberman and/or ACM Press
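
The compositing behind a translucent layer stack can be sketched as computing the context layer's opacity as a function of the detail layer's relative magnification and blending the two. The falloff constant below is an invented example value, not a parameter from the paper.

    # Sketch of the blending arithmetic behind translucent zoom layers:
    # the zoomed-out context layer stays visible under the detail layer,
    # with its opacity falling off as the detail layer's magnification
    # grows.  The falloff constant is an invented example value.

    def context_opacity(magnification, falloff=4.0):
        """Opacity of the context layer when the detail layer is shown at
        `magnification` times the context scale (1.0 = same scale)."""
        return max(0.0, min(1.0, falloff / (falloff + magnification - 1.0)))

    def blend(context_pixel, detail_pixel, magnification):
        """Composite one grey-scale pixel of the translucent stack."""
        a = context_opacity(magnification)
        return a * context_pixel + (1.0 - a) * detail_pixel

    if __name__ == "__main__":
        for m in (1, 2, 10, 100):
            print(f"x{m}: context opacity = {context_opacity(m):.2f}")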

 

Lieberman, Henry (1994): Powers of Ten Thousand: A Translucent Zooming Technique. In: Szekely, Pedro (ed.) Proceedings of the 7th annual ACM symposium on User interface software and technology November 02 - 04, 1994, Marina del Rey, California, United States.

1993
 

Lieberman, Henry (1993): Graphical Annotation as a Visual Language for Specifying Generalization Relations. In: Proceedings of the 1993 IEEE Workshop on Visual Languages August 24-27, 1993, Bergen, Norway. pp. 19-24.

1992
 

Lieberman, Henry (1992): Capturing Visual Design Expertise Interactively by Example. In: East-West International Conference on Human-Computer Interaction: Proceedings of the EWHCI92 1992. pp. 360-366.

A problem in applying artificial intelligence techniques to visual design domains is that much of the knowledge possessed by experts is best expressible in terms of visual examples. The traditional expert systems methodology requires this knowledge to be communicated from a design expert to a knowledge engineer, who then translates this knowledge into rules and other textual descriptions. This process is awkward and error-prone. An alternative is to capture design knowledge more directly through an interactive graphical interface, by having the design expert manipulate concrete design examples in a graphical editor. The editor is equipped with an interface agent that records the user's actions, and produces a generalized description of the procedure. The design procedure thus learned can subsequently be applied to examples that are similar to, but not identical to, those on which the system was originally taught. This approach is illustrated in this paper by a description of the graphical editor Mondrian, which uses programming by example to capture interface actions that represent an expert's problem solving behavior. The paper presents an example in a desktop publishing domain, where the system is taught a procedure for rearranging a layout of newspaper articles.

© All rights reserved Lieberman and/or Intl. Centre for Scientific And Technical Information

 

Lieberman, Henry (1992): Dominoes and Storyboards: Beyond "Icons on Strings". In: Proceedings of the 1992 IEEE Workshop on Visual Languages September 15-18, 1992, Seattle, Washington, USA. pp. 65-71.

1991
 

Fineblum, Michelle and Lieberman, Henry (1991): Storyboard-Based Programming Tools. In ACM SIGCHI Bulletin, 23 (4) pp. 81-82.

1984
 

Lieberman, Henry (1984): Seeing What Your Programs are Doing. In International Journal of Man-Machine Studies, 21 (4) pp. 311-331.

The advent of personal computers with high resolution displays and pointing devices will permit a drastic improvement in the quality of user interfaces for programming environments. Programming environments already are beginning to make use of interactive graphics as a tool for helping us visualize the operation of programs we write. Watching a program work step-by-step, where each step is reflected in visible changes to the display screen, greatly facilitates understanding of the internal workings of a program. But the power of interactive graphics for program visualization has yet to be exploited in a programming environment as a tool for creating programs, as opposed to merely observing already-written programs. Tinker is a programming environment for Lisp in which a program is constructed by demonstrating its steps on representative examples, and the system displays graphically the result of each step as it is performed. The programmer can "see what the program is doing" while the program is being constructed. The processes of writing a program and debugging it on test cases are combined into a single interactive activity, rather than separated as they are in conventional programming environments. To help the reader visualize the operation of Tinker itself, an example is presented of how Tinker may be used to construct an alpha-beta tree search program.

© All rights reserved Lieberman and/or Academic Press
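
The example program the abstract mentions is an alpha-beta tree search; for reference, a standard minimal alpha-beta over a hand-built game tree looks like this (ordinary textbook code, not Tinker's output).

    # Standard minimal alpha-beta search over a hand-built game tree.

    def alphabeta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
        """Game value of `node`: a number (leaf) or a list of child nodes."""
        if isinstance(node, (int, float)):       # leaf: static evaluation
            return node
        if maximizing:
            value = float("-inf")
            for child in node:
                value = max(value, alphabeta(child, alpha, beta, False))
                alpha = max(alpha, value)
                if alpha >= beta:                # beta cutoff: opponent avoids this branch
                    break
            return value
        else:
            value = float("inf")
            for child in node:
                value = min(value, alphabeta(child, alpha, beta, True))
                beta = min(beta, value)
                if alpha >= beta:                # alpha cutoff
                    break
            return value

    if __name__ == "__main__":
        tree = [[3, 5], [6, 9], [1, 2]]          # depth-2 game, max to move
        print(alphabeta(tree))                   # expect 6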

1983
 

Lieberman, Henry and Hewitt, Carl (1983): A Real-Time Garbage Collector Based on the Lifetimes of Objects. In Communications of the ACM, 26 (6) pp. 419-429.

1982
 

Lieberman, Henry (1982): Constructing graphical user interfaces by example. In: Graphics Interface 82 May 17-21, 1982, Toronto, Ontario, Canada. pp. 295-302.

 


Page Information

Page maintainer: The Editorial Team
URL: http://www.interaction-design.org/references/authors/henry_lieberman.html