Publication statistics

Pub. period: 2005-2012
Pub. count: 22
Number of co-authors: 32



Co-authors

Number of publications with his 3 most frequent co-authors:

Jodi Forlizzi: 12
Jessica Hodgins: 7
Takayuki Kanda: 4

 

 

Productive colleagues

Bilge Mutlu's 3 most productive colleagues, by number of publications:

Manfred Tscheligi: 105
Jodi Forlizzi: 90
Sara Kiesler: 59
 
 
 

Bilge Mutlu

Assistant Professor

Picture of Bilge Mutlu.
Personal Homepage:
http://bilgemutlu.com

Current place of employment:
Department of Computer Sciences, University of Wisconsin-Madison

My current research goal is designing social behavior for socially interactive systems, particularly humanlike robots. I am also interested in understanding the social, cognitive, and organizational impact of these technologies through experimental and ethnographic studies. In my research, I follow an interdisciplinary process of theoretically and empirically grounded design.


Publications by Bilge Mutlu (bibliography)

2012
 

Tscheligi, Manfred, Meschtscherjakov, Alexander, Weiss, Astrid, Wulf, Volker, Evers, Vanessa and Mutlu, Bilge (2012): Exploring collaboration in challenging environments: from the car to the factory and beyond. In: Companion Proceedings of ACM CSCW 2012 Conference on Computer-Supported Cooperative Work. pp. 15-16.

We propose a daylong workshop at CSCW 2012 on collaboration in challenging and difficult environments, which we understand to be all contexts that go beyond traditional working/office settings. Examples of such environments are the automotive context or a semiconductor factory, which exhibit very specific contextual conditions and therefore pose special research challenges: How to address all passengers in the car, not only the driver? How to explore operator tasks in a cleanroom? How could the long-term (social) collaboration of robots and humans be investigated in privacy-critical environments?

© All rights reserved Tscheligi et al. and/or ACM Press

 

Huang, Chien-Ming and Mutlu, Bilge (2012): Robot behavior toolkit: generating effective social behaviors for robots. In: Proceedings of the 7th International Conference on Human-Robot Interaction 2012. pp. 25-32.

Social interaction involves a large number of patterned behaviors that people employ to achieve particular communicative goals. To achieve fluent and effective human-like communication, robots must seamlessly integrate the necessary social behaviors for a given interaction context. However, very little is known about how robots might be equipped with a collection of such behaviors and how they might employ these behaviors in social interaction. In this paper, we propose a framework that guides the generation of social behavior for human-like robots by systematically using specifications of social behavior from the social sciences and contextualizing these specifications in an Activity-Theory-based interaction model. We present the Robot Behavior Toolkit, an open-source implementation of this framework as a Robot Operating System (ROS) module and a community-based repository for behavioral specifications, and an evaluation of the effectiveness of the Toolkit in using these specifications to generate social behavior in a human-robot interaction study, focusing particularly on gaze behavior. The results show that specifications from this knowledge base enabled the Toolkit to achieve positive social, cognitive, and task outcomes, such as improved information recall, collaborative work, and perceptions of the robot.

© All rights reserved Huang and Mutlu and/or their publisher
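
The Toolkit described above follows a specification-driven pattern: findings about social behavior are encoded as machine-readable specifications, which the system matches against the current interaction context to generate behavior. The minimal Python sketch below illustrates only that pattern; the rule names and repository entries are invented for illustration and are not the Toolkit's actual ROS interfaces.

    from dataclasses import dataclass

    @dataclass
    class BehaviorSpec:
        # A behavioral specification: a communicative goal in an activity
        # context maps to a patterned social behavior (e.g., a gaze act).
        goal: str
        context: str
        behavior: str

    # Toy stand-in for the Toolkit's community-based repository of
    # specifications; these entries are hypothetical examples.
    REPOSITORY = [
        BehaviorSpec("yield_turn", "conversation", "gaze_at_addressee"),
        BehaviorSpec("hold_turn", "conversation", "avert_gaze"),
        BehaviorSpec("reference_object", "instruction", "gaze_at_object_then_addressee"),
    ]

    def generate(goal, context):
        # Return all behaviors whose specification matches the current
        # communicative goal and interaction context.
        return [s.behavior for s in REPOSITORY
                if s.goal == goal and s.context == context]

    print(generate("yield_turn", "conversation"))  # ['gaze_at_addressee']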

 

Chidambaram, Vijay, Chiang, Yueh-Hsuan and Mutlu, Bilge (2012): Designing persuasive robots: how robots might persuade people using vocal and nonverbal cues. In: Proceedings of the 7th International Conference on Human-Robot Interaction 2012. pp. 293-300.

Social robots have the potential to serve as personal, organizational, and public assistants as, for instance, diet coaches, teacher's aides, and emergency responders. The success of these robots -- whether in motivating users to adhere to a diet regimen or in encouraging them to follow evacuation procedures in the case of a fire -- will rely largely on their ability to persuade people. Research in a range of areas from political communication to education suggests that the nonverbal behaviors of a human speaker play a key role in the persuasiveness of the speaker's message and the listeners' compliance with it. In this paper, we explore how a robot might effectively use these behaviors, particularly vocal and bodily cues, to persuade users. In an experiment with 32 participants, we evaluate how manipulations in a robot's use of nonverbal cues affected participants' perceptions of the robot's persuasiveness and their compliance with the robot's suggestions across four conditions: (1) no vocal or bodily cues, (2) vocal cues only, (3) bodily cues only, and (4) vocal and bodily cues. The results showed that participants complied with the robot's suggestions significantly more when it used nonverbal cues than they did when it did not use these cues and that bodily cues were more effective in persuading participants than vocal cues were. Our model of persuasive nonverbal cues and experimental results have direct implications for the design of persuasive behaviors for human-like robots.

© All rights reserved Chidambaram et al. and/or their publisher

 

Broz, Frank, Lehmann, Hagen, Nakano, Yukiko and Mutlu, Bilge (2012): Gaze in HRI: from modeling to communication. In: Proceedings of the 7th International Conference on Human-Robot Interaction 2012. pp. 491-492.

The purpose of this half-day workshop is to explore the role of social gaze in human-robot interaction, both how to measure social gaze behavior by humans and how to implement it in robots that interact with them. Gaze directed at an interaction partner has become a subject of increased attention in human-robot interaction research. While traditional robotics research has focused work on robot gaze solely on the identification and manipulation of objects, researchers in HRI have come to recognize that gaze is a social behavior in addition to a way of sensing the world. This workshop will approach the problem of understanding the role of social gaze in human-robot interaction from the dual perspectives of investigating human-human gaze for design principles to apply to robots and of experimentally evaluating human-robot gaze interaction in order to assess how humans engage in gaze behavior with robots. Computational modeling of human gaze behavior is useful for human-robot interaction in a number of different ways. Such models can enable a robot to perceive information about the state of the human in the interaction and adjust its behavior accordingly. Additionally, more human-like gaze behavior may make a person more comfortable and engaged during an interaction. It is known that the gaze pattern of a social interaction partner has a huge impact on one's own interaction behavior. Therefore, the experimental verification of robot gaze policies is extremely important. Appropriate gaze behavior is critical for establishing joint attention, which enables humans to engage in collaborative activities and gives structure to social interactions. There is still much to be learned about which properties of human-human gaze should be transferred to human-robot gaze and how to model human-robot gaze for autonomous robots. The goal of the workshop is to exchange ideas and develop and improve methodologies for this growing area of research.

© All rights reserved Broz et al. and/or their publisher

 

Mutlu, Bilge, Kanda, Takayuki, Forlizzi, Jodi, Hodgins, Jessica and Ishiguro, Hiroshi (2012): Conversational gaze mechanisms for humanlike robots. In ACM Transactions on Interactive Intelligent Systems, 1 (2) p. 33.

During conversations, speakers employ a number of verbal and nonverbal mechanisms to establish who participates in the conversation, when, and in what capacity. Gaze cues and mechanisms are particularly instrumental in establishing the participant roles of interlocutors, managing speaker turns, and signaling discourse structure. If humanlike robots are to have fluent conversations with people, they will need to use these gaze mechanisms effectively. The current work investigates people's use of key conversational gaze mechanisms, how they might be designed for and implemented in humanlike robots, and whether these signals effectively shape human-robot conversations. We focus particularly on whether humanlike gaze mechanisms might help robots signal different participant roles, manage turn-exchanges, and shape how interlocutors perceive the robot and the conversation. The evaluation of these mechanisms involved 36 trials of three-party human-robot conversations. In these trials, the robot used gaze mechanisms to signal to its conversational partners their roles as either two addressees, an addressee and a bystander, or an addressee and a nonparticipant. Results showed that participants conformed to these intended roles 97% of the time. Their conversational roles affected their rapport with the robot, feelings of groupness with their conversational partners, and attention to the task.

© All rights reserved Mutlu et al. and/or ACM Press

 Cited in the following chapter:

Human-Robot Interaction: [/encyclopedia/human-robot_interaction.html]
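
The role-signaling result above suggests a simple computational reading: the robot apportions its gaze among partners according to the participant role it intends each to hold. The Python sketch below illustrates that idea; the gaze-time shares are hypothetical placeholders, not the allocations designed or measured in the paper.

    import random

    # Hypothetical gaze-time shares per intended participant role.
    ROLE_GAZE_SHARE = {
        "addressee": 0.7,       # frequent gaze while speaking
        "bystander": 0.25,      # occasional acknowledging glances
        "nonparticipant": 0.0,  # no gaze directed at this person
    }

    def next_gaze_target(partners):
        # partners maps each person's name to their intended role; any
        # residual probability goes to gaze aversion into the environment.
        names = list(partners)
        weights = [ROLE_GAZE_SHARE[partners[n]] for n in names]
        names.append("environment")
        weights.append(max(0.0, 1.0 - sum(weights)))
        return random.choices(names, weights=weights, k=1)[0]

    # Three-party conversation: one addressee, one bystander.
    print(next_gaze_target({"p1": "addressee", "p2": "bystander"}))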


 
2011
 

Mumm, Jonathan and Mutlu, Bilge (2011): Human-robot proxemics: physical and psychological distancing in human-robot interaction. In: Proceedings of the 6th International Conference on Human Robot Interaction 2011. pp. 331-338.

To seamlessly integrate into the human physical and social environment, robots must display appropriate proxemic behavior -- that is, follow societal norms in establishing their physical and psychological distancing with people. Social-scientific theories suggest competing models of human proxemic behavior, but all conclude that individuals' proxemic behavior is shaped by the proxemic behavior of others and the individual's psychological closeness to them. The present study explores whether these models can also explain how people physically and psychologically distance themselves from robots and suggests guidelines for future design of proxemic behaviors for robots. In a controlled laboratory experiment, participants interacted with Wakamaru to perform two tasks that examined physical and psychological distancing of the participants. We manipulated the likeability (likeable/dislikeable) and gaze behavior (mutual gaze/averted gaze) of the robot. Our results on physical distancing showed that participants who disliked the robot compensated for the increase in the robot's gaze by maintaining a greater physical distance from the robot, while participants who liked the robot did not differ in their distancing from the robot across gaze conditions. The results on psychological distancing suggest that those who disliked the robot also disclosed less to the robot. Our results offer guidelines for the design of appropriate proxemic behaviors for robots so as to facilitate effective human-robot interaction.

© All rights reserved Mumm and Mutlu and/or their publisher
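
The compensation effect reported above reduces to a simple conditional: distance grows only for the combination of a disliked robot and mutual gaze. The toy Python encoding below uses invented placeholder distances, not measurements from the study.

    def preferred_distance_m(likes_robot, mutual_gaze, baseline=0.8):
        # Notional comfortable human-robot distance in meters: people who
        # disliked the robot compensated for increased gaze by backing off,
        # while people who liked it kept the same distance across gaze
        # conditions.
        if not likes_robot and mutual_gaze:
            return baseline + 0.3
        return baseline

    for likes in (True, False):
        for gaze in (True, False):
            print(likes, gaze, preferred_distance_m(likes, gaze))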

 

Mutlu, Bilge, Bartneck, Christoph, Ham, Jaap, Evers, Vanessa and Kanda, Takayuki (2011): Social Robotics. Lecture Notes in Computer Science. Berlin, Germany: Springer.

2009
 

Mutlu, Bilge, Shiwa, Toshiyuki, Kanda, Takayuki, Ishiguro, Hiroshi and Hagita, Norihiro (2009): Footing in human-robot conversations: how robots might shape participant roles using gaze cues. In: Proceedings of the 4th ACM/IEEE International Conference on Human Robot Interaction 2009. pp. 61-68.

During conversations, speakers establish their and others' participant roles (who participates in the conversation and in what capacity) -- or "footing," as termed by Goffman -- using gaze cues. In this paper, we study how a robot can establish the participant roles of its conversational partners using these cues. We designed a set of gaze behaviors for Robovie to signal three kinds of participant roles: addressee, bystander, and overhearer. We evaluated our design in a controlled laboratory experiment with 72 subjects in 36 trials. In three conditions, the robot signaled to two subjects, only by means of gaze, the roles of (1) two addressees, (2) an addressee and a bystander, or (3) an addressee and an overhearer. Behavioral measures showed that subjects' participation behavior conformed to the roles that the robot communicated to them. In subjective evaluations, significant differences were observed in feelings of groupness between addressees and others and liking between overhearers and others. Participation in the conversation did not affect task performance -- measured by recall of information presented by the robot -- but affected subjects' ratings of how much they attended to the task.

© All rights reserved Mutlu et al. and/or ACM Press

 

Mutlu, Bilge, Yamaoka, Fumitaka, Kanda, Takayuki, Ishiguro, Hiroshi and Hagita, Norihiro (2009): Nonverbal leakage in robots: communication of intentions through seemingly unintentional behavior. In: Proceedings of the 4th ACM/IEEE International Conference on Human Robot Interaction 2009. pp. 69-76.

Human communication involves a number of nonverbal cues that are seemingly unintentional, unconscious, and automatic -- both in their production and perception -- and convey rich information on the emotional state and intentions of an individual. One family of such cues is called "nonverbal leakage." In this paper, we explore whether people can read nonverbal leakage cues -- particularly gaze cues -- in humanlike robots and make inferences on robots' intentions, and whether the physical design of the robot affects these inferences. We designed a gaze cue for Geminoid -- a highly humanlike android -- and Robovie -- a robot with stylized, abstract humanlike features -- that allowed the robots to "leak" information on what they might have in mind. In a controlled laboratory experiment, we asked participants to play a game of guessing with either of the robots and evaluated how the gaze cue affected participants' task performance. We found that the gaze cue did, in fact, lead to better performance, from which we infer that the cue led to attributions of mental states and intentionality. Our results have implications for robot design, particularly for designing expression of intentionality, and for our understanding of how people respond to human social cues when they are enacted by robots.

© All rights reserved Mutlu et al. and/or ACM Press

2008
 

Mutlu, Bilge and Forlizzi, Jodi (2008): Robots in Organizations: The Role of Workflow, Social, and Environmental Factors in Human-Robot Interaction. In: Proceedings of the Third International Conference on Human-Robot Interaction March 12-15, 2008, Amsterdam, The Netherlands.

Robots are becoming increasingly integrated into the workplace, impacting organizational structures and processes, and affecting products and services created by these organizations. While robots promise significant benefits to organizations, their introduction poses a variety of design challenges. In this paper, we use ethnographic data collected at a hospital using an autonomous delivery robot to examine how organizational factors affect the way its members respond to robots and the changes engendered by their use. Our analysis uncovered dramatic differences between the medical and post-partum units in how people integrated the robot into their workflow and their perceptions of and interactions with it. Different patient profiles in these units led to differences in workflow, goals, social dynamics, and the use of the physical environment. In medical units, low tolerance for interruptions, a discrepancy between the perceived costs and benefits of using the robot, and breakdowns due to high traffic and clutter in the robot's path caused the robot to have a negative impact on workflow and led to staff resistance. In contrast, post-partum units integrated the robot into their workflow and social context. Based on our findings, we provide design guidelines for the development of robots for organizations.

© All rights reserved Mutlu and Forlizzi and/or ACM Press, New York, NY

 

Mutlu, Bilge (2008): The design of gaze behavior for embodied social interfaces. In: Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems April 5-10, 2008. pp. 2661-2664.

Non-verbal behavior, particularly gaze, is a crucial part of human communication. To interact with humans in a rich, natural way, social interfaces need to use this communicative channel effectively. While the role and mechanics of human gaze are extensively studied, how gaze might be used effectively by embodied interfaces is not well explored. The goal of my dissertation is to gain a deeper understanding of how gaze behavior affects people's interactions with embodied social interfaces and how we can design gaze for effective communication. This research focuses on four main social functions of gaze (Regulation, Expression, Establishing Joint Attention, and Initiating/Avoiding Social Encounters) and four sets of design variables (Temporal, Spatial, Physiological, and Contextual). A systematic study of how these functions and design variables affect each other is conducted through a series of empirical studies.

© All rights reserved Mutlu and/or ACM Press

 

Mutlu, Bilge and Forlizzi, Jodi (2008): Robots in organizations: the role of workflow, social, and environmental factors in human-robot interaction. In: Proceedings of the 3rd ACM/IEEE International Conference on Human Robot Interaction 2008. pp. 287-294.

Robots are becoming increasingly integrated into the workplace, impacting organizational structures and processes, and affecting products and services created by these organizations. While robots promise significant benefits to organizations, their introduction poses a variety of design challenges. In this paper, we use ethnographic data collected at a hospital using an autonomous delivery robot to examine how organizational factors affect the way its members respond to robots and the changes engendered by their use. Our analysis uncovered dramatic differences between the medical and post-partum units in how people integrated the robot into their workflow and their perceptions of and interactions with it. Different patient profiles in these units led to differences in workflow, goals, social dynamics, and the use of the physical environment. In medical units, low tolerance for interruptions, a discrepancy between the perceived costs and benefits of using the robot, and breakdowns due to high traffic and clutter in the robot's path caused the robot to have a negative impact on workflow and led to staff resistance. In contrast, post-partum units integrated the robot into their workflow and social context. Based on our findings, we provide design guidelines for the development of robots for organizations.

© All rights reserved Mutlu and Forlizzi and/or ACM Press

2007
 

Mutlu, Bilge, Krause, Andreas, Forlizzi, Jodi, Guestrin, Carlos and Hodgins, Jessica (2007): Robust, Low-cost, Non-intrusive Sensing and Recognition of Seated Postures. In: Proceedings of the 20th ACM Symposium on User Interface Software and Technology October 7-10, 2007, Newport, RI, USA.

In this paper, we present a methodology for recognizing seated postures using data from pressure sensors installed on a chair. Information about seated postures could be used to help avoid adverse effects of sitting for long periods of time or to predict seated activities for a human-computer interface. Our system design displays accurate near-real-time classification performance on data from subjects on which the posture recognition system was trained by using a set of carefully designed, subject-invariant signal features. By using a near-optimal sensor placement strategy, we keep the number of required sensors low thereby reducing cost and computational complexity. We evaluated the performance of our technology using a series of empirical methods including (1) cross-validation (classification accuracy of 87% for ten postures using data from 31 sensors), and (2) a physical deployment of our system (78% classification accuracy using data from 19 sensors).

© All rights reserved Mutlu et al. and/or ACM
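
The pipeline sketched in the abstract -- pressure readings, subject-invariant features, posture classification evaluated by cross-validation -- can be mocked up in a few lines of Python with scikit-learn. The data below is synthetic, and the row normalization is only a crude stand-in for the paper's carefully designed features.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import Normalizer

    rng = np.random.default_rng(0)
    n_samples, n_sensors, n_postures = 300, 31, 10  # 31 sensors, 10 postures, as in the paper

    # Synthetic pressure maps: each posture gets a noisy prototype pattern.
    prototypes = rng.uniform(0, 1, size=(n_postures, n_sensors))
    labels = rng.integers(0, n_postures, size=n_samples)
    X = prototypes[labels] + rng.normal(0, 0.1, size=(n_samples, n_sensors))

    # Normalizing each sample removes overall body-weight scale, a rough
    # approximation of a subject-invariant feature.
    clf = make_pipeline(Normalizer(), LogisticRegression(max_iter=1000))
    print("mean CV accuracy: %.2f" % cross_val_score(clf, X, labels, cv=5).mean())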

2006
 

Mutlu, Bilge, Forlizzi, Jodi, Nourbakhsh, Illah and Hodgins, Jessica (2006): The use of abstraction and motion in the design of social interfaces. In: Proceedings of DIS06: Designing Interactive Systems: Processes, Practices, Methods, & Techniques 2006. pp. 251-260.

In this paper, we explore how dynamic visual cues can be used to create accessible and meaningful social interfaces without raising expectations beyond what is achievable with current technology. Our approach is inspired by research in perceptual causality, which suggests that simple displays in motion can evoke high-level social and emotional content. For our exploration, we iteratively designed and implemented a public social interface using abstraction and motion as design elements. Our interface communicated simple social and emotional content such as displaying happiness when there is high social interaction in the environment. Our qualitative evaluations showed that people frequently and repeatedly interacted with the interface while they tried to make sense of the underlying social content. They also shared their models with others, which led to more social interaction in the environment.

© All rights reserved Mutlu et al. and/or ACM Press

 

Mutlu, Bilge (2006): An empirical framework for designing social products. In: Proceedings of DIS06: Designing Interactive Systems: Processes, Practices, Methods, & Techniques 2006. pp. 341-342.

Designers generally agree that understanding the context of use is important in designing products. However, technologically advanced products such as personal robots engender complex contextual characteristics that are not yet well understood. The social context of use shapes the roles that the user and the product play in the interaction. For instance, an intelligent agent that acts as a coach for an exercise program and one that supervises a physical rehabilitation regimen for the physically challenged function in different social contexts. Only a few studies to date have considered the social context of use as part of the design. My research proposes a conceptual framework for understanding the critical social aspects of interaction with products, such as the social context of use. I combine interaction design and social science methodology to evaluate my framework through a series of empirical studies.

© All rights reserved Mutlu and/or ACM Press

 

Mutlu, Bilge, Osman, Steven, Forlizzi, Jodi, Hodgins, Jessica and Kiesler, Sara (2006): Task Structure and User Attributes as Elements of Human-Robot Interaction Design. In: Proceedings of the 15th IEEE International Symposium on Robot and Human Interactive Communication (Ro-Man06) September 2006, Hatfield, UK.

 

Mutlu, Bilge, Osman, Steven, Forlizzi, Jodi, Hodgins, Jessica and Kiesler, Sara (2006): Perceptions of ASIMO: An exploration on co-operation and competition with humans and humanoid robots. In: Extended Abstracts of the Human-Robot Interaction Conference (HRI06) March 2006, Salt Lake City, UT, USA.

 

Mutlu, Bilge, Hodgins, Jessica and Forlizzi, Jodi (2006): A Storytelling Robot: Modeling and Evaluation of Human-like Gaze Behavior. In: Proceedings of the 2006 IEEE-RAS International Conference on Humanoid Robots December 2006, Genova, Italy.

Engaging storytelling is a necessary skill for humanoid robots if they are to be used in education and entertainment applications. Storytelling requires that the humanoid robot be aware of its audience and able to direct its gaze in a natural way. In this paper, we explore how human gaze can be modeled and implemented on a humanoid robot to create a natural, human-like behavior for storytelling. Our gaze model integrates data collected from a human storyteller and a discourse structure model developed by Cassell and her colleagues for human-like conversational agents [1]. We used this model to direct the gaze of a humanoid robot, Honda’s ASIMO, as he recited a Japanese fairy tale using a pre-recorded human voice. We assessed the efficacy of this gaze algorithm by manipulating the frequency of ASIMO’s gaze between two participants and using pre- and post-questionnaires to assess whether participants evaluated the robot more positively and did better on a recall task when ASIMO looked at them more. We found that participants performed significantly better in recalling ASIMO's story when the robot looked at them more. Our results also showed significant differences in how men and women evaluated ASIMO based on the frequency of gaze they received from the robot. Our study adds to the growing evidence that there are many commonalities between human-human communication and human-robot communication.

© All rights reserved Mutlu et al. and/or IEEE
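
The experimental manipulation described above -- varying how often ASIMO looks at each of two listeners -- can be expressed as a simple scheduling rule. The Python sketch below is a stand-in for the paper's full model, which also incorporated human storyteller data and Cassell's discourse-structure model.

    import random

    def gaze_schedule(units, p_primary, seed=0):
        # At each discourse-unit boundary, direct gaze at the primary
        # listener with the manipulated frequency p_primary, otherwise
        # at the second listener.
        rng = random.Random(seed)
        return ["listener_a" if rng.random() < p_primary else "listener_b"
                for _ in range(units)]

    # 20 discourse units, looking at listener A 80% of the time.
    print(gaze_schedule(20, 0.8))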

 

Mutlu, Bilge, Osman, Steven, Forlizzi, Jodi, Hodgins, Jessica and Kiesler, Sara (2006): Perceptions of ASIMO: an exploration on co-operation and competition with humans and humanoid robots. In: Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction 2006. pp. 351-352.

Recent developments in humanoid robotics have made possible a vision of robots in everyday use in the home and workplace. However, little is known about how we should design social interactions with humanoid robots. We explored how co-operation versus competition in a game shaped people's perceptions of ASIMO. We found that in the co-operative interaction, people found the robot more sociable and more intellectual than in the competitive interaction while people felt more positive and were more involved in the task in the competitive condition than in the co-operative condition. Our poster presents these findings with the supporting theoretical background.

© All rights reserved Mutlu et al. and/or ACM Press

2005
 

Keyani, Pedram, Hsieh, Gary, Mutlu, Bilge, Easterday, Matthew and Forlizzi, Jodi (2005): DanceAlong: Supporting Positive Social Exchange and Exercise for the Elderly Through Dance. In: Extended Abstracts of the Conference on Human Factors in Computing Systems (CHI05) April 2005, Portland, OR, USA.

 

DiSalvo, Carl, Forlizzi, Jodi, Zimmerman, John, Mutlu, Bilge and Hurst, Amy (2005): The SenseChair: The lounge chair as an intelligent assistive device for elders. In: Proceedings of the Conference on Designing for User Experiences (DUX05) November 2005, San Francisco, CA, USA.

 

Forlizzi, Jodi, DiSalvo, Carl, Zimmerman, John, Mutlu, Bilge and Hurst, Amy (2005): The SenseChair: the lounge chair as an intelligent assistive device for elders. In: Proceedings of the Conference on Designing for User Experiences (DUX05) 2005. p. 31.

The elder population is rising. In the United States, the number of those needing assistance far exceeds the number of care facilities available to help the aging population. This creates a great incentive to help elders remain in their homes independently. Our group is exploring how robotic technology, designed in forms as familiar as home appliances, might be used to assist elders and those who provide care. We have designed the SenseChair, an intelligent assistive lounge chair that brings assistive technology to elders in a comfortable and familiar fashion. The SenseChair takes information about a sitter's behavior and the environment and provides information ranging from ambient displays to explicit notification. It serves as a research platform to understand how we can help elders live independently in their homes, and offer them physical, social, and emotional support.

© All rights reserved Forlizzi et al. and/or ACM Press

 
 



Page Information

Page maintainer: The Editorial Team
URL: http://www.interaction-design.org/references/authors/bilge_mutlu.html
