
Multimodal Affective Computing

by Hatice Gunes
 
 

Affective Computing is computing that relates to, arises from, or deliberately influences emotion or other affective phenomena (Picard 1997).

Editor's note: Please also see version 2 of the encyclopedia entry on Affective Computing

Research on automatic emotion recognition did not start until the 1990s. Although researchers such as Ekman had published studies on how people recognize emotions from facial displays as early as the 1960s (Ekman and Friesen 1968), at the time the idea of giving machines such abilities would have seemed absurd, because emotional mechanisms were not considered to play a significant role in human life. However, scientists later found that emotions persist even in the most rational of decisions: emotions always exist, we always feel something.

In the early 1990s, Salovey and Mayer published a series of papers on emotional intelligence (Salovey and Mayer 1990). They suggested that the capacity to perceive and understand emotions defines a new variable in personality. Goleman popularized this view of emotional intelligence, or Emotional Quotient (EQ), in his 1995 best-selling book by discussing why EQ mattered more than the Intelligence Quotient (IQ) (Goleman 1995). Goleman drew together research in neurophysiology, psychology and cognitive science. Other scientists also provided evidence that emotions are tightly coupled with all the functions humans engage in: attention, perception, learning, reasoning, decision making, planning, action selection, and memory storage and retrieval (Isen 2000; Picard 2003).

This new scientific understanding of emotions inspired various researchers to build machines with the ability to recognize, express, model, communicate, and respond to emotions. The initial focus was on the recognition of prototypical emotions from posed visual input, namely facial expressions. Work in the early 1990s attempted to recognize prototypical emotions from two static face images: one neutral and one expressive. In the second half of the 1990s, automated facial expression analysis moved on to posed video sequences, exploiting the temporal information in the displayed facial expressions. In parallel to automatic emotion recognition from visual input, work focusing on audio input emerged. Rosalind Picard's award-winning book, Affective Computing, was published in 1997, laying the groundwork for giving machines the skills of emotional intelligence. The book triggered an explosion of interest in the emotional side of computers and their users, and a new research area called affective computing emerged. Affective computing advocated the idea that it might not be essential for machines to possess all the emotional intelligence and skills that humans do; however, for natural and effective human-computer interaction, computers still needed to appear intelligent to some extent (Picard 1997). Experiments conducted by Reeves and Nass showed that for an intelligent interaction, the basic rules of human-human interaction should hold (Reeves and Nass 1996).

One major limitation of affective computing has been that most of the past research focused on emotion recognition from a single sensory source, or modality. However, since natural human-human interaction (HHI) is multimodal, single sensory observations are often ambiguous, uncertain, and incomplete. It was not until 1998 that computer scientists attempted to use multiple modalities for the recognition of emotions and affective states. The combined use of multiple modalities for sensing affective states triggered another research area in itself: which channels to use, and how to combine them? The initial interest was in fusing visual and audio data. The results were promising: using multiple modalities improved the overall recognition accuracy, helping the systems function more efficiently and reliably. Starting with the work of Picard and colleagues in 2001, interest in detecting emotions from physiological signals emerged (Picard et al. 2001). Moreover, researchers moved their focus from posed to spontaneous visual data (Braathen et al. 2002). Although a fundamental study by Ambady and Rosenthal suggested that the most significant channels for judging behavioural cues of humans appear to be the visual channels of facial expressions and body gestures (Ambady and Rosenthal 1992), the literature on automatic emotion recognition did not focus on the expressive information that body gestures carry until 2003 (Hudlicka 2003). Following these new findings in psychology, some researchers advocate that a reliable automatic affect recognition system should attempt to combine facial expressions and body gestures, and a number of approaches have been proposed for such sensory sources (Gunes and Piccardi 2007; Kapoor et al. 2007; Karpouzis et al. 2007; Lisetti and Nasoz 2002; Martin et al. 2006). With all these new areas, a number of new challenges have arisen.
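To make the fusion step concrete, below is a minimal, hypothetical sketch of feature-level (early) fusion for a bimodal face-and-body recogniser, written in Python with NumPy and scikit-learn. The feature arrays, dimensionalities and labels are synthetic stand-ins, not the features of any system cited above:

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, n_classes = 300, 6                    # e.g. six prototypical emotions
face = rng.normal(size=(n, 40))          # stand-in facial-expression features
body = rng.normal(size=(n, 20))          # stand-in body-gesture features
y = rng.integers(0, n_classes, size=n)   # stand-in emotion labels

idx_train, idx_test = train_test_split(np.arange(n), random_state=0)

# Feature-level (early) fusion: concatenate the per-modality feature
# vectors and train a single classifier on the joint representation.
fused = np.hstack([face, body])
fused_clf = SVC().fit(fused[idx_train], y[idx_train])
print("fused accuracy:", fused_clf.score(fused[idx_test], y[idx_test]))

With real features, the fused representation lets the classifier exploit correlations between channels; the price is that all modalities must be available and synchronised at feature-extraction time.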

Overall, the interest in affective computing has grown significantly in the last three years. In Europe, the Human-Machine Interaction Network on Emotion (HUMAINE) was created as a Network of Excellence in the EU's Sixth Framework Programme, under the Information Society Technologies (IST) programme (The HUMAINE Association 2008). The HUMAINE Network started on 1 January 2004 and was funded to run for four years. In parallel, the First International Conference on Affective Computing and Intelligent Interaction was organized in 2005, bringing together researchers from diverse fields of research (ACII 2005).

Currently, research groups broadly agree that multiple modalities should be explored in order to understand which channels provide better information for automatic affect/emotion recognition. When a monomodal affect recognition system is compared to a multimodal one, some of the assumptions made when building monomodal affect recognisers still hold (e.g., affect data collection is still needed), but multimodal affect recognition also poses problems of its own (e.g., multiple sensors are now required). Therefore, some new assumptions need to be taken into consideration.
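Continuing the synthetic sketch above, one simple way to probe which channel provides better information is to train a monomodal recogniser per channel and compare the results against decision-level (late) fusion of their class posteriors. Again, this is an illustration under the same synthetic-data assumptions, not a specific published method:

# Monomodal baselines: one classifier per channel.
face_clf = SVC(probability=True).fit(face[idx_train], y[idx_train])
body_clf = SVC(probability=True).fit(body[idx_train], y[idx_train])
print("face-only accuracy:", face_clf.score(face[idx_test], y[idx_test]))
print("body-only accuracy:", body_clf.score(body[idx_test], y[idx_test]))

# Decision-level (late) fusion: average the per-modality posterior
# probabilities and pick the most probable class.
probs = (face_clf.predict_proba(face[idx_test]) +
         body_clf.predict_proba(body[idx_test])) / 2
late_pred = face_clf.classes_[probs.argmax(axis=1)]
print("late fusion accuracy:", (late_pred == y[idx_test]).mean())

Late fusion tolerates a missing or unsynchronised sensor more gracefully than early fusion, which is one reason both strategies persist in the multimodal literature.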

The stage affective computing has reached today is combining multiple channels for affect recognition and moving from posed towards spontaneous data. Achieving these aims is an open challenge. At this stage, scientists expect emotion recognition to be solvable by machine in the near future, at least as well as people can label such patterns (Picard 2003). A significant issue to note is that the focus of the affective computing research field is gradually moving from just developing more efficient and effective automated techniques to concentrating on context-, culture- and user-related aspects. To achieve this transition smoothly, it should be understood that machine learning for human-computer applications is distinctly different from the conventional machine learning field. Issues such as large volumes of data, spatial coherence, and a large variety of appearances make affective behaviour analysis, in particular, a special challenge for machine learning algorithms.

Today, the term affective computing has many aims in common with the recently emerging research field called human computing. Human computing is an interdisciplinary research field focusing on computing and computational artefacts as they relate to the human condition. As defined in (Pantic et al. 2007), human computing focuses on the human portion of the HCI context, going beyond the traditional keyboard and mouse to include natural, human-like interactive functions, including understanding and emulating behavioural and social signalling. The human computing research field is interested in devising automated analysis algorithms that extract, efficiently describe, and organise information regarding the state or state transitions of individuals (identity, emotional state, activity, position and pose, etc.), interactions between individuals (dialogue, gestures, engagement in collaborative or competitive activities like sports), and physical characteristics of humans (anthropometric characteristics, 3D head/body models) (Pantic et al. 2007).

Starting from the survey by Pantic, Pentland, Nijholt and Huang (Pantic et al. 2007), special sessions have already been organised and special journal issues have been proposed in this field.

Although research fields such as affective computing, human computing and multimodal interfaces may seem detached, each with its own research communities, conferences and audiences, some researchers predict (e.g., Pantic et al. 2007) that future progress is likely to bring these fields together and merge them into a single, widespread research area within the computer science, artificial intelligence and CHI research communities. The future direction in these fields is to advance by making computers, machines, devices and environments more human-like, rather than forcing humans to act machine-like. Further progress is needed to achieve this common goal.

Where to learn more?

An informative starting point is the HUMAINE Association's portal (The HUMAINE Association 2008).

 
 



References


ACII (2005): First International Conference on Affective Computing and Intelligent Interaction, October 22-24, 2005, Beijing, China.

Ambady, Nalini and Rosenthal, Robert (1992): Thin slices of expressive behavior as predictors of interpersonal consequences: A meta-analysis. In Psychological Bulletin, 111 (2) pp. 256-274.

Braathen, Bjorn, Littlewort, Gwen and Smith, Evan (2002): An approach to automatic recognition of spontaneous facial actions. In: Proceedings of the International Conference on Automatic Face and Gesture Recognition 2002. pp. 231-235.

Ekman, Paul and Friesen, Wallace V. (1968): Nonverbal behavior in psychotherapy research. In Research in Psychotherapy, pp. 179-216.

Goleman, Daniel (1995): Emotional intelligence: why it can matter more than IQ. New York, USA, Bantam Books

Gunes, Hatice and Piccardi, Massimo (2007): Bimodal emotion recognition from expressive face and body gestures. In Journal of Network and Computer Applications, 30 (4) pp. 1334-1345.

Hudlicka, Eva (2003): To feel or not to feel: the role of affect in human-computer interaction. In International Journal of Human-Computer Studies, 59 (1) pp. 1-32.

Isen, Alice M. (2000): Positive affect and decision making. In: Lewis, Michael and Haviland-Jones, Jeannette M. (eds.). "Handbook of Emotions". New York, USA: Guilford

Kapoor, Ashish, Burleson, Winslow and Picard, Rosalind W. (2007): Automatic prediction of frustration. In International Journal of Human-Computer Studies, 65 (8) pp. 724-736. Available online

Karpouzis, Kostas, Caridakis, George, Kessous, Loïc, Amir, Noam, Raouzaiou, Amaryllis, Malatesta, Lori and Kollias, Stefanos D. (2007): Modeling naturalistic affective states via facial, vocal, and bodily expressions recognition. In Lecture Notes in Artificial Intelligence, 4451 pp. 91-112.

Lisetti, Christine L. and Nasoz, Fatma (2002): Maui: A multimodal affective user interface. In: Proceedings of the ACM International Conference on Multimedia 2002. pp. 161-170.

Martin, Jean-Claude, Niewiadomski, Radoslaw, Devillers, Laurence, Buisine, Stephanie and Pelachaud, Catherine (2006): Multimodal complex emotions: Gesture expressivity and blended facial expressions. In International Journal of Humanoid Robotics, 3 pp. 831-843.

Pantic, Maja, Pentland, Alex, Nijholt, Anton and Huang, Thomas S. (2007): Human Computing and Machine Understanding of Human Behavior: A Survey. In Artificial Intelligence for Human Computing, Lecture Notes in Artificial Intelligence, 4451 pp. 47-71.

Picard, Rosalind W. (1997): Affective Computing. Cambridge, MA, USA: The MIT Press

Picard, Rosalind W. (2003): Affective computing: challenges. In International Journal of Human-Computer Studies, 59 (1) pp. 55-64.

Picard, Rosalind W., Vyzas, Elias and Healey, Jennifer (2001): Toward machine emotional intelligence: analysis of affective physiological state. In IEEE Transactions on Pattern Analysis and Machine Intelligence, 23 (10) pp. 1175-1191.

Reeves, Byron and Nass, Clifford (1996): The media equation: How people treat computers, television and new media like real people and places. Cambridge University Press

Salovey, Peter and Mayer, John D. (1990): Emotional intelligence. In Imagination, Cognition, and Personality, 9 pp. 185-211.

The HUMAINE Association (2008). Humaine. Retrieved 4 February 2008 from The HUMAINE Association: http://emotion-research.net