Publication statistics

Pub. period: 2004-2014
Pub. count: 25
Number of co-authors: 21



Co-authors

Number of publications with her 3 most frequent co-authors:

Massimo Piccardi: 17
Maja Pantic: 7
Ahmed Fawzi Otoom: 2

 

 

Productive colleagues

Hatice Gunes's 3 most productive colleagues by number of publications:

Maja Pantic: 25
Massimo Piccardi: 17
Stavros Petridis: 4
 
 
 




Hatice Gunes

Ph.D.

Personal Homepage:
http://www.eecs.qmul.ac.uk/~hatice/

Current place of employment:
Queen Mary, University of London

Dr Hatice Gunes is a Lecturer (Assistant Professor) at the School of Electronic Engineering and Computer Science, Queen Mary University of London (QMUL), UK. She received her Ph.D. in Computer Science from the University of Technology Sydney (UTS), Australia, in September 2007 for her multi-cue and multi-sensory approach to automatic recognition of emotions from face-and-body expressions and their timing and synchronicity. Prior to joining QMUL, she was a postdoctoral researcher at Imperial College London, UK, working on SEMAINE, an award-winning EU-FP7 project that aimed to build a multimodal dialogue system able to interact with humans via a virtual character and react appropriately to the user's non-verbal behaviour, and on MAHNOB, which aimed at multimodal analysis of naturalistic human non-verbal behaviour.

Her research interests lie in affective computing, visual information processing, and machine learning, with a particular focus on emotional data acquisition and annotation, automatic affective behaviour analysis and continuous prediction, and multi-cue and multimodal emotion recognition. Dr Gunes has published more than 50 technical papers in these areas. She has served as a Guest Editor of special issues of Image and Vision Computing and the International Journal of Synthetic Emotions, as a member of the Editorial Advisory Board for the book Affective Computing and Interaction (IGI Global, 2011), as an invited speaker at the International Workshop on Social Signal Processing (WSSP 2011) and the Summer School on Affective Computing and Social Signal Processing (ACSSP 2010), and as a reviewer for numerous journals and conferences in these fields.

From 2004 to 2007 she held an Australian Government International Postgraduate Research Scholarship (IPRS), awarded to top-quality international postgraduate students. Together with her co-authors, she has also received awards for Outstanding Paper (IEEE FG 2011), Quality Reviewer (IEEE ICME 2011), Best Demo (IEEE ACII 2009), and Best Student Paper (VisHCI 2006). She is a member of the IEEE, the ACM, and the HUMAINE Association.

 

Publications by Hatice Gunes (bibliography)

2014

Multimodal Affective Computing (2014)

2010

Gunes, Hatice and Pantic, Maja (2010): Automatic, Dimensional and Continuous Emotion Recognition. In International Journal of Synthetic Emotions, 1 (1) pp. 68-99. Available online

Recognition and analysis of human emotions have attracted a lot of interest in the past two decades and have been researched extensively in neuroscience, psychology, cognitive sciences, and computer sciences. Most of the past research in machine analysis of human emotion has focused on recognition of prototypic expressions of six basic emotions based on data that has been posed on demand and acquired in laboratory settings. More recently, there has been a shift toward recognition of affective displays recorded in naturalistic settings as driven by real world applications. This shift in affective computing research is aimed toward subtle, continuous, and context-specific interpretations of affective displays recorded in real-world settings and toward combining multiple modalities for analysis and recognition of human emotion. Accordingly, this article explores recent advances in dimensional and continuous affect modeling, sensing, and automatic recognition from visual, audio, tactile, and brain-wave modalities.

© All rights reserved Gunes and Pantic and/or their publisher
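
To make the surveyed setting concrete, the sketch below shows frame-level prediction of continuous valence and arousal values with a generic regressor. The feature layout, data, and the scikit-learn models are illustrative assumptions, not the pipeline described in the article.

```python
# Minimal sketch: frame-level prediction of continuous valence/arousal
# from multimodal features. Feature layout and data are illustrative
# assumptions, not the pipeline described in the article.
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Hypothetical per-frame features: e.g. facial geometry + audio prosody,
# concatenated into one vector per frame.
n_frames, n_features = 2000, 40
X = rng.normal(size=(n_frames, n_features))

# Continuous annotations in [-1, 1]: column 0 = valence, column 1 = arousal.
Y = np.tanh(X[:, :2] + 0.1 * rng.normal(size=(n_frames, 2)))

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.25, random_state=0)

# One support-vector regressor per affect dimension.
model = MultiOutputRegressor(SVR(kernel="rbf", C=1.0, epsilon=0.05))
model.fit(X_tr, Y_tr)

Y_hat = model.predict(X_te)
print("MAE valence: %.3f  arousal: %.3f"
      % (mean_absolute_error(Y_te[:, 0], Y_hat[:, 0]),
         mean_absolute_error(Y_te[:, 1], Y_hat[:, 1])))
```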

2009

Petridis, Stavros, Gunes, Hatice, Kaltwang, Sebastian and Pantic, Maja (2009): Static vs. Dynamic Modeling of Human Nonverbal Behavior from Multiple Cues and Modalities. In: Proceedings of the 11th ACM International Conference on Multimodal Interfaces 2-6 November, 2009, Cambridge, MA, USA. pp. 23-30. Available online

Human nonverbal behavior recognition from multiple cues and modalities has attracted a lot of interest in recent years. Despite the interest, many research questions, including the type of feature representation, choice of static vs. dynamic classification schemes, the number and type of cues or modalities to use, and the optimal way of fusing these, remain open research questions. This paper compares frame-based vs window-based feature representation and employs static vs. dynamic classification schemes for two distinct problems in the field of automatic human nonverbal behavior analysis: multicue discrimination between posed and spontaneous smiles from facial expressions, head and shoulder movements, and audio-visual discrimination between laughter and speech. Single cue and single modality results are compared to multicue and multimodal results by employing Neural Networks, Hidden Markov Models (HMMs), and 2- and 3-chain coupled HMMs. Subject independent experimental evaluation shows that: 1) both for static and dynamic classification, fusing data coming from multiple cues and modalities proves useful to the overall task of recognition, 2) the type of feature representation appears to have a direct impact on the classification performance, and 3) static classification is comparable to dynamic classification both for multicue discrimination between posed and spontaneous smiles, and audio-visual discrimination between laughter and speech.

© All rights reserved Petridis et al. and/or their publisher
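
The static-versus-dynamic contrast can be illustrated roughly as follows: a feed-forward network trained on window-level statistics against one Gaussian HMM per class scored on the raw frame sequences. The synthetic episodes and the hmmlearn/scikit-learn models are assumptions for illustration, not the paper's exact setup.

```python
# Minimal sketch of static vs. dynamic classification of behavioural
# episodes (e.g. laughter vs. speech). Synthetic data and the use of
# hmmlearn are illustrative assumptions, not the paper's exact setup.
import numpy as np
from sklearn.neural_network import MLPClassifier
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(1)
n_feat = 6

def make_episode(label, length=60):
    """One episode = a (length, n_feat) sequence of frame features."""
    drift = 0.8 if label == 1 else -0.8
    return np.cumsum(rng.normal(drift * 0.05, 0.3, size=(length, n_feat)), axis=0)

episodes = [(make_episode(y), y) for y in rng.integers(0, 2, size=200)]
train, test = episodes[:150], episodes[150:]

# Static route: summarise each window by simple statistics, then an MLP.
def window_stats(seq):
    return np.concatenate([seq.mean(0), seq.std(0), seq[-1] - seq[0]])

X_tr = np.array([window_stats(s) for s, _ in train])
y_tr = np.array([y for _, y in train])
X_te = np.array([window_stats(s) for s, _ in test])
y_te = np.array([y for _, y in test])
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X_tr, y_tr)
print("static (MLP) accuracy:", (mlp.predict(X_te) == y_te).mean())

# Dynamic route: one HMM per class on frame sequences; classify by
# which class model gives the higher log-likelihood.
hmms = {}
for c in (0, 1):
    seqs = [s for s, y in train if y == c]
    hmms[c] = GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
    hmms[c].fit(np.vstack(seqs), lengths=[len(s) for s in seqs])

pred = [max((0, 1), key=lambda c: hmms[c].score(s)) for s, _ in test]
print("dynamic (HMM) accuracy:", (np.array(pred) == y_te).mean())
```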

 

Schroder, M., Bevacqua, E., Eyben, F., Gunes, Hatice, Heylen, D., Maat, M., Pammi, S., Pantic, M., Pelachaud, C., Schuller, B., Sevin, E., Valstar, M. and Wollmer, M. (2009): A Demonstration of Audiovisual Sensitive Artificial Listeners. In: Proceedings of the Third International Conference on Affective Computing and Intelligent Interaction 10-12 September, 2009, Amsterdam, The Netherlands. pp. 263-264. Available online

Sensitive Artificial Listeners (SAL) are virtual dialogue partners who, despite their very limited verbal understanding, intend to engage the user in a conversation by paying attention to the user's emotions and non-verbal expressions. The SAL characters have their own emotionally defined personality, and attempt to drag the user towards their dominant emotion, through a combination of verbal and non-verbal expression. The demonstrator shows an early version of the fully autonomous SAL system based on audiovisual analysis and synthesis.

© All rights reserved Schroder et al. and/or IEEE

 

Gunes, Hatice and Piccardi, Massimo (2009): Automatic Temporal Segment Detection and Affect Recognition from Face and Body Display. In IEEE Transactions on Systems, Man, and Cybernetics – Part B, 39 (1) pp. 64-84. Available online

Psychologists have long explored mechanisms with which humans recognize other humans' affective states from modalities, such as voice and face display. This exploration has led to the identification of the main mechanisms, including the important role played in the recognition process by the modalities' dynamics. Constrained by the human physiology, the temporal evolution of a modality appears to be well approximated by a sequence of temporal segments called onset, apex, and offset. Stemming from these findings, computer scientists, over the past 15 years, have proposed various methodologies to automate the recognition process. We note, however, two main limitations to date. The first is that much of the past research has focused on affect recognition from single modalities. The second is that even the few multimodal systems have not paid sufficient attention to the modalities' dynamics: The automatic determination of their temporal segments, their synchronization to the purpose of modality fusion, and their role in affect recognition are yet to be adequately explored. To address this issue, this paper focuses on affective face and body display, proposes a method to automatically detect their temporal segments or phases, explores whether the detection of the temporal phases can effectively support recognition of affective states, and recognizes affective states based on phase synchronization/alignment. The experimental results obtained show the following: 1) affective face and body displays are simultaneous but not strictly synchronous; 2) explicit detection of the temporal phases can improve the accuracy of affect recognition; 3) recognition from fused face and body modalities performs better than that from the face or the body modality alone; and 4) synchronized feature-level fusion achieves better performance than decision-level fusion.

© All rights reserved Gunes and Piccardi and/or IEEE
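
One simple way to approximate the onset/apex/offset segmentation described above is to threshold a motion-intensity signal and its slope, as in the sketch below; the signal and thresholds are illustrative assumptions, not the detector proposed in the paper.

```python
# Minimal sketch: label frames of a motion-intensity signal as
# neutral / onset / apex / offset using its level and slope.
# The synthetic signal and thresholds are illustrative assumptions.
import numpy as np

def temporal_phases(intensity, level_thr=0.2, slope_thr=0.01):
    """Return one of {'neutral','onset','apex','offset'} per frame."""
    slope = np.gradient(intensity)
    phases = []
    for v, s in zip(intensity, slope):
        if v < level_thr:
            phases.append("neutral")
        elif s > slope_thr:
            phases.append("onset")
        elif s < -slope_thr:
            phases.append("offset")
        else:
            phases.append("apex")
    return phases

# Toy episode: rise, hold, decay of expression intensity.
t = np.linspace(0, 1, 100)
intensity = np.clip(np.sin(np.pi * t) ** 2, 0, 1)
labels = temporal_phases(intensity)

# Print the phase boundaries (frame indices where the label changes).
changes = [i for i in range(1, len(labels)) if labels[i] != labels[i - 1]]
print([(i, labels[i]) for i in changes])
```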

 

2008

Gunes, Hatice and Piccardi, Massimo (2008): From Mono-modal to Multi-modal: Affect Recognition Using Visual Modalities. In: Monekosso, Dorothy, Remagnino, Paolo and Kuno, Yoshinori (eds.). "Intelligent Environments: Methods, Algorithms and Applications". Springer-Verlag

 

Otoom, Ahmed Fawzi, Gunes, Hatice and Piccardi, Massimo (2008): Feature Extraction Techniques for Abandoned Object Classification in Video Surveillance. In: Proceedings of the 15th IEEE International Conference on Image Processing 12-15 October, 2008, San Diego, California, USA. pp. 1368-1371. Available online

We address the problem of abandoned object classification in video surveillance. Our aim is to determine (i) which feature extraction technique proves more useful for accurate object classification in a video surveillance context (scale invariant image transform (SIFT) keypoints vs. geometric primitive features), and (ii) how the resulting features affect classification accuracy and false positive rates for different classification schemes used. Objects are classified into four different categories: bag (s), person (s), trolley (s), and group (s) of people. Our experimental results show that the highest recognition accuracy and the lowest false alarm rate are achieved by building a classifier based on our proposed set of statistics of geometric primitives' features. Moreover, classification performance based on this set of features proves to be more invariant across different learning algorithms.

© All rights reserved Otoom et al. and/or IEEE Press
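
As a rough illustration of classifying blobs from a few geometric statistics, the sketch below summarises synthetic silhouettes with area, aspect ratio, and extent, and compares two off-the-shelf classifiers; the feature list and data are assumptions, not the statistics-of-geometric-primitives set proposed in the paper.

```python
# Minimal sketch: describe a segmented foreground blob with simple
# geometric statistics and compare two off-the-shelf classifiers.
# The feature list and synthetic blobs are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

def blob_features(mask):
    """Area, aspect ratio, and extent (fill ratio) of a binary mask."""
    ys, xs = np.nonzero(mask)
    h, w = np.ptp(ys) + 1, np.ptp(xs) + 1
    area = mask.sum()
    return np.array([area, w / h, area / (w * h)])

def toy_blob(kind):
    """Synthetic blobs standing in for 'bag' (0) vs 'person' (1) silhouettes."""
    mask = np.zeros((60, 60), dtype=int)
    if kind == 0:   # squat, box-like region
        mask[25:45, 15:45] = 1
    else:           # tall, narrow region
        mask[5:55, 25:35] = 1
    # Jitter the silhouette a little so the classes are not trivially separable.
    noise = rng.random(mask.shape) < 0.05
    return np.clip(mask + noise, 0, 1)

y = rng.integers(0, 2, size=120)
X = np.array([blob_features(toy_blob(k)) for k in y])

for name, clf in [("SVM", SVC(kernel="rbf", gamma="scale")),
                  ("AdaBoost", AdaBoostClassifier(n_estimators=50))]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```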

 

Piccardi, Massimo, Gunes, Hatice and Otoom, Ahmed Fawzi (2008): Maximum Likelihood Dimensionality Reduction in Gaussian Mixture Models with an Application to Object Classification. In: Proceedings of the 19th IAPR International Conference on Pattern Recognition December, 2008, Tampa, Florida, USA. pp. 1-4. Available online

Accurate classification of objects of interest for video surveillance is difficult due to occlusions, deformations and variable views/illumination. The adopted feature sets tend to overcome these issues by including many and complementary features; however, their large dimensionality poses an intrinsic challenge to the classification task. In this paper, we present a novel technique providing maximum-likelihood dimensionality reduction in Gaussian mixture models for classification. The technique, called hereafter mixture of maximum-likelihood normalized projections (mixture of ML-NP), was used in this work to classify a 44-dimensional data set into 4 classes (bag, trolley, single person, group of people). The accuracy achieved on an independent test set is 98% vs. 80% of the runner-up (MultiBoost/AdaBoost).

© All rights reserved Piccardi et al. and/or IEEE Press
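
The mixture of ML-NP technique itself is not reproduced here; as a rough stand-in, the sketch below reduces 44-dimensional synthetic data with PCA and fits one Gaussian mixture per class, classifying by maximum likelihood, to illustrate the surrounding pipeline only.

```python
# Rough stand-in sketch: PCA for dimensionality reduction followed by one
# Gaussian mixture per class, classifying a sample by the class whose
# mixture assigns it the highest likelihood. This is NOT the paper's
# ML-NP technique; it only illustrates the surrounding pipeline.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# 44-dimensional synthetic data with 4 classes, echoing the setting in the
# paper (bag, trolley, single person, group of people) purely for shape.
X, y = make_classification(n_samples=800, n_features=44, n_informative=10,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

pca = PCA(n_components=8).fit(X_tr)
Z_tr, Z_te = pca.transform(X_tr), pca.transform(X_te)

gmms = {}
for c in np.unique(y_tr):
    gmms[c] = GaussianMixture(n_components=2, covariance_type="full",
                              random_state=0).fit(Z_tr[y_tr == c])

# score_samples returns per-sample log-likelihoods under each class model.
loglik = np.column_stack([gmms[c].score_samples(Z_te) for c in sorted(gmms)])
y_hat = np.array(sorted(gmms))[loglik.argmax(axis=1)]
print("accuracy:", (y_hat == y_te).mean())
```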

 

Zhang, Zui, Gunes, Hatice and Piccardi, Massimo (2008): Head Detection for Video Surveillance Based on Categorical Hair and Skin Colour Models. In: Proceedings of IEEE International Conference on Image Processing 7-11 November, 2008, Cairo, Egypt. pp. 1137-1140. Available online

We propose a new robust head detection algorithm that is capable of handling significantly different conditions in terms of viewpoint, tilt angle, scale and resolution. To this aim, we built a new model for the head based on appearance distributions and shape constraints. We construct a categorical model for hair and skin, separately, and train the models for four categories of hair (brown, red, blond and black) and three categories of skin representing the different illumination conditions (bright, standard and dark). The shape constraint fits an elliptical model to the candidate region and compares its parameters with priors based on human anatomy. The experimental results validate the usability of the proposed algorithm in various video surveillance and multimedia applications.

© All rights reserved Zhang et al. and/or IEEE Press

 

Gunes, Hatice, Piccardi, Massimo and Pantic, Maja (2008): From the Lab to the Real World: Affect Recognition Using Multiple Cues and Modalities. In: Or, Jimmy (ed.). "Affective Computing, Focus on Emotion Expression, Synthesis and Recognition". Vienna, Austria: I-Tech Education and Publishing. pp. 185-218

http://intechweb.org/downloadpdf.php?id=5184

© All rights reserved Gunes et al. and/or I-Tech Education and Publishing

2007

Gunes, Hatice and Piccardi, Massimo (2007): Bimodal emotion recognition from expressive face and body gestures. In Journal of Network and Computer Applications, 30 (4) pp. 1334-1345.

 Cited in the following chapter:

Multimodal Affective Computing: [/encyclopedia/multimodal_affective_computing.html]


 

Valstar, Michel F., Gunes, Hatice and Pantic, Maja (2007): How to Distinguish Posed from Spontaneous Smiles using Geometric Features. In: Proceedings of the Ninth International Conference on Multimodal Interfaces 12-15 November, 2007, Nagoya, Japan. pp. 38-45. Available online

Automatic distinction between posed and spontaneous expressions is an unsolved problem. Previously cognitive sciences' studies indicated that the automatic separation of posed from spontaneous expressions is possible using the face modality alone. However, little is known about the information contained in head and shoulder motion. In this work, we propose to (i) distinguish between posed and spontaneous smiles by fusing the head, face, and shoulder modalities, (ii) investigate which modalities carry important information and how the information of the modalities relate to each other, and (iii) to which extent the temporal dynamics of these signals attribute to solving the problem. We use a cylindrical head tracker to track the head movements and two particle filtering techniques to track the facial and shoulder movements. Classification is performed by kernel methods combined with ensemble learning techniques. We investigated two aspects of multimodal fusion: the level of abstraction (i.e., early, mid-level, and late fusion) and the fusion rule used (i.e., sum, product and weight criteria). Experimental results from 100 videos displaying posed smiles and 102 videos displaying spontaneous smiles are presented. Best results were obtained with late fusion of all modalities when 94.0% of the videos were classified correctly.

© All rights reserved Valstar et al. and/or Association for Computing Machinery
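
The sum, product, and weighted fusion rules mentioned in the abstract can be sketched as follows over per-modality class-probability estimates; the simulated head, face, and shoulder features are illustrative assumptions, not the paper's trackers.

```python
# Minimal sketch: decision-level (late) fusion of per-modality classifiers
# with sum, product and weighted rules. The simulated "head", "face" and
# "shoulder" features are illustrative assumptions, not the paper's trackers.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=30, n_informative=12,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Pretend each modality sees a different slice of the feature vector.
slices = {"head": slice(0, 10), "face": slice(10, 20), "shoulders": slice(20, 30)}
probs = {}
for name, sl in slices.items():
    clf = LogisticRegression(max_iter=1000).fit(X_tr[:, sl], y_tr)
    probs[name] = clf.predict_proba(X_te[:, sl])

P = np.stack(list(probs.values()))                  # (n_modalities, n_samples, n_classes)
weights = np.array([0.2, 0.5, 0.3])[:, None, None]  # assumed modality weights

rules = {
    "sum":      P.sum(axis=0),
    "product":  P.prod(axis=0),
    "weighted": (weights * P).sum(axis=0),
}
for rule, scores in rules.items():
    acc = (scores.argmax(axis=1) == y_te).mean()
    print(f"{rule:8s} fusion accuracy: {acc:.2f}")
```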

 

2006

Gunes, Hatice and Piccardi, Massimo (2006): Assessing facial beauty through proportion analysis by image processing and supervised learning. In International Journal of Human-Computer Studies, 64 (12) pp. 1184-1199. Available online

Perception of universal facial beauty has long been debated amongst psychologists and anthropologists. In this paper, we perform experiments to evaluate the extent of universal beauty by surveying a number of diverse human referees to grade a collection of female facial images. Results obtained show that there exists a strong central tendency in the human grades, thus exhibiting agreement on beauty assessment. We then trained an automated classifier using the average human grades as the ground truth and used it to classify an independent test set of facial images. The high accuracy achieved proves that this classifier can be used as a general, automated tool for objective classification of female facial beauty. Potential applications exist in the entertainment industry, cosmetic industry, virtual media, and plastic surgery.

© All rights reserved Gunes and Piccardi and/or Academic Press
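
A minimal sketch of the overall procedure, under assumed data: several observers' grades are averaged into a ground-truth label and a classifier is trained on facial-proportion ratios. The ratios, grade scale, and classifier are illustrative, not the study's own.

```python
# Minimal sketch: average several observers' grades into a ground-truth
# label and train a classifier on facial-proportion ratios. The ratios,
# grade scale and data are illustrative assumptions, not the study's own.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_faces, n_observers = 200, 10

# Hypothetical proportion features, e.g. eye-to-eye distance / face width,
# nose length / face height, mouth width / jaw width.
ratios = rng.normal(loc=[0.46, 0.33, 0.50], scale=0.05, size=(n_faces, 3))

# Each observer grades each face 1-10; a latent "ideal proportion" term
# makes the grades correlate across observers (the central tendency the
# paper reports).
ideal = np.array([0.46, 0.33, 0.50])
latent = 10 - 40 * np.abs(ratios - ideal).sum(axis=1)
grades = np.clip(latent[:, None] + rng.normal(0, 1, (n_faces, n_observers)), 1, 10)

# Ground truth = average grade, binned into low / medium / high.
avg = grades.mean(axis=1)
labels = np.digitize(avg, bins=[4.0, 7.0])   # 0: low, 1: medium, 2: high

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("cross-validated accuracy:",
      cross_val_score(clf, ratios, labels, cv=5).mean())
```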

 

Gunes, Hatice and Piccardi, Massimo (2006): Creating and Annotating Affect Databases from Face and Body Display: A Contemporary Survey. In: Proceedings of the IEEE International Conference on Systems, Man and Cybernetics 8-11 October, 2006, Tainan, Taiwan. pp. 2426-2433. Available online

Databases containing representative samples of human multi-modal expressive behavior are needed for the development of affect recognition systems. However, at present publicly-available databases exist mainly for single expressive modalities such as facial expressions, static and dynamic hand postures, and dynamic hand gestures. Only recently, a first bimodal affect database consisting of expressive face and upperbody display has been released. To foster development of affect recognition systems, this paper presents a comprehensive survey of the current state-of-the art in affect database creation from face and body display and elicits the requirements of an ideal multi-modal affect database.

© All rights reserved Gunes and Piccardi and/or IEEE

 

Gunes, Hatice and Piccardi, Massimo (2006): A Bimodal Face and Body Gesture Database for Automatic Analysis of Human Nonverbal Affective Behavior. In: Proceedings of the 18th IAPR International Conference on Pattern Recognition 22-24 August, 2006, Hong Kong. pp. 1148-1153. Available online

To be able to develop and test robust affective multimodal systems, researchers need access to novel databases containing representative samples of human multi-modal expressive behavior. The creation of such databases requires a major effort in the definition of representative behaviors, the choice of expressive modalities, and the collection and labeling of large amount of data. At present, public databases only exist for single expressive modalities such as facial expression analysis. There also exist a number of gesture databases of static and dynamic hand postures and dynamic hand gestures. However, there is not a readily available database combining affective face and body information in a genuine bimodal manner. Accordingly, in this paper, we present a bimodal database recorded by two high resolution cameras simultaneously for use in automatic analysis of human nonverbal affective behavior.

© All rights reserved Gunes and Piccardi and/or IEEE Computer Society

 

Gunes, Hatice and Piccardi, Massimo (2006): Observer Annotation of Affective Display and Evaluation of Expressivity: Face vs. Face-and-Body. In: Proceedings of the HCSNet workshop on Use of vision in human-computer interaction 1-3 November, 2006, Canberra, Australia. pp. 35-42. Available online

A first step in developing and testing a robust affective multimodal system is to obtain or access data representing human multimodal expressive behaviour. Collected affect data has to be further annotated in order to become usable for the automated systems. Most of the existing studies of emotion or affect annotation are monomodal. Instead, in this paper, we explore how independent human observers annotate affect display from monomodal face data compared to bimodal face-and-body data. To this aim we collected visual affect data by recording the face and face-and-body simultaneously. We then conducted a survey by asking human observers to view and label the face and face-and-body recordings separately. The results obtained show that in general, viewing face-and-body simultaneously helps with resolving the ambiguity in annotating emotional behaviours.

© All rights reserved Gunes and Piccardi and/or their publisher

2005

Gunes, Hatice and Piccardi, Massimo (2005): Affect recognition from face and body: early fusion vs. late fusion. In: IEEE International Conference on Systems, Man and Cybernetics 10-12 October, 2005, Hawaii, USA. pp. 3437-3443. Available online

This paper presents an approach to automatic visual emotion recognition from two modalities: face and body. Firstly, individual classifiers are trained from individual modalities. Secondly, we fuse facial expression and affective body gesture information first at a feature-level, in which the data from both modalities are combined before classification, and later at a decision-level, in which we integrate the outputs of the monomodal systems by the use of suitable criteria. We then evaluate these two fusion approaches, in terms of performance over monomodal emotion recognition based on facial expression modality only. In the experiments performed the emotion classification using the two modalities achieved a better recognition accuracy outperforming the classification using the individual facial modality. Moreover, fusion at the feature-level proved better recognition than fusion at the decision-level.

© All rights reserved Gunes and Piccardi and/or IEEE
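
The feature-level versus decision-level comparison can be sketched as follows: the same two simulated modalities are fused either by concatenating feature vectors before a single classifier, or by averaging the class probabilities of two per-modality classifiers. Everything below is an illustrative assumption, not the paper's system.

```python
# Minimal sketch of the feature-level vs. decision-level contrast on
# synthetic "face" and "body" features.
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=24, n_informative=10,
                           n_classes=3, random_state=1)
face, body = X[:, :12], X[:, 12:]            # pretend split into two modalities
f_tr, f_te, b_tr, b_te, y_tr, y_te = train_test_split(
    face, body, y, test_size=0.3, random_state=1)

# Early (feature-level) fusion: concatenate, then classify once.
early = SVC(probability=True).fit(np.hstack([f_tr, b_tr]), y_tr)
acc_early = (early.predict(np.hstack([f_te, b_te])) == y_te).mean()

# Late (decision-level) fusion: classify per modality, average probabilities.
cf = SVC(probability=True).fit(f_tr, y_tr)
cb = SVC(probability=True).fit(b_tr, y_tr)
p = (cf.predict_proba(f_te) + cb.predict_proba(b_te)) / 2
acc_late = (p.argmax(axis=1) == y_te).mean()

print(f"early fusion: {acc_early:.2f}   late fusion: {acc_late:.2f}")
```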

 

Gunes, Hatice and Piccardi, Massimo (2005): Fusing Face and Body Display for Bi-Modal Emotion Recognition: Single Frame Analysis and Multi-Frame Post Integration. In: Tao, Jianhua, Tan, Tieniu and Picard, Rosalind W. (eds.) Proceedings of the 1st International Conference on Affective Computing and Intelligent Interaction 22-24 October, 2005, Beijing, China. pp. 102-111. Available online

This paper presents an approach to automatic visual emotion recognition from two modalities: expressive face and body gesture. Face and body movements are captured simultaneously using two separate cameras. For each face and body image sequence single expressive frames are selected manually for analysis and recognition of emotions. Firstly, individual classifiers are trained from individual modalities for mono-modal emotion recognition. Secondly, we fuse facial expression and affective body gesture information at the feature and at the decision-level. In the experiments performed, the emotion classification using the two modalities achieved a better recognition accuracy outperforming the classification using the individual facial modality. We further extend the affect analysis into a whole image sequence by a multi-frame post integration approach over the single frame recognition results. In our experiments, the post integration based on the fusion of face and body has shown to be more accurate than the post integration based on the facial modality only.

© All rights reserved Gunes and Piccardi and/or Springer-Verlag
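
The multi-frame post-integration step can be sketched as a simple accumulation of single-frame class probabilities over a sequence; the probabilities below are illustrative assumptions.

```python
# Minimal sketch of multi-frame post integration: single-frame class
# probabilities for one image sequence are combined into a sequence-level
# decision by summing them and taking the argmax.
import numpy as np

def post_integrate(frame_probs):
    """frame_probs: (n_frames, n_classes) array of per-frame class
    probabilities; returns the sequence-level class index."""
    return int(np.asarray(frame_probs).sum(axis=0).argmax())

# Toy sequence of 5 frames over 3 emotion classes: individual frames are
# noisy, but the accumulated evidence favours class 1.
frame_probs = np.array([[0.5, 0.3, 0.2],
                        [0.2, 0.6, 0.2],
                        [0.1, 0.7, 0.2],
                        [0.4, 0.4, 0.2],
                        [0.2, 0.5, 0.3]])
print("sequence-level decision:", post_integrate(frame_probs))   # -> 1
```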

 

Gunes, Hatice and Piccardi, Massimo (2005): Fusing face and body gesture for machine recognition of emotions. In: IEEE International Workshop on Robot and Human Interactive Communication 13-15 August, 2005, Nashville, USA. pp. 306-311. Available online

Research shows that humans are more likely to consider computers to be human-like when those computers understand and display appropriate nonverbal communicative behavior. Most of the existing systems attempting to analyze the human nonverbal behavior focus only on the face; research that aims to integrate gesture as an expression mean has only recently emerged. This paper presents an approach to automatic visual recognition of expressive face and upper body action units (FAUs and BAUs) suitable for use in a vision-based affective multimodal framework. After describing the feature extraction techniques, classification results from three subjects are presented. Firstly, individual classifiers are trained separately with face and body features for classification into FAU and BAU categories. Secondly, the same procedure is applied for classification into labeled emotion categories. Finally, we fuse face and body information for classification into combined emotion categories. In our experiments, the emotion classification using the two modalities achieved a better recognition accuracy outperforming the classification using the individual face modality.

© All rights reserved Gunes and Piccardi and/or their publisher

 

2004

Gunes, Hatice, Piccardi, Massimo and Jan, Tony (2004): Bimodal Modelling of Facial and Upper-Body Gesture for Affective HCI. In: Proceedings of OZCHI, the annual conference of the Computer-Human Interaction Special Interest Group (CHISIG) of Australia, 22-24 November, 2004, Wollongong, Australia.

 

Gunes, Hatice, Piccardi, Massimo and Jan, Tony (2004): Face and Body Gesture Analysis for Multimodal HCI. In: Masoodian, Masood, Jones, Steve and Rogers, Bill (eds.) Computer Human Interaction 6th Asia Pacific Conference - APCHI 2004 June 29 - July 2, 2004, Rotorua, New Zealand. pp. 583-588. Available online

 
 
 

