A System for Recognition of Emotions Based on Speech Analysis and Facial Feature Extraction for Applications to Human-Robot Interaction

Advanced human-robot interaction will rely on the recognition of emotions. The most natural way to recognize emotions is to extract features from the human face and to analyze speech. We carry out an exploratory study on a methodology to automatically classify basic emotional states (happiness, anger, fear, sadness, surprise, disgust and neutral), based on facial feature extraction and on phonetic and acoustic signal properties. The proposed methodology for speech analysis consists of generating and analyzing graphs of formants, pitch and intensity, using the open-source PRAAT program. The methodology used for facial feature extraction is based on a mathematical formulation (Bézier curves) of the facial features and on the facial Action Units (AUs) to recognize the basic emotions. The proposed facial expression technique consists of three steps: (i) detecting the facial region within the image, (ii) extracting and classifying the facial features, and (iii) recognizing the emotion. The experimental results show that the basic emotions could be recognized in most cases.
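The acoustic analysis rests on pitch, intensity and formant tracks computed by PRAAT. As a rough illustration of the kind of measurement involved, the sketch below estimates pitch (fundamental frequency) of a single frame by autocorrelation, which is a simplified stand-in for Praat's more robust algorithm rather than the system's actual code; the signal, sample rate and pitch range are invented for the example.

```python
import math

def estimate_pitch(frame, sample_rate, fmin=75.0, fmax=500.0):
    """Estimate F0 of one signal frame via autocorrelation (simplified sketch)."""
    n = len(frame)
    # Only search lags corresponding to the plausible pitch range [fmin, fmax].
    lag_min = int(sample_rate / fmax)
    lag_max = int(sample_rate / fmin)
    best_lag, best_corr = 0, 0.0
    for lag in range(lag_min, min(lag_max, n - 1) + 1):
        # Correlation of the frame with a lag-shifted copy of itself.
        corr = sum(frame[i] * frame[i + lag] for i in range(n - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    # The best lag is one pitch period; convert it to a frequency.
    return sample_rate / best_lag if best_lag else 0.0

# Synthetic 200 Hz tone sampled at 8 kHz (stand-in for a voiced speech frame).
sr = 8000
frame = [math.sin(2 * math.pi * 200 * t / sr) for t in range(800)]
print(round(estimate_pitch(frame, sr)))  # → 200
```

In practice Praat refines this basic idea with windowing, interpolation around the correlation peak, and voiced/unvoiced decisions, but the lag-to-frequency conversion above is the core of any autocorrelation pitch tracker.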
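On the visual side, facial features such as lip or eyebrow contours are represented mathematically with Bézier curves. A minimal sketch of evaluating a cubic Bézier curve from four control points follows; the control-point coordinates are hypothetical, chosen only to suggest a mouth contour in pixel space.

```python
def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t in [0, 1]."""
    s = 1.0 - t
    # Bernstein form: s^3*p0 + 3*s^2*t*p1 + 3*s*t^2*p2 + t^3*p3
    x = s**3 * p0[0] + 3 * s**2 * t * p1[0] + 3 * s * t**2 * p2[0] + t**3 * p3[0]
    y = s**3 * p0[1] + 3 * s**2 * t * p1[1] + 3 * s * t**2 * p2[1] + t**3 * p3[1]
    return (x, y)

# Hypothetical control points for a mouth contour (pixel coordinates):
# the endpoints p0, p3 sit at the mouth corners, p1, p2 shape the curve.
p0, p1, p2, p3 = (10.0, 50.0), (25.0, 40.0), (45.0, 40.0), (60.0, 50.0)
curve = [cubic_bezier(p0, p1, p2, p3, i / 10) for i in range(11)]
print(curve[0], curve[-1])  # → (10.0, 50.0) (60.0, 50.0)
```

A curve like this, fitted to detected landmark points, gives a compact parametric description of a feature; changes in the control points between a neutral and an expressive face can then be mapped to Action Units.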
PhD ExpO Year: 
Research Area: