Robust Features for Automatic Text-Independent Emotion Recognition from Speech

Veerla Tulasi Ambica, K. Ravi Chandra


Speech emotion recognition is one of the latest challenges in speech processing and Human-Computer Interaction (HCI), driven by the operational needs of real-world applications. Besides human faces, speech has proven to be one of the most promising modalities for automatic human emotion recognition. Speech is a spontaneous medium for perceiving emotions, providing in-depth information related to the different cognitive states of a human being. In the verbal channel, emotional content is largely conveyed as constant paralinguistic information signals, of which prosody is the most important component. The lack of assessment of affect and emotional states in human-machine interaction, however, currently constrains the potential behavior and user experience of technological devices. In this paper, speech prosody and related acoustic features of speech are used for the recognition of emotion from spoken Finnish. More specifically, methods for emotion recognition from speech relying on long-term global prosodic parameters are developed. An information fusion method is developed for short-segment emotion recognition using local prosodic features and vocal source features. A framework for the visualization of emotional speech data is presented for prosodic features.
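The long-term global prosodic parameters mentioned above are typically utterance-level statistics (mean, deviation, range) of frame-wise pitch and energy contours. As a minimal sketch of that idea, the following code computes such statistics with a crude autocorrelation-based F0 estimate; the function name, frame sizes, and F0 search range (80-400 Hz) are illustrative assumptions, not the paper's actual feature set.

```python
import numpy as np

def global_prosodic_features(signal, sr=16000, frame_len=400, hop=160):
    """Utterance-level (global) prosodic statistics from frame-wise
    energy and a simple autocorrelation-based F0 estimate.
    All parameter choices here are illustrative assumptions."""
    energies, f0s = [], []
    for start in range(0, len(signal) - frame_len, hop):
        frame = signal[start:start + frame_len]
        energies.append(float(np.sum(frame ** 2)))
        # crude F0: autocorrelation peak searched in the 80-400 Hz lag range
        ac = np.correlate(frame, frame, mode="full")[frame_len - 1:]
        lo, hi = sr // 400, sr // 80
        if hi < len(ac):
            lag = lo + int(np.argmax(ac[lo:hi]))
            f0s.append(sr / lag)
    f0s, energies = np.array(f0s), np.array(energies)
    return {
        "f0_mean": float(f0s.mean()),
        "f0_std": float(f0s.std()),
        "f0_range": float(f0s.max() - f0s.min()),
        "energy_mean": float(energies.mean()),
        "energy_std": float(energies.std()),
    }

# toy usage: a one-second 150 Hz tone stands in for a voiced utterance
sr = 16000
t = np.arange(sr) / sr
feats = global_prosodic_features(np.sin(2 * np.pi * 150 * t), sr)
```

In a real system these global statistics would feed a classifier (e.g. GMM or SVM), while the short-segment fusion method described above would instead operate on local, frame- or syllable-level versions of the same contours.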
Keywords: affective computing; data visualization; emotion recognition; machine learning; speech prosody; global prosodic features; local prosodic features; Emo-DB; IITKGP-SESC; vowel onset point; segment-wise emotion recognition; region-wise emotion recognition

Copyright (c) 2015 Veerla Tulasi Ambica, K. Ravi Chandra

Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
