Tutorial 2: Emotion and Mental State Recognition: Features, Models, System Applications and Beyond
Emotion recognition is the ability to identify what a person is feeling from moment to moment and to understand the connection between feelings and expressions. In today's world, human-computer interaction (HCI) interfaces undoubtedly play an important role in our daily lives. Toward harmonious HCI interfaces, the automated analysis and recognition of human emotion have attracted increasing attention from researchers across multiple disciplines. A specific area of current interest, with key implications for HCI, is the estimation of cognitive load (mental workload), research into which is still at an early stage. Technologies for processing everyday signals, including speech, text, and music, have expanded the modalities through which humans interact with computer-supported communication artifacts.
In this tutorial, we will present theoretical and practical work offering new and broad views of the latest research in emotional awareness from audio and speech. We cover several topics spanning theoretical background and applications: salient emotional features, emotional-cognitive models, compensation methods for variability due to speaker and linguistic content, and machine learning approaches applicable to emotion recognition. For each topic, we will review the state of the art by introducing current methods and presenting several applications. In particular, the application to cognitive load estimation will be discussed, from its psychophysiological origins to system design considerations. Ultimately, technologies developed in different areas will be combined in future applications, so in addition to surveying open research challenges, we will envision a few scenarios in which affective computing can make a difference.
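To make the feature-and-classifier pipeline mentioned above concrete, the sketch below illustrates one simple instance of it: frame-level acoustic cues (short-time energy and zero-crossing rate, two classic prosodic/spectral correlates of arousal) are pooled into an utterance-level descriptor and fed to a toy nearest-centroid classifier. All signal parameters, labels, and the classifier choice here are illustrative assumptions, not the tutorial's specific methods; real systems use far richer feature sets and models.

```python
import numpy as np

def frame_features(signal, frame_len=256, hop=128):
    """Per-frame short-time energy and zero-crossing rate (a minimal feature set)."""
    feats = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        energy = float(np.mean(frame ** 2))
        # Each sign change in the waveform contributes |diff| = 2, so divide by 2.
        zcr = float(np.mean(np.abs(np.diff(np.sign(frame)))) / 2)
        feats.append((energy, zcr))
    return np.array(feats)

def utterance_vector(signal):
    """Pool frame features into a fixed-length descriptor (per-feature mean and std)."""
    f = frame_features(signal)
    return np.concatenate([f.mean(axis=0), f.std(axis=0)])

class NearestCentroid:
    """Toy classifier: assign the label of the closest class mean in feature space."""
    def fit(self, X, y):
        self.labels_ = sorted(set(y))
        self.centroids_ = np.array(
            [np.mean([x for x, lab in zip(X, y) if lab == c], axis=0)
             for c in self.labels_])
        return self

    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return [self.labels_[i] for i in d.argmin(axis=1)]

# Synthetic stand-ins for speech: we assume "aroused" utterances are louder and
# noisier, with more high-frequency content, than "calm" ones (a simplification).
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 8000)

def synth(f0, amp, noise):
    return amp * np.sin(2 * np.pi * f0 * t) + noise * rng.standard_normal(t.size)

X = np.array([utterance_vector(synth(120, 0.3, 0.01)),
              utterance_vector(synth(130, 0.35, 0.01)),
              utterance_vector(synth(300, 1.0, 0.1)),
              utterance_vector(synth(320, 0.9, 0.1))])
y = ["calm", "calm", "aroused", "aroused"]

clf = NearestCentroid().fit(X, y)
test = utterance_vector(synth(310, 0.95, 0.1))
print(clf.predict(np.array([test])))  # a loud, high-frequency probe utterance
```

In practice, speaker and linguistic-content variability (also discussed in the tutorial) would be compensated before classification, e.g. by per-speaker feature normalization.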