Publication Details

Emily K. Mower. "Emotions in Engineering: Methods for the Interpretation of Ambiguous Emotional Content". PhD thesis, University of Southern California, 2010.

Abstract:

Emotion has intrigued researchers for generations. This fascination has permeated the engineering community, motivating the development of affective computational models for the classification of affective states. However, human emotion remains notoriously difficult to interpret computationally, both because of the mismatch between the emotional cue generation (the speaker) and perception (the observer) processes and because of the presence of complex emotions, emotions that contain shades of multiple affective classes. Proper representations of emotion would ameliorate this problem by introducing multidimensional characterizations of the data that permit the quantification and description of the varied affective components of each utterance. The mathematical representation of emotion, however, remains under-explored.

Research in emotion expression and perception provides a complex and human-centered platform for the integration of machine learning techniques and multimodal signal processing towards the design of interpretable data representations. The focus of this dissertation is to provide a computational description of human emotion perception and to combine this knowledge with the information gleaned from emotion classification experiments to develop a mathematical characterization capable of interpreting naturalistic expressions of emotion, utilizing a data representation method called Emotion Profiles.

The analysis of human emotion perception provides an understanding of how humans integrate audio and video information during emotional presentations. The goals of this work are to determine how audio and video information interact during the human emotional evaluation process and to identify a subset of the features that contribute to specific types of emotion perception. We identify perceptually relevant feature modulations and multimodal feature integration trends using statistical analyses of the evaluator reports.

The trends in evaluator reports are analyzed using emotion classification. We study evaluator performance using a combination of Hidden Markov Model (HMM) and Naïve Bayes (NB) classification. The HMM classification is used to predict individual evaluator emotional assessments. The NB classification provides an estimate of the consistency of each evaluator's mental model of emotion. We demonstrate that reports created by evaluators with higher levels of estimated consistency are more accurately predicted than reports from less consistent evaluators.

The insights gleaned from the emotion perception and classification studies are aggregated to develop a novel emotional representation scheme called Emotion Profiles (EPs). The design of the EPs is predicated on the knowledge that naturalistic emotion expressions can be approximately described using one or more labels from a set of basic emotions. EPs are a quantitative measure expressing the degree of the presence or absence of a set of basic emotions within an expression. They avoid the need for a hard-labeled assignment by instead providing a method for describing the shades of emotion present in an utterance. These profiles can be used to determine a most likely assignment for an utterance, to map out the evolution of the emotional tenor of an interaction, or to interpret utterances that have multiple affective components. The Emotion Profile technique is able to accurately identify the emotion of utterances with definable ground truths (emotions with an evaluator consensus) and to interpret the affective content of emotions with ambiguous emotional content (no evaluator consensus), emotions that are typically discarded during classification tasks.
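To make the profile representation concrete, the following is a minimal sketch of how an EP vector might be formed and used. It assumes one score per basic emotion (e.g., a hypothetical per-emotion classifier confidence); the emotion set, threshold, and helper names are illustrative assumptions, not the dissertation's actual implementation.

	import numpy as np

	# Hypothetical set of basic emotions; the IEMOCAP experiments described
	# in the abstract use angry, happy, sad, and neutral.
	BASIC_EMOTIONS = ["angry", "happy", "sad", "neutral"]

	def emotion_profile(confidences):
	    """Stack per-emotion scores into a profile vector.

	    `confidences` maps each basic emotion to a score expressing the
	    degree of presence (positive) or absence (negative) of that emotion
	    in an utterance, e.g., an assumed classifier confidence or margin.
	    """
	    return np.array([confidences[e] for e in BASIC_EMOTIONS])

	def most_likely_label(profile):
	    """Collapse a profile to a hard label when one is required."""
	    return BASIC_EMOTIONS[int(np.argmax(profile))]

	def is_ambiguous(profile, margin=0.1):
	    """Treat an utterance as ambiguous when its two strongest
	    components are nearly tied (illustrative threshold only)."""
	    top, runner_up = np.sort(profile)[::-1][:2]
	    return (top - runner_up) < margin

	# Example: an utterance carrying shades of both anger and sadness.
	ep = emotion_profile({"angry": 0.62, "happy": -0.80, "sad": 0.55, "neutral": -0.30})
	print(ep, most_likely_label(ep), is_ambiguous(ep))

Tracking such vectors across successive utterances would, in the same spirit, trace the evolution of the emotional tenor of an interaction over time.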
The algorithms and statistical analyses presented in this work are tested using two databases. The first database is a combination of synthetic (facial information) and natural human (vocal information) cues. The affective content of the two modalities is either matched (congruent presentation) or mismatched (conflicting presentation). The congruent and conflicting presentations are used to assess the affective perceptual relevance of both the individual modalities and the specific feature modulations of those modalities. The second database is an audio-visual + motion-capture database collected at the University of Southern California, the USC IEMOCAP database. This database is used to assess the efficacy of the EP technique for quantifying the emotional content of an utterance. The IEMOCAP database is also used in the classification studies to determine how well individual evaluators can be modeled and how accurately discrete emotional labels (e.g., angry, happy, sad, neutral) can be predicted given audio and motion-capture feature information.

The future directions of this work include the unification of the emotion perception, classification, and quantification studies. The classification framework will be extended to include evaluator-specific features (an extension of the emotion perception studies) and temporal features based on EP estimates. This unification will produce a classification framework that is not only more effective than previous versions but is also able to adapt to specific user emotion production and perception styles.
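The Naïve Bayes consistency estimate mentioned in the abstract can be sketched in the same hedged spirit. The abstract does not specify the estimator's details, so this example simply uses the cross-validated self-prediction accuracy of a Gaussian NB model over a single evaluator's own reports as a stand-in consistency score; the feature matrix and labels below are synthetic placeholders, not IEMOCAP data.

	import numpy as np
	from sklearn.naive_bayes import GaussianNB
	from sklearn.model_selection import cross_val_score

	def evaluator_consistency(features, labels, folds=5):
	    """Proxy for the consistency of an evaluator's mental model of
	    emotion: how well a simple NB model trained on the evaluator's
	    own reports predicts that evaluator's held-out reports."""
	    scores = cross_val_score(GaussianNB(), features, labels, cv=folds)
	    return scores.mean()

	# Synthetic stand-in data: 200 utterances, 10 audio/motion-capture
	# features, labels drawn from the four discrete classes above.
	rng = np.random.default_rng(0)
	X = rng.normal(size=(200, 10))
	y = rng.choice(["angry", "happy", "sad", "neutral"], size=200)
	print(evaluator_consistency(X, y))

Under this reading, a higher score suggests a more internally consistent evaluator, matching the abstract's finding that such evaluators' reports are more accurately predicted.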

Keywords:

affect, recognition

HTML Reference:
<span class="author">Emily K. Mower</span>. "<span class="title">Emotions in Engineering: Methods for the Interpretation of Ambiguous Emotional Content</span>". <span class="school">University of Southern California</span>, <span class="month"></span> <span class="year">2010</span>.
Bibtex Reference:
@phdthesis{Mower-2010-786,
	author = "Emily K. Mower",
	title = "Emotions in Engineering: Methods for the Interpretation of Ambiguous Emotional Content",
	year = "2010",
	url = "http://robotics.usc.edu/publications/786/"
}
Affiliated Labs:

Interaction Lab
Signal Analysis and Interpretation Laboratory

Author Details:

Name: Emily K. Mower