Emotional Modelling for HRI
Robotics Research Lab, USC


Overview

The goal of this research is to develop a novel model for the classification and guidance of user emotion state using both physiological and vocal data. These data are obtained wirelessly from the user to provide an online emotion state analysis tool capable of detecting the current emotion state, tracking the user's state as it changes between emotions, and creating interaction behaviors in line with the user's current emotion state and emotion state trajectory in real time. The model is formally evaluated using human subjects and robots with varying levels of emotional awareness. Future extensions of this work will show that robots with emotional awareness, emotion tracking, and principled behavior self-modification are capable of engaging in interactions with human users of longer duration and increased effectiveness than those without such capabilities.
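
As an illustration of the online analysis loop described above, the following Python sketch smooths per-window classifier outputs into a state trajectory and flags transitions between emotion states. The state names, history length, and majority-vote smoothing rule are assumptions chosen for illustration, not the project's actual method.

# A minimal sketch, under assumed state names, of an online loop:
# classify each incoming window, smooth the estimates into a state
# trajectory, and report when the tracked state changes.
from collections import Counter, deque

def track_state(label_stream, history=5):
    """Yield (current_state, transition_flag) for each incoming window label,
    using a majority vote over the last `history` windows for stability."""
    recent = deque(maxlen=history)
    current = None
    for label in label_stream:
        recent.append(label)
        majority = Counter(recent).most_common(1)[0][0]
        transition = majority != current
        current = majority
        yield current, transition

# Hypothetical per-window classifier outputs arriving in real time.
labels = ["engaged"] * 6 + ["bored", "engaged", "bored", "bored", "bored"]
for state, changed in track_state(labels):
    print(state, "(transition)" if changed else "")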

Background

An emerging aspect of research in Human-Robot Interaction (HRI) centers on developing an understanding of how to detect, maintain, and recapture user interest. Interest detection and maintenance are important in areas ranging from rehabilitation to tutoring. Fundamentally, measuring and cultivating user interest matters in any scenario in which the robot is charged with transmitting information or assistance to a user. However, one of the difficulties with tracking human emotion is that, when queried, users may not answer truthfully, may not even be able to, and their emotion state may change between queries. Furthermore, the need for constant queries may itself negatively affect the user's emotion state. Therefore, an additional method is necessary.

Physiological data analysis uses the user's physiological signals to estimate emotion state. Once the user's affective state has been determined, the robot will be able to adapt its behavior either to maintain the current user state or to influence the user to return from the current affective state to a state designated as ideal. The robot can alter its behavior in several ways, including changing the suggested exercises, altering its personality, or changing its physical proximity (distance to the user), among other behavioral changes.
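
The sketch below illustrates, under assumed state names and an assumed adjustment table, how an estimated affective state might be mapped to the kinds of behavior changes listed above (exercise, personality, proximity). It is a hypothetical example, not the project's actual adaptation policy.

# Illustrative only: one way a moderator robot might map an estimated
# affective state to a behavior adjustment. The state names, target state,
# and adjustment table are assumptions.
from dataclasses import dataclass

@dataclass
class RobotBehavior:
    exercise: str = "peg_sorting"
    personality: str = "neutral"
    proximity_m: float = 1.5   # distance to the user in meters

ADJUSTMENTS = {
    # estimated state -> hypothetical change intended to steer the user
    # back toward the designated ideal state ("engaged")
    "bored":   lambda b: RobotBehavior("card_matching", "playful", b.proximity_m - 0.3),
    "anxious": lambda b: RobotBehavior(b.exercise, "calm", b.proximity_m + 0.3),
    "engaged": lambda b: b,   # already at the ideal state; no change
}

def adapt(behavior: RobotBehavior, estimated_state: str) -> RobotBehavior:
    """Return an adjusted behavior for the estimated state (unchanged if unknown)."""
    return ADJUSTMENTS.get(estimated_state, lambda b: b)(behavior)

print(adapt(RobotBehavior(), "bored"))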

Approach

The platform for experimentation is a series of exercises and games in which the user is asked to perform a repetitive task requiring manual dexterity. The task requires some concentration but is not overly interesting; this reduces the possibility that the user prolongs the interaction because of the task itself rather than the robot's behavior. The subjects are split into three groups. The first group interacts with a robot that never alters its behavior. The second group interacts with a robot that changes its behavior randomly, not based on any user cues. The third group interacts with a robot that changes its behavior based on user physiological cues. The behaviors altered include the robot's proximity to the user, vocalized statements, motion patterns, and camera usage. The physiological data are windowed and segmented based on moderator type and classified using a k-Nearest Neighbors algorithm.
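
As a rough illustration of the windowing and classification step, the following Python sketch computes simple per-window statistics from a physiological signal and classifies each window with scikit-learn's k-Nearest Neighbors. The sampling rate, window length, features, and labels are assumptions; the project's actual features and segmentation by moderator type are not reproduced here.

# A minimal sketch (not the project's actual pipeline) of windowing a
# physiological signal and classifying each window with k-Nearest Neighbors.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def window_features(signal, fs, win_s=4.0):
    """Split a 1-D signal into fixed-length windows and compute simple
    per-window statistics (mean, standard deviation, slope)."""
    win = int(win_s * fs)
    feats = []
    for i in range(len(signal) // win):
        seg = signal[i * win:(i + 1) * win]
        slope = np.polyfit(np.arange(win), seg, 1)[0]
        feats.append([seg.mean(), seg.std(), slope])
    return np.asarray(feats)

# Hypothetical training data: a galvanic skin response trace with each
# window labeled by a coarse emotion state (labels are placeholders).
rng = np.random.default_rng(0)
gsr = rng.normal(size=10 * 60 * 4)            # 10 minutes at an assumed 4 Hz
X_train = window_features(gsr, fs=4)
y_train = rng.choice(["engaged", "bored", "anxious"], size=len(X_train))

clf = KNeighborsClassifier(n_neighbors=5)
clf.fit(X_train, y_train)

X_new = window_features(rng.normal(size=60 * 4), fs=4)
print(clf.predict(X_new))                      # one predicted label per window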

Videos

Images

Lay of the Land
Experimental Setup
Devices in Use
BodyMedia SenseWear Pro2 Armband
Robot Moderator
Simulated Robot Moderator

Publications
Support

This work is supported by the National Science Foundation's Graduate Research Fellowship Program.

Contact

Emily Mower: mower at usc dot edu
Emily Mower's Personal Page