Research Areas

Current Research Areas:

Embodied Communication and Interaction

We are exploring the role that a robot's physical embodiment plays in its ability to engage, influence, and interact. We are examining the difference physical embodiment makes in social interaction, compared to non-physical but otherwise identical interactions (such as via smartphones, PDAs, and computers). We are interested in both better understanding and leveraging the growing evidence favoring physically embodied robots over disembodied alternatives, which shows that robots are particularly effective at engaging human users and shaping their behavior. We are seeking to develop principled methods for controlling the robot's embodiment and expressiveness through "body language", including micro-social behaviors: proxemics (social spacing), oculesics (eye gaze), kinesics (gesture, posture, and facial expressions), and deixis (spatial referencing). Our research also spans the study of spatial language, embodied personality expression, embodied communication, and the coordination of multiple embodied communication channels/modalities, including gestures based on sentence content, structure, and timing, and appropriate references to locations of joint task-related attention.
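
To make the proxemics component concrete, the following is a minimal sketch of how a robot might label its distance to a person and regulate it. This is not the lab's actual controller: the zone boundaries are Hall's commonly cited values, and the target distance, gain, and speed limit are invented for illustration.

```python
# Illustrative proxemics sketch: zone labels plus a simple
# distance-keeping command. All parameters are illustrative.

HALL_ZONES = [          # (label, upper bound in meters)
    ("intimate", 0.45),
    ("personal", 1.2),
    ("social", 3.6),
]

def classify_zone(distance_m):
    """Map a robot-to-person distance onto a proxemic zone label."""
    for label, upper in HALL_ZONES:
        if distance_m <= upper:
            return label
    return "public"

def approach_command(distance_m, target_m=1.5, gain=0.5, max_speed=0.4):
    """Proportional controller toward a preferred social distance:
    positive output = approach, negative = back away (m/s)."""
    error = distance_m - target_m
    return max(-max_speed, min(max_speed, gain * error))

if __name__ == "__main__":
    for d in (0.3, 1.0, 2.0, 5.0):
        print(f"{d:.1f} m -> {classify_zone(d):8s}, cmd {approach_command(d):+.2f} m/s")
```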

Individual project pages
Influencing Social Dynamics

We are developing a heterogeneous set of computational models for complex mechanisms, such as liking, trust, and motivation, drawn from social and cognitive science and known to influence behavior and improve relationships. These models will allow robots to leverage physical embodiment to effectively convey, train, and facilitate the human user's pursuit of recommended activities (e.g., exercise, treatment, training, rehabilitation) and the internalization of the goals of those activities, toward improving user autonomy, self-efficacy, and self-determination. These efforts involve the study of micro-social behaviors (see Embodied Communication and Interaction) as a means of social influence in lower-level social interactions, including position and orientation relative to the social partner, the use of mutual eye gaze and gaze aversion, and gesticulation that reflects dominance/authority, among others. We are also modeling motivation through embodied interaction.
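
As a deliberately simple illustration of what such a computational model might track, here is a sketch of scalar trust and motivation state updated after each session. The variables, update rule, and parameters are invented for illustration and are not the lab's published models.

```python
from dataclasses import dataclass

@dataclass
class RelationalState:
    """Hypothetical scalar summaries of the constructs named above."""
    trust: float = 0.5       # in [0, 1]
    motivation: float = 0.5  # in [0, 1]

def update_after_session(state, task_succeeded, praise_given, rate=0.2):
    """One illustrative update: success moves trust toward 1, failure
    toward 0; praise after success boosts motivation, failure erodes it."""
    state.trust += rate * ((1.0 if task_succeeded else 0.0) - state.trust)
    if task_succeeded and praise_given:
        state.motivation = min(1.0, state.motivation + 0.5 * rate)
    elif not task_succeeded:
        state.motivation = max(0.0, state.motivation - 0.25 * rate)
    return state

if __name__ == "__main__":
    s = RelationalState()
    for ok in (True, True, False, True):
        s = update_after_session(s, task_succeeded=ok, praise_given=ok)
        print(f"trust={s.trust:.2f}  motivation={s.motivation:.2f}")
```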

Individual project pages
Long-Term Adaptation

We are interested in developing and enhancing methods that allow robots to adapt to specific users and personalize their interaction in order to provide effective, sustained long-term human-robot interaction. Interaction with people is a challenging learning problem because it involves a "moving target": people's moods, health state, task expertise, and other factors are not necessarily accessible, change at various time-scales, and can influence behavior in unexpected ways. People typically do not have the patience to provide ample training examples, they do not always act consistently, and they require intuitive interfaces. Our work is driven by several motivations: 1) to leverage human-robot interaction in order to enhance the robot's ability to learn new concepts, skills, and tasks; 2) to enable the robot to personalize its interaction with the user so as to optimize its effectiveness in assisting the user; 3) to develop methods for learning at human timescales, which are both very short (within a single social interaction) and very long (repeated encounters/sessions over months and even years); and 4) to situate HRI learning in realistic domains, including classrooms, health care contexts, and homes. Our ongoing projects explore learning and adaptation in a variety of problem domains, including human-robot team collaboration, user adaptation, task learning from demonstration, and socially guided service task learning.
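
One simple way to frame the user-personalization problem is as a bandit over candidate interaction styles, learning from sparse, noisy feedback across repeated sessions. The sketch below is an assumption-laden illustration (the style names and reward signal are placeholders), not the lab's actual adaptation method.

```python
import random

class BehaviorPersonalizer:
    """Epsilon-greedy bandit over candidate interaction styles. The styles
    and the reward signal (e.g., a user-engagement score) are placeholders."""

    def __init__(self, styles, epsilon=0.1):
        self.styles = list(styles)
        self.epsilon = epsilon
        self.counts = {s: 0 for s in self.styles}
        self.values = {s: 0.0 for s in self.styles}  # running mean reward

    def choose(self):
        """Mostly exploit the best-rated style, occasionally explore."""
        if random.random() < self.epsilon:
            return random.choice(self.styles)
        return max(self.styles, key=self.values.get)

    def update(self, style, reward):
        """Incremental mean update after observing the user's response."""
        self.counts[style] += 1
        self.values[style] += (reward - self.values[style]) / self.counts[style]

if __name__ == "__main__":
    random.seed(0)
    p = BehaviorPersonalizer(["chatty", "concise", "encouraging"])
    for _ in range(50):
        style = p.choose()
        # Stand-in reward: pretend this user responds best to "encouraging".
        reward = random.gauss(1.0 if style == "encouraging" else 0.3, 0.1)
        p.update(style, reward)
    print({s: round(v, 2) for s, v in p.values.items()})
```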

Individual project pages
Activity Modeling and Understanding

We are interested in developing computational models of user activity. Our research examines approaches to analyzing human activity data from various modalities, as well as novel applications of user modeling for HRI and HCI. Using the latest sensor technologies, we are able to extract and utilize body movement, speech, and physiological signals during human-machine interactions in order to model and recognize user beliefs, desires, and intentions. We are enabling robots to detect spatially situated social cues, such as transitions into, during, and out of an interaction. We are also developing methods for providing feedback to the user during exercise or other task-performance sessions based on physiological data such as heart rate, respiration rate, skin temperature, and accelerometry. Our methods employ a variety of machine learning and signal processing algorithms toward robust activity recognition and understanding, for use in autonomous decision making for socially situated agents in HRI and HCI contexts.
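
As a minimal sketch of heart-rate-based exercise feedback, the following uses the well-known Karvonen heart-rate-reserve formula to compute a target zone and turn readings into coaching cues. The 220-minus-age maximum and the 50-70% intensity band are common rules of thumb used here purely for illustration, not the lab's published parameters.

```python
def target_hr_zone(age, resting_hr, low=0.5, high=0.7):
    """Karvonen heart-rate-reserve formula with illustrative
    intensity bounds (50-70% of heart-rate reserve)."""
    reserve = (220 - age) - resting_hr  # rule-of-thumb max HR estimate
    return (resting_hr + low * reserve, resting_hr + high * reserve)

def exercise_feedback(current_hr, zone):
    """Turn a heart-rate reading into a simple coaching cue."""
    lo, hi = zone
    if current_hr < lo:
        return "pick up the pace"
    if current_hr > hi:
        return "slow down"
    return "good pace, keep it up"

if __name__ == "__main__":
    zone = target_hr_zone(age=30, resting_hr=60)
    for hr in (95, 130, 165):
        print(f"HR {hr}: {exercise_feedback(hr, zone)} "
              f"(zone {zone[0]:.0f}-{zone[1]:.0f} bpm)")
```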

Individual project pages

Past Research Areas:
We are no longer conducting research in these areas.

Multi-Robot Coordination and Learning

The Interaction Lab's original research, from its inception (see the lab profile for the history of the Interaction Lab), was in multi-robot coordination and learning. Almost 15 years later, the focus of the lab's research has shifted, and WE ARE NO LONGER CONDUCTING RESEARCH IN THIS AREA. The one exception is HRI for human-robot teams, which spans the active research areas listed above.

More information and individual project pages
Humanoid Robot Control and Learning

Control of humanoid robots and agents is one of the most exciting areas of robotics, and one that the Interaction Lab focused on in the past. WE ARE NO LONGER CONDUCTING RESEARCH IN THIS AREA. Instead, we now study embodied humanoid control in the context of Socially Assistive Robotics and Activity Modeling.

More information and individual project pages