To enable natural and productive human-robot interaction (HRI), a co-robot must both understand and control 'proxemics'—the social use of space—in order to communicate in ways commonly used and understood by humans. This project focuses on answering the question: How do social (speech and gesture), environmental (loud noises and low lighting), and personal (hearing and visual impairments) factors influence positioning and communication between humans and co-robots, and how should a co-robot adjust its social behaviors to maximize human perception of its social signals?

The work on this project has focused on socially assistive settings relevant to broad user populations; the project has developed principled, literature-grounded, data-driven models for the recognition and control of proxemic, deictic, and oculesic co-robot behaviors in situated HRI. Experiments conducted with co-robots inform the development of data-driven, adaptive probabilistic models that estimate the proxemic, deictic, and oculesic interaction parameters for both the human user and the co-robot. In addition, models for other communication modalities, such as light, sound, and gesture, are being developed and validated in user studies.

This NRI research establishes a foundational component of HRI for co-robotics. The work informs co-robot design and control, and provides software and corpora of HRI data for use by other researchers. Beyond robotics, the work on this NRI informs, validates, quantifies, and extends longstanding findings in the social sciences.

This project also includes a strong public and K-12 outreach component that weaves HRI themes and findings into high-impact outreach events, including an annual large-scale open house, an effective means of increasing interest in STEM-related activities.

Automated Proxemic Behavior Recognition and Production
This research investigates proxemics in human-robot interaction (HRI). Proxemics is the study of the dynamic process by which people position themselves in face-to-face social encounters. This process is governed by sociocultural norms that, in effect, determine the overall sensory experience of each interacting participant. To facilitate situated and mobile HRI, this research seeks to develop functional and socially appropriate probabilistic computational models of proxemics for the purposes of both autonomous proxemic behavior recognition (of one or many people) and autonomous proxemic behavior production (by a sociable robot).

PhD dissertation work by Ross Mead
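
As a simplified illustration of this kind of probabilistic recognition model (not the dissertation's actual formulation), the sketch below fits one Gaussian per proxemic zone over interagent distance and relative orientation, then labels a new observation by maximum likelihood. The zone names and observation data are hypothetical.

```python
# Illustrative sketch: recognizing a proxemic zone from interagent distance
# and orientation using one Gaussian per zone (hypothetical data).
import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical training observations: (distance in m, relative orientation in rad),
# grouped by Hall-style proxemic zone labels.
observations = {
    "personal": np.array([[0.9, 0.1], [1.1, -0.2], [1.0, 0.0], [0.8, 0.3]]),
    "social":   np.array([[2.0, 0.2], [2.4, -0.1], [1.9, 0.0], [2.2, 0.4]]),
}

# Fit one Gaussian per zone (mean and covariance of the observed features),
# with a small diagonal term to keep the covariance well-conditioned.
models = {
    zone: multivariate_normal(mean=data.mean(axis=0),
                              cov=np.cov(data, rowvar=False) + 1e-6 * np.eye(2))
    for zone, data in observations.items()
}

def recognize_zone(distance, orientation):
    """Return the proxemic zone with the highest likelihood for the observation."""
    return max(models, key=lambda z: models[z].pdf([distance, orientation]))

print(recognize_zone(1.05, 0.05))  # -> "personal"
```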

Sound, Light, and Gesture Communication Modalities
We are developing models of non-speech communication modalities, specifically sound, light, and gesture. For sound, we have explored how the robot’s sound can be modulated to improve auditory localization, that is, a co-located human’s knowledge of the robot’s physical location in space. We are now developing a model of how the robot should use sound in different environmental conditions to improve a co-located human’s spatial awareness of its presence and location. For light, we have performed a video-based study in which participants rated light signals that varied in color, frequency, and pattern. Based on those ratings, we are now developing a mathematical model that describes user perceptions of these light signals in the context of different motions. For gestures, we are investigating how bimanual robots can selectively perform co-verbal gestures to improve a specific interaction outcome, such as the listener’s information recall.
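
As a rough sketch of what such a perceptual model of light signals might look like, the example below fits a least-squares linear model relating assumed signal parameters (blink frequency, hue, duty cycle) to a mean participant rating. The parameterization, rating scale, and data are hypothetical stand-ins, not the study's.

```python
# Illustrative sketch: a linear model relating light-signal parameters to
# mean perception ratings (hypothetical encoding and data).
import numpy as np

# Each row: [blink frequency (Hz), hue (0-1), pattern duty cycle (0-1)];
# target: the mean participant rating for that signal.
X = np.array([
    [0.5, 0.0, 0.5],
    [2.0, 0.0, 0.5],
    [0.5, 0.6, 0.8],
    [4.0, 0.6, 0.2],
    [1.0, 0.3, 0.5],
])
y = np.array([4.1, 3.2, 4.5, 2.0, 3.8])  # e.g., ratings on a 5-point scale

# Fit weights with least squares, adding a constant column for the intercept.
A = np.hstack([X, np.ones((X.shape[0], 1))])
weights, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_rating(frequency, hue, duty_cycle):
    """Predict the expected rating for a candidate light signal."""
    return np.dot([frequency, hue, duty_cycle, 1.0], weights)

print(predict_rating(1.5, 0.3, 0.5))
```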

Gesture Recognition of Children's Bullying Behaviors
Recent work has shown that robots in human spaces may face interaction challenges that can escalate to social or physical harm to the robot or to co-located humans. A robot capable of recognizing such interactions can 1) avoid or exit situations that threaten its physical operation and integration into human environments, and 2) provide assistance to people experiencing intimidation or other forms of social bullying. We designed an anti-bullying robot that explains why certain behaviors are inappropriate and used it in a study focused on recognizing signs of bullying interactions from child users. This enabled us to collect a dataset of natural bullying behaviors and to compare various machine learning classifiers for detecting bullying behavior.
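
The sketch below illustrates one way such a classifier comparison could be set up using cross-validation in scikit-learn. The features and labels here are randomly generated stand-ins, not the collected child-interaction dataset, and the specific classifiers are examples rather than the ones used in the study.

```python
# Illustrative sketch: comparing classifiers for detecting bullying-like
# interactions from pose-derived features (synthetic stand-in data).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# Stand-in feature windows (e.g., joint velocities, interagent distance stats)
# with binary labels: 1 = bullying-like interaction, 0 = benign interaction.
X = rng.normal(size=(200, 12))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=200) > 0).astype(int)

classifiers = {
    "SVM (RBF)": SVC(kernel="rbf"),
    "Random forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
}

# Report mean and spread of 5-fold cross-validation accuracy per classifier.
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: {scores.mean():.2f} +/- {scores.std():.2f}")
```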

Stochastic Gesture Generation
Gesturing is an important tool in human interaction. Gestures can convey certain meanings, such as size or direction, more easily than words, and beat gestures, which primarily help the speaker organize their thoughts rather than communicate content, give conversations an organic and human feel. Gesturing in human-robot interaction (HRI) is a relatively unexplored area, yet it has notable potential to improve the quality of communication by presenting information in a way humans are more familiar with, increasing comprehension and human acceptance of the robot’s words. A gesture learning algorithm has been created that learns a general model of a gesture from only a few instances of training data. From this model, novel gestures can be stochastically generated that appear visually similar to the training data. Unlike most gesture strategies employed on robotic platforms, which typically select from a discrete set of pre-generated gestures, this algorithm can generate an effectively unlimited variety of unique gestures without repetition.
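
A minimal sketch of this idea, assuming a few time-aligned joint-angle demonstrations of a gesture: learn a per-timestep mean and variance across the demonstrations, then sample smoothed variations around the mean so each generated trajectory is novel yet visually similar to the examples. This is an illustration of the stochastic-generation concept, not the actual learning algorithm.

```python
# Illustrative sketch: per-timestep Gaussian model of a gesture learned from a
# few demonstrations, used to sample novel but similar trajectories.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training data: 3 demonstrations of one gesture, each a
# (timesteps x joints) array of joint angles, already time-aligned.
T, J = 50, 4
base = np.sin(np.linspace(0, np.pi, T))[:, None] * np.ones((1, J))
demos = np.stack([base + rng.normal(scale=0.05, size=(T, J)) for _ in range(3)])

# Model: per-timestep mean and standard deviation across demonstrations.
mean = demos.mean(axis=0)          # (T, J)
std = demos.std(axis=0) + 1e-3     # (T, J), small floor to avoid degenerate sampling

def sample_gesture(smoothing=5):
    """Sample a novel trajectory around the learned mean, smoothed over time."""
    noise = rng.normal(size=(T, J)) * std
    kernel = np.ones(smoothing) / smoothing
    noise = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, noise)
    return mean + noise

new_gesture = sample_gesture()
print(new_gesture.shape)  # (50, 4): a new joint-angle trajectory for playback
```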