Imitation Learning in HRI

We are pursuing a research program focused on developing a model of learning by imitation. As one of the most powerful yet poorly understood forms of learning in nature, imitation presents an important research problem in AI and machine learning, as well as in the behavior and neural sciences.

Imitation involves the interaction of perception, memory, and motor control, subsystems that typically use very different representations yet must cooperate to produce and learn novel behavior patterns. Gaining insight into the mechanisms of imitation is compelling from the standpoint of AI and the behavioral sciences: the propensity for it appears to be innate and the mechanism is phylogenetically old, yet its true and complete form (capable of acquiring arbitrary novel skills from observation) is very rare in nature. From a practical standpoint, imitation even in its simple forms is a faster and more efficient way of acquiring new behaviors than its classical conditioning and reinforcement learning counterparts; in humans it is critical during development and remains an important aspect of social interaction and adaptation throughout life.

Our work on imitation is driven by two main motivations:

  1. To gain scientific insight into the mechanisms of imitation, a problem shared by AI and the behavioral and neural sciences.
  2. To exploit imitation as a fast, efficient means of endowing robots and other artificial agents with new skills.

We are addressing both by developing a model of imitation and evaluating it on different test-beds and learning tasks. In developing the model, we are constrained by cognitive science and neuroscience data, and we perform psychophysical experiments to gain further insight into human perceptual and motor behavior in imitation. Our model rests on three ideas: a strong link between perception and action (such as that found in mirror neurons); motor primitives for movement (such as spinal vector fields and central pattern generators, as well as cortical structures) that can also facilitate movement recognition; and imitative abilities as a basis for communication and higher-level cognition.
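
To make the perception-action link concrete, here is a minimal Python sketch of a perceptual-motor primitive that uses a single internal representation both to generate movement and to recognize it in observed data, with classification-based recognition over a set of primitives. The class name, its prototype-trajectory representation, and the matching score are illustrative assumptions, not the model itself.

    import numpy as np

    class PerceptualMotorPrimitive:
        """Hypothetical primitive sharing one representation for
        generation and recognition (mirror-neuron style). Here the
        representation is simply a prototype joint-angle trajectory
        of shape (T, n_joints); a real primitive would be richer."""

        def __init__(self, name, prototype):
            self.name = name
            self.prototype = np.asarray(prototype, dtype=float)

        def generate(self):
            # Motor role: produce the movement this primitive encodes.
            return self.prototype.copy()

        def match(self, observed):
            # Perceptual role: score an observed trajectory against the
            # same internal representation used for generation.
            obs = np.asarray(observed, dtype=float)
            T = min(len(obs), len(self.prototype))
            return -np.mean((obs[:T] - self.prototype[:T]) ** 2)

    def classify(observed, primitives):
        """Classification-based recognition: the best-matching
        primitive labels the observed movement."""
        return max(primitives, key=lambda p: p.match(observed))

A demonstrated movement can then be reproduced by calling generate() on the primitive that classify() selects, so the same structures serve both perception and action.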

We have conducted several psychophysical experiments to study movement perception and reconstruction in imitation. Using those data, together with current theories from neuroscience and cognitive science, we are developing a model of learning by imitation based on the notions of perceptual-motor primitives (related to basis behaviors), mirror neurons, and classification-based learning. We are experimenting with innate primitives, as well as with bottom-up methods for automatically extracting primitives from movement data. We evaluate all models on complex test-beds: two humanoid robots, several 3D humanoid physical simulations, Sony Aibo robot dogs, and ActivMedia Pioneers. For a summary of these test-beds, look here.
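
As one illustration of such a bottom-up method, the sketch below segments joint-angle trajectories at low-velocity pauses and clusters the time-normalized segments with k-means, keeping the cluster means as candidate primitives. The segmentation heuristic, thresholds, and clustering choice are assumptions for illustration (and the sketch assumes at least n_primitives segments); they are not the specific methods used in our experiments.

    import numpy as np

    def segment_at_pauses(angles, dt=0.01, vel_thresh=0.05, min_len=10):
        """Cut a joint-angle trajectory (T, n_joints) into movement
        segments at low-velocity points, a common bottom-up heuristic."""
        vel = np.linalg.norm(np.diff(angles, axis=0), axis=1) / dt
        moving = vel > vel_thresh
        segments, start = [], None
        for t, m in enumerate(moving):
            if m and start is None:
                start = t
            elif not m and start is not None:
                if t - start >= min_len:
                    segments.append(angles[start:t + 1])
                start = None
        if start is not None and len(angles) - start >= min_len:
            segments.append(angles[start:])
        return segments

    def derive_primitives(segments, n_primitives=5, n_samples=20):
        """Resample every segment to a fixed length, then cluster;
        the cluster means serve as candidate primitives."""
        feats = []
        for seg in segments:
            idx = np.linspace(0, len(seg) - 1, n_samples).astype(int)
            feats.append(seg[idx].ravel())
        feats = np.array(feats)
        # Plain k-means (sklearn.cluster.KMeans would do equally well).
        rng = np.random.default_rng(0)
        centers = feats[rng.choice(len(feats), n_primitives, replace=False)]
        for _ in range(50):
            labels = np.argmin(
                ((feats[:, None] - centers[None]) ** 2).sum(-1), axis=1)
            for k in range(n_primitives):
                if np.any(labels == k):
                    centers[k] = feats[labels == k].mean(axis=0)
        return centers.reshape(n_primitives, n_samples, -1)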

Research Goals

Our immediate research goals are:

  1. To keep refining our biologically-inspired model of learning by imitation, using neuroscience evidence of phylogenetically old structures such as mirror neurons and motor primitives as philosophical keystones and constraints.
  2. To explore different types of primitives, focusing on oscillatory, discrete, and postural primitives (illustrated in the sketch after this list).
  3. To gather and use a large corpus of human movement data for automatically deriving primitives.
  4. To validate the model in different implementations of various complex motor tasks, including dance patterns and object manipulation, and to test it on and across different test-beds.
  5. To extend the learning system to include hierarchical learning of arbitrarily complex tasks.
  6. To develop methods for evaluation of imitation performance.
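
The sketch below illustrates the distinction among the primitive types in goal 2 with two toy dynamical systems: a discrete primitive as a point attractor converging to a goal posture, and an oscillatory primitive as a phase oscillator producing a rhythmic joint trajectory (a postural primitive can be read as the degenerate discrete case of holding the goal configuration). The parameters and functional forms are illustrative assumptions only.

    import numpy as np

    def discrete_primitive(x0, goal, tau=0.5, dt=0.01, T=1.0):
        """Discrete (reaching-style) primitive: a point attractor
        x' = (goal - x)/tau that converges to a target posture."""
        xs, x = [float(x0)], float(x0)
        for _ in range(int(T / dt)):
            x += dt * (goal - x) / tau
            xs.append(x)
        return np.array(xs)

    def oscillatory_primitive(amp=1.0, freq=1.0, dt=0.01, T=2.0):
        """Rhythmic (CPG-style) primitive: a phase oscillator driving
        a sinusoidal joint trajectory, as in gait or dance patterns."""
        phase, xs = 0.0, []
        for _ in range(int(T / dt)):
            phase += dt * 2 * np.pi * freq
            xs.append(amp * np.sin(phase))
        return np.array(xs)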

We have a (somewhat old) poster demonstrating the project goals:

Poster w/ goals

Current Projects

Past Projects

Imitation-Related Results

More details regarding the experimental test-beds for our imitation work are available here.

People
Alumni
Support

This work has been supported by DARPA Grant 5-39509-A under the Mobile Autonomous Robot Software (MARS-2020) program, National Science Foundation Grant No. 9896322, and DARPA Grant No. DABT63-99-1-0015.