Extracting Motion Primitives from Movement Data


Introduction

Rather than assuming that the trajectory of a limb is explicitly planned anew for each movement, we are inspired by theories that posit that human beings tend to choose from a limited but possibly large repertoire of movement primitives. Our inspiration for primitives-based motor control comes from experiments on frogs and rats, in which complete movements (such as reaching and wiping) could be produced by stimulating different regions of the spinal cord of spinalized frogs with an electrode. These movements are considered to be primitives for two reasons:

A critical open question is what primitives should be used.

Our approach is to process visual data to derive the primitives involved and to build controllers for them. Searching for correlations or patterns in visual data is a complex, high-dimensional task. To reduce the space we work in, we first segment the data. We then use PCA to obtain a set of eigenvectors as descriptors of the movement. We are currently implementing a controller for this scheme.

Methods

We use real 3-D human motor data in our analysis. The data are transformed into intrinsic joint space. A phase plot (the trajectory followed by the system's variables) for a 2-DOF movement is shown below. The '+' marks indicate zero-velocity crossings (ZVCs) for all degrees of freedom.
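As an illustration of this segmentation step, the following is a minimal sketch in Python, assuming joint angles sampled at a fixed rate; the function name, sampling interval, and velocity threshold are illustrative choices rather than the lab's actual code.

    import numpy as np

    def segment_at_zvcs(angles, dt=0.01, vel_threshold=0.05):
        """Split a joint-angle trajectory into segments delimited by points
        where the velocities of all degrees of freedom are simultaneously
        near zero (zero-velocity crossings).

        angles: (T, D) array of joint angles sampled every dt seconds.
        Returns a list of (start, end) index pairs, one per segment.
        """
        # Finite-difference estimate of joint velocities, shape (T-1, D).
        velocities = np.diff(angles, axis=0) / dt

        # A ZVC is a sample at which every degree of freedom is slow.
        zvc_mask = np.all(np.abs(velocities) < vel_threshold, axis=1)
        zvc_indices = np.flatnonzero(zvc_mask)

        # Consecutive ZVCs delimit movement segments; drop degenerate ones.
        return [(s, e) for s, e in zip(zvc_indices[:-1], zvc_indices[1:])
                if e - s > 1]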

To derive the primitives, the joint-space data are segmented and converted into a set of vectors. These vectors are analyzed using PCA to obtain a set of principal components, or eigenvectors, which we call eigenmovements because they can serve as building blocks for any generic movement. Given the projections of a segment onto the eigenmovements, we can reconstruct the original movement.
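A minimal sketch of this analysis, assuming each segment has been resampled to a common length and flattened into a vector; the function names and the number of retained components k are hypothetical choices, not the procedure's actual parameters.

    import numpy as np

    def eigenmovements(segment_vectors, k=5):
        """Compute k principal components ("eigenmovements") from a set of
        segment vectors, one flattened movement segment per row."""
        mean = segment_vectors.mean(axis=0)
        centered = segment_vectors - mean
        # The right singular vectors of the centered data are the
        # principal directions (eigenvectors of the covariance matrix).
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        return mean, vt[:k]

    def reconstruct(segment, mean, components):
        """Project a segment onto the eigenmovements and rebuild an
        approximation of it from the resulting coefficients."""
        coeffs = components @ (segment - mean)   # projections onto eigenmovements
        return mean + components.T @ coeffs      # approximate reconstruction

How faithful the reconstruction is depends on how many eigenmovements are retained.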

To make the approach practical, we need to build controllers for the humanoid. We are currently building controllers for the individual primitives and a system that allows them to cooperate.

The following is a schematic diagram of the control architecture.

The desired position is compared with the actual state to obtain an error signal. This error signal is projected onto the primitives to obtain a measure of the activation of each eigenmovement, or primitive. Each Primitive Controller then generates a suitable torque. The torques must be modified based on the current position of the arm; this is done by passing the torque signal through the Position Correction module. The resulting torques are then sent to the humanoid.
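The sketch below traces one cycle of this data flow under simplifying assumptions: each primitive controller is reduced to a proportional gain on its activation, and the position-correction step is left as a user-supplied function; the names and signatures are illustrative, not taken from the lab's implementation.

    import numpy as np

    def control_step(desired, actual, primitives, gains, position_correction):
        """One cycle of the primitive-based control loop described above.

        desired, actual:     (D,) joint-space position vectors.
        primitives:          (K, D) matrix whose rows are the eigenmovements.
        gains:               (K,) per-primitive controller gains.
        position_correction: function mapping (torque, actual) -> corrected torque.
        Returns the (D,) torque vector sent to the humanoid.
        """
        # Compare the desired position with the actual state to get an error signal.
        error = desired - actual

        # Project the error onto the primitives: one activation per eigenmovement.
        activations = primitives @ error

        # Each primitive controller contributes a torque along its eigenmovement.
        torque = (gains * activations) @ primitives

        # Modify the torque based on the arm's current configuration.
        return position_correction(torque, actual)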

In summary, the goals of this project are:

  1. To classify natural human movements into a few primitives which can then be parameterized,
  2. To represent movement data in terms of parameterized primitives,
  3. To invert the above process to reproduce the movements from a sparse representation of human movement.

This project is part of a larger imitation learning research program we are pursuing, which is described in more detail in our related publications.

Relevant Publications
Support

This work is supported in part by DARPA Grant DABT63-99-1-0015 under the Mobile Autonomous Robot Software (MARS) program, and in part by the National Science Foundation under Grant No. 9896322 and an Infrastructure Grant. The data used for this work were gathered by M. Mataric and M. Pomplun in a joint interdisciplinary project conducted at the National Institutes of Health Resource for Study of Neural Models of Behavior at the University of Rochester. The humanoid simulator was obtained from Jessica Hodgins.

Contact Information

For more information, contact Maja Matarić.