Derivation of Motion Primitives with Spatio-Temporal Dimension Reduction


Introduction

Control of and interaction with humanoid robots are often restricted by limitations of the robot platform and by the high dimensionality of controlling systems with many degrees of freedom. We focus on the problem of providing a skill-level interface for a humanoid robot. Such an interface serves as:

  1. A modular foundation for structuring task-oriented control.
  2. A parsimonious abstraction of motor-level control (e.g., PD-servo control).
  3. A means for grounding interactions between humans and robots through common skill vocabularies.

Our approach to constructing skill-level interfaces is two-fold. First, we propose a representation for a skill-level interface as a behavior vocabulary, a repertoire of modular exemplar-based memory models expressing kinematic motion. A module in such a vocabulary encodes a flowfield in joint angle space that describes the flow of kinematic motion for a particular skill-level behavior, enabling prediction from a given kinematic configuration. Second, we propose a data-driven method for deriving behavior vocabularies from time-series data of human motion using spatio-temporal dimension reduction and clustering. Results from evaluating an implementation of our methodology are presented along with the application of derived behavior vocabularies as predictors for on-line humanoid robot control and on-line motion synthesis.
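To make the flowfield idea concrete, the sketch below shows one minimal way such an exemplar-based behavior module could be realized: exemplar poses and their observed displacements are stored, and a query pose is advanced along the average flow of its nearest exemplars. The class name, the k-nearest-neighbor lookup, and the averaging scheme are illustrative assumptions, not the implementation used in this work.

```python
import numpy as np


class FlowfieldBehavior:
    """Exemplar-based flowfield over joint-angle space (illustrative sketch).

    Exemplars are (pose, displacement) pairs sampled from example motion;
    prediction averages the displacements of the k nearest exemplar poses.
    """

    def __init__(self, example_trajectories, k=5):
        poses, flows = [], []
        for traj in example_trajectories:          # traj: (T, num_joints) array
            traj = np.asarray(traj, dtype=float)
            poses.append(traj[:-1])                # pose at time t
            flows.append(np.diff(traj, axis=0))    # displacement to time t+1
        self.poses = np.vstack(poses)
        self.flows = np.vstack(flows)
        self.k = k

    def predict(self, pose):
        """Predict the next joint-angle configuration from the current one."""
        pose = np.asarray(pose, dtype=float)
        dists = np.linalg.norm(self.poses - pose, axis=1)
        nearest = np.argsort(dists)[: self.k]
        return pose + self.flows[nearest].mean(axis=0)
```

Under this view, on-line prediction for robot control amounts to repeatedly querying predict() from the robot's current configuration and servoing (e.g., with PD control) toward the returned pose.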

Demonstration of the Methodology

Test Motion Streams

This motion data was graciously provided by Jessica Hodgins. It is provided for viewing purposes only; do not redistribute it without her permission.


Motion Stream Segmentation
Motion streams are segmented using Kinematic Centroid Segmentation to provide a dataset of motion segments. Examples of the segmentation for:

Arm waving
Punching

The segmentation method treats each arm as a pendulum attached at the shoulder. The end of the pendulum is shown by the red cube. Adjacent segments are shown in different colors and the intensity of the color indicates the time component of the segment.
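A simplified sketch of this segmentation idea is given below, under the assumption that a segment boundary is placed where the limb centroid's swing away from the segment's start position peaks and reverses. The function name, thresholds, and this exact cutting rule are illustrative assumptions, not the published algorithm verbatim.

```python
import numpy as np


def kinematic_centroid_segmentation(limb_positions, min_length=10, tolerance=1e-3):
    """Split a motion stream into segments based on limb-centroid swings.

    limb_positions: (T, J, 3) array of Cartesian joint positions for one limb
    (e.g., shoulder, elbow, wrist, hand) over T frames. Returns a list of
    (start, end) frame indices. A new boundary is placed where the centroid's
    distance from the segment start stops increasing, i.e., at the far point
    of each pendulum-like swing.
    """
    centroids = np.asarray(limb_positions, dtype=float).mean(axis=1)  # (T, 3)
    boundaries = [0]
    start = 0
    t = 1
    T = len(centroids)
    while t < T - 1:
        d_now = np.linalg.norm(centroids[t] - centroids[start])
        d_next = np.linalg.norm(centroids[t + 1] - centroids[start])
        # Cut when the swing away from the segment start reverses direction.
        if t - start >= min_length and d_next < d_now - tolerance:
            boundaries.append(t)
            start = t
        t += 1
    boundaries.append(T - 1)
    return list(zip(boundaries[:-1], boundaries[1:]))
```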


Behaviors Derived from Stream 1
By applying spatio-temporal dimension reduction and clustering, motion segments are grouped into features. Each feature describes a primitive behavior. A further spatio-temporal dimension reduction and clustering is used to group primitive behaviors into meta-level behaviors that describe sequencing of the primitives.
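As a stand-in for the spatio-temporal embedding used in this work, the sketch below time-normalizes each motion segment, embeds the flattened segments with standard Isomap, and groups them with k-means; each resulting cluster corresponds to a candidate primitive. The resampling length, embedding dimension, and cluster count are illustrative assumptions.

```python
import numpy as np
from sklearn.manifold import Isomap
from sklearn.cluster import KMeans


def derive_primitives(segments, resample_len=20, embed_dim=3, num_clusters=8):
    """Group motion segments into candidate primitive behaviors.

    segments: list of (T_i, num_joints) joint-angle arrays of varying length.
    Each segment is time-normalized to resample_len frames, flattened, embedded
    into a low-dimensional space, and clustered. Standard Isomap is used here
    as a stand-in for the spatio-temporal extension described in the papers.
    """
    resampled = []
    for seg in segments:
        seg = np.asarray(seg, dtype=float)
        src = np.linspace(0.0, 1.0, len(seg))
        dst = np.linspace(0.0, 1.0, resample_len)
        cols = [np.interp(dst, src, seg[:, j]) for j in range(seg.shape[1])]
        resampled.append(np.stack(cols, axis=1).ravel())
    X = np.vstack(resampled)

    embedding = Isomap(n_components=embed_dim).fit_transform(X)
    labels = KMeans(n_clusters=num_clusters, n_init=10).fit_predict(embedding)
    return labels, embedding
```

Grouping primitives into meta-level behaviors can be viewed as the same procedure applied one level up, over the sequencing of primitive labels rather than raw joint-angle segments.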

Meta-level Behaviors and their Component Primitive Behaviors:

Punching:
  Primitive 33: enter/leave fighting posture
  Primitive 39: punching
  Primitive 17: punch followthrough
  Primitive 33: enter/leave fighting posture

Dancing "The Twist":
  Primitive 15: twist left
  Primitive 16: twist right
  Primitive 15: twist left

Arm Waving:
  Primitive 6: extend arm (vertical)
  Primitive 11: return arm (horizontal/vertical)
  Primitive 11: return arm (horizontal/vertical)
  Primitive 8: extend arm (horizontal)

Arm waving
Punching
Dancing "Cabbage Patch"
Using the sequencing described by a meta-level behavior and interpolation on each of its primitives, new motion can be synthesized that is structurally reflective of the performed behavior.
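A minimal sketch of this synthesis step, assuming each primitive is stored as a set of time-normalized exemplar segments: a new instance of a primitive is produced by blending its exemplars with random convex weights, and a meta-level behavior is realized by concatenating instances of its component primitives in the prescribed order (e.g., primitives 33, 39, 17, 33 for punching, per the table above). Function names and the blending scheme are illustrative assumptions.

```python
import numpy as np


def interpolate_primitive(exemplars, rng=None):
    """Blend a primitive's exemplar segments (all resampled to the same length,
    shape (T, num_joints)) with random convex weights to get a new variation."""
    rng = np.random.default_rng() if rng is None else rng
    exemplars = np.asarray(exemplars, dtype=float)         # (N, T, num_joints)
    weights = rng.random(len(exemplars))
    weights /= weights.sum()
    return np.tensordot(weights, exemplars, axes=1)        # (T, num_joints)


def synthesize(meta_sequence, primitive_exemplars):
    """Concatenate interpolated instances of each primitive in the meta-level
    behavior's sequence, e.g. meta_sequence = [33, 39, 17, 33] for punching."""
    pieces = [interpolate_primitive(primitive_exemplars[p]) for p in meta_sequence]
    return np.vstack(pieces)
```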


Synthesized Motion
Actuation of the Adonis dynamical humanoid simulation using synthesized punching motion.

Papers

Odest Chadwicke Jenkins and Maja J Matarić, "Automated Derivation of Behavior Vocabularies for Autonomous Humanoid Motion", To appear in the Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems (Agents 2003), Melbourne, Australia, July, 2003. [PDF]

Odest Chadwicke Jenkins and Maja J Matarić, "Deriving Action and Behavior Primitives from Human Motion Data", In the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2002), pages 2551-2556, Lausanne, Switzerland, 2002. [PDF]

Funding Details

This research was partially supported by the DARPA MARS Program grant DABT63-99-1-0015 and ONR MURI grant N00014-01-1-0890. The motion data was graciously provided by Jessica Hodgins and her motion capture staff.

Contact Details

Chad Jenkins