Action-Embedded Framework for Human-Robot Interaction and Learning from Demonstration


Introduction

The goal of this project is to develop an action-based framework for interaction between humans and robots that facilitates the transfer of knowledge (from humans to robots and between robots themselves) and communication strategies that extend those typically used in the mobile-robotics domain. We assume that the robots are equipped with a set of basic skills (in the form of behaviors) that constitute the basis for subsequent task learning and communication.

Our methods for human-robot interaction focus on learning hierarchical representations of complex tasks from a small number of demonstrations experienced by the robot, with minimal or no feedback from a human demonstrator. The approach relies on a hierarchical abstract behavior architecture, an extension of robust and flexible behavior-based systems (BBS), which allows for the representation, learning, and execution of complex, hierarchically structured tasks. We have developed an on-line algorithm that constructs hierarchical behavior-network representations of robot tasks from the demonstrations a robot experiences (videos below) with a human or another robot teacher.
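
To make the idea concrete, the sketch below shows in simplified form how such a behavior network could be assembled from an observed demonstration and then executed. It is only an illustration under assumed interfaces: the Behavior class, the set-valued pre- and postconditions, and the linking rule are placeholders invented for the example, not the actual architecture.

    # Minimal sketch (assumed interfaces, not the actual architecture):
    # behaviors expose pre/postconditions over world predicates, and a
    # demonstration is a sequence of observed world states.

    class Behavior:
        def __init__(self, name, preconditions, effects):
            self.name = name
            self.preconditions = set(preconditions)
            self.effects = set(effects)
            self.predecessors = []   # precedence links learned from the demonstration

        def ready(self, world, done):
            # Activatable once its own preconditions hold in the sensed world
            # and all linked predecessor behaviors have already run.
            return self.preconditions <= world and all(p in done for p in self.predecessors)

    def build_network(behaviors, demonstration):
        """Map observed world states onto the robot's own behaviors and
        chain the matched behaviors with precedence links."""
        network, achieved = [], set()
        for state in demonstration:
            for b in behaviors:
                if b in network:
                    continue
                # Infer that a behavior ran when its effects become true.
                if b.effects <= state and not b.effects <= achieved:
                    b.predecessors = list(network)   # everything seen so far precedes it
                    network.append(b)
                    achieved |= b.effects
        return network

    def execute(network, sense_world, activate):
        """Reproduce the task by activating behaviors as their links are satisfied."""
        done = set()
        while len(done) < len(network):
            world = sense_world()
            for b in network:
                if b not in done and b.ready(world, done):
                    activate(b)
                    done.add(b)

The actual architecture is richer than this (the behaviors are abstract and the learned networks are hierarchical), but the observe-match-link pattern above conveys the general idea.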

Our methods for robot-human interaction focus on developing an action-based communication framework (videos below) in which a robot conveys its intentions by suggesting them through actions, rather than communicating them through conventional signs, sounds, gestures, or marks with previously agreed-upon meanings. We treat the mobile robot's actions as a vocabulary it can use to induce a human to assist it with the parts of a task that it cannot perform on its own.
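
As a rough illustration of what such an action-based vocabulary might look like in control code, the sketch below has the robot fall back on expressive actions when a task step fails. The robot interface (execute, approach, back_off, human_nearby) and the retry policy are hypothetical placeholders, not the actual framework.

    import time

    # Minimal sketch with a hypothetical robot interface: when a step cannot
    # be completed alone, the robot "asks" for help through its actions.

    def convey_intention(robot, step, attempts=3):
        """Suggest the blocked intention through action, e.g. repeatedly
        approaching the blocked gate and backing away so the failure is
        visible, instead of using pre-agreed signs or sounds."""
        for _ in range(attempts):
            robot.approach(step.goal)
            robot.back_off()
            if robot.human_nearby():
                return True
        return False

    def run_task(robot, task, max_retries=3):
        for step in task:
            for _ in range(max_retries):
                robot.execute(step)
                if robot.step_completed(step):
                    break                  # succeeded, alone or with help
                # Could not do it alone: use actions as a vocabulary to
                # solicit assistance, then give the person time to intervene.
                if convey_intention(robot, step):
                    time.sleep(5.0)
            else:
                raise RuntimeError("step could not be completed, even with help")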

Learning from Experienced Demonstrations

Each experimental setup, its description, and the corresponding videos are listed below.

Learning to visit a number of targets in a certain order
  • Human demonstration [.AVI]
  • The robot reproducing the learned task [.AVI]
Learning to traverse "gates" and move objects from a source place to a destination
  • Human demonstration [.AVI]
  • The robot reproducing the learned task [.AVI]
  • Human demonstration [.AVI]
  • The robot reproducing the learned task [.AVI]
Learning to slalom
  • Human demonstration [.AVI]
  • The robot reproducing the learned task [.AVI]

Action-Based Communication Framework

The experimental setup and videos are listed below.

  • Traversing blocked gates with a person's help [.AVI]
  • Moving objects from inaccessible locations with a person's help
    • Interacting with an uninterested person [.AVI]
    • Interacting with an unhelpful person [.AVI]
    • Interacting with a helpful person [.AVI]

Publications
Support

This work is supported by DARPA Grant DABT63-99-1-0015 under the Mobile Autonomous Robot Software (MARS) program, by National Science Foundation Grant No. 9896322, and by an ONR Defense University Research Instrumentation Program grant.

Contact

Monica Nicolescu
Maja Mataric