Project Goals: The goal of this project is to develop a socially assistive, non-contact infant-robot interaction system that provides contingent positive feedback to increase exploration and expand early movement practice. This research has the potential to advance knowledge about which aspects of their movement infants can adjust and how to guide that movement most effectively to enhance neuromotor control.

Project Status: We determined that infants were able to learn that their leg movements activated the robot (a contingent learning task). Building on those foundational data, we then determined that infants adjusted both how much they moved and how they moved as they learned the contingency. This addresses a key research question: do infants merely move more to activate the robot, or do they also change how they move? We also used an additional tool, head-mounted eye tracking, to determine that infants demonstrate predictive gaze when they learn that the robot reward is contingent on their behavior. This matters because the existing criterion for an infant learning the task is only that the infant moved more, which risks misclassification: increased movement can reflect factors other than learning the contingency (e.g., general excitement or fussiness). We have now shown that infants who learn the task do show predictive gaze, i.e., visual anticipation of the robot reward rather than a reactive visual response to it. We continue to work toward adaptive, personalized models of novel robot behavior to influence infant gaze and movement during closed-loop interaction with the robot.
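
To make the contingent feedback loop concrete, the following is a minimal sketch of how such a trigger could be wired up, assuming leg movement arrives as a stream of movement-magnitude samples and the robot exposes an activation callback; run_contingency_loop, the threshold, and the refractory period are all illustrative assumptions, not the project's actual implementation.

    import time
    from typing import Callable, Iterable

    def run_contingency_loop(
        movement_samples: Iterable[float],   # hypothetical stream of leg-movement magnitudes
        activate_robot: Callable[[], None],  # hypothetical callback that triggers the robot reward
        threshold: float = 1.5,              # assumed magnitude that counts as a qualifying movement
        refractory_s: float = 2.0,           # assumed minimum gap between consecutive rewards
    ) -> int:
        """Activate the robot whenever a leg movement crosses the threshold.

        The refractory period keeps the reward discrete; returns the number
        of activations delivered.
        """
        activations = 0
        last_reward_time = float("-inf")
        for magnitude in movement_samples:
            now = time.monotonic()
            if magnitude >= threshold and (now - last_reward_time) >= refractory_s:
                activate_robot()             # reward is contingent on the infant's own movement
                last_reward_time = now
                activations += 1
        return activations

Because the reward fires only when the infant's own movement crosses the threshold, an infant who has learned the contingency can drive the robot at will, which is what makes increased (and restructured) movement interpretable as learning.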
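
The predictive-versus-reactive gaze distinction can likewise be illustrated with a simple timing rule: a look that lands on the robot before the reward begins, or sooner after onset than a stimulus-driven look plausibly could, counts as anticipatory, while a later look counts as a reaction. The 200 ms latency bound, the analysis window, and all names below are assumptions for illustration, not the study's actual scoring procedure.

    from dataclasses import dataclass
    from typing import List, Optional

    REACTIVE_LATENCY_S = 0.2    # assumed minimum latency of a stimulus-driven look

    @dataclass
    class Look:
        onset_s: float          # time the infant's gaze lands on the robot

    def classify_look(robot_onset_s: float, looks: List[Look],
                      window_s: float = 2.0) -> Optional[str]:
        """Score the first look near a robot activation as predictive or reactive.

        Returns None when no look falls inside the analysis window.
        """
        near = [lk for lk in looks
                if robot_onset_s - window_s <= lk.onset_s <= robot_onset_s + window_s]
        if not near:
            return None
        first = min(near, key=lambda lk: lk.onset_s)
        # A look that begins before the reward could have driven it is anticipatory.
        if first.onset_s < robot_onset_s + REACTIVE_LATENCY_S:
            return "predictive"
        return "reactive"

Scoring looks this way separates infants who anticipate the reward from those who merely react to it, which is exactly the misclassification risk the eye-tracking measure is meant to address.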