This project is part of our National Science Foundation Expedition in Computing for Socially Assistive Robotics (NSF IIS-1139148). Human teaching is a multimodal and interactive process: teachers weigh verbal and nonverbal cues alongside performance signals to assess a student's learning progress and offer quality feedback. Personalized and interactive education has been shown to improve learning, especially for students with special needs. However, most robotic tutoring systems follow highly structured feedback models and do not exploit the full range of available multimodal features.

The purpose of this research is to model engagement and help-seeking behaviors in children with Autism Spectrum Disorder (ASD) so that a robot tutor can offer preemptive support and feedback. Specifically, we are developing machine learning models that draw on visual, audio, and performance features collected from more than 15 month-long in-home deployments conducted through the Expedition Project. These models will help us determine how much data, and of what kind, is necessary for long-term adaptation, and assess the importance of personalization. We also intend to incorporate our multimodal model into a closed-loop, socially assistive robotic system capable of detecting disengagement and responding to an individual's help-seeking behaviors.
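To make the modeling step concrete, the sketch below shows one common way such a multimodal model might be structured: per-window visual, audio, and performance features are concatenated (early fusion) and passed to a standard classifier that predicts engagement. This is a minimal illustration on synthetic data; the specific feature names, window length, label rule, and choice of classifier are all assumptions for exposition, not the project's actual features or pipeline.

    # Minimal early-fusion sketch (illustrative only; all features and
    # labels below are synthetic assumptions, not the project's data).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_windows = 200  # hypothetical fixed-length interaction windows

    # Visual features, e.g., fraction of window with gaze on the task,
    # head-pose variance (both placeholders).
    visual = rng.uniform(0.0, 1.0, size=(n_windows, 2))
    # Audio features, e.g., child speech ratio, mean vocal energy.
    audio = rng.uniform(0.0, 1.0, size=(n_windows, 2))
    # Performance features, e.g., answer correctness (0/1) and
    # response latency in seconds.
    performance = np.column_stack([
        rng.integers(0, 2, size=n_windows),
        rng.uniform(0.5, 20.0, size=n_windows),
    ])

    # Early fusion: concatenate all modalities into one feature
    # vector per window.
    X = np.hstack([visual, audio, performance])
    # Hypothetical binary label: 1 = engaged, 0 = disengaged,
    # generated here only so the example runs end to end.
    y = (visual[:, 0] + audio[:, 0]
         - performance[:, 1] / 20.0 > 0.8).astype(int)

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"mean CV accuracy: {scores.mean():.2f}")

A closed-loop system would apply such a classifier to each incoming window and trigger robot feedback when the predicted engagement drops; a personalized variant would additionally fit or fine-tune the model per child, which is one way the amount and type of data needed for long-term adaptation could be measured.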