We are exploring how socially assistive robotic (SAR) systems can aid in the social and cognitive development of children, particularly those with autism spectrum disorder (ASD). Some children with ASD struggle with empathetic behaviors, which often comprise taking the perspective of others, identifying and distinguishing emotions, and sharing an affective response. The present work focuses on using a SAR system to teach and encourage perspective taking, drawing on an individual's multimodal verbal and nonverbal cues to inform robot behavior through a Q-learning reinforcement model that maintains engagement and promotes empathetic behaviors. The aim is for the SAR system to adapt and personalize over time to the individual needs of each child.
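To make the learning mechanism concrete, the following is a minimal sketch of a tabular Q-learning loop of the kind described above. The state and action definitions here are illustrative assumptions, not our deployed model: we assume a state derived from the current difficulty level and an engagement estimate, and actions that pair a feedback type with the next question's difficulty level.

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration rate

# Hypothetical action space: a feedback type paired with a difficulty level (1-5).
ACTIONS = [(fb, lvl) for fb in ("knowledge_of_result", "extended") for lvl in range(1, 6)]

Q = defaultdict(float)  # Q[(state, action)] -> estimated long-term value

def choose_action(state):
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """One-step Q-learning update: Q <- Q + alpha * (r + gamma * max Q' - Q)."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```

In practice the reward would be shaped from signals such as answer correctness, response time, and engagement cues; the exact reward design is beyond the scope of this sketch.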

The system includes a robot, a tablet, and a mini-PC. In this study, the robot serves as a peer that helps the participant learn about perspective taking, an important component of empathy. The robot exhibits emotion through an animated face on its screen, makes gestures with its arms, and provides spoken feedback using Amazon Polly text-to-speech. The tablet allows the participant to answer questions and interact with the system. The mini-PC acts as the “brain” of the system, controlling the flow of data and storing information about each interaction for future analysis.
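As an illustration of the speech pipeline, a minimal sketch of requesting feedback audio from Amazon Polly via boto3 follows. The AWS region, the voice ID, and the file-based handoff to the robot's audio player are our assumptions, not details of the deployed system.

```python
import boto3

polly = boto3.client("polly", region_name="us-east-1")  # region is an assumption

def synthesize(text, out_path="feedback.mp3"):
    """Request speech audio from Amazon Polly and write it to disk for playback."""
    response = polly.synthesize_speech(
        Text=text,
        OutputFormat="mp3",
        VoiceId="Justin",  # a child-like Polly voice; the actual voice used is a guess
    )
    with open(out_path, "wb") as f:
        f.write(response["AudioStream"].read())
    return out_path
```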

The tablet displays a web-based survey interface designed in-house. It shows professionally illustrated images that support the story the robot is telling and follows each story with questions geared toward teaching perspective taking, drawn from five levels of difficulty established by experts in psychology. Once the student answers a question on the tablet, the answer, along with other parameters such as response time, is passed into our computational model, which determines what type of feedback the robot should give the child, either knowledge of result or extended feedback, and which difficulty level to pull the next question from. This model is implemented using Amazon Lex and integrated into the system with the Robot Operating System (ROS). The robot first delivers the feedback, and then we publish the next question to a ROS topic, which the webpage reads to update the tablet display accordingly. The interaction continues in this way until the end of the session, after which the data is stored on the mini-PC and used to improve the model for the next session.
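A minimal rospy sketch of the question-publishing step is shown below. The topic name, the JSON payload, and the use of a plain String message are illustrative assumptions; a web interface would typically subscribe to such a topic through a bridge like rosbridge.

```python
import json
import rospy
from std_msgs.msg import String

rospy.init_node("question_dispatcher")
# Topic name is hypothetical; the tablet's webpage would subscribe to it.
question_pub = rospy.Publisher("/tablet/next_question", String, queue_size=1)

def publish_next_question(question_text, difficulty):
    """Serialize the next question and publish it for the tablet to display."""
    payload = json.dumps({"question": question_text, "difficulty": difficulty})
    question_pub.publish(String(data=payload))
```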

After experimenting with different robot embodiments, we decided to use a QT Robot, as its processing power, face display, and overall size best fit our project needs. In addition, the robot has a built-in NUC and Raspberry Pi that allow for simple interfacing with ROS, as well as a built-in Raspberry Pi camera that lets us track the participant's face during the interaction. This allows us to prompt the user to re-enter the frame if they have left for some reason, ensuring quality data collection and avoiding skewed data in future analysis.
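A minimal sketch of such face-presence monitoring using OpenCV's stock Haar cascade follows; the miss threshold and the re-entry prompt hook are hypothetical, and the actual system may use a different detector.

```python
import cv2

# The frontal-face Haar cascade ships with OpenCV; cv2.data gives its install path.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def face_present(frame):
    """Return True if at least one face is detected in the camera frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces) > 0

def monitor(camera, prompt_fn, misses_allowed=30):
    """Prompt the participant after a sustained run of frames with no face."""
    misses = 0
    while True:
        ok, frame = camera.read()
        if not ok:
            continue
        misses = 0 if face_present(frame) else misses + 1
        if misses > misses_allowed:
            prompt_fn()  # e.g., the robot asks the child to return to view
            misses = 0
```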

We have demonstrated the system at several schools and other venues and look forward to deploying it in a school supporting children with ASD.