To interact more effectively with humans, robots should be able to recognize human affective states and leverage their own actions to influence those states. The ability of socially assistive robots (SAR) to infer a person's mental state and respond appropriately can provide considerable benefits to interaction outcomes in settings such as therapeutic interventions and tutoring sessions. The goal of this research is to produce models both for estimating an individual's affective state during an interaction with a robot and for selecting appropriate robot responses to those states. We aim to build these models using multimodal interaction data, including speech, language, video, and physiological signals. This work lays the foundation for a multimodal, real-time, and adaptive model of affective state estimation and response in human-robot interaction.