We are extending computational models to interpret and understand human activity from multi-modal data in the context of novel human-robot interaction and human-computer interaction applications. Using the latest sensor technologies, features of a user's body movements, speech, and physiological signals can be extracted in real time throughout an interaction and used to classify the user's beliefs, desires, and intentions. Specifically, we are interested in enabling a robot to detect spatially situated social cues, such as transitions into, within, and out of an interaction. We are also developing methods for the robot to provide feedback to a user during exercise or other task-performance sessions based on the user's physiological data, such as heart rate, respiration rate, skin temperature, and accelerometry. Our methods employ a variety of machine learning and signal processing techniques toward robust activity recognition and understanding, supporting improved autonomous decision making by socially situated agents in human-machine interaction contexts.
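As a rough illustration of the kind of pipeline described above (windowed feature extraction over synchronized physiological streams feeding a classifier of user state), the sketch below runs on synthetic data. The stream names, sampling rate, window length, and labels are hypothetical placeholders chosen for the example, not our deployed sensors or models.

# Illustrative sketch only: a toy sliding-window feature extractor and
# classifier over synthetic physiological streams. All names, rates, and
# labels here are assumptions for demonstration, not the group's system.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

SAMPLE_RATE_HZ = 4          # assumed sampling rate for the wearable streams
WINDOW_SEC = 10             # assumed analysis window length

def window_features(signals: dict, start: int, length: int) -> np.ndarray:
    """Summarize each stream over one window with simple statistics."""
    feats = []
    for name in sorted(signals):
        seg = signals[name][start:start + length]
        feats.extend([seg.mean(), seg.std(), seg.max() - seg.min()])
    return np.array(feats)

def extract_windows(signals: dict) -> np.ndarray:
    """Slide a fixed-length, non-overlapping window over synchronized streams."""
    length = SAMPLE_RATE_HZ * WINDOW_SEC
    n = min(len(s) for s in signals.values())
    starts = range(0, n - length + 1, length)
    return np.vstack([window_features(signals, s, length) for s in starts])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_samples = SAMPLE_RATE_HZ * 600   # ten minutes of synthetic data
    streams = {
        "heart_rate": 70 + 5 * rng.standard_normal(n_samples),
        "respiration_rate": 16 + 2 * rng.standard_normal(n_samples),
        "skin_temp": 33 + 0.5 * rng.standard_normal(n_samples),
        "accel_magnitude": np.abs(rng.standard_normal(n_samples)),
    }
    X = extract_windows(streams)
    # Hypothetical per-window labels, e.g. 0 = resting, 1 = exerting.
    y = rng.integers(0, 2, size=len(X))
    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
    print("Training accuracy on synthetic data:", clf.score(X, y))

In practice, the same windowed-feature structure would accept live sensor streams rather than synthetic arrays, and the classifier and label set would depend on the task (for example, estimating exertion level to drive the robot's feedback during an exercise session).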