My overarching research concerns the identification and computational modeling of social behavior primitives for situated, embodied interactive agents, with the goal of enabling autonomous recognition and control in face-to-face human-robot interaction (HRI). Categories of such social behaviors include proxemics (interagent spacing), oculesics (eye gaze), kinesics (body movements), deixis (referencing), chronemics (turn-taking and back-channeling), and vocalics (nonverbal aspects of speech).

My doctoral research in HRI considered the sensory experiences of both the robot and the human user(s) when selecting spatial (distance and orientation), speech (voice loudness), and gesture (the region of space used for gesturing) parameters for physically co-present, situated HRI, with a focus on assistive contexts.
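As a purely illustrative sketch of the kind of parameter selection involved (not the actual models from my dissertation), the following Python snippet couples a clamped social distance to a speech loudness chosen to remain audible over ambient noise at that distance; all names, constants, and heuristics here are hypothetical.

```python
import math
from dataclasses import dataclass

@dataclass
class InteractionParams:
    distance_m: float        # interagent distance
    orientation_deg: float   # robot orientation relative to the user
    loudness_db: float       # speech output level at the robot
    gesture_radius_m: float  # region of space used for gesturing

def select_params(ambient_noise_db: float,
                  preferred_distance_m: float = 1.2,
                  audibility_margin_db: float = 10.0) -> InteractionParams:
    """Toy heuristic: hold a social distance, face the user, and raise
    speech loudness enough to stay audible over ambient noise at that
    distance (~6 dB attenuation per doubling of distance)."""
    distance = min(max(preferred_distance_m, 0.5), 3.0)  # clamp to a social zone
    spreading_loss = 20.0 * math.log10(distance / 0.5)   # loss relative to a 0.5 m reference
    loudness = ambient_noise_db + audibility_margin_db + spreading_loss
    gesture_radius = 0.4 * distance  # keep gestures well inside the shared space
    return InteractionParams(distance, 0.0, loudness, gesture_radius)

if __name__ == "__main__":
    print(select_params(ambient_noise_db=55.0))
```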

I have also done extensive work in multi-robot coordination, developing algorithms for the control and interaction of large robot formations and studying the problem of forming impromptu teams of heterogeneous mobile robots. Other research areas include autonomous transportation and shared-autonomy human-robot interfaces, legged robotics, robotics in STEM education, ontology engineering, and the Semantic Web.