James Alan Preiss

robotics, learning, control

CV | Google Scholar | GitHub | LinkedIn | arXiv


I am a postdoctoral scholar in Computing + Mathematical Sciences at Caltech, where I am co-advised by Yisong Yue, Soon-Jo Chung, and Adam Wierman. I completed my Ph.D. with Prof. Gaurav Sukhatme at the University of Southern California in 2022.

I am currently on the academic job market.

My interests are robotics, machine learning, optimization, control, and their intersections — both theoretical and practical. My long-term goal is to make robot learning reliable by 1) building insight into problem structure, 2) designing algorithms with rigorous performance guarantees, and 3) deploying them on a diverse set of robots.

Earlier, I developed software for 3D scanning at Geomagic and interactive statistics at the JMP division of SAS Institute. I completed a B.S. in Mathematics and a B.A. in Photography at The Evergreen State College.

This page last updated November 2023.

research topics

Online/Adaptive Control

In the real world, we cannot prepare for every possible situation in advance. Our algorithms adapt a control policy to an adversarial, time-varying environment with regret guarantees. For parameterized policies, our gradient-based method substantially generalizes previous work, including a local result for nonconvex settings. For finite policy sets, our switching-based method can select the best black-box controller even if some are unstable.
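
To give a flavor of the setting (this is an illustrative sketch, not the algorithm from our papers): each round the environment reveals a cost function, we pay it under the current policy parameters, then take a gradient step to adapt. The finite-difference gradient and all names below are assumptions for illustration.

    import numpy as np

    # Illustrative sketch of online gradient-based policy adaptation.
    # Each round t reveals a cost c_t; we incur c_t(theta), then update
    # theta by a gradient step to adapt for the next round.

    def finite_diff_grad(cost, theta, eps=1e-4):
        """Two-point finite-difference gradient of a black-box cost."""
        grad = np.zeros_like(theta)
        for i in range(theta.size):
            e = np.zeros_like(theta)
            e[i] = eps
            grad[i] = (cost(theta + e) - cost(theta - e)) / (2.0 * eps)
        return grad

    def online_adapt(cost_sequence, theta0, lr=0.1):
        theta = np.array(theta0, dtype=float)
        total_cost = 0.0
        for cost in cost_sequence:      # costs may vary adversarially over time
            total_cost += cost(theta)   # pay this round's cost
            theta -= lr * finite_diff_grad(cost, theta)
        return theta, total_cost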

Reinforcement Learning

Directly optimizing a policy by strategic trial-and-error can improve performance and reduce engineering effort compared to classical methods; one example is our sim-to-real transfer for quadrotors. However, RL can also be unreliable, and its theory (such as our work on REINFORCE) does not currently extend to the complexity of real robots. Developing reliable RL algorithms for robotics requires further theoretical insight.
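
For readers unfamiliar with REINFORCE, here is a minimal sketch of the score-function gradient estimator for a one-step Gaussian policy. This is a generic illustration, not the setup analyzed in the paper; the mean baseline and all names are assumptions.

    import numpy as np

    # Minimal REINFORCE sketch with a Gaussian policy a ~ N(theta, sigma^2 I).
    # Since grad_theta log pi(a) = (a - theta) / sigma^2, the policy gradient
    # is estimated from sampled actions and their rewards.

    def reinforce_step(theta, reward_fn, sigma=0.5, lr=0.05, batch=64, rng=None):
        rng = np.random.default_rng() if rng is None else rng
        actions = theta + sigma * rng.standard_normal((batch, theta.size))
        rewards = np.array([reward_fn(a) for a in actions])
        baseline = rewards.mean()                  # simple variance reduction
        score = (actions - theta) / sigma**2       # grad log-density per sample
        grad = ((rewards - baseline)[:, None] * score).mean(axis=0)
        return theta + lr * grad                   # ascend expected reward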

Planning and Control with Learning Components

Planning and control with learned models can be a final goal or a subroutine of model-based reinforcement learning. For deformable objects (shown), the full state is only partially observable, so we learn an abstract state-space model from input-output data using a recurrent neural network. We then apply nonlinear estimation and control techniques to the learned model.
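
A minimal sketch of the idea, assuming PyTorch (illustrative architecture, not the one from the paper): the recurrent hidden state serves as the abstract state, driven by control inputs and trained to predict observations.

    import torch
    import torch.nn as nn

    # Illustrative sketch: the GRU hidden state acts as the learned abstract
    # state z_t; a decoder maps z_t to predicted observations y_t. Training
    # minimizes prediction error on input-output trajectories.

    class LearnedStateSpaceModel(nn.Module):
        def __init__(self, u_dim, y_dim, state_dim=64):
            super().__init__()
            self.dynamics = nn.GRU(u_dim, state_dim, batch_first=True)
            self.decoder = nn.Linear(state_dim, y_dim)

        def forward(self, u_seq, z0=None):
            # u_seq: (batch, T, u_dim) control inputs; z0: optional initial state
            z_seq, _ = self.dynamics(u_seq, z0)
            return self.decoder(z_seq)             # (batch, T, y_dim) predictions

    # Hypothetical usage on a dataset of input-output trajectories:
    # model = LearnedStateSpaceModel(u_dim=4, y_dim=6)
    # loss = nn.functional.mse_loss(model(u_batch), y_batch)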

video graphics by David Millard

research philosophy

Problem Structure and Tractability

Current theory on learning to control provides guarantees under strong assumptions that exclude many robotic applications. But heuristic algorithms can work well in practice. To close this gap, we need a more sophisticated understanding of why the problem instances arising in robotics are often tractable.

The animation shows a topological phenomenon from our work on suboptimal coverings, a framework for quantifying the "size" of infinite sets of control tasks.

Robotics Foundations

Robot learning algorithms cannot be reliable unless the underlying hardware and software are reliable too. With Wolfgang Hönig, I co-developed the Crazyswarm platform for controlling 49+ quadrotors from a single PC. Crazyswarm is used by universities worldwide for multi-robot research, including our own work on multi-quadrotor planning and its extension.
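
To give a flavor of the platform's Python scripting interface, here is a hover script in the style of the pycrazyswarm examples (the parameter values are illustrative):

    import numpy as np
    from pycrazyswarm import Crazyswarm

    # Take off, hover 1 m above each quadrotor's starting position, then land.
    swarm = Crazyswarm()
    timeHelper = swarm.timeHelper
    allcfs = swarm.allcfs

    allcfs.takeoff(targetHeight=1.0, duration=2.0)
    timeHelper.sleep(2.5)
    for cf in allcfs.crazyflies:
        cf.goTo(cf.initialPosition + np.array([0.0, 0.0, 1.0]),
                yaw=0.0, duration=2.0)
    timeHelper.sleep(2.5)
    allcfs.land(targetHeight=0.02, duration=2.0)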

Some of my other open-source libraries are listed below.

publications

* = equal contributors.

Ph.D. Dissertation

open source software

contact

(first initial) (middle initial) (last name) @ caltech.edu