POGO

The broad impact of our research will be to lay the foundation for robots that operate autonomously in unstructured environments over extended periods of time. In terms of the complexity of their dynamics, humanoid robots are among the most challenging platforms for accurate navigation and manipulation. Inertial sensing is critical for reactive control, while visual sensing relates the 3D world to body coordinates and enables manipulation. Accurate pose estimation, and accurate registration of the visual world relative to this pose, is therefore essential. Several issues complicate these tasks, however. First, humanoid robots move with varying acceleration patterns, which corrupt the measurements collected from the accelerometers of their IMUs. Second, there may be a long kinematic chain between the robot's vision system and the manipulator end effector(s) and/or other IMUs or cameras on the body. Even a small calibration error along this chain can be amplified to a level that renders manipulation too error-prone, e.g., leading to failure when grasping a cup filled with fluid.

Visual and inertial sensors are likely to be deployed together in a wide variety of future robotic systems. We foresee the use of inexpensive cameras and IMUs, already frequently used in combination on autonomous ground vehicles, on platforms ranging from micro-air vehicles to humanoid robots. If we wish to build 'power-up-and-go' systems at low cost, it will be necessary to trade mechanical precision (at assembly time) for computational effort during calibration. Beyond calibrating the camera-IMU transform, producing a complete, self-contained visual-inertial sensing unit will also require calibrating the intrinsic parameters of the individual camera lenses.

Today, monocular and stereo camera calibration is usually performed by manually positioning a known calibration object or target in front of the imager(s) and capturing a sequence of frames. This approach has a number of drawbacks. The first is the need for an additional calibration object; in many situations, particularly for robots operating in extreme environments or for long durations, it may not be practical to provide access to such an object. The second is the requirement that a human operator be in the loop to (in some cases) position the imaging sensors relative to the calibration object and to select images for feature extraction. Clearly, removing a robot from service to perform this type of calibration is inconvenient, and often it is not possible.

We will extend our existing visual-inertial calibration algorithms to incorporate the camera intrinsic and lens distortion parameters (a minimal sketch of one such camera measurement model appears at the end of this discussion). Although the addition of an IMU introduces complexity beyond standard camera calibration, the IMU data can also be used to bound the magnitude of the translational and rotational motion of the camera. This information can be integrated into an estimator to rapidly determine the local scene structure (map), and hence to improve the speed and performance of calibration.

Increasingly, mobile robots like the PR2 are carrying manipulator arms and sensor masts. Multiple IMUs may be mounted at various positions (e.g., on the main chassis and the mast), requiring tracking of parameters related to the lengths and orientations of various articulated segments over time. We believe that all of these problems can be addressed within the same comprehensive visual-inertial calibration framework.
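For concreteness, the following is a minimal sketch of the kind of camera measurement model whose parameters would be estimated, assuming a standard pinhole projection with radial-tangential lens distortion and a rigid camera-IMU transform. The function names, the distortion form, and the parameterization are illustrative assumptions, not the project's final formulation.

```python
import numpy as np

def project(p_cam, fx, fy, cx, cy, k1, k2, p1, p2):
    """Pinhole projection with radial-tangential distortion.

    fx, fy, cx, cy are the camera intrinsics and k1, k2, p1, p2 the lens
    distortion coefficients -- the parameters to be calibrated.
    """
    x, y = p_cam[0] / p_cam[2], p_cam[1] / p_cam[2]
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return np.array([fx * x_d + cx, fy * y_d + cy])

def predict_pixel(p_imu, R_ci, t_ci, intrinsics):
    """Predicted image measurement of a scene point expressed in the IMU
    frame, using the camera-IMU extrinsics (R_ci, t_ci) being calibrated."""
    p_cam = R_ci @ p_imu + t_ci
    return project(p_cam, *intrinsics)
```

In a full estimator, the residuals between predicted and detected image features would drive joint refinement of the intrinsics, the distortion coefficients, and the camera-IMU transform, with the IMU propagation bounding the camera motion between frames.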
Our extension to multiple-link systems (e.g., humanoid arms) will include a detailed analysis of the motions that ensure the transform parameters remain observable and can be calibrated accurately. Measurements from joint angle encoders will also be fused, when available, improving the convergence time. Further, manipulator arms offer an advantage for camera calibration: the calibration procedure can be assisted by moving the articulated segments within the camera's field of view. This is literally a form of 'self'-calibration, in which we deliberately move the end effector in front of the camera(s) in such a way as to expose all of the degrees of freedom in the system (a minimal forward-kinematics sketch of this idea follows at the end of this section).

To assess the accuracy of calibration and manipulation, we will perform a set of experiments with the PR2. We will address the manipulation of cups and bottles on a table and their transport to other locations under accuracy requirements, e.g., pouring water into cups and placing these cups on point targets. These tasks will also be carried out while a visual attention algorithm runs simultaneously on the robot, scanning the environment for potential safety problems.

As a result of this work, we will enable 'power-up-and-go' robotic systems that are able to operate autonomously for long periods without requiring tedious manual sensor (re-)calibration. We believe that this will be an essential capability for future autonomous mobile platforms.
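As a rough illustration of the self-calibration idea referenced above, the following sketch uses forward kinematics to predict where the end effector should appear given joint encoder readings, assuming a simple planar two-link chain. The link lengths, joint angles, and planar geometry are illustrative assumptions only, not the PR2's actual kinematics.

```python
import numpy as np

def fk_planar(thetas, link_lengths):
    """End-effector (x, y) position of a planar chain from encoder angles."""
    pos, heading = np.zeros(2), 0.0
    for theta, length in zip(thetas, link_lengths):
        heading += theta
        pos = pos + length * np.array([np.cos(heading), np.sin(heading)])
    return pos

# Illustration of error amplification along the kinematic chain: a 0.5 degree
# calibration error at the first joint of a ~1 m arm shifts the predicted
# gripper position by roughly 8 mm.
nominal = fk_planar([0.3, -0.6], [0.6, 0.4])
perturbed = fk_planar([0.3 + np.radians(0.5), -0.6], [0.6, 0.4])
print(np.linalg.norm(perturbed - nominal))  # ~0.008 m
```

Comparing such encoder-based predictions against the end-effector position observed in the camera images provides the residuals that drive the self-calibration, and the example above also illustrates why even sub-degree errors along the chain matter for tasks such as placing a cup on a point target.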