The broad impact of our research will be to lay the foundation for robots to function autonomously in unstructured environments over significant periods of time. In terms of the complexity of their dynamics, humanoid robots are among the most challenging platforms for accurate navigation and manipulation. Inertial sensing is critically important for reactive control, while visual sensing relates the 3D world to body coordinates and enables manipulation. Thus, accurate pose estimation, and accurate registration of the visual world relative to this pose, is critical. However, several issues complicate these tasks. First, humanoid robots move with varying acceleration patterns, which contaminate the information collected from the accelerometers of IMUs. Second, there may be a long kinematic chain between the robot's vision system and the location of the manipulator end effector(s) and/or other IMUs or cameras on the body. Even a small calibration error along this chain can be amplified to a level that renders manipulation too error prone, e.g., leading to failure when grasping a cup filled with liquid.

Visual and inertial sensors are likely to be deployed together in a wide variety of future robotic systems. We foresee the use of inexpensive cameras and IMUs, already frequently used in combination on autonomous ground vehicles, on platforms ranging from micro-air vehicles to humanoid robots. If we wish to build ‘power-up-and-go' systems at low cost, it will be necessary to trade mechanical precision (at assembly time) for computational effort during calibration. Beyond calibrating the camera-IMU transform, producing a complete, self-contained visual-inertial sensing unit will also require calibrating the intrinsic parameters of the individual camera lenses.

Today, monocular and stereo camera calibration is usually performed by manually positioning a known calibration object or target in front of the imager(s) and capturing a sequence of frames. This approach has a number of drawbacks. The first is the need for an additional calibration object; in many situations, particularly for robots operating in extreme environments or for long durations, it may not be practical to provide access to such an object. The second is the requirement that a human operator be in the loop to (in some cases) position the imaging sensors relative to the calibration object and to select images for feature extraction. Clearly, removing a robot from service to perform this type of calibration is inconvenient, and often not possible. We will extend our existing visual-inertial calibration algorithms to incorporate the camera intrinsic and lens distortion parameters. Although the addition of an IMU introduces complexity beyond standard camera calibration, the IMU data can also be used to bound the magnitude of the translational and rotational motion of the camera. This information can be integrated into an estimator to rapidly determine the local scene structure (map), and hence to improve the speed and performance of calibration.

Increasingly, mobile robots like the PR2 are carrying manipulator arms and sensor masts. Multiple IMUs may be mounted at various positions (e.g., the main chassis and the mast), requiring the parameters relating the lengths and orientations of the various articulated segments to be tracked over time. We believe that all of these problems can be addressed within the same comprehensive visual-inertial calibration framework.
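For concreteness, the camera intrinsic and lens distortion parameters discussed above can be illustrated with the standard pinhole projection model with radial distortion. The sketch below is a minimal illustration of that model; the symbols fx, fy, cx, cy, k1, k2 are the conventional parameter names, and restricting the distortion to two radial terms is an assumption made purely for illustration. In a typical formulation, calibration then amounts to estimating these parameters, together with the camera-IMU transform, by minimizing the reprojection error of tracked scene points over a motion sequence.

    import numpy as np

    def project(point_cam, fx, fy, cx, cy, k1, k2):
        # Project a 3D point, expressed in the camera frame, to pixel
        # coordinates using a pinhole model with two radial distortion terms.
        #   fx, fy, cx, cy: focal lengths and principal point (intrinsics)
        #   k1, k2:         radial lens distortion coefficients
        # These are the quantities a self-calibration routine must estimate.
        x, y, z = point_cam
        xn, yn = x / z, y / z                # normalized image coordinates
        r2 = xn ** 2 + yn ** 2
        d = 1.0 + k1 * r2 + k2 * r2 ** 2     # radial distortion factor
        return np.array([fx * d * xn + cx,   # apply the intrinsics
                         fy * d * yn + cy])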
Our extension to multiple-link systems (e.g., humanoid arms) will include a detailed analysis of the motions that ensure the transform parameters remain observable and can be calibrated accurately. Measurements from joint angle encoders will also be fused, when available, improving the convergence time. Further, manipulator arms can provide an advantage for camera calibration: the calibration procedure can be assisted by moving the articulated segments within the camera's field of view. This is literally a form of ‘self'-calibration, in which we deliberately move the end effector in front of the camera(s) in such a way as to expose all of the degrees of freedom in the system.

To assess the accuracy of calibration and manipulation, we will perform a set of experiments with the PR2. We will address manipulating cups and bottles on a table and transporting them to other locations with accuracy requirements, e.g., pouring water into cups and placing the cups on point targets. These tasks will also be performed while a visual attention algorithm runs simultaneously on the robot, scanning the environment for potential safety problems.

As a result of this work, we will enable ‘power-up-and-go' robotic systems that are able to operate autonomously for long periods without requiring tedious manual sensor (re-)calibration. We believe that this will be an essential capability for future autonomous mobile platforms.
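To make the encoder-fusion and ‘self'-calibration idea above concrete, the sketch below forms the kind of residual such a procedure can minimize: forward kinematics, driven by the measured joint angles and candidate link parameters, predicts where a marker on the end effector should appear in the image, and the difference from the detected pixel location constrains the unknown transforms. The planar two-link kinematics and all names here are illustrative assumptions, not a description of our actual implementation.

    import numpy as np

    def forward_kinematics(joint_angles, link_lengths):
        # Toy planar two-link arm: position of an end-effector marker in the
        # robot base frame (z fixed at 0). A real arm uses its full kinematic chain.
        q1, q2 = joint_angles
        l1, l2 = link_lengths
        x = l1 * np.cos(q1) + l2 * np.cos(q1 + q2)
        y = l1 * np.sin(q1) + l2 * np.sin(q1 + q2)
        return np.array([x, y, 0.0])

    def reprojection_residual(joint_angles, link_lengths, T_cam_base,
                              detected_px, project_fn):
        # Difference between the marker position predicted from the encoders and
        # the pixel location detected by the camera. Accumulated over many arm
        # poses, this residual constrains the camera-to-base transform
        # (T_cam_base, a 4x4 homogeneous matrix) and uncertain link parameters.
        p_base = np.append(forward_kinematics(joint_angles, link_lengths), 1.0)
        p_cam = (T_cam_base @ p_base)[:3]        # marker in the camera frame
        return detected_px - project_fn(p_cam)   # project_fn: e.g. the pinhole model above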

OIT (under construction)

We are pleased to announce the release of p2os version 0.2.1. This release includes a working, drivable URDF model of the Pioneer2DX for gazebo/rviz.

We are pleased to announce the second release of the p2os stack. This release is intended to operate with the cturtle release of ROS, though we believe it will also work with the boxturtle and unstable releases without issue.

This release includes the following features and bugfixes:

p2os_urdf:
* lowered the loop rate of the publisher
* made the publisher node publish continuous joint values so that the whole model shows up in rviz
* added a sonar rim for the p3dx
* added a launch file for the pioneer3dx in gazebo
* the pioneer3dx model can now be actuated in gazebo
* masses may not be right
* inertial values are total guesses

p2os_dashboard:
* fixed motorstatesub so that it subscribes to the correct topic
* put robot_monitor button back into p2os_dashboard

p2os_launch:
* fixed diagnostics so that sensors show up in the correct category
* added teleop_keyboard.launch to control a pioneer with a keyboard. This launch file depends on the pr2_teleop package. Updated the default key for run mode in teleop_joy.launch.
* added example launch files for diagnostic aggregator
* updated the launch file tree; added a launch file for sicklms

p2os_driver:
* added diagnostic support to p2os_driver
* added status messages for stall and voltage
* added a diagnosed publisher on the batt topic
* added header to batt msg (probably will remove later)
* hwid is now set from the name, type, and subtype of the Pioneer

We are pleased to announce the first release of the p2os stack. This release is intended to operate with the boxturtle release of ROS. This release includes the following features:

p2os_driver:

  • command linear and angular velocity
  • enable/disable motors
  • gripper support
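
To illustrate the velocity command interface listed above, here is a minimal rospy sketch that drives the base by publishing geometry_msgs/Twist messages. The cmd_vel topic name and the 10 Hz command rate are assumptions to check against your own p2os launch configuration.

    #!/usr/bin/env python
    # Minimal sketch: command linear and angular velocity through the driver.
    import rospy
    from geometry_msgs.msg import Twist

    def drive():
        rospy.init_node('p2os_drive_example')
        pub = rospy.Publisher('cmd_vel', Twist)  # assumed topic name; remap if needed
        rate = rospy.Rate(10)                    # 10 Hz command rate
        cmd = Twist()
        cmd.linear.x = 0.2     # m/s forward
        cmd.angular.z = 0.1    # rad/s turn
        while not rospy.is_shutdown():
            pub.publish(cmd)
            rate.sleep()

    if __name__ == '__main__':
        try:
            drive()
        except rospy.ROSInterruptException:
            pass

Enabling or disabling the motors is handled through a separate topic exposed by the driver rather than through cmd_vel; consult the p2os_driver documentation for the exact topic and message type.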

p2os_urdf:

  • visualize only the Pioneer2DX robot for use with RVIZ
  • will NOT work with gazebo

p2os_dashboard:

  • view battery monitor and control motor enable/disable

USC is a PR2 Beta Program Winner!