Localization
Our robot localizes from an unknown pose using a particle filter. The motion and sensor models for the filter are described in the following document:
We initially attempted to use these models in a Gaussian sum filter (a bank of weighted extended Kalman filters), but we found that the sensor model was too non-Gaussian and nonlinear to achieve robust estimation.
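For concreteness, the skeleton below sketches one predict/weight/resample cycle of a particle filter in C++. The Gaussian odometry noise and the range-based likelihood are illustrative stand-ins, not our actual models (those are in the document above), and all names and noise parameters are hypothetical placeholders.

```cpp
#include <cmath>
#include <random>
#include <vector>

// One particle hypothesis of the robot pose, plus its importance weight.
struct Particle { double x, y, theta, weight; };

// Body-frame odometry increment between filter updates.
struct Odometry { double dx, dy, dtheta; };

// Stand-in motion model: apply the odometry in the particle's frame and
// perturb with Gaussian noise (the noise level is an assumed placeholder).
Particle motionModel(Particle p, const Odometry& u, std::mt19937& rng) {
  std::normal_distribution<double> noise(0.0, 0.01);
  const double c = std::cos(p.theta), s = std::sin(p.theta);
  p.x += c * u.dx - s * u.dy + noise(rng);
  p.y += s * u.dx + c * u.dy + noise(rng);
  p.theta += u.dtheta + noise(rng);
  return p;
}

// Stand-in sensor model: Gaussian agreement between a measured range and
// the range the particle predicts. A real filter would raycast against a
// map here instead.
double sensorLikelihood(const Particle& p, double measuredRange) {
  const double predicted = std::hypot(p.x, p.y);  // placeholder prediction
  const double sigma = 0.1;
  const double r = measuredRange - predicted;
  return std::exp(-0.5 * r * r / (sigma * sigma)) + 1e-12;  // guard zero weights
}

// One predict / weight / resample cycle over the whole particle set.
void update(std::vector<Particle>& particles, const Odometry& u,
            double z, std::mt19937& rng) {
  std::vector<double> weights;
  weights.reserve(particles.size());
  for (auto& p : particles) {
    p = motionModel(p, u, rng);          // predict
    p.weight = sensorLikelihood(p, z);   // weight
    weights.push_back(p.weight);
  }
  // Resample in proportion to the weights, then reset them to uniform.
  std::discrete_distribution<size_t> pick(weights.begin(), weights.end());
  std::vector<Particle> next(particles.size());
  for (auto& q : next) { q = particles[pick(rng)]; q.weight = 1.0 / next.size(); }
  particles = std::move(next);
}
```

Resampling through `std::discrete_distribution` normalizes the weights internally, which keeps the update loop short.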
End Effector Testing
To confirm our use of the granular jammer, we built a prototype of the system: a preliminary 3D-printed holder, a balloon filled with coffee grounds, a 3D-printed valve, an air tube, and a vacuum pump. We tested the prototype's ability to lift various objects, such as pencils. While the grip was not always perfect, the prototype's performance justified further refinement of the end effector.
We then tested the jammer on the testbed itself. The valves were easy to turn, requiring only minimal normal force on the devices: the jammer conformed around each device, the vacuum pump was turned on, and the jammer was able to turn the spigot, gate, and shuttlecock valves. The breaker box was more of a challenge because its switches required a higher force to manipulate; however, we found that approaching the switches from an angle allowed us to flip them.
Computer Vision
Our vision system uses an Intel RealSense D435 and OpenCV to robustly determine the states, configurations, and 3D positions of the valves and switches: for example, whether a switch is UP or DOWN, or whether a valve is oriented toward the front or upward. We developed separate routines for the different devices: the small spigot valves, the larger wheel valves, the flat shuttlecock valves, and the breaker boxes. Using the RealSense mounted above the robot chassis, we first preprocess the image to find the regions of interest. We then compute the central coordinate of the 3D point cloud representing the device, and determine the device's state and configuration from its shape and position in the 2D image. Accurate device positions and configurations are essential, since this is the information we use to command the robot arm to position and orient the end effector.
The implementation is in C++ and is based on the RealSense SDK.
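As a rough sketch of the ROI-to-3D step, the snippet below grabs an aligned color/depth pair, segments a candidate device by color threshold, and deprojects the center of the largest contour's bounding box into a 3D point in the camera frame. The HSV range, stream settings, and segmentation strategy here are placeholders, not our tuned per-device routines.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>
#include <librealsense2/rs.hpp>
#include <librealsense2/rsutil.h>
#include <opencv2/opencv.hpp>

int main() {
  // Stream color and depth, and align the depth frame into the color frame.
  rs2::config cfg;
  cfg.enable_stream(RS2_STREAM_COLOR, 640, 480, RS2_FORMAT_BGR8, 30);
  cfg.enable_stream(RS2_STREAM_DEPTH, 640, 480, RS2_FORMAT_Z16, 30);
  rs2::pipeline pipe;
  pipe.start(cfg);
  rs2::align alignToColor(RS2_STREAM_COLOR);

  rs2::frameset frames = alignToColor.process(pipe.wait_for_frames());
  rs2::video_frame color = frames.get_color_frame();
  rs2::depth_frame depth = frames.get_depth_frame();

  // Wrap the color frame in an OpenCV Mat without copying.
  cv::Mat bgr(cv::Size(color.get_width(), color.get_height()), CV_8UC3,
              (void*)color.get_data(), cv::Mat::AUTO_STEP);

  // Preprocess: color-threshold, then take the largest contour as the ROI.
  cv::Mat hsv, mask;
  cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);
  cv::inRange(hsv, cv::Scalar(0, 120, 80), cv::Scalar(15, 255, 255), mask);  // placeholder range
  std::vector<std::vector<cv::Point>> contours;
  cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
  if (contours.empty()) return 1;
  auto largest = std::max_element(
      contours.begin(), contours.end(),
      [](const auto& a, const auto& b) { return cv::contourArea(a) < cv::contourArea(b); });
  cv::Rect roi = cv::boundingRect(*largest);

  // Deproject the ROI center through the aligned depth frame's intrinsics.
  float pixel[2] = {roi.x + roi.width / 2.0f, roi.y + roi.height / 2.0f};
  float dist = depth.get_distance((int)pixel[0], (int)pixel[1]);
  rs2_intrinsics intr =
      depth.get_profile().as<rs2::video_stream_profile>().get_intrinsics();
  float point[3];
  rs2_deproject_pixel_to_point(point, &intr, pixel, dist);

  std::printf("device center: (%.3f, %.3f, %.3f) m\n", point[0], point[1], point[2]);
  return 0;
}
```

The state and configuration classification (UP vs. DOWN, front-facing vs. upward) would hang off the same ROI, using the contour's shape and position in the 2D image.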
4-Wheel Omni Drivetrain
Our robot uses a 4-wheel omni drive for locomotion. The kinematics are described in the following document:
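As a sketch of what that document covers, the inverse kinematics for a 4-wheel omni drive with wheels mounted at 45°, 135°, 225°, and 315° map a body twist (vx, vy, ω) to wheel speeds by projecting the chassis velocity onto each wheel's rolling direction and adding the rotational term. The geometry constants below are assumed placeholders, not our robot's measured values.

```cpp
#include <array>
#include <cmath>

constexpr double kPi = 3.14159265358979323846;
constexpr double kWheelRadius = 0.05;   // wheel radius r [m] (placeholder)
constexpr double kChassisRadius = 0.20; // center-to-wheel distance L [m] (placeholder)

// Map a body twist (vx, vy [m/s], omega [rad/s]) to the four wheel
// angular velocities [rad/s] for wheels at 45/135/225/315 degrees.
std::array<double, 4> wheelSpeeds(double vx, double vy, double omega) {
  constexpr double kAngles[4] = {kPi / 4, 3 * kPi / 4, 5 * kPi / 4, 7 * kPi / 4};
  std::array<double, 4> w{};
  for (int i = 0; i < 4; ++i) {
    // Wheel surface speed: chassis velocity projected onto the wheel's
    // rolling direction (tangent to the chassis circle) plus L * omega.
    const double surface = -std::sin(kAngles[i]) * vx
                         + std::cos(kAngles[i]) * vy
                         + kChassisRadius * omega;
    w[i] = surface / kWheelRadius;
  }
  return w;
}
```

As a sanity check, a pure spin (vx = vy = 0) commands the same speed L·ω/r on all four wheels.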