System Requirements
Size limit: 1.5 ft (length) x 1.5 ft (width) x 2.5 ft (height).
Budget: <= $1000 for materials purchased for the robot.
Turn a valve to a desired angle specified by a mission file, relative to an unspecified initial angle, with a tolerance of ±15°.
Actuate a breaker switch to a desired state (up or down) specified by a mission file, from an unspecified initial state.
Travel to a sequence of up to 5 stations in the order specified by a mission file, without leaving the testbed and from an unspecified initial pose.
Time: fulfill the tasks laid out in a mission file within N + 0.5(D - N) minutes, where N is the number of stations in the file and D is the number of devices in the file (a worked example follows this list).
The robot must not damage anything it interacts with.
Use a dedicated power supply.
Robustness: the robot should not use any prototype kits or toys.
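As a quick check of the time budget above, here is a minimal sketch of the limit formula; the function name and the example numbers are ours, not part of the robot's code:

```python
def time_limit_minutes(n_stations: int, n_devices: int) -> float:
    """Mission time limit in minutes: N + 0.5 * (D - N)."""
    return n_stations + 0.5 * (n_devices - n_stations)

# Example: 3 stations and 4 devices allow 3 + 0.5 * (4 - 3) = 3.5 minutes.
print(time_limit_minutes(3, 4))  # 3.5
```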
Functional Architecture
After the robot parses the mission file, it determines a series of waypoints the chassis must visit, covering the stations in the order the file specifies. It then estimates its initial pose and travels to the first waypoint, continually estimating its pose using onboard sensors. Once it reaches the first waypoint, it uses its camera to estimate the location of the device it must manipulate, then controls the chassis and arm to perform the manipulation. It then travels to the next waypoint and repeats the process.
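A minimal sketch of this top-level loop follows. The helper functions and data shapes are illustrative stand-ins for the real planning, localization, vision, and control subsystems:

```python
from dataclasses import dataclass

@dataclass
class Station:
    name: str
    devices: list  # device IDs to manipulate at this station

# Placeholder subsystem calls; on the robot these are backed by ROS nodes.
def parse_mission(path):
    # Returns stations in the order the mission file specifies.
    return [Station("A", ["valve_1"]), Station("B", ["breaker_2"])]

def estimate_initial_pose():
    return (0.0, 0.0, 0.0)  # (x, y, heading)

def plan_waypoints(pose, station):
    return [(1.0, 2.0)]  # chassis waypoints toward the station

def drive_to(waypoint, pose):
    return (*waypoint, pose[2])  # pose after closed-loop travel

def locate_device(device):
    return (0.5, 0.1, 0.3)  # device position estimated by the camera

def manipulate(device, target):
    print(f"manipulating {device} at {target}")

def run_mission(path):
    stations = parse_mission(path)        # stations kept in file order
    pose = estimate_initial_pose()
    for station in stations:
        for wp in plan_waypoints(pose, station):
            pose = drive_to(wp, pose)     # re-estimate pose while driving
        for device in station.devices:
            manipulate(device, locate_device(device))

run_mission("mission.txt")
```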
Software Architecture
We designed our software within a ROS framework to encourage code modularity and to integrate easily with existing tools. We assigned higher-level planning, control, and state-estimation tasks to the onboard computer (the Raspberry Pi), and lower-level I/O tasks to the MCU.
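A minimal rospy sketch of this split: a Pi-side node publishes high-level velocity commands that a separate bridge node (not shown) relays to the Arduino. The node and topic names here are our own illustrations, not the project's:

```python
#!/usr/bin/env python
# Pi-side ROS node: publishes planner output; a serial-bridge node
# forwards these commands to the MCU for low-level I/O.
import rospy
from geometry_msgs.msg import Twist

def main():
    rospy.init_node("high_level_planner")
    pub = rospy.Publisher("/mcu/cmd_vel", Twist, queue_size=10)
    rate = rospy.Rate(20)  # 20 Hz command stream
    while not rospy.is_shutdown():
        cmd = Twist()
        cmd.linear.x = 0.2   # m/s forward, chosen by the planner
        cmd.angular.z = 0.0  # rad/s yaw
        pub.publish(cmd)
        rate.sleep()

if __name__ == "__main__":
    main()
```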
Hardware and Firmware Architecture
Our system, powered from 110 VAC, uses a single-board computer (SBC, a Raspberry Pi 4B) and an MCU development board (an Arduino Mega), both powered at 5 V, to interface with sensors and actuators. The camera communicates directly with the SBC over USB 3.0, while the ToF sensors communicate with the Arduino over I2C. Our motors, powered at 12 VDC, are driven under PID control by RoboClaw 2x15A motor controllers, which receive commands from the Arduino. The arm joints are controlled by the Raspberry Pi over Ethernet, and the end-effector receives commands from the Arduino.
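The report does not detail the SBC-to-MCU link itself; the sketch below assumes a USB-serial connection and an invented packet format, purely to illustrate where Pi-side commands would cross over to the Arduino before being relayed to the RoboClaws:

```python
# Illustrative Pi -> Arduino command link over USB serial (pyserial).
# The transport and frame layout are assumptions for illustration only.
import struct
import serial  # pip install pyserial

def send_wheel_speeds(port: serial.Serial, left_qpps: int, right_qpps: int) -> None:
    # Hypothetical frame: 0xAA header, two signed 32-bit speeds, XOR checksum.
    payload = struct.pack("<ii", left_qpps, right_qpps)
    checksum = 0
    for b in payload:
        checksum ^= b
    port.write(bytes([0xAA]) + payload + bytes([checksum]))

if __name__ == "__main__":
    with serial.Serial("/dev/ttyACM0", 115200, timeout=0.1) as arduino:
        send_wheel_speeds(arduino, 1200, 1200)  # drive both wheels forward
```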
System CAD
[Figure: final system CAD]
Vision Pipeline
The vision pipeline uses an Intel RealSense D435 to obtain 2D and 3D frames of the devices. We first preprocess the 2D image to find the regions of interest, then compute the centroid of the 3D point cloud representing the device. We also determine the state and configuration of the device from its shape and position in the 2D image; different routines handle different devices. The position, state, and configuration of each device are computed in one shot and are jointly used to position and orient the end-effector and to choose the trajectory along which to approach the device.
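A minimal sketch of the capture-and-deproject step, assuming pyrealsense2 and OpenCV; the threshold-based ROI here is a toy stand-in for the per-device detection routines:

```python
# Grab an aligned color/depth pair from the D435 and deproject the ROI
# centroid to a 3D point in the camera frame.
import cv2
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)
align = rs.align(rs.stream.color)  # align depth pixels to the color frame

try:
    frames = align.process(pipeline.wait_for_frames())
    depth = frames.get_depth_frame()
    color = frames.get_color_frame()
    img = np.asanyarray(color.get_data())

    # Toy ROI: brightest blob in grayscale; the robot uses per-device logic.
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        m = cv2.moments(max(contours, key=cv2.contourArea))
        if m["m00"] > 0:
            u, v = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
            dist = depth.get_distance(u, v)  # meters at the ROI centroid
            intrin = depth.profile.as_video_stream_profile().intrinsics
            point = rs.rs2_deproject_pixel_to_point(intrin, [u, v], dist)
            print("device centroid (m):", point)
finally:
    pipeline.stop()
```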
Weight Table