The interactions among the subsystems are shown in the diagram below: the components in blue are virtual components, and the components in green are the physical systems of the robot. The main communication hub of our system is the Intel NUC, which communicates back and forth with every other system on the robot, allowing the planner on the NUC to know the state of the robot at all times.
Our system uses three computing boards to interface with the motors and sensors: an Intel NUC, a Raspberry Pi 3, and an Arduino Mega. The communication diagram below illustrates the connections between the various electronics.
The Intel NUC serves as the master controller for the entire robot, sending and receiving information to and from the Raspberry Pi 3 and the HEBI motors over ROS through the router. The NUC also processes the images received from the Kinect and the Raspberry Pi Camera, since it has the most powerful processor. The Raspberry Pi 3 interfaces with the Arduino Mega and publishes the images from the Raspberry Pi Camera to the Intel NUC. The Raspberry Pi Camera is mounted near the end-effector to assist with close-proximity localization of the end-effector, while the Kinect v2 surveys the environment and senses the initial locations of objects. Finally, the Arduino Mega reads sensor input from the ultrasonic sensors, DC motor encoders, and limit switches, and it sends commands through the motor controllers to the stepper motors on the turntable, X-gantry, and Z-gantry and to the DC motors on the base.
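As a concrete illustration of the Raspberry Pi's bridging role, the sketch below shows a minimal ROS node that reads sensor readings from the Arduino Mega over USB serial and republishes them as a ROS topic for the NUC to consume. The port name, baud rate, message format, and topic name are illustrative assumptions, not the exact protocol used on the robot.

```python
#!/usr/bin/env python
# Hypothetical Arduino-to-ROS bridge running on the Raspberry Pi 3.
# Reads comma-separated ultrasonic readings from the Arduino Mega over
# USB serial and republishes them for the rest of the system.
import rospy
import serial
from std_msgs.msg import Float32MultiArray

def main():
    rospy.init_node('arduino_bridge')
    pub = rospy.Publisher('ultrasonic_ranges', Float32MultiArray, queue_size=10)
    ser = serial.Serial('/dev/ttyACM0', 115200, timeout=1.0)  # assumed port and baud rate

    while not rospy.is_shutdown():
        line = ser.readline().decode('ascii', errors='ignore').strip()
        if not line:
            continue
        try:
            ranges = [float(x) for x in line.split(',')]
        except ValueError:
            continue  # skip malformed lines
        pub.publish(Float32MultiArray(data=ranges))

if __name__ == '__main__':
    main()
```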
We power the entire robot from a single 12V 15Ah lithium iron phosphate (LiFePO4) battery that can output 30A continuously and up to 60A for 2 seconds. In the power diagram below, we depict the connections and power consumption of each major electronic component.
From the diagram it may seem that the battery we have chosen is insufficient to power the robot; however, we never run all of the high-powered components at the same time, so the 30A continuous discharge of our chosen battery is sufficient. The components that are continuously on are the Intel NUC (12V 5A), Kinect v2 (12V 1.5A), router (12V 1A), Raspberry Pi 3 (5V 3A), and Arduino Mega (12V 0.2A); together they draw around 9A of the 30A available. Because we do not run the DC motors at their stall current, their actual consumption is around 12A. Likewise, we do not run the HEBI motors at their stall current, so their total consumption maxes out at around 4.5A at 24V, which corresponds to roughly 9A at 12V. Since we never run the HEBI motors at the same time as the base DC motors, the battery can handle the robot's workloads.
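As a quick sanity check on these numbers, the snippet below totals the continuous loads referred to the 12V bus and adds the larger of the two motor loads (regulator and converter losses are ignored):

```python
# Back-of-the-envelope power budget, with all loads referred to the 12V battery bus.
always_on_watts = {
    'Intel NUC':      12 * 5.0,   # 60 W
    'Kinect v2':      12 * 1.5,   # 18 W
    'Router':         12 * 1.0,   # 12 W
    'Raspberry Pi 3':  5 * 3.0,   # 15 W
    'Arduino Mega':   12 * 0.2,   #  2.4 W
}
base_amps = sum(always_on_watts.values()) / 12.0        # ~9 A continuous
dc_motor_amps = 12.0                                    # observed draw, well below stall
hebi_amps_at_12v = (24 * 4.5) / 12.0                    # ~9 A referred to the 12 V bus

# The HEBI motors and the base DC motors never run simultaneously:
worst_case = base_amps + max(dc_motor_amps, hebi_amps_at_12v)
print('baseline %.1f A, worst case %.1f A, battery limit 30 A' % (base_amps, worst_case))
```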
The software architecture diagram shown above illustrates which boards our software nodes run on and the communication between the various nodes. We installed Ubuntu 16.04 on both the Intel NUC and the Raspberry Pi 3 and use ROS Kinetic Kame as the robot's middleware because of its built-in message passing between computers, widespread camera support, and ease of use.
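A minimal example of that cross-machine message passing: with roscore running on the NUC and the Pi's ROS_MASTER_URI pointed at the NUC, a node on the NUC receives topics published on the Pi with no additional networking code. The topic name and message type below are illustrative.

```python
#!/usr/bin/env python
# Minimal cross-machine subscriber running on the NUC: topics published on
# the Raspberry Pi arrive here transparently once both machines share a master.
import rospy
from std_msgs.msg import String

def callback(msg):
    rospy.loginfo('heard from the Pi: %s', msg.data)

if __name__ == '__main__':
    rospy.init_node('nuc_listener')
    rospy.Subscriber('pi_status', String, callback)
    rospy.spin()
```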
Our vision system consists of two parts: the Microsoft Kinect v2 and the Raspberry Pi Camera v2. We use the iai_kinect2 ROS package to let the Kinect publish RGB and depth images to ROS topics, which our computer vision node subscribes to. In addition, we use the raspicam ROS node to publish RGB images from the Raspberry Pi, which we subscribe to on the NUC. Our sensors also have their own nodes built on the information sent from the Arduino Mega to the Raspberry Pi 3. In our computer vision node, we use OpenCV for basic image processing to detect the objects at each station and their orientations, and we used TensorFlow to demonstrate our ability to classify the objects at each station in real time.
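The sketch below outlines the shape of the computer vision node: it subscribes to the Kinect color stream (the topic name shown is typical of iai_kinect2; the exact name depends on the chosen resolution), converts the image with cv_bridge, and runs a simple color threshold as a stand-in for the station-specific detection we actually perform. The threshold values are placeholders.

```python
#!/usr/bin/env python
# Skeleton of the computer vision node: subscribe to the Kinect color stream,
# convert it to an OpenCV image, and run a placeholder color threshold to
# find candidate objects at a station.
import rospy
import cv2
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()

def image_callback(msg):
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255))  # placeholder range, tuned per station
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]  # [-2] works across OpenCV versions
    rospy.loginfo('found %d candidate objects', len(contours))

if __name__ == '__main__':
    rospy.init_node('vision_node')
    rospy.Subscriber('/kinect2/qhd/image_color', Image, image_callback)
    rospy.spin()
```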
Finally, for controlling the robot, we have a planner node that reads in the mission file and communicates with the computer vision node, the sensor nodes, and the motor controller nodes to complete each task it specifies. In addition, we use the hebiros ROS package to control the HEBI actuators on our robot's arm.
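As a rough sketch of how the arm can be commanded through hebiros, the snippet below registers a group of actuators and publishes a JointState command to that group. The group, family, and module names and the joint angles are placeholders, and the service and topic names reflect our understanding of the hebiros interface; the package documentation is the authoritative reference.

```python
#!/usr/bin/env python
# Sketch of commanding the arm through hebiros: register an actuator group,
# then publish a JointState command to that group's command topic.
import rospy
from sensor_msgs.msg import JointState
from hebiros.srv import AddGroupFromNamesSrv

if __name__ == '__main__':
    rospy.init_node('arm_command_example')

    rospy.wait_for_service('/hebiros/add_group_from_names')
    add_group = rospy.ServiceProxy('/hebiros/add_group_from_names', AddGroupFromNamesSrv)
    add_group(group_name='arm', names=['base', 'elbow', 'wrist'], families=['ShipBot'])

    pub = rospy.Publisher('/hebiros/arm/command/joint_state', JointState, queue_size=1)
    rospy.sleep(1.0)  # allow the publisher to connect before sending the command

    cmd = JointState(name=['ShipBot/base', 'ShipBot/elbow', 'ShipBot/wrist'],
                     position=[0.0, 1.2, -0.6])  # placeholder joint angles in radians
    pub.publish(cmd)
```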
The figure above depicts the operational flow of our ShipBot. In the first phase, the robot is initialized with the mission file and localizes itself using the ultrasonic sensors and the Kinect v2. The robot then enters a loop in which it moves to the desired station, captures an RGB and depth image with the Kinect, processes them to determine the location of the target object, determines the arm configuration needed to reach the target using inverse kinematics, and plans a path from the arm's current configuration to the desired one. Next, the robot observes the target object up close with the Raspberry Pi camera and determines the object's current configuration. The planner then takes the aggregated information and instructs the robot to actuate the target object according to the mission. Upon completion, the robot performs another visual check to ensure the objective has been completed correctly before moving on to its next objective. This command loop repeats until all objectives specified in the mission file have been completed.
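A condensed skeleton of this command loop is sketched below. The mission-file format and the commented steps inside the loop are illustrative placeholders for the corresponding ROS service and topic interactions, not the exact interfaces on the robot.

```python
#!/usr/bin/env python
# Hypothetical planner skeleton for the command loop described above.
import rospy

def parse_mission(path):
    """Each non-empty line is assumed to be '<station> <device> <action>'."""
    with open(path) as f:
        return [line.split() for line in f if line.strip()]

def run_mission(tasks):
    for station, device, action in tasks:
        rospy.loginfo('Task: %s %s at station %s', action, device, station)
        # 1. Drive the base to the station (DC motors + ultrasonic localization).
        # 2. Capture an RGB/depth image with the Kinect and locate the device.
        # 3. Solve inverse kinematics and plan a path for the arm.
        # 4. Refine the device pose with the Raspberry Pi camera and actuate it.
        # 5. Visually verify the result before moving to the next task.

if __name__ == '__main__':
    rospy.init_node('planner')
    run_mission(parse_mission(rospy.get_param('~mission_file')))
```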