Construction:
Size: 1.5’x1.5’x2.5’
Power: Portable onboard power source for mobility (LiPo or Li-Ion)
Budget: $1000 for purchases
Completely autonomous
Tight tolerances
Sturdy construction and portable, in case the robot must be moved and worked on in different locations
Low center-of-mass with heavy counterweight to avoid tipping
Components easily accessible for easy replacement/repair
Performance:
Testbed:
3’x5’ operating area
Eight 1’ stations
12”-24” component height
Rotary Valve, Spigot Valve, Shuttlecock Valve, Breaker Panel
1-minute setup period
Robust to arbitrary start location
Robust to rocking of operating area
Time: 1 min x number of stations + 30 s x number of additional devices at a station + 20 s x number of devices (extra time); see the worked example after this list
Parses mission file for given commands
Indicates via alarm/sound when finished
Robust to loss of state or faulty state estimation
Robust to lighting conditions
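As a rough illustration of the time requirement above, the short sketch below computes the allowed run time for a hypothetical mission; the station and device counts are made up, and the 30 and 20 multipliers are assumed to be in seconds.

    # Hypothetical worked example of the time requirement; counts are illustrative.
    def time_budget_seconds(num_stations, num_additional_devices, num_devices):
        # 1 min per station + 30 s per additional device at a station
        # + 20 s of extra time per device (assuming seconds for the 30/20 terms).
        return 60 * num_stations + 30 * num_additional_devices + 20 * num_devices

    # Example: 4 stations, one station with a second device, 5 devices total.
    print(time_budget_seconds(4, 1, 5))  # 60*4 + 30*1 + 20*5 = 370 s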
The functional architecture is shown below. The ShipBot is powered by batteries so that it can operate without a power tether. Initially, the program onboard the central computer parses the mission file to retrieve the station, device, and manipulation information and sorts the mission list so that the stations are visited in order. To operate a device, the ShipBot first navigates from the origin to the coordinate corresponding to the station, localizing with the LIDAR. The central computer then performs path planning and sends velocity commands to the main microcontroller, which translates each velocity command into individual motor signal inputs with PID control. Once navigation is complete, the central computer and main microcontroller coordinate to actuate the end effector and perform the manipulation specified in the mission file. Finally, the robot returns to its original position and proceeds to operate the next device in the mission file until all devices specified in the file have been visited and manipulated.
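A minimal sketch of the mission-file parsing and sorting step might look like the following; the comma-separated file format, field order, and station labels are illustrative assumptions rather than the team's actual format.

    # Hypothetical mission file with one entry per line, e.g. "B, V2, 90"
    # meaning (station, device, target state); stations assumed to be labeled A-H.
    def parse_mission_file(path):
        missions = []
        with open(path) as f:
            for line in f:
                if not line.strip():
                    continue
                station, device, target = [field.strip() for field in line.split(",")]
                missions.append({"station": station, "device": device, "target": target})
        # Sort so the stations are visited in order (alphabetical by label).
        missions.sort(key=lambda m: m["station"])
        return missions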
The electrical architecture is shown above. Two LiPo batteries are wired in parallel to provide, at minimum, 30 to 45 minutes of operating time, even with the stepper motor enabled. Because the team had two pairs of these batteries, testing on the ground was relatively simple and convenient with the LiPo charger. When fully charged, the LiPos reach a voltage of 16.8 V. To avoid damage to the batteries, they were discharged only until they reached 14.3 V.
A small protoboard was used to create a 15 V power bus from the battery outputs, paired with a series of ground points for the different components. The main 15 V bus feeds the two motor drivers and the stepper driver for the mobility and arm subsystems. The bus then splits off to the 12 V and 5 V buck converters needed for the linear actuator boards and the Jetson Nano, respectively. Because the Jetson Nano was powered directly from a dedicated bus, the team did not experience the power-cycling issues that occurred earlier during system integration. For example, when the Jetson was powered from a laptop, it would power cycle whenever the LIDAR sensor was connected, because the laptop's USB ports could not supply enough power for both devices.
The Jetson Nano runs a Docker image of ROS Noetic that publishes and subscribes to messages exchanged with the Arduino over UART serial communication; all that is required to accomplish this is a USB-A to Arduino cable. The ROS topics include information about encoder ticks, current motor velocities, desired motor velocities, and more, as explained in the Software Architecture section.
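On the Jetson side, reading and writing these topics looks roughly like the rospy sketch below; the topic names and message types are assumptions for illustration, since the actual interfaces are defined in the team's firmware.

    import rospy
    from std_msgs.msg import Int32MultiArray, Float32MultiArray

    def encoder_callback(msg):
        # Encoder ticks per wheel, published by the Arduino through rosserial.
        rospy.loginfo("encoder ticks: %s", msg.data)

    rospy.init_node("base_interface_example")
    # Topic names below are illustrative; the real names live in the firmware.
    rospy.Subscriber("/encoder_ticks", Int32MultiArray, encoder_callback)
    vel_pub = rospy.Publisher("/desired_motor_vels", Float32MultiArray, queue_size=10)

    rate = rospy.Rate(20)
    while not rospy.is_shutdown():
        # Command all four wheels to the same example velocity.
        vel_pub.publish(Float32MultiArray(data=[0.5, 0.5, 0.5, 0.5]))
        rate.sleep()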
The Wi-Fi dongle was needed with this model of the Jetson in order to SSH into it; this made working with the Jetson much easier because everything could be done without a monitor. Code could even be uploaded to the Arduino without using the command-line interface. Being able to upload code to the Arduino without a monitor also made it easier to observe the motor response to different setpoints, which made PID tuning more accessible.
The ROS mapping architecture is shown above. To create a map of the testbed environment using the LIDAR scanner, the node/topic architecture above was implemented. In essence, the gmapping package uses the LIDAR scan data and the odometry data from the wheel encoders to build the map: the odometry determines how far the robot has moved since mapping started, and the scan data is recorded relative to the robot's moving position. The robot was driven around the testbed under teleoperative control while collecting scan data, and the map_server package was then used to save the map image to a .pgm file and the map metadata to a .yaml file.
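The teleoperative driving used for mapping boils down to publishing geometry_msgs/Twist messages on /cmd_vel; a bare-bones keyboard version, simplified relative to whatever teleop tool was actually used, might look like this:

    import rospy
    from geometry_msgs.msg import Twist

    # Map key presses to (linear x, linear y, angular z); the mecanum wheels
    # allow sideways motion, hence the non-zero linear y entries.
    KEYS = {"w": (0.2, 0.0, 0.0), "s": (-0.2, 0.0, 0.0),
            "a": (0.0, 0.2, 0.0), "d": (0.0, -0.2, 0.0),
            "q": (0.0, 0.0, 0.5), "e": (0.0, 0.0, -0.5)}

    rospy.init_node("teleop_example")
    pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)

    while not rospy.is_shutdown():
        key = input("w/a/s/d/q/e (blank to stop): ").strip()
        vx, vy, wz = KEYS.get(key, (0.0, 0.0, 0.0))
        twist = Twist()
        twist.linear.x, twist.linear.y, twist.angular.z = vx, vy, wz
        pub.publish(twist)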
Note that in both the diagrams above and below, the /tf topic is used by nearly all of the mapping/localization/navigation nodes, so for simplicity it is not drawn connected to each of them. This /tf topic describes the transforms between each link and joint of the robot, as well as between the map frame, the robot, and the odometry frame that describes the robot's movement.
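For reference, a node can query these transforms through tf2; the sketch below looks up the robot's pose in the map frame, assuming the conventional frame names map and base_link.

    import rospy
    import tf2_ros

    rospy.init_node("tf_lookup_example")
    buffer = tf2_ros.Buffer()
    listener = tf2_ros.TransformListener(buffer)  # fills the buffer from /tf

    rospy.sleep(1.0)  # allow some transforms to arrive
    try:
        # Latest pose of base_link expressed in the map frame.
        t = buffer.lookup_transform("map", "base_link", rospy.Time(0))
        rospy.loginfo("robot at x=%.2f, y=%.2f",
                      t.transform.translation.x, t.transform.translation.y)
    except (tf2_ros.LookupException, tf2_ros.ConnectivityException,
            tf2_ros.ExtrapolationException) as exc:
        rospy.logwarn("transform not available: %s", exc)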
Communication between the central computer and the microcontroller is facilitated by the rosserial node (not depicted), which sends messages over UART in both directions, allowing the microcontroller to subscribe and publish to topics as if it were executing on the same processor.
The ROS architecture diagram shown above depicts the autonomous operation of the robot once the map has been generated (following the procedure above). In this diagram, the ShipBot_client node begins by parsing the mission file, uses a dictionary of predetermined poses (x, y, theta) to find the pose corresponding to the first station in the mission file (reordered alphabetically), and sends it as a MoveBaseGoal to the move_base action server. The node's global planner creates a global plan from the current estimated pose (determined by the amcl node) to the goal pose (using Dijkstra's algorithm or A*), and the node's local planner then determines specific /cmd_vel messages to drive the robot toward the goal. Both planners update their plans as the state estimate changes.
This state estimate, determined by the amcl node, relies on the Adaptive Monte Carlo Localization algorithm, which uses a particle filter model to store the probability of the robot existing in a given state through a distribution of particles (poses) in the configuration space. This distribution is updated by the odometry from the wheel encoders (transition model) and the LIDAR scan data (observation model). The pose representing the most likely state estimate given the current state probability distribution is used as the state estimate by the navigation code.
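The navigation code can read this state estimate directly from the /amcl_pose topic, which amcl publishes as a geometry_msgs/PoseWithCovarianceStamped; a minimal subscriber sketch is shown below.

    import rospy
    from geometry_msgs.msg import PoseWithCovarianceStamped

    def pose_callback(msg):
        # Most likely pose estimate published by amcl.
        p = msg.pose.pose.position
        rospy.loginfo("estimated pose: x=%.2f, y=%.2f", p.x, p.y)

    rospy.init_node("amcl_pose_example")
    rospy.Subscriber("/amcl_pose", PoseWithCovarianceStamped, pose_callback)
    rospy.spin()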
In order for either the amcl or move_base node to localize or navigate within the testbed, a map is needed for reference, since goal poses are given relative to the origin of the pre-generated map. The map server publishes the map given the map's .yaml file, which itself references the map's .pgm image file. Images in this form mark each pixel of the robot's workspace as free, occupied, or unknown. The team allowed the robot to navigate within the unknown portions of the map because the surrounding obstacles are sparse, which leaves sections of the map unexplored by the LIDAR scanner.
Once a goal pose is reached, the ShipBot_client adjusts the position of the base and arms using feedback from the camera data. This feedback comes in the form of positional errors. Horizontal error between the center of the image and the center of the device to be manipulated is corrected by moving the mobility platform left and right. Vertical error is corrected by the vertical linear actuator. Depth error would in principle be corrected by the horizontal actuator, but the team was unable to reliably determine depth from the RGB camera. In future iterations, the team hopes to use a depth camera, such as an Intel RealSense, to obtain accurate depth feedback. The camera processing node (valve_angle_finder) also reports the state of the device as an angle (in the case of the spigot and rotary valves), which is used to determine the relative angular displacement needed to reach the absolute desired angle given in the mission file. This camera feedback is used to align the end effector with the device and then manipulate the device to the desired state specified in the mission file.
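The correction logic amounts to simple proportional control on these pixel errors. The sketch below illustrates the idea; the gains, topic names, and actuator command message are illustrative assumptions, not the team's exact interface.

    import rospy
    from geometry_msgs.msg import Twist
    from std_msgs.msg import Float32, Float32MultiArray

    # Illustrative proportional gains, not the team's tuned values.
    K_BASE = 0.002   # m/s of sideways base velocity per pixel of horizontal error
    K_LIFT = 0.5     # actuator command units per pixel of vertical error

    rospy.init_node("visual_servo_example")
    cmd_pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
    lift_pub = rospy.Publisher("/vertical_actuator_cmd", Float32, queue_size=1)

    def error_callback(msg):
        # msg.data is assumed to hold [horizontal_error_px, vertical_error_px].
        horiz_err, vert_err = msg.data[0], msg.data[1]
        twist = Twist()
        twist.linear.y = -K_BASE * horiz_err   # strafe the mecanum base left/right
        cmd_pub.publish(twist)
        lift_pub.publish(Float32(data=-K_LIFT * vert_err))  # raise/lower the arm

    rospy.Subscriber("/device_pixel_error", Float32MultiArray, error_callback)
    rospy.spin()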
The ShipBot_client node then sends a MoveBaseGoal with the origin pose (0, 0, pi) to the move_base action server. This is done to avoid the pipe intrinsically, by pulling straight into and out of stations rather than translating sideways between them. This approach works fairly reliably and requires no extra sensors or CV software.
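Sending such a goal from Python uses the standard move_base actionlib interface; a minimal sketch targeting the origin pose (0, 0, pi) is shown below (the node structure is simplified relative to the actual ShipBot_client).

    import rospy
    import actionlib
    from math import pi
    from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal
    from tf.transformations import quaternion_from_euler

    rospy.init_node("send_goal_example")
    client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
    client.wait_for_server()

    # Origin pose (x, y, theta) = (0, 0, pi), as used when returning between stations.
    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = "map"
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = 0.0
    goal.target_pose.pose.position.y = 0.0
    _, _, qz, qw = quaternion_from_euler(0.0, 0.0, pi)
    goal.target_pose.pose.orientation.z = qz
    goal.target_pose.pose.orientation.w = qw

    client.send_goal(goal)
    client.wait_for_result()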
After completing all assigned missions, the robot returns to the origin to await its next mission file.
Below is the overall mechanical system, which includes the base, arm, and gripper sub-assemblies.
The main frame of the mobility platform is shown below. The platform is equipped with four mecanum wheels, which grant the ShipBot a high level of mobility within the testbed. The platform attaches to the bottom of the robotic arm and houses almost all of the electronics of the ShipBot. A YDLIDAR X2L LIDAR unit is mounted at the front of the ShipBot underneath the main frame. The robotic arm is positioned at the back end of the robot to move the center of mass backwards; this positioning ensures that the ShipBot will not tip even when both linear actuators on the robotic arm are at maximum extension. The electronics reside on an acrylic panel fixed above the front of the main frame. The laser-cut 180 mm x 220 mm x ⅛ inch acrylic panel has mounting holes for the Jetson Nano, Arduino Mega 2560, two motor drivers, two buck converters, and the power bus. The arrangement of the electronics maximizes space utilization and wiring cleanliness. The two batteries are secured onto both sides of the back of the mobility platform with zip ties, which keep the batteries in place while the robot is in motion and allow for quick and easy battery changes. Finally, an acrylic case covers the front of the ShipBot and improves its overall appearance.
The design of the robotic arm follows a Design for Manufacturing (DFM) philosophy: every component of the arm can either be purchased directly from McMaster-Carr or manufactured quickly and easily by laser cutting. The robotic arm can be roughly separated into two sections. The vertical arm consists of a 4040 t-slotted aluminum extrusion beam rigidly attached to the mobility platform, a linear bearing that slides freely on the aluminum beam, two laser-cut acrylic panels that attach to the linear bearing, four threaded standoffs and a shorter aluminum extrusion sandwiched between the acrylic panels, and a linear actuator that drives the vertical arm assembly. The forearm assembly's aluminum beam is rigidly attached to the shorter beam on the vertical arm; the structure of the forearm is similar to that of the vertical arm, but oriented horizontally. The end effector (gripper) then attaches to the end of the forearm to complete the robotic arm assembly. This design allows stable, supported actuation of the end effector along both the z-axis and the x-axis.
The gripper sub-assembly is designed to have two degrees of freedom. The initial design relied on two motors to rotate a single gripper for both vertical and horizontal operations; because this design was more complex and required a larger gripper assembly, a simpler design was adopted. In the current design, bevel gears drive two grippers facing in two directions, so only one motor is needed to operate in both directions at the same time. The gripper relies on friction to grip circular valves tightly, and high-friction material is added to the gripper disk surface to increase that friction. A protrusion mounted at the top of the gripper disk flips the breaker box switches. The physical gripper assembly and a test gripper are shown below.
The webcam provided computer vision capable of detecting the center of a valve, as well as the angle at which it is positioned. This data was fed back to the central computer over a ROS topic so that small corrective maneuvers could be performed with the linear actuators and wheels. The webcam was mounted on top of the forward-facing gripper sub-assembly for a wider, unobstructed view; the camera mount was 3D printed.
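A simplified version of this detection, using OpenCV's Hough circle transform for the valve center and image moments for the handle angle, is sketched below; the thresholds and parameters are assumptions, and the team's valve_angle_finder node may use a different pipeline.

    import cv2
    import numpy as np

    def find_valve(frame_bgr):
        """Return ((cx, cy), angle_deg) for the most prominent circular valve, or None."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        gray = cv2.medianBlur(gray, 5)
        circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.5, minDist=100,
                                   param1=100, param2=40, minRadius=20, maxRadius=150)
        if circles is None:
            return None
        cx, cy, r = np.round(circles[0, 0]).astype(int)

        # Crude handle-angle estimate: threshold the region inside the valve and
        # take the orientation of the resulting blob from its image moments.
        roi = gray[max(cy - r, 0):cy + r, max(cx - r, 0):cx + r]
        _, mask = cv2.threshold(roi, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        m = cv2.moments(mask, binaryImage=True)
        angle_deg = 0.5 * np.degrees(np.arctan2(2 * m["mu11"], m["mu20"] - m["mu02"]))
        return (cx, cy), angle_deg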
The LIDAR hardware used was the YDLIDAR X2L unit (shown below), which features planar scanning, an adjustable angle range, drivers, and ROS integration. This allowed for seamless integration with our localization code, with the ydlidar_lidar_publisher node publishing to the /scan topic that is subscribed to directly by the amcl and move_base nodes. The unit, mounted upside down underneath the mobility platform, required software tuning to ensure the published LaserScan was oriented correctly and accurately conveyed obstacle positions relative to the robot's center.
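Conceptually, the correction for the upside-down mounting mirrors the scan about the sensor's x-axis, which reverses the sweep direction. If this is not handled by the driver's own parameters or by the mounting transform in /tf, a small republisher like the sketch below is one way to do it; the /scan_raw input topic name is an assumption.

    import rospy
    from sensor_msgs.msg import LaserScan

    rospy.init_node("scan_flip_example")
    pub = rospy.Publisher("/scan", LaserScan, queue_size=1)

    def scan_callback(scan):
        # Mirroring an upside-down planar LIDAR about its x-axis reverses the
        # direction of the angular sweep, so reverse the range/intensity arrays.
        scan.ranges = list(reversed(scan.ranges))
        scan.intensities = list(reversed(scan.intensities))
        pub.publish(scan)

    rospy.Subscriber("/scan_raw", LaserScan, scan_callback)
    rospy.spin()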