Design

Requirements

Functional Architecture

In the above figure, we see that there are four primary components to our functional architecture: Power, Sensors, Motion Control, and Central Processing. Due to the COVID-19 pandemic, we were unable to properly set up battery operation. As a result, our system is tethered to AC power. The Central Processing component is further broken down in our ROS Software Architecture diagram, shown below.

Cyber-Physical Architecture

In the figure, we see the Cyber-Physical architecture of RoboDutchman. There are two routers: a local router and a main router. The local router allows the Jetson Nano to communicate with both the Hebi modules and the main router. The main router is connected to the internet, which allows us to set up SSH and NX protocols for remote access to the Jetson Nano.

Software Subsystems

Our software architecture shown above (designed with ROS in mind) is composed of six primary components: Base Localizer, Base Planner, Target Localizer, Central Planner, Arm Localizer, and Arm Planner. These packages work in the following way (a sketch of how they might be connected as ROS nodes follows the list):


  1. Base Localizer: using information from the onboard sensors and the wheel actuators, this package provides an estimated position of the robot relative to some known map.

  2. Base Planner: using the position estimate from the Base Localizer and target position information from the Central Planner, this package creates and executes a trajectory for the wheel actuators.

  3. Target Localizer: using the information from the camera, this package estimates the position and state (valve/breaker type and setting) of the object in view of the camera.

  4. Central Planner: given information from the Target, Base, and Arm Localizers, this package acts as the central logic for the robot, sending commands to each of the other two planners.

  5. Arm Localizer: given information from the arm actuators, this package provides an estimated position of the robot's end effector.

  6. Arm Planner: using the position estimate from the Arm Localizer and target position information from the Central Planner, this package creates and executes a trajectory for the arm actuators.
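To make the data flow concrete, below is a minimal sketch of how the Central Planner could be wired to the other packages as a ROS node. The topic names, message types, and forwarding logic are assumptions for illustration, not the actual RoboDutchman interfaces.

```python
#!/usr/bin/env python
# Illustrative sketch only: topic names and message types are assumed,
# not taken from the RoboDutchman codebase.
import rospy
from geometry_msgs.msg import Pose2D, PoseStamped

class CentralPlanner(object):
    def __init__(self):
        # Inputs from the three localizers
        rospy.Subscriber('base_localizer/pose', Pose2D, self.base_cb)
        rospy.Subscriber('target_localizer/target', PoseStamped, self.target_cb)
        rospy.Subscriber('arm_localizer/ee_pose', PoseStamped, self.arm_cb)
        # Outputs to the two planners
        self.base_goal_pub = rospy.Publisher('base_planner/goal', Pose2D, queue_size=1)
        self.arm_goal_pub = rospy.Publisher('arm_planner/goal', PoseStamped, queue_size=1)

    def base_cb(self, msg):
        self.base_pose = msg  # latest base pose estimate

    def arm_cb(self, msg):
        self.ee_pose = msg    # latest end-effector pose estimate

    def target_cb(self, msg):
        # The real central logic decides whether to reposition the base or
        # command the arm; here we simply forward the target to the arm.
        self.arm_goal_pub.publish(msg)

if __name__ == '__main__':
    rospy.init_node('central_planner')
    CentralPlanner()
    rospy.spin()
```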

Design Concepts

Robot Chassis

Many of the ShipBots we studied from previous years used an omnidirectional drive for base locomotion. Watching videos of these robots perform, we noticed that this drive style was prone to slipping while the robot interacted with a target on the testbed, and that it required more actuators, increasing power consumption and debugging difficulty. For these reasons, we chose a differential drive with two passive casters, shown in our initial sketch in the figure below. We also wanted the base to serve as an additional rotational degree of freedom for the robot arm, rather than powering another actuator at the base of the arm. Since the base doubles as an arm joint, its shape mattered: the testbed has a pronounced corner that could trap a differential-drive robot, so we made the base circular, allowing the robot to turn in place without getting stuck in the corner.

Differential Drive w/ a Rounded Base

Robot Arm

We decided to use a four-degree-of-freedom robot arm to interact with the targets. As depicted in the figure, the arm has one shoulder joint, one elbow joint, and two wrist joints. Because the shoulder is a single joint, the arm cannot rotate about the vertical axis, so its workspace is planar; positioning the end of the arm at an arbitrary point in 3D space therefore requires moving the entire robot base. While this complicates the software, we made this design decision for two reasons: fewer joints increase stability and reduce the complexity of planning trajectories, which makes the arm perform faster.

Image of 4 DOF Robot Arm
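Because the workspace is planar, the in-plane portion of the arm's inverse kinematics reduces to the standard three-link planar problem (shoulder, elbow, wrist pitch). The sketch below uses assumed link lengths purely for illustration; it is not the RoboDutchman arm code.

```python
import math

# Assumed link lengths in meters (illustrative only).
L1, L2, L3 = 0.30, 0.30, 0.10

def planar_ik(x, y, phi, elbow_up=True):
    """Shoulder, elbow, and wrist-pitch angles that place the end-effector
    at (x, y) with orientation phi, all expressed in the arm's plane.
    Returns None if the target is outside the planar workspace."""
    # Step back from the end-effector to locate the wrist joint.
    xw = x - L3 * math.cos(phi)
    yw = y - L3 * math.sin(phi)
    # Two-link IK for the shoulder and elbow.
    c2 = (xw * xw + yw * yw - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    if abs(c2) > 1.0:
        return None
    s2 = math.sqrt(1.0 - c2 * c2) if elbow_up else -math.sqrt(1.0 - c2 * c2)
    theta2 = math.atan2(s2, c2)
    theta1 = math.atan2(yw, xw) - math.atan2(L2 * s2, L1 + L2 * c2)
    # The wrist pitch supplies whatever orientation remains.
    theta3 = phi - theta1 - theta2
    return theta1, theta2, theta3

# Example: reach 45 cm out and 20 cm up with a level end-effector.
print(planar_ik(0.45, 0.20, 0.0))
```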

End-Effector

As we observed ShipBots from recent years, we found that the end-effector known as the "Granular Jammer" was quite popular: it is relatively simple to make and conforms to any surface it is pressed against. Another end-effector we found interesting came from Team E - Pirates from Spring 2019, which used a spring-loaded multi-pin design. After a trade study on both (shown in the table), we found that the spring-loaded multi-pin end-effector would be easier to fabricate, would give us enough dexterity to complete the course, and, most importantly, is a passive element, meaning fewer components to power, debug, and integrate into the central planner. A rudimentary sketch of the end-effector we chose is shown in the figure.

Cross-Section Sketch of the Spring-Loaded Multi-Pin End-Effector

Base Localization

In order to determine the position of the robot on the course, we decided to localize relative to a map generated by the robot itself. To build this map, we fused dead-reckoned estimates from the Hebi actuators with LIDAR scans from the RPLIDAR A1 in a SLAM algorithm, using the gmapping ROS package for the mapping portion of localization. To localize within this map, we fused the dead-reckoned estimates with the LIDAR data in a particle filter, using the AMCL ROS package.
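The dead-reckoned portion of that estimate is just differential-drive odometry integrated from the wheel actuators' reported velocities; gmapping and AMCL then correct the resulting drift against the LIDAR scans. Below is a minimal sketch of that integration step, with placeholder values for the wheel radius and track width rather than the measured RoboDutchman geometry.

```python
import math

# Placeholder geometry (meters); not the measured robot dimensions.
WHEEL_RADIUS = 0.05
TRACK_WIDTH = 0.40

def integrate_odometry(x, y, theta, w_left, w_right, dt):
    """Advance the dead-reckoned pose one time step from the left and
    right wheel angular velocities (rad/s) of a differential drive."""
    v = WHEEL_RADIUS * (w_right + w_left) / 2.0              # forward velocity
    omega = WHEEL_RADIUS * (w_right - w_left) / TRACK_WIDTH  # yaw rate
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta
```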

Base Planning

In planning the base motion from some state A (position and orientation) to some other state B, we decided to take a three-step approach. First, the robot would pre-rotate to the heading that best allows it to drive straight toward the target. Then, the robot would move either straight forward or straight backward toward objective B. Afterwards, the robot would post-rotate until its orientation matched that of objective B. To incorporate feedback into the base planning loop, a proportional controller on orientation error was used to determine the angular velocity sent to the robot during the rotation phases, while Pure Pursuit was used on the straight component to accurately follow a straight line.
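A minimal sketch of this three-phase planner is shown below. The gain and the phase-splitting logic are illustrative assumptions; the straight segment would additionally be tracked with Pure Pursuit, which is omitted here.

```python
import math

K_THETA = 1.5  # proportional gain on heading error (illustrative value)

def wrap(angle):
    """Wrap an angle to (-pi, pi]."""
    return math.atan2(math.sin(angle), math.cos(angle))

def rotate_command(current_theta, goal_theta):
    """Angular velocity command used during the pre- and post-rotation phases."""
    return K_THETA * wrap(goal_theta - current_theta)

def split_move(start, goal):
    """Split a move from start = (x, y, theta) to goal = (x, y, theta)
    into pre-rotation heading, straight-line distance, and final heading."""
    dx, dy = goal[0] - start[0], goal[1] - start[1]
    drive_heading = math.atan2(dy, dx)  # face the goal (or its reverse, to back up)
    distance = math.hypot(dx, dy)       # length of the straight segment
    final_heading = goal[2]             # post-rotate to match objective B
    return drive_heading, distance, final_heading
```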

Object Detection and Classification

For the vision system, we decided to use the Intel RealSense D435i, which has a single 1280x1024 camera and an IR sensor for measuring depth, along with an IMU that reports the motion of the camera. We use the camera and IR sensor to measure the depth of each pixel and obtain its corresponding 3D position, and we can also generate a point cloud from the 2D pixels and depth values. During the final stages, however, we ended up using the camera mainly for target bounding-box detection and classification.
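As an illustration of how the depth stream can turn a detected bounding box into a 3D target position, the sketch below deprojects the box center through the camera intrinsics using the pyrealsense2 API. Stream settings and the example pixel are assumptions, not the parameters used on the robot.

```python
import pyrealsense2 as rs

# Start depth and color streams (resolutions/frame rates are illustrative).
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)
align = rs.align(rs.stream.color)  # align depth pixels to the color image

frames = align.process(pipeline.wait_for_frames())
depth = frames.get_depth_frame()

u, v = 320, 240  # e.g. the center of a detected bounding box
dist = depth.get_distance(u, v)  # depth at that pixel, in meters
intrin = depth.profile.as_video_stream_profile().get_intrinsics()
point = rs.rs2_deproject_pixel_to_point(intrin, [u, v], dist)
print(point)  # [X, Y, Z] of the target in the camera frame

pipeline.stop()
```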