Implementation

System Design

UML Diagram of TesRoo Algorithm

We had three main classes to run our algorithm and simulation. Above is a UML diagram of our system design. The blue boxes contain the class names, the white boxes the attributes, and the yellow boxes the relevant functions for each class, with the most important functions bolded. Note that not all functions are included in this UML diagram; we've included the ones most relevant to our algorithm. In addition, the GitHub repository with all of our code, including these three classes, is included for convenience.

TesRoo

This is the main class used to develop the key algorithms of TesRoo. The two main functions are explore() and vacuum().

explore() uses the original frontier-based exploration technique. Specifically, we start off with an occupancy grid and group frontier points into frontier groups. A frontier point is an observed or 'known' free-space cell in the occupancy grid that borders unexplored space. Frontier points belong to the same frontier group when they are connected (in the graph-theory sense of connected). The exploration algorithm selects the closest frontier group based on its centroid and travels to that frontier group (by moving to the closest frontier point within it). We plan the path with A* and select a few waypoints along the created path.
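
The sketch below illustrates the nearest-frontier-group idea described above. It is not the project's actual OccGrid code; the grid encoding (-1 unknown, 0 free, 100 occupied) and the function names are assumptions for illustration.

```python
# Minimal sketch of nearest-frontier-group selection (illustrative, not the
# actual implementation). Assumed encoding: -1 unknown, 0 free, 100 occupied.
from collections import deque
import numpy as np

def frontier_cells(grid):
    """Known free cells that border at least one unknown cell."""
    cells = set()
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != 0:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr, nc] == -1:
                    cells.add((r, c))
                    break
    return cells

def frontier_groups(cells):
    """Group connected frontier cells with a BFS flood fill (8-connectivity)."""
    groups, seen = [], set()
    for start in cells:
        if start in seen:
            continue
        group, queue = [], deque([start])
        seen.add(start)
        while queue:
            r, c = queue.popleft()
            group.append((r, c))
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    nbr = (r + dr, c + dc)
                    if nbr in cells and nbr not in seen:
                        seen.add(nbr)
                        queue.append(nbr)
        groups.append(group)
    return groups

def nearest_frontier_centroid(grid, robot_rc):
    """Pick the frontier group whose centroid is closest to the robot."""
    groups = frontier_groups(frontier_cells(grid))
    if not groups:
        return None  # no frontiers left: exploration is finished
    centroids = [np.mean(g, axis=0) for g in groups]
    dists = [np.hypot(*(cen - np.array(robot_rc))) for cen in centroids]
    return centroids[int(np.argmin(dists))]
```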

vacuum() uses a more straightforward DFS grid-traversal approach in which cells that have not yet been traversed are marked as empty. When the next point to vacuum is far away, we use A* and waypoints to travel to that point without hitting obstacles.
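
A minimal sketch of the DFS coverage idea behind vacuum() is shown below; it is not the actual implementation, and the (row, col) cell representation is an assumption.

```python
# Illustrative DFS coverage order over the free cells of the grid.
# free_cells is a set of (row, col) cells known to be free; start is the
# robot's current cell.
def vacuum_order(free_cells, start):
    """Visit every free cell depth-first; returns the visiting order."""
    visited, order, stack = set(), [], [start]
    while stack:
        cell = stack.pop()
        if cell in visited or cell not in free_cells:
            continue
        visited.add(cell)
        order.append(cell)
        r, c = cell
        # Push the 4-connected neighbours; unvisited free ones are covered next.
        stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return order
```

Consecutive cells in the returned order are usually adjacent; when the DFS backtracks and the next cell is far away, the robot plans to it with A* and follows a few waypoints, as described above.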

OccGrid

This class contains the occupancy grid for TesRoo, as well as the implementation of our frontier exploration algorithm. The key function in this class is getNearestFrontierCentroid(), which returns the nearest frontier centroid. grid2map() and map2grid() translate between grid and map coordinates.
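
For illustration, the kind of conversion grid2map() and map2grid() perform might look like the sketch below, assuming the usual occupancy-grid conventions (a world-frame origin for cell (0, 0) and a resolution in metres per cell); this is not the project's exact code.

```python
# Illustrative grid <-> map coordinate conversion (assumed conventions).
def grid2map(row, col, origin_x, origin_y, resolution):
    """Grid indices -> world (map-frame) coordinates of the cell centre."""
    x = origin_x + (col + 0.5) * resolution
    y = origin_y + (row + 0.5) * resolution
    return x, y

def map2grid(x, y, origin_x, origin_y, resolution):
    """World (map-frame) coordinates -> grid indices."""
    col = int((x - origin_x) / resolution)
    row = int((y - origin_y) / resolution)
    return row, col
```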

GridCell

This class stores information about each cell in the occupancy grid. All the attributes for each cell are listed in the UML diagram.
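
As a rough illustration only, a cell class of this kind could look like the sketch below; the actual attribute names are those in the UML diagram, and the fields here are assumptions, not the real ones.

```python
# Hypothetical GridCell sketch; field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class GridCell:
    row: int                  # grid index of the cell
    col: int
    occupancy: int = -1       # e.g. -1 unknown, 0 free, 100 occupied
    is_frontier: bool = False # borders unexplored space
    vacuumed: bool = False    # whether the cell has been covered yet
```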

RQT Graphs

For this RQT graph, we used existing ROS packages and infrastructure for our simulation. To get everything working together, we used five separate launch files and a topic relay (a minimal relay sketch follows the list below). We ultimately didn't tie everything into one master launch file because we were still trying many different approaches. We had one launch file for each of the following:

  • gazebo simulation with turtlebot3 waffle in world model

  • image processing

  • move_base, a controller node that directs movement of the turtlebot3 in gazebo

  • rtabmap_ros, a real-time appearance-based SLAM algorithm

  • explore_lite, a greedy frontier exploration node which provides goal states
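
The relay mentioned above simply republishes the SLAM map on the topic the exploration node listens to. The sketch below shows the idea; the topic names are assumptions, and the same effect can be achieved with the standard topic_tools relay.

```python
#!/usr/bin/env python
# Illustrative topic relay, assuming rtabmap publishes its occupancy grid on
# /rtabmap/grid_map and the exploration node expects it on /map.
import rospy
from nav_msgs.msg import OccupancyGrid

def main():
    rospy.init_node('grid_map_relay')
    pub = rospy.Publisher('/map', OccupancyGrid, queue_size=1, latch=True)
    rospy.Subscriber('/rtabmap/grid_map', OccupancyGrid, pub.publish)
    rospy.spin()

if __name__ == '__main__':
    main()
```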

Overall, the simulation corresponding to the bottom RQT graph works as follows:

  1. Launch all of our nodes

  2. Our cameras in the Gazebo simulation provide raw images by publishing them to the corresponding topics in the RQT graph.

  3. The image processing nodes process these raw images and provide them to our visual SLAM algorithm, rtabmap.

  4. Rtabmap builds a point cloud from the images, which is then converted into a grid map.

  5. The grid map is relayed to explore_lite, a frontier exploration node, which generates frontier points.

  6. These frontier points are passed as goal points to move_base, which moves the TurtleBot3 Waffle in the Gazebo simulation (a sketch of sending such a goal is shown after this list).

  7. Steps 2-6 are repeated until there are no more frontier points.

  8. Once the map has been generated, we perform a depth-first search on a graph version of our map for vacuuming. Locations that should be avoided are omitted from the graph version of our map.
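
For step 6, the sketch below shows roughly how a node hands a goal pose to move_base (explore_lite does the equivalent internally). It is for illustration only; the example coordinates are made up, and the frame and action names follow move_base defaults.

```python
#!/usr/bin/env python
# Illustrative: sending a navigation goal to move_base with actionlib.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

def drive_to(x, y):
    """Send an (x, y) goal in the map frame and wait for move_base to finish."""
    client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
    client.wait_for_server()
    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = 'map'
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = x
    goal.target_pose.pose.position.y = y
    goal.target_pose.pose.orientation.w = 1.0  # heading left to the planner
    client.send_goal(goal)
    client.wait_for_result()

if __name__ == '__main__':
    rospy.init_node('frontier_goal_example')
    drive_to(1.0, 0.5)  # example frontier centroid in map coordinates
```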

For this RQT graph, we implemented our own algorithms for frontier exploration, vacuuming, and controlling the simulation. Specifically, our algorithms are implemented in the red 'turtlebot_controller' node; implementation details are described above.
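
As a rough sketch of the control flow inside the turtlebot_controller node (see the repository for the real node), the skeleton below subscribes to the SLAM map and alternates between the exploration and vacuuming phases described above; the topic name and loop structure are assumptions.

```python
#!/usr/bin/env python
# Rough skeleton of the turtlebot_controller control flow (illustrative).
import rospy
from nav_msgs.msg import OccupancyGrid

latest_grid = None

def map_cb(msg):
    global latest_grid
    latest_grid = msg  # most recent occupancy grid from SLAM

def main():
    rospy.init_node('turtlebot_controller')
    rospy.Subscriber('/map', OccupancyGrid, map_cb)
    rate = rospy.Rate(1)
    # Phase 1: frontier exploration -- repeatedly drive to the nearest
    # frontier centroid until no frontier groups remain.
    # Phase 2: vacuuming -- DFS coverage of the finished map, skipping
    # cells flagged as off-limits (e.g. detected wires).
    while not rospy.is_shutdown():
        if latest_grid is None:
            rate.sleep()
            continue
        # ... explore()/vacuum() logic from the TesRoo class goes here ...
        rate.sleep()

if __name__ == '__main__':
    main()
```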

Wire Sensing

To address the major complaint of robotic vacuums "eating" and tangling wires and charging cables, we added a wire detection component so that TesRoo can avoid interacting with them. We created a dataset of images with and without wires, augmented it, and then fine-tuned various pre-trained models on it. To access our dataset, visit our GitHub. Our best model is the pre-trained AlexNet from PyTorch fine-tuned on our dataset, which achieves 91.1% accuracy. In the future, we'd like to expand our dataset further and also add image segmentation so that TesRoo avoids only the wires themselves and can still clean the surrounding areas.
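
For illustration, fine-tuning a pre-trained AlexNet for the binary wire / no-wire classification could look like the sketch below. The hyperparameters and data-loading details are assumptions, not the settings that produced the 91.1% result.

```python
# Illustrative AlexNet fine-tuning for wire vs. no-wire classification.
import torch
import torch.nn as nn
from torchvision import models

model = models.alexnet(pretrained=True)
model.classifier[6] = nn.Linear(4096, 2)   # replace the head: 2 classes

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_epoch(loader, device='cpu'):
    """One pass over a DataLoader yielding (image batch, label batch)."""
    model.to(device)
    model.train()
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```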

Hardware Design

Because most of our project was dedicated to improving the path planning and mapping components of the traditional vacuum robot rather than the vacuuming component itself, we opted to create a physical proof-of-concept of our design: a remote-controlled car for testing the frontier exploration algorithms that we had designed and developed in the Gazebo simulation.

Parts Used

Webcams

2 standard webcams to allow for cheap stereo image processing.

Raspberry Pi

Runs all of the algorithms: image processing, visual SLAM, frontier exploration, and the controller.

Arduino

An Arduino for motor control.

Motor driver

L298N motor driver board

DC motors

2 DC motors to actuate motion and drive the car