Results + Demo

Comparing Laser vs. Stereo

In the left video, we see TesRoo exploring and mapping the environment using a LiDAR sensor.

In the right video, we see TesRoo exploring and mapping the environment using two stereo cameras.

The LiDAR sensor maps out the environment much faster since it senses a full 360 degrees, while the two stereo cameras have a much narrower field of view. However, the inefficiency of the stereo camera approach can be greatly reduced by placing additional stereo cameras around TesRoo. Furthermore, the slower exploration is not a big deal for the actual use case of the exploration algorithm: we only need to explore the environment once to create the map, and after that, TesRoo can simply vacuum.
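As a rough sanity check on the multi-camera idea, the back-of-envelope calculation below estimates how many stereo cameras it would take to match the LiDAR's 360-degree coverage. The 87-degree horizontal field of view is an assumed figure for a typical stereo depth camera, not a measurement of our hardware.

```python
import math

# Assumed sensor parameters (not measured from our hardware):
# a typical stereo depth camera covers roughly 87 degrees horizontally,
# while a 2D LiDAR sweeps the full 360 degrees.
STEREO_HFOV_DEG = 87.0

def cameras_for_full_coverage(hfov_deg: float, target_deg: float = 360.0) -> int:
    """Smallest number of evenly spaced cameras whose combined
    horizontal FOV tiles the target angle (ignoring overlap)."""
    return math.ceil(target_deg / hfov_deg)

print(cameras_for_full_coverage(STEREO_HFOV_DEG))  # -> 5 cameras to approximate LiDAR coverage
```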

Other Environments + Multi-Session SLAM

In the video on the left, our turtlebot does quite well, mapping 95 percent of a roughly 1000-square-foot space in under 16 minutes of wall-clock time. The green points in the video are frontier points. This space was quite simple for the turtlebot to map since there were no obstacles. After the turtlebot had mapped most of the space, we ended the program and saved the state of the map.
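For context, the sketch below shows the standard definition of a frontier cell that exploration algorithms like ours rely on: a free cell in the occupancy grid that borders unknown space. The grid values follow the common ROS occupancy-grid convention; this is a generic illustration, not our exact implementation.

```python
import numpy as np

FREE, OCCUPIED, UNKNOWN = 0, 100, -1  # ROS-style occupancy grid values

def find_frontier_cells(grid: np.ndarray) -> np.ndarray:
    """Return (row, col) indices of frontier cells: free cells that
    border at least one unknown cell."""
    free = grid == FREE
    unknown = grid == UNKNOWN
    # Dilate the unknown mask by one cell in the 4 cardinal directions.
    near_unknown = np.zeros_like(unknown)
    near_unknown[1:, :] |= unknown[:-1, :]
    near_unknown[:-1, :] |= unknown[1:, :]
    near_unknown[:, 1:] |= unknown[:, :-1]
    near_unknown[:, :-1] |= unknown[:, 1:]
    return np.argwhere(free & near_unknown)

# Example: a tiny grid with a known free region next to unexplored space.
grid = np.array([[0, 0, -1, -1],
                 [0, 0, -1, -1],
                 [0, 0, 100, -1],
                 [0, 0, 0, -1]])
print(find_frontier_cells(grid))  # -> [[0 1] [1 1] [3 2]]
```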

In the video on the right, we start up from the same saved map as before, but with obstacles now in place. As the first few minutes show, the map state was saved very well and our turtlebot localizes itself immediately. We then reset the map around the 2-minute mark to see how frontier exploration performs in the same space with obstacles. We immediately noticed the weakness of visual SLAM in avoiding obstacles that lie in the ground plane, close to the floor. From the start, our turtlebot gets stuck in the brown grass, and after freeing itself, it gets stuck on top of rocks around the 10:30 mark. This problem of detecting obstacles on the ground is the same issue that plagues robot vacuums; visual SLAM as currently constructed likely cannot perform obstacle avoidance to the level that a constrained engineering application would demand.
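A minimal sketch of the save-and-reload workflow behind this multi-session behavior is below, assuming the map state boils down to an occupancy grid plus a last-known pose. The file name and pose format are illustrative; a real visual SLAM stack persists much richer state, such as keyframes and a loop-closure graph.

```python
import numpy as np

def save_session(path: str, grid: np.ndarray, pose_xyt: tuple) -> None:
    """Persist the occupancy grid and (x, y, theta) pose at the end of a run."""
    np.savez(path, grid=grid, pose=np.array(pose_xyt))

def load_session(path: str):
    """Reload the saved map state at the start of the next session."""
    data = np.load(path)
    return data["grid"], tuple(data["pose"])

# Illustrative usage with a blank 200x200 grid and a made-up pose.
save_session("session1.npz", np.zeros((200, 200), dtype=np.int8), (1.0, 2.0, 0.5))
grid, pose = load_session("session1.npz")
```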

We also tested our exploration algorithm and mapping in a cafe environment. We noticed that our SLAM algorithm wasn't good at detecting short raised flooring, like the brown flooring.

The blue points on the left are frontier points detected by the algorithm, and the green points are candidate frontier points to travel to.
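Given a set of detected frontier cells, one simple way to pick the next point to travel to is the greedy nearest-frontier policy sketched below; our actual selection may weigh factors beyond pure distance.

```python
import numpy as np

def choose_goal(frontiers: np.ndarray, robot_rc: tuple) -> tuple:
    """Pick the frontier cell nearest the robot as the next goal.
    `frontiers` is an (N, 2) array of (row, col) cells, e.g. from the
    find_frontier_cells sketch above."""
    dists = np.linalg.norm(frontiers - np.array(robot_rc), axis=1)
    return tuple(int(v) for v in frontiers[np.argmin(dists)])

frontiers = np.array([[0, 1], [1, 1], [3, 2]])
print(choose_goal(frontiers, (3, 0)))  # -> (3, 2), the closest frontier
```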

Wire Sensing

Our best wire sensing model attained 91.1% accuracy on our augmented test set. Some examples are below.

Correctly detects no wires: even though the long dark shadows resemble them.

Correctly detects no wires: despite home office background where there are often wires present.

Correctly detects wire: despite it being messy.

Correctly detects wire: despite it being white.

The model is quite good at identifying wires without flagging visually similar objects. However, it does struggle with light-colored wires, very thin ones, and wires that are partially obscured. This is demonstrated on the left with images that were mistakenly classified as containing no wires.
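A plausible reconstruction of such a classifier is sketched below: a pretrained ResNet-18 fine-tuned for a binary wire/no-wire decision. The backbone, input size, and preprocessing here are assumptions, not the exact configuration of our 91.1% model.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Assumed setup: fine-tune a pretrained ResNet-18 backbone with a
# two-class head. Our actual model may differ in architecture and training.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # classes: [no_wire, wire]
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def predict(image_path: str) -> str:
    """Classify a single image as containing a wire or not."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        logits = model(x)
    return ["no_wire", "wire"][int(logits.argmax(dim=1))]
```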

Hardware Proof-of-Concept

For our hardware proof of concept, we initially wrote a controller to move the car. However, our car wasn't performing as well as we wanted, so we switched to a different car platform and added a sonar sensor, since they cost about $1 and most robot vacuums already have them anyway. The videos demonstrate a wheeled robot driving around without crashing into objects too often, as well as the possibilities of using VSLAM. However, our code is still somewhat flawed, since it does not reliably detect whether the car can fit through a small window.
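To illustrate how cheaply the sonar fits into the control loop, below is a toy stop-and-turn controller built around a single forward-facing range reading. The 0.25 m threshold and the function names are illustrative, not the values from our actual controller.

```python
import random  # stands in for a real sonar driver in this sketch

SAFE_DISTANCE_M = 0.25  # illustrative safety envelope, not our tuned value

def read_sonar_m() -> float:
    """Placeholder for the real sonar driver; returns range in meters."""
    return random.uniform(0.05, 2.0)

def control_step(drive, turn) -> None:
    """One control-loop iteration: drive forward unless the sonar
    reports an obstacle inside the safety envelope, then turn away."""
    if read_sonar_m() < SAFE_DISTANCE_M:
        turn()   # rotate in place until the path clears
    else:
        drive()  # continue forward

# Illustrative usage with stub actions in place of motor commands.
control_step(lambda: print("forward"), lambda: print("turn"))
```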