I found that the problem I had with the MQTT connection was that, for some reason, the connection between the two Python files dropped after the program loaded the CV model. Because of this, I had to re-establish the connection after the model finished loading.
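Roughly, the fix looks like the sketch below. It assumes a paho-mqtt client; the broker address, model path, and model-loading line are schematic placeholders rather than my actual setup:

import paho.mqtt.client as mqtt
import tensorflow as tf

client = mqtt.Client()
client.connect("localhost", 1883)         # initial connection (placeholder broker)

model = tf.saved_model.load("model_dir")  # loading the CV model is when the connection drops

client.reconnect()                         # re-establish the connection after the model is loaded
client.loop_start()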
The next problem I encountered was getting the robot to actually move instead of just printing something like "move forward". The Python movement files I wrote drive the robot by publishing to Kobuki's velocity topic using rospy. However, when I tried to implement that here, I found that the line initializing the rospy node would hang forever.
I had no idea why this was happening, so after trying many things I decided to scrap the MQTT connection entirely and put everything in one file. This, however, did not work. I eventually found that, for some reason, the rospy node only works when I use:
#!/usr/bin/env python
meanwhile, TensorFlow, along with the rest of my code, only works with:
#!/usr/bin/env python3.11
This was a huge roadblock for me, but after some time I decided that instead of implementing the movement commands within the same program, which would have been faster overall, I would have to call the original Python movement files I wrote each time the robot needs to move.
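In practice the workaround amounts to launching the existing movement scripts as separate processes from the vision program. A rough sketch, where the script names are placeholders for my actual movement files:

import subprocess

# Map each command to one of the standalone movement scripts (names are placeholders).
SCRIPTS = {
    "move forward":  "move_forward.py",
    "move backward": "move_backward.py",
    "turn left":     "turn_left.py",
    "turn right":    "turn_right.py",
}

def execute(command):
    # Run the script with plain "python", since that is the interpreter rospy works with.
    subprocess.run(["python", SCRIPTS[command]])

execute("move forward")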
With this, I finally finished the person-following part of this project, though it still needs a lot of work on speed.
I connected the camera to the Jetson and started implementing the computer vision algorithm on the Turtlebot. I had to upgrade Python to the latest version in order to support TensorFlow, and I also had to install a few more packages to the Jetson.
I had a hard time getting the connection between detect.py and follow.py to work properly. The connection seemed fine and messages could be sent; however, the most important message, the one containing the coordinates of the boxes, was not going through. This was especially odd because the exact same code worked fine on my personal laptop.
I replaced the Odroid on the Turtlebot with the Jetson, and ensured that all of the previous functionalities were still running well, such as the move forward program that I wrote.
The Turtlebot did not have enough voltage output to support the Jetson, so the Jetson needs to be connected to a wall outlet with a power adaptor.
I installed most of the required packages on the Jetson, including ROS, the Kobuki packages, etc.
I flashed a microSD card for the Jetson.
I wrote some more programs in addition to the "move forward" code that I mentioned in my 4/21/2023 report. These additional programs were move backward, turn left, and turn right.
I also decided that the computer vision model would probably run too slowly on the Odroid that I had on the Turtlebot, so I decided to switch to a Nvidia Jetson.
I started figuring out how to use the model to determine when the robot should move forward, move backward, or turn. To do this, I analyzed the model's code, the inner workings of how it adds boxes and labels to images, and a similar project I had found earlier.
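For context, the detector's output is a dictionary of boxes, classes, and scores. Pulling out the best "person" box looks roughly like the sketch below, assuming a TensorFlow Object Detection API style model with results sorted by score ("person" is class 1 in the COCO label map); it is not my exact code:

import numpy as np

PERSON_CLASS_ID = 1  # "person" in the COCO label map

def best_person_box(detections, min_score=0.5):
    # Returns the highest-scoring person box as (ymin, xmin, ymax, xmax)
    # in normalized [0, 1] image coordinates, or None if no person is found.
    boxes   = detections["detection_boxes"][0].numpy()
    classes = detections["detection_classes"][0].numpy().astype(np.int64)
    scores  = detections["detection_scores"][0].numpy()
    for box, cls, score in zip(boxes, classes, scores):
        if cls == PERSON_CLASS_ID and score >= min_score:
            return tuple(box)
    return None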
Eventually, I was able to determine how to get the size of a "person" box, and I added some lines to my original program that print "move forward" if the area of the person box drops below a certain threshold, "move backward" if it grows past one, "turn left" if the person box is on the left side of the screen, and "turn right" if it is on the right.
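The decision logic is essentially thresholding on the box area and its horizontal position. A sketch, with illustrative thresholds rather than the tuned values from my program:

AREA_TOO_SMALL = 0.05   # person looks far away  -> move toward them
AREA_TOO_BIG   = 0.30   # person looks too close -> back up

def decide(box):
    # box = (ymin, xmin, ymax, xmax) in normalized image coordinates
    ymin, xmin, ymax, xmax = box
    area     = (ymax - ymin) * (xmax - xmin)
    center_x = (xmin + xmax) / 2
    if area < AREA_TOO_SMALL:
        return "move forward"
    if area > AREA_TOO_BIG:
        return "move backward"
    if center_x < 0.4:       # person is on the left side of the frame
        return "turn left"
    if center_x > 0.6:       # person is on the right side of the frame
        return "turn right"
    return "stay"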
The overall setup works with two separate programs, detect.py and follow.py. detect.py finds the person and sends the coordinates of their box to follow.py using MQTT, and follow.py then determines what to do based on those coordinates. The two files must run in separate terminals.
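The two halves of the link look roughly like this, assuming paho-mqtt, a broker on localhost, and a made-up topic name (not necessarily the ones I used):

# detect.py side: publish the person box over MQTT.
import json
import paho.mqtt.client as mqtt

pub = mqtt.Client()
pub.connect("localhost", 1883)
box = (0.1, 0.2, 0.8, 0.6)                        # (ymin, xmin, ymax, xmax) from the detector
pub.publish("robot/person_box", json.dumps(box))

# follow.py side: receive the box and decide what to do.
def on_message(client, userdata, msg):
    box = json.loads(msg.payload)
    print(decide(box))                            # decide() as in the earlier sketch

sub = mqtt.Client()
sub.on_message = on_message
sub.connect("localhost", 1883)
sub.subscribe("robot/person_box")
sub.loop_forever()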
I played with the model a little more and figured out how to get it working with video input rather than just single pictures. I tested the video input on many different scenes and it worked well.
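Switching from pictures to video mostly means wrapping the same detection call in a frame-reading loop. A sketch using OpenCV, where detect_fn stands in for the loaded detection model:

import cv2
import numpy as np
import tensorflow as tf

cap = cv2.VideoCapture(0)                 # 0 = default webcam; a video file path also works
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    input_tensor = tf.convert_to_tensor(rgb[np.newaxis, ...], dtype=tf.uint8)
    detections = detect_fn(input_tensor)  # detect_fn is a placeholder for the loaded model
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()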
I started working on the computer vision aspect of my program. I am using a Stereo Labs ZED Mini camera for this part of the project. I followed the tutorials that were on the Stereo Labs website to learn how to use the camera. However, the tutorials for object detection and body mapping were not working for me, so I decided to try to find a different object detection method.
I looked through many different tutorials and models, and I finally found a TensorFlow collection of object detection models that I will be using. I started playing around with the Google Colab that came with these models and tested it out with different pictures that I took. Then I started trying to figure out how to get it to work with a video. I found a similar project that ran video through computer vision, so I looked at its GitHub page for help.
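Testing one of these models on a single picture looks roughly like the sketch below; the specific TF Hub model URL is just an example, since the Colab may use a different one, and the image path is a placeholder:

import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
from PIL import Image

detector = hub.load("https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2")  # example model

image = np.array(Image.open("test_photo.jpg"))   # one of my test pictures (placeholder path)
input_tensor = tf.convert_to_tensor(image[np.newaxis, ...], dtype=tf.uint8)
result = detector(input_tensor)

print(result["detection_classes"][0][:5])        # classes of the top detections
print(result["detection_scores"][0][:5])         # and their confidence scores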
I examined the robot's preset code for moving forward and extracted from it just the bare minimum parts that allow the robot to move forward at a continuous speed indefinitely. When I ran this new code, the robot moved smoothly and did not shake at all. However, the new code leaves out the handling for cliff events, bumper events, and anything similar that the original code had. That is fine for my project, though, since I will not be focusing on those events and will mainly focus on getting everything to work in the ideal scenario.
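The stripped-down mover is essentially a node that keeps publishing the same velocity command. A sketch of what it looks like, assuming ROS1 with rospy and the usual Kobuki velocity topic (/mobile_base/commands/velocity), which may differ on other setups:

#!/usr/bin/env python
import rospy
from geometry_msgs.msg import Twist

rospy.init_node("move_forward_minimal")
pub = rospy.Publisher("/mobile_base/commands/velocity", Twist, queue_size=10)

cmd = Twist()
cmd.linear.x = 0.2              # constant forward speed in m/s

rate = rospy.Rate(10)           # keep publishing at 10 Hz so the base keeps moving
while not rospy.is_shutdown():
    pub.publish(cmd)
    rate.sleep()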
Since the launch files were not working, I found a way to get Kobuki to run Python files. What inspired this was the .py files Kobuki includes for testing the robot's operations; for example, there was a test-sounds file and a test-move-forward file. I made a Python file and placed it in the same directory as the test files, then used the other files as references to experiment with what I could do. Sometimes when I ran my Python file the robot would drive smoothly, and other times it would still be very shaky.
I got a longer cord to connect the robot to the power supply so that it can drive around. I drove it around with the keyop launcher a bit. The robot was driving very shakily; I'm not sure whether it is a hardware or software problem. There was nothing obstructing the wheels.
I looked through the tutorials Kobuki has on understanding their robot. I then looked at their tutorial on how to make your own controller. I followed it, but at the end the launch file did not work; I'm not sure why.
After being unsuccessful with the Turtlebot launch files, I decided to try the Kobuki launch files. Kobuki is the name of the Roomba-like robot base. This turned out to be successful, and I was able to get the Kobuki minimal launch and keyop launch working.
I connected the Turtlebot's Odroid to a screen using an HDMI cord so that I could more easily connect it to the wifi. After I successfully configured the network, I was able to finally SSH into the Turtlebot. I then tried to use the Turtlebot minimal launch and keyop launch files to move the Turtlebot around using keyboard control, but I was not able to.
I got a microSD card and flashed an image of Ubuntu 22.04 onto it for the Odroid. I tried to configure its network so I could SSH into the Turtlebot, but I was not able to get the Odroid to connect to the wifi.
I managed to install all of the TurtleBot packages on a Linux machine that has Ubuntu 20.04, with ROS1 Noetic installed. The next step is to SSH to the robot's Odroid so that I can control the robot.
Currently, I am using the robot by connecting it straight to a power source while the battery work is being finished.
The solution I have found to my battery problem is to wire a new battery with a more commonly found connector type, an XT60 connector, to the connector type required by the robot, a 4-pin square Molex connector. This way, I can charge the battery and still connect it to the robot. I will not be doing this myself; someone else is helping me.
I attempted to install all of the TurtleBot packages onto my machine, but I found that they were all made for an older version of ROS and Ubuntu. Since I have Ubuntu 22.04 and ROS2 Humble, I will need to either find a different machine to run the software on or find a way to get it to work on my machine.
I found that there was something wrong with the robot's batteries. They were old and flat, and the charger was nowhere to be found. On top of that, the connector type needed to connect the battery to the robot is old and not easy to find. I will need to get a new battery.
Created this website to document my progress.
Downloaded ROS2 onto my personal Linux machine. Practiced using ROS2 with tutorials found on the ROS website.
Researched all about my robot, everything from hardware to software. Some websites I looked at:
How to Build a GPS-Guided Robot
TurtleBot2 User Manual
Building a robot – Make the robot follow you
These Robots Follow You to Learn Where to Go
Project proposal document written and submitted.