To design a robot capable of autonomously navigating an apartment to deliver packages efficiently.
Autonomous delivery requires SLAM and path planning driven by data from our sensors. Integrating and building this system will be our main challenge.
We find this project interesting because package delivery is a problem we face as friends living in the same apartment complex.
Autonomous Delivery
Healthcare Supplies Bot
Warehouse Management
Hospitality Helper Bot
To build this robot, we needed a system that can sense, plan, and actuate. Our desired functionality is a robot that can autonomously navigate our apartment complex to deliver small or large packages.
For our robot to navigate autonomously, we decided to implement SLAM. We obtain odometry data from our encoders and IMU and transform it using the TF2 library, and we also need our lidar sensor. These data are fed into our SLAM package, which maps the environment by building an occupancy grid.
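To make the odometry step concrete, here is a minimal sketch of differential-drive dead reckoning from encoder ticks. The wheel radius, wheel base, and ticks-per-revolution below are illustrative values, not measurements from our robot:

```python
import math

# Hypothetical robot constants -- substitute your own measurements.
WHEEL_RADIUS = 0.03    # meters
WHEEL_BASE = 0.20      # distance between left and right wheels, meters
TICKS_PER_REV = 1560   # encoder ticks per wheel revolution

def update_odometry(x, y, theta, d_ticks_left, d_ticks_right):
    """Integrate one pair of encoder deltas into the pose (x, y, theta)."""
    # Convert tick deltas to wheel travel distances.
    dist_per_tick = 2 * math.pi * WHEEL_RADIUS / TICKS_PER_REV
    d_left = d_ticks_left * dist_per_tick
    d_right = d_ticks_right * dist_per_tick
    # Differential-drive kinematics: average forward motion and heading change.
    d_center = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / WHEEL_BASE
    # Integrate at the midpoint heading for better accuracy over one step.
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta += d_theta
    return x, y, theta
```

In the real system, the resulting pose is what gets stamped into the /odom topic and broadcast as an odom-to-base_link transform via TF2, and the IMU yaw is fused in to correct heading drift.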
Once we have our map, we want the robot to move from one unit to another, so we give it a goal position to navigate to. This is important so that our path-planning algorithm can compute a path to the goal point. Using the Nav2 package, we convert the map and goal point into a path and publish velocity commands so the robot can navigate accurately, dodge obstacles, and arrive safely at the destination.
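The path-to-velocity step can be sketched as a simple waypoint follower. This is not Nav2's actual controller, just an illustration of turning the current pose and the next path waypoint into a (linear, angular) velocity command; the gains and speed limit are made up:

```python
import math

def follow_path(pose, waypoint, k_lin=0.5, k_ang=1.5, max_lin=0.3):
    """Compute a (linear, angular) velocity command toward the next waypoint.

    pose = (x, y, theta) in the map frame; waypoint = (wx, wy).
    Gains and the speed cap are illustrative, not tuned values.
    """
    x, y, theta = pose
    wx, wy = waypoint
    dx, dy = wx - x, wy - y
    distance = math.hypot(dx, dy)
    # Heading error toward the waypoint, wrapped to [-pi, pi].
    heading_error = math.atan2(dy, dx) - theta
    heading_error = math.atan2(math.sin(heading_error), math.cos(heading_error))
    # Proportional control: slow down near the goal, cap the top speed.
    linear = min(k_lin * distance, max_lin)
    angular = k_ang * heading_error
    return linear, angular
```

In the real stack, Nav2's controller server does this job and publishes the result as a geometry_msgs/Twist on /cmd_vel.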
Spongebob: Our Platform Robot
Goal:
Our goal was to build a robot that could carry bigger items. This robot is capable of moving large objects like chairs and boxes!
Pros:
Larger Form Factor and able to deliver more
Very Powerful Velodyne 3D Lidar
Cons:
Built from Scratch
More complex motor integration (industry-level driver)
Problem with Encoder (no Odometry)
Core Components
Industrial Encoders
Velodyne VLP-16 LiDAR
Differential Wheel Robot
RDK X3: Our Final Robot
Pros:
Pre-built with built-in libraries
More flexibility in motor controls (Mecanum wheels)
Cons:
Small form factor
Simple 2D lidar
Core Components:
310 encoder geared motor
M200 Lidar
Mecanum Wheels
Creating our own path-planning and SLAM packages, with all the logic and calculations, would give us full control over the performance and versioning of our software.
On the other hand, we could use LIO-SAM and Nav2. LIO-SAM is heavily supported in ROS1 but not really in ROS2, and ROS1 will be deprecated soon, so this is one consideration to keep in mind.
Since this is a college project we want to showcase as soon as possible, using a prebuilt library like LIO-SAM on ROS1 helps us build fast. However, if we were a company building a swarm of robots, we would consider creating our own packages for better versioning and full control over the performance of our algorithms.
We would say that our design is fairly robust. We figured out the topics needed and used industry-grade hardware along with packages commonly used for SLAM and path planning: LIO-SAM (3.6k GitHub stars) and Nav2 (2.7k stars).
As mentioned previously, our implementation will require moving from ROS1 to ROS2 once ROS1 is deprecated, and we will have to find a replacement for LIO-SAM. So our current implementation isn't especially durable, but it is robust and efficient.
We want a clear idea of what the environment looks like and where the robot is. SLAM relies heavily on lidar and encoder data for mapping and localization. We also added data from our IMU to increase confidence in our pose estimate.
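To illustrate how a single lidar beam contributes to the occupancy grid, here is a small sketch that maps one beam (range and bearing) plus the robot pose to a grid cell. The grid geometry is illustrative, and a real SLAM package also ray-traces the free cells along the beam rather than marking only the endpoint:

```python
import math

def beam_to_cell(pose, beam_range, beam_angle, resolution, grid_width, grid_height):
    """Map one lidar beam to the occupancy-grid cell it hits.

    pose = (x, y, theta) in the map frame; beam_angle is relative to the
    robot heading; resolution is meters per cell. The grid origin is
    assumed to sit at world (0, 0) for simplicity.
    Returns (row, col), or None if the hit falls outside the grid.
    """
    x, y, theta = pose
    # Endpoint of the beam in world coordinates.
    hx = x + beam_range * math.cos(theta + beam_angle)
    hy = y + beam_range * math.sin(theta + beam_angle)
    # World coordinates to grid indices.
    col = int(hx / resolution)
    row = int(hy / resolution)
    if 0 <= col < grid_width and 0 <= row < grid_height:
        return row, col
    return None
```

Repeating this over every beam of every scan (with the pose corrected by scan matching) is, in essence, how the /map occupancy grid fills in.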
We also want to know where we should go and how to get there. With the help of the Nav2 library, we can determine a path from where we are to where we need to be.
Once the path is created, we want to actually follow it. Using the provided motor-control library, we integrated the system so that a Twist message drives the motors, making the robot follow the path to the goal.
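Since the final robot uses mecanum wheels, a Twist command maps to four wheel speeds. The sketch below uses the standard mecanum inverse-kinematics formulation with illustrative geometry constants; it is not the actual motor-control library's API:

```python
# Mecanum inverse kinematics: body twist -> four wheel angular speeds.
# Geometry constants are illustrative, not measured from the RDK X3.
WHEEL_RADIUS = 0.03  # wheel radius, meters
LX, LY = 0.08, 0.08  # half wheelbase and half track width, meters

def twist_to_wheel_speeds(vx, vy, wz):
    """vx: forward m/s, vy: leftward (strafe) m/s, wz: yaw rate rad/s.

    Returns wheel angular speeds in rad/s, ordered front-left,
    front-right, rear-left, rear-right.
    """
    k = LX + LY
    fl = (vx - vy - k * wz) / WHEEL_RADIUS
    fr = (vx + vy + k * wz) / WHEEL_RADIUS
    rl = (vx + vy - k * wz) / WHEEL_RADIUS
    rr = (vx - vy + k * wz) / WHEEL_RADIUS
    return fl, fr, rl, rr
```

Driving straight ahead (vy = wz = 0) spins all four wheels at the same speed; a pure strafe or pure rotation mixes the signs, which is what gives mecanum platforms their omnidirectional flexibility.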
While working on Spongebob, we managed to get the 3D lidar working, but our odometry wasn't working properly because the integrated encoders seemed faulty. This is where we made a major shift to our final robot and started implementing on the RDK X3. Since we initially had the RDK hardware in mind, moving to a smaller robot wasn't as challenging, so we changed the implementation to use the RDK as our robot instead.
When we decided to use the RDK, we still needed Nav2, and although our system design stayed the same, we had to find other libraries. This is where the challenge came in: we had to figure out which SLAM library to use, since we were moving from 3D to 2D and some libraries handle that better without overloading the system. We ended up using Cartographer as our SLAM algorithm and kept Nav2 for path planning.
This design is better than our first approach because we are migrating to a newer ROS version, which means our packages won't be deprecated as easily, though it also means some libraries aren't supported yet. With the same design, we concluded that our approach is still efficient and robust, with added maintainability.
We evaluated two IMUs: the WitMotion 9-axis IMU and the Seeed Studio 6-axis IMU. However, the WitMotion IMU had outdated documentation and only supported ROS1, making it less practical. Instead, we successfully integrated the Seeed Studio IMU and utilized its data effectively.
We discovered excellent documentation and a GitHub repository for the Velodyne 3D LiDAR, which we integrated into our ROS2 stack by adapting the codebase and incorporating it into our custom launch file.
During the process, we encountered a challenging integration issue: although the LiDAR responded to pings after connecting via Ethernet, no data was received. After troubleshooting, we realized the LiDAR wasn't accepting the parameters as expected. To resolve this, we configured the settings through its GUI.
With our robot launch file in place, we wanted to add another node to launch SLAM. The catch is that the SLAM algorithm needs the proper inputs: the /odom and /imu data must be available, and they must be transformed using TF2. So we created a launch file to handle and integrate all of this.
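A ROS2 Python launch file for this integration might look roughly like the following. It is only a sketch: the frame names, mounting offsets, and configuration paths are illustrative, and running it requires a ROS2 installation with the cartographer_ros package.

```python
# Sketch of a ROS2 launch file wiring TF2 and Cartographer together.
# Frames, offsets, and paths are illustrative placeholders.
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    return LaunchDescription([
        # Static transform from the robot base to the lidar frame
        # (x y z yaw pitch roll parent child).
        Node(
            package='tf2_ros',
            executable='static_transform_publisher',
            arguments=['0', '0', '0.1', '0', '0', '0', 'base_link', 'laser'],
        ),
        # Cartographer consumes /scan, /odom, and /imu and builds the map.
        Node(
            package='cartographer_ros',
            executable='cartographer_node',
            arguments=['-configuration_directory', '/path/to/config',
                       '-configuration_basename', 'slam_2d.lua'],
        ),
    ])
```

With this launched alongside the robot bringup, the occupancy grid shows up as a topic that RViz and Nav2 can subscribe to.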
Once we have our occupancy-grid topic, we install the Nav2 package. After installing it, we run the Nav2 bringup launch, which creates the navigation nodes and takes in our existing /map data so we can build the cost map and generate our path.
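For reference, the bringup boils down to commands along these lines; the package names are the standard ROS2 Humble binaries, and the map path is a placeholder:

```shell
# Install the Nav2 binaries for ROS2 Humble.
sudo apt install ros-humble-navigation2 ros-humble-nav2-bringup

# Bring up navigation against a previously saved map.
ros2 launch nav2_bringup bringup_launch.py map:=/path/to/apartment_map.yaml

# Inspect the nav_msgs/Path that Nav2 publishes after a goal is set.
ros2 topic echo /plan
```

From there, setting a goal pose in RViz triggers the planner, and the resulting path and velocity commands flow to the robot.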
Our end-to-end system follows our initial general design. We first get data from our M200 lidar, which publishes the /laser_scan topic. We then record odometry and IMU data to get the /odom and /imu topics, which we feed into the Cartographer algorithm we integrated. This lets us run RViz and gives us the /occupancy_grid topic. We can then add a goal point using Nav2, which builds our cost map and generates our nav_msgs/Path message. This message is consumed by the motor controller to publish the correct velocities to the wheels.
We would say that our project was a success. We managed to integrate all the sensor data into a path planning algorithm such that our robot can move as we want.
In RViz, we can set a Nav2 goal point, and the robot navigates to that endpoint within the map we created. This accomplishes our initial goal of sending packages autonomously to a destination.
Since the robot actively updates the map using SLAM, it can dynamically stop or continue moving around temporary obstacles, like people walking by.
We would say that our project met our goal expectations. We can create a map, save it, and use the saved map to autonomously send packages to different households.
Often we had to deal with a missing topic. We had to think through and debug why the topic wasn't being published: wrong version, wrong path, wrong data input, wrong launch file, wrong launch config, etc.
Packages installed under /opt/ros/humble or elsewhere can sometimes conflict with the packages built under our workspace's install folder. We had to go the extra mile to understand what was happening and get everything to work.
We had to make sure the software we chose could be integrated with our robot, for example, evaluating different SLAM libraries for ROS1 versus ROS2 and making sure they supported the lidar we were using.
We used libraries from Yahboom and Nav2 to ease our implementation of SLAM and path planning.
We could extend this into a food-delivery system by creating REST APIs on top of our autonomous stack so users can interact directly with the robot.
We rely on the robot's WiFi to transmit the Nav2 goal point from our computer. Ideally, we would increase this range to cover our entire apartment complex.
We could improve the small robot by adding a small cart at the back so it could carry small stationery or keys to the destination apartment. This means we would likely have to modify our URDF, but beyond that, adding this improvement isn't a big change.
Carlson Jansen
Carlson is an undergraduate Mechanical Engineering major. His interest is in electronics and prototyping. He has worked on several projects involving ROS, Arduino, Raspberry Pi, and 3D CAD.
Contributions: worked on integrating Cartographer SLAM and Nav2 to implement autonomous driving for the RDK X3
Jan Dustin Tengdyanto
Dustin is a Mechanical Engineering M.Eng student who did his undergrad at UC San Diego. His interests in robotics are robust and scalable infrastructure, plus mapping and planning using 3D LiDAR.
Contributions: performed most of the integration for IMU, Odom, and SLAM for both Spongebob and RDK
Jonathan Goenadibrata
Jonathan is a Mechanical Engineering M.Eng student who did his undergrad at UC San Diego. He is interested in applying controls and feedback theory in practice.
Contributions: worked on building the hardware of RDK X3 alongside building the system design for the robot.
Ronald Arifin
Ronald is an undergraduate in EECS. His interest lies in backend and infrastructure. He has experience working in software systems.
Contributions: worked on integrating Nav2 and SLAM alongside creating the final website for the project
Code: https://github.com/ronaldarifin/spongebob
Slides: https://docs.google.com/presentation/d/14BUSxQquAc0IDZFBsx1bHzqw8fpzeZVDAIoiPV-7cxM/edit#slide=id.g31e2653fa3f_3_0