Now that you have an idea and are ready to experiment, here are some useful tips gathered from our own experiments.
Our Duckiebot has 2 GB of RAM, so computationally intensive tasks such as image recognition or path planning can easily overwhelm it. We recommend offloading these tasks to your laptop and having it subscribe to the necessary data through the wheel and camera image ROS topics that the robot broadcasts by default.
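A laptop-side node for this pattern might look like the sketch below. The topic names, the use of `duckietown_msgs.msg.WheelsCmdStamped`, and the toy `decide()` policy are our assumptions for illustration; adapt them to your robot's actual topics and your real algorithm.

```python
# Sketch of a laptop-side "brain" node that offloads heavy processing
# from the Duckiebot. Only decide() contains logic; run_node() wires it
# to ROS and needs a ROS environment to actually run.
import numpy as np


def decide(image: np.ndarray) -> dict:
    """Toy decision step standing in for heavy processing: steer away
    from the brighter half of the image (purely illustrative logic)."""
    h, w = image.shape[:2]
    left = float(image[:, : w // 2].mean())
    right = float(image[:, w // 2:].mean())
    turn = -0.1 if left > right else 0.1
    return {"vel_left": 0.3 - turn, "vel_right": 0.3 + turn}


def run_node():
    """Wire decide() to ROS. Imports are kept local so decide() stays
    usable without ROS installed; topic names are assumptions."""
    import rospy
    from cv_bridge import CvBridge
    from sensor_msgs.msg import Image
    from duckietown_msgs.msg import WheelsCmdStamped

    rospy.init_node("laptop_brain")
    bridge = CvBridge()
    pub = rospy.Publisher("/duckiebot/wheels_driver_node/wheels_cmd",
                          WheelsCmdStamped, queue_size=1)

    def on_image(msg):
        cmd = decide(bridge.imgmsg_to_cv2(msg))
        out = WheelsCmdStamped()
        out.vel_left, out.vel_right = cmd["vel_left"], cmd["vel_right"]
        pub.publish(out)

    rospy.Subscriber("/duckiebot/camera_node/image", Image, on_image)
    rospy.spin()
```

Keeping the decision logic in a plain function like `decide()` also makes it easy to unit-test on your laptop without any robot attached.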
Deploying code on the physical robot is quite time-consuming and may require the whole team to be together, which is not always possible. We recommend keeping your algorithms agnostic and decoupled: have them read from and write to ROS topics as their only communication channels. You can then create a ROS node that runs the simulator and publishes, for example, the simulator's image data on a ROS topic. When the time comes, the algorithm should work just as well when deployed to the real robot.
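The decoupling idea can be illustrated without ROS at all. The sketch below uses a tiny in-process publish/subscribe bus (our stand-in for ROS topics, not a real ROS API): the algorithm only knows topic names, so the simulator publisher and the real camera node are interchangeable from its point of view. All class and topic names here are hypothetical.

```python
# Minimal stand-in for the ROS pub/sub pattern, to show the decoupling:
# the algorithm reads and writes "topics" only, so swapping the
# simulator for the real camera changes nothing in its code.
from collections import defaultdict
from typing import Callable


class TopicBus:
    """Tiny in-process publish/subscribe bus (illustrative, not ROS)."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable):
        self._subs[topic].append(callback)

    def publish(self, topic: str, msg):
        for cb in self._subs[topic]:
            cb(msg)


class LineFollower:
    """Algorithm node: knows only topic names, not the image source."""
    def __init__(self, bus: TopicBus):
        self.bus = bus
        bus.subscribe("camera/image", self.on_image)

    def on_image(self, pixels):
        # Trivial policy standing in for the real algorithm.
        speed = 0.5 if sum(pixels) / len(pixels) > 0.5 else 0.1
        self.bus.publish("wheels/cmd",
                         {"vel_left": speed, "vel_right": speed})


def simulator_publisher(bus: TopicBus, frames):
    """Stands in for a ROS node that runs the simulator and republishes
    its rendered frames on the image topic."""
    for frame in frames:
        bus.publish("camera/image", frame)
```

With real ROS, `TopicBus` disappears and the same `LineFollower` logic subscribes and publishes via `rospy` instead; nothing in its decision code needs to change.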
The ROS ecosystem can be quite fragile, in the sense that programs require a very specific set of dependencies, and C++, one of its core languages, still lacks proper package management. Make sure to put your software components (simulator, ROS nodes, ...) in Docker containers so that your project remains usable by others, including teammates with possibly different OS configurations.
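As a rough starting point, a container for one ROS node could be built from a Dockerfile like the sketch below. The base image tag, the apt package, and the workspace and package names are assumptions for illustration; pin whatever your node actually needs.

```dockerfile
# Hypothetical sketch: containerising a single ROS node so teammates on
# any OS can run it with its dependencies isolated from the host.
FROM ros:noetic-ros-base

# Install the node's system dependencies inside the image, not on the host.
RUN apt-get update && apt-get install -y --no-install-recommends \
        ros-noetic-cv-bridge \
    && rm -rf /var/lib/apt/lists/*

# Copy and build the catkin workspace (paths are assumptions).
COPY ./catkin_ws /catkin_ws
WORKDIR /catkin_ws
RUN /bin/bash -c "source /opt/ros/noetic/setup.bash && catkin_make"

# my_package / my_node.py are placeholders for your own node.
CMD ["/bin/bash", "-c", "source devel/setup.bash && rosrun my_package my_node.py"]
```

One container per component (simulator, perception, control) keeps each dependency set small and lets teammates rebuild only what they changed.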
Developing in ROS can be quite tricky, as you might have noticed by now. In the quest for ready-to-deploy software, you will likely run into difficulties and install unwanted dependencies on your machine. Consider working inside a disposable VM running a supported OS, so that a bloated installation can simply be thrown away.
Use well-established algorithms and packages. For example, we resorted to an ORB-SLAM3 implementation tailored to ROS.