In this project, a Gazebo construction site environment for two TurtleBot3 robots was created. The world includes a textured ground plane, clouds, and several textured 3D models, which were processed using Blender. This guide assumes a basic understanding of ROS 2 and of simulation environments such as Gazebo.
Our downloaded model should have the following files:
model_file.fbx/stl/dae
Base Color
Roughness
Height
Normal
Metallic
Ambient Occlusion
Now it is time to link each image to its corresponding slot. Pay attention not to use special nodes, as shown in the images below. For more details you can check the following link:
https://www.cgbookcase.com/learn/how-to-use-pbr-textures-in-blender/
A full texture setup with additional operation nodes does not export correctly.
A simplified setup, with the textures linked directly to the shader slots, can be exported.
When exporting, the model must be saved as a DAE file with texture copying enabled, so that the texture files are generated alongside the mesh.
3D model.
Files generated.
Meshes imported from SolidWorks or Inventor have a very high face count and are not optimised for use in Gazebo. The more complex the mesh you spawn in Gazebo, the greater the computation time needed for collision checking and rendering. In Blender, the Decimate modifier can be used to reduce the face count of a mesh.
Original mesh: 14,626 faces.
Decimated mesh: 1,199 faces.
To optimize the simulation and avoid lowering the real-time factor in Gazebo, the collision part of the SDF file should be replaced with a primitive geometric shape.
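As a rough sketch (the mesh URI and box dimensions below are placeholders), the visual element can keep the detailed mesh while the collision element uses a simple box:

```xml
<!-- Sketch: detailed mesh for rendering, primitive box for collisions.
     The mesh URI and box size are placeholder values. -->
<link name="link">
  <visual name="visual">
    <geometry>
      <mesh>
        <uri>model://construction_barrier/meshes/barrier.dae</uri>
      </mesh>
    </geometry>
  </visual>
  <collision name="collision">
    <geometry>
      <box>
        <size>1.2 0.4 0.8</size>
      </box>
    </geometry>
  </collision>
</link>
```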
In the turtlebot3_tc_test.world file, the initial camera position was set to a more convenient location, and the cloud speed was adjusted purely for aesthetics. To spawn models at random locations, the population distribution type "random" can be used, together with the position and size of the box in which the models will be placed; the distribution type "grid" is also available. Check the code below for more details.
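The snippet below is an illustrative reconstruction of the relevant world-file elements; the camera pose, cloud speed, model URI, box size, and model count are placeholder values:

```xml
<!-- Illustrative excerpt of a Gazebo classic world file; all values are placeholders. -->
<world name="default">

  <!-- Initial position of the user camera -->
  <gui>
    <camera name="user_camera">
      <pose>-6 0 4 0 0.4 0</pose>
    </camera>
  </gui>

  <!-- Cloud speed, purely aesthetic -->
  <scene>
    <sky>
      <clouds>
        <speed>10</speed>
      </clouds>
    </sky>
  </scene>

  <!-- Spawn 10 copies of a model at random positions inside a 10 x 10 m box -->
  <population name="cone_population">
    <model name="cone">
      <include>
        <uri>model://construction_cone</uri>
      </include>
    </model>
    <pose>0 0 0 0 0 0</pose>
    <box>
      <size>10 10 0.01</size>
    </box>
    <model_count>10</model_count>
    <distribution>
      <!-- "grid" is also possible (together with <rows>, <cols> and <step>) -->
      <type>random</type>
    </distribution>
  </population>

</world>
```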
Disabling shadows also boosts the rendering FPS, especially when launching the simulation in CPU-only mode.
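In an SDF world, shadows are turned off in the scene element:

```xml
<scene>
  <shadows>false</shadows>
</scene>
```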
Use as few light sources as possible, as long as your perception algorithms still perform well. Only add extra directional lights if you cannot achieve the desired result by tuning the range and attenuation parameters of the default Sun model. Keep in mind that the performance of vision-based algorithms, such as image segmentation with deep learning models or visual SLAM, depends on the amount of light in the environment, so this optimisation is of limited use in those cases.
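For reference, the default sun is a directional light whose range and attenuation can be tuned directly in the world file; the values below are only illustrative:

```xml
<light name="sun" type="directional">
  <pose>0 0 10 0 0 0</pose>
  <diffuse>0.8 0.8 0.8 1</diffuse>
  <specular>0.2 0.2 0.2 1</specular>
  <attenuation>
    <range>1000</range>       <!-- illustrative range -->
    <constant>0.9</constant>  <!-- illustrative attenuation constants -->
    <linear>0.01</linear>
    <quadratic>0.001</quadratic>
  </attenuation>
  <direction>-0.5 0.1 -0.9</direction>
</light>
```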
To load the robot meshes stored in the description package, ROS 1 lets us use the package://<path> URI to locate them, but this does not work in ROS 2.
In ROS 2, we must use file://$(find robot_description)/<path> instead, as shown in the snippet below.
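A minimal Xacro/URDF sketch, assuming the mesh lives in a meshes/ folder of robot_description (the file name is a placeholder):

```xml
<!-- ROS 1 style, which does not work here in ROS 2:
     <mesh filename="package://robot_description/meshes/base_link.stl"/> -->
<!-- ROS 2 style: let xacro expand the absolute path -->
<visual>
  <geometry>
    <mesh filename="file://$(find robot_description)/meshes/base_link.stl"/>
  </geometry>
</visual>
```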
The simulation runs at 60 FPS with a real-time factor of 1.0 on an old MSI laptop. This environment can be used to test SLAM algorithms as well as to perform object detection.