Simulator setup

Intro

  • installation

  • models

  • usage

Blender RGBD setup

The Blender RGBD plugin is based on code by Javonne Martin, available at BlenderToRGBD. The code is slightly modified and imported in the provided .blend file. To run the plugin you must install the required Python libraries within Blender. Although Blender can be configured to use your OS Python environment, this is usually not the case: by default it ships with, and uses, its own bundled Python.

To install the necessary Python libraries in Blender, follow these steps:


WINDOWS

  1. Locate Blender's Python folder. On Windows this should be simple: look for "C:\Program Files\Blender Foundation\Blender 2.xx\2.xx\python\bin". On Linux a similar folder exists, but its location depends on the distribution and the type of installation.

  2. Using the Python interpreter in this folder, install pip. On Windows, type the following command:

$python -m ensurepip

  3. To install the additional Python libraries, run the following commands:

$python -m pip install scikit-image

$python -m pip install scikit-image[optional]
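For example, assuming Blender 2.82 (a hypothetical version; adjust the folder names to match your install), the full sequence from a Command Prompt would look like this:

$cd "C:\Program Files\Blender Foundation\Blender 2.82\2.82\python\bin"

$python -m ensurepip

$python -m pip install scikit-image

$python -m pip install scikit-image[optional]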


LINUX - Blender from a downloaded tar archive; verified on Ubuntu 20.04 with Python 3

  1. Go to ~/path_to_untared_blender/2.xx/python

  2. Install pip by calling bin/python3.xm lib/python3.x/ensurepip

  3. Install the packages with pip by calling:

bin/pip3 install --target lib/python3.7 scikit-image

bin/pip3 install --target lib/python3.7 scikit-image[optional]
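On either platform, you can verify the installation from Blender's Python console (Scripting workspace); this is a minimal check, not part of the original instructions:

import skimage
print(skimage.__version__)  # should print the installed scikit-image version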


Once the installation is complete you should be good to go: press Play to run the Python script within the Blender environment. This renders the images and stores them in the designated folder:

C:/tmp/rgbd_sim/

Images are generated using the Cycles render engine, and the depth map is stored as a depth image in a separate folder. You can edit the script to change the number of rendered keyframes.
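As an illustration only (not the plugin's actual code), the settings the script controls map onto Blender's Python API roughly like this; the frame range and output path below are assumptions:

import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'            # render with Cycles
scene.view_layers[0].use_pass_z = True    # expose the Z (depth) pass
scene.frame_start = 1
scene.frame_end = 50                      # number of keyframes to render (hypothetical)
scene.render.filepath = 'C:/tmp/rgbd_sim/'  # output folder used by the plugin
bpy.ops.render.render(animation=True)     # render the whole frame range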

The Blender file is available here.


RGBD ROS package

This package converts depth and RGB images to point cloud data. The source code can be found on the blender_rgbd_ros GitHub page.

A detailed overview and instructions on how to run the code are provided on the package's GitHub page.
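The package's actual implementation is documented in the repo; as a rough sketch of the conversion it performs, a depth image is back-projected into 3D with the pinhole camera model (the function name and the intrinsics fx, fy, cx, cy below are assumptions):

import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    # depth: H x W array of metric depth values, aligned with the RGB image
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx   # back-project pixel columns
    y = (v - cy) * z / fy   # back-project pixel rows
    return np.stack([x, y, z], axis=-1)  # organized H x W x 3 cloud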

Detection NN & ROS package

An SSD with a MobileNet V2 backbone is used for object detection. It is trained in TensorFlow 2 using the TensorFlow Object Detection API. The model is pretrained on the COCO dataset and fine-tuned on a custom synthetic dataset generated in Blender.

The ROS package used for inference, ros_object_detector, is available here. It subscribes to both the image and the point cloud topics and publishes a labeled image and a list of 3D positions of the detected objects. The 3D position of an object is obtained by applying its bounding box to the organized point cloud and extracting the centroid of the filtered points.
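A minimal sketch of that centroid step, assuming an organized H x W x 3 cloud aligned with the image and a pixel-space bounding box (the names are illustrative, not the package's API):

import numpy as np

def bbox_centroid(cloud_xyz, xmin, ymin, xmax, ymax):
    # cloud_xyz: organized H x W x 3 point cloud aligned with the image
    roi = cloud_xyz[ymin:ymax, xmin:xmax].reshape(-1, 3)
    roi = roi[np.isfinite(roi).all(axis=1)]  # drop NaN/inf (invalid depth)
    return roi.mean(axis=0)                  # 3D centroid of the detection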

Run the inference with:

roslaunch ros_object_detector blender_simulation.launch

The SSD trained for pepper detection is available in the 'models' directory of the package. Edit the saved_model and label_map parameters (paths to the TF2 SavedModel and label map) when using a custom model. When running in parallel with blender_rgbd_ros, you may wish to set the rate parameter (image loop rate in Hz) to a lower value.
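For reference, the saved_model parameter points at a directory that TensorFlow can load directly; a minimal standalone inference sketch (the path and input size below are assumptions) looks like this:

import numpy as np
import tensorflow as tf

# Hypothetical path: point this at the model referenced by saved_model.
detect_fn = tf.saved_model.load('models/pepper_ssd/saved_model')

image = np.zeros((1, 300, 300, 3), dtype=np.uint8)  # placeholder RGB frame
detections = detect_fn(tf.constant(image))
boxes = detections['detection_boxes'][0].numpy()    # normalized [ymin, xmin, ymax, xmax]
scores = detections['detection_scores'][0].numpy()  # per-detection confidences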


NOTE: the Faster R-CNN model is too large for GitHub. If you do not have a pretrained Faster R-CNN model in your models folder, change the "saved_model" parameter in blender_simulation.launch to a model you do have, e.g. pepper_ssd (currently available in the linked repo).

Counting package

  • installation (downloads)

  • usage

Putting it all together

  • tips

  • usage