On this page, we introduce both the simulators applicable to synthetic transparent object rendering and the physical sensors that have been used for transparent object perception.
The simulators can be divided into two types. The first aims to render realistic synthetic images at the cost of heavy computation, such as the Cycles and LuxCoreRender engines in Blender. The drawback of this kind of simulator is that it normally cannot simulate robotic physics, which makes it difficult to train a model with reinforcement learning. The second kind, such as Unreal Engine and Omniverse (with RTX Real-Time as the render engine), can simulate robotic physics such as the interaction between a robotic arm and transparent objects. The different simulators are summarised in Table I.
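For the first category, a common workflow is to script Blender so that the transparent material and the path-tracing engine are configured programmatically. The snippet below is a minimal, illustrative sketch using Blender's bpy Python API to assign a glass-like Principled BSDF material to the active object and select Cycles as the render engine; the object and material names are placeholders, and socket names such as "Transmission" vary between Blender versions, so treat it as an assumption rather than a drop-in script.

```python
import bpy

# Use the Cycles path tracer, which handles the refraction and caustics
# needed for realistic transparent objects.
bpy.context.scene.render.engine = 'CYCLES'

# Create a glass-like material based on the Principled BSDF node.
mat = bpy.data.materials.new(name="SyntheticGlass")
mat.use_nodes = True
bsdf = mat.node_tree.nodes["Principled BSDF"]
bsdf.inputs["Roughness"].default_value = 0.0       # perfectly smooth surface
bsdf.inputs["IOR"].default_value = 1.45            # typical index of refraction for glass
# Fully transmissive; this socket is named "Transmission Weight" in newer Blender versions.
bsdf.inputs["Transmission"].default_value = 1.0

# Assign the material to the currently active object.
obj = bpy.context.active_object
if obj.data.materials:
    obj.data.materials[0] = mat
else:
    obj.data.materials.append(mat)
```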
Figure 1. Visual comparison of different render engines in Blender.
Several popular projects or tutorials are summarised here.
(1) LuxCoreRender for transparent objects: 3PTelephant/TransparentObjectRender, a Blender project for transparent object rendering (github.com)
(2) Depth sensor simulation in Blender: the DepthSensorSimulator directory of PKU-EPIC/DREDS (github.com)
(3) SuperCaustics in Unity: MMehdiMousavi/SuperCaustics, a real-time, open-source simulation of transparent objects for deep learning applications (github.com)
(4) BlenderProc: DLR-RM/BlenderProc, a procedural Blender pipeline for photorealistic training image generation (github.com); see the usage sketch after this list
(5) NVIDIA Isaac Gym: NVIDIA-Omniverse/IsaacGymEnvs, Isaac Gym reinforcement learning environments (github.com)
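As referenced in item (4), the sketch below shows how BlenderProc is typically driven from Python to render a single object from one camera pose. It loosely follows the project's quickstart; the mesh path is a placeholder, and exact function names and arguments may differ between BlenderProc versions.

```python
import blenderproc as bproc
import numpy as np

bproc.init()

# Load a mesh to render (placeholder path).
objs = bproc.loader.load_obj("path/to/transparent_object.obj")

# Add a simple point light so the object is visible.
light = bproc.types.Light()
light.set_location([2.0, -2.0, 2.0])
light.set_energy(300)

# Place the camera 2 m in front of the object, tilted to look at it.
cam_pose = bproc.math.build_transformation_mat(
    [0.0, -2.0, 0.5],            # translation
    [np.pi / 2.0, 0.0, 0.0],     # Euler rotation
)
bproc.camera.add_camera_pose(cam_pose)

# Render and write the results (colour images by default) to HDF5 containers.
data = bproc.renderer.render()
bproc.writer.write_hdf5("output/", data)
```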
Figure 2. Examples of sensors used for transparent object perception. (a) RGB-D sensors provided by Intel; (b) stereo sensor provided by STEREOLABS; (c) light-field camera provided by Lytro; (d) polarised camera provided by LUCID; (e) RGB-thermal camera provided by FLIR; (f) GelSight tactile sensors.
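As an illustration of sensor (a), the sketch below uses Intel's pyrealsense2 Python bindings to grab one aligned RGB-D frame; on transparent surfaces the returned depth is frequently zero or invalid, which is precisely the gap that transparent object perception methods aim to close. The stream parameters and the alignment step follow the standard RealSense examples and are assumptions rather than requirements.

```python
import numpy as np
import pyrealsense2 as rs

# Configure colour and depth streams on an Intel RealSense camera.
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

# Align the depth frame to the colour frame so pixels correspond.
align = rs.align(rs.stream.color)

try:
    frames = align.process(pipeline.wait_for_frames())
    depth = np.asanyarray(frames.get_depth_frame().get_data())
    color = np.asanyarray(frames.get_color_frame().get_data())

    # Zero depth values mark pixels where the sensor failed,
    # which happens frequently on transparent or reflective surfaces.
    missing_ratio = float((depth == 0).mean())
    print(f"Invalid depth pixels: {missing_ratio:.1%}")
finally:
    pipeline.stop()
```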