We use real-world images and poses from our robot to train novel-view rendering and reconstruction models such as NeRF and Gaussian Splatting, and we built a high-performance simulation environment on top of these models. The simulator supports realistic agent movement through many simulated scenes in parallel. Within it, we train agents for map-free navigation over the dataset and then transfer the simulation-trained models to real-world robotic operation. Training uses the USCILab3D dataset, a large-scale, long-term outdoor dataset, from which we produce a set of splat files. Our simulator renders images from these splat files by querying a 6D pose computed by a novel motion model.
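To make the render-by-pose idea concrete, here is a minimal sketch of such a simulator interface. The class and method names, the pose layout, and the frame format are illustrative assumptions, not the project's actual API; a real implementation would rasterize the loaded Gaussian splats for the queried pose.

```python
from dataclasses import dataclass

@dataclass
class Pose6D:
    # 6D pose: position in metres, orientation in radians (assumed convention).
    x: float
    y: float
    z: float
    roll: float
    pitch: float
    yaw: float

class SplatSimulator:
    """Hypothetical sketch: renders a frame for a queried 6D pose
    from a set of splat files."""

    def __init__(self, splat_files):
        self.splat_files = list(splat_files)

    def render(self, pose: Pose6D, width: int = 640, height: int = 480):
        # In the real system this would rasterize the Gaussian splats as seen
        # from `pose`; here we return frame metadata to show the interface.
        return {"pose": pose, "size": (width, height)}

sim = SplatSimulator(["scene_block_00.splat"])
frame = sim.render(Pose6D(1.0, 2.0, 0.5, 0.0, 0.0, 1.57))
```

Each simulated agent would hold its own `Pose6D`, advanced by the motion model, and query `render` once per step, which is what makes parallel multi-agent rollouts straightforward.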
Above: Multi-view images obtained from the five cameras mounted on the Beobotv3 robot.
Above: Our motion model uses an elevation map and an occupancy map to estimate poses and detect collisions.
Pipeline
Create sequence graph and navigate
This example shows how to create a sequence graph from a node graph and then navigate within the sequence graph.
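A minimal sketch of the idea, under the assumption that a "sequence graph" is an ordered chain of nodes extracted from the node graph (here via breadth-first shortest path); the graph contents and function names are illustrative, not the project's actual code.

```python
from collections import deque

# Toy node graph: adjacency lists between traversable scene nodes (assumed).
node_graph = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C"],
}

def build_sequence_graph(graph, start, goal):
    """BFS over the node graph; returns the node sequence from start to goal."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # goal unreachable

def navigate(sequence):
    """Yield consecutive (src, dst) hops; a stand-in for issuing motion commands."""
    for src, dst in zip(sequence, sequence[1:]):
        yield (src, dst)

seq = build_sequence_graph(node_graph, "A", "D")
steps = list(navigate(seq))
```

An agent would then execute each `(src, dst)` hop by querying poses along the edge and rendering the corresponding views from the splat files.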
Our approach keeps simulations running efficiently by making full use of both compute and rendering resources. Because Gaussian Splatting supports fast rasterization, the simulator delivers smooth, responsive rendering, so agents can be trained and evaluated quickly.