Seeing More with Lidar: Distance, Reflectivity, and Velocity Imaging from Single Photons
While lidar is widely used for generating 3D point clouds, conventional processing often discards much of the information contained in the raw data. Single-photon lidar (SPL) leverages pulsed lasers, ultra-sensitive detectors, and picosecond-resolution electronics to recover scene information from extremely weak reflections. We demonstrate that SPL enables reliable estimation not only of distance but also of surface reflectivity and object velocity from individual photon detections. These richer measurements promise to advance lidar–camera fusion, pedestrian detection, and point cloud registration, among other perception tasks.
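To make the time-of-flight principle behind these estimates concrete, the sketch below (a toy illustration, not the speaker's method) shows how depth and an uncalibrated reflectivity could be recovered from a pixel's photon timestamps: histogram the arrival times, take the peak bin as the round-trip time, and count signal photons above the estimated background. The bin width, repetition period, and the assumptions of a single surface per pixel and negligible detector dead time are illustrative choices.

    import numpy as np

    C = 3e8            # speed of light [m/s]
    BIN_SIZE = 50e-12  # histogram bin width [s] (picosecond-scale timing assumed)
    T_REP = 100e-9     # laser repetition period [s]

    def estimate_depth_reflectivity(timestamps, n_pulses):
        """Estimate depth [m] and relative reflectivity from photon arrival times [s].

        timestamps : 1-D array of detection times within one repetition period
        n_pulses   : number of laser pulses fired at this pixel
        """
        n_bins = int(T_REP / BIN_SIZE)
        hist, edges = np.histogram(timestamps, bins=n_bins, range=(0.0, T_REP))

        # Background (ambient light + dark counts) estimated as the median bin count.
        background = np.median(hist)

        # Time of flight taken as the center of the most-populated bin.
        peak = np.argmax(hist)
        tof = 0.5 * (edges[peak] + edges[peak + 1])
        depth = C * tof / 2.0

        # Signal photons above background, normalized per pulse, give a
        # relative (uncalibrated) reflectivity estimate.
        signal = max(hist[peak] - background, 0.0)
        reflectivity = signal / n_pulses
        return depth, reflectivity

Velocity estimation would additionally require comparing such estimates across successive acquisitions (or exploiting Doppler shifts), which is outside this toy example.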
Biosketch
Joshua Rapp is a Principal Research Scientist at Mitsubishi Electric Research Laboratories (MERL) in Cambridge, MA, USA. He received the PhD degree in electrical engineering from Boston University in 2020 and was a postdoc at Stanford University from 2020 to 2021.
Dr. Rapp’s research focuses on optical acquisition and statistical signal processing for 3D sensing, with applications in autonomous vehicles, airflow sensing, and factory automation. His contributions to single-photon lidar have earned him several honors, including the IEEE Signal Processing Society Best PhD Dissertation Award (2021), the Boston University Best Electrical Engineering Dissertation Award (2020), the IEEE Signal Processing Society Young Author Best Paper Award (2020), and a Best Student Paper Award at the IEEE International Conference on Image Processing (2018).
He is a member of IEEE, Optica, Eta Kappa Nu, and Tau Beta Pi, and serves on the IEEE Signal Processing Society Computational Imaging Technical Committee. He is also a Consulting Associate Editor for the IEEE Open Journal of Signal Processing.
Bridging the Synthetic-to-Real Gap: Neural Rendering for Scalable Autonomous Vehicle Simulation
Achieving high-fidelity, scalable, and cost-effective validation of autonomous vehicle perception systems remains a critical challenge in the industry. In particular, traditional synthetic datasets, while controllable and scalable, often fail to generalize due to the domain gap between simulation and real-world inputs.
In this talk, we present aiMotive’s neural reconstruction and rendering pipeline, integrated into aiSim—our ISO 26262-certified simulator—for advancing realism in synthetic environments while retaining full control and scalability. We explore how cutting-edge techniques like neural radiance fields (NeRF), hybrid rendering, and Gaussian Splatting can be combined with procedural and manual scene generation to create photorealistic 3D reconstructions of real-world environments.
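As a rough illustration of the splatting idea mentioned above (not aiMotive's implementation), the toy renderer below alpha-composites isotropic 3D Gaussians front to back through a pinhole camera; the function name, parameters, and the isotropic-covariance simplification are assumptions made for brevity.

    import numpy as np

    def splat_gaussians(means, colors, opacities, sigmas, K, img_hw):
        """Render a toy image from isotropic 3D Gaussians via depth-sorted alpha compositing.

        means     : (N, 3) Gaussian centers in camera coordinates (z > 0, meters)
        colors    : (N, 3) RGB values in [0, 1]
        opacities : (N,)   base opacity of each Gaussian in [0, 1]
        sigmas    : (N,)   isotropic world-space standard deviations [m]
        K         : (3, 3) pinhole camera intrinsics
        img_hw    : (height, width) of the output image
        """
        H, W = img_hw
        image = np.zeros((H, W, 3))
        transmittance = np.ones((H, W))

        # Front-to-back compositing: process the nearest Gaussians first.
        order = np.argsort(means[:, 2])
        ys, xs = np.mgrid[0:H, 0:W]

        for i in order:
            x, y, z = means[i]
            # Project the center with the pinhole model.
            u = K[0, 0] * x / z + K[0, 2]
            v = K[1, 1] * y / z + K[1, 2]
            # Approximate screen-space footprint (standard deviation in pixels).
            sigma_px = K[0, 0] * sigmas[i] / z
            # 2-D Gaussian falloff around the projected center.
            d2 = (xs - u) ** 2 + (ys - v) ** 2
            alpha = opacities[i] * np.exp(-0.5 * d2 / sigma_px ** 2)
            image += (transmittance * alpha)[..., None] * colors[i]
            transmittance *= 1.0 - alpha
        return image

Production Gaussian-splatting renderers typically add anisotropic covariances, tile-based rasterization, and learned view-dependent colors, which is what makes real-time photorealistic playback feasible.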
We discuss key technical challenges addressed by our pipeline:
- Achieving high visual fidelity and multi-sensor realism in dynamic, safety-critical simulation environments
- Balancing rendering quality with the real-time performance constraints required for hardware-in-the-loop (HiL) and closed-loop ADAS/AD validation
- Developing an end-to-end workflow that allows integration of both real-world captures and synthetic content under a common, simulation-friendly format
Our approach supports both rasterization and ray-tracing techniques, allowing scenario designers to toggle between performance and photorealism. This has enabled aiSim to generate robust and diverse data for testing AI-based perception systems, minimizing the synthetic-to-real domain gap without compromising safety or scalability.
This presentation will offer insights into our development journey, share benchmarks on visual fidelity and performance, and explore how neural rendering is reshaping simulation standards across the industry. Attendees will leave with a clear understanding of the potential and practical considerations of integrating neural reconstruction into AV simulation workflows.
Biosketch
Tamás Matuszka is a Lead AI Research Scientist at aiMotive, where he leads the development of an AI-powered multimodal automatic annotation pipeline for both dynamic and static objects. His team combines deep learning, computational geometry, and foundation models to push the boundaries of automated data labeling. They also work on neural reconstruction models that are integrated into aiMotive’s in-house simulator, aiSim, supporting features such as open/closed-loop model evaluation, sensor transfer, and more. Tamás’s research has been featured at premier conferences including CVPR, NeurIPS, ICLR, ECCV, and SIGGRAPH, in the form of demonstrations, workshop papers, and posters.
Before joining aiMotive, he served as R&D Director at INDE R&D, where he focused on blending Augmented Reality with deep learning techniques. Prior to that, he was a visiting researcher at the Korea Advanced Institute of Science and Technology (KAIST). Tamás holds a PhD in Computer Science from Eötvös Loránd University, Budapest, where his dissertation explored Augmented Reality supported by Semantic Web Technologies.