3D-DLAD 2023
5th Workshop on 3D-Deep Learning for Automated Driving
8th Edition of Deep Learning for Automated Driving (DLAD) workshop
IEEE Intelligent Vehicles Symposium (IV'2023) - Anchorage, Alaska, USA
The IEEE Intelligent Transportation Systems Society (ITSS) is the IEEE society covering automotive topics broadly, including autonomous driving. The Intelligent Vehicles Symposium (IV'2023) (link) is a premier forum sponsored by the ITSS. This workshop belongs to the 'Deep Learning for Autonomous Driving' (DLAD) series and focuses on 3D data processing from Lidar, Radar, cameras, HD maps, and ToF sensors.
Deep Learning has become a de-facto tool in Computer Vision and 3D processing, boosting performance and accuracy for diverse tasks such as object classification, detection, optical flow estimation, motion segmentation, and mapping. Recently, Lidar sensors have been playing an important role in the development of autonomous vehicles, as they overcome some of the main drawbacks of cameras, such as degraded performance under changes in illumination and weather conditions. In addition, Lidar sensors capture a wider field of view while directly providing 3D information, which is essential to ensure the safety of all traffic participants. However, processing more than 100k points per scan remains computationally challenging. To address the growing interest in deep learning for Lidar point clouds from both academia and industry in the autonomous driving domain, we propose the current workshop to disseminate the latest research.
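To give a sense of the scale mentioned above, the sketch below is a minimal, hypothetical NumPy example (not part of any workshop baseline) of voxel downsampling, a common first step for taming a 100k-point Lidar scan before feeding it to a network; the voxel size and scene extents are illustrative assumptions.

```python
import numpy as np

def voxel_downsample(points, voxel_size=0.5):
    """Reduce a point cloud by keeping one centroid per occupied voxel."""
    # Map each 3D point to an integer voxel index.
    idx = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel and average the members of each group.
    _, inverse, counts = np.unique(
        idx, axis=0, return_inverse=True, return_counts=True
    )
    sums = np.zeros((counts.shape[0], 3))
    np.add.at(sums, inverse, points)  # scatter-add points into their voxels
    return sums / counts[:, None]

# Synthetic scan of 100k points in a 100 m x 100 m x 10 m volume.
rng = np.random.default_rng(0)
scan = rng.uniform([-50.0, -50.0, -2.0], [50.0, 50.0, 8.0], size=(100_000, 3))
down = voxel_downsample(scan, voxel_size=0.5)
print(scan.shape[0], "->", down.shape[0])
```

Real pipelines (e.g. pillar- or voxel-based detectors referenced below) build on the same grouping idea, but learn features per cell instead of simply averaging coordinates.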
Previous editions of the workshop series:
3D-DLAD-v4, IV 2022, Aachen, Germany
3D-DLAD-v3, IV 2021, Nagoya, Japan
3D-DLAD-v2, IV 2020, Las Vegas, USA
CoFED-DLAD 2020, Rhodes, Greece
DLAD-BP, ITSC 2019, Auckland, New Zealand
3D-DLAD-v1, IV 2019, Paris, France
DLAD, ITSC 2017, Yokohama, Japan
List of topics:
Semantic scene completion with camera-LiDAR fusion
3D self-supervised occupancy and flow estimation using LiDAR and camera
Automating HD map generation and auto-labeling pipelines
Progress in LiDAR point cloud semantic segmentation and detection architectures
Upsampling of LiDAR point clouds and domain adaptation
Deep learning for camera-based monocular 3D detection and depth estimation
3D deep learning for perception with Radar
New sensor technologies: FMCW LiDARs, HD Radars
Deep learning for LiDAR localization, VSLAM, meshing, and point cloud inpainting
Deep learning for odometry and map/HD map generation with LiDAR cues
Deep fusion of automotive sensors (LiDAR, camera, Radar)
Graph Neural Networks (GNNs) for 3D data, meshes, and point clouds
Design of datasets and active learning methods for point clouds
Generalization techniques across different LiDAR sensors, multi-LiDAR setups, and point densities
Real-time implementation on embedded platforms (efficient design and hardware accelerators)
Challenges of deployment in a commercial system (functional safety and high accuracy)
References:
Bai, Xuyang, et al. "TransFusion: Robust LiDAR-Camera Fusion for 3D Object Detection with Transformers." CVPR 2022. [pdf]
Shi, Guangsheng, Ruifeng Li, and Chao Ma. "PillarNet: Real-Time and High-Performance Pillar-Based 3D Object Detection." ECCV 2022. [pdf]
Sun, Pei, et al. "Scalability in Perception for Autonomous Driving: Waymo Open Dataset." CVPR 2020.
Mersch, Benedikt, et al. "Receding Moving Object Segmentation in 3D LiDAR Data Using Sparse 4D Convolutions." RA-L 2022.
Chen, Xieyuanli, et al. "Moving Object Segmentation in 3D LiDAR Data: A Learning-Based Approach Exploiting Sequential Data." IROS 2021.
Sun, Pei, et al. "RSN: Range Sparse Net for Efficient, Accurate LiDAR 3D Object Detection." CVPR 2021. [pdf]
Hu, Peiyun, et al. "What You See Is What You Get: Exploiting Visibility for 3D Object Detection." CVPR 2020. [pdf]
Cubuk, Ekin D., et al. "RandAugment: Practical Automated Data Augmentation with a Reduced Search Space." CVPRW 2020. [pdf]
"Cylinder3D: An Effective 3D Framework for Driving-Scene LiDAR Semantic Segmentation." CVPR 2021. [pdf]
Xu, Chenfeng, et al. "Image2Point: 3D Point-Cloud Understanding with 2D Image Pretrained Models." ECCV 2022. [link]
Yang, B., Liang, M., and Urtasun, R. "HDNET: Exploiting HD Maps for 3D Object Detection." CoRL 2018, PMLR 87:146-155.
Wolcott, R. W., and Eustice, R. M. "Fast LIDAR Localization Using Multiresolution Gaussian Mixture Maps." ICRA 2015.
Behley, J., et al. "SemanticKITTI: A Dataset for Semantic Scene Understanding of LiDAR Sequences." ICCV 2019. [link]
Caesar, Holger, et al. "nuScenes: A Multimodal Dataset for Autonomous Driving." CVPR 2020. [link]
Cortinhal, Tiago, George Tzelepis, and Eren Erdal Aksoy. "SalsaNext: Fast, Uncertainty-Aware Semantic Segmentation of LiDAR Point Clouds." ISVC 2020. [link]
Mei, Jilin, and Huijing Zhao. "Scene Context Based Semantic Segmentation for 3D LiDAR Data in Dynamic Scene." arXiv preprint arXiv:2003.13926 (2020). [link]
Ding, Li, and Chen Feng. "DeepMapping: Unsupervised Map Estimation From Multiple Point Clouds." CVPR 2019. [link]
Gojcic, Zan, et al. "The Perfect Match: 3D Point Cloud Matching with Smoothed Densities." CVPR 2019.