This website provides supplementary materials for the paper "Generate Realistic Test Scenes for V2X Communication Systems".
Abstract
Accurately perceiving complex driving environments is essential for ensuring the safe operation of autonomous vehicles. With the tremendous progress in deep learning and communication technologies, cooperative perception with Vehicle-to-Everything (V2X) technologies has emerged as a solution to overcome the limitations of single-agent perception systems in perceiving distant objects and occlusions. Despite the considerable advancements, V2X cooperative perception systems require thorough testing and continuous enhancement of system performance. Given that V2X driving scenarios entail intricate communications with multiple vehicles across various geographic locations, creating V2X test scenarios for these systems poses a significant challenge. Moreover, current testing methodologies rely on manual data collection and labeling, which are both time-consuming and costly.
In this paper, we design and implement V2XGen, an automated test generation tool for V2X cooperative perception systems. V2XGen uses a high-fidelity approach to generate realistic cooperative object instances and strategically place them at critical positions within the background scene data. Furthermore, V2XGen adopts a fitness-guided strategy for transformed scene generation, which improves testing efficiency. We conduct experiments on V2XGen using multiple cooperative perception systems with different fusion schemes to assess its performance on various tasks. The experimental results demonstrate that V2XGen can generate realistic test scenes and effectively detect erroneous behaviors under different V2X-oriented driving conditions. Furthermore, the results validate that retraining the systems under test with the generated scenes enhances average detection precision while reducing occlusion and long-range perception errors.
The website is organized as follows:
Home page: The motivation for why V2XGen is needed, followed by an illustration and introduction of our research workflow.
Approach: This section contains details and visualizations illustrating the main functional modules (e.g., Transformation Operators and Fitness-Guided Testing) in V2XGen's workflow.
Research Questions: This section contains further details and visualizations of our experiments that supplement the paper.
Replication Package: This section provides the essential procedures required to reproduce the experimental results in our paper.
Motivation
Han et al. [1] present the challenges of conducting collaborative perception in real-world autonomous driving and discuss the corresponding future research directions.
First, training collaborative perception systems relies heavily on fully labeled, large-scale datasets. The annotations are labor-intensive and time-consuming, especially for collaborative systems involving multiple agents, which seriously hinders research on collaborative perception in the real world (Section V-D).
Second, it is vital to achieve robust and accurate perception performance in the various complex and critical scenes encountered in autonomous driving. Although several large-scale datasets have emerged in recent years, they are primarily designed for common scenarios and fail to cover complex and challenging scenes (Section V-B).
Collaborative perception is a multi-agent system in which agents share perceptual information to overcome the visual limitations of the ego AV (Section I). To ensure continuous improvement of a V2X system, developers must build two typical types of V2X application scenarios (i.e., occlusion and long-range perception from the ego vehicle's viewpoint) to iteratively test and optimize the system's corresponding basic functionality.
Therefore, we design and implement V2XGen for generating realistic and perspective-consistent test data to satisfy test input specifications for V2X cooperative perception systems.
Workflow
The workflow of V2XGen
V2XGen is a tool devised for automated test generation for the V2X cooperative perception module within an ADS. The figure above presents the high-level workflow of V2XGen.
First, the developer initiates the process by selecting seeds from real-world V2X scenes.
Second, V2XGen applies the metamorphic relations specifically defined for cooperative perception systems. It is equipped with two basic transformation operators (i.e., insertion and deletion) and three composite transformation operators (i.e., scale, translation, and rotation) that are combinations of the basic operators; a code sketch of these operators follows the workflow below.
Subsequently, guided by a fitness metric, V2XGen efficiently selects the test scenes most likely to reveal potential functional defects in the systems under test, such as occlusion and long-range perception errors (also sketched below).
Finally, V2XGen can enhance the system’s cooperative perception performance through retraining with generated transformed scenes.
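To make the transformation step concrete, below is a minimal, illustrative Python sketch of how the three composite operators can be built by combining the basic insertion and deletion operators. All names here (Scene, ObjectInstance, the field layout) are hypothetical simplifications, not V2XGen's actual API; the real tool operates on multi-agent LiDAR scenes with high-fidelity, perspective-consistent instance rendering.

```python
import copy
from dataclasses import dataclass, replace

# Hypothetical scene/object types for illustration only.
@dataclass(frozen=True)
class ObjectInstance:
    x: float      # position in the ego frame (meters)
    y: float
    yaw: float    # heading (radians)
    scale: float  # size multiplier

@dataclass
class Scene:
    objects: list

# --- Basic transformation operators --------------------------------------
def insert(scene: Scene, obj: ObjectInstance) -> Scene:
    """Insertion: place a new object instance into the background scene."""
    new = copy.deepcopy(scene)
    new.objects.append(obj)
    return new

def delete(scene: Scene, obj: ObjectInstance) -> Scene:
    """Deletion: remove an existing object instance from the scene."""
    new = copy.deepcopy(scene)
    new.objects.remove(obj)
    return new

# --- Composite operators: a deletion followed by a re-insertion ----------
def translate(scene: Scene, obj: ObjectInstance, dx: float, dy: float) -> Scene:
    return insert(delete(scene, obj), replace(obj, x=obj.x + dx, y=obj.y + dy))

def rotate(scene: Scene, obj: ObjectInstance, dyaw: float) -> Scene:
    return insert(delete(scene, obj), replace(obj, yaw=obj.yaw + dyaw))

def rescale(scene: Scene, obj: ObjectInstance, factor: float) -> Scene:
    return insert(delete(scene, obj), replace(obj, scale=obj.scale * factor))
```

Similarly, the fitness-guided generation step can be pictured as a search loop that ranks transformed candidates by a fitness score and keeps the most promising ones. The fitness heuristic below (counting long-range objects) is a placeholder assumption for illustration; V2XGen's actual fitness metric is defined in the paper.

```python
import random

def fitness(scene: Scene) -> float:
    """Placeholder heuristic: favor scenes with objects far from the ego
    vehicle, which are more likely to stress long-range perception."""
    far = sum(1 for o in scene.objects if (o.x**2 + o.y**2) ** 0.5 > 50.0)
    return far / max(len(scene.objects), 1)

def fitness_guided_generation(seed: Scene, budget: int, top_k: int) -> list:
    """Generate `budget` transformed variants of a seed scene and keep the
    top_k candidates ranked by the fitness score."""
    candidates = []
    for _ in range(budget):
        variant = copy.deepcopy(seed)
        if variant.objects:
            target = random.choice(variant.objects)
            variant = translate(variant, target,
                                dx=random.uniform(-10, 10),
                                dy=random.uniform(-10, 10))
        candidates.append(variant)
    return sorted(candidates, key=fitness, reverse=True)[:top_k]
```

The scenes surviving this selection are the ones used for testing and, ultimately, for retraining the system under test.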
Reference
[1] Han, Yushan, Hui Zhang, Huifang Li, Yi Jin, Congyan Lang, and Yidong Li. "Collaborative perception in autonomous driving: Methods, datasets, and challenges." IEEE Intelligent Transportation Systems Magazine (2023).