This website provides supplementary materials for the paper "Drivence: Realistic Driving Sequence Synthesis for Testing Multi-sensor Fusion Perception Systems".
The website is organized as follows:
Home page: This section provides an introduction to Drivence, including its motivation and an overview of its workflow.
Approach: This section details the main components of Drivence's workflow, complemented by visualizations. Key modules include the Occupancy Grid Mapping Module, Lane Waypoint Generation Module, Trajectory Generation Module, and Multi-sensor Simulation Module.
Data Visualization: This section showcases examples of sequence-level multi-modal test cases generated by Drivence, featuring both original and generated driving sequences, including typical examples of 8 scenarios and 6 driving patterns.
Experiment: This section outlines the details of the systems under test (SUTs), presents a visual comparison with current testing tools, and visualizes several error-revealing test sequences.
Replication Package: This section provides the necessary procedures to reproduce the experimental results presented in our paper.
DRIVENCE is an automated testing tool for MSF-based perception systems; its motivations and significance are further highlighted through a recent systematic survey [1] conducted among practitioners in autonomous driving. Note that the gray highlighted sentences are the instructions provided in the survey.
(1) Most industry autonomous driving systems employ or offer multi-sensor fusion strategies to improve the performance of perception systems.
70% of interviewees and over 68% of survey participants said their driving systems used at least three of four types of sensors, e.g., cameras, LiDARs, radars, GPS.
(2) Testing perception systems is a common task during ADS development. Moreover, it is common practice to test perception systems with multi-modal sensor driving sequences (i.e., segments of driving recordings) as input.
Common Practice 4: In addition to testing control logic, ADS developers also need to construct segments of driving recordings to test DL models, which take multi-modal sensor data as input, not just road images.
(3) There are too many possible driving scenarios to test, and constructing test data manually is time-consuming.
In addition to writing unit tests for control logic, ADS developers also need to test DL models, especially those models in the perception and prediction modules. ..., ADS developers need to manually label and clip driving recordings collected from on-road or simulation testing to construct small recording segments for testing these DL models, which are referred to as unit tests in ADS development. ..., there are too many possible driving scenarios to test, and it is time-consuming to manually process driving recordings to construct test scenarios
(4) However, existing techniques struggle to facilitate the testing of perception systems, as they either limit the synthesis to a set of static scenes (rather than driving sequences captured in real scenarios) or focus solely on single-sensor perception systems. These methods can hardly assist practitioners in testing.
These multi-module ADSs take multiple types of sensor data as input. Yet the majority of test generation techniques only generate road image data. Therefore, it is worth investigating how to generate multi-modal sensor data for new driving scenarios.
(5) Indeed, generating modal-consistent sequences is a challenging task, as the synthesized sequence-level test data must ensure modal-consistency across all sensors at every timestamp.
This multi-modality of sensor data makes it more difficult to generate test cases. ..., When transforming one type of sensor data, such as adding an object, other types of sensor data must be updated consistently.
To bridge this gap, we propose DRIVENCE, an automated tool for generating driving sequences to test MSF-based perception systems.
Given a test seed (i.e., a multi-modal driving sequence), the goal is to generate new test data by inserting one or multiple new traffic participants (i.e., NPC cars) into this driving sequence. To achieve this, Drivence operates in three steps using distinct modules.
First, an Occupancy Grid Mapping Module is employed to construct a global map that differentiates drivable and non-drivable areas.
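As a rough illustration of this idea (not Drivence's actual implementation), a minimal sketch in Python might mark a grid cell as non-drivable when LiDAR returns in that cell span more than a small height range; the function name, cell size, and threshold below are all assumptions:

```python
import numpy as np

def build_occupancy_grid(points, cell_size=0.5, height_thresh=0.3):
    """Toy occupancy grid: a cell is occupied (non-drivable) when the
    LiDAR returns falling into it span more than `height_thresh` meters.

    points: (N, 3) array of LiDAR points (x, y, z) in the ego frame.
    Returns a boolean 2D array (True = occupied).
    """
    # Map each point's (x, y) to an integer grid cell index.
    ij = np.floor(points[:, :2] / cell_size).astype(int)
    ij -= ij.min(axis=0)              # shift indices so they start at 0
    h, w = ij.max(axis=0) + 1
    lowest = np.full((h, w), np.inf)
    highest = np.full((h, w), -np.inf)
    for (i, j), z in zip(ij, points[:, 2]):
        lowest[i, j] = min(lowest[i, j], z)
        highest[i, j] = max(highest[i, j], z)
    # A tall structure (pole, car, wall) makes the cell non-drivable.
    return (highest - lowest) > height_thresh
```

A flat road surface yields near-zero height spread per cell, while vertical structures exceed the threshold and are marked occupied.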
Second, a Trajectory Generation Module is used to create realistic candidate trajectories within the drivable regions. Drivence utilizes both global and local path planners to ensure the generated trajectories adhere to physical motion rules and avoid collisions with existing traffic participants (e.g., other NPC cars and pedestrians).
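For intuition about the global-planning step, here is a deliberately simplified sketch that searches for a collision-free cell path on an occupancy grid using breadth-first search; the actual planners additionally enforce physical motion rules and dynamic collision avoidance, which this toy example omits:

```python
from collections import deque

def plan_global_path(occupied, start, goal):
    """Breadth-first search over free (drivable) grid cells.

    occupied: 2D grid of booleans (True = non-drivable).
    Returns the shortest cell path from start to goal as a list of
    (row, col) tuples, or None if the goal is unreachable.
    """
    h, w = len(occupied), len(occupied[0])
    prev = {start: None}                  # visited set + back-pointers
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:       # walk back-pointers to start
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        i, j = cell
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= ni < h and 0 <= nj < w and not occupied[ni][nj] \
                    and (ni, nj) not in prev:
                prev[(ni, nj)] = cell
                queue.append((ni, nj))
    return None
```

The resulting cell path would still need smoothing and kinematic checks before it could serve as an NPC trajectory.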
Third, a Multi-sensor Simulation Module is utilized to derive new test data by integrating randomly selected object instances into the seed data using the generated trajectories. Drivence enables simultaneous rendering of multiple trajectories and enhances scenario interactions with features like improved occlusion handling and more realistic lighting and shadow effects.
Additionally, to enable efficient testing, Drivence crafts metamorphic relations to automatically generate test oracles for evaluating perception systems.
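One plausible relation of this kind (a hypothetical sketch, not necessarily the paper's exact oracle) is that every object detected on the seed sequence, plus every inserted NPC, should still be matched by a detection on the augmented sequence. A minimal frame-level checker over 2D boxes might look like:

```python
def check_metamorphic_relation(seed_dets, followup_dets, inserted,
                               iou_thresh=0.5):
    """Hypothetical oracle: each seed detection and each inserted NPC box
    must be matched (by IoU) against the follow-up detections.

    Boxes are axis-aligned (x1, y1, x2, y2) tuples for one frame.
    Returns the list of unmatched (violating) boxes; empty means pass.
    """
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        union = area(a) + area(b) - inter
        return inter / union if union else 0.0

    violations = []
    for box in list(seed_dets) + list(inserted):
        if not any(iou(box, d) >= iou_thresh for d in followup_dets):
            violations.append(box)
    return violations
```

Running such a check per timestamp turns the known insertion into an automatic oracle: any unmatched box flags a potential perception error without manual labeling.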
[1] Guannan Lou, Yao Deng, Xi Zheng, Mengshi Zhang, and Tianyi Zhang. 2022. Testing of autonomous driving systems: where are we and where should we go?. In Proceedings of the 30th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering.