This page gives a brief overview of the procedures required to reproduce the experimental results reported in our research paper. To facilitate future investigations, our replication package is publicly available in a GitHub repository, giving researchers a basis to validate and build on our findings.
This section covers MultiTest's environment, a quick demo, the complete dependencies, the steps to reproduce our experiments, and customisation.
The main folder structure is as follows:
MultiTest
├── _assets
│   └── shapenet          # ShapeNet object database
├── _datasets
│   ├── kitti             # KITTI dataset
│   └── kitti_construct   # generated test cases
├── _queue_guided         # seed queue
├── system                # systems under test
├── blender               # Blender scripts
├── config                # sensor and algorithm configuration
├── core                  # core modules of MultiTest
├── third                 # third-party repositories
├── eval_tools            # tools for evaluating AP
├── fitness_score.py      # fitness metric proposed in MultiTest
├── init.py               # environment setup script
├── ioutest.py            # 2D and 3D IoU calculation
├── logger.py             # logging
├── visual.py             # data visualisation scripts
├── demo.py               # quick-start demo
└── main.py               # MultiTest entry point
We implement MultiTest with PyTorch 1.8.0 and Python 3.7.11. All experiments were conducted on a server with an Intel i7-10700K CPU (3.80 GHz), 48 GB RAM, and an NVIDIA GeForce RTX 3070 GPU (8 GB VRAM).
This section presents a quick demonstration of how to leverage MultiTest to generate multi-modal data.
Run the following command to install the dependencies:
pip install -r requirements.txt
Then, build the project with the following command:
python build_script.py
Finally, set your project path: config.common_config.project_dir="YOUR/PROJECT/PATH"
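For instance, a minimal sketch, assuming the options are module-level attributes of the config package (as the dotted names in this guide suggest):

from config import common_config

common_config.project_dir = "/home/user/MultiTest"  # your project path

The same pattern applies to the other config.* options referenced below.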
Install Blender.
MultiTest leverages Blender, an open-source 3D computer graphics suite, to build a virtual camera sensor.
Install Blender >= 3.3.1 from this link.
Set the config: config.camera_config.blender_path="YOUR/BLENDER/PATH"
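To sanity-check the path you just configured, you can launch Blender headless; a sketch, where the binary path is whatever you set above:

import subprocess

# --background runs Blender without a UI; --version just prints and exits.
subprocess.run(["YOUR/BLENDER/PATH", "--background", "--version"], check=True)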
Install S2CRNet [optional].
MultiTest leverages S2CRNet to improve the realism of the synthesized test cases.
Download the repository into MultiTest/third/S2CRNet:
git clone git@github.com:stefanLeong/S2CRNet.git
Set the config: config.camera_config.is_image_refine=True
Install CENet [optional].
MultiTest leverages CENet to segment the road from the point cloud and obtain accurate object positions.
Download the repository into MultiTest/third/CENet:
git clone git@github.com:huixiancheng/CENet.git
After completing the configuration above, you can run the demo.py file we provide to generate multi-modal data. The folders MultiTest/_assets/shapenet and MultiTest/_datasets/kitti contain seed data to support a quick start.
python init.py
python demo.py
The results can be found at MultiTest/_datasets/kitti_construct/demo. Then run visual.py to visualise the synthetic data.
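If you prefer a standalone look at a synthesized point cloud (independent of visual.py), here is a minimal sketch using numpy and matplotlib; the .bin path is illustrative:

import numpy as np
import matplotlib.pyplot as plt

# KITTI .bin files store float32 (x, y, z, reflectance) tuples.
pc = np.fromfile("MultiTest/_datasets/kitti_construct/demo/training/velodyne/000000.bin",
                 dtype=np.float32).reshape(-1, 4)

# Bird's-eye view: x/y positions coloured by height.
plt.scatter(pc[:, 0], pc[:, 1], s=0.2, c=pc[:, 2], cmap="viridis")
plt.xlabel("x (m)")
plt.ylabel("y (m)")
plt.show()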
To reproduce our experiments, the complete set of dependencies is required: first install all the dependencies from the "Quick Start" section, then carefully configure the environment for each system under test.
These systems are derived from the MSF benchmark; the detailed configuration process is provided here.
Each system should be placed in the directory MultiTest/system/SYSTEM_NAME
Run the following command to generate multi-modal data:
python main.py --system_name "SYSTEM" --select_size "SIZE" --modality "MODAL"
The result can be found at MultiTest/_datasets/kitti_construct/SYSTEM.
Here are some examples:
Generate 200 multi-modal test cases to test CLOCs:
python main.py --system_name CLOCs --select_size 200 --modality multi
Generate point cloud data only to test Second:
python main.py --system_name Second --select_size 200 --modality pc
Generate image data only to test Rcnn:
python main.py --system_name Rcnn --select_size 200 --modality image
Synthesize multi-modal data only (without testing a system):
python main.py --system_name random --select_size 200 --modality multi
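For reference, a minimal sketch of how the flags shown above might be parsed; main.py's actual argument handling may differ, and the modality choices are taken from the examples:

import argparse

parser = argparse.ArgumentParser(description="Generate multi-modal test cases with MultiTest")
parser.add_argument("--system_name", required=True, help="system under test, or 'random' for none")
parser.add_argument("--select_size", type=int, default=200, help="number of seeds to select")
parser.add_argument("--modality", choices=["multi", "pc", "image"], default="multi", help="modalities to generate")
args = parser.parse_args()
print(args.system_name, args.select_size, args.modality)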
This subsection describes how to reproduce our experimental results.
RQ1: Realism Validation
Generate multi-modal data from 200 randomly selected seeds:
python main.py --system_name random --select_size 200
Validate the realism of the synthetic images.
Install pytorch-fid from here:
pip install pytorch-fid
Calculate the FID value:
python -m pytorch_fid "MultiTest/_datasets/kitti/training/image_2" "MultiTest/_datasets/kitti_construct/SYSTEM/training/image_2"
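Equivalently, the FID can be computed from Python. A sketch assuming pytorch-fid's calculate_fid_given_paths helper (its signature may vary across versions):

import torch
from pytorch_fid.fid_score import calculate_fid_given_paths

fid = calculate_fid_given_paths(
    ["MultiTest/_datasets/kitti/training/image_2",
     "MultiTest/_datasets/kitti_construct/SYSTEM/training/image_2"],
    batch_size=50,
    device="cuda" if torch.cuda.is_available() else "cpu",
    dims=2048,  # standard InceptionV3 pool3 features
)
print(f"FID: {fid:.2f}")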
Validate the realism of the synthetic LiDAR point clouds.
Install frd from here.
Calculate the FRD value:
python lidargen.py --fid --exp kitti_pretrained --config kitti.yml
Validate the modality consistency of the synthetic multi-modal data.
The result can be found at Multimodality/RQ/RQ1/consistent.
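ioutest.py in the repository computes the 2D and 3D IoU used by MultiTest; for reference, a minimal sketch of the 2D case, assuming boxes in (x1, y1, x2, y2) format:

def iou_2d(a, b):
    # a, b: axis-aligned boxes (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

print(iou_2d((0, 0, 10, 10), (5, 5, 15, 15)))  # ~0.143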
RQ2: Fault Detection Capability
Generate multi-modal data with fitness guidance from 200 randomly selected seeds:
python main.py --system_name "SYSTEM" --select_size 200
Evaluate the AP value and the number of errors in each error category on the generated test cases of a perception system:
python RQ2_tools.py --system_name "SYSTEM" --seed_num 200 --iter 1
RQ3: Performance Improvement
Format the generated data of a perception system into the KITTI format for retraining:
python copy_data.py --system_name "SYSTEM"
The retraining dataset can be found at _workplace_re/SYSTEM/kitti.
Copy the dataset to the dataset directory of the corresponding system and execute the training script provided by each system.
Run MultiTest on a custom dataset:
Prepare your dataset in the KITTI format.
Set your dataset path: config.common_config.kitti_dataset_root="YOUR/DATASET/PATH"
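For reference, the standard KITTI object-detection layout looks like this (directory names are KITTI's; which subfolders are required depends on the systems you test):

YOUR/DATASET/PATH
└── training
    ├── image_2     # RGB images (.png)
    ├── velodyne    # LiDAR point clouds (.bin)
    ├── calib       # camera-LiDAR calibration files (.txt)
    └── label_2     # object annotations (.txt)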
Run MultiTest with custom 3D models:
Prepare your model files in glTF format.
Set your model path: config.common_config.assets_dir="YOUR/ASSETS/PATH"
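A quick sketch to confirm your models are discoverable; the recursive scan is an assumption about how the assets directory is organised:

from pathlib import Path

assets_dir = Path("YOUR/ASSETS/PATH")
for model in sorted(assets_dir.rglob("*.gltf")):
    print(model)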
Run MultiTest with custom MSF systems:
Place the system in the directory MultiTest/system/YOUR_SYSTEM_NAME
Provide the inference interface at line 548 of the main.py file
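The shape of that interface is system-specific; a hypothetical stub follows (the name and return format are illustrative, not MultiTest's actual API):

# Hypothetical inference hook to wire in at main.py:548.
def run_inference(image_path: str, pointcloud_path: str):
    # Return detections, e.g. dicts with a class label, a 2D box
    # (x1, y1, x2, y2), a 3D box, and a confidence score.
    raise NotImplementedError("call your MSF system's detector here")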