On this page, we summarize the steps necessary to replicate the experimental results in our paper. Our replication package is publicly available in the GitHub repository as a reference for future research.
The directory AI-CPS-SIEGE/ contains five sub-directories: ACC/, AFC/, APV/, BBC/, and LKA/. Each stores one of the AI-CPS introduced in Experimental Systems. The model directories share a similar structure, so we take LKA/ as an example.
LKA/ contains the following sub-directories:
abstraction/ stores the scripts for the abstraction process.
data_generation/ has one sub-directory, generated_date/, which stores the simulated data from each DRL controller. The script LKA_data_generator.mlx initiates the simulation with the selected controllers.
DRL_training/ stores the scripts for training new DRL agents for the AI-CPS. This folder contains three scripts, LKA_DDPG_training.mlx, LKA_TD3_training.mlx, and LKA_SAC_training.mlx, and one sub-directory, trained_agent/. The scripts train DRL agents with the specified algorithms, and the sub-directory collects the pre-trained agents.
ensemble/ stores the script LKA_AC_ensemble_training.mlx for training the "higher-level" coordinator DRL agent, and a sub-directory trained_agent containing the pre-trained coordinator agents.
evaluation/ stores the .mat files that record the evaluation results for each type of controller.
model/ stores all the Simulink .slx models used for system simulations.
The usage of the DRL controller training scripts and the main functions is introduced below.
We recommend using our pre-trained models to replicate the experimental results; however, users can also train new DRL controllers themselves by following the steps below.
Navigate to AI-CPS-SIEGE/[MODEL]/DRL_training/ ;
Run [MODEL]_[DRL]_training.mlx to launch a training script that uses the [DRL] algorithm for the model [MODEL].
During training, a window shows the training status.
When training finishes, an RL agent is generated and saved in a .mat file named [MODEL]-[DRL]-[DATE].mat.
For example, you can run the following script to train a DDPG agent for the ACC system:
ACC_DDPG_training.mlx
A window will pop up showing the training status, including the episode number, episode reward, episode steps, and the total number of steps.
The user can terminate training by clicking the STOP button in the top-left corner; otherwise, training terminates when the maximum number of episodes is reached (5000 by default). The number of episodes needed to find the optimal policy differs across systems, depending on the environment and reward function setup. Users can monitor the average reward and average steps to judge whether the episodes are sufficient to produce a trustworthy agent.
When the training is finished, a .mat file containing the trained agent will be produced. For the example above, the file is ACC-DDPG-17-Jun-0.8-0.2-0.05.mat. This agent can be directly loaded by the corresponding Simulink model to serve as the controller.
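The saved agent can be loaded back into the MATLAB workspace before simulation. The snippet below is a minimal sketch: the file name follows the example above, but the name of the variable stored inside the .mat file is an assumption and may differ in the actual package.

```matlab
% Sketch: load a pre-trained DDPG agent for the ACC system.
% The .mat file name comes from the example above; the variable
% stored inside it is an assumption and may differ in practice.
whos('-file', 'ACC-DDPG-17-Jun-0.8-0.2-0.05.mat')  % inspect stored variables
load('ACC-DDPG-17-Jun-0.8-0.2-0.05.mat')           % place the agent in the workspace
```

Once loaded, the corresponding Simulink model under model/ can pick up the agent as its controller.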
We need to simulate the target CPS to collect enough data to construct the abstract model. The steps to generate simulation experience are as follows:
Navigate to AI-CPS-SIEGE/[MODEL]/data_generation/ ;
Run [MODEL]_data_generator.mlx to start the simulation. You can change the name of the controller used for simulation inside the script. We recommend collecting simulation data from various controllers to obtain a comprehensive picture of the system behavior.
A .mat file named [MODEL]_data_[agent]_[DATE].mat will be generated when the simulation is finished.
This .mat file will be directly loaded by the abstraction script to construct an abstract model for the target system.
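Before running the abstraction script, it can be useful to verify what the generated data file contains. A minimal sketch, assuming a file name following the pattern above; the concrete name and the variables stored inside it depend on your run and are not taken from the package:

```matlab
% Sketch: inspect a generated data file before abstraction.
% The file name is illustrative; substitute the one produced by your run.
info = whos('-file', 'LKA_data_DDPG_17-Jun.mat');  % list stored variables
disp({info.name})                                  % print their names
data = load('LKA_data_DDPG_17-Jun.mat');           % load into a struct
```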
The directory AI-CPS-SIEGE/[MODEL]/abstraction/ includes the scripts to generate the abstract model based on the collected simulation data. To reproduce the experimental results, users should follow the steps below:
Navigate to AI-CPS-SIEGE/[MODEL]/abstraction/
Run [MODEL]_abstraction.mlx;
The abstract model will be generated after a while, with the name [MODEL]_abstract_model.mat.
The steps of reproducing evaluation are listed below:
Navigate to AI-CPS-SIEGE/[MODEL]/evaluation/;
Run [MODEL]_evaluation.mlx.
The evaluation results will be generated after a while, with the file name eval_[MODEL]_[method]_result.mat, where [method] indicates the type of control system for evaluation.
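The result files can be inspected directly in MATLAB. A minimal sketch, where DDPG stands in for [method] as an illustrative controller type; the variable names recorded inside the file are an assumption:

```matlab
% Sketch: load and inspect an evaluation result file.
% 'DDPG' stands in for [method]; adjust to the controller type evaluated.
res = load('eval_LKA_DDPG_result.mat');  % load results into a struct
disp(fieldnames(res))                    % show what the file records
```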
All the pre-trained agents are listed under the [MODEL]/trained_agent/ folder; alternatively, you can use your own agent trained in the previous section. All pre-designed Simulink models are available under the [MODEL]/model/ folder.