On this page, we summarize the steps necessary to replicate the experimental results in our paper. We have made our replication package publicly available at our GitHub repository as a reference for future research.
The directory Gym_Envs/ contains a Tasks/ sub-directory, which holds the 8 robotic manipulation tasks introduced in the Benchmark. The directory Falsification_Tool/ contains our falsification framework, which is compatible with physical simulators and OpenAI Gym environments. The directory Evaluation/ contains the evaluation results for the different AI controllers.
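For orientation, the top-level layout is roughly as follows (only the directories mentioned above are shown; the annotations are our summary, not verbatim repository comments):

```
.
├── Gym_Envs/               # OpenAI Gym environments
│   └── Tasks/              # the 8 robotic manipulation tasks
├── Falsification_Tool/     # falsification framework
└── Evaluation/             # evaluation results for the AI controllers
```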
Please see the instructions on the GitHub page for details on how to run the benchmark and the falsification framework.
To train new DRL controllers with our benchmark, follow the steps below.
Navigate to Gym_Envs/;
Run RL_PYTHON_PATH skrl_train_PPO.py task=FrankaBallBalancing num_envs=1024 headless=True to start training on the Ball Balancing task;
Task names: FrankaPointReaching, FrankaPegInHole, FrankaBallBalancing, FrankaBallPushing, FrankaBallCatching, FrankaDoorOpen, FrankaClothPlacing, FrankaCubeStacking;
DRL training scripts (one per algorithm): skrl_train_PPO.py, skrl_train_DDPG.py, skrl_train_TRPO.py, skrl_train_TD3.py, skrl_train_SAC.py. Any training script can be combined with any task name; see the example invocations below.
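As a sketch, other task/algorithm combinations follow the same pattern as the command above. RL_PYTHON_PATH stands for the Python launcher configured for the repository, and we assume the same flags apply to every script and task:

```
# Train PPO on the Ball Pushing task with 1024 parallel environments, headless
RL_PYTHON_PATH skrl_train_PPO.py task=FrankaBallPushing num_envs=1024 headless=True

# Train SAC on the Door Open task (same flags; only the script and task change)
RL_PYTHON_PATH skrl_train_SAC.py task=FrankaDoorOpen num_envs=1024 headless=True
```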
To evaluate a trained AI controller, for example:
Navigate to Evaluation/;
Run PYTHON_PATH manipulator_eval.py;
The task name and controller under evaluation can be changed by modifying the corresponding values in the script; a sketch of this is given below.
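As a rough illustration of that edit, the settings are plain values inside the script. The variable names and the checkpoint path below are hypothetical, so check manipulator_eval.py itself for the actual ones:

```python
# Illustrative configuration inside Evaluation/manipulator_eval.py.
# Variable names and the checkpoint path are hypothetical, not the script's actual API.
task_name = "FrankaBallBalancing"                          # any task name listed above
agent_checkpoint = "./models/FrankaBallBalancing/PPO.pt"   # hypothetical path to a trained controller
```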
To run falsification on the different trained AI controllers:
Navigate to Falsification_Tool/;
Run PYTHON_PATH manipulator_testing.py;
The task name and controller under test can be changed by modifying the corresponding values in the script, as in the session sketched below.
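Putting the steps together, a typical falsification session might look like this (PYTHON_PATH again stands for the repository's configured Python launcher; the edit step refers to the task/controller values discussed above):

```
cd Falsification_Tool/
# 1. Edit manipulator_testing.py to select the task and the controller under test
# 2. Launch the falsification run
PYTHON_PATH manipulator_testing.py
```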