The synthetic sequences recorded to evaluate our work "Distributed multi-target tracking and active perception with mobile camera networks" (demo video) can be downloaded from here. The dataset includes six sequences captured in two photo-realistic environments built in Unreal Engine 4.27 and designed using our "Framework for Fast Prototyping of Photo-realistic Environments with Multiple Pedestrians" (github):
Font environment: a green, open area where three sequences have been acquired: Font Sparse (5 people), Font Medium (10 people), and Font Busy (15 people).
Street environment: a commercial street where three sequences have been acquired: Street Sparse (5 people), Street Medium (10 people), and Street Busy (15 people).
Each sequence includes recordings from 3 static cameras and 2 mobile cameras (drones), each 500 frames long at a resolution of 1440x900 pixels.
The structure of each sequence is as follows:
The ground truth of the pedestrians in the image plane is provided in the JSON file "pedetrian_2dGT.json" included in each camera folder. The structure of the ground-truth dictionary is:
{'frame_index_0': [{'id': pedestrian_name, 'xmin': xmin, 'ymin': ymin, 'xmax': xmax, 'ymax': ymax},
                   {...}],
 'frame_index_1': [{'id': pedestrian_name, 'xmin': xmin, 'ymin': ymin, 'xmax': xmax, 'ymax': ymax},
                   {...}],
 ...
}
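For reference, this is a minimal Python sketch of how the per-camera ground truth can be loaded and iterated; the sequence and camera folder names in the path are assumptions for illustration only.

import json

# Path assumed for illustration; replace it with the camera folder of the sequence you downloaded.
gt_path = "Font_Sparse/camera_0/pedetrian_2dGT.json"

with open(gt_path, "r") as f:
    ground_truth = json.load(f)  # dict: frame index -> list of bounding-box dicts

for frame_index, boxes in ground_truth.items():
    for box in boxes:
        # Each entry gives the pedestrian id and its bounding box in pixel coordinates.
        xmin, ymin, xmax, ymax = box["xmin"], box["ymin"], box["xmax"], box["ymax"]
        print(f"frame {frame_index}: {box['id']} at ({xmin}, {ymin}, {xmax}, {ymax})")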
The state of each mobile camera at every step is stored in the "state_cam_info.json" file. The structure of each file is:
[{'n_iteration': frame_index_0, 'pos_x': x_coordinate, 'pos_y': y_coordinate, 'pos_z': z_coordinate,
  'orient_w': quaternion_orientation_w, 'orient_x': quaternion_orientation_x,
  'orient_y': quaternion_orientation_y, 'orient_z': quaternion_orientation_z},
{'n_iteration': frame_index_1, ...},
...
]
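Similarly, here is a minimal sketch of reading a mobile camera's state file; the path is again an assumption, and the quaternion-to-rotation conversion uses SciPy only as one possible option.

import json
from scipy.spatial.transform import Rotation  # optional, only used for the orientation conversion

# Path assumed for illustration; replace it with the mobile-camera folder of the sequence you downloaded.
state_path = "Font_Sparse/drone_0/state_cam_info.json"

with open(state_path, "r") as f:
    states = json.load(f)  # list of per-frame state dicts

for state in states:
    frame = state["n_iteration"]
    position = (state["pos_x"], state["pos_y"], state["pos_z"])
    # SciPy expects the quaternion in (x, y, z, w) order.
    rotation = Rotation.from_quat(
        [state["orient_x"], state["orient_y"], state["orient_z"], state["orient_w"]]
    )
    print(f"frame {frame}: position {position}")
    print(rotation.as_matrix())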
⭐ The Unreal Engine project can also be downloaded from here.