The automated analysis of crowds and the identification of crowd behaviors are crucial for predicting adverse events, for the appropriate design of public spaces, and for the real-time management of people flows.
Scene understanding, especially motion analysis and trajectory prediction of targets, has been extensively exploited to understand the dynamics of an observed scene. Crowd models have been used to detect anomalies, predict paths, and perform data-driven simulations. Most models and algorithms for the analysis of crowds are developed and tested on real-world videos and target different applications, including personal mobility, safety and security, and assistive robotics in public spaces.
In crowd analysis, as in many other research fields, the ever-increasing demand for data to train modern machine learning and deep learning algorithms appears to be unstoppable. For supervised learning, the basic requirement is the availability of a large collection of labeled data. When annotated data is scarce, supervised solutions often overfit, leading, first and foremost, to poor generalization capabilities. The literature has shown that this problem can be mitigated with a variety of regularization techniques, such as dropout, batch normalization, transfer learning across datasets, pre-training the network on different datasets, or few-shot and zero-shot learning techniques.
Still, there is an ever-growing demand for data, to which researchers respond with larger and larger datasets, at a huge cost in terms of acquisition, storage, and annotation of images and clips. Moreover, when dealing with complex problems, it is common to validate the developed algorithms across different datasets, facing inconsistencies in annotations (e.g., segmentation maps vs. bounding boxes) and the use of different standards (e.g., the number of joints in the human skeletons of OpenPose and SMPL).
The use of synthetically generated data can overcome such limitations, as the generation engine can be designed to fulfill an arbitrary number of requirements at the same time. For example, the same bounding box can hold for multiple viewpoints of the same object/scene, and the 3D position of the object is always known, as well as its volume, appearance, and motion features. These considerations have motivated the adoption of computer-generated content to satisfy two requirements: (a) visual fidelity and (b) behavioral fidelity.
In this respect, crowd analysis provides a rich and diversified use case in which synthetic data can play a relevant role: the scene should replicate the appearance of a crowd, which consists of multiple subjects with different appearances exhibiting different behaviors. These elements imply fulfilling the requirements of both visual and behavioral fidelity, simulating and modeling the diversity of motion patterns as well as the ongoing social interactions.
We expect contributions involving, but not limited to, crowd analysis and synthetic data applications.
Potential topics include, but are not limited to:
● Crowd analysis
● Trajectory prediction
● Crowd simulation
● Synthetic data for crowds
● People counting
● Anomaly detection
● Crowd simulators
● Behavioral and interactions models
Submission of papers
Prospective authors are invited to submit full-length papers, which must be written in English and submitted in PDF format. Each submission must be between 4 and 6 A4 pages long. Manuscripts should be original (neither submitted nor published anywhere else) and written in accordance with the standard IEEE double-column paper template. Each submission will be double-blind peer-reviewed by at least two experts. This requires the paper to be anonymous; please read the instructions included in the paper templates.
Notice
The IEEE Signal Processing Society and IEEE Computer Society enforce a “no-show” policy. Any accepted paper included in the final program is expected to have at least one author or qualified proxy attend and present the paper at the conference. Authors of accepted papers included in the final program who do not attend the conference will be added to a “No-Show List” compiled by the Society. The “no-show” papers will not be published by IEEE on IEEE Xplore or other public-access forums, but these papers will be distributed as part of the on-site electronic proceedings, and the copyright of these papers will belong to the IEEE.