CALL FOR PAPERS - International Journal of Computer Vision
Special Issue on Synthetic Visual Data

Aims and Scope

The recent successes in many visual recognition tasks, such as image classification, object detection, and semantic segmentation, can be attributed in part to the availability of large labeled datasets such as ImageNet. In fact, recent results indicate that the reliability of visual models might be limited not by the algorithms themselves but by the type and amount of data available. Therefore, to tackle more challenging tasks, such as global video scene understanding, progress is needed not only on the algorithmic front but also on the data front, both for learning and for quantitative evaluation. However, acquiring and densely labeling a large visual dataset with ground-truth information (e.g., semantic labels, depth, optical flow) for each new problem is not a scalable approach.

Observing the parallel progress of the computer graphics community, computer vision (CV) researchers have recently revived the use of synthetic visual data to train and benchmark CV algorithms. The reasons for this renaissance include improved photo-realism, better and easier digital authoring tools (e.g., game engines), large libraries of 3D models, and commodity hardware (e.g., GPUs) that can efficiently handle both the generation and use of such visual data. Recent research reports promising results on a variety of applications – ranging from optical flow to scene understanding – using a variety of generation strategies – from real-world images mixed with 3D models to the full creation of dynamic virtual worlds. Many open research challenges remain, including clarifying the importance of (photo-)realism, overcoming the real-to-virtual gap, modeling virtual humans and their behaviors, procedural generation, and simulation-based testing.

This special issue of IJCV follows the very successful workshop on Virtual/Augmented Reality for Visual Artificial Intelligence (VARVAI) which was held in conjunction with ECCV 2016. We welcome submissions exploring novel ways to generate and use synthetic visual data for fundamental CV problems and their applications.

Topics of Interest

The topics of interest include, but are not limited to:
  • Synthesizing visual data for CV: using game/physics/rendering engines & 3D CAD models; augmentation / transformation of real-world images and videos; procedural generation; digitization of real-world scenes, objects, persons, motions; photorealism; computational efficiency and large scale data generation;
  • Training with synthetic visual data: assessing the gap between real and synthetic data; pre-training; data augmentation; transfer learning and domain adaptation; active and reinforcement learning;
  • Evaluating CV algorithms using synthetic visual data: assessing generalization performance, especially in rare conditions; fine-grained ablative analyses; CV unit tests;
  • Applications: tracking, re-identification; human pose estimation, action recognition, and event detection; object-, instance-, and scene-level segmentation; optical flow, scene flow, depth estimation; visual question answering and spatiotemporal reasoning; recognition of objects, text, faces, emotions, ...
For examples of relevant work, please refer to the program of the VARVAI workshop.

If you are unsure whether your work is a good fit for this special issue, do not hesitate to contact the guest editors (contact information below).

Submission Process

Authors are encouraged to submit high-quality, original work that has neither appeared in, nor is under consideration by, other journals. All submissions will be peer reviewed according to the standards of the journal. Manuscripts based on previously published conference papers must be extended substantially. Detailed submission instructions are available on the IJCV website. Please select “S.I.: Synthetic Visual Data” in the “Choose Article Type” menu after clicking on “Submit new manuscript”.

Important Dates

Paper submission deadline: June 16th, 2017.

Guest Editors