Can we model high-level visual properties such as image quality, memorability and photographic style? Can we leverage large-scale datasets to mitigate the intrinsic uncertainty of these properties? And most importantly, can we successfully employ such models in applications for multimodal classification, retrieval and recommendation?
This workshop aims to encourage and inform researchers tackling the next level of problems in this exciting area of research, which we call visual analysis beyond semantics (vABS). The workshop is motivated from two directions:
1. While still in its nascent stage, research into computational models for visual analysis beyond semantics has already shown great potential and produced interesting results. However, as several recent papers published at CVPR and ICCV show, the techniques currently employed are mainly derived from content understanding (analysis pipelines involving SIFT, bag-of-visual-words (BoV) representations and large-margin classifiers). More recent and advanced computer vision and machine learning techniques, such as visual attributes, recommendation, and implicit feedback, remain largely neglected. Moreover, current approaches do not leverage multimodal information (visual and textual data).
2. vABS focuses on non-factual and uncertain information related to personal preferences, tastes and opinions. As a consequence, research in this novel field touches on many aspects of learning, vision, cognitive science and perception. For this very reason, we believe that CVPR is the ideal venue to bring together such a heterogeneous and complementary set of expertise.
Goals and Topics
Some specific areas of interest include, but are not limited to:
Papers should describe original and unpublished work on the above or closely related topics. Each paper will receive double-blind reviews. Authors should take into account the following:
The author kit provides a LaTeX2e template for submissions and an example paper demonstrating the format. Please refer to this example for detailed formatting instructions.
A paper ID will be allocated to you during submission. Please replace the asterisks in the example paper with your paper's own ID before uploading your file.