Can we model high-level visual properties such as image quality, memorability and photographic style? Can we leverage large-scale datasets to mitigate the intrinsic uncertainty of these properties? And most importantly, can we successfully employ such models in applications for multimodal classification, retrieval and recommendation?

This workshop seeks to stimulate and inform research tackling the next level of problems in this exciting area, which we call visual analysis beyond semantics (VABS). The workshop is motivated from two directions:

1. While still in its nascent stage, research into computational models for visual analysis beyond semantics has already shown great potential and interesting results. However, as several papers published at recent CVPR and ICCV editions show, the techniques currently employed derive mainly from content understanding (analysis pipelines involving SIFT, bag-of-visual-words representations and large-margin classifiers). More recent and advanced computer vision and machine learning techniques, such as visual attributes, recommendation and implicit feedback, are largely neglected. Moreover, current approaches do not leverage multimodal information (visual and textual data).

2. VABS focuses on non-factual and uncertain information related to personal preferences, tastes and opinions. As a consequence, research in this novel field touches many aspects of learning, vision, cognitive science and perception. For this very reason, we believe that CVPR is the perfect venue to find and bring together such a heterogeneous and complementary set of competencies.

Goals and Topics

Some specific areas of interest include, but are not limited to:

  • analysis of image attractiveness (high- and low-level image quality assessment)
  • face analysis and aesthetics
  • image and text memorability
  • image and visual text understanding
  • visual attributes for analysis beyond semantics
  • multimodal/multimedia benchmarks
  • visual style and affordances
Related workshops:
  • ACM Multimedia 2012: HP Challenge on Understanding the Emotional Impact of Images and Videos
  • ECCV 2012: VISART, Where Computer Vision Meets Art Workshop
  • ICIP 2008: Special Session on Image Aesthetics, Mood and Emotion

Important Dates 

  • Submission: March 30th, 2013, 11:59pm EST
  • Extended submission deadline: April 5th, 2013, 11:59pm EST
  • Notification: May 3rd, 2013
  • Camera ready: May 8th, 2013
  • Workshop:  June 28th, 2013


    Papers should describe original, unpublished work on the above or closely related topics. Each paper will receive double-blind review. Authors should take the following into account:

    • All papers must be written in English and submitted in PDF format.
    • Papers must be submitted online through the CVPR submission CMT system.
    • The maximum paper length is 8 pages. The workshop paper format guidelines are the same as for the main conference papers.
    • Submissions will be rejected without review if they exceed 8 pages, violate the double-blind policy, or violate the dual-submission policy.

    The author kit provides a LaTeX2e template for submissions, and an example paper to demonstrate the format. Please refer to this example for detailed formatting instructions.

    A paper ID will be allocated to you during submission. Please replace the asterisks in the example paper with your paper's own ID before uploading your file.

    Program Committee


    • Alexander Berg, Stony Brook
    • Teofilo Campos, University of Surrey
    • Christel Chamaret, Technicolor
    • Serge Belongie, UCSD
    • Alessio Del Bue, IIT
    • Edward Gibson, MIT
    • Derek Hoiem, UIUC
    • Diane Larlus, Xerox Research Centre Europe
    • Naila Murray, Xerox Research Centre Europe
    • Devi Parikh, Virginia Tech
    • Florent Perronnin, Xerox Research Centre Europe
    • Hanspeter Pfister, Harvard
    • Nicu Sebe, University of Trento
    • Antonio Torralba, MIT
    • Joost Van de Weijer, UAB
    • Maria Vanrell, CVC Barcelona

    Keynote Speakers


    Just drop us an email.
