Workshop at ICCV'17

This workshop was held in conjunction with ICCV 2017, Venice, Italy.

The workshop was a big success, thank you for participating!

Date: October 23rd (PM only)

Location: Sala Magnano (Palazzo del Casinò, 2nd floor)

Invited speakers

Kate Saenko

Boston University

Andrew Zisserman

University of Oxford, DeepMind

Program

Posters

Important Dates [Deadlines extended!]

Paper submission deadline: September 25th (extended from September 15th)

Paper acceptance notification: October 6th (earlier for papers that have already been submitted).

Challenge submission deadline: October 13th (extended from September 15th)

Exception: LSMDC 2017 Movie Description submission deadline: October 1st

Call for challenge participation

The workshop organizers are running two challenges, the Large Scale Movie Description and Understanding Challenge (LSMDC) and MovieQA, based on movies and associated data sources, with the following tasks:

  • LSMDC
    • Movie description
    • Movie annotation and retrieval
    • Movie fill-in-the-blank task
  • MovieQA
    • Question-answering in movies
    • Video retrieval based on plot synopses sentences

We require a short report detailing your method (at least one paragraph). We encourage you to write up your method as a paper submission (details below), but this is not required.

Call for paper submission

The goal of this workshop is to bring together researchers working on diverse topics in the area of multimodal video, story, and language understanding, in order to obtain a better view of existing challenges and new research directions. Possible solution pathways include learning from natural language descriptions, transfer learning, reasoning across long video sequences, understanding plots, recognizing characters, taking audio and speech into account, and in general, developing better models and algorithms for understanding video and multimodal data.

We aim to provide a forum around the topic of the challenges, and invite exciting submissions on topics that include, but are not limited to:

    • Generating descriptions for videos.
    • Generating Audio Descriptions for movies.
    • Multi-sentence descriptions for images and videos.
    • Video retrieval given natural sentence description.
    • Visual question answering for video.
    • Fill-in-the-blank tasks.
    • Language as supervision for video understanding.
    • Using textual descriptions as weak supervision for video understanding.
    • Using dialogs and/or audio for video understanding.
    • Understanding video and plots.
    • Recognizing characters in TV series / movies.
    • Novel tasks with Audio Descriptions / DVS dataset.
    • Story understanding and telling.
    • Deep learning and other learning approaches for video understanding, description, story modeling.
    • Analysis of challenge datasets and approaches.

We welcome submissions of novel work or work that has recently been accepted elsewhere (e.g., at ICCV).

Submissions should be 4-8 pages plus references. Please send your work to lsmdc2015+17 at gmail.com

Presentation

Accepted papers will be presented as posters; some may be selected for spotlight or oral presentations.

The challenge winners will be asked to present their work as both an oral presentation and a poster. Authors may request that only the title and authors' names be published on the workshop page (by default, submitted papers are made public).

Contact

Please direct your inquiries to: lsmdc2015+17 at gmail.com

Organizers

Anna Rohrbach

Max Planck Institute for Informatics

Makarand Tapaswi

University of Toronto

Atousa Torabi

Disney Research

Tegan Maharaj

École Polytechnique de Montréal

Marcus Rohrbach

Facebook AI Research

Sanja Fidler

University of Toronto

Christopher Pal

École Polytechnique de Montréal

Bernt Schiele

Max Planck Institute for Informatics