This workshop was held in conjunction with ICCV 2017, Venice, Italy.

The workshop was a big success, thank you for participating!

Date: October 23rd (PM only)
Location: Sala Magnano (Palazzo del Casinò, 2nd floor)

Invited speakers

Antonio Torralba
MIT

Kate Saenko 
Boston University

Andrew Zisserman 
University of Oxford, DeepMind


14:00 Welcome
14:05 Invited talk: Antonio Torralba (MIT)
Learning to See and Hear (slides)
14:30 Movie Description, Retrieval, Fill-In-the-Blank Challenges: introduction & results
Oral: Oliver Nina, Scott Clouse, Alper Yilmaz (pdf)
15:08 Oral: YoungJae Yu, Jongseok Kim, Gunhee Kim (slides)
15:16 Oral: Jianfeng Dong, Shaoli Huang, Duanqing Xu, Dacheng Tao (slides)
15:25 Invited talk: Kate Saenko (Boston University)
Explaining Deep Vision and Language Models (slides)
15:50 Coffee break & Poster Session
16:30 Invited talk: Andrew Zisserman (University of Oxford, DeepMind)
Recognizing characters in TV series / movies revisited
16:55 MovieQA, Plot-based Retrieval: introduction & results (slides)
Oral: Seil Na, Sangho Lee, Gunhee Kim (slides)
17:23 Oral: Kyung-Min Kim, Seong-Ho Choi, Sung-Jae Cho, Shin-Hyung Kim, Byoung-Tak Zhang (slides)
17:35 Invited talk: Ivan Laptev (INRIA)
Human actions: What should we learn?
18:00 Closing Remarks


Important Dates [Deadlines extended!]

Paper submission deadline: September 25th (extended from September 15th)
Paper acceptance notification: October 6th (earlier for papers that have already been submitted).

Challenge submission deadline: October 13th (extended from September 15th)
Exception: LSMDC 2017 Movie Description submission deadline: October 1st

Call for challenge participation

The workshop features two challenges, the Large Scale Movie Description and Understanding Challenge (LSMDC) and MovieQA, based on movies and associated data sources, with the following tasks:

  • LSMDC
    • Movie description
    • Movie annotation and retrieval
    • Movie fill-in-the-blank task
  • MovieQA
    • Question answering in movies
    • Video retrieval based on plot synopsis sentences
We require a short report detailing your method (1 paragraph minimum). We encourage you to write up your method as a paper submission (details below) but this is not required.

Call for paper submission

The goal of this workshop is to bring together researchers working on diverse topics in the area of multimodal video, story, and language understanding, in order to obtain a better view of existing challenges and new research directions. Possible solution pathways include learning from natural language descriptions, transfer learning, reasoning across long video sequences, understanding plots, recognizing characters, taking audio and speech into account, and in general, developing better models and algorithms for understanding video and multimodal data.


We aim to provide a forum around the topics of the challenges and invite exciting submissions on topics that include, but are not limited to:

  • Generating descriptions for videos.

  • Generating Audio Descriptions for movies.

  • Multi-sentence descriptions for images and videos.

  • Video retrieval given natural sentence description.

  • Visual question answering for video.

  • Fill-in-the-blank tasks.

  • Language as supervision for video understanding.

  • Using textual descriptions as weak supervision for video understanding.

  • Using dialogs and/or audio for video understanding.

  • Understanding video and plots.

  • Recognizing characters in TV series / movies.

  • Novel tasks with Audio Descriptions / DVS dataset.

  • Story understanding and telling.

  • Deep learning and other learning approaches for video understanding, description, story modeling.

  • Analysis of challenge datasets and approaches.


We welcome submissions of novel work or work that has recently been accepted elsewhere (e.g., at ICCV).

Submissions should be 4-8 pages plus references. Please send your work to lsmdc2015+17 at


Accepted papers will be presented as posters; some may be selected for spotlight or oral presentations.

The challenge winners will be asked to present their work as both an oral presentation and a poster. Authors may request to have only the title and authors' names published on the workshop page (by default, submitted papers are made public).


Please direct your inquiries to: lsmdc2015+17 at


Organizers

Anna Rohrbach

Max Planck Institute for Informatics

Makarand Tapaswi

University of Toronto

Atousa Torabi

Disney Research

Tegan Maharaj

École Polytechnique de Montréal

Marcus Rohrbach

Facebook AI Research

Sanja Fidler

University of Toronto

Christopher Pal

École Polytechnique de Montréal

Bernt Schiele

Max Planck Institute for Informatics