Deep video understanding is a difficult task that requires systems to analyze and understand the relationships between different entities in a video, to reason from known information to other, more hidden information, and to populate a knowledge graph (KG) with all acquired knowledge. To address this task, a system should take into consideration all available modalities (speech, image/video, and in some cases text). The aim of this new challenge is to push the limits of multimodal extraction, fusion, and analysis techniques in order to analyze long-duration videos holistically and extract knowledge useful for answering different types of queries. The target knowledge includes both visual and non-visual elements. As video and multimedia data become ever more widespread across domains, the research, approaches, and techniques this Grand Challenge aims to foster will remain highly relevant in the coming years.
Interested participants are invited to apply their approaches and methods to an extended novel Deep Video Understanding (DVU) dataset made available by the challenge organizers. It includes the 10 Creative Commons movies from the 2020 version of this challenge (HLVU), supplemented with the Land Girls TV series, licensed for use in this challenge by the BBC, and additional Creative Commons movies added for the 2021 challenge. The dataset will be annotated by human assessors, and final ground truth, at both the overall movie level (ontology of relations, entities, actions, and events; knowledge graph; and names and images of all main characters) and the individual scene level (ontology of locations, people/entities, their attributes, and the interactions between them), will be provided for 50% of the dataset to participating researchers for training and development of their systems. The organizers will support evaluation and scoring for a set of main query types, at the overall movie level and at the individual scene level, distributed with the dataset (please refer to the dataset webpage for more details):
Example Question Types at the Overall Movie Level:
Multiple-choice question answering on part of the Knowledge Graph for selected movies.
Possible-path analysis between persons/entities of interest in a Knowledge Graph extracted from selected movies (a minimal sketch follows this list).
Fill in the Graph Space - Given a partial graph, systems will be asked to fill in the graph space.
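To make the movie-level KG queries more concrete, the following minimal Python sketch illustrates the path-analysis query type. The graph, entity names, and relation labels here are hypothetical illustrations only, not the DVU ontology; real queries follow the formats distributed with the dataset.

```python
import networkx as nx

# Toy knowledge graph with hypothetical characters and relations;
# real DVU graphs use the ontology of relations shipped with the dataset.
kg = nx.MultiDiGraph()
kg.add_edge("Alice", "Bob", relation="sibling_of")
kg.add_edge("Bob", "Carol", relation="works_for")
kg.add_edge("Alice", "Carol", relation="friend_of")

def relation_paths(graph, source, target, cutoff=3):
    """Yield every simple path from source to target, annotated
    with the relation label(s) on each hop."""
    for path in nx.all_simple_paths(graph, source, target, cutoff=cutoff):
        hops = []
        for u, v in zip(path, path[1:]):
            labels = sorted(d["relation"] for d in graph[u][v].values())
            hops.append((u, labels, v))
        yield hops

# Prints both possible paths from Alice to Carol:
# the direct friend_of edge and the sibling_of -> works_for chain.
for p in relation_paths(kg, "Alice", "Carol"):
    print(p)
```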
Example Question Types at the Individual Scene Level:
Find next or previous interaction, given two people, a specific scene, and the interaction between them.
Find a unique scene given a set of interactions and a scene list (see the sketch after this list).
Fill in the Graph Space - Given a partial graph for a scene, systems will be asked to fill in the graph space.
Match selected scenes against a set of scene descriptions written in natural language.
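As a rough illustration of the "find a unique scene" query type, the sketch below matches a set of queried interactions against per-scene annotations. The data structure and interaction labels are hypothetical; the actual annotation schema is defined by the dataset's DTD and readme files.

```python
# Hypothetical per-scene interaction annotations, keyed by scene ID.
scene_interactions = {
    "scene_03": {("Alice", "talks_to", "Bob"), ("Bob", "hugs", "Alice")},
    "scene_07": {("Alice", "talks_to", "Bob")},
    "scene_12": {("Carol", "argues_with", "Bob")},
}

def find_unique_scene(query, scenes):
    """Return the single scene whose interactions contain all queried
    interactions, or None if no scene (or more than one) matches."""
    matches = [sid for sid, inter in scenes.items() if query <= inter]
    return matches[0] if len(matches) == 1 else None

print(find_unique_scene({("Bob", "hugs", "Alice")}, scene_interactions))
# -> scene_03 (the only scene containing that interaction)
```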
Run submission XML files should be emailed directly to Keith Curtis (keith.curtis@nist.gov), with a CC to George Awad (george.awad@nist.gov). Please indicate the ACMMM Grand Challenge in the subject line. Please refer to the Supported Datasets page for the XML sample queries, response files, and DTD to follow when submitting your results.
Grand Challenge papers will go through a single-blind review process (author names and affiliations should be included). Papers are limited to 4 pages in length, plus up to 2 extra pages for references only. Please check the main ACMMM2021 conference website for further details.
Papers should be submitted directly via the main conference submission site: https://cmt3.research.microsoft.com/ACMMM2021
Each submitted Grand Challenge paper should be formatted as a 4-page short paper and will be included in the main conference proceedings.
Complete HLVU annotations for the development and testing data used in 2020 are available: drive.google.com/drive/u/0/folders/1q1Ca0aFJrF9tB8hsw-mrI9d4tzy5wlPZ
DVU development data release: Available now at www-nlpir.nist.gov/projects/trecvid/dvu/training/
Testing dataset release: Available now from https://www-nlpir.nist.gov/projects/trecvid/dvu/testing/ (please see the datasets page for an important announcement)
Testing queries release (phase 1): Available now [all movie-level queries, in addition to 50% of scene-level queries] from Here (a readme file is available). The queries are also available from an alternative location, in case the first link is not working.
Testing queries release (phase 2): Available now [50% of Scene-level queries] from Here (please refer to the readme file)
Run submissions due to organizers (phase 1): July 11, 2021 [Solutions to movie-level queries and 50% of Scene-level queries of phase 1]
Paper submission deadline: July 11, 2021
Run submissions due to organizers (phase 2): July 25, 2021 [Solutions to 50% of Scene-level queries of phase 2]
Results released back to participants: August 1, 2021
Notification to authors: August 1, 2021
Camera-ready submission: August 10, 2021
ACM Multimedia dates: October 20 - 24, 2021