Call for Papers

Deep video understanding is a challenging task that requires systems to analyze the relationships between the different entities in a video, to reason from known information about other, less explicit information, and to populate a knowledge graph (KG) with all acquired knowledge. To address this task, a system should take into consideration all available modalities (speech, image/video, and in some cases text). The aim of this workshop is to push the limits of multimodal extraction, fusion, and analysis techniques for holistically analyzing long-duration videos and extracting knowledge that can be used to answer different types of queries. The target knowledge includes both visual and non-visual elements. As videos and multimedia data become ever more prevalent across domains, the research approaches and techniques fostered by this workshop will only grow in relevance in the coming years.

We invite contributions related (but not limited) to the following topics, applied either to the available DVU dataset or to any external dataset:

    • Multimodal feature extraction for movies and extended video

    • Multimodal fusion of computer vision, text/language processing, and audio for extended video/movie analysis

    • Machine learning methods for movie-based multimodal interaction

    • Sentiment analysis and multimodal dialogue modeling for movies

    • Knowledge graph generation, analysis, and extraction for movies and extended video

Submission

We invite submissions of long papers (up to 8 pages excluding references), short papers (up to 4 pages excluding references), and poster abstracts (up to 3 pages excluding references), formatted according to the ACM template available here (https://www.acm.org/binaries/content/assets/publications/consolidated-tex-template/acmart-master.zip) or directly from Overleaf (www.overleaf.com/gallery/tagged/acm-official#.WOuOk2e1taQ). Reviewing is single-blind, i.e., submissions do not need to be anonymized. Workshop papers will be indexed by the ACM Digital Library in the adjunct proceedings.

Papers submitted to ICMI 2022 must not have been published previously. A paper is considered to have been published previously if it has appeared in a peer-reviewed journal, magazine, book, or meeting proceedings that is reliably and permanently available to non-attendees in print or electronic form, regardless of the language of that publication. A paper substantially similar in content to one submitted to ICMI 2022 must not be simultaneously under consideration for another conference or workshop.

ICMI 2022 does not consider a paper on arXiv.org as a dual submission.

Paper submissions will be handled electronically via EasyChair: https://easychair.org/conferences/?conf=dvu2022

Important Dates