Classical notions of information, such as Shannon entropy, measure the information content of a signal in terms of the frequencies of its symbols. Such notions are very useful for tasks such as data compression. However, they are less useful for tasks such as visual recognition, where the semantic content of the scene is essential. Moreover, classical notions of information are affected by nuisance factors, such as viewpoint and illumination conditions, that are irrelevant to the recognition task.

The goal of this workshop is to bring together researchers in computer vision, machine learning, and information theory to discuss recent progress on defining and computing new notions of information that capture the semantic content of multi-modal data. Topics of interest include (but are not limited to) information-theoretic approaches to scene understanding, representation learning, domain adaptation, and generative adversarial networks, as well as the interplay between information and semantic content.
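To make the first point concrete, the short sketch below (an illustrative addition, not part of the workshop materials) computes Shannon entropy from empirical symbol frequencies. Scrambling a message leaves its entropy unchanged even though its semantic content is destroyed, which is exactly why frequency-based measures are insufficient for recognition tasks.

```python
from collections import Counter
import math

def shannon_entropy(symbols):
    """Entropy in bits of the empirical symbol distribution of a sequence."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Illustrative example: a message and a meaningless permutation of it
# have identical symbol frequencies, hence identical Shannon entropy.
msg = "attack at dawn"
scrambled = "".join(sorted(msg))  # same characters, no semantic content
```

Here `shannon_entropy(msg)` and `shannon_entropy(scrambled)` are equal, even though only the first string carries meaning.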
Invited Speakers
- René Vidal, Herschel Seder Professor of Biomedical Engineering and Director of the Mathematical Institute for Data Science, Johns Hopkins University.
- John Shawe-Taylor, Professor of Computer Science and Director of the Centre for Computational Statistics and Machine Learning (CSML), University College London.
Important Dates
- Extended abstract submission (deadline extended): May 18, 2019 (23:59 PST)
- Notification to authors: May 27, 2019
- Workshop date: June 16, 2019 (Full day, Location: 202 C)
Call for Extended Abstract Submission
- We welcome submissions of extended abstracts describing recent work on topics at the intersection of computer vision, machine learning, and information theory.
- Topics of interest include (but are not limited to) information-theoretic approaches to scene understanding, representation learning, domain adaptation, and generative adversarial networks (GANs), as well as the interplay between information and semantic content.
- Accepted works will be presented at the workshop, either in the poster session or as a contributed talk. No proceedings will be published as part of this workshop.