MINT

Maximum Information keypoiNTs (MINT):

Entropy-driven Unsupervised Keypoint Representation Learning in Videos 

(ICML 2023)

Ali Younes, Simone Schaub-Meyer, Georgia Chalvatzaki

Abstract:
Extracting informative representations from videos is fundamental for effectively learning various downstream tasks. We present a novel approach for unsupervised learning of meaningful representations from videos, leveraging the concept of image spatial entropy (ISE), which quantifies the per-pixel information in an image. We argue that the local entropy of pixel neighborhoods and its temporal evolution create valuable intrinsic supervisory signals for learning prominent features. Building on this idea, we abstract visual features into a concise representation of keypoints that act as dynamic information transmitters, and design a deep learning model that learns, purely unsupervised, spatially and temporally consistent representations directly from video frames. Two original information-theoretic losses, computed from local entropy, guide our model toward consistent keypoint representations: one loss maximizes the spatial information covered by the keypoints, and the other optimizes the keypoints' information transportation over time. We compare our keypoint representation to strong baselines on various downstream tasks, e.g., learning object dynamics. Our empirical results show superior performance for our information-driven keypoints, which resolve challenges such as attending to both static and dynamic objects and handling objects that abruptly enter and leave the scene.
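
As an illustration of the image spatial entropy (ISE) signal described above, the following Python sketch computes a per-pixel local-entropy map from intensity histograms of small pixel neighborhoods. This is a minimal, unoptimized reference, not the authors' implementation; the window size, bin count, and histogram-based estimator are illustrative assumptions.

    import numpy as np

    def local_entropy(gray, window=9, bins=16):
        """Per-pixel Shannon entropy of intensity histograms computed
        over a (window x window) neighborhood. `gray` is 2D in [0, 1]."""
        h, w = gray.shape
        pad = window // 2
        padded = np.pad(gray, pad, mode="reflect")
        # Quantize intensities into `bins` discrete levels.
        q = np.minimum((padded * bins).astype(int), bins - 1)
        ent = np.zeros((h, w))
        for i in range(h):
            for j in range(w):
                patch = q[i:i + window, j:j + window]
                counts = np.bincount(patch.ravel(), minlength=bins)
                p = counts / counts.sum()
                p = p[p > 0]
                ent[i, j] = -(p * np.log2(p)).sum()
        return ent

    # High-entropy pixels mark information-rich regions of the frame.
    frame = np.random.rand(64, 64)
    entropy_map = local_entropy(frame)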

Information maximization
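
The first loss rewards keypoints for covering as much of the frame's spatial information as possible. The sketch below shows one way such a term could look, assuming keypoints given as normalized (x, y) coordinates, Gaussian coverage masks of width sigma, and a precomputed entropy map; the function name info_max_loss and all of these modeling choices are illustrative, not the paper's exact formulation.

    import torch

    def info_max_loss(keypoints, entropy_map, sigma=0.05):
        """keypoints: (K, 2) normalized (x, y) in [0, 1]^2;
        entropy_map: (H, W) local-entropy map of the frame."""
        H, W = entropy_map.shape
        ys = torch.linspace(0, 1, H).view(H, 1)
        xs = torch.linspace(0, 1, W).view(1, W)
        # Squared distance of every pixel to every keypoint: (K, H, W).
        d2 = (xs - keypoints[:, 0].view(-1, 1, 1)) ** 2 \
           + (ys - keypoints[:, 1].view(-1, 1, 1)) ** 2
        masks = torch.exp(-d2 / (2 * sigma ** 2))
        coverage = masks.amax(dim=0)                 # soft union of keypoint regions
        # Minimizing this maximizes the fraction of entropy the keypoints cover.
        return -(coverage * entropy_map).sum() / entropy_map.sum()

    kps = torch.rand(8, 2, requires_grad=True)       # 8 random keypoints
    loss = info_max_loss(kps, torch.rand(64, 64))
    loss.backward()                                  # gradients pull keypoints toward information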

Information transportation
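
The second loss ties the keypoints' motion to the flow of information between consecutive frames. The sketch below is one hedged interpretation: entropy that appears or disappears between frames should be accounted for by the keypoints at their old and new locations, so each keypoint "transports" information over time. The difference-map formulation and the gaussian_coverage helper are assumptions for illustration; the paper's actual transportation objective may differ.

    import torch

    def gaussian_coverage(kps, H, W, sigma=0.05):
        """Soft union of Gaussian masks centered at normalized keypoints."""
        ys = torch.linspace(0, 1, H).view(H, 1)
        xs = torch.linspace(0, 1, W).view(1, W)
        d2 = (xs - kps[:, 0].view(-1, 1, 1)) ** 2 \
           + (ys - kps[:, 1].view(-1, 1, 1)) ** 2
        return torch.exp(-d2 / (2 * sigma ** 2)).amax(dim=0)

    def info_transport_loss(kps_prev, kps_curr, ent_prev, ent_curr, sigma=0.05):
        """kps_*: (K, 2) matched keypoints at t-1 and t;
        ent_*: (H, W) local-entropy maps of the two frames."""
        H, W = ent_curr.shape
        change = (ent_curr - ent_prev).abs()         # information that moved between frames
        cover = (gaussian_coverage(kps_prev, H, W, sigma)
                 + gaussian_coverage(kps_curr, H, W, sigma)).clamp(max=1.0)
        # Fraction of the moved information that no keypoint accounts for.
        return (change * (1 - cover)).sum() / (change.sum() + 1e-8)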

Video results for different downstream tasks:

Object detection and tracking, and learning object dynamics, on the CLEVRER dataset.

Object discovery in realistic scenes on the MIME and SIMITATE datasets.

Imitation learning on the MAGICAL benchmark.