Manifold Learning (ML) has been the subject of intensive study over the past two decades in the computer vision and machine learning communities. Originally, manifold learning techniques aimed to identify the underlying, usually low-dimensional, structure of data from a set of observations given as high-dimensional vectors. Recent advances in deep learning raise the question of whether data-driven learning techniques can benefit from the theoretical findings of ML studies. This question becomes all the more important once we note that deep learning techniques are notoriously data-hungry and, for the most part, supervised. By contrast, many ML techniques uncover data structure with little supervision. This workshop positions itself at the frontier of this question, asking how classical ML techniques can help deep learning and vice versa, and targets studies and discussions that bridge the gap.

Beyond this, the use of Riemannian geometry to model and tackle various problems in computer vision has recently seen a surge of interest. The benefit of geometric thinking is that, in many applications, data naturally lies on smooth manifolds; distances and similarity measures that respect the geometry of the space therefore lead to better and more accurate modelling. Numerous studies demonstrate the benefits of geometric techniques in analysing images and videos, including face recognition, activity recognition, object detection and classification, biomedical image analysis, and structure-from-motion, to name a few. Besides being mathematically appealing, Riemannian computations based on the geometry of the underlying manifold are often faster and more stable than their classical counterparts.

In this workshop, we will explore the latest developments in machine learning techniques designed to work on, or benefit from, non-linear manifolds. We will also address challenges and future directions related to the application of non-linear geometry and Riemannian manifolds in computer vision and machine learning. The workshop is also an opportunity for cross-disciplinary discussions and collaborations.
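As a purely illustrative example of such geometry-aware measures, the short Python sketch below compares the ordinary Euclidean (Frobenius) distance between two symmetric positive definite (SPD) matrices with the affine-invariant Riemannian geodesic distance d(A, B) = ||log(A^{-1/2} B A^{-1/2})||_F. The covariance descriptors and the helper name airm_distance are assumptions made for illustration only and do not refer to any particular method or toolbox.

    import numpy as np
    from scipy.linalg import inv, logm, sqrtm

    def airm_distance(A, B):
        """Affine-invariant Riemannian distance between SPD matrices:
        d(A, B) = || log(A^{-1/2} B A^{-1/2}) ||_F
        """
        A_inv_sqrt = inv(np.real(sqrtm(A)))       # real part guards against numerical noise
        middle = A_inv_sqrt @ B @ A_inv_sqrt      # congruence transform; still SPD
        return np.linalg.norm(np.real(logm(middle)), "fro")

    # Illustrative manifold-valued data: covariance descriptors of two point sets.
    rng = np.random.default_rng(0)
    A = np.cov(rng.standard_normal((50, 3)), rowvar=False) + 1e-3 * np.eye(3)
    B = np.cov(rng.standard_normal((50, 3)), rowvar=False) + 1e-3 * np.eye(3)

    print("Euclidean (Frobenius) distance:", np.linalg.norm(A - B, "fro"))
    print("Geodesic (affine-invariant) distance:", airm_distance(A, B))

Unlike the Frobenius distance, the geodesic distance is invariant to congruence transformations A -> W A W^T, B -> W B W^T, which is one reason covariance-style descriptors are often compared on the SPD manifold rather than in the ambient Euclidean space.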
Theoretical Advances related to manifold learning:
Dimensionality Reduction (e.g., Locally Linear Embedding, Laplacian Eigenmaps, etc.)
Clustering (e.g., discriminative clustering)
Kernel methods
Metric Learning
Time series on non-linear manifolds
Transfer learning on non-linear manifolds
Generative Models on non-linear manifolds
Subspace Methods (e.g., Subspace clustering)
Advanced Optimization Techniques (constrained and non-convex optimization on non-linear manifolds)
Mathematical Models for learning sequences
Mathematical Models for learning Shapes
Deep learning and non-linear manifolds
Low-rank factorization methods
Applications:
Biometrics
Image/video recognition
Action/activity recognition
Facial expressions recognition
Learning and scene understanding
Medical imaging
Robotics
Other related topics not listed above
Workshop submission deadline: October 17th
Workshop author notification: November 10th
Camera-ready submission: November 15th
Finalized workshop program: December 1st