This course explores the concepts and practical implications of dimensionality reduction, ranging from error minimization, structure preservation, and classical methods to nonlinear, spectral, and divergence-based approaches, with applications to data visualization and classification.
Basics of dimensionality reduction
Principal component analysis (PCA) and its variants
Spectral DR approaches
Divergence-based DR methods
Other methods
Applications: Data visualization and classification
[1] Tripathy, B.K., Sundareswaran, A. and Ghela, S., 2021. Unsupervised learning approaches for dimensionality reduction and data visualization. CRC Press.
[2] Lespinats, S., Colange, B. and Dutykh, D., 2022. Nonlinear Dimensionality Reduction Techniques. Springer International Publishing.
[3] Lee, J.A. and Verleysen, M., 2007. Nonlinear Dimensionality Reduction. New York: Springer.
General course instructions and guidelines
Lectures
Lecture 0: Motivation and course presentation
Lecture 1: Basics of dimensionality reduction
Lecture notes: Sections 2 and 3
Lecture 2: Kernel PCA
Lecture 3: Structure preservation
Lecture 4: Spectral dimensionality reduction: Laplacian Eigenmaps
Lecture 5: Divergence-based dimensionality reduction: t-SNE vs. UMAP
Lecture 6: Interactive dimensionality reduction and applications
Extra lecture: Expectile regression for DR
Assignments