Moving to Machine Learning after a PhD in Stochastic Analysis, I initially approached the field from a theoretical perspective: much of my research focuses on Statistical Learning Theory, which, in simple terms, concerns itself with estimating the number of i.i.d. observations required to train a machine learning model up to a certain accuracy. I maintain a strong interest in understanding why neural networks behave the way they do and in theoretically explaining their surprising generalization abilities. I proved the first norm-based generalization bounds for CNNs that take the convolutional structure into account, as well as bounds with improved dependence on the number of classes/labels in various multi-class and multi-label settings. In many cases, theoretical study yields insights into the workings of machine learning methods, or even theoretically motivated algorithmic improvements. Thus, I see a natural connection between the applied and theoretical parts of my work.
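To give a rough sense of the flavor of such results (a textbook-style uniform convergence bound, not a statement of my own CNN results): for a class $\mathcal{F}$ of predictors and a loss $\ell$ taking values in $[0,1]$, with probability at least $1-\delta$ over the draw of an i.i.d. sample of size $n$,

\[
\mathbb{E}\big[\ell(f(x),y)\big] \;\le\; \frac{1}{n}\sum_{i=1}^{n}\ell(f(x_i),y_i) \;+\; 2\,\widehat{\mathfrak{R}}_n(\ell\circ\mathcal{F}) \;+\; 3\sqrt{\frac{\log(2/\delta)}{2n}} \qquad \text{for all } f\in\mathcal{F},
\]

where $\widehat{\mathfrak{R}}_n$ denotes the empirical Rademacher complexity. Norm-based bounds of the kind mentioned above then control $\widehat{\mathfrak{R}}_n$ in terms of the norms of the network's weight matrices and, in the convolutional case, of the filters themselves.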
In the last couple of years, I have developed a strong interest in Matrix Completion and Recommender Systems, which offer a rare and impressive combination of fascinating unsolved mathematical problems and rich application areas. Matrix completion is the problem of completing a partially observed matrix under some structural assumption, typically low rank. Such methods can be applied in any field where the observed variable (the value of the matrix at a given entry) depends on a combination of two variables (the row and the column), each chosen from a moderately sized finite set (the set of rows or columns), but where observing all of the combinations would be prohibitively expensive or impractical. Recommender Systems are a natural application, where the rows and columns traditionally correspond to users (people) and items (movies, songs, books, wines, doctors, job advertisements to respond to, etc.). However, such methods have also been successfully applied to drug interaction prediction, the prediction of activity coefficients of solvents in chemical engineering, and even personalized medicine. There are many optimization methods that indirectly constrain the space where the ground truth matrix is assumed to lie (explicit rank restriction, nuclear norm regularization, max norm regularization, to name but a few), and each comes with its own challenges regarding sample complexity, optimization guarantees, and unique areas of practical applicability.
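As a concrete illustration, here is a minimal numpy sketch of the first of these approaches (explicit rank restriction), solved by alternating ridge regressions over the observed entries. The function name and hyperparameters are illustrative, not taken from any specific paper of mine.

import numpy as np

def als_complete(M, mask, rank=5, reg=0.1, iters=50):
    # Alternating least squares for rank-constrained matrix completion:
    # fit U, V so that U @ V.T matches M on the observed entries (mask == True).
    n, m = M.shape
    rng = np.random.default_rng(0)
    U = rng.normal(size=(n, rank))
    V = rng.normal(size=(m, rank))
    ridge = reg * np.eye(rank)
    for _ in range(iters):
        for i in range(n):                      # update row factors given V
            obs = mask[i]
            if obs.any():
                Vo = V[obs]
                U[i] = np.linalg.solve(Vo.T @ Vo + ridge, Vo.T @ M[i, obs])
        for j in range(m):                      # update column factors given U
            obs = mask[:, j]
            if obs.any():
                Uo = U[obs]
                V[j] = np.linalg.solve(Uo.T @ Uo + ridge, Uo.T @ M[obs, j])
    return U @ V.T                              # completed low-rank estimate

Nuclear norm and max norm methods replace the hard rank constraint with a convex surrogate, trading optimization guarantees against sample complexity in exactly the ways the theory aims to quantify.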
In my NeurIPS 2021 paper "Fine-grained Generalization Analysis of Inductive Matrix Completion", I provided distribution-free sample complexity guarantees for Inductive Matrix Completion (IMC) which bridge the gap between the theoretical study of matrix completion and IMC. I also provided a modified regularizer which brings the rate down to the one known for the uniform sampling case. More recently, in my ICML 2024 paper "Generalisation Analysis of Deep Non-linear Matrix Completion", I proved generalization bounds for matrix completion with Schatten-p quasi-norm constraints in both the uniform sampling and arbitrary sampling regimes. The bounds match existing results for the nuclear norm (p=1) in the i.i.d. setting, and in the arbitrary sampling regime, they improve as the Schatten index decreases. Thanks to powerful recent results from the study of deep linear networks, Schatten-p quasi-norm constraints are known to be equivalent to the popular "deep matrix factorization" framework. Thus, the results provide arguably the first justification for the positive effect of increased depth in a matrix completion context. In addition, the same paper contains several extensions to nonlinear models, the simplest of which is FRMC, a model that involves a latent low-rank matrix and a single entry-wise rescaling function. The proofs rely on a "multi-class chaining" argument inspired by Dudley's entropy theorem that may be of independent interest.
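Concretely, the correspondence takes the following standard form (stated here as an illustration; precise conditions are in the deep matrix factorization literature): for a depth-$L$ factorization and $p = 2/L$,

\[
\|W\|_{S_{2/L}}^{2/L} \;=\; \min_{W_1 W_2 \cdots W_L \,=\, W} \;\frac{1}{L}\sum_{i=1}^{L}\|W_i\|_F^2,
\]

so penalizing the squared Frobenius norms of the factors, as weight decay does in a deep linear network, implicitly constrains a Schatten quasi-norm whose index $p = 2/L$ decreases with depth. For $L = 2$, this recovers the classical variational characterization of the nuclear norm.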
I am also working on other low-rank methods: the paper "Beyond Smoothness: Incorporating Low-Rank Analysis into Nonparametric Density Estimation" contains some of the first and most comprehensive theoretical results for low-rank density estimation via low-rank histograms, including both bias and variance analyses.
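To convey the basic idea (an illustrative two-dimensional sketch under my own simplifying assumptions, not the estimator analysed in the paper): one can build an ordinary histogram and replace its bin-probability matrix with a low-rank nonnegative approximation, which reduces variance when the underlying density is close to low-rank.

import numpy as np

def lowrank_histogram_2d(x, y, bins=20, rank=2, iters=300):
    # Histogram density estimate whose bin-probability matrix is replaced
    # by a rank-`rank` nonnegative approximation, computed with the
    # classical multiplicative updates of Lee & Seung for NMF.
    H, xedges, yedges = np.histogram2d(x, y, bins=bins)
    P = H / H.sum()                          # empirical bin probabilities
    rng = np.random.default_rng(0)
    W = rng.random((bins, rank))
    V = rng.random((rank, bins))
    for _ in range(iters):                   # minimize ||P - W V||_F
        W *= (P @ V.T) / (W @ V @ V.T + 1e-12)
        V *= (W.T @ P) / (W.T @ W @ V + 1e-12)
    Q = W @ V
    return Q / Q.sum(), xedges, yedges       # renormalized probability table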
For a more detailed description of each paper, see my Research Statement on the SMU website.
A. Ledent, P. Kasalicky, R. Alves and H. Lauw. Conv4Rec: A 1-by-1 Convolutional AutoEncoder for User Profiling through Joint Analysis of Implicit and Explicit Feedbacks. IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2025 (accepted).
J. Poernomo, N. Lee Tan, R. Alves and A. Ledent. Probabilistic Modeling, Learnability and Uncertainty Estimation for Interaction Prediction in Movie Rating Datasets. ACM Conference on Recommender Systems (RecSys, Late Breaking Results), 2025 (accepted).
T. Zmeskalova, A. Ledent, M. Spisak, P. Kordik and R. Alves. Recurrent Autoregressive Linear Model for Next-Basket Recommendation. ACM Conference on Recommender Systems (RecSys, Late Breaking Results), 2025 (accepted).
A. Ledent and R. Alves. Generalisation Analysis of Deep Non-linear Matrix Completion. Proceedings of the Forty-first International Conference on Machine Learning (ICML), 2024.
R. Alves*, A. Ledent*, R. Assunção, P. Vaz-De-Melo and M. Kloft. Unraveling the Dynamics of Stable and Curious Audiences in Web Systems. WWW '24: Proceedings of the ACM Web Conference 2024.
R. Alves and A. Ledent. Context-Aware REpresentation: Jointly Learning Item Features and Selection From Triplets. IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2024.
R. Lopes, R. Alves, A. Ledent, R. L. T. Santos and M. Kloft. Recommendations with minimum exposure guarantees: A post-processing framework. Expert Systems with Applications (ESWA), 2024.
P. Kasalicky, A. Ledent and R. Alves. Uncertainty-adjusted Inductive Matrix Completion with Graph Neural Networks. ACM Conference on Recommender Systems (RecSys, Late Breaking Results), 2023.
A. Ledent, R. Alves, Y. Lei, Y. Guermeur and M. Kloft. Generalization Bounds for Inductive Matrix Completion in Low-noise Settings. Proceedings of the AAAI Conference on Artificial Intelligence (oral), 2023.
A. Ledent, R. Alves, Y. Lei and M. Kloft. Fine-grained Generalization Analysis of Inductive Matrix Completion. Advances in Neural Information Processing Systems (NeurIPS), 34, 2021.
A. Ledent*, R. Alves*, and M. Kloft. Orthogonal Inductive Matrix Completion. IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2021.
R. Alves*, A. Ledent*, and M. Kloft. Uncertainty-Adjusted Recommendation via Matrix Factorization With Weighted Losses. IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2023.
R. Alves, A. Ledent, and M. Kloft. Burst-induced Multi-Armed Bandit for Learning Recommendation. ACM Conference on Recommender Systems (RecSys), 292-301, 2021.
R. Alves*, A. Ledent*, R. Assunção, and M. Kloft. An Empirical Study of the Discreteness Prior in Low-Rank Matrix Completion. NeurIPS 2020 Preregistration Workshop. Proceedings of Machine Learning Research (PMLR) 148:111-125, 2021.
H. Nong Minh and A. Ledent. Generalization Analysis for Contrastive Representation Learning under Non-IID Settings. Proceedings of the Forty-second International Conference on Machine Learning (ICML), 2025.
H. Nong Minh, A. Ledent, Y. Lei and K.C. Yeaw. Generalization Analysis for Deep Contrastive Representation Learning. Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2025.
W. Mustafa, P. Liznerski, A. Ledent, D. Wagner, P. Wang and M. Kloft. Non-vacuous Generalization Bounds for Adversarial Risk in Stochastic Neural Networks. International Conference on Artificial Intelligence and Statistics (AISTATS), 2024.
R. Vandermeulen and A. Ledent. Beyond Smoothness: Incorporating Low-Rank Analysis into Nonparametric Density Estimation. Advances in Neural Information Processing Systems (NeurIPS), 34, 2021.
A. Ledent, W. Mustafa, Y. Lei and M. Kloft. Norm-based generalisation bounds for convolutional neural networks. Proceedings of the AAAI Conference on Artificial Intelligence, 35(9): 8279-8287, 2021.
L. Wu, A. Ledent, Y. Lei and M. Kloft. Fine-grained Generalization Analysis of Vector-valued Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 35(12): 10338-10346, 2021.
W. Mustafa, Y. Lei, A. Ledent and M. Kloft. Fine-grained Generalization Analysis of Structured Output Prediction. Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence (IJCAI), 2021.
Y. Lei*, A. Ledent*, and M. Kloft. Sharper Generalization Bounds for Pairwise Learning. Advances in Neural Information Processing Systems (NeurIPS), 33: 21236-21246, 2020.
A. Ledent* and P. Liu*. Explainable Neural Networks with Guarantees: A Sparse Estimation Approach. Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2025.
S. Varshneya, A. Ledent, P. Liznerski, A. Balinskyy, P. Mehta, W. Mustafa and M. Kloft. Interpretable Tensor Fusion. Proceedings of the Thirty-third International Joint Conference on Artificial Intelligence (IJCAI), 2024.
S. Varshneya, A. Ledent, R. A. Vandermeulen, Y. Lei, M. Enders, D. Borth, M. Kloft. Learning Interpretable Concept Groups in CNNs. Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence (IJCAI), 2021.
L. Zhou, A. Ledent, Q. Hu, T. Liu, J. Zhang and M. Kloft. Model Uncertainty Guides Visual Object Tracking. Proceedings of the AAAI Conference on Artificial Intelligence, 35(4): 3581-3589, 2021.
* = Equal contribution.