stochastic optimization, gradient descent, and its applications to machine learning
scheduling, resource allocation, stochastic networks
reinforcement learning, policy-gradient methods, Markov decision processes
random matrices applied to optimization
Brief description of my research:
I study algorithms and models used in operations research and machine learning that learn from data. These arise, for example, in training neural networks, detecting communities in large networks, allocating resources in supply chains, and, more generally, in decision-making under uncertainty. My research examines the performance of algorithms that learn an optimal model or policy (depending on the application), and how structural assumptions about the problem influence both the algorithm and its outcome.
Lately, I have been particularly interested in Reinforcement Learning (RL). Commonly used RL algorithms can adapt to almost any environment, but they suffer from large sample complexity requirements; that is, they need a lot of data to learn. Understanding how an agent can use key prior feature information about the environment as part of the learning process could improve current RL approaches by reducing this need for data.
Below you can find my list of publications. Alternatively, please refer to my Google Scholar page.
7. The suboptimality ratio of projective measurements restricted to low-rank subspaces, preprint, (2024)
6. Score-Aware Policy-Gradient Methods and Performance Guarantees using Local Lyapunov Stability (with Céline Comte, Matthieu Jonckheere, and Jaron Sanders), Journal of Machine Learning Research, 26(132), 1-74, (2024)
5. Detection and evaluation of clusters within sequential data (with Alexander Van Werde, Gianluca Kosmella, and Jaron Sanders), Data Mining and Knowledge Discovery, 39(6), 1-30, (2024)
4. Spectral norm bounds for Markov chain random matrices (with Jaron Sanders), Stochastic Processes and their Applications, 158, 134-169, (2023)
3. Universal Approximation for Dropout Neural Networks (with Oxana A. Manita, Mark A. Peletier, Jacobus W. Portegies, and Jaron Sanders), Journal of Machine Learning Research, 23(19), 1-46, (2022)
2. Asymptotic convergence rate of dropout on shallow linear neural networks (with Jaron Sanders), Proceedings of the ACM on Measurement and Analysis of Computing Systems, 6(2), 1-53, (2022)
1. Almost sure convergence of dropout algorithms for neural networks (with Jaron Sanders), accepted for publication in Journal of Machine Learning Research, (2025)
Asymptotics of stochastic learning in structured networks, PhD Thesis, Eindhoven University of Technology, (2023)