I am currently Maître de Conférences (~Associate Professor) at Université du Littoral Côte d'Opale (ULCO). I have a multidisciplinary background: I studied Mathematics, Bioinformatics and Computer Science at Université Paris-Sud (now Paris-Saclay), where I completed my Master's in 2011. I defended my Ph.D. in 2015 under the supervision of Nikolaus Hansen and Anne Auger, in the field of black-box optimization.
Following my doctoral work, I pursued international collaborations, first at Shinshu University (Japan) with Youhei Akimoto, and subsequently as a Postdoctoral Fellow at KTH (Sweden) with Jimmy Olsson (2016-2018). Since 2019, I have held my current position at ULCO. As of March 2025, I am an associate member of the RandOpt Inria team at École Polytechnique, and split my time between teaching at ULCO and research at Polytechnique.
My research is centred on algorithms that learn and adapt to the geometry and characteristics of a problem online — primarily in the context of numerical black-box optimization, but also for adaptive MCMC and machine learning.
I am interested in both theory (mainly stochastic processes and information geometry) and applications, especially for building a good understanding of the problems I work on. I want this understanding to be useful, whether in guiding modelling choices (how to encode prior knowledge to make the problem simpler) or in algorithm design (helping the algorithm autonomously learn a good description of the problem, or avoid various pitfalls). To this end, I particularly care about methodology and performance measures, so that perceived improvements are not merely overfitting a particular problem or scenario, but will generalize to the complexity of the real world.
My work in numerical optimisation focuses on variants of CMA-ES, a derivative-free optimisation algorithm which adapts its sampling distribution online to fit the local landscape of the objective function. I focus on "difficult" settings (ill-conditioning, noise, constraints, or multiobjective optimisation) where CMA-ES provides a robust baseline. My theoretical work in this setting analyses the stability and dynamics of the components of CMA-ES in multiple scenarios. Part of this work is to develop new mathematical tools that make such analyses simpler (or even feasible): finding the right viewpoint and language in which proofs become simple and natural.
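To give a flavour of the adaptation idea, here is a minimal Python sketch of a (μ/μ_w, λ) evolution strategy with cumulative step-size adaptation, the mechanism analysed in several of the papers below. The function and parameter names are illustrative, the damping constant is simplified, and the full covariance matrix update of CMA-ES is omitted; this is a toy sketch, not the algorithm's reference implementation.

```python
import numpy as np

def csa_es(f, x0, sigma0=1.0, iterations=300, lam=10, seed=0):
    """Minimal (mu/mu_w, lambda)-ES with cumulative step-size
    adaptation (CSA). Only the mean and step size are adapted;
    CMA-ES would additionally learn a covariance matrix."""
    rng = np.random.default_rng(seed)
    mean = np.asarray(x0, dtype=float)
    n = mean.size
    sigma = sigma0
    mu = lam // 2
    # log-linear recombination weights over the mu best samples
    weights = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
    weights /= weights.sum()
    mu_eff = 1.0 / np.sum(weights ** 2)
    c_sigma = (mu_eff + 2) / (n + mu_eff + 5)
    d_sigma = 1 + c_sigma                      # simplified damping
    chi_n = np.sqrt(n) * (1 - 1/(4*n) + 1/(21*n**2))  # ~E||N(0,I)||
    path = np.zeros(n)
    for _ in range(iterations):
        z = rng.standard_normal((lam, n))      # sample lam offspring
        candidates = mean + sigma * z
        order = np.argsort([f(c) for c in candidates])
        z_sel = weights @ z[order[:mu]]        # weighted recombination
        mean = mean + sigma * z_sel
        # cumulate the selected steps; long path => increase sigma
        path = (1 - c_sigma) * path \
            + np.sqrt(c_sigma * (2 - c_sigma) * mu_eff) * z_sel
        sigma *= np.exp((c_sigma / d_sigma)
                        * (np.linalg.norm(path) / chi_n - 1))
    return mean, sigma
```

On a smooth unimodal function such as the sphere, the path length stays close to that of a random walk near the optimum, which stabilizes the step size; on a linear function the cumulated path grows and sigma diverges, the behaviour studied in the PPSN 2012 paper below.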
Recent work includes improving the update rules of optimisation algorithms through reinforcement learning, and transferring the adaptation mechanisms of CMA-ES to adaptive MCMC, proposing in that context new performance measures and new mathematical tools that make the ergodicity of the adaptive process easier to prove.
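To make the connection to adaptive MCMC concrete, here is a minimal sketch of a Haario-style adaptive Metropolis sampler, in which the Gaussian proposal covariance is estimated online from the chain history, much as CMA-ES adapts its sampling distribution. The scaling and regularization constants are standard textbook choices, and this toy version ignores the conditions (e.g. diminishing adaptation) that an actual ergodicity proof must verify.

```python
import numpy as np

def adaptive_metropolis(log_pi, x0, n_steps=5000, seed=0):
    """Minimal adaptive Metropolis sketch: a random-walk Metropolis
    sampler whose proposal covariance tracks the empirical covariance
    of the chain so far."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    d = x.size
    mean = x.copy()
    cov = np.eye(d)
    eps = 1e-6 * np.eye(d)    # regularization keeps the proposal non-degenerate
    scale = 2.38 ** 2 / d     # classical optimal-scaling constant
    lp = log_pi(x)
    samples = []
    for t in range(1, n_steps + 1):
        prop = rng.multivariate_normal(x, scale * cov + eps)
        lp_prop = log_pi(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept/reject
            x, lp = prop, lp_prop
        samples.append(x.copy())
        # recursive update of the running mean and covariance
        delta = x - mean
        mean += delta / (t + 1)
        cov += (np.outer(delta, x - mean) - cov) / (t + 1)
    return np.array(samples)
```

Because the proposal depends on the whole past, the resulting process is no longer a Markov chain, which is exactly why proving its ergodicity requires dedicated tools.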
I am interested in most applications where progress can be made by finding a better way to either model or learn the underlying problem. I enjoy collaborating on applications outside of my field: I have previously worked with mathematician colleagues on automatically constructing counter-examples to conjectures in combinatorics and graph theory, and with industry partners on human detection with recurrent networks. If you are working on a specific case where you think my perspective could be helpful, feel free to reach out.
E-mail: alexandre [dot] chotard [at] univ-littoral [dot] fr
I frequently co-organize or take part in events to discuss science, optimisation and "AI" with non-scientific audiences, ranging from schools and libraries to company seminars and public festivals such as the Fête de la Science.
I am always happy to share about my interests, whether in probability, optimisation, mathematics, machine learning, or the research process itself, so feel free to reach out if you are interested in organizing such an event.
I prefer interactive settings and direct discussion over delivering a monologue in a more traditional slides-based presentation: I find the interactive format livelier, and better at addressing what people are actually interested in. I like to start by giving proper definitions and context, so that we know what we are talking about, before moving on to underlying philosophical considerations. As science does not exist in a vacuum, I deem it important to discuss the associated social context and issues (even when that means admitting I lack expertise on the subject), so you may expect more than a technical discussion.
Lesson material can be found on Moodle, or on whatever platform we agreed on in the first course. Feel free to email me if you have questions about the course (if your question is "what will be on the exam?", my answer is "everything we discussed").
Subjects I have taught, at ~192 hours per year (96 hours in 2025-2026, during which I work half-time):
machine learning (mostly supervised and reinforcement learning, some search algorithms) to students from L3 to M2,
algorithmics, python, C and C++,
theoretical computer science, including some information theory and compression algorithms,
numerical optimization.
Alexandre Chotard (2026). Ergodicity of an Adaptive MCMC Sampler under a Probability Bound. [HAL]
Alexandre Chotard, Anne Auger (2019). Verifiable Conditions for the Irreducibility and Aperiodicity of Markov Chains by Analyzing Underlying Deterministic Models. In Bernoulli. [arXiv]
Alexandre Chotard, Anne Auger, Nikolaus Hansen (2015). Markov Chain Analysis of Cumulative Step-size Adaptation on a Linear Constraint Problem. In Evolutionary Computation Journal. [HAL]
Alexandre Chotard, Anne Auger (2025). On the Robustness of Nelder-Mead to Positive and Negative Noise Outliers with Heavy-Tails on the BBOB Test Suite. GECCO 2025 Companion - Genetic and Evolutionary Computation Conference Companion, Jul 2025, Malaga, Spain. pp.1859-1866. [HAL]
Alexandre Chotard, Anne Auger (2025). On the Robustness of BFGS to Positive and Negative Noise Outliers on the BBOB Test Suite. GECCO 2025 Companion - Genetic and Evolutionary Computation Conference Companion, Jul 2025, Malaga, Spain. pp.1850-1858. [HAL]
Alexandre Chotard, Martin Holena (2014). A Generalized Markov-Chain Modelling Approach to $(1, \lambda)$-ES Linear Optimization. In PPSN XIII, Springer, Lecture Notes in Computer Science, pp.902-911. [HAL]
Alexandre Chotard, Anne Auger, Nikolaus Hansen (2014). Markov Chain Analysis of Evolution Strategies on a Linear Constraint Optimization Problem. IEEE Congress on Evolutionary Computation (CEC) 2014, pp.159-166. [HAL]
Alexandre Chotard, Anne Auger, Nikolaus Hansen (2012). Cumulative Step-size Adaptation on Linear Functions. In PPSN XII, Springer, Lecture Notes in Computer Science, pp.72-81. [HAL]
Alexandre Chotard, Martin Holena (2014). A Generalized Markov-Chain Modelling Approach to $(1, \lambda)$-ES Linear Optimization: Technical report. Includes proofs of the PPSN XIII paper. [HAL]
Alexandre Chotard, Anne Auger, Nikolaus Hansen (2012). Cumulative Step-size Adaptation on Linear Functions: Technical report. Includes proofs of the PPSN XII paper, and some developments. [HAL]
Alexandre Chotard (2015). Markov Chain Analysis of Evolution Strategies. Work done under the supervision of Nikolaus Hansen and Anne Auger, in team TAO, at Inria Saclay, Université Paris-Sud. [TEL][PDF]