Historically, my mathematical research has often fallen within the scope of differential equations in their various flavours: ordinary, partial, deterministic or stochastic; and, more particularly, within the framework of rough paths and the theory of regularity structures. More recently, this research has been guided by a desire to understand machine learning algorithms and their properties mathematically. Of particular interest, and of particular relevance to my accumulated experience, are machine learning techniques tailored to streaming data, such as recurrent neural networks (RNNs). The data streams treated by such networks arise naturally in many domains, such as audio and video signal processing and financial time series.

Despite the empirical success of RNNs and their many variants (long short-term memory networks (LSTMs), gated recurrent units (GRUs), etc.), several fundamental mathematical questions about their functioning remain open. These include the type of information retained by RNNs, learning guarantees in terms of the available data and the architectural complexity, and, crucially, classical training issues such as instability, non-convergence, and catastrophic forgetting.
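For concreteness, a minimal (Elman-type) recurrence underlying these architectures, written here only as a reference point and with purely illustrative notation, reads
\[
h_t = \sigma\bigl(W_h h_{t-1} + W_x x_t + b\bigr), \qquad y_t = \varphi\bigl(W_y h_t + c\bigr),
\]
where $x_t$ is the input at time $t$, $h_t$ the hidden state, $\sigma$ and $\varphi$ activation functions, and $W_h$, $W_x$, $W_y$, $b$, $c$ trainable parameters. Asking what the state $h_t$ retains about the past stream $(x_1, \dots, x_t)$ is one way of phrasing the first question above.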

My research aims to advance the understanding of RNNs by approaching these systems through the lenses of statistical learning theory, high-dimensional probability, rough paths and signature-based decompositions. In particular, I am investigating the power and limitations of such architectures by analysing the roles played by their main components: recurrence, stochasticity, and the choices of dimensions, hyperparameters and activation functions. Ultimately, I would like to exploit such mathematical insight to enhance the performance of RNN-related training algorithms and possibly to derive new learning paradigms.
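To fix ideas, the signature mentioned above is, for a sufficiently regular path $X : [0,T] \to \mathbb{R}^d$ (the notation here is again only illustrative), the collection of iterated integrals
\[
S(X)_{0,T} \;=\; \Bigl(1,\; \int_{0<t_1<T} dX_{t_1},\; \int_{0<t_1<t_2<T} dX_{t_1} \otimes dX_{t_2},\; \dots \Bigr),
\]
a graded summary of the stream that is faithful up to a natural equivalence, and hence a natural yardstick against which to measure the information retained by a recurrent architecture.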