Reading Groups

REINFORCEMENT LEARNING

Current chairs: Antoine Moulin (ES/UK), Luca Viano (CH/ES)

The goal is to present and discuss reinforcement learning papers: present the general idea and the most important takeaways, then discuss the assumptions and limitations and compare the work to related approaches. Papers can be about deep RL or RL theory, depending on the interests of the participants.


Slack channel: #rg-reinforcement-learning


HUMAN-CENTRIC MACHINE LEARNING (HCML)

Current chairs: Piera Riccio (ES), Adrián Arnaiz-Rodríguez (ES), Gergely Németh (ES), Aditya Gulati (ES)

The HCML reading group brings together researchers and students interested in both gaining a broad view of the topic and diving deeply into it. By reading papers on different topics within HCML, and by discussing new problem set-ups, alternative approaches, and sources of bias, we aim to build a broad understanding of how algorithmic and human decisions influence each other.


Slack channel: #rg-human-centric-ml


MATHEMATICS OF DEEP LEARNING

Current chairs: Linara Adilova (DE/CH), Sidak Pal Singh (CH/DE), Oishi Deb (UK/CH)

In this group we discuss theoretical research directions on state-of-the-art neural networks, aiming to explain the mechanisms behind training, generalization, and the choice of architecture and initialization. The reading group is not restricted to any particular application, so a variety of settings can be considered, ranging from purely theoretical linear networks and binary classification to state-of-the-art transformer networks for natural language processing. Our main goal is to keep up with the most advanced research in understanding the mathematics of deep neural networks.


Slack channel: #rg-mathematics-dl


REPRESENTATION LEARNING

Current chairs: Artur Szałata (DE), Julia Hindel (DE), Alejandro Tejada (DE)

Representation learning is concerned with learning representations of data that make it easier to extract useful information when building classifiers or other predictors. In the case of probabilistic models, a good representation is often one that captures the posterior distribution of the underlying explanatory factors for the observed input (from https://arxiv.org/pdf/1206.5538.pdf). Here we focus on unsupervised and self-supervised representation learning. The purpose of the reading group is to meet every month and discuss papers on related topics such as disentanglement, interpretability, manifold learning, and applications. We may invite the authors of a paper of interest to speak, or send them the questions we come up with. The goal is to familiarize ourselves with the literature and stay up to date on the latest developments in the field.


Slack channel: #rg-representation-learning