The reading group
The aim of the reading group is to provide a bi-weekly space to learn about RL and its intersection with LLMs (more on this below). Our current setup is sessions in which a speaker explains a topic in this area, followed by a friendly discussion. The idea is to shape the format of the reading group based on people's preferences and interests!
Topics
The goal of this reading group is to explore the intersection of reinforcement learning and large language models, from foundations to cutting-edge research. We plan to begin with core RL concepts (MDPs, policy gradients, value functions) and LLM fundamentals (transformers, pretraining, scaling laws), then progress to contemporary topics including RLHF and RL approaches to reasoning and planning.
The group is suitable for researchers and students with a basic ML background who are interested in understanding how RL techniques are shaping the next generation of language models. We'll discuss key papers, share insights from recent work, and explore open problems in using RL to enhance model reasoning capabilities.
You can suggest topics here.
Where?
Every other Thursday at 1pm at Huxley 410.
How can I join?
Email me or Sergio Estan Ruiz.
Session 1 (Tom Coates): LLMs 101
Abstract: "What is this transformer thing, anyway?" I will describe what is going on inside a Large Language Model, starting from fundamental definitions.
Session 2 (Tom Coates): LLMs 102
Abstract: A continuation of the talk from Session 1.
Session 3 (Sara Veneziale): TBC