Speaker
Dr. Ross Mawhorter, Postdoctoral Scholar at the University of Alberta, supervised by Dr. Matthew Guzdial
Title
The Very Hungry Researcher: Form and Function in Procgen Research
Abstract
Procedural Content Generation, or procgen, is a research field that studies methods for automatically producing all kinds of different designs. In my research, I focus on procedural level design for videogames. While there are many different methods for procedural level design, spanning essentially the whole field of AI (from constraint solving to reinforcement learning to machine learning), all of these methods leave something to be desired. There is a tendency to conflate form (the actual content being generated) and function (the player experiences that the content supports). In this talk, I will survey the field of procgen, and describe how the existing methods handle this dichotomy. I will explain where progress has been made, but I will also describe the ways that these contributions still leave me hungry.
Presenter Bio
Ross Mawhorter is a recent PhD graduate from the University of California, Santa Cruz. His passion for research led him to abandon sunny California for the frozen north as a postdoc in Matthew Guzdial's GRAIL lab. He studies procgen, working in his free time on generating Super Metroid ROM hacks. Along with procedural level design, his academic interests include complexity theory and cophylogeny.
Timing & Location
UComm Seminar Room 2-108
Pizza from 11:30 a.m.; seminar from noon to 1 p.m.
Speaker
Rohan Saha, PhD student, University of Alberta, supervised by Dr. Alona Fyshe
Title
Understanding the Staged Dynamics of Transformers in Learning Latent Structure
Abstract
Language modeling has shown us that transformers can discover latent structure from context, but the dynamics of how they acquire different components of that structure remain poorly understood, leading to assertions that models merely remix their training data. In this work, we use the Alchemy benchmark (Wang et al., 2021) in a controlled setting to investigate latent structure learning. We train a small decoder-only transformer on three task variants: 1) inferring missing transitions from partial contextual information, 2) composing simple rules to solve multi-transition sequences, and 3) decomposing complex multi-step examples to infer intermediate transitions. By factorizing each task into interpretable components, we show that the model learns the different latent structure components in discrete stages. We also observe an asymmetry: the model composes fundamental transitions robustly, but struggles to decompose complex examples to discover the atomic transitions. Finally, using causal interventions, we identify layer-specific plasticity windows during which freezing substantially delays or prevents stage completion. These findings provide insight into how a transformer model acquires latent structure, offering a detailed view of how capabilities evolve during training.
Presenter Bio
Rohan Saha is a PhD student at the University of Alberta studying the learning dynamics of transformer models.
Timing & Location
UComm Seminar Room 2-108
Pizza from 11:30 a.m.; seminar from noon to 1 p.m.
NO SEMINAR - Good Friday
Monthly Amii Fellow Seminar
Speaker
Dr. Bahareh Tolooshams, Amii Fellow and Assistant Professor in the Electrical & Computer Engineering Department at the University of Alberta
Title
TBA
Abstract
TBA
Presenter Bio
TBA
Timing & Location
Amii HQ 2nd floor event space (10065 Jasper Ave)
Pizza from 11:30 a.m.; seminar from noon to 1 p.m.
TBA
Speaker
Calarina Muslimani, PhD student at the University of Alberta, supervised by Dr. Matthew Taylor
Title
Reward Design and Evaluation in Reinforcement Learning
Abstract
This talk will focus on the design and evaluation of reward functions in reinforcement learning. I will begin by discussing the key challenges in reward design and the difficulties in determining whether a reward function is properly specified. I will then introduce an approach to support RL practitioners in designing more effective and aligned reward functions.
Presenter Bio
Calarina (Callie) Muslimani is a fourth-year PhD student at the University of Alberta in the Reinforcement Learning and Artificial Intelligence (RLAI) Lab, advised by Matthew E. Taylor. Her research focuses on designing human-aligned reward functions for reinforcement learning, including developing metrics to evaluate reward functions and creating reward learning algorithms.
Timing & Location
UComm Seminar Room 2-108
Pizza from 11:30 a.m.; seminar from noon to 1 p.m.
Speaker
TBA
Monthly Amii Fellow Seminar
Speaker
Dr. Amber Simpson, Amii Fellow and Professor, Department of Radiology & Diagnostic Imaging, Faculty of Medicine & Dentistry
Title
TBA
Abstract
TBA
Presenter Bio
TBA
Website
TBA
Timing & Location
Amii HQ 2nd floor event space (10065 Jasper Ave)
Pizza from 11:30 a.m.; seminar from noon to 1 p.m.
TBA
Speaker
Dr. Liam McCoy, Neurology Resident Physician, University of Alberta, and Research Affiliate, Massachusetts Institute of Technology, hosted by Dr. Randy Goebel
Title
TBA
Abstract
TBA
Presenter Bio
TBA
Timing & Location
UComm Seminar Room 2-108
Pizza from 11:30 a.m.; seminar from noon to 1 p.m.
TBA
Speaker
Jiamin He, PhD student at the University of Alberta, supervised by Dr. Martha White
Title
TBA
Abstract
TBA
Presenter Bio
TBA
Timing & Location
UComm Seminar Room 2-108
Pizza from 11:30 a.m.; seminar from noon to 1 p.m.
TBA
NO SEMINAR - Upper Bound
Speaker
TBA
More dates coming soon!