Amy Zhang is an assistant professor and Texas Instruments/Kilby Fellow in the Department of Electrical and Computer Engineering at UT Austin, where she started in Spring 2023, and an affiliate member of the Texas Robotics Consortium. Her work focuses on improving the sample efficiency and generalization of reinforcement learning algorithms by bridging theory and practice, and on developing new decision-making algorithms for real-world problems.
Amy completed her PhD in computer science at McGill University and Mila – Quebec Artificial Intelligence Institute, where she was advised by Joelle Pineau and Doina Precup. Previously, she was a research scientist at Facebook AI Research, a postdoctoral fellow at UC Berkeley, and obtained an M.Eng. in EECS and dual B.Sci. degrees in Mathematics and EECS from MIT. She also spent two years on the board of directors for Women in Machine Learning.
RLC ICBINB 2024 Talk: "Fixing Issues in Goal-Conditioned RL"
Audrey Durand is an Assistant Professor in Computer Science and Software/Computer/Electrical Engineering at Université Laval (Québec City, Canada). She is a core member of the Institute Intelligence and Data (IID) and is also affiliated with Mila — Quebec Artificial Intelligence Institute through a Canada CIFAR AI Chair. Her research focuses on reinforcement learning and bandits, with applications primarily in healthcare.
RLC ICBINB 2024 Talk: "A tale of wins and losses"
Will Dabney is a senior staff research scientist at DeepMind, where he studies reinforcement learning with forays into other topics in machine learning and neuroscience. His research agenda focuses on finding the critical path to human-level AI. Overall, he believes we are in fact only a handful of great papers away from the most significant breakthrough in human history. With the help of his collaborators, he hopes to move us closer; one paper, experiment, or conversation at a time.
RLC ICBINB 2024 Talk: "Reinforcement Learning is hard. Really hard."
Sonali Parbhoo is an Assistant Professor and leader of the AI for Actionable Impact Group at Imperial College London. Her research focuses on sequential decision-making under uncertainty, causal inference, and building interpretable models to improve clinical care and deepen our understanding of human health, with applications in areas such as HIV and critical care. Prior to this, Sonali was a postdoctoral research fellow at Harvard. Her work has been published at a number of machine learning conferences (NeurIPS, AAAI, ICML, AISTATS) and in medical journals (Nature Medicine, Nature Communications, AMIA, PLoS One, JAIDS). She was also a Swiss National Science Fellow and was named a Rising Star in AI in 2021. Sonali received her PhD in 2019 from the University of Basel, Switzerland, where she built intelligent sequential decision-making models for understanding the interplay between host and virus in the fight against HIV.
RLC ICBINB 2024 Talk: "Mind Your Evaluation: On the Challenges with Off-Policy Evaluation for Real World Decision-Making"
Shengpu Tang is an incoming Assistant Professor of Computer Science at Emory University. He completed his PhD in Computer Science & Engineering at the University of Michigan in 2024, supervised by Prof. Jenna Wiens. Shengpu's research aims to use AI to enhance healthcare decision-making, with a particular focus on reinforcement learning. His past contributions span the technical areas of reinforcement learning (NeurIPS’23, NeurIPS’22 oral, MLHC’21, ICML’20), time-series and sequence data modeling (MLHC’19), dataset bias (HealthAffairs’21), as well as translating AI solutions to advance precision medicine (IDWeek'24 oral, AJS’21, JAMIA’20, JCO-CCI’20), with some work specifically addressing the COVID-19 pandemic (BMJ’22, AnnalsATS’20).
RLC ICBINB 2024 Talk: "Reinforcement Learning for Healthcare Decision Making"
Steven is a senior lecturer and a member of the RAIL Lab research group at the University of the Witwatersrand in Johannesburg, South Africa. His research interests revolve around artificial intelligence, and reinforcement learning in particular, working to develop agents capable of learning symbolic state representations that can be transferred between tasks.
RLC ICBINB 2024 Talk: "Safety in (negative) numbers? Rethinking what to learn for safe RL"