James is a Senior Staff Research Scientist at Sony AI. He received his PhD in computer science from the University of Maryland, Baltimore County, and went on to do postdoctoral research at Brown University from 2013 to 2016. He researches autonomous decision-making agents, focusing on Reinforcement Learning (RL), RL that involves interaction with people, and RL for robotics. He created the Brown-UMBC Reinforcement Learning and Planning (BURLAP) Java library, one of the largest early efforts to build an open-source RL library.
Ida is a Principal Researcher at Microsoft Research NYC. In this role, she broadly focuses on building and evaluating generative AI, inspired by her research in cognitive neuroscience, reinforcement learning, and NeuroAI. Specifically, she studies how humans and AI build models of the world and use them in memory, exploration, and planning. She builds and tests brain- and behavior-inspired algorithms for learning and reasoning, e.g., AI for gaming with Xbox. Her approach combines reinforcement learning, neural networks, large language models, and machine learning with behavioral experiments, fMRI, and electrophysiology.
Roberta is a Senior Staff Research Scientist at Google DeepMind in the Open-Endedness team. She is also an Honorary Lecturer at UCL, advising PhD students and co-teaching a course on Open-Endedness and General Intelligence. Previously, she led the AI Engineer and AI Scientist team at Meta GenAI in London, focused on building AI agents that drive scientific discovery through planning, reasoning, tool use, and learning from feedback. She also led the Tool Use team for Llama 3 and contributed to products like Meta AI and AI Studio. Her research interests include reinforcement learning, open-ended learning, self-supervised learning, and large language models—particularly in building robust, generalizable, and aligned AI agents that can learn continuously and act autonomously in complex environments.
Pablo Samuel Castro was born and raised in Quito, Ecuador, and moved to Montréal after high school to study at McGill University. For his PhD, he studied reinforcement learning with Doina Precup and Prakash Panangaden at McGill. Castro has been working at Google for over eleven years. He is currently a staff research scientist at Google DeepMind in Montreal, where he conducts fundamental reinforcement learning research and is a regular advocate for increasing LatinX representation in the research community. He is also an adjunct professor in the Department of Computer Science and Operations Research (DIRO) at Université de Montréal. In addition to his interest in coding, AI and math, Castro is an active musician.
Julian is a leading researcher at the intersection of artificial intelligence and games. His work explores how AI can make games more adaptive, engaging, and intelligent, and how games can serve as testbeds for advancing general AI. He focuses on search-based procedural content generation, player modeling, and evolutionary reinforcement learning, with the goal of creating game agents and content that respond meaningfully to player behavior and preferences. He also works on benchmarking AI through competitions, such as the General Video Game Competition, to push progress toward more general and robust learning systems. Julian’s research combines evolutionary algorithms, behavioral modeling, and large-scale game data analysis to better understand how AI can learn to play—and design—games in diverse, human-like ways.
Michael is a full professor at the University of Alberta. His research focuses on machine learning, games, and robotics, and he's particularly fascinated by the problem of how computers can learn to play games through experience. He is the leader of the Computer Poker Research Group, which has built some of the best poker-playing programs on the planet. The programs have won international AI competitions and were the first to beat top professional players in a meaningful competition. He is also a principal investigator in the Reinforcement Learning and Artificial Intelligence (RLAI) group and the Alberta Ingenuity Centre for Machine Learning (AICML). He completed his Ph.D. at Carnegie Mellon University, where his dissertation focused on multiagent learning, and he was extensively involved in the RoboCup initiative. His research has been featured on television programs such as Scientific American Frontiers, National Geographic Today, and Discovery Channel Canada, as well as in the New York Times, Wired, CBC and BBC radio, and twice in exhibits at the Smithsonian Museums in Washington, DC.