Leslie Pack Kaelbling is Professor of Computer Science and Engineering at the Computer Science and Artificial Intelligence Laboratory (CSAIL) at the Massachusetts Institute of Technology. She has made research contributions to decision-making under uncertainty, learning, and sensing with applications to robotics, with a particular focus on reinforcement learning and planning in partially observable domains.
She holds an A.B. in Philosophy and a Ph.D. in Computer Science from Stanford University, and has held research positions at SRI International and Teleos Research and a faculty position at Brown University. She is the recipient of the US National Science Foundation Presidential Faculty Fellowship, the IJCAI Computers and Thought Award, and several teaching prizes, and has been elected a Fellow of AAAI. She was the founding editor-in-chief of the Journal of Machine Learning Research.
Dorsa Sadigh is an assistant professor in Computer Science and Electrical Engineering at Stanford University. Her research interests lie at the intersection of robotics, learning, and control theory. Specifically, she is interested in developing algorithms for safe and adaptive human-robot and human-AI interaction. Dorsa received her doctoral degree in Electrical Engineering and Computer Sciences (EECS) from UC Berkeley in 2017 and her bachelor's degree in EECS from UC Berkeley in 2012. She is a recipient of the Sloan Fellowship, the NSF CAREER Award, the ONR Young Investigator Award, the AFOSR Young Investigator Award, the Okawa Foundation Fellowship, MIT TR35, and the IEEE RAS Early Academic Career Award.
Dale Schuurmans is a Research Scientist at Google Brain, Professor of Computing Science at the University of Alberta, a Canada CIFAR AI Chair, and a Fellow of AAAI. He has served as an Associate Editor in Chief for IEEE TPAMI, an Associate Editor for JMLR, AIJ, JAIR, and MLJ, and as a Program Co-chair for AAAI-2016, NIPS-2008, and ICML-2004. He has worked in many areas of machine learning and artificial intelligence, including model selection, online learning, adversarial optimization, Boolean satisfiability, sequential decision making, reinforcement learning, Bayesian optimization, semidefinite methods for unsupervised learning, dimensionality reduction, and robust estimation. He has published over 200 papers in these areas and has received paper awards at NeurIPS, ICML, IJCAI, and AAAI.
Deepak Pathak is a faculty member in the School of Computer Science at Carnegie Mellon University. He received his Ph.D. from UC Berkeley, and his research spans computer vision, machine learning, and robotics. He is a recipient of faculty awards from Google, Samsung, Sony, and GoodAI, and of graduate fellowship awards from Facebook, NVIDIA, and Snapchat. His research has been featured in popular press outlets including The Economist, The Wall Street Journal, Quanta Magazine, Washington Post, CNET, Wired, and MIT Technology Review, among others. Deepak received his Bachelor's from IIT Kanpur with a Gold Medal in Computer Science. He co-founded VisageMap Inc., later acquired by FaceFirst Inc.
Machel is a researcher at the University of Tokyo, working on NLP research in Matsuo Lab, advised by Yutaka Matsuo. Machel is also an incoming Research Scientist at Google AI and a PhD student at the University of Washington, co-advised by Luke Zettlemoyer and Noah A. Smith. Machel was previously a visiting student at Carnegie Mellon University, advised by Graham Neubig. Machel currently works on multilingual NLP, low-resource NLP, and edit models, among other topics.
Thomas leads the Science Team at Hugging Face Inc., a Brooklyn-based startup working on Natural Language Generation and Natural Language Understanding. Thomas graduated from Ecole Polytechnique (Paris, France) and worked on laser-plasma interactions at the BELLA Center of the Lawrence Berkeley National Laboratory (Berkeley, CA). Thomas earned his PhD in statistical/quantum physics at Sorbonne University and ESPCI (Paris, France) and a law degree from Pantheon Sorbonne University. Beginning in 2015, Thomas consulted for several deep-learning/AI/ML startups before joining his friend's startup to build the science team at Hugging Face. Thomas is interested in Natural Language Processing, Deep Learning, and Computational Linguistics. Much of his research concerns Natural Language Generation (mostly) and Natural Language Understanding (as a tool for better generation).
Jim is a research scientist at NVIDIA AI. His research interests span foundation models, embodied agents, robotics, game AI, multimodal learning, and large-scale AI systems. He obtained his Ph.D. in Computer Science at Stanford University, advised by Fei-Fei Li. Previously, Jim did research internships at NVIDIA, Google Cloud AI, OpenAI, Baidu Silicon Valley AI Lab, and Mila-Quebec AI Institute. He graduated summa cum laude with a Bachelor's degree in Computer Science from Columbia University, where he was the valedictorian of the Class of 2016 and a recipient of the Illig Medal. Personal website: https://jimfan.me.
Gabriel Barth-Maron
Gabriel is a Research Engineer at DeepMind in London. His research interests span reinforcement learning, representation learning, and multimodal modeling. Most recently, Gabriel has been working on multi-modal, multi-task, multi-embodiment generalist policies such as Gato. He is also interested in building tools that accelerate the pace of research in machine learning and AI, such as Acme, Launchpad, and Reverb. Previously, Gabriel worked on NLP and semantic search for the startups AppSnatcher and dMetrics. He holds a BA in mathematical economics and an ScM in computer science from Brown University.