Speakers

Pieter Abbeel (BS/MS EE, KU Leuven, 2000; PhD CS, Stanford, 2008, advisor: Andrew Ng) has been a professor at UC Berkeley (EECS, BAIR) since 2008 and a Research Scientist at OpenAI since 2016. Pieter has developed apprenticeship learning algorithms which have enabled advanced helicopter aerobatics, including maneuvers such as tic-tocs, chaos and auto-rotation, which only exceptional human pilots can perform. His group enabled the first end-to-end system to reliably pick up a crumpled laundry article and fold it, and has pioneered deep reinforcement learning for robotics, including learning locomotion and visuomotor skills. His work has been featured in many popular press outlets, including BBC, New York Times, MIT Technology Review, Discovery Channel, SmartPlanet and Wired. His current research focuses on robotics and machine learning, with particular emphasis on deep reinforcement learning, deep imitation learning, deep unsupervised learning, and AI safety. Pieter has won various awards, including the Sloan Research Fellowship, the Air Force Office of Scientific Research Young Investigator Program (AFOSR-YIP) award, the Okawa Research Grant, the 2011 TR35, the IEEE Robotics and Automation Society (RAS) Early Career Award, and the Dick Volz Best U.S. Ph.D. Thesis in Robotics and Automation Award. Pieter was awarded the NSF PECASE (Presidential Early Career Award for Scientists and Engineers) by President Barack Obama in 2016.

Matt Botvinick M.D., Ph.D. is Director of Neuroscience Research at DeepMind, London, UK, and Honorary Professor at the Gatsby Computational Neuroscience Unit, University College London. Following undergraduate work at Stanford University and medical studies at Cornell University, Botvinick completed a Ph.D. in Psychology and Cognitive Neuroscience at the Center for the Neural Basis of Cognition at Carnegie Mellon University. He joined DeepMind and UCL after having held faculty positions at the University of Pennsylvania and Princeton University. His current research lies at the boundaries between cognitive neuroscience, computational neuroscience and artificial intelligence.

Emma Brunskill is an assistant professor of computer science at Stanford University. She is a Rhodes Scholar, a Microsoft Faculty Fellow, an NSF CAREER awardee, and an ONR Young Investigator Program recipient, and her group's work has been recognized by multiple best-paper nominations. She seeks to create interactive machine learning agents that help people realize their goals. To this end, her group works on the algorithmic and theoretical reinforcement learning challenges that arise in this pursuit, and on testing these ideas in real systems, with a particular focus on creating educational software that learns.

Jan Peters is a full professor (W3) for Intelligent Autonomous Systems at the Computer Science Department of the Technische Universitaet Darmstadt and, at the same time, a senior research scientist and group leader at the Max Planck Institute for Intelligent Systems, where he heads the interdepartmental Robot Learning Group. Jan Peters has received the Dick Volz Best 2007 US PhD Thesis Runner-Up Award, the 2012 Robotics: Science & Systems Early Career Spotlight, the 2013 INNS Young Investigator Award, and the IEEE Robotics & Automation Society's 2013 Early Career Award. In 2015, he was awarded an ERC Starting Grant. He studied Computer Science, Electrical, Mechanical and Control Engineering at TU Munich and FernUni Hagen in Germany, at the National University of Singapore (NUS) and the University of Southern California (USC), and was a visiting researcher at the ATR Telecommunications Research Center in Japan. He has received four Master's degrees in these disciplines as well as a Computer Science Ph.D. from USC.

Doina Precup received her B.Sc. from the Computer Science Department, Technical University of Cluj-Napoca, Romania, in 1994, and her M.Sc. and Ph.D. from the Department of Computer Science, University of Massachusetts Amherst, in 1997 and 2000 respectively. In July 2000 she joined the School of Computer Science at McGill University as a tenure-track assistant professor. Doina Precup's research interests lie mainly in the field of machine learning. She is especially interested in the learning problems faced by a decision-maker interacting with a complex, uncertain environment, and uses the framework of reinforcement learning to tackle such problems. Her current research focuses on developing better knowledge representation methods for reinforcement learning agents. Doina Precup is also more broadly interested in reasoning under uncertainty and in the application of machine learning techniques to real-world problems.

Jürgen Schmidhuber is a professor of Artificial Intelligence (Ordinarius) at the Faculty of Computer Science of the University of Lugano (IDSIA). Since age 15 or so, Professor Schmidhuber's main goal has been to build a self-improving Artificial Intelligence (AI) smarter than himself, then retire. His lab's Deep Learning Neural Networks (since 1991), such as Long Short-Term Memory (LSTM), have transformed machine learning and AI, and are now (2017) available to billions of users through the world's most valuable public companies, e.g., for greatly improved (CTC-based) speech recognition on over 2 billion Android phones (since mid-2015), greatly improved machine translation through Google Translate (since Nov 2016) and Facebook (over 4 billion LSTM-based translations per day as of 2017), Siri and QuickType on almost 1 billion iPhones (since 2016), the answers of Amazon's Alexa, and numerous other applications. In 2011, his team was the first to win official computer vision contests with deep neural nets, achieving superhuman performance. His research group also established the field of mathematically rigorous universal AI and recursive self-improvement in universal problem solvers that learn to learn (since 1987). His formal theory of creativity, curiosity, and fun explains art, science, music, and humor. He also generalized algorithmic information theory and the many-worlds theory of physics, and introduced the concept of Low-Complexity Art, the information age's extreme form of minimal art. He is the recipient of numerous awards and Chief Scientist of the company NNAISENSE, which aims at building the first practical general-purpose AI.

David Silver leads the reinforcement learning research group at DeepMind and was lead researcher on AlphaGo. He graduated from Cambridge University in 1997 with the Addison-Wesley award. Subsequently, David co-founded the video games company Elixir Studios, where he was CTO and lead programmer, receiving several awards for technology and innovation. David returned to academia in 2004 at the University of Alberta to study for a PhD on reinforcement learning, where he co-introduced the algorithms used in the first master-level 9x9 Go programs. David was awarded a Royal Society University Research Fellowship in 2011, and subsequently became a lecturer at University College London. David consulted for DeepMind from its inception, joining full-time in 2013. His recent work has focused on combining reinforcement learning with deep learning, including a program that learns to play Atari games directly from pixels. David led the AlphaGo project, culminating in the first program to defeat a top professional player in the full-size game of Go.

Accepted speakers: Nicholas Denis (Ottawa), Anna Harutyunyan (VU Brussel), Xiangyu Kong (Peking), Saurabh Kumar (GeorgiaTech), Shayegan Omidshafiei (MIT), Melrose Roderick (Brown)