Yiding Jiang: Yiding Jiang is an AI resident at Google Research. He previously received a Bachelor of Science in Electrical Engineering and Computer Science from the University of California, Berkeley. He has worked on projects in deep learning, reinforcement learning, and robotics. He has published papers on predicting the generalization of neural networks and on evaluating complexity measures.
Pierre Foret: Pierre Foret is an AI resident at Google Research. He previously received a Master of Financial Engineering from the University of California, Berkeley and a Master's degree in applied mathematics from ENSAE ParisTech. His research interests lie at the intersection of optimization and generalization in deep learning.
Scott Yak: Scott Yak is a Software Engineer at Google Research. He previously received a Bachelor of Science in Engineering from Princeton University. He is currently working on AutoML and Neural Architecture Search at Google. He has published work on predicting the generalization of neural networks.
Behnam Neyshabur: Behnam Neyshabur is a senior research scientist at Google. Before that, he was a postdoctoral researcher at New York University and a member of the Theoretical Machine Learning program at the Institute for Advanced Study (IAS) in Princeton. In summer 2017, he received a PhD in computer science from TTI-Chicago. He is interested in machine learning and optimization, and his primary research is on optimization and generalization in deep learning. He co-organized the ICML 2019 workshops on "Understanding and Improving Generalization in Deep Learning" and "Identifying and Understanding Deep Learning Phenomena". He has published several papers related to complexity measures and generalization in deep learning.
Hossein Mobahi: Hossein Mobahi is a research scientist at Google Research. His recent work covers the intersection of machine learning, generalization, and optimization, with an emphasis on deep learning. Prior to joining Google in 2016, he was a postdoctoral researcher in the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT. He obtained his PhD in Computer Science from the University of Illinois at Urbana-Champaign (UIUC). He is the recipient of the Computational Science & Engineering Fellowship, the Cognitive Science & AI Award, and the Mavis Memorial Scholarship. He is interested in machine learning and optimization, and his primary research is on optimization and generalization in deep learning. He has published several works on generalization and on the theoretical foundations of self-distillation.
Isabelle Guyon: Isabelle Guyon is a chaired professor in "big data" at the Université Paris-Saclay, specialized in statistical data analysis, pattern recognition, and machine learning. She is one of the co-founders of the ChaLearn Looking at People (LAP) challenge series, and she pioneered applications of the Microsoft Kinect to gesture recognition. Her areas of expertise include computer vision and bioinformatics. Prior to joining Paris-Saclay, she worked as an independent consultant and was a researcher at AT&T Bell Laboratories, where she pioneered applications of neural networks to pen computer interfaces (with collaborators including Yann LeCun and Yoshua Bengio) and co-invented, with Bernhard Boser and Vladimir Vapnik, Support Vector Machines (SVMs), which became a textbook machine learning method. She worked on early applications of Convolutional Neural Networks (CNNs) to handwriting recognition in the 1990s. She is also the primary inventor of SVM-RFE, a variable selection technique based on SVMs. The SVM-RFE paper has thousands of citations and is often used as a reference method against which new feature selection methods are benchmarked. She also authored a seminal paper on feature selection that has received thousands of citations. Since 2003, she has organized many machine learning challenges, supported by the EU network Pascal2, NSF, and DARPA, with prizes sponsored by Microsoft, Google, Facebook, Amazon, Disney Research, and Texas Instruments. Isabelle Guyon holds a Ph.D. degree in Physical Sciences from the University Pierre and Marie Curie, Paris, France. She is president of ChaLearn, a non-profit dedicated to organizing challenges, vice president of the Unipen foundation, adjunct professor at New York University, action editor of the Journal of Machine Learning Research, and editor of the Challenges in Machine Learning book series of Microtome, and she was program chair of the NIPS 2016 conference.
Gintare Karolina Dziugaite: Gintare Karolina Dziugaite is a Fundamental Research Scientist at Element AI. Dziugaite recently graduated from the University of Cambridge, where she completed her doctorate in Zoubin Ghahramani's machine learning group. The focus of her thesis was on constructing generalization bounds to understand existing learning algorithms in deep learning and to propose new ones. She continues to work on explaining generalization phenomena in deep learning using statistical learning tools. She was a lead organizer of the 2019 ICML workshop on Machine Learning with Guarantees. She has published several works on generalization in deep learning.
Daniel Roy: Daniel M. Roy is an Assistant Professor in the Department of Statistical Sciences at the University of Toronto and a Canada CIFAR AI Chair. Prior to joining Toronto, Roy was a Research Fellow of Emmanuel College and a Newton International Fellow of the Royal Society and the Royal Academy of Engineering, hosted by the University of Cambridge. Roy completed his doctorate in Computer Science at the Massachusetts Institute of Technology. Roy has co-organized a number of workshops, including the 2008, 2012, and 2014 NeurIPS Workshops on Probabilistic Programming, a 2016 Simons Institute Workshop on Uncertainty in Computation, and special sessions at the 2019 Statistical Society of Canada meeting and the 2016 Mathematical Foundations of Programming Semantics conference. At ICML 2019, he organized a workshop on Machine Learning with Guarantees. He has published several works on generalization in deep learning.
Suriya Gunasekar: Suriya Gunasekar is a senior researcher in the Machine Learning and Optimization (MLO) Group at Microsoft Research. Prior to joining MSR, she was a Research Assistant Professor at the Toyota Technological Institute at Chicago. She received her PhD in ECE from The University of Texas at Austin. She has published several works on optimization and implicit regularization.
Samy Bengio: Samy Bengio (PhD in computer science, University of Montreal, 1993) has been a research scientist at Google since 2007. He currently leads a group of research scientists in the Google Brain team, conducting research in many areas of machine learning such as deep architectures, representation learning, sequence processing, speech recognition, image understanding, large-scale problems, and adversarial settings. He was the general chair for Neural Information Processing Systems (NeurIPS) 2018, the main conference venue for machine learning, and the program chair for NeurIPS in 2017. He is an action editor of the Journal of Machine Learning Research and serves on the editorial board of the Machine Learning Journal. He was program chair of the International Conference on Learning Representations (ICLR 2015, 2016), general chair of BayLearn (2012-2015) and of the Workshops on Machine Learning for Multimodal Interactions (MLMI 2004-2006), as well as of the IEEE Workshop on Neural Networks for Signal Processing (NNSP 2002), and has served on the program committees of several international conferences such as NIPS, ICML, ICLR, ECML, and IJCAI.