Invited Speakers

Prof. Song Han (MIT): Song Han is an Associate Professor in MIT's EECS Department. He received his PhD from Stanford University. His research focuses on efficient deep learning computing. He proposed the "deep compression" technique, which can reduce neural network size by an order of magnitude without losing accuracy, and the "Efficient Inference Engine" hardware implementation, which first exploited pruning and weight sparsity in deep learning accelerators.

Webpage: https://songhan.mit.edu/

Prof. Anima Anandkumar (Caltech and NVIDIA): She is a Bren Professor at Caltech and Director of ML Research at NVIDIA. She was previously a Principal Scientist at Amazon Web Services. She has received several honors, including the Alfred P. Sloan Fellowship, the NSF CAREER Award, Young Investigator Awards from the DoD, and Faculty Fellowships from Microsoft, Google, Facebook, and Adobe. She is part of the World Economic Forum's Expert Network. She is passionate about designing principled AI algorithms and applying them in interdisciplinary applications. Her research focuses on unsupervised AI, optimization, and tensor methods.

Webpage: http://tensorlab.cms.caltech.edu/users/anima/

Dr. Prateek Jain (Sr. Staff Research Scientist, Google AI; Adjunct Faculty, IIT Kanpur): He leads the Machine Learning Foundations and Optimization team at Google AI, Bangalore, India. His research interests include machine learning, non-convex optimization, high-dimensional statistics, and optimization algorithms more broadly. He is also interested in applications of machine learning to privacy, computer vision, text mining, and natural language processing. Previously, he was a Senior Principal Research Scientist at Microsoft Research India.

Webpage: https://www.prateekjain.org/

Dr. Sangdoo Yun (Research Scientist at Naver AI Research): Sangdoo Yun is a Research Scientist at Naver AI Research (South Korea, 2018-present). He received his PhD in computer vision from Seoul National University (South Korea, 2013-2017). His research interests include training robust and strong vision models, data augmentation, and dataset condensation. He has published about 20 papers at top-tier machine learning conferences, including NeurIPS, ICLR, ICML, CVPR, ECCV, and ICCV. He was a co-organizer of the workshop "ImageNet: Past, Present, and Future" at NeurIPS 2021.

Webpage: https://sangdooyun.github.io/