Speakers and Topics

Shakir Mohamed (DeepMind):

Variational Inference


Shakir is a senior staff scientist at DeepMind in London, having joined in 2013. His research is in probabilistic machine learning and artificial intelligence, and he organises his research and thinking around three pillars: foundations of reasoning and intelligence, addressing global challenges, and transformation. Shakir has worked on generative models and statistical inference, particularly variational inference, which will be the topic of his MLSS talk, as well as applied topics in healthcare, weather and climate, and areas of social transformation and diversity.

At DeepMind, Shakir is a member of its diversity steering committee and leads its LGBT+ employee resource group. Shakir leads the Deep Learning Indaba, an independent grassroots organisation whose aim is to build pan-African capacity and ownership in AI. Shakir also sits on the diversity advisory board for the NeurIPS conference, and was recently elected to the ICML board of directors. He was a programme chair for ICLR 2019 and DALI 2019, and will be the senior programme chair for ICLR 2020.

Prior to joining DeepMind, Shakir held a Junior Research Fellowship from the Canadian Institute for Advanced Research (CIFAR) as part of the programme on Neural Computation and Adaptive Perception, working with Nando de Freitas. Shakir holds a PhD in Statistical Machine Learning from the University of Cambridge, where he worked under the supervision of Zoubin Ghahramani. He is from Johannesburg, South Africa, where he completed his undergraduate and master's degrees in electrical and information engineering.

John Duchi (Stanford University):

Optimization


John Duchi is an assistant professor of Statistics and Electrical Engineering and (by courtesy) Computer Science at Stanford University, with graduate degrees from UC Berkeley and undergraduate degrees from Stanford. His work focuses on large-scale optimization problems arising in statistics and machine learning, robustness and uncertain data problems, information-theoretic aspects of statistical learning, and tradeoffs between real-world resources and statistical efficiency. He has won a number of awards and fellowships, including best paper awards at the Neural Information Processing Systems (NeurIPS) and International Conference on Machine Learning (ICML) conferences, the INFORMS Applied Probability Society Best Student Paper Award, an ONR Young Investigator Award, an NSF CAREER Award, a Sloan Fellowship in Mathematics, the Okawa Foundation Award, and an honorable mention for the Association for Computing Machinery (ACM) Doctoral Dissertation Award.

Kevin Webster (FeedForward, Imperial College London):

Deep Learning


Kevin obtained his PhD in 2003 from the Department of Mathematics at Imperial College, in the area of dynamical systems. He has since held postdoctoral positions at Imperial College and was awarded a Marie Curie Individual Fellowship, which he spent at the Potsdam Institute for Climate Impact Research in Germany. During these positions his research interests became increasingly focused on machine learning, and specifically on adapting ML technologies to numerical analysis problems in dynamical systems. He was Head of Research at the London music AI startup Jukedeck, where he oversaw the development of its deep learning framework for automatic music composition. In 2018 he set up his own machine learning consultancy, FeedForward, with a focus on the music and creative industries. His particular interest in the field of deep learning is generative modelling. @kn_webster / kevin.webster@imperial.ac.uk

Pierre Richemond (Imperial College London):

Deep Learning


Pierre is currently pursuing his PhD in deep reinforcement learning at the Data Science Institute of Imperial College. He also helps run the Deep Learning Network and organise thematic reading groups there. Prior to that, he worked in electronics as a research engineer and in quantitative finance as a trader. He studied electrical engineering at ENST, probability theory and stochastic processes at Université Paris VI - École Polytechnique, and business management at HEC. His other research interests in the field of deep learning include neural network theory and stochastic optimization methods. @KloudStrife / p.richemond17@imperial.ac.uk

Kai Arulkumaran (Imperial College London):

Deep Learning


Kai is currently pursuing his PhD in deep learning at the Department of Bioengineering at Imperial College. During his PhD he has been a research intern at Microsoft Research, Twitter Magic Pony, Facebook AI Research and DeepMind. He also founded the Deep Learning Network at Imperial College to organise guest lectures and a reading group on the topic of deep learning. He is an advocate for open-source software and a well-known contributor to the Torch/PyTorch ecosystems. Before his PhD he studied computer science at the University of Cambridge and worked as a web developer. @KaiLashArul / kailash.arulkumaran13@imperial.ac.uk

Katja Hofmann (Microsoft Research Cambridge):

Reinforcement Learning


Katja Hofmann is a Senior Researcher at Microsoft Research Cambridge, where she leads a team that focuses on Reinforcement Learning in Game Intelligence.

Sanmi Koyejo (University of Illinois at Urbana-Champaign; Google AI Accra):

Interpretability


Sanmi Koyejo is an Assistant Professor in the Department of Computer Science at the University of Illinois at Urbana-Champaign and a research scientist at Google AI in Accra. Koyejo's research interests are in developing the principles and practice of adaptive and robust machine learning. Additionally, Koyejo focuses on applications to neuroscience and biomedical imaging. Koyejo has received several awards, including a best student paper award from the Conference on Uncertainty in Artificial Intelligence (UAI), a Kavli Fellowship, an IJCAI Early Career Spotlight, and a trainee award from the Organization for Human Brain Mapping (OHBM). Koyejo serves on the board of the Black in AI organization, which he co-founded.

James Hensman (PROWLER.io):

Gaussian Processes


James Hensman is Director of Research at PROWLER.io. He was previously a Lecturer in Biostatistics at Lancaster University and a Medical Research Council fellow. His research interests are in probabilistic modelling, especially approximate inference, Gaussian processes, and latent variable models. He is also interested in how machine learning can impact biomedical research, and has worked on RNA-sequence quantification and gene expression modelling.

Moustapha Cissé (AIMS Rwanda and Google AI, Accra):

AI for Good

Julien Cornebise (Element AI):

AI for Good


Julien Cornebise is Director of Research, AI for Good, at Element AI and head of its London office. He is also an honorary Associate Professor at University College London. Prior to Element AI, Julien was an early employee at DeepMind (later acquired by Google), where he led several fundamental research projects used in early demos and fundraising. After leaving DeepMind in 2016, he worked with Amnesty International. Julien holds an MSc in Computer Engineering, an MSc in Mathematical Statistics, and a PhD in Mathematics, specialised in Computational Statistics, from Université Paris VI Pierre et Marie Curie and Télécom ParisTech. He received the 2010 Savage Award in Theory and Methods from the International Society for Bayesian Analysis for his PhD work.

Lorenzo Rosasco (IIT, University of Genova, MIT):

Kernels


Lorenzo Rosasco is a team leader at the Istituto Italiano di Tecnologia (IIT) and a research scientist at the Massachusetts Institute of Technology (MIT). He holds an assistant professor position in the Computer Science Department of the University of Genova, Italy, from which he is on leave of absence. He is currently involved in establishing the Computational and Statistical Learning Laboratory, a joint laboratory based on a collaborative agreement between IIT and MIT. He received his PhD from the University of Genova in 2006. Between 2006 and 2009 he was a postdoctoral fellow at the Center for Biological and Computational Learning at MIT. His research focuses on the theory and algorithms of computational learning. Dr. Rosasco has developed and analyzed methods to learn from small as well as large samples of high-dimensional data, using analytical and probabilistic tools, within a multidisciplinary approach drawing concepts and techniques primarily from computer science but also from physics, engineering, and applied mathematics.

Michael Betancourt (Symplectomorphic):

Markov Chain Monte Carlo


Michael Betancourt is the principal research scientist at Symplectomorphic, LLC, where he develops theoretical and methodological tools to support practical Bayesian inference. He is also a core developer of Stan, where he implements and tests these tools. In addition to hosting tutorials and workshops on Bayesian inference with Stan, he collaborates on analyses in epidemiology, pharmacology, and physics, amongst other fields. Before moving into statistics, Michael earned a B.S. from the California Institute of Technology and a Ph.D. from the Massachusetts Institute of Technology, both in physics.

Twitter: @betanalpha

Sarah Filippi (Imperial College London):

Approximate Bayesian Computation


Sarah Filippi holds a joint Lectureship between the School of Public Health and the Department of Mathematics at Imperial College London. She was previously a Lecturer in the Department of Statistics at Oxford University (2014-2016) and a faculty fellow at the Alan Turing Institute. Prior to this, she held a Medical Research Council Fellowship (2011-2014) in the Theoretical Systems Biology group at Imperial College London.

The core of her research lies in biostatistics and computational statistics methodology development, motivated by applications in and around computational biology, biomedical genetics and clinical studies. Her work focuses on how novel statistical and computational approaches can aid the analysis of large-scale real-world health problems. She develops novel Bayesian statistical procedures and applies them to investigate complex biological systems in health and disease. Over the past eight years, she has conducted her research in interdisciplinary teams and has become increasingly immersed in biomedical research problems through collaboration with experimentalists and clinicians.

Timnit Gebru (Google):

Fairness in Machine Learning


Timnit Gebru is a computer scientist and the technical co-lead of the Ethical Artificial Intelligence Team at Google. She works on algorithmic bias and data mining. She is an advocate for diversity in technology and is the cofounder of Black in AI, a community of black researchers working in artificial intelligence.

Karen Livescu (Toyota Technological Institute at Chicago):

Speech Processing


Karen Livescu is an Associate Professor at TTI-Chicago. She completed her PhD and post-doc in Electrical Engineering and Computer Science at MIT and her Bachelor's degree in Physics at Princeton University. Karen's main research interests are at the intersection of speech and language processing and machine learning. Her recent work includes multi-view representation learning, acoustic word embeddings, cross-modal training of speech and language models, and automatic sign language recognition. At TTIC she teaches courses on unsupervised learning and speech technologies. She regularly serves as area chair for speech, natural language processing, and machine learning conferences, and was a program chair for ICLR 2019 and technical chair for ASRU 2015/2017/2019.

Samory Kpotufe (Columbia University):

Learning Theory


Samory Kpotufe is an Associate Professor in the Statistics Department at Columbia University. Prior to this he was an Assistant Professor at ORFE, Princeton University, from 2014 to 2018. Samory's research is in the area of statistical machine learning, with a focus on high-dimensional problems and nonparametric methods. His current interests revolve around statistical tradeoffs in the practice of ML under constraints, such as time, space, and sampling constraints.

Barbara Engelhardt (Princeton University):

Machine Learning in Computational Biology


Barbara E. Engelhardt is an associate professor in the Computer Science Department at Princeton University. Previously, she spent three years at Duke University, where she was an assistant professor in Biostatistics and Bioinformatics and in Statistical Science. She graduated from Stanford University and received her Ph.D. from the University of California, Berkeley, advised by Professor Michael Jordan. She did postdoctoral research at the University of Chicago, working with Professor Matthew Stephens. Interspersed among her academic experiences, she spent two years working at the Jet Propulsion Laboratory, a summer at Google Research, and a year at 23andMe, a DNA ancestry service. Professor Engelhardt received an NSF Graduate Research Fellowship, the Google Anita Borg Memorial Scholarship, and the Walter M. Fitch Prize from the Society for Molecular Biology and Evolution. As a faculty member, she received the NIH NHGRI K99/R00 Pathway to Independence Award, a Sloan Faculty Fellowship, and an NSF CAREER Award. Professor Engelhardt's research interests involve developing statistical models and methods for the analysis of high-dimensional biomedical data, with a goal of understanding the underlying biological mechanisms of complex phenotypes and human disease. She is on a leave of absence in 2019-2020, working with Genomics plc, a UK-based statistical genetics startup.

Stefanie Jegelka (MIT):

Submodularity


Stefanie Jegelka is an X-Window Consortium Career Development Associate Professor in the Department of EECS at MIT. She is a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Center for Statistics, and an affiliate of IDSS and ORC. Before joining MIT, she was a postdoctoral researcher at UC Berkeley, and obtained her PhD from ETH Zurich and the Max Planck Institute for Intelligent Systems. Stefanie has received a Sloan Research Fellowship, an NSF CAREER Award, a DARPA Young Faculty Award, the German Pattern Recognition Award, and a Best Paper Award at the International Conference on Machine Learning (ICML). Her research interests span the theory and practice of algorithmic machine learning.