After a PhD at MINES Paris on automatic parallelisation for accelerators such as GPUs, Mehdi joined Apple to work on Clang/LLVM. One of his main contributions to LLVM has been scaling link-time optimization with ThinLTO. He then joined the Tesla Autopilot group, taking a break from compilers, but not for long: he soon returned to them to build MLIR at Google and later drive the launch of the OpenXLA initiative. He is now a Distinguished Engineer at NVIDIA working on Deep Learning Frameworks.
Lorenzo Chelini is a compiler engineer at NVIDIA. He holds a Ph.D. in Computer Engineering from the Eindhoven University of Technology, a Master's degree from the Polytechnic University of Turin, and a Bachelor's degree from the University of Pisa. Lorenzo actively contributes to the LLVM ecosystem, mainly MLIR and Polygeist.
Mathieu Fehr is a final-year PhD student at the University of Edinburgh, currently visiting the University of Cambridge. A large part of his research focuses on improving the accessibility of compiler technology, which includes the design and development of xDSL, a smoother entry point to MLIR. His broader research interests encompass advancing declarative approaches in compiler design to facilitate formal reasoning and enable an ecosystem of compilation tools, including verifiers, fuzzers, and superoptimizers.
Sasha Lopoukhine is a PhD student at the University of Cambridge, researching how to make machine learning compilers more approachable and extensible. His recent work leverages xDSL to implement a backend for linear algebra micro-kernels targeting ETH's Snitch core, outperforming the state-of-the-art LLVM backend by a factor of 20.
William Moses is an Assistant Professor at the University of Illinois in the Computer Science and Electrical and Computer Engineering departments and a Researcher at Google. He received a Ph.D. in Computer Science from MIT, where he also received his M.Eng in electrical engineering and computer science (EECS) and B.S. in EECS and physics. William's research involves creating compilers and program representations that enable performance and use-case portability, thus enabling non-experts to leverage the latest in high-performance computing and ML. He is known as the lead developer of Enzyme (NeurIPS '20, SC '21, SC '22), an automatic differentiation tool for LLVM capable of differentiating code in a variety of languages, after optimization, and for a variety of architectures, and the lead developer of Polygeist (PACT '21, PPoPP '23), a polyhedral compiler and C++ frontend for MLIR. He has also worked on the Tensor Comprehensions framework for synthesizing high-performance GPU kernels of ML code, the Tapir compiler for parallel programs (best paper at PPoPP '17), and compilers that use machine learning to better optimize (AutoPhase/TransformLLVM). He is a recipient of the ACM SIGHPC Doctoral Dissertation Award, a U.S. Department of Energy Computational Science Graduate Fellowship, and the Karl Taylor Compton Prize, MIT's highest student award.
Matthias is a software engineer at NVIDIA Switzerland. He received a Ph.D. in Mathematical and Computing Sciences from the Tokyo Institute of Technology. He has been contributing to MLIR and to MLIR-based open-source projects for the last three years.
Alex Zinenko is the Chief Scientist at Brium Inc., a young innovative company in the domain of high-performance AI. Previously, he worked as a staff research engineer at Google DeepMind and a research engineer at Inria. Alex obtained his PhD from the University of Paris-Saclay (Paris-Sud XI) for his work on "Interactive Program Restructuring". His research interests span compilation, high-performance systems, and interactive software visualization, united by the common goal of making it possible to program efficient software effectively.