About the Tutorial
The current landscape of Machine Learning (ML) and Deep Learning (DL) is rife with non-uniform models, frameworks, and system stacks. It lacks standard tools and methodologies to evaluate and profile models or systems. In the absence of such tools, the state of the practice for evaluating and comparing the benefits of proposed AI innovations (be they hardware or software) on end-to-end AI pipelines is both arduous and error-prone, stifling the adoption of these innovations in a rapidly moving field.
The goal of this tutorial is to bring experts from industry and academia together to foster systematic development, reproducible evaluation, and performance analysis of deep learning artifacts. It seeks to address the following questions:
What are the benchmarks that can effectively capture the scope of the ML/DL domain?
Are the existing frameworks sufficient for this purpose?
What are some of the industry-standard evaluation platforms or harnesses?
What are the metrics for carrying out an effective comparative evaluation?
Registration: 2026.hpca-conf.org/attending/registration
Coming Soon
Tom St. John is a member of technical staff at Gimlet Labs. He also serves as the chair of the MLPerf automotive advisory board. Prior to his current role, he served as technical lead for MTIA training performance at Meta AI and led the distributed machine learning performance optimization efforts within Tesla Autopilot. His research primarily focuses on the intersection of parallel programming models and computer architecture design, and the impact that this has on large-scale machine learning.
Carole-Jean Wu is a Director of AI Research at Meta. She is a founding member and a Vice President of MLCommons – a non-profit organization that aims to accelerate machine learning innovations for the benefit of all. Dr. Wu also serves on the MLCommons Board as a Director, chaired the MLPerf Recommendation Benchmark Advisory Board, and co-chaired MLPerf Inference. Prior to Meta/Facebook, Dr. Wu was a professor at ASU. Dr. Wu’s expertise sits at the intersection of computer architecture and machine learning. Her work spans datacenter infrastructure and edge systems, including developing energy- and memory-efficient systems and microarchitectures, optimizing systems for machine learning execution at scale, and designing learning-based approaches for system design and optimization. She is passionate about pathfinding and tackling system challenges to enable efficient and responsible AI technologies.
Vijay Janapa Reddi is a Professor in the John A. Paulson School of Engineering and Applied Sciences at Harvard University. Prior to joining Harvard, he was an Associate Professor at The University of Texas at Austin, where he continues to be an Adjunct Professor. His research interests include computer architecture, compilers, and runtime systems, specifically in the context of mobile and edge computing systems, with a focus on improving their performance, power efficiency, and reliability.