Workshop on Benchmarking Machine Learning Workloads

To be held in conjunction with the IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS)

August 23rd, 2020

Important

All attendees will need to register for the workshop via the ISPASS registration page.

Program

*All times are in the EDT time zone.

  • 1:00 PM - 1:05 PM Introduction, Tom St. John (Tesla Inc.)

  • 1:05 PM - 1:50 PM Invited Talk: "Characteristics of Facebook's Deep Learning Recommendation Training Models", Bilge Acun (Facebook) [slides]

  • 1:50 PM - 2:10 PM Paper: "Bosch Deep Learning Hardware Benchmark", Armin Runge, Thomas Wenzel, Dimitrios Bariamis, Benedikt Sebastian Staffler, Lucas Rego Drumond, Michael Pfeiffer (Robert Bosch GmbH) [slides]

  • 2:10 PM - 2:30 PM Paper: "Characterization of Data Generating Neural Network Workloads on x86 Server Architecture", Antara Ganguly (IIT Bombay), Shankar Balachandran, Anant Nori (Intel), Virendra Singh (IIT Bombay), Sreenivas Subramoney (Intel) [slides]

  • 2:30 PM - 3:15 PM Invited Talk: "Benchmarking DNN Models on Future Inference and Training Platforms", Tushar Krishna (Georgia Tech) [slides]

  • 3:15 PM - 3:25 PM Break

  • 3:25 PM - 4:10 PM Invited Talk: "MLPerf-HPC: A Benchmark Suite for Large Scale Machine Learning on HPC Systems", Steve Farrell (Lawrence Berkeley National Laboratory) [slides]

  • 4:10 PM - 4:55 PM Invited Talk: "TinyML", Evgeni Gousev (Qualcomm)

  • 4:55 PM - 5:00 PM Conclusion, Vijay Janapa Reddi (Harvard University)

About

With evolving system architectures, hardware and software stacks, and diverse machine learning (ML) workloads and datasets, it is important to understand how these components interact with one another. Well-defined benchmarking procedures help evaluate and reason about the performance gains of mapping ML workloads to systems. We welcome novel submissions on benchmarking machine learning workloads from all disciplines, such as image and speech recognition, language processing, drug discovery, simulation, and scientific applications. Key problems we seek to address are: (i) which representative ML benchmarks cater to workloads seen in industry, national labs, and interdisciplinary sciences; (ii) how to characterize ML workloads based on their interaction with hardware; (iii) which novel aspects of hardware, such as heterogeneity in compute, memory, and networking, will drive their adoption; and (iv) how to model performance and project it onto next-generation hardware. Along with the selected papers, the workshop program will feature experts in these research areas presenting their recent work and potential directions to pursue.

Deadlines

  • Paper submission deadline: March 1, 2020

  • Author notification: March 9, 2020

  • Camera-ready papers due: March 23, 2020

(All deadlines are at midnight EST, and are firm.)

Submission

We solicit short/position papers (2-4 pages) as well as full papers (4-6 pages). Because there are no official proceedings, submitting a paper to the workshop will not prevent you from submitting it to a conference in the future. The workshop therefore provides an ideal venue for getting early feedback on your work!

The page limit includes figures, tables, and appendices, but excludes references. Please use the standard IEEE or ACM templates (LaTeX or Word). All submissions must be made via EasyChair.

Each submission will be reviewed by at least three members of the program committee. Papers will be reviewed for novelty, quality, technical strength, and relevance to the workshop. All accepted papers will be made available online, and authors of selected papers will be invited to submit extended versions to a journal after the workshop.

Submissions are not double-blind (author names must be included).


Program Committee

  • Prasanna Balaprakash, Argonne National Laboratory

  • Chris De Sa, Cornell University

  • Justin Gottschlich, Intel

  • Jiajia Li, Pacific Northwest National Laboratory

  • Abid Malik, Brookhaven National Laboratory

  • Jennifer Myers, SiMa.ai

  • Gennady Pekhimenko, University of Toronto

  • Karthik Swaminathan, IBM

  • Wei Wei, Alibaba Group

  • Carole-Jean Wu, Facebook/Arizona State University

Organizers

Vijay Janapa Reddi

Harvard University

Tom St. John

Tesla

Murali Krishna Emani

Argonne National Laboratory

Murali Emani is an Assistant Computer Scientist in the Data Science group at the Argonne Leadership Computing Facility (ALCF) at Argonne National Laboratory. His research interests include scalable machine learning, parallel programming models, high-performance computing, runtime systems, emerging HPC architectures, and online adaptation. Previously, he was a Postdoctoral Research Staff Member at Lawrence Livermore National Laboratory. He obtained his PhD from the Institute for Computing Systems Architecture at the School of Informatics, University of Edinburgh, UK. Murali has published in top conferences including PACT, PLDI, CGO, and SC, and holds three granted patents. He has served as a technical program committee member for conferences including ICPP '19, CCGRID '19, PACT '18, CCGRID '18, and ICPP '18. He is a co-founder of the MLPerf HPC working group and chaired the first Birds-of-a-Feather session on machine learning benchmarking on HPC systems at SC'19.

Vijay Janapa Reddi is a chair of MLPerf Inference and an Associate Professor in the John A. Paulson School of Engineering and Applied Sciences at Harvard University. His research interests include computer architecture and runtime systems, specifically in the context of autonomous machines/robots and mobile and edge computing systems. Dr. Janapa Reddi is a recipient of multiple honors and awards, including the National Academy of Engineering (NAE) Gilbreth Lecturer Honor (2016), the IEEE TCCA Young Computer Architect Award (2016), the Intel Early Career Award (2013), Google Faculty Research Awards (2012, 2013, 2015, 2017), Best Paper Awards at the 2005 International Symposium on Microarchitecture (MICRO) and the 2009 International Symposium on High Performance Computer Architecture (HPCA), induction into the MICRO and HPCA Halls of Fame (2018 and 2019, respectively), and IEEE Top Picks in Computer Architecture awards (2006, 2010, 2011, 2016, 2017). He received a Ph.D. in computer science from Harvard University.

Tom St. John is a staff machine learning scientist at Tesla, where he leads the distributed machine learning performance optimization efforts within the Autopilot organization. Prior to his current role, he served as the director of the AI Co-Design Center at Wave Computing. His research focuses primarily on the intersection of parallel programming models and computer architecture design, and the impact that this intersection has on large-scale machine learning. His work has resulted in a number of patents and publications, including a Best Paper Award at ADAPT '14.