Workshop on Trends in Machine-Learning

(and its impact on computer architecture)

The videos and slides are online! (see links next to speakers' names in the programme)

Thanks for your patience, and for your wonderful participation in the workshop!

SIGARCH Visioning Workshop Series

Overview

Machine learning has become a pervasive tool throughout industry. This trend, combined with the plateauing of Moore's Law, has made hardware accelerators for machine learning one of the most promising paths forward.

The goal of this workshop is to help the architecture community understand where machine learning is headed, so that researchers and engineers can plan and design their accelerators accordingly. Essentially, we want to kick off a two-way conversation between the machine-learning community and the architecture community.

For that purpose, we have assembled a roster of distinguished speakers (by invitation only) from the machine-learning domain, from both industry and academia.

Programme

  • 08:15 Opening
  • 08:30 Bill Dally [slides, video], NVIDIA (Chief Scientist and Senior Vice President of Research)
    • Efficient Methods and Hardware for Deep Learning
  • 09:10 Jason Mars [slides, video], clinc.com (CEO and co-founder)
    • Clinc, Inc.
  • 09:50 Break (30 mins)
  • 10:20 Ofer Dekel [slides, video], MSR (Principal Researcher in the Machine Learning and Optimization group)
    • Machine Learning on the Edge
  • 11:00 Yoshua Bengio [slides, video], Univ. Montreal (one of the "founding fathers" of deep neural networks)
    • Towards more hardware-friendly deep learning
  • 11:40 Scott Legrand [slides, video: pending copyright release form], A9 (led DSSTNE, Amazon's framework for sparse ML)
    • All I want for Christmas... Is a CUDA ASIC
  • 12:20 Lunch (65 mins)
  • 13:25 Ali Farhadi [slides, video], Univ. Washington & AI2 (Professor & Senior Research Manager, Allen Institute for Artificial Intelligence - AI2)
    • TBD
  • 14:05 Tianqi Chen [slides, video], University of Washington (author of MXNet, selected as AWS's deep learning framework)
    • An End to End IR Stack for Deep Learning Systems
  • 14:20 Vincent Vanhoucke [slides, video], Google (Principal Scientist and Tech Lead in Brain)
    • Learning to Co-Design
  • 15:00 Break (30 mins)
  • 15:30 Carey K. Kloss [slides, video], Intel (Sr. Director; formerly VP of Hardware Engineering at Nervana)
    • Memory Bandwidth in Deep Learning HW
  • 16:10 Yangqing Jia [slides, video], Facebook (author of Caffe and lead of Facebook's large-scale AI platform)
    • TBD
  • 16:50 Eugenio Culurciello [slides, video], Purdue Univ. & FWDNXT (co-inventor of NeuFlow)
    • Snowflake: Deep Neural Network Accelerator
  • 17:30 Gregory [slides, video], Baidu (Head of Systems Research, Baidu Silicon Valley AI Lab)
    • The Road to Exascale AI

Date & Location

The TiML workshop will take place on June 25th, 2017, in Toronto, as part of the ISCA 2017 Conference (venue).

Please register via the conference registration page.

Philosophy & Goals

Industry and academia have now fully embraced machine learning as a major application domain, with many hardware solutions being explored by companies (e.g., Nvidia, Intel, Microsoft, IBM, and Google) and by academic groups.

With the advent of custom accelerators, the organization of hardware research is changing profoundly: hardware researchers and engineers now need to become experts in the algorithmic field their accelerators target.

In a rapidly evolving domain like machine learning, a key challenge for hardware researchers and engineers is reconciling the long timeline of hardware design with fast algorithmic evolution: understanding and anticipating future trends in machine learning, and later reflecting them in their designs.

This is the first goal of this workshop: to help steer machine-learning accelerator designs toward the most important and foreseeable evolutions in machine-learning techniques, and to help hardware accelerator designers strike the delicate balance between efficiency and flexibility.

The second goal of the workshop stems from the observation that progress on machine-learning accelerators will plateau if hardware researchers and engineers passively try to support every algorithmic variation that machine learning explores. Customization has become a major scalability path, and demanding too much generality will hamper the ability of hardware researchers and engineers to scale up the efficiency of their accelerators. Our goal is therefore also to kick off a two-way conversation between the hardware and machine-learning communities on trends in machine learning and their impact on hardware, and hopefully to spark co-design ideas.

Workshop Organizers

  • Olivier Temam, Google
  • Luis Ceze, Univ. Washington (SIGARCH "visioning" workshops committee member)
  • Joel Emer, MIT and Nvidia (SIGARCH "visioning" workshops committee member)
  • Karin Strauss, Microsoft Research (SIGARCH "visioning" workshops committee member)

Contact: temam@google.com