The videos and slides are online! (See the links next to the speakers' names in the programme.)
Thanks for your patience, and for your wonderful participation in the workshop!
Machine learning has become a pervasive tool throughout industry. This trend, combined with the plateauing of Moore's Law, has made hardware accelerators for machine learning one of the most promising paths forward.
The goal of this workshop is to help the architecture community understand where machine learning is headed, so that researchers and engineers can appropriately plan and design their accelerators. Essentially, we want to kick off a two-way conversation between the machine-learning community and the architecture community.
For that purpose, we have assembled an invitation-only list of prestigious speakers from the machine-learning domain, drawn from both industry and academia.
The TiML workshop will take place on June 25th, 2017, in Toronto, as part of the ISCA 2017 Conference (venue).
Please register via the conference registration page.
Industry and the academic community have now fully embraced machine learning as a major application domain, with many hardware solutions being explored both by companies (e.g., Nvidia, Intel, Microsoft, IBM, and Google) and by academic groups.
With the advent of custom accelerators, the organization of hardware research is profoundly changing: hardware researchers and engineers must essentially become experts in the algorithmic field their accelerators target.
In a rapidly evolving domain like machine learning, one of the key challenges for hardware researchers and engineers is to reconcile the long timeline of hardware design with the fast pace of algorithmic change, by understanding and anticipating future trends in machine learning and reflecting them in their designs.
This is the first goal of this workshop: to help steer machine-learning accelerator designs towards the most important and foreseeable evolutions in machine-learning techniques, and to help hardware accelerator designers achieve the delicate balance between efficiency and flexibility.
The second goal of the workshop stems from the observation that progress on machine-learning accelerators will plateau if hardware researchers and engineers passively try to support every algorithmic variation machine learning explores. Customization has become a major scalability path, and demanding too much generality will hamper the ability of hardware researchers and engineers to scale up the efficiency of their accelerators. So our goal is also to kick off a two-way conversation between the hardware and machine-learning communities on trends in machine learning and their impact on hardware, and hopefully to spark co-design ideas.
Contact: temam@google.com