The 1st Workshop on Parallel and Distributed Machine Learning 2019 (PDML19)

Kyoto, Japan, on August 5, 2019

Held in conjunction with the 48th International Conference on Parallel Processing (ICPP 2019)

Workshop Theme

Parallel and distributed computing has had a tremendous impact on recent advances in data-driven machine learning, most notably deep learning. Accelerating ML workloads on HPC systems opens opportunities for more sophisticated machine learning, yet significant challenges remain because available computational power is still limited relative to the enormous volume of datasets. This workshop brings together researchers in machine learning and high-performance computing to share their experiences, new ideas, and the latest trends in leveraging HPC for ML, ML for HPC, and ML applications in HPC.

Organizing Committee

    • Naoya Maruyama, Lawrence Livermore National Laboratory
    • Rio Yokota, Tokyo Institute of Technology
    • Kento Sato, RIKEN Center for Computational Science

Technical Program Committee

    • Tal Ben-Nun, ETH Zurich
    • Keisuke Fukuda, Preferred Networks
    • Masaaki Kondo, University of Tokyo/RIKEN Center for Computational Science
    • Naoya Maruyama, Lawrence Livermore National Laboratory
    • Akira Naruse, NVIDIA
    • Kento Sato, RIKEN Center for Computational Science
    • Koichi Shirahata, Fujitsu Laboratories
    • Mohamed Wahib, National Institute of Advanced Industrial Science and Technology
    • Rio Yokota, Tokyo Institute of Technology