ICML 2019 Workshop

Joint Workshop on On-Device Machine Learning & Compact Deep Neural Network Representations

(ODML-CDNNR)

Description

This joint workshop aims to bring together researchers, educators, and practitioners interested in techniques for, and applications of, on-device machine learning and compact, efficient neural network representations. One aim of the workshop is to establish a close connection between researchers in the machine learning community and engineers in industry, benefiting both academic researchers and industrial practitioners. The other aim is the evaluation and comparability of resource-efficient machine learning methods and compact, efficient network representations, and their relation to particular target platforms (some of which may be highly optimized for neural network inference), as the research community has yet to establish common evaluation procedures and metrics.

The workshop also promotes reproducibility and comparability of methods for compact and efficient neural network representations and on-device machine learning; contributors are therefore encouraged to make their code available.

Areas/Topics

Topics of interest include, but are not limited to:

  • Model compression for efficient inference with deep networks and other ML models
  • Learning efficient deep neural networks under memory and compute constraints for on-device applications
  • Low-precision training/inference & acceleration of deep neural networks on mobile devices
  • Sparsification, binarization, quantization, pruning, thresholding and coding of neural networks
  • Deep neural network computation for low power consumption applications
  • Efficient on-device ML for real-time applications in computer vision, language understanding, speech processing, mobile health and automotive (e.g., computer vision for self-driving cars, video and image compression), multimodal learning
  • Software libraries (including open-source) optimized for efficient inference and on-device ML
  • Open datasets and test environments for benchmarking inference with efficient DNN representations
  • Metrics for evaluating the performance of efficient DNN representations
  • Methods for comparing efficient DNN inference across platforms and tasks

Submission Instructions

An extended abstract (up to 3 pages in ICML style, see https://icml.cc/Conferences/2019/StyleAuthorInstructions) in PDF format should be submitted for evaluation of the originality and quality of the work. The review is double-blind, so the abstract must be anonymized. References may extend beyond the 3-page limit, and parallel submissions to journals or conferences (e.g., AAAI or ICLR) are permitted.


Submissions will be accepted as contributed talks (oral) or poster presentations. Extended abstracts should be submitted through EasyChair (https://easychair.org/my/conference.cgi?conf=odmlcdnnr2019). All accepted abstracts will be posted on the workshop website and archived.


Selection policy: all submitted abstracts will be evaluated based on their novelty, soundness, and impact. At the workshop, we particularly encourage discussion of new ideas.

Important Dates

  • Submission: Apr. 14, 2019 (extended from Apr. 7, 2019)
  • Notification: Apr. 24, 2019
  • Workshop: Jun. 14 or 15, 2019

* All deadlines are 23:59, Anywhere on Earth (UTC-12).

Organizers

  • Sujith Ravi, Google Research
  • Zornitsa Kozareva, Google
  • Lixin Fan, JD.com
  • Max Welling, Qualcomm & University of Amsterdam
  • Yurong Chen, Intel Labs China
  • Werner Bailer, Joanneum Research
  • Brian Kulis, Boston University
  • Haoji (Roland) Hu, Zhejiang University
  • Jonathan Dekhtiar, Nvidia
  • Yingyan Lin, Rice University
  • Diana Marculescu, Carnegie Mellon University