Call for Papers

We invite researchers to submit papers on topics including, but not limited to, the following:

  • Model compression for efficient inference with deep networks and other ML models
  • Learning efficient deep neural networks under memory and compute constraints for on-device applications
  • Low-precision training/inference & acceleration of deep neural networks on mobile devices
  • Sparsification, binarization, quantization, pruning, thresholding and coding of neural networks
  • Deep neural network computation for low-power applications
  • Efficient on-device ML for real-time applications in computer vision, language understanding, speech processing, mobile health, automotive (e.g., computer vision for self-driving cars, video and image compression), and multimodal learning
  • Software libraries (including open-source) optimized for efficient inference and on-device ML
  • Open datasets and test environments for benchmarking inference with efficient DNN representations
  • Metrics for evaluating the performance of efficient DNN representations
  • Methods for comparing efficient DNN inference across platforms and tasks

Submission Instructions

An extended abstract (3 pages in ICML style; see https://icml.cc/Conferences/2019/StyleAuthorInstructions ) should be submitted in PDF format for evaluation of the originality and quality of the work. Reviewing is double-blind, so abstracts must be anonymized. References may extend beyond the 3-page limit, and parallel submissions to journals or conferences (e.g., AAAI or ICLR) are permitted.

Submissions will be accepted as contributed talks (oral) or poster presentations. Extended abstracts should be submitted through EasyChair (https://easychair.org/my/conference.cgi?conf=odmlcdnnr2019). All accepted abstracts will be posted on the workshop website and archived.

Selection policy: all submitted abstracts will be evaluated based on their novelty, soundness, and impact. At the workshop, we encourage discussion of new ideas.