NeurIPS 2018 On-Device ML Workshop
2nd Workshop on Machine Learning on the Phone and other Consumer Devices (MLPCD 2)
Deep learning, and machine learning more broadly, have changed the computing paradigm. Products today are built with machine intelligence as a central attribute, and consumers are beginning to expect near-human interaction with the appliances they use. However, much of the deep learning revolution has been limited to the cloud, enabled by popular toolkits such as Caffe, TensorFlow, and MXNet, and by specialized hardware such as TPUs. In comparison, mobile devices until recently were simply not fast enough, developer tools were limited, and few use cases required on-device machine learning. That has started to change, with advances in real-time computer vision and spoken language understanding driving real innovation in intelligent mobile applications. Several mobile-optimized neural network libraries were recently announced (CoreML, Caffe2 for mobile, TensorFlow Lite), which aim to dramatically reduce the barrier to entry for mobile machine learning. Innovation and competition at the silicon layer have enabled new possibilities for hardware acceleration, and mobile-optimized versions of several state-of-the-art benchmark models have been open sourced. The widespread availability of connected "smart" appliances for consumers and of IoT platforms for industrial use cases means there is an ever-expanding surface area for mobile intelligence and ambient devices in homes. Taken together, these advances suggest that we are at the cusp of a rapid increase in research interest in on-device machine learning, and in particular in on-device neural computing.
Significant research challenges remain, however. Mobile devices are even more personal than "personal computers" were, and enabling machine learning while preserving user trust requires ongoing advances in research on differential privacy and federated learning. On-device ML must keep model size and power usage low while simultaneously optimizing for accuracy, and several exciting approaches to mobile optimization of neural networks have recently been developed. Lastly, the now-prevalent use of camera and voice as interaction models has fueled exciting research on neural techniques for image and speech/language understanding. This area is highly relevant to multiple topics of interest to NeurIPS, including core topics such as machine learning and efficient inference, interdisciplinary applications involving vision, language, and speech understanding, and emerging areas such as the Internet of Things.
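As a concrete illustration of the federated learning setting mentioned above, here is a minimal sketch of federated averaging, in which a server aggregates model weights contributed by clients without ever seeing their raw training data. The function and variable names are illustrative, not drawn from any specific library.

```python
# Minimal sketch of federated averaging: the server combines per-client
# model weights into a new global model, weighting each client by the
# amount of local data it trained on. Raw data never leaves the clients.
# All names here are hypothetical, for illustration only.

def federated_average(client_weights, client_sizes):
    """Size-weighted average of per-client weight vectors.

    client_weights: list of weight vectors (lists of floats), one per client.
    client_sizes: number of local training examples on each client.
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    averaged = [0.0] * dim
    for weights, size in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            averaged[i] += w * size / total
    return averaged

# Two clients with different amounts of local data; the client with
# three examples pulls the global model toward its weights:
global_weights = federated_average(
    client_weights=[[1.0, 2.0], [3.0, 4.0]],
    client_sizes=[1, 3],
)
# global_weights == [2.5, 3.5]
```

In a real deployment this aggregation step is combined with secure aggregation and differential privacy mechanisms so that individual client updates are also protected, not just the raw data.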
- Model compression for efficient inference with deep networks and other ML models
- Privacy preserving machine learning
- Low-precision training/inference and hardware acceleration of neural computing on mobile devices
- Real-time mobile computer vision
- Language understanding and conversational assistants on mobile devices
- Speech recognition on mobile and smart home devices
- Machine intelligence for mobile gaming
- ML for mobile health and other real-time prediction scenarios
- ML for on-device applications in the automotive industry (e.g., computer vision for self-driving cars)
- Software libraries (including open-source) optimized for on-device ML
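To make the model-compression topic above concrete, the following is a minimal sketch of post-training 8-bit weight quantization using an affine (scale and zero-point) scheme of the general kind used by mobile inference libraries. The function names are hypothetical; real toolkits such as TensorFlow Lite perform this internally.

```python
# Sketch of affine (asymmetric) 8-bit quantization of float weights,
# the kind of post-training compression used to shrink on-device models
# roughly 4x versus float32. Names are illustrative only.

def quantize_uint8(weights):
    """Map floats to uint8 values; returns (quantized, scale, zero_point)."""
    lo, hi = min(weights), max(weights)
    # The representable range must include 0.0 exactly (so that zero
    # padding and ReLU zeros stay exact after quantization).
    lo, hi = min(lo, 0.0), max(hi, 0.0)
    scale = (hi - lo) / 255.0 or 1.0  # fall back if all weights are zero
    zero_point = round(-lo / scale)
    q = [min(255, max(0, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate floats from uint8 values."""
    return [(qi - zero_point) * scale for qi in q]

weights = [-1.0, -0.25, 0.0, 0.5, 1.0]
q, scale, zp = quantize_uint8(weights)
recovered = dequantize(q, scale, zp)
# Each recovered value is within one quantization step (scale) of the
# original, and 0.0 is recovered exactly.
```

The design choice to force the range to include zero, and to round the zero point to an integer, is what lets common values like zero-padding survive quantization without error.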
Please email firstname.lastname@example.org with any questions.