Deep neural networks (DNNs) used for analyzing images, videos, signals, and text demand large amounts of memory and intensive computing power. The highly successful GPT-4 model reportedly contains more than a trillion parameters. Such models, although extremely powerful, have very limited usability in real-life applications such as Industrial IoT, self-driving automobiles, and algorithmic screening for health conditions, which are intended to be deployed on constrained mobile or edge devices. The need to run large models on resource-constrained edge devices has generated significant research interest in DNN model compression. Traditionally, data compression (image/video/audio) has been championed by signal processing researchers, and many of these techniques are now being leveraged for compressing DNNs. However, most of this work is published primarily at machine learning conference venues. Given the contributions of the signal processing community to the compression domain, a premier signal processing venue such as IEEE ICASSP would be a highly appropriate home for such research. To the best of our knowledge, there has been little effort to organize such a workshop at a signal processing venue.
This workshop is a satellite workshop of IEEE ICASSP 2024 (https://2024.ieeeicassp.org/).