Workshop on Resource Efficient Deep Learning for Computer Vision

ICCV Workshop, 2023

Mission of the project

With the success of CNNs and, more recently, transformers, the scale of deep learning models has grown exponentially, driven by the rapid development of computational resources. Going forward, it is critical to focus on the practical training and inference efficiency of these models, and to ensure that even the largest models have viable real-world applications. Moreover, the compute efficiency of computer vision models is becoming increasingly crucial for downstream tasks such as segmentation, object tracking, and action recognition, among others.

Several research directions aim to make deep learning for computer vision more efficient, reducing the required computational memory or the associated training and inference time. RCV aims to bring together researchers and industry practitioners who work on building efficient computer vision models with deep learning. This workshop will provide a conducive environment for deep learning practitioners to connect, learn, and collaborate.

A unique aspect of this workshop is that it will also serve as a platform to discuss research efforts toward budget-aware model training and inference. Most existing research focuses on making deep learning methods efficient in general; however, the resources available on real-world AI devices can vary drastically, and a method deemed efficient for one choice of resource budget might be completely inefficient for another. Further, recent workshops on efficient deep learning have given minimal attention to this aspect. In this workshop, we will also focus on methods for budget-aware model training and inference that make maximal use of the available resources. To push research in this direction, we are organizing two challenges on resource-efficient model training and inference, in which participants will be required to optimize the training process under a computational memory constraint and the inference process under a latency constraint.

Note that the workshop will also cover more general topics from the domain of efficient deep learning for computer vision that have not been outlined above. In summary, the topics covered in this workshop are as follows.

Designing efficient neural architectures

Lightweight and efficient architectures (CNNs, Transformers, etc.), resource-aware neural architecture search, and efficient architecture search strategies for downstream computer vision applications.

Compression of existing models

Structured and unstructured pruning, quantization and binarization, efficient training and inference of models under different resource constraints (number of accelerators, device memory, wall-clock time, etc.), extreme compression of models and stability analysis, and transferability and robustness of compressed models.

Deep learning on very large images

Novel architectures for processing and training on very large images, processing large images with deep learning under resource constraints, and applications to remote sensing, geophysics, earth sciences, etc.

Efficient fine-tuning of large models

Efficient transfer learning on large-scale vision and language models, fine-tuning with adapters, etc.

Efficient processing of videos in computer vision

Benchmarking datasets for model efficiency

Sponsors & Partners