The Fourth Workshop on:
Computational Aspects of Deep Learning (CADL)
Introduction
Over the past decade, Deep Learning (DL) has enabled remarkable advances in research fields such as Computer Vision (CV), Natural Language Processing, and Pattern Recognition. This shift has turned AI in general (and CV in particular) into a computational science in which massive models are trained on large-scale infrastructures, accelerating scientific discovery and improving results in applications such as image and video classification, segmentation, and more. However, harnessing such computational power requires careful design and optimization of neural architectures and their training procedures to achieve model effectiveness, applicability at scale, and reasonable energy consumption.
This aspect has become even more critical with the advent of transformer-based language and vision models, to the point that the development, training, and inference costs of such architectures make them accessible only to a few big actors in the research space.
The scope of our well-established workshop on Computational Aspects of Deep Learning (CADL), now at its fourth edition, is to bring together DL and CV experts with different backgrounds and research goals to discuss challenges, exchange ideas, and identify solutions that advance the DL/CV field in a computationally efficient and energy-saving way, thereby also improving inclusiveness. The workshop closely relates to the ECCV conference through this shared focus on advancing DL and CV efficiently, with reduced energy consumption and broader accessibility.
Therefore, our workshop welcomes submissions on a wide range of topics and applications in the AI/DL/CV areas:
Applied DL in compute limited environments (e.g. embedded or automotive)
Novel architectures and operators for data-intensive scenarios
Energy or power reduction in DL
Training frameworks and efficient algorithms
Distributed, efficient reinforcement and imitation learning algorithms
Large-scale pre-training techniques for real-world applications
Distributed training approaches and architectures
HPC and massively parallel architectures for DL
Model pruning, gradient compression, and quantization for efficient inference
Methods to reduce the memory/data transmission footprint
Differentiable metrics to estimate computational, energy or power costs
Hardware accelerators for DL
Efficient data storage and loading in training
Efficient integration of vision and language models
DL for CV on edge devices