5th International Workshop on
Computational Aspects of Deep Learning (CADL)
Over the past decade, Deep Learning (DL) has revolutionized numerous research fields, transforming AI into a computational science where massive models are trained on large-scale infrastructures. This paradigm shift has accelerated scientific discovery and improved results across various applications. However, harnessing such computational power demands meticulous optimization of neural architectures and training procedures to ensure model effectiveness, scalability, and energy efficiency.
The advent of transformer-based language and vision models has further intensified this challenge. The substantial development, training, and inference costs of these architectures have limited their accessibility to a select few major players in the research arena.
Now in its fifth edition, the Computational Aspects of Deep Learning (CADL) workshop aims to address these critical challenges. CADL brings together experts from Deep Learning and High-Performance Computing (HPC) backgrounds to:
Discuss computational challenges in AI
Exchange innovative ideas
Identify solutions for advancing AI in a computationally efficient and energy-conscious manner
The workshop aligns closely with the ISC High Performance conference, focusing on the intersection of AI and high-performance computing.
Evolution and Focus
Since its inception, CADL has continuously adapted to the rapidly evolving AI landscape, with an increasing emphasis on scalability and energy efficiency. This edition builds upon previous successes, incorporating emerging trends and addressing new challenges in the field.
Key Areas of Interest
The workshop welcomes submissions across a wide range of applications, including:
Applied DL in compute-limited environments (e.g., embedded or automotive systems)
Novel architectures for data-intensive scenarios
Energy and power reduction in DL
Efficient training frameworks and algorithms
Distributed learning approaches (reinforcement learning, imitation learning, and distributed training)
Large-scale pre-training techniques for real-world applications
HPC and massively parallel architectures for DL
Model optimization (pruning, compression, quantization)
Memory and data transmission efficiency
Differentiable metrics for computational and energy cost estimation
Hardware accelerators for DL
Efficient data handling in training
Integration of vision and language models
The insights and techniques shared at CADL aim to benefit both the scientific community and industry practitioners. By focusing on optimizing DL models and infrastructures, the workshop contributes to the development of more efficient and accessible AI systems across various domains.
We invite full paper submissions, which will be presented as talks or posters during the workshop. Outstanding contributions may be selected for oral presentations. Detailed submission guidelines and deadlines are available in the designated section of this website.
CADL offers an exceptional platform for networking and collaboration among researchers, practitioners, and industry professionals. Attendees will have ample opportunities to engage in discussions, share experiences, and forge new partnerships in the dynamic field of computational deep learning.
Join us at CADL to be part of shaping the future of efficient and accessible AI computing!