Paper Submission Deadline: Monday, March 3, 2025
Notification of Acceptance: Friday, March 28, 2025
Camera-Ready Submission: Friday, May 16, 2025
Workshop Date: June 13, 2025
All deadlines are at 23:59 AoE (Anywhere on Earth) time.
Submissions must follow the ISC format.
Conference Proceedings
The post-conference Workshop Proceedings will be published in Springer’s Lecture Notes in Computer Science (LNCS) series.
Upload your paper in PDF format, up to 12 pages (optionally, two extra pages to incorporate reviewers’ comments). Use the LNCS format for your paper. More instructions can be found here: https://www.springer.com/gp/computer-science/lncs/conference-proceedings-guidelines
Optimization strategies for reducing energy consumption in deep learning
Novel architectures and operators for data-intensive scenarios
Distributed and efficient reinforcement learning algorithms
Large-scale pre-training techniques for real-world applications
Distributed training approaches and architectures
Utilization of HPC and massively parallel architectures for deep learning
Frameworks and optimization algorithms for training deep networks
Model pruning, gradient compression, and quantization techniques for reducing computational complexity
Methods to reduce memory and data transmission footprints
Differentiable metrics for estimating computational costs, energy consumption, and power usage of models
Design, implementation, and application of hardware accelerators for deep learning
Efficient and cost-effective models and methods promoting diversity and inclusivity in deep learning
Acceleration of training and inference for large language models (e.g., GPT) and other generative models
Efficient deep learning for edge and embedded devices
Green AI: Sustainable practices and carbon footprint reduction in AI research and deployment
Scalable and efficient multi-modal learning architectures
Optimization techniques for transformer-based models in various domains
Efficient federated learning algorithms and architectures
Computational aspects of AI safety and robustness
Resource-efficient few-shot and zero-shot learning techniques
The Microsoft CMT service will be used to manage the peer-review process for this workshop. This service is provided free of charge by Microsoft, which bears all expenses, including the costs of Azure cloud services as well as software development and support.
We are pleased to announce that the winner of the Best Paper award is ‘Direct Feedback Alignment for Recurrent Neural Networks’.
Congratulations to Andrea Cossu and co-authors on the outstanding contribution and impact of their research!