Efficient Deep Learning (DL) models are increasingly recognized as one of the keys to successful Artificial Intelligence (AI) applications. Breakthroughs in AI, including Large Language Models (LLMs), have been driven largely by massive datasets and computationally intensive architectures. However, their high energy consumption raises concerns about sustainability, scalability, and accessibility. Efficient DL approaches, including efficient LLMs, randomized and semi-randomized neural networks, deep reservoir computing, neuromorphic hardware, knowledge distillation, weight quantization, model compression, and hardware acceleration, offer promising ways to lower computational and energy costs while maintaining effective performance. These methodologies enable efficient and robust AI systems across a wide range of applications, including signal analysis, audio-video processing, industrial process modeling, control, and automation. This workshop aims to gather contributions advancing theory, methodologies, and applications in efficient DL, highlighting computational efficiency, real-time performance, adaptability, and scalability in modern AI systems.
Topics of Interest
Potential topics of interest for the workshop include, but are not limited to, the following:
- Parameter Quantization and Pruning
- Model Compression
- Knowledge Distillation
- Hardware Acceleration
- Efficient Weight Representations for Neural Networks
- Photonic Neural Networks
- Real-time Processing
- Reservoir Computing
- Semi-randomized Neural Networks
- Efficient Large Language Models
- Efficient Graph Machine Learning
- Neuromorphic Hardware for Deep Learning
- Deep Learning on Programmable Logic
- AI on Edge Computing
- Energy-Efficient Machine Learning
- Efficient Reinforcement Learning
- AI applications (e.g., time series analysis, multimedia processing, industrial and automotive applications)
Workshop Paper Submission Deadline: 8th June 2026
See IMPORTANT DATES for more info.