A Decade of Sparse Training:
Why Do We Still Stick to Dense Training?
AAAI 2026 tutorial TH01, Tuesday, 20 January, 8:30-12:30
Room: Peridot 202
Abstract:
This tutorial targets researchers, practitioners, and advanced students in machine learning who seek to reduce the computational and energy costs of training large neural networks without sacrificing performance. Participants will learn the theory, algorithms, and system-level aspects of dynamic sparse training (DST) across supervised learning, reinforcement learning, and generative Artificial Intelligence (AI), including truly sparse implementations. The tutorial combines algorithmic foundations with practical demonstrations using open-source code and commodity hardware, offering an end-to-end perspective that connects cutting-edge research to executable systems. By the end, attendees will understand DST’s performance-efficiency trade-offs, gain hands-on experience, and join a dedicated Slack community to continue discussions, share results, and collaborate on advancing DST toward sustainable, Green AI.
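For attendees new to DST, the core loop fits in a few lines. The sketch below is illustrative only (not the tutorial's demo code): it shows one SET-style prune-and-regrow step in PyTorch, where the smallest-magnitude active weights are dropped and an equal number of connections are regrown at random inactive positions, so the layer's sparsity level stays constant throughout training. The function name dst_update and all hyperparameters are hypothetical.

    import torch

    def dst_update(weight, mask, prune_fraction=0.3):
        # One SET-style prune-and-regrow step (illustrative sketch).
        # Prunes the smallest-magnitude active weights, then regrows the
        # same number of connections at random inactive positions, so the
        # overall sparsity level stays constant.
        active = mask.bool()
        n_prune = int(prune_fraction * int(active.sum()))

        # Prune: drop the n_prune active weights with smallest magnitude.
        magnitudes = weight.abs().masked_fill(~active, float("inf"))
        prune_idx = torch.topk(magnitudes.view(-1), n_prune, largest=False).indices
        mask.view(-1)[prune_idx] = 0
        weight.view(-1)[prune_idx] = 0.0

        # Regrow: activate n_prune randomly chosen inactive positions;
        # new connections start from zero.
        inactive_idx = (mask.view(-1) == 0).nonzero(as_tuple=True)[0]
        grow_idx = inactive_idx[torch.randperm(len(inactive_idx))[:n_prune]]
        mask.view(-1)[grow_idx] = 1

        return weight, mask

    # Example: a 100x100 layer kept at 90% sparsity between training epochs.
    mask = (torch.rand(100, 100) < 0.1).float()
    weight = torch.randn(100, 100) * mask
    weight, mask = dst_update(weight, mask)

In actual DST training, an update of this kind runs periodically between standard optimizer steps, and variants differ mainly in the regrowth criterion (e.g., random in SET, gradient-based in RigL). A truly sparse implementation, one of the tutorial's topics, would store the weights in a sparse data structure rather than dense masked tensors.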
Organizers:
Dr. Elena Mocanu, Assistant Professor, University of Twente, The Netherlands
Jafar Badour, University of Twente, The Netherlands
Dr. Qiao Xiao, Postdoctoral Researcher, Eindhoven University of Technology, The Netherlands
Boqian Wu, University of Twente, The Netherlands; University of Luxembourg, Luxembourg
Dr. Decebal Constantin Mocanu, Associate Professor, University of Luxembourg, Luxembourg
Slides: