Advances in Neuromorphic Computing: Models, Hardware, and Optimization
ICASSP 2026 Tutorial — April 5, 2026
About This Tutorial
Modern AI is transforming the world — but at a steep energy cost. This tutorial explores neuromorphic computing, a brain-inspired computing paradigm aimed at dramatically improving the intelligence-to-watt ratio of AI systems.
Spanning three hours, this tutorial brings together experts from Northeastern University and Cornell University to cover the full neuromorphic computing stack: from computational models and specialized hardware to training and optimization algorithms.
What You Will Learn
How spiking neural networks exploit dynamic sparsity and event-driven communication for energy efficiency
How neuromorphic computing units can implement intra-token and inter-token processing
How to design neuromorphic state-space models and neuromorphic transformers
The principles of in-memory computing and how co-locating memory and computation overcomes the von Neumann bottleneck
Local learning rules as biologically plausible, memory-efficient alternatives to backpropagation
The latest advances in neuromorphic hardware platforms
Practical optimization and training techniques for neuromorphic systems
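As a concrete preview of the first learning objective, the sketch below simulates a single leaky integrate-and-fire (LIF) neuron, the basic unit of spiking neural networks. It illustrates dynamic sparsity and event-driven communication: the neuron emits a spike only when its membrane potential crosses a threshold, so most time steps produce no output at all. The time constant, threshold, and input statistics are illustrative assumptions, not values from the tutorial.

```python
import numpy as np

def lif_neuron(inputs, tau=0.9, threshold=1.0):
    """Simulate one LIF neuron over a sequence of input currents.

    Returns a binary spike train the same length as `inputs`.
    Parameter values here are illustrative assumptions.
    """
    v = 0.0          # membrane potential
    spikes = []
    for x in inputs:
        v = tau * v + x          # leaky integration of the input current
        if v >= threshold:       # threshold crossing -> emit a spike event
            spikes.append(1)
            v = 0.0              # hard reset after spiking
        else:
            spikes.append(0)     # no event: nothing needs to be communicated
    return np.array(spikes)

rng = np.random.default_rng(0)
currents = rng.uniform(0.0, 0.3, size=100)   # weak random input drive
spike_train = lif_neuron(currents)
print("spike rate:", spike_train.mean())     # only a small fraction of steps spike
```

Because downstream computation is triggered only by the rare spike events, energy scales with activity rather than with the number of neurons, which is the efficiency argument behind spiking architectures.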
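The in-memory computing objective can likewise be previewed with a toy resistive-crossbar model: weights are stored as device conductances, inputs are applied as voltages, and Ohm's and Kirchhoff's laws produce the matrix-vector product as summed currents in a single analog step, with no weight movement between memory and a separate processor. The differential (positive/negative) conductance pairing and the numeric ranges are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

W = rng.uniform(-1, 1, size=(3, 4))      # target weight matrix
g_max = 1e-4                             # assumed max device conductance (S)

# Signed weights mapped onto a differential pair of non-negative conductances
G_pos = np.clip(W, 0, None) * g_max      # positive part of each weight
G_neg = np.clip(-W, 0, None) * g_max     # negative part of each weight

v = rng.uniform(0.0, 0.2, size=4)        # input voltages on the columns

# Analog MVM: per-row currents sum automatically on the shared output wire,
# so the multiply-accumulate happens where the weights are stored
i_out = G_pos @ v - G_neg @ v

digital = (W * g_max) @ v                # ideal digital reference result
print("max |analog - digital| current:", np.abs(i_out - digital).max())
```

In a real device the currents would also carry programming and read noise; this noiseless sketch only shows why co-locating storage and computation removes the von Neumann data-shuttling step.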
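For the local-learning objective, a minimal example of a biologically plausible update is Oja's variant of the Hebbian rule: each weight changes using only its own pre- and post-synaptic activity, so no global error signal or stored forward activations are required, unlike backpropagation. The learning rate, data distribution, and number of steps are illustrative assumptions.

```python
import numpy as np

def hebbian_step(w, x, lr=0.01):
    """One Oja's-rule update: purely local, with built-in weight normalization."""
    y = w @ x                        # post-synaptic activity (linear neuron)
    return w + lr * y * (x - y * w)  # Hebbian term y*x plus local decay -y^2*w

rng = np.random.default_rng(1)
w = rng.normal(size=4)
for _ in range(2000):
    # Anisotropic data: the first input dimension carries the most variance
    x = rng.normal(size=4) * np.array([2.0, 1.0, 0.5, 0.25])
    w = hebbian_step(w, x)

print("weight norm:", np.linalg.norm(w))  # Oja's rule keeps the norm bounded
```

Because each update reads only locally available quantities, rules of this kind need no backward pass and far less memory, which is the efficiency trade-off the tutorial contrasts with backpropagation.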
Tutorial Structure and Slides
Part I: Models (Osvaldo Simeone) - 1 hour
Part II: Hardware (Bipin Rajendran) - 1 hour
Part III: Optimization and Training (Tianyi Chen) - 1 hour
Presenters
Osvaldo Simeone and Bipin Rajendran — Northeastern University London
Tianyi Chen — Cornell University