Schedule

June 8, 2023
Room 240

Eastern Time (GMT-4)

Schedule at a glance

Start Time - Topic - Speaker

9:50am - Opening Notes - Vijay Janapa Reddi and Jason Yik

10:00am - Keynote - Mike Davies (Intel)

10:50am - Morning Break

10:55am - Light-AI Interaction: Bridging Photonics and AI with Cross-Layer Hardware-Software Co-Design - Jiaqi Gu (UT Austin)

11:10am - DOTA: A Dynamically-Operated Photonic Tensor Core for Energy-Efficient Transformer Accelerator - Hanqing Zhu (UT Austin)

11:25am - The Intel Neuromorphic Deep Noise Suppression Challenge - Jonathan Timcheck (Intel)

12:00pm - Leveraging Residue Number System for Designing High-Precision Analog Deep Neural Network Accelerators - Cansu Demirkiran (Boston University)

12:15pm - Towards Cognitive AI System: a Survey and Prospective on Neuro-Symbolic AI - Zishen Wan (Georgia Tech)

12:30pm - Lunch Break

2:00pm - Accelerating AI Through Photonic Computing and Communication - Jessie Rosenberg (Lightmatter)

2:35pm - Neural Circuit Theory: Bridging the Gap Between Neuroscience and Deep Learning - Ben Scellier (Rain Neuromorphics)

3:10pm - Quantum (and) AI: The Next Generation of Computing - Stefan Leichenauer (SandboxAQ)

3:45pm - Afternoon Break

3:50pm - Cross-Layer Optimization for AI with Algorithm-Hardware Co-design - Helen Li (Duke)

4:20pm - Speaker Panel and Open Discussion

4:55pm - Closing Notes

9:50am - 10:00am

Opening Notes

Vijay Janapa Reddi and Jason Yik

10:00am - 10:50am

Keynote: The Neuromorphic Path to Faster, More Efficient, and More Intelligent Computing

Mike Davies (Intel)
Mike Davies is Director of Intel’s Neuromorphic Computing Lab. Since 2014 he has been researching neuromorphic architectures, algorithms, software, and systems, and has fabricated several neuromorphic chip prototypes to date, including the Loihi series. In the 2000s, as a founding employee of Fulcrum Microsystems and director of its silicon engineering, Mike pioneered high-performance asynchronous design methods and led the development of several generations of industry-leading Ethernet switches. Before that, he received B.S. and M.S. degrees from Caltech.

10:50am - 10:55am

Morning Break

10:55am - 11:10am

Light-AI Interaction: Bridging Photonics and AI with Cross-Layer Hardware-Software Co-Design

Jiaqi Gu (University of Texas at Austin)

11:10am - 11:25am

DOTA: A Dynamically-Operated Photonic Tensor Core for Energy-Efficient Transformer Accelerator

Hanqing Zhu (University of Texas at Austin)

11:25am - 12:00pm

The Intel Neuromorphic Deep Noise Suppression Challenge

Jonathan Timcheck (Intel)

A critical enabler for progress in neuromorphic computing research is the ability to transparently evaluate different neuromorphic solutions on important tasks and to compare them to state-of-the-art conventional solutions. The Intel Neuromorphic Deep Noise Suppression Challenge (Intel N-DNS Challenge), inspired by the Microsoft DNS Challenge, tackles a ubiquitous and commercially relevant task: real-time audio denoising. Audio denoising is likely to reap the benefits of neuromorphic computing due to its low-bandwidth, temporal nature and its relevance for low-power devices. The Intel N-DNS Challenge consists of two tracks: a simulation-based algorithmic track to encourage algorithmic innovation, and a neuromorphic hardware (Loihi 2) track to rigorously evaluate solutions. For both tracks, we specify an evaluation methodology based on energy, latency, and resource consumption in addition to output audio quality. We make the Intel N-DNS Challenge dataset scripts and evaluation code freely accessible, encourage community participation with monetary prizes, and release a neuromorphic baseline solution which shows promising audio quality, high power efficiency, and low resource consumption when compared to Microsoft NsNet2 and a proprietary Intel denoising model used in production. We hope the Intel N-DNS Challenge will hasten innovation in neuromorphic algorithms research, especially in the area of training tools and methods for real-time signal processing. We expect the winners of the challenge will demonstrate that for problems like audio denoising, significant gains in power and resources can be realized on neuromorphic devices available today compared to conventional state-of-the-art solutions. 
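
For a sense of the audio-quality side of this evaluation, the sketch below computes scale-invariant SNR (SI-SNR), a standard objective metric for denoising quality. This is a generic NumPy illustration, not the challenge's official scoring code, which additionally accounts for energy, latency, and resource consumption on Loihi 2.

```python
# Illustrative only: scale-invariant SNR (SI-SNR), a common objective metric
# for audio denoising quality. The actual Intel N-DNS scoring also accounts
# for power, latency, and resource consumption, which are not modeled here.
import numpy as np

def si_snr(estimate: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """Return SI-SNR in dB between an enhanced signal and the clean target."""
    # Remove DC offsets so the metric is invariant to constant bias.
    estimate = estimate - estimate.mean()
    target = target - target.mean()
    # Project the estimate onto the target to isolate the "clean" component.
    s_target = np.dot(estimate, target) / (np.dot(target, target) + eps) * target
    e_noise = estimate - s_target
    return 10.0 * np.log10((np.dot(s_target, s_target) + eps) /
                           (np.dot(e_noise, e_noise) + eps))

# Example: a noisy copy of a clean 440 Hz tone sampled at 16 kHz.
t = np.linspace(0, 1, 16000)
clean = np.sin(2 * np.pi * 440 * t)
noisy = clean + 0.1 * np.random.randn(t.size)
print(f"SI-SNR: {si_snr(noisy, clean):.1f} dB")
```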

12:00pm - 12:15pm

Leveraging Residue Number System for Designing High-Precision Analog Deep Neural Network Accelerators

Cansu Demirkiran (Boston University)

12:15pm - 12:30pm

Towards Cognitive AI System: a Survey and Prospective on Neuro-Symbolic AI

Zishen Wan (Georgia Institute of Technology)

12:30pm - 2:00pm

Lunch Break

2:00pm - 2:35pm

Accelerating AI Through Photonic Computing and Communication

Jessie Rosenberg (Lightmatter)

As AI workloads continue to grow, two major limiting factors are power consumption and interconnect bandwidth. By leveraging the high data throughput and scalability of photonic systems, silicon photonics presents an opportunity to break through performance bottlenecks in both of these areas. Photonic matrix multiplication systems can perform operations ~10x faster than the typical clock speed of electronic systems, while photonic interconnects improve memory bandwidth and enable larger and more flexible network topologies. Integrating photonics with CMOS offers scalability and cost advantages, and allows photonic components to fit seamlessly into existing compute architectures and infrastructure. We will present recent developments in silicon photonics for AI workloads, discuss design and manufacturing challenges in scaling from the device to the full system level, and contrast photonic approaches with other methods of analog and digital compute.
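
As a rough sketch of how a weight matrix can be mapped onto photonic hardware, many photonic tensor cores described in the literature factor the matrix with an SVD, realizing the two unitaries as Mach-Zehnder interferometer meshes and the diagonal as per-channel gain or attenuation. The NumPy example below is a generic illustration of that idea, not a description of Lightmatter's architecture.

```python
# Minimal sketch (assumption: a generic SVD-based photonic tensor core, not any
# vendor-specific design). A weight matrix W is factored as W = U @ diag(s) @ Vh;
# the unitaries U and Vh map to MZI meshes and the diagonal to per-channel gain,
# so y = W x becomes three optical stages applied to the input amplitudes.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))      # weight matrix to "program" optically
x = rng.standard_normal(4)           # input vector encoded in optical amplitudes

U, s, Vh = np.linalg.svd(W)          # decomposition into two unitaries + diagonal
y_photonic = U @ (s * (Vh @ x))      # stage-by-stage "optical" evaluation
y_digital = W @ x                    # reference electronic result

assert np.allclose(y_photonic, y_digital)
print(y_photonic)
```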

2:35pm - 3:10pm

Neural Circuit Theory: Bridging the Gap Between Neuroscience and Deep Learning

Ben Scellier (Rain Neuromorphics)

We introduce Neural Circuit Theory (NCT), a mathematical framework that bridges neuroscience, deep learning, and electrical circuit theory. We show how NCT can describe biological neural circuits and how it leads to physical formulations of bio-plausible algorithms for credit assignment, such as Equilibrium Propagation and Difference Target Propagation. We show how these formulations can yield quadratic speedups in the inference and training of energy-based models, as well as estimates of curvature information in feedforward networks. Finally, we discuss the geometric structure embedded in NCT, which naturally captures information about the topology of the network.
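
As a minimal, generic illustration of Equilibrium Propagation (one of the credit-assignment algorithms mentioned above), the sketch below estimates a weight gradient from the free and nudged equilibria of a toy quadratic energy model and compares it to the analytic gradient. The energy function and network here are illustrative assumptions, not the formulation presented in the talk.

```python
# Minimal sketch of Equilibrium Propagation (EP) on a toy quadratic energy model.
# Assumptions: a linear "network" with energy E(s) = 0.5 s^T A s - s^T B x, where A
# is a fixed symmetric positive-definite coupling matrix and B are learnable weights.
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 2
A = 2.0 * np.eye(n) + 0.1 * np.ones((n, n))   # symmetric positive-definite couplings
B = rng.standard_normal((n, m))                # learnable input weights
x = rng.standard_normal(m)                     # clamped input
y = rng.standard_normal(n)                     # target for the output units

def equilibrium(beta: float) -> np.ndarray:
    """Fixed point of the (possibly nudged) energy: (A + beta*I) s = B x + beta*y.

    A physical system would reach this state by relaxation dynamics; here we
    solve the linear system directly as a stand-in.
    """
    return np.linalg.solve(A + beta * np.eye(n), B @ x + beta * y)

beta = 1e-3
s_free = equilibrium(0.0)      # free phase: no nudging toward the target
s_nudged = equilibrium(beta)   # nudged phase: output weakly pulled toward y

# EP gradient estimate: (1/beta) * [dE/dB at nudged state - dE/dB at free state],
# where dE/dB = -s x^T for this energy.
grad_ep = (s_free - s_nudged)[:, None] @ x[None, :] / beta

# Analytic gradient of the loss 0.5 * ||s_free - y||^2 w.r.t. B, for comparison.
grad_true = np.linalg.solve(A, s_free - y)[:, None] @ x[None, :]

# Small; the discrepancy shrinks as beta -> 0.
print("max abs difference:", np.abs(grad_ep - grad_true).max())
```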

3:10pm - 3:45pm

Quantum (and) AI: The Next Generation of Computing

Stefan Leichenauer (SandboxAQ)

Large-scale quantum computers are still many years away, but they are poised for massive impact in a number of areas. Quantum computers will be used as part of general, hybrid computing platforms in the cloud, and will provide a key ingredient to push through barriers impossible to breach without them. They are not magical devices: classical computing, powered by AI, will still handle most of the workload. It is possible that quantum computing will also unlock new advances in AI itself, though this is not something to take for granted. In this talk I will discuss all of these issues, including steps we can take today to prepare for the quantum future.

3:45pm - 3:50pm

Afternoon Break

3:50pm - 4:20pm

Cross-Layer Optimization for AI with Algorithm-Hardware Co-design

Helen Li (Duke University)

The advancement of Artificial Intelligence (AI) and its swift deployment on resource-constrained systems rely on refined algorithm-hardware co-design. In this talk, we first present our approach to crafting efficient, lightweight AI algorithms via model compression and neural architecture search across broad AI applications, such as image recognition, 2D/3D semantic segmentation, and recommender systems. We then combine efficient cross-layer optimization and distributed learning to build fast, scalable AI algorithms with specialized compute kernels and hardware architectures. Finally, we demonstrate improvements in the performance-efficiency trade-off on additional real-world applications, such as electronic design automation and adversarial machine learning. Through these explorations, we present our vision for the future of full-stack optimization of AI solutions.
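
As a generic illustration of one ingredient mentioned above (model compression), the sketch below applies global magnitude pruning to a toy set of weight matrices. It is a textbook example in NumPy, not the specific co-design methods covered in this talk.

```python
# Generic illustration of one model-compression technique (global magnitude
# pruning), not the co-design methods presented in this talk.
import numpy as np

def magnitude_prune(weights: list[np.ndarray], sparsity: float) -> list[np.ndarray]:
    """Zero out the smallest-magnitude fraction of weights across all layers."""
    flat = np.concatenate([w.ravel() for w in weights])
    k = int(sparsity * flat.size)
    # Global threshold: the k-th smallest absolute weight value.
    threshold = np.partition(np.abs(flat), k)[k] if k > 0 else 0.0
    return [np.where(np.abs(w) >= threshold, w, 0.0) for w in weights]

rng = np.random.default_rng(0)
layers = [rng.standard_normal((64, 32)), rng.standard_normal((32, 10))]
pruned = magnitude_prune(layers, sparsity=0.8)
kept = sum((w != 0).sum() for w in pruned) / sum(w.size for w in pruned)
print(f"fraction of weights kept: {kept:.2f}")   # ~0.20
```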

4:20pm - 4:55pm

Speaker Panel and Open Discussion

Join our invited speakers in open discussion and debate on next-generation AI computing approaches!

4:55pm - 5:00pm

Closing Notes