Title: Meta-Optical Encoders for High Resolution Real Time Computer Vision
Abstract: Light’s ability to perform massive linear operations in parallel has recently inspired numerous demonstrations of optics-assisted artificial neural networks (ANNs). However, a clear system-level advantage of optics over purely digital ANNs has not yet been established. While linear operations can indeed be performed very efficiently in optics, the lack of nonlinearity and signal regeneration requires high-power, low-latency signal transduction between optics and electronics. Additionally, substantial power is needed for lasers and photodetectors, which is often neglected when calculating the total energy consumption. Here, instead of mapping traditional digital operations to optics, we co-designed a hybrid optical-digital ANN that operates on incoherent light and is thus amenable to operation under ambient light. Keeping the latency and power constant between a purely digital ANN and a hybrid optical-digital ANN, we identified a low-power/low-latency regime in which an optical encoder provides higher classification accuracy than a purely digital ANN. We estimate that our optical encoder enables ~10 kHz operation of a hybrid ANN with a power of only 23 mW. However, in that regime, the overall classification accuracy is lower than what is achievable with higher power and latency. Our results indicate that optics can be advantageous over purely digital ANNs in applications where the overall performance of the ANN can be relaxed to prioritize lower power and latency.
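The sketch below is a toy illustration of the hybrid pipeline described in the abstract, not the authors' implementation: it assumes the incoherent optical encoder can be modeled as a fixed, non-negative linear operation on image intensities (here, a convolution with a hypothetical point-spread function), followed by a small digital classifier. All shapes and parameters are illustrative assumptions.

```python
# Toy sketch (not the authors' implementation): a hybrid optical-digital ANN in
# which a fixed linear "optical encoder" acts on image intensities before a
# small digital classifier. The PSF-convolution model of the meta-optic is an
# assumption made purely for illustration.
import numpy as np

rng = np.random.default_rng(0)

def optical_encoder(image, psf):
    """Model the incoherent meta-optic as a convolution of intensities with a PSF."""
    H, W = image.shape
    kh, kw = psf.shape
    padded = np.pad(image, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros_like(image)
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * psf)
    return out  # non-negative intensities captured by the sensor

def digital_backend(features, W1, b1, W2, b2):
    """A small digital classifier applied to the downsampled sensor readout."""
    h = np.maximum(features @ W1 + b1, 0.0)   # ReLU layer (digital)
    return h @ W2 + b2                        # class logits

# Hypothetical sizes: 32x32 image, 8x8 PSF, 4x-downsampled readout, 10 classes.
image = rng.random((32, 32))
psf = rng.random((8, 8)); psf /= psf.sum()    # non-negative PSF (incoherent optics)
readout = optical_encoder(image, psf)[::4, ::4].ravel()
W1, b1 = rng.normal(size=(readout.size, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 10)), np.zeros(10)
print(digital_backend(readout, W1, b1, W2, b2).shape)  # (10,)
```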
Biography: Arka Majumdar is a Professor in Electrical and Computer Engineering and Physics at the University of Washington. He received his B.Tech. from IIT-Kharagpur (2007), where he was honored with the President’s Gold Medal. He completed his M.S. (2009) and Ph.D. (2012) in Electrical Engineering at Stanford University. He spent one year at the University of California, Berkeley (2012-13) and then at Intel Labs (2013-14) as a postdoc before joining UW. Prof. Majumdar is the recipient of multiple Young Investigator Awards from the AFOSR (2015), NSF (2019), ONR (2020), and DARPA (2021), the Intel Early Career Faculty Award (2015), the Amazon Catalyst Award (2016), an Alfred P. Sloan Fellowship (2018), the UW College of Engineering Outstanding Junior Faculty Award (2020), the iCANX Young Scientist Award (2021), the IIT-Kharagpur Young Alumni Achiever Award (2022), and the DARPA Director’s Fellowship (2023). He is a co-founder and technical advisor of Tunoptix, a startup commercializing software-defined meta-optics.
Date and Time: April 11, 2025, 5:00 pm - 6:00 pm (U.S. Eastern Time)
Recording: Video
Title: Photonics for Neuromorphic Computing and its Applications
Abstract: Artificial Intelligence (AI) technology is fundamentally reshaping the current information era. In the post-Moore's Law era, traditional digital computing hardware for AI encounters increasing challenges in power consumption and latency. Photonics for neuromorphic computing, harnessing the superior parallelism, interconnect capabilities, and extensive bandwidth of light, emerges as a promising hardware platform for AI computing. This talk will introduce our explorations of advanced photonic platforms, from 2D integrated photonics to 3D metasurfaces, to realize high-speed intelligent signal processing and machine vision applications. This talk will also introduce our efforts to make optical systems a more accurate and reliable platform for neuromorphic computing.
Biography: Chaoran Huang is currently an Assistant Professor at the Chinese University of Hong Kong. She has broad research interests in optical computing, photonic integrated circuits, and optical communications. Her current research focuses on developing novel photonic devices, integrated circuits, and complementary algorithms for high-performance AI computing and information processing. She has published over 50 papers, including in Nature Electronics, Nature Communications, and Optica. She has served as a co-chair and TPC member of many international conferences, such as OFC, CLEO, and ECOC, and as an editorial board member of Communications Engineering in the Nature Portfolio. She was the recipient of the 2019 Rising Stars Women in Engineering Asia and the 2022 Optica 20th Anniversary Challenge Prize.
Date and Time: December 9th, 2024, 10:00 pm - 11:00 pm (U.S. Eastern Time)
Recording:
Title: Scaling Up Photonic Tensor Cores with Device-Circuit-Signaling Co-Design
Abstract: Photonic tensor cores have grown in popularity over the past few years for accelerating tensor-based kernels found in abundance in deep learning workloads because they offer potentially massive spatial parallelism (across wavelengths and waveguides), sub-nanosecond-scale start-to-solution latency, and near-dissipation-free dynamic operation. However, several shortcomings severely limit the practically achievable parallelism, processing throughput, and energy efficiency in existing photonic tensor core architectures. For instance, the wavelength-selective analog operation of existing designs makes them highly prone to crosstalk noise and other optical signal penalties and losses. These penalties and losses interplay with an already tight optical power budget to incur strong trade-offs for achievable spatial parallelism, operating data rate, and analog precision. This talk will present how co-designing low-dissipation, low-noise, and high-speed electro-photonic devices, crosstalk-minimal circuit organizations, and mixed unary/analog signaling methods can overcome these shortcomings to realize photonic tensor cores with scaled-up throughput and energy-efficiency benefits for accelerating tensor-based kernels found in a variety of deep learning workloads.
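As a rough picture of the kind of wavelength-parallel operation and crosstalk penalty the abstract refers to, here is a toy NumPy model (my own illustration, not the speaker's architecture): each wavelength channel carries one input power, a wavelength-selective element applies one weight, a single photodetector sums the channels, and a simple tridiagonal crosstalk matrix (leakage fraction chosen arbitrarily) perturbs the result.

```python
# Illustrative toy model (not the speaker's design): a WDM dot-product engine.
# Inputs ride on separate wavelengths, weights are per-channel transmissions,
# and a photodetector sums all channels. A crosstalk matrix shows how
# inter-channel leakage perturbs the analog result.
import numpy as np

rng = np.random.default_rng(1)
n = 8                                  # number of wavelength channels
x = rng.random(n)                      # inputs encoded as optical powers (>= 0)
w = rng.random(n)                      # weights encoded as transmissions in [0, 1]

ideal = np.dot(w, x)                   # ideal analog dot product

# Hypothetical crosstalk: each channel leaks a small fraction into its neighbors.
leak = 0.02
C = np.eye(n) + leak * (np.eye(n, k=1) + np.eye(n, k=-1))
noisy = np.dot(w, C @ x)               # dot product with inter-channel crosstalk

print(f"ideal = {ideal:.4f}, with crosstalk = {noisy:.4f}")
```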
Biography: Dr. Ishan Thakkar is an Assistant Professor in the Department of Electrical and Computer Engineering at the University of Kentucky, Lexington, KY. He received his Ph.D. and M.S. in Electrical Engineering from Colorado State University (CSU), Fort Collins, CO.
His research broadly focuses on designing and optimizing unconventional (more-than-Moore) architectures and technologies for energy-efficient, reliable, and secure computing. His specific more-than-Moore computing technology interests include integrated electro-photonics, in-memory computing, mixed analog/stochastic/unary computing, monolithic 3D (M3D) integration, and neuromorphic in-materia computing.
Dr. Thakkar has 55+ peer-reviewed publications in top journals and premier conferences. His research contributions have been recognized with 6 Best Paper Awards and Nominations from IEEE/ACM-sponsored peer-reviewed journals and conferences. He also received the Outstanding Reviewer Award from the IEEE/ACM CODES+ISSS conference at ESWEEK 2022. He has held 10+ chaired positions on the organizing committees of various IEEE/ACM conferences and workshops. He has been a technical program committee member of 30+ premier IEEE/ACM conferences. He is currently an Associate Editor for the IEEE TCVLSI Newsletter. He serves on the ACM SIGDA Executive Team as the Social Media Chair.
Date and Time: August 9th, 2024, 10:00 am - 11:00 am (U.S. Eastern Time)
Recording: Video
Title: Revolutionize Your Chip Design with gdsfactory
Abstract: For efficient design, verification, and validation of integrated circuits and components, it is important to have a workflow that is easy to customize and extend. Python has become the standard programming language for machine learning, scientific computing, and engineering. In this tutorial we will use gdsfactory to design photonic circuits: a heater, a photonic MZI filter, and a ring resonator. This lets you bring the machine-learning toolchain (Python, Jupyter notebooks) into chip design.
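A minimal sketch of the kind of flow the tutorial covers, assuming a recent gdsfactory release and its generic PDK; the component names and parameters used here (straight_heater_metal, mzi, ring_single, delta_length, radius, gap) come from the generic PDK and may differ in other PDKs or versions.

```python
# Minimal sketch of the tutorial's workflow, assuming gdsfactory's generic PDK.
import gdsfactory as gf

# A thermo-optic phase shifter (heater), an MZI filter, and a ring resonator.
heater = gf.components.straight_heater_metal(length=200)
mzi = gf.components.mzi(delta_length=40)          # path-length imbalance sets the FSR
ring = gf.components.ring_single(radius=10, gap=0.2)

# Export each layout to GDS for inspection, DRC, or fabrication hand-off;
# in a Jupyter notebook, comp.plot() shows the layout inline.
for comp in [heater, mzi, ring]:
    comp.write_gds(f"{comp.name}.gds")
```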
Biography: Joaquin has designed chips in Python for 15 years at companies such as Intel, Hewlett Packard Labs, PsiQuantum, and Google X. He started the open-source gdsfactory project in 2019 to build better chips in Python; since then it has been downloaded over 2 million times. Joaquin also works closely with open-source developers at Google on the build-your-own-silicon program, which aims to recreate, for open-source chip design tools, the successful ecosystem that open-source TensorFlow has created for machine learning since 2016.
Date and Time: July 19th, 2024, 5:00 pm - 6:00 pm (U.S. Eastern Time)
Recording: Video
Title: Photonic Architectures for In-Memory Computing Using Nonvolatile Optical Materials
Abstract: Photonic information-processing strategies offer the unique ability to perform analog computation with ultra-low latency and high efficiency. However, designing compact and reconfigurable photonic architectures that scale well is a challenge. The combination of bistable optical materials (such as phase-change materials like Ge2Sb2Te5) and integrated photonics is a promising approach that enables nonvolatile optical memory on-chip with low drift, a compact footprint, and high-speed readout. This talk will focus on using photonic memories—together with wavelength division multiplexing and “in-memory” computing techniques—to enable high-speed matrix-vector operations for machine learning applications.
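The sketch below is a toy abstraction of the in-memory matrix-vector idea, not the speaker's hardware: weights are stored as nonvolatile transmission states quantized to a handful of levels (as in multi-level phase-change cells), inputs ride on separate wavelength channels, and each output is the summed photocurrent of one row. The level count and matrix shapes are illustrative assumptions.

```python
# Toy abstraction (not the speaker's hardware) of "in-memory" photonic
# matrix-vector multiplication with nonvolatile, multi-level optical memory.
import numpy as np

rng = np.random.default_rng(2)

def program_memory(W, n_levels=16):
    """Quantize weights into discrete transmission states in [0, 1]."""
    W = np.clip(W, 0.0, 1.0)
    return np.round(W * (n_levels - 1)) / (n_levels - 1)

def photonic_mvm(T, x):
    """Each row of transmissions T weights the WDM input powers x;
    one photodetector per row sums the transmitted power."""
    return T @ x

W = rng.random((4, 8))          # target weights in [0, 1]
x = rng.random(8)               # input vector encoded as channel powers
T = program_memory(W)           # nonvolatile transmission states
print("quantized MVM:", photonic_mvm(T, x))
print("ideal MVM:    ", W @ x)
```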
Biography: Dr. Nathan Youngblood joined the Department of Electrical and Computer Engineering at the University of Pittsburgh as an Assistant Professor in September 2019. As a postdoctoral researcher at the University of Oxford from 2017 to 2019, he developed phase-change optical systems and photonic architectures for non-von Neumann computing. In 2016, he received a PhD in Electrical Engineering from the University of Minnesota, where his research focused on integrating 2D materials with silicon photonics for optoelectronic applications. Nathan leads the Youngblood Photonics Lab at Pitt, whose goal is to develop reconfigurable photonic materials, devices, and architectures with the potential to transform the field of artificial intelligence by minimizing computing latency and energy consumption. Nathan’s work has been published in leading journals such as Nature, Nature Photonics, and Science Advances, and featured in popular news outlets such as The Times (London) and the Daily Mail.
Date and Time: June 28, 2024, 11:00 am - 12:00 pm (U.S. Eastern Time)
Title: The challenging road towards an optical computing advantage for optimization and AI workloads
Abstract: Given the rapid increase in computational requirements for AI workloads, which will demand tremendous energy-efficiency and throughput enhancements during the next decade, alternative ways to compute are gaining traction again. In this talk, we will first give a literature overview of recent progress in the field of optical computing aimed at addressing this need, and we will highlight remaining challenges and best practices when studying novel hardware proposals. Specifically, we will emphasize recent proposals for optical accelerators targeting AI and/or optimization workloads. We will explain how the heterogeneous III-V-on-silicon fabrication flow that Hewlett Packard Labs initially developed for O-band silicon photonic interconnects in HPC systems can, with minor modifications, provide a promising platform for photonic neuromorphic computing, giving access to light sources, photodiodes, modulators, and non-volatile memory devices, and we will give an outlook on both the inference and training capabilities of photonic matrix-vector product engines. Finally, we will show in simulation how matrix decomposition techniques can be used to reduce the number of on-chip devices required to implement weight matrices.
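To make the device-count argument concrete, here is an illustrative example using one common decomposition choice, a truncated SVD (an assumption; the talk's actual technique may differ): an m x n weight matrix needs m*n weighting devices when mapped directly, while a rank-r factorization needs only r*(m+n), at the cost of an approximation error.

```python
# Illustrative device-count trade-off with a truncated SVD (my own example,
# not necessarily the decomposition used in the talk).
import numpy as np

rng = np.random.default_rng(3)
m, n, r = 64, 64, 16                      # hypothetical matrix size and rank

W = rng.normal(size=(m, n))
U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :r] * s[:r]                      # m x r factor
B = Vt[:r, :]                             # r x n factor

full_devices = m * n                      # one weighting device per matrix entry
decomposed_devices = r * (m + n)          # devices for the two smaller factors
rel_error = np.linalg.norm(W - A @ B) / np.linalg.norm(W)

print(f"devices: {full_devices} -> {decomposed_devices}")
print(f"relative approximation error: {rel_error:.3f}")
```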
Biography: Thomas Van Vaerenbergh received the master's degree in applied physics and the Ph.D. degree in photonics from Ghent University, Ghent, Belgium, in 2010 and 2014, respectively. He was awarded the Alcatel-Lucent Bell/FWO scientific prize for his Ph.D. thesis on all-optical spiking neurons in silicon photonics. In 2014, he joined the Large-Scale Integrated Photonics team at Hewlett Packard Labs, part of Hewlett Packard Enterprise (HPE), in Palo Alto, California. Since 2019, he has been based at HPE Belgium and has been expanding HPE’s research activities related to photonics and AI in the EMEA region. His main research interests include analog photonic and electronic accelerators for combinatorial optimization and AI workloads, and the inverse design of photonic devices and circuits based on physics-informed machine learning.
Date and Time: May 31, 2024, 10:00 am - 11:00 am (U.S. Eastern Time)
Title: A Computer Engineering Journey to Optical Neural Networks: Infrastructure, Algorithms, and Co-design
Abstract: Despite the significant progress in customized ML/AI accelerator designs, the Pareto frontier encompassing the performance, energy efficiency, and carbon emissions of digital accelerators remains unchanged due to the reliance on conventional technologies. As an alternative, optical neural networks (ONNs), such as diffractive optical neural networks (DONNs), promise vast improvements in computing speed, power efficiency, and carbon dioxide emissions. Nonetheless, designing and deploying DONNs face critical challenges. These stem primarily from the need for domain-specific infrastructure and algorithms, and from the hurdles posed by multi-disciplinary domain knowledge spanning optical physics, fabrication, ML, and co-design. In this presentation, I will share our journey towards the automatic and agile design of DONNs. We address the challenges of building tangible DONN systems through multi-disciplinary developments encompassing physics, algorithms, co-design, and hands-on prototyping. I will begin by introducing the core concepts of DONNs and the associated design difficulties. Subsequently, I will detail our comprehensive design infrastructure, LightRidge, and the physics-aware hardware-software co-design algorithms that facilitate immediate DONN fabrication and deployment using physical prototypes. I will conclude with recent case studies showcasing applications to complex tasks, such as autonomous driving, facilitated by ML-assisted architecture exploration.
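For readers new to DONNs, the following is a compact, generic forward-model sketch (my own illustration in NumPy, not LightRidge's API): free-space propagation via the angular spectrum method, a trainable phase mask per diffractive layer, and an intensity readout at the detector plane. Wavelength, pixel pitch, propagation distance, and layer count are illustrative assumptions.

```python
# Generic DONN forward pass (not LightRidge's API): angular-spectrum
# propagation, phase-only diffractive layers, intensity detection.
import numpy as np

def angular_spectrum_propagate(field, wavelength, pixel_pitch, distance):
    """Propagate a complex field by `distance` using the angular spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    propagating = arg > 0                              # drop evanescent components
    kz = 2 * np.pi * np.sqrt(np.where(propagating, arg, 0.0))
    H = np.exp(1j * kz * distance) * propagating
    return np.fft.ifft2(np.fft.fft2(field) * H)

def donn_forward(input_intensity, phase_masks, wavelength=532e-9,
                 pixel_pitch=8e-6, distance=0.05):
    field = np.sqrt(input_intensity).astype(complex)   # coherent illumination assumed
    for phase in phase_masks:                          # one diffractive layer per mask
        field = angular_spectrum_propagate(field, wavelength, pixel_pitch, distance)
        field = field * np.exp(1j * phase)             # phase-only modulation
    field = angular_spectrum_propagate(field, wavelength, pixel_pitch, distance)
    return np.abs(field) ** 2                          # detector measures intensity

rng = np.random.default_rng(4)
masks = [rng.uniform(0, 2 * np.pi, size=(64, 64)) for _ in range(3)]
output = donn_forward(rng.random((64, 64)), masks)
print(output.shape)   # intensity pattern; class scores come from detector regions
```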
Biography: Cunxi Yu is an Assistant Professor at the University of Maryland, College Park. His research interests focus on novel algorithms, systems, and hardware designs for computing and security. Before joining the University of Maryland, Cunxi was an Assistant Professor at the University of Utah and a postdoctoral researcher at Cornell University. His work received the Best Paper Award at DAC (2023), Best Paper Nominations at ASP-DAC (2017) and TCAD (2018), the NSF CAREER Award (2021), and an American Physical Society DLS poster award (2022). Cunxi earned his Ph.D. from UMass Amherst in 2017.
Date and Time: April 11, 2024, 10:30 am - 11:30 am (U.S. Eastern Time)
Recording: Video
Title: Optimizing Silicon-Photonic AI Accelerators under Imperfections
Abstract: Silicon-photonic AI accelerators (SPAAs) are being explored as promising successors to CMOS-based accelerators owing to their ultra-high speed and low energy consumption. However, their accuracy and energy efficiency can be catastrophically degraded in the presence of inevitable imperfections such as fabrication process variations, optical losses, thermal crosstalk, and quantization errors due to low-precision encoding. In this talk, we will present a comprehensive analysis of these imperfections using a bottom-up approach. We will explore how these imperfections interact with one another and how their impact can vary widely based on the SPAA's tuned parameters, the physical location of the affected optical components, and the nature and distribution of the imperfections. We will also introduce a suite of novel photonic-aware, low-cost design automation techniques that can significantly improve the resilience of SPAAs in the presence of these imperfections. These techniques can be easily combined with existing bias control and mitigation techniques in SPAAs.
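As a flavor of this style of analysis, here is a small NumPy sketch (my own, not the speaker's models) that injects a few of the imperfections named in the abstract into a weight matrix and compares the perturbed output against the ideal one; the quantization bit-width, variation magnitude, and loss factor are arbitrary assumptions.

```python
# Illustrative sketch: low-precision quantization, device-to-device variation,
# and uniform insertion loss applied to a weight matrix (magnitudes arbitrary).
import numpy as np

rng = np.random.default_rng(5)

def quantize(W, bits=4):
    """Uniform quantization to emulate low-precision weight encoding."""
    lo, hi = W.min(), W.max()
    levels = 2**bits - 1
    return lo + np.round((W - lo) / (hi - lo) * levels) / levels * (hi - lo)

W = rng.normal(size=(16, 16))
x = rng.normal(size=16)

W_imp = quantize(W, bits=4)
W_imp = W_imp * (1 + 0.02 * rng.normal(size=W.shape))  # device-to-device variation
W_imp = 0.9 * W_imp                                     # uniform insertion loss

ideal, imperfect = W @ x, W_imp @ x
print("relative output error:",
      np.linalg.norm(ideal - imperfect) / np.linalg.norm(ideal))
```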
Biography: Sanmitra Banerjee is a Senior Design-for-X (DFX) Methodology Engineer at NVIDIA Corporation, Santa Clara, CA, and an Adjunct Faculty at Arizona State University. He received the B.Tech. degree from Indian Institute of Technology, Kharagpur, in 2018, and the M.S. and Ph.D. degrees from Duke University, Durham, NC, in 2021 and 2022, respectively. His research interests include machine learning based DFX techniques, and fault modeling and optimization of emerging AI accelerators under process variations and manufacturing defects.
Date and Time: March 1st, 2024, 8:00 PM - 9:00 PM (U.S. Eastern Time)
Title: Classical and quantum photonic neural networks: In situ training and real-time applications
Abstract: Artificial intelligence (AI) powered by neural networks has enabled applications in many fields (medicine, finance, autonomous vehicles). Digital implementations of neural networks are limited in speed and energy efficiency. Neuromorphic photonics aims to build processors that use light and photonic device physics to mimic neurons and synapses in the brain for distributed and parallel processing while offering sub-nanosecond latencies and extending the domain of AI and neuromorphic computing applications. We will discuss photonic neural networks enabled by CMOS-compatible silicon photonics. We will highlight applications that require low latency and high bandwidth, including wideband radio-frequency signal processing, fiber-optic communications, and nonlinear programming (solving optimization problems). We will briefly introduce a quantum photonic neural network that can learn to act as near-perfect components of quantum technologies and discuss the role of weak nonlinearities.
Biography: Bhavin J. Shastri is an Assistant Professor of Engineering Physics at Queen’s University and a Faculty Affiliate at the Vector Institute. He received a Ph.D. degree in electrical engineering from McGill University in 2012 and was a Banting Postdoctoral Fellow at Princeton University. Dr. Shastri is the recipient of the 2022 SPIE Early Career Achievement Award and the 2020 IUPAP Young Scientist Prize in Optics “for his pioneering contributions to neuromorphic photonics.” He is a co-author of the book Neuromorphic Photonics, a term he coined with Prof. Prucnal. He is a Senior Member of Optica and IEEE.
Date and Time: February 16th, 2024, 8:00 pm - 9:00 pm (U.S. Eastern Time)
Title: Optical Neural Networks: Neuromorphic Computing and Sensing in the Optical Domain
Abstract: In this talk, I will overview our work on analog neural networks based on photonics and other controllable physical systems. In particular, I will discuss why neural networks may serve as an ideal computational model, with the potential to harness the computational power of analog stochastic physical systems in a robust and scalable fashion. I will use photonic neural networks as a practical example to demonstrate their robust operation in low-energy regimes, which are typically constrained by quantum noise. Our experimental results indicate that photonic hardware offers a better energy scaling law than electronics for large-scale linear operations. This advantage is particularly significant for the scalability of modern foundational AI models, such as Transformers. Finally, I will show how nonlinear photonic neural networks may also help enhance computational sensing for a diversity of applications, ranging from autonomous system control to high-throughput biomedical assays.
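The quantum-noise-limited regime mentioned above can be illustrated with a toy NumPy sketch (an assumption of mine, not the speaker's experiment): each output of an optical linear layer is read out as a Poisson-distributed photon count whose mean is set by the photon budget, so relative error shrinks as more photons (more optical energy) are spent per output.

```python
# Toy shot-noise-limited linear layer: accuracy vs. photon budget per output.
import numpy as np

rng = np.random.default_rng(6)

def noisy_linear(W, x, photons_per_output):
    """Ideal result y = Wx, rescaled to photon counts and Poisson-sampled."""
    y = W @ x                                   # ideal analog result (non-negative here)
    scale = photons_per_output / y.max()
    counts = rng.poisson(y * scale)             # shot noise at the detector
    return counts / scale

W = rng.random((32, 128))
x = rng.random(128)
ideal = W @ x
for budget in [10, 100, 10_000]:
    noisy = noisy_linear(W, x, budget)
    err = np.linalg.norm(ideal - noisy) / np.linalg.norm(ideal)
    print(f"{budget:>6} photons/output -> relative error {err:.3f}")
```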
Biography: Tianyu Wang is an Assistant Professor in the Department of Electrical and Computer Engineering at Boston University. He is interested in developing novel methods for imaging, sensing, and computing by leveraging emerging technologies from photonics and artificial intelligence.
Date and Time: January 5th, 2024, 8:00 pm - 9:00 pm (U.S. Eastern Time)
Slides: PDF
Title: Delocalized Photonic Deep Learning on the Internet's Edge
Abstract: Advanced machine learning models are currently impossible to run on edge devices such as smart sensors and unmanned aerial vehicles owing to constraints on power, processing, and memory. We introduce an approach to machine learning inference based on delocalized analog processing across networks. In this talk, I'll detail Netcast, which uses cloud-based “smart transceivers” to stream weight data to edge devices, enabling ultraefficient photonic inference. We demonstrate image recognition at an ultralow optical energy of 40 attojoules per multiply (<1 photon per multiply) at 98.8% (93%) classification accuracy. We reproduce this performance in a Boston-area field trial over 86 kilometers of deployed optical fiber, wavelength multiplexed over 3 terahertz of optical bandwidth. Netcast allows milliwatt-class edge devices with minimal memory and processing to compute at teraFLOPS rates reserved for high-power (>100 watts) cloud computers.
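The sketch below captures the weight-streaming idea as described in the abstract, in abstracted form only (not the Netcast implementation): a "server" streams one weight per time slot over the link, and the edge "client" multiplies each received weight by its locally stored activation and integrates the products, so the edge device only needs an accumulator per output. All names and shapes are hypothetical.

```python
# Toy sketch of delocalized inference via weight streaming (not Netcast itself).
import numpy as np

rng = np.random.default_rng(7)

def server_stream(W):
    """Yield weights row by row, one value per time slot."""
    for row in W:
        for w in row:
            yield w

def edge_client(stream, x, n_outputs):
    """Accumulate streamed weights times locally stored activations."""
    y = np.zeros(n_outputs)
    for i in range(n_outputs):
        for j in range(x.size):
            y[i] += next(stream) * x[j]   # analog multiply + integrate per slot
    return y

W = rng.normal(size=(4, 16))              # weights held in the cloud
x = rng.normal(size=16)                   # activations held at the edge device
print(np.allclose(edge_client(server_stream(W), x, 4), W @ x))  # True
```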
Biography: Alex Sludds received his B.S., M.Eng., and Ph.D. in Electrical Engineering and Computer Science from MIT in 2018, 2019, and 2023, respectively. Alex was an NSF Graduate Research Fellow and has published in leading journals and conferences including Science, Nature Photonics, Science Advances, and Physical Review X. His research interests focus on how the dense integration of silicon electronics and photonics enables order-of-magnitude advances in computation and communication. Alex works as a photonic architect at Lightmatter.
Date and Time: December 1st, 2023, 10:30 am - 11:30 am (U.S. Eastern Time)