September 9, 2024 - Vilnius, Lithuania
Paper submission deadline: June 30, 2024 (extended)
Notification of acceptance: July 15, 2024
Workshop date: September 9, 2024
The intersection between deep learning and neuromorphic computing presents a promising avenue for the development of energy-efficient intelligent systems. Deep learning models have shown remarkable performance in a variety of applications, but their widespread deployment is hindered by the high computational costs associated with their training. Neuromorphic computing, inspired by the structure and function of the human brain, offers a promising alternative by leveraging physical artificial neurons to perform computation.
The potential of this intersection lies in its ability to bring the power of deep learning to intelligent devices that must operate autonomously under tight energy budgets, enabling a truly pervasive AI. The advantages of neuromorphic hardware over classical digital computing architectures include faster speed, lower power consumption, higher integration density, analog computing, and larger data throughput. These advantages make neuromorphic hardware an attractive alternative for implementing deep learning models in real-world applications.
The goal of this interdisciplinary workshop is to bring together researchers from different fields such as Computer Science, Engineering, Physics, Materials Science, Control Theory, and Dynamical Systems to share and discuss new exciting ideas at the intersection of deep learning and neuromorphic computing. The workshop will also place a strong emphasis on the application of these technologies in diverse sectors such as robotics, biomedical engineering, and environmental monitoring, encouraging a broader spectrum of practical implementations.
The workshop will be structured in two tracks. The first track will focus on deep learning concepts for neuromorphic implementations, including reservoir computing, light-weight neural networks, partially untrained structured networks, continuous-time recurrent neural networks, neural ODEs, spiking and oscillatory neural networks, training algorithms beyond backpropagation and unconventional computing paradigms. This track includes the integration of reinforcement learning and genetic algorithms for hardware optimization, addressing interoperability with existing deep learning frameworks and the software scalability challenges. Discussions will also include the ethical, privacy, and societal implications of deploying these technologies, ensuring a holistic view of their impact.
The second track emphasizes neuromorphic hardware for deep learning, including electronic, mechanical, and photonic neuromorphic hardware, in-memory and analogue computing architectures, in-materia computing architectures, integration and scalability of neural networks in hardware, hardware implementation of neuronal and synaptic functions, massively parallel hardware networks, physical computing and quantum computing.
Bio. Dr Adnan Mehonic is an Associate Professor of Nanoelectronics at UCL, specialising in memristive and neuromorphic technologies. He has authored over 100 publications and holds 12 patents. As co-founder and CTO of the startup ‘Intrinsic’, which has attracted $10 million in investment, he focuses on advancing novel non-volatile memories and energy-efficient computing architectures. Dr Mehonic is also the Programme Director for the MSc in Nanotechnology and serves as the Editor-in-Chief of APL Machine Learning.
Abstract. The increasing demand for computing power has highlighted the limitations of current CMOS-based technologies and the von Neumann architecture. To address this, alternative paradigms such as memristor-based accelerators have emerged. In this presentation, I will discuss the need for improved embeddable non-volatile memory, resistance switching in silicon oxide (SiOx)-based ReRAM devices, and explore pathways for enhancing device performance. Additionally, I will present the rationale and challenges associated with using memristive crossbar arrays as analogue hardware accelerators. Two algorithmic approaches are proposed to address inherent issues in memristor-based systems. The first approach involves using committee machines during inference, while the second explores non-ideality-aware training of memristor-based artificial neural networks (ANNs).
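As a rough illustration of the second approach, non-ideality-aware training can be emulated in simulation by injecting random perturbations into the weights on every forward pass, so that the learned solution tolerates device variability. The sketch below is a minimal, hypothetical example: the 10% multiplicative noise level, the toy dataset, and the logistic readout are illustrative assumptions, not details from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linearly separable data: class = sign of (x0 + x1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = np.zeros(2)
b = 0.0
lr = 0.5
noise_std = 0.1  # assumed relative conductance variability of the devices

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(200):
    # Non-ideality-aware step: perturb weights multiplicatively, as a
    # memristor crossbar's programmed conductances would vary in practice
    w_noisy = w * (1.0 + noise_std * rng.normal(size=w.shape))
    p = sigmoid(X @ w_noisy + b)
    grad_w = X.T @ (p - y) / len(y)
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

# Evaluate with a fresh draw of weight noise, emulating deployment
# of the trained weights on imperfect analogue hardware
w_deploy = w * (1.0 + noise_std * rng.normal(size=w.shape))
acc = np.mean((sigmoid(X @ w_deploy + b) > 0.5) == (y > 0.5))
```

Because the noise is present throughout training, the classifier keeps high accuracy even when the deployed weights are perturbed; training the same model without the perturbation typically yields weights that are more sensitive to this kind of variability.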
Bio. Gouhei Tanaka received the Ph.D. degree in complexity science from The University of Tokyo, Japan, in 2005. He is currently a Professor with the Department of Computer Science, Nagoya Institute of Technology, and a Visiting Professor with the International Research Center for Neurointelligence, The University of Tokyo. His research interests include complex system dynamics, mathematical engineering, neural networks, reservoir computing, and neuromorphic computing.
Abstract. The amount of data obtained from sensors and measurement devices is increasing dramatically in our society. Processing these data with large-scale AI in data centers demands ever more energy, which is becoming a serious problem globally. It is therefore urgent to develop edge AI devices with low energy consumption to mitigate this problem. Neuromorphic computing is a computational framework inspired by the structure and function of the human brain, targeted at realizing energy-efficient AI systems and hardware. However, there are still large gaps between artificial neural networks and real neuronal circuits. In this presentation, we focus on the diversity of network constituents, which is missing in many artificial neural networks but surely present in biological ones. Our challenge is to understand the biological role of neuronal diversity and identify its benefits for information processing. We introduce our recent studies on diversity-based neural networks in the reservoir computing framework. The topics include diverse-time-scale reservoir computing for prediction of multiscale dynamics, reservoir-state statistics for time series anomaly detection, and memristor-network-based reservoir computing for time series classification. We also discuss future directions and perspectives on diversity-based neural networks.
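As a minimal sketch of the reservoir computing framework referred to above: a fixed, randomly connected recurrent network (the reservoir) expands the input into a rich state space, and only a linear readout is trained. The echo state network below is an illustrative example, not the speaker's setup; the reservoir size, spectral radius, and sine-wave one-step-ahead prediction task are all assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(42)

# One-step-ahead prediction of a sine wave
T = 500
u = np.sin(0.2 * np.arange(T + 1))
inputs, targets = u[:-1], u[1:]

n = 100  # reservoir size (assumed)
Win = rng.uniform(-0.5, 0.5, size=(n, 1))       # fixed input weights
W = rng.normal(size=(n, n))                      # fixed recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # rescale spectral radius to 0.9

# Drive the reservoir with the input sequence and collect its states
x = np.zeros(n)
states = np.zeros((T, n))
for t in range(T):
    x = np.tanh(Win[:, 0] * inputs[t] + W @ x)
    states[t] = x

# Discard an initial washout, then train ONLY the linear readout
# by ridge regression; the reservoir itself is never trained
washout = 100
S, Y = states[washout:], targets[washout:]
Wout = np.linalg.solve(S.T @ S + 1e-6 * np.eye(n), S.T @ Y)

pred = S @ Wout
nrmse = np.sqrt(np.mean((pred - Y) ** 2)) / np.std(Y)
```

Training reduces to a single linear solve, which is what makes the paradigm attractive for physical and in-materia substrates: any fixed dynamical system with rich enough dynamics can play the role of the random reservoir.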
We cordially invite submissions of original research papers, position papers, and extended abstracts that converge at the intersection of deep learning and neuromorphic computing. We solicit contributions focusing not only on technical aspects, but also on the broader ethical and societal implications of energy efficiency in AI.
Topics of interest include, but are not limited to:
Reservoir computing, light-weight and semi-randomized neural networks
Continuous-time recurrent neural networks and Neural ODEs
Spiking neural networks
Unconventional computing
Advanced training algorithms beyond backpropagation
Reinforcement learning in neuromorphic systems
Interoperability and compatibility challenges
Scalability of hardware-friendly neural network design
Ethical and societal implications of energy-efficient AI systems
Electronic, mechanical, and photonic neuromorphic hardware
In-memory and analogue computing architectures
In-materia computing architectures
Hardware integration of neural networks
Hardware implementation of neuronal and synaptic functions
Massively parallel hardware networks
Quantum computing
Physical computing
Integration and scalability of neural networks in hardware
We welcome submissions from researchers working in a variety of fields, including but not limited to Computer Science, Engineering, Physics, Materials Science, Control Theory, and Dynamical Systems. Submissions should clearly demonstrate the relevance to the intersection of deep learning and neuromorphic computing, and highlight the potential impact of the proposed research on the development of energy-efficient intelligent systems.
All submissions will be reviewed by the program committee, and accepted papers will be presented at the workshop. The workshop will feature keynote talks by leading experts in the field, as well as opportunities for discussion and collaboration.
Papers should be written in English and formatted according to the Springer LNCS format.
Full papers should be no more than 16 pages in length (including references), while position papers and extended abstracts should be no more than 6 pages in length (including references).
Submissions should be made through the workshop's CMT submission page.
Authors will be able to opt in or out of publication of their submitted papers in the joint post-workshop proceedings, published by Springer in the Communications in Computer and Information Science series, organised by topical scope and possibly indexed by WOS. Note that novelty is not required for contributions that will not appear in the workshop proceedings (presentation-only contributions): we also invite abstracts and papers that have already been presented or published elsewhere, with the aim of maximizing the dissemination and cross-pollination of ideas between the deep learning and neuromorphic hardware communities.
Student grants, funded by the Artificial Intelligence Journal (AIJ), are available to students who are main or first authors of submitted papers. Specifics on the application process will follow in the coming weeks. Please indicate your student authorship when submitting your paper to qualify. These grants will assist with the costs of participating in the workshop.
The best poster award of the workshop is funded by the journal APL Machine Learning.
Department of Computer Science, University of Pisa, Italy
Department of Computer Science, University of Pisa, Italy
Italian National Institute of Metrological Research (INRiM), Turin, Italy