15 September 2025, Porto, Portugal 🇵🇹
Paper submission deadline: 14 June 2025
Notification of acceptance: 14 July 2025
The intersection between deep learning and neuromorphic computing presents a promising avenue for the development of energy-efficient intelligent systems. Deep learning models have shown remarkable performance in a variety of applications, but their widespread deployment is hindered by the high computational costs associated with their training. Neuromorphic computing, inspired by the structure and function of the human brain, offers a promising alternative by leveraging physical artificial neurons to perform computation.
The potential of this intersection lies in its ability to bring the power of deep learning to intelligent devices that must operate autonomously under tight energy budgets, enabling truly pervasive AI. The advantages of neuromorphic hardware over classical digital computing architectures include faster processing, lower power consumption, higher integration density, analog computing, and higher data throughput. This makes neuromorphic hardware an attractive alternative for implementing deep learning models in real-world applications.
The goal of this interdisciplinary workshop is to bring together researchers from different fields, such as Computer Science, Engineering, Physics, Materials Science, Control Theory, and Dynamical Systems, to share and discuss new exciting ideas at the intersection of deep learning and neuromorphic computing. The workshop will also place a strong emphasis on the application of these technologies in diverse sectors such as robotics, biomedical engineering, and environmental monitoring, encouraging a broader spectrum of practical implementations.
The workshop will be structured in two tracks. The first track will focus on deep learning concepts for neuromorphic implementations, including reservoir computing, lightweight neural networks, partially untrained structured networks, continuous-time recurrent neural networks, neural ODEs, spiking and oscillatory neural networks, training algorithms beyond backpropagation, and unconventional computing paradigms. This track also covers the integration of reinforcement learning and genetic algorithms for hardware optimization, as well as interoperability with existing deep learning frameworks and software scalability challenges. Discussions will also include the ethical, privacy, and societal implications of deploying these technologies, ensuring a holistic view of their impact.
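As a minimal illustration of one concept from this track, the sketch below implements an echo state network, the standard form of reservoir computing: a fixed random recurrent network whose only trained part is a linear readout. All sizes, hyperparameters, and the toy sine-prediction task are illustrative choices, not taken from the call.

```python
# Minimal echo state network (reservoir computing) sketch using only NumPy.
# The reservoir weights are random and fixed; only the readout is trained.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_res = 1, 100
spectral_radius = 0.9  # scale recurrent weights toward the echo state property

W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
W *= spectral_radius / max(abs(np.linalg.eigvals(W)))

def run_reservoir(u):
    """Drive the reservoir with an input sequence u of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = np.empty((len(u), n_res))
    for t, u_t in enumerate(u):
        x = np.tanh(W_in @ u_t + W @ x)  # fixed, untrained dynamics
        states[t] = x
    return states

# Toy task: one-step-ahead prediction of a sine wave.
T = 500
u = np.sin(np.linspace(0, 20 * np.pi, T + 1))[:, None]
X = run_reservoir(u[:-1])
y = u[1:, 0]

# Ridge-regression readout: the only trained component.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)

pred = X[100:] @ W_out  # discard an initial washout period
mse = np.mean((pred - y[100:]) ** 2)
```

Because training reduces to a single linear regression, this scheme maps naturally onto physical substrates whose internal dynamics cannot be modified, which is precisely what makes it attractive for neuromorphic hardware.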
The second track emphasizes neuromorphic hardware for deep learning, including electronic, mechanical, and photonic neuromorphic hardware, in-memory and analogue computing architectures, in-materia computing architectures, integration and scalability of neural networks in hardware, hardware implementation of neuronal and synaptic functions, massively parallel hardware networks, physical computing and quantum computing.
We cordially invite submissions of original research papers, position papers, and extended abstracts at the intersection of deep learning and neuromorphic computing. We solicit contributions not only focusing on technical aspects, but also considering the broader ethical and societal implications of energy efficiency in AI.
Topics of interest include, but are not limited to:
Deep learning concepts for neuromorphic implementations:
Reservoir computing, lightweight and semi-randomized neural networks
Continuous-time recurrent neural networks and Neural ODEs
Spiking neural networks
Unconventional computing
Advanced training algorithms beyond backpropagation
Reinforcement learning in neuromorphic systems
Interoperability and compatibility challenges
Scalability of hardware-friendly neural network design
Ethical and societal implications of energy-efficient AI systems
Neuromorphic hardware for deep learning:
Electronic, mechanical, and photonic neuromorphic hardware
In-memory and analogue computing architectures
In-materia computing architectures
Hardware integration of neural networks
Hardware implementation of neuronal and synaptic functions
Massively parallel hardware networks
Quantum computing
Physical computing
Integration and scalability of neural networks in hardware
We welcome submissions from researchers working in a variety of fields, including but not limited to Computer Science, Engineering, Physics, Materials Science, Control Theory, and Dynamical Systems. Submissions should clearly demonstrate the relevance to the intersection of deep learning and neuromorphic computing, and highlight the potential impact of the proposed research on the development of energy-efficient intelligent systems.
All submissions will be reviewed by the program committee, and accepted papers will be presented at the workshop. The workshop will feature keynote talks by leading experts in the field, as well as opportunities for discussion and collaboration.
Papers should be written in English and formatted according to the Springer LNCS format.
Full papers should be no more than 16 pages in length (including references), while position papers and extended abstracts should be no more than 6 pages in length (including references).
Submissions should be made through the workshop's CMT submission page.
Authors will be able to opt in or out of having their submitted papers published in the joint post-workshop proceedings, published in Springer's Communications in Computer and Information Science (CCIS) series, organised by focused scope and possibly indexed by Web of Science. Note that novelty is not required for contributions that will not appear in the workshop proceedings (presentation-only contributions): we invite abstracts and papers that have already been presented or published elsewhere, with the aim of maximizing the dissemination and cross-pollination of ideas between the deep learning and neuromorphic hardware communities.
Miguel C. Soriano is a Tenured Scientist at the Spanish National Research Council (CSIC), based at the Institute for Cross-Disciplinary Physics and Complex Systems (IFISC). His pioneering work sits at the intersection of photonics, complex systems, and nonlinear dynamics, with a strong focus on novel hardware for Artificial Intelligence. He is a leading expert in Photonic Reservoir Computing, where he co-developed the first hardware implementations of recurrent neural networks for high-speed, energy-efficient neuromorphic processing and co-edited the foundational book on the subject. Dr. Soriano's research now extends to the exciting frontier of Quantum Reservoir Computing, continuing his interdisciplinary mission to advance AI through innovative hardware solutions.
Deep and recurrent hardware neural networks folded in time
Abstract: Implementing deep Recurrent Neural Networks (RNNs) in hardware poses significant challenges in complexity and resource cost. This talk introduces a novel and efficient paradigm for building deep RNNs by creating an architecture that is folded in time within a single, physical neuromorphic system. Based on the principles of Reservoir Computing, our model leverages a multi-level ring topology where deep layers are implemented sequentially, effectively multiplexing computational depth onto the hardware's temporal evolution. Crucially, the design incorporates hardware-friendly nonlinearities and noise models from the outset. We will showcase the performance of this architecture through a physical electronic implementation, demonstrating its power on time-series tasks that demand both memory and nonlinearity. The results validate our time-folding approach as a practical pathway for realizing deep recurrent computations in compact, energy-efficient neuromorphic hardware.

Student grants, funded by the Artificial Intelligence Journal (AIJ), are available to students who are main or first authors of submitted papers. Specifics on the application process will follow in the coming weeks. Please indicate your student authorship when submitting your paper to qualify. These grants will assist with the costs of participating in the workshop.
The best poster award of the workshop is funded by the journal APL Machine Learning.
Department of Computer Science, University of Pisa
Department of Computer Science, University of Pisa
Italian National Institute of Metrological Research (INRiM), Turin, Italy