Deep Learning meets Neuromorphic Hardware
Workshop of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, ECML-PKDD 2023
September 18, 2023 - Turin, Italy
The goal of this interdisciplinary workshop is to bring together researchers from different fields such as Computer Science, Engineering, Physics, Materials Science, Control Theory, and Dynamical Systems to share and discuss exciting new ideas at the intersection of deep learning and neuromorphic computing.
Important dates
- Extended paper submission deadline: June 23, 2023
- Notification of acceptance: July 12, 2023
- Camera-ready paper deadline: October 1st, 2023
- Workshop date: September 18, 2023 [FULL DAY]
The intersection between deep learning and neuromorphic computing presents a promising avenue for the development of energy-efficient intelligent systems. Deep learning models have shown remarkable performance in a variety of applications, but their widespread deployment is hindered by the high computational costs associated with their training. Neuromorphic computing, inspired by the structure and function of the human brain, offers a promising alternative by leveraging physical artificial neurons to perform computation.
The potential of this intersection lies in bringing the power of deep learning to intelligent devices that must operate autonomously under tight energy budgets, enabling truly pervasive AI. Compared with classical digital computing architectures, neuromorphic hardware offers faster speed, lower power consumption, higher integration density, analog computation, and larger data throughput. This makes neuromorphic hardware an attractive platform for deploying deep learning models in real-world applications.
Workshop structure
The workshop will be structured into two tracks.
The first track will focus on deep learning concepts for neuromorphic implementations, including reservoir computing, light-weight neural networks, semi-randomized neural networks, partially untrained structured networks, continuous-time recurrent neural networks, neural ODEs, spiking neural networks, unconventional computing, supervised training algorithms beyond backpropagation, and unsupervised training algorithms for neural networks.
The second track will focus on neuromorphic hardware for deep learning, including electronic, mechanical, and photonic neuromorphic hardware, in-memory and analogue computing architectures, in-materia computing architectures, hardware integration of neural networks, hardware implementation of neuronal and synaptic functions, massively parallel hardware networks, and quantum computing.
We welcome submissions from researchers working in a variety of fields, including but not limited to Computer Science, Engineering, Physics, Materials Science, Control Theory, and Dynamical Systems. Submissions should clearly demonstrate the relevance to the intersection of deep learning and neuromorphic computing, and highlight the potential impact of the proposed research on the development of energy-efficient intelligent systems.
All submissions will be reviewed by the program committee, and accepted papers will be presented at the workshop. The workshop will feature keynote talks by leading experts in the field, as well as opportunities for discussion and collaboration.
Topics
We invite submissions of original research papers, as well as position papers and extended abstracts, on the intersection of deep learning and neuromorphic computing.
Topics of interest include, but are not limited to:
Deep learning concepts for neuromorphic implementations:
Reservoir computing
Light-weight neural networks
Semi-randomized neural networks
Partially untrained structured networks
Continuous-time recurrent neural networks
Neural ODEs
Spiking neural networks
Unconventional computing
Supervised training algorithms beyond backpropagation
Unsupervised training algorithms for neural networks
Neuromorphic hardware for deep learning:
Electronic, mechanical, and photonic neuromorphic hardware
In-memory and analogue computing architectures
In-materia computing architectures
Hardware integration of neural networks
Hardware implementation of neuronal and synaptic functions
Massively parallel hardware networks
Quantum computing
Submission
Papers should be written in English and formatted according to the Springer LNCS format. Please use this template to prepare your submission.
We invite both abstracts (up to 4 pages, references excluded) and full papers (up to 14 pages, references excluded). Submissions should be made through the workshop's CMT submission page.
Authors will be able to opt in or out of publication of their submitted papers in the joint post-workshop proceedings, published by Springer in Communications in Computer and Information Science, organised by focused scope and possibly indexed by WOS.
Note that novelty is not required for contributions that will not appear in the workshop proceedings (presentation-only contributions): we welcome abstracts and papers that have already been presented or published elsewhere, with the aim of maximizing the dissemination and cross-pollination of ideas between the deep learning and neuromorphic hardware communities.
At least one author of each accepted paper must have a full registration and attend the conference in person to present the paper. Papers without a full registration or an in-person presentation will not be included in the post-workshop Springer proceedings.
Link for submissions: https://cmt3.research.microsoft.com/ECMLPKDDworkshop2023/Submission/Index
After logging in, create a new submission in your author console and select the "Deep Learning meets Neuromorphic Hardware" track.
Please note that the submission form includes a section where you can indicate whether your submission is presentation-only.
For any questions, please contact the workshop organizers at deeplearningneuromorphic_ecml23@googlegroups.com
Keynote speakers
Daniele Ielmini
Politecnico di Milano, Italy
Hava Siegelmann
University of Massachusetts Amherst, USA
Organizing Committee
Andrea Ceni, University of Pisa, Italy
Claudio Gallicchio, University of Pisa, Italy
Gianluca Milano, INRiM Turin, Italy
Program Committee
Giacomo Pedretti, Hewlett Packard Laboratories, USA
Carlo Ricciardi, Politecnico di Torino, Italy
Kohei Nakajima, University of Tokyo, Japan
Gouhei Tanaka, University of Tokyo, Japan
Xavier Hinaut, Inria, Bordeaux, France
Petia Koprinkova-Hristova, Bulgarian Academy of Sciences, Bulgaria
Fatemeh Hadaeghi, Universitätsklinikum Hamburg-Eppendorf, Germany
Lyudmila Grigoryeva, University of St. Gallen, Switzerland
Peter Tiňo, University of Birmingham, UK
Luca Pedrelli, University of Pisa, Italy
Doreen Jirak, Istituto Italiano di Tecnologia, Italy
Xavier Porte, University of Strathclyde, UK
Azarakhsh Jalalvand, Princeton University, USA
Peter Ford Dominey, CNRS, France