Reservoir Computing (RC) has emerged as a computationally efficient approach to training recurrent neural networks (RNNs): the recurrent part, the reservoir, is left fixed after random initialization, and only a simple readout layer is trained. This makes RC well suited to temporal data without the high resource demands of traditional deep learning models. With the increasing need for low-power, fast, and adaptable neural networks across domains such as real-time signal processing, edge computing, and neuromorphic systems, RC presents a compelling solution. Yet, as deep learning methods evolve, RC faces critical challenges that call for new theoretical insights, model innovations, and hardware implementations. In particular, RC must address limitations in scalability, robustness, and adaptability to remain relevant alongside modern neural architectures.
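To make the training paradigm concrete, the following is a minimal sketch of an Echo State Network, the best-known RC model, written in NumPy. It is an illustration, not a reference implementation from this session: the spectral-radius value, reservoir size, toy sine task, and the helper `run_reservoir` are all assumptions chosen for brevity. Note that only the linear readout `W_out` is fitted; the reservoir weights stay fixed.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 100  # illustrative sizes, not prescribed by the session text

# Fixed random input and reservoir weights; the reservoir is rescaled to
# spectral radius 0.9, a common heuristic related to the echo state property.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

def run_reservoir(u_seq):
    """Drive the fixed reservoir with an input sequence and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u in u_seq:
        x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a sine wave.
t = np.arange(500)
u = np.sin(0.1 * t)
X = run_reservoir(u[:-1])   # reservoir states, one row per time step
y = u[1:]                   # targets: next input value
washout = 50                # discard the initial transient

# The only training step: a linear least-squares fit of the readout.
W_out = np.linalg.lstsq(X[washout:], y[washout:], rcond=None)[0]

pred = X @ W_out
err = np.sqrt(np.mean((pred[washout:] - y[washout:]) ** 2))
```

Because training reduces to one linear regression, the whole procedure runs in seconds on a CPU, which is the efficiency argument the paragraph above refers to.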
This special session aims to advance RC by tackling key challenges spanning theoretical, model, application, and hardware dimensions. A primary goal is to expand RC’s theoretical foundations to enhance stability and adaptability, especially in dynamic environments, drawing on insights from dynamical systems and control theory to make RC more robust and reliable in real-world applications.
The session also seeks to explore hybrid architectures that combine RC with deep learning approaches, such as graph-based and hierarchical models, enabling RC to scale and handle complex, high-dimensional spatiotemporal tasks, including video and text analysis. Another objective is to foster innovative applications in fields requiring efficient, low-latency data processing, such as robotics, neuroscience, and IoT, where RC’s computational efficiency proves advantageous for real-time and resource-constrained contexts.
Moreover, the session promotes advances in hardware implementations to enable energy-efficient AI, supporting RC’s deployment on neuromorphic platforms such as photonic and spintronic substrates. This direction highlights RC’s potential for ultra-fast, low-energy applications critical to embedded systems and edge AI.
Finally, the session aims to foster interdisciplinary collaboration by joining insights from machine learning, physics, neuroscience, and hardware engineering. This collaborative approach will drive innovation, positioning RC as a foundational framework within modern AI that addresses both theoretical challenges and practical needs across various domains.
We invite researchers to submit papers on all aspects of RC research, including contributions on theory, models, and applications.
A list of topics relevant to this session includes, but is not limited to, the following:
New Reservoir Computing models and architectures, including Echo State Networks and Liquid State Machines
Hardware, physical and neuromorphic implementations of Reservoir Computing systems
Learning algorithms in Reservoir Computing
Reservoir Computing in Computational Neuroscience
Reservoir Computing on edge systems
Novel learning algorithms rooted in Reservoir Computing concepts
Novel applications of Reservoir Computing, e.g., to images, video and structured data
Federated and Continual Learning in Reservoir Computing
Deep and modular Reservoir Computing neural networks
Theory of complex and dynamical systems in Reservoir Computing
Extensions of the Reservoir Computing framework, such as Conceptors