Technische Universität Graz, Austria
Reservoirs are good at temporal processing and easy to train. They are suitable for temporal processing because they employ a recurrent neural network or some other type of dynamical system. They are easy to train, since they only adapt linear readouts from this dynamical system for a specific task, rather than tackling the more difficult problem of training the recurrent network itself. They also appear to be more plausible as models for brain processing than many models that arise in deep learning. In deep learning and modern learning-driven AI one rarely uses reservoirs for tasks that have a temporal dimension because their performance is sub-optimal. Instead, one relies on recurrent networks of LSTM (Long Short-Term Memory) units that are trained by backpropagation through time (BPTT). In addition, one applies Learning-to-Learn (L2L) methods in order to enable networks to learn from few examples. I will discuss recent progress in narrowing the performance gap between reservoir computing and deep learning for the case of spike-based reservoirs.
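The reservoir setup described above (a fixed dynamical system plus a trained linear readout) can be made concrete with a minimal sketch. The following toy example assumes a rate-based echo state network rather than the spike-based reservoirs discussed in the talk; all variable names, sizes, and hyperparameters are illustrative choices, not taken from the abstract.

```python
# Minimal echo-state-network sketch illustrating why reservoirs are easy to train:
# the recurrent weights stay fixed and random; only a linear readout is fitted.
# All names and hyperparameters here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_res = 1, 200          # input and reservoir sizes (arbitrary choices)
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W_res = rng.normal(0.0, 1.0, (n_res, n_res))
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))   # scale spectral radius below 1

def run_reservoir(u):
    """Drive the fixed reservoir with input sequence u (T x n_in); return states (T x n_res)."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ u_t + W_res @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a noisy sine wave.
T = 1000
u = np.sin(0.1 * np.arange(T))[:, None] + 0.01 * rng.normal(size=(T, 1))
y = u[1:]                       # targets: next input value
X = run_reservoir(u)[:-1]       # reservoir states aligned with the targets

# Training touches only the linear readout (ridge regression), never the recurrent weights.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)

pred = X @ W_out
print("train MSE:", float(np.mean((pred - y) ** 2)))
```

The key design point the sketch makes explicit is that the only learned object is `W_out`, obtained by solving a linear least-squares problem, which is what makes reservoir training cheap compared to BPTT over the recurrent weights.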
The University of Birmingham, UK
Parametrized state space models in the form of recurrent networks are often used in machine learning to learn from data streams exhibiting temporal dependencies. To break the black-box nature of such models it is important to understand the dynamical features of the input-driving time series that are formed in the state space. I will talk about a framework for rigorous analysis of such state representations in vanishing-memory state space models, such as echo state networks (ESNs). In particular, we will view the state space as a temporal feature space and the readout mapping from the state space as a kernel machine operating in that feature space. This viewpoint leads to several rather surprising results linking the structure of the reservoir coupling matrix with properties of the dynamic feature space.
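The kernel-machine viewpoint can be illustrated with a small sketch: the reservoir acts as a temporal feature map φ that sends an input sequence to a state vector, and a linear readout on that state is a kernel machine with kernel k(u, v) = ⟨φ(u), φ(v)⟩. The code below is a toy illustration of this idea under assumed settings, not the rigorous framework of the talk; the function names and parameters are hypothetical.

```python
# Sketch of the "temporal feature space" view of an echo state network:
# phi(u) is the reservoir state reached after driving the network with sequence u,
# and the induced kernel between two sequences is the inner product of their states.
# Everything here is an illustrative assumption, not the framework from the abstract.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_res = 1, 100
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W_res = rng.normal(0.0, 1.0, (n_res, n_res))
W_res *= 0.8 / np.max(np.abs(np.linalg.eigvals(W_res)))   # vanishing-memory regime

def phi(u):
    """Temporal feature map: final reservoir state after driving with sequence u."""
    x = np.zeros(n_res)
    for u_t in u:
        x = np.tanh(W_in @ u_t + W_res @ x)
    return x

def reservoir_kernel(u, v):
    """Kernel between two input sequences induced by the reservoir feature map."""
    return phi(u) @ phi(v)

# Two similar sinusoidal sequences versus an unrelated noise sequence.
u = np.sin(0.2 * np.arange(50))[:, None]
v = np.sin(0.2 * np.arange(50) + 0.3)[:, None]
w = rng.normal(size=(50, 1))

print("k(u, v) =", reservoir_kernel(u, v))   # similar sequences -> typically larger value
print("k(u, w) =", reservoir_kernel(u, w))   # sine vs. noise    -> typically smaller value
```

In this picture, properties of the coupling matrix `W_res` (here only its spectral radius is controlled) shape the geometry of the feature space in which the readout operates, which is the link the abstract refers to.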