Deep Randomized Neural Networks

Tutorial @ WCCI 2020

Sunday, 19th July 2020 – Scottish Event Campus (SEC), Glasgow (UK)

Deep Neural Networks (DNNs) are a fundamental tool in modern Machine Learning. Beyond the merits of properly designed training strategies, a great part of the success of DNNs is undoubtedly due to the inherent properties of their layered architectures, i.e., to the architectural biases they introduce. In this tutorial, we analyze how far we can go by relying almost exclusively on these architectural biases. In particular, we explore recent classes of DNN models in which the majority of connections are randomized, or more generally fixed according to some specific heuristic, leading to the development of Fast and Deep Neural Network (FDNN) models. Examples of such systems include multi-layered neural network architectures where the connections to the hidden layer(s) are left untrained after initialization.

Limiting the training algorithms to operate on a reduced set of weights brings a number of intriguing features. Among them, the extreme efficiency of the resulting learning processes is a striking advantage over fully trained architectures. Moreover, despite the involved simplifications, randomized neural systems possess remarkable properties both in practice, achieving state-of-the-art results in multiple domains, and in theory, allowing one to analyze intrinsic properties of neural architectures (e.g., before training of the hidden layers' connections). In recent years, the study of randomized neural networks has been extended to deep architectures, opening new research directions for the design of effective yet extremely efficient deep learning models in vectorial as well as in more complex data domains.

This tutorial will cover the major aspects of the design and analysis of Fast and Deep Neural Networks, along with some key results on their approximation capabilities. In particular, the tutorial will first introduce the fundamentals of randomized neural models in the context of feedforward networks (i.e., Random Vector Functional Link and equivalent models), convolutional filters, and recurrent systems (i.e., Reservoir Computing networks). It will then focus on recent results in the domain of deep randomized systems and their application to structured domains (trees, graphs).
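In the recurrent setting mentioned above, the Reservoir Computing recipe follows the same pattern: a random recurrent layer (the reservoir) is fixed after initialization, and only a linear readout on the reservoir states is trained. Below is a minimal echo state network sketch; the reservoir size, input scaling, spectral radius of 0.9, and washout length are illustrative assumptions, not values prescribed by the tutorial:

```python
import numpy as np

rng = np.random.default_rng(42)

# Task: one-step-ahead prediction of a sine wave.
T = 500
u = np.sin(0.2 * np.arange(T + 1))
inputs, targets = u[:-1], u[1:]

# Reservoir: a random recurrent layer, rescaled to spectral radius < 1
# (a common heuristic for stable reservoir dynamics) and left untrained.
n_res = 100
W_in = rng.uniform(-0.5, 0.5, size=n_res)
W_res = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))

# Drive the reservoir with the input and collect its states.
states = np.zeros((T, n_res))
x = np.zeros(n_res)
for t in range(T):
    x = np.tanh(W_res @ x + W_in * inputs[t])
    states[t] = x

# Discard an initial transient (washout), then train only the readout.
washout = 50
S, Y = states[washout:], targets[washout:]
lam = 1e-6
W_out = np.linalg.solve(S.T @ S + lam * np.eye(n_res), S.T @ Y)
mse = np.mean((S @ W_out - Y) ** 2)
```

As in the feedforward case, all the adaptation happens in a single linear regression, which is what makes these models so efficient to train.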


Claudio Gallicchio (University of Pisa, Italy)

Claudio Gallicchio is Assistant Professor at the Department of Computer Science, University of Pisa. He is Chair of the IEEE CIS Task Force on Reservoir Computing, and a member of the IEEE CIS Data Mining and Big Data Analytics Technical Committee and of the IEEE CIS Task Force on Deep Learning. Claudio Gallicchio has organized several events (special sessions and workshops) at major international conferences (including IJCNN/WCCI, ESANN, ICANN) on themes related to Randomized Neural Networks. He serves on the program committees of several conferences and workshops in Machine Learning and Artificial Intelligence, and has been an invited speaker at several national and international conferences. His research interests include Machine Learning, Deep Learning, Randomized Neural Networks, Reservoir Computing, Recurrent and Recursive Neural Networks, and Graph Neural Networks.

Simone Scardapane (La Sapienza University of Rome, Italy)

Simone Scardapane is Assistant Professor at the “Sapienza” University of Rome. He is active as a co-organizer of special sessions and special issues on themes related to Randomized Neural Networks and randomized Machine Learning approaches. His research interests include Machine Learning, Neural Networks, Reservoir Computing and Randomized Neural Networks, Distributed and Semi-supervised Learning, Kernel Methods, and Audio Classification. Simone Scardapane is an Honorary Research Fellow with the CogBID Laboratory, University of Stirling, Stirling, U.K. He is a co-organizer of the Rome Machine Learning & Data Science Meetup, which organizes monthly events in Rome, and a member of the advisory board for Codemotion Italy. He is also a co-founder of the Italian Association for Machine Learning, a not-for-profit organization with the aim of promoting machine learning among the general public. In 2017 he was certified as a Google Developer Expert for machine learning. Currently, he is the track director for the CNR-sponsored “Advanced School of AI” (