Cognitive Neurorobotics, Okinawa Institute of Science and Technology, Japan
An analysis of the meta-level cognitive processes of a variational recurrent neural network model interacting with the environment.
My talk introduces a hierarchical variational RNN model that accounts for possible neuronal information-processing mechanisms assumed in the predictive coding and active inference frameworks. The model minimizes the free energy, expressed as a weighted sum of two terms: the reconstruction error and the complexity. I explain analytically how this weighting can affect the meta-level cognitive processes of the model as it interacts hierarchically with the environment.
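As a sketch, the weighted free-energy objective described above can be written as follows (the symbol w for the weighting, and the latent variable z, are notational assumptions for illustration rather than taken from the paper):

```latex
F \;=\; \underbrace{\mathbb{E}_{q(z \mid x)}\!\left[ -\ln p(x \mid z) \right]}_{\text{reconstruction error}}
\;+\; w \, \underbrace{D_{\mathrm{KL}}\!\left[ q(z \mid x) \,\|\, p(z) \right]}_{\text{complexity}}
```

A larger w penalises deviations of the approximate posterior from the prior more strongly, which is how the weighting can shift the model's behaviour between prior-driven and data-driven regimes.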
Reference: Ahmadi, A., & Tani, J. (2019). A novel predictive-coding-inspired variational RNN model for online prediction and recognition. Neural Computation, 31, 2025–2074.
Bio
Jun Tani received the D.Eng. degree from Sophia University, Tokyo, Japan, in 1995, having conducted research at the Sony Computer Science Laboratory from 1993. Afterwards, he was Team Leader of the Laboratory for Behavior and Dynamic Cognition at the RIKEN Brain Science Institute, Saitama, Japan, and then a Full Professor at KAIST, Daejeon, South Korea. Currently, Jun is a Full Professor at OIST, Japan, where he studies the principles of embodied cognition and mind through synthetic neurorobotics experiments under the framework of predictive coding and active inference, with the hope of reconstructing the development of infants' general cognitive minds in neurorobotics experiments.
Artificial Cognitive Systems, Donders Institute, Radboud University, Netherlands
How can the brain match the unreasonable effectiveness of backpropagation?
A biologically plausible theory of neural information processing should explain how the brain learns from experience. The unreasonable effectiveness of the backpropagation algorithm for training artificial neural networks suggests that the brain might use a similar gradient descent approach to learning. However, this suggestion has been dismissed on the grounds that backpropagation is biologically implausible. In this talk, I will argue that biologically implausible operations, such as the explicit propagation of gradient information, as well as the notorious weight transport problem, can be replaced by biologically plausible mechanisms. This may ultimately provide the basis for a biologically plausible learning rule that matches the performance of backpropagation and allows event-driven computing on neuromorphic devices.
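One well-known example of such a biologically plausible replacement (offered here as an illustration, not necessarily the mechanism proposed in this talk) is feedback alignment, in which the transposed forward weights required by backpropagation are replaced by fixed random feedback matrices, sidestepping the weight transport problem. A minimal NumPy sketch, with the toy task and all variable names purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: learn y = x @ T for a fixed target map T.
X = rng.standard_normal((200, 10))
T = rng.standard_normal((10, 2))
Y = X @ T

# Two-layer network. B is a FIXED random feedback matrix that
# replaces W2.T in the backward pass (no weight transport).
W1 = rng.standard_normal((10, 32)) * 0.1
W2 = rng.standard_normal((32, 2)) * 0.1
B = rng.standard_normal((2, 32)) * 0.1  # fixed, never trained

lr = 0.1
losses = []
for _ in range(1000):
    h = np.tanh(X @ W1)                # hidden activity
    y_hat = h @ W2                     # network output
    e = y_hat - Y                      # output error
    losses.append(float((e ** 2).mean()))
    # Feedback alignment: project the error through fixed B, not W2.T
    dh = (e @ B) * (1.0 - h ** 2)      # tanh derivative
    W2 -= lr * h.T @ e / len(X)
    W1 -= lr * X.T @ dh / len(X)

print(losses[0], losses[-1])
```

Empirically, the forward weights tend to align with the fixed feedback weights over training, so the error signal becomes a useful (if approximate) gradient direction and the loss decreases.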
Bio
Marcel van Gerven received his PhD degree from Radboud University, Nijmegen, the Netherlands, in 2007. Thereafter, he held several postdoctoral and faculty appointments at the Departamento de Inteligencia Artificial, UNED, Madrid, Spain, and at Radboud University. Currently, Marcel is a Full Professor at Radboud University and a PI at both the Donders Centre for Cognition and the Institute for Brain, Cognition and Behaviour. There, he studies how the brain operates under naturalistic conditions using neural network models of human brain function, and aims to develop new neural network architectures to create intelligent machines that outperform humans on cognitively challenging tasks.
Brain Intelligence Theory, RIKEN Center for Brain Science, Japan
Reverse engineering the Bayesian aspects of canonical neural networks.
We identify a class of biologically plausible cost functions for canonical neural networks of rate coding neurons, where the same cost function is minimised by both neural activity and plasticity. We then demonstrate that such cost functions can be cast as variational free energy under an implicit generative model in the well-known form of partially observed Markov decision processes. This equivalence means that the activity and plasticity in a canonical neural network can be understood as approximate Bayesian inference and learning, respectively. The firing thresholds – that characterise the neural network cost function – correspond to prior beliefs about hidden states in the generative model, meaning that the Bayes optimal encoding of hidden states is attained when the network’s implicit priors match the process generating its sensory inputs. These results highlight the potential utility of reverse engineering generative models to characterise the neuronal mechanisms underlying Bayesian inference and learning.
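For reference, the standard form of the variational free energy invoked above can be written as follows (here q(s) denotes the approximate posterior over hidden states s and o the observations; this generic notation is assumed for illustration rather than taken from the paper):

```latex
F \;=\; \mathbb{E}_{q(s)}\!\left[ \ln q(s) - \ln p(o, s) \right]
\;=\; D_{\mathrm{KL}}\!\left[ q(s) \,\|\, p(s \mid o) \right] - \ln p(o)
```

Minimising F therefore drives q(s) toward the Bayesian posterior p(s | o), which is the sense in which neural activity minimising the equivalent network cost function performs approximate Bayesian inference.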
Reference: Isomura, T., & Friston, K. (2020). Reverse-engineering neural networks to characterize their cost functions. Neural Computation, 32(11), 2085–2121.
Bio
Takuya Isomura received his Ph.D. degree from the University of Tokyo, Japan, in 2017. Afterwards, he was a Special Postdoctoral Researcher in the Laboratory for Neural Computation and Adaptation at the RIKEN Center for Brain Science, Saitama, Japan. Since 2020, he has been Unit Leader of the Brain Intelligence Theory Unit at the same institute. He conducts computational research to reveal the neuronal mechanisms underlying sentient behaviour and to develop biologically inspired machine learning schemes – through mathematical analyses of the process by which neural circuits self-organise generative models – together with experimental validations using in vitro neural networks.
Brain Network Dynamics, IRCN, The University of Tokyo, Japan
Global and local brain dynamics underlying typical and atypical human intelligence.
Conceptually, the information processing underpinning complex human cognition should rely primarily on the brain-wide distribution patterns and hierarchy of local neural dynamics and the resultant global brain-state dynamics. Here, I will discuss several data-driven analyses for elucidating such local and global neural dynamics, and show some information-processing mechanisms that these methods have identified in typical and atypical (here, mainly high-functioning autistic adults) human brains. In addition, I will introduce a new non-invasive brain stimulation tool to efficiently intervene in such brain-state dynamics and, ultimately, to control human cognitive and behavioural tendencies.
Bio
Takamitsu Watanabe completed medical and research training programmes (MD, 2007; PhD, 2013) at The University of Tokyo, Japan. Afterwards, he conducted JSPS and Marie Curie postdoctoral research at University College London, UK, and was appointed Deputy Team Leader of the Miyashita lab at the RIKEN Center for Brain Science, Japan. Currently, Takamitsu is an Associate Professor at IRCN, The University of Tokyo, where he studies human brain dynamics to understand the neurobiological mechanisms behind various human cognitive functions, viewed as neural dynamics on a complex network.