This piece, by Armin Bazajani, a PhD student at the Piray Lab, was published on 09/16/24.
Neuroscience puts the intelligence in Artificial Intelligence. Its impact on AI, from the artificial “neurons” in neural networks to activation functions, cannot be overstated. Yet AI seems to diverge more and more from the brain as it moves forward, creating fundamental differences between the two. While some claim that AI has gotten all it can from neuroscience, others disagree, arguing that we are entering a “new era” of computing and AI, one that may require us to go back to the basics: back to the brain.
Recently, I came across a YouTube video[11] featuring a panel discussion with preeminent experts in artificial intelligence, many of whom have backgrounds in computational and cognitive neuroscience. They engaged in a thought-provoking discussion exploring the bidirectional relationship between AI and neuroscience.
At this point, deep learning is in everyone’s vocabulary. Surprisingly, most people (the readers of this newsletter perhaps an exception) aren’t aware of the profound impact that the brain has had on the advancement of deep learning. In fact, even fewer people know that Geoff Hinton (Turing Award winner and one of the “fathers” of deep learning) was trained as a psychologist, and although his PhD is in Artificial Intelligence, the work was more akin to cognitive science. As another example, Demis Hassabis (CEO of DeepMind) was a trained neuroscientist before he started DeepMind. And the list continues. Regarding neuroscience’s impact on AI, where do we begin? Perhaps the most well-known example is the “neurons” in our neural networks, which resulted from an attempt to model actual neurons in the brain[8]. Interestingly, in the video Hinton also attributes the inspiration for the ReLU activation function[9] and dropout[14] to the brain. Even at DeepMind, Demis and Matt Botvinick (Director of Neuroscience Research) claim that replay in AlphaGo[13], a key feature in getting it to work, was inspired by replay in the brain[2].
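For readers who haven’t met those two mechanisms in code, here is a minimal NumPy sketch of ReLU and (inverted) dropout; the values, shapes, and dropout rate are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # ReLU: pass positive inputs through unchanged and zero out the rest,
    # loosely reminiscent of a neuron that only fires above threshold.
    return np.maximum(0.0, x)

def dropout(x, p=0.5, training=True):
    # Dropout: randomly silence a fraction p of units during training;
    # surviving activations are rescaled by 1/(1-p) so the expected
    # value matches test time, when nothing is dropped.
    if not training:
        return x
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

x = np.array([-1.0, 0.5, 2.0])
print(relu(x))                  # the negative input is zeroed
print(dropout(relu(x), p=0.5))  # survivors are scaled up by 2
```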
Conversely, the idea of mathematically explaining the brain was around long before the deep learning revolution. Boltzmann Machines, a precursor to modern neural networks, were popularized in cognitive science by Geoff Hinton and Terry Sejnowski[5, 1, 6]. Additionally, many researchers, myself included, use reinforcement learning as a model of dopaminergic learning in the brain[3, 10]. There has been, and will continue to be, a productive relationship in this direction as we seek normative models of brain function. In particular, there has been an observable shift toward using deep learning models to gain further insight into the brain[12]. But this isn’t the point of this piece; we are more interested in the converse direction.
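To make the dopamine connection concrete, here is a minimal temporal-difference (TD) learning sketch: the TD prediction error, delta, is the quantity classically compared to phasic dopamine responses. The state names, learning rate, and reward schedule below are illustrative assumptions of mine, not taken from the cited papers.

```python
# V holds learned value estimates for two toy states:
# a predictive "cue" and the terminal "end" state.
V = {"cue": 0.0, "end": 0.0}
alpha, gamma = 0.2, 1.0  # learning rate and discount factor

def td_update(s, s_next, r):
    # TD error: actual outcome (reward + discounted next value)
    # minus the current prediction V(s).
    delta = r + gamma * V[s_next] - V[s]
    V[s] += alpha * delta
    return delta

first = td_update("cue", "end", r=1.0)  # reward fully unexpected: delta = 1.0
for _ in range(100):
    last = td_update("cue", "end", r=1.0)
# After repeated pairings the cue comes to predict the reward and delta
# shrinks toward zero, mirroring how dopamine responses diminish for
# well-predicted rewards.
```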
Now we need to discuss the obvious divorce between the two. Yes, there are neurons in both biological and artificial neural networks. But communication in the brain happens in an asynchronous, continuous, biochemical way, while communication in an artificial neural network works by taking the numeric state of one node, scaling it by the weight of the connection between the two nodes, and adding that weighted contribution to the activation of the receiving node. So already the fundamentals are very different between the two approaches (with the biological variant being far more complex and elaborate). We also have no idea how backpropagation could feasibly occur in the brain, although there have been attempts to draw parallels[7, 15]. Personally, this feels a bit forced, but the overlaps are interesting nonetheless, and I’m open to the idea of a functionally similar algorithm occurring in the brain.
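That artificial-neuron arithmetic fits in a few lines; the weights, activations, and bias below are arbitrary numbers chosen for illustration.

```python
import numpy as np

def artificial_neuron(inputs, weights, bias):
    # Scale each sending node's activation by its connection weight,
    # sum the weighted contributions, add a bias, then apply a
    # nonlinearity (here ReLU) to get the receiving node's activation.
    pre_activation = float(np.dot(weights, inputs) + bias)
    return max(0.0, pre_activation)

x = np.array([0.5, -1.0, 2.0])  # activations of the sending nodes
w = np.array([0.8, 0.3, 0.5])   # connection weights
print(artificial_neuron(x, w, bias=0.1))  # one synchronous, purely numeric step
```

Contrast this single deterministic sum with a biological neuron integrating thousands of asynchronous chemical inputs over continuous time, and the gap between the two becomes obvious.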
Going back to the talk, it seems that both Geoff Hinton and Demis Hassabis share the belief that we have already gotten most of the inspiration from the brain that we could, and that new AI developments will come almost exclusively from engineering. This is something that Yann LeCun (who shared the Turing Award with Hinton and is considered another father of deep learning) has also echoed. Hinton claims that maybe we’ve created a superior form of intelligence, one that couldn’t have evolved due to how energy intensive it is, and that we must now draw a delineation between the two. Demis argues we’re entering a “new era” of computing and AI, driven by the internet’s vast and growing data. Interestingly, he suggests that what we should be taking from neuroscience now are analysis techniques, rather than inspirations for AI architectures.
The brain-AI connection seems intuitive: we have this compact, energy-efficient organ producing intelligence, art, and curiosity. While I think there are probably a few more insights to be gleaned from the brain in our pursuit of better AI, especially in planning and episodic memory, at this point I’m not convinced there will be another “breakthrough.” Maybe it will come in the form of neuromorphic computing or NeuroAI architectures, but I personally don’t see it happening anytime soon. Many expected the brain to inspire breakthroughs in continual learning, a capability at which it excels. However, a recent seminal advance in this field[4] came purely from engineering, not neuroscience. And that’s because there is no “brain-like” way to solve these problems in deep learning; the fields have fundamentally diverged from their once closer relationship.
After being involved in both AI and now computational neuroscience research, I can say that the flow of ideas from one field to the other is quite lopsided. And that’s okay. We don’t need to dangle this carrot of super-human AI in front of neuroscience research to motivate it; we do it because understanding the brain is a fundamentally interesting endeavor.
[1] D. H. Ackley, G. E. Hinton, and T. J. Sejnowski. A learning algorithm for boltzmann machines. Cognitive Science, 1985.
[2] M. F. Carr, S. P. Jadhav, and L. M. Frank. Hippocampal replay in the awake state: a potential substrate for memory consolidation and retrieval. Nature, 2011.
[3] P. Dayan and B. W. Balleine. Reward, motivation, and reinforcement learning. Neuron, 2002.
[4] S. Dohare, J. F. Hernandez-Garcia, Q. Lan, P. Rahman, A. R. Mahmood, and R. S. Sutton. Loss of plasticity in deep continual learning. Nature, 2024.
[5] G. E. Hinton and T. J. Sejnowski. Analyzing cooperative computation. Proceedings of the Fifth Annual Conference of the Cognitive Science Society, 1983.
[6] G. E. Hinton and T. J. Sejnowski. Parallel distributed processing: Explorations in the microstructure of cognition, vol. 1. chapter Learning and Relearning in Boltzmann Machines. MIT Press, 1986.
[7] T. P. Lillicrap, A. Santoro, L. Marris, C. J. Akerman, and G. Hinton. Backpropagation and the brain. Nature, 2020.
[8] W. S. McCulloch and W. Pitts. A logical calculus of the ideas immanent in nervous activity. The Bulletin of Mathematical Biophysics, 1943.
[9] V. Nair and G. E. Hinton. Rectified linear units improve restricted boltzmann machines. International Conference on Machine Learning, 2010.
[10] Y. Niv. Reinforcement learning in the brain. Journal of Mathematical Psychology, 2009.
[11] T. Poggio, D. Hassabis, G. Hinton, P. Perona, D. Siegel, and I. Sutskever. CBMM10 panel: Research on intelligence in the age of AI. https://www.youtube.com/watch?v=Gg-w_n9NJIE&ab_channel=MITCBMM, Nov 2023.
[12] B. A. Richards, T. P. Lillicrap, P. Beaudoin, Y. Bengio, et al. A deep learning framework for neuroscience. Nature Neuroscience, 2019.
[13] D. Silver, A. Huang, C. Maddison, et al. Mastering the game of go with deep neural networks and tree search. Nature, 2016.
[14] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 2014.
[15] J. C. Whittington and R. Bogacz. Theories of error back-propagation in the brain. Trends in Cognitive Sciences, 2019.