Day 1 (25th Oct)
Introductory neuroscience (Elaine Murray): This session will provide an overview of the structure and function of the nervous system. The lecture will start with a review of the divisions of the nervous system and the main cell types, neurons and glia. An introduction to basic neuroanatomy will follow, covering key external and internal structures of the brain and the main components of the systems controlling movement, learning and memory, and emotional regulation. Understanding neuronal processes and pathologies requires an understanding of how neurons work, so an overview of the action potential, the electrical signal neurons use to carry information to their targets, will be provided. Finally, the main steps involved in synaptic transmission, including the neurotransmitters responsible for chemical signalling in the nervous system, will be reviewed.
Cognitive neural systems and behaviour (Simon Kelly): Our knowledge of the brain processes underlying perception, cognition and action has come from research using a spectrum of levels of analysis from the molecular, through single neurons and circuits, to behaviour. This lecture provides an introduction to the behavioural end of this spectrum. I will give a broad overview of basic methods of psychophysics - the systematic measurement of behaviour - and how these methods provide insights into properties and mechanisms of the neural systems for basic sensation and for cognitive functions such as attention, decision making and memory and learning. I will discuss how the principled measurement of behaviour helps to make sense of even the lowest-resolution forms of neural activity measurements amenable to human research, and conversely, how such neural activity measurements, once well-characterised functionally, can inform simple mathematical models that capture not just the observed patterns of behaviour but also the underlying algorithms the brain is using to generate them. A core thread running through all of these themes will be the importance of careful task design - what we ask our subjects to do and under what conditions.
Mathematics for neuroscience: An overview (Áine Byrne): The use of mathematics has many historical successes, particularly in the realm of physics and engineering, where mathematical concepts are regularly employed to address challenges far beyond the context in which they were originally developed. More recently, mathematics has been employed to further our understanding of biological systems, such as the brain. Despite the immense complexity of the brain, mathematical modelling has allowed for major advances to be made towards understanding behaviour, consciousness and disease. This lecture introduces the mathematical tools needed for mathematically modelling the brain. We will review concepts from linear algebra, vector calculus and differential equations. We will learn how to describe neural systems using differential equations and how to simulate these equations computationally.
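To give a flavour of the final step above, here is a minimal sketch of simulating a neural differential equation computationally: a leaky integrate-and-fire neuron integrated with the forward Euler method. All parameter values are illustrative assumptions, not material from the lecture.

```python
import numpy as np

# Forward-Euler simulation of a leaky integrate-and-fire neuron:
#   tau * dV/dt = -(V - V_rest) + R * I
# Parameter values below are illustrative, not from the lecture.
tau, v_rest, v_thresh, v_reset, r_m = 10.0, -65.0, -50.0, -65.0, 10.0  # ms, mV, MOhm
dt, t_max, i_ext = 0.1, 100.0, 2.0  # ms, ms, nA

steps = int(t_max / dt)
v = np.full(steps, v_rest)
spike_times = []
for t in range(1, steps):
    dv = (-(v[t - 1] - v_rest) + r_m * i_ext) / tau
    v[t] = v[t - 1] + dt * dv          # Euler update
    if v[t] >= v_thresh:               # threshold crossing -> spike
        spike_times.append(t * dt)
        v[t] = v_reset                 # reset membrane potential
```

The same Euler loop pattern applies to any first-order differential equation model of a neural system; a smaller `dt` gives a more accurate approximation.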
Day 2 (26th Oct)
Computational modelling of plasticity & learning in brains (Cian O'Donnell): This lecture will introduce the basics of how we think learning works in the brain, and common computational models of synaptic plasticity at the single synapse, single neuron, and neural circuit levels. It will cover classic models of Hebbian plasticity, spike-timing-dependent plasticity, and attractor networks. Finally, we will briefly discuss modern attempts to link brain learning to backpropagation and deep learning in artificial neural networks.
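As a concrete illustration of one of the classic models mentioned, the following is a minimal sketch of a pairwise spike-timing-dependent plasticity (STDP) rule. The amplitudes and time constants are illustrative assumptions, not values from the lecture.

```python
import numpy as np

# Pairwise STDP: the weight change depends on relative spike timing
# (dt = t_post - t_pre). Pre-before-post (dt > 0) potentiates (LTP);
# post-before-pre depresses (LTD). Parameter values are illustrative.
a_plus, a_minus = 0.01, 0.012        # update amplitudes
tau_plus, tau_minus = 20.0, 20.0     # exponential window time constants (ms)

def stdp_dw(dt_ms):
    """Weight change for a single pre/post spike pair."""
    if dt_ms > 0:
        return a_plus * np.exp(-dt_ms / tau_plus)    # LTP branch
    return -a_minus * np.exp(dt_ms / tau_minus)      # LTD branch

w = 0.5
for t_pre, t_post in [(10.0, 15.0), (40.0, 35.0)]:   # two example spike pairs
    w += stdp_dw(t_post - t_pre)
```

Note the exponential decay of the window: pairs with larger timing differences produce smaller weight changes.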
Glial cells: Capturing human intelligence in AI systems; Brain-inspired hardware self-repair (Liam McDaid & John Wade): This talk briefly discusses brain-inspired intelligence, or AI, and why AI lags significantly behind human intelligence. A significant point discussed is whether AI can truly mimic human intelligence and what factors may allow this to happen in the future. The talk then presents research carried out at Ulster on modelling brain function, with a significant focus on how different cell types interact. Of specific interest are glial cells, and in particular astrocytes. Results from recent research at Ulster will be presented, serving to illustrate our current understanding of the complexity of cellular signalling between neurons and astrocytes, and why advancing our understanding of how these cell types exchange information is vital to understanding both low- and high-level brain function.
Neural network dynamics and modelling of cognitive functions (KongFatt Wong-Lin): This lecture will first discuss neural network models that are conducive for theoretical analysis and conceptual understanding. Then examples of how different neural network dynamics can lead to different cognitive functions will be discussed. A primary focus of this lecture is on understanding the network mechanism of decision-making, and it shall be demonstrated how neural network models can be adapted to produce different decision-making behaviour.
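One of the simplest reduced descriptions of two-choice decision dynamics, often discussed alongside attractor-network models of decision-making, is a drift-diffusion process in which noisy evidence accumulates to one of two bounds. The sketch below uses illustrative parameters, not values from the lecture.

```python
import numpy as np

# Drift-diffusion sketch of two-choice decision-making: evidence accumulates
# noisily until it hits +bound (correct) or -bound (error).
# Parameters are illustrative.
rng = np.random.default_rng(3)
drift, noise, bound, dt = 0.5, 1.0, 1.0, 0.001

def one_trial():
    """Run one trial; return (choice_was_correct, reaction_time_seconds)."""
    evidence, t = 0.0, 0.0
    while abs(evidence) < bound:
        evidence += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return evidence > 0, t

choices, rts = zip(*(one_trial() for _ in range(200)))
accuracy = np.mean(choices)   # fraction of bound crossings on the drift side
```

Raising the bound trades speed for accuracy, one way such models capture different modes of deliberation.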
An introduction to model-free and model-based reinforcement learning and their application to cognitive neuroscience (Mehdi Khamassi): The model-free reinforcement learning (RL) framework, and in particular Temporal-Difference (TD) learning algorithms, has been successfully applied in neuroscience for about 25 years. It can account for dopamine reward prediction error signals in simple Pavlovian and single-step (instrumental) decision-making tasks. However, more complex multi-step tasks expose its computational limitations.
In parallel, the last 10 years have seen a growing interest in computational models for the coordination of different types of decision-making systems, e.g. model-free and model-based RL. Model-based here means that the subjects try to learn an internal model of the statistical structure of the task (like a cognitive map in spatial tasks), and can plan based on mental simulations within such a model.
Computational models for the coordination of multiple decision-making systems make it possible to explain more diverse behaviours and learning strategies in humans, monkeys and rodents. They explain shifts between different modes of deliberation (fast responses versus long deliberations before responding), and they help clarify the respective roles of the prefrontal cortex areas, hippocampus, basal ganglia and dopaminergic system in different learning and decision-making tasks.
I will illustrate this line of research with a didactic presentation of first simple models, and then more complex models for the coordination of model-free and model-based reinforcement learning. I will then show a variety of behavioral and neurophysiological results in different paradigms (navigation tasks, classical conditioning tasks, instrumental learning tasks, working-memory tasks, social interaction tasks).
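The reward prediction error at the heart of the model-free account can be illustrated with a minimal tabular TD(0) learner on a toy chain of states; the environment, learning rate and discount factor are illustrative assumptions.

```python
# Tabular TD(0) on a 5-state chain with a reward at the terminal state.
# The TD error (delta) is the "reward prediction error" that model-free RL
# associates with phasic dopamine signals. All values are illustrative.
n_states, alpha, gamma = 5, 0.1, 0.9
values = [0.0] * n_states            # value estimate per state

for _ in range(500):                 # episodes
    s = 0
    while s < n_states - 1:
        s_next = s + 1               # deterministic forward move
        terminal = s_next == n_states - 1
        reward = 1.0 if terminal else 0.0
        target = reward + (0.0 if terminal else gamma * values[s_next])
        delta = target - values[s]   # reward prediction error
        values[s] += alpha * delta   # TD(0) update
        s = s_next
```

After learning, values fall off with distance from the reward (roughly as powers of `gamma`), mirroring how predicted reward propagates backwards to earlier cues.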
Day 3 (27th Oct)
Investigating time series neural data: Experimental design & processing (Saugat Bhattacharyya): Recent advances in neuroscience technologies have paved the way to innovative applications in healthcare, rehabilitation, biometrics and brain-computer interfacing. These technologies are tuned to observe and influence brain activity to augment or assist human motor or cognitive development. Neural activity is recorded using invasive or non-invasive technologies, though non-invasive technologies, such as electroencephalography (EEG), magnetoencephalography (MEG), functional near-infrared spectroscopy (fNIRS) and functional magnetic resonance imaging (fMRI), are the most popular among researchers and users. Non-invasive neural signals recorded from EEG or MEG devices are non-stationary, complex signals. Hence, it is vital to follow standard experimental design practices to evoke or induce the necessary task response among users, and to apply time-, frequency- or time-frequency-domain processing methods to extract meaningful information about those task responses from the neural signals (EEG/MEG). In this lecture, you will be introduced to some standard practices and considerations when designing an experiment involving EEG/MEG recording, necessary pre-processing methods including temporal and spatial filtering and artefact removal, and finally signal processing using time-frequency and inter-trial phase clustering techniques.
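As a small illustration of time-frequency processing, the following sketch computes short-time FFT power of a synthetic "EEG" trace containing an alpha-band (10 Hz) burst. The sampling rate, window length and signal are illustrative assumptions, not details from the lecture.

```python
import numpy as np

# Hann-windowed short-time FFT of a synthetic signal: a 10 Hz burst in the
# second half of a 4 s trace, plus Gaussian noise. Values are illustrative.
fs = 250.0                                   # sampling rate (Hz)
t = np.arange(0, 4.0, 1.0 / fs)
sig = np.where(t >= 2.0, np.sin(2 * np.pi * 10 * t), 0.0)
sig = sig + 0.1 * np.random.default_rng(0).standard_normal(t.size)

win_len, hop = 250, 125                      # 1 s window, 50% overlap
window = np.hanning(win_len)
frames = [sig[i:i + win_len] * window
          for i in range(0, sig.size - win_len + 1, hop)]
power = np.abs(np.fft.rfft(frames, axis=1)) ** 2   # (n_frames, n_freqs)
freqs = np.fft.rfftfreq(win_len, 1.0 / fs)
alpha_idx = np.argmin(np.abs(freqs - 10.0))
alpha_power = power[:, alpha_idx]            # 10 Hz power per time frame
```

The 10 Hz power is concentrated in the later frames, recovering when in the trace the burst occurred, which is the basic logic behind time-frequency analyses of task responses.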
Non-invasive brain-computer interfaces: Enhancing applicability using computational intelligence and technological advances (Girijesh Prasad): A brain-computer interface (BCI), also known as a brain-machine interface (BMI), utilizes neuro-physiological correlates of voluntary mental tasks to facilitate direct communication between the human brain and computing devices without the involvement of neuro-muscular pathways. BCI research is, in general, progressing in two main areas: augmentative & alternative communication, which replaces neuro-muscular pathways, and neuro-rehabilitation, where BCI can help to activate desired cortical areas for targeted brain plasticity. Current BCI systems, however, lack sufficient robustness, and performance variability among users is quite high. One critical limitation stems from the non-stationary characteristics of the brain's neurophysiological responses, which make it hard to extract time-invariant, stable features unique to voluntary mental tasks.
In this talk, the presentation will first briefly review state-of-the-art BCI research and then discuss our computational intelligence supported R&D towards robust BCI design and our current application focus in post-stroke neuro-rehabilitation. In particular, it will discuss how integrating an EEG-EMG based BCI with a hand exoskeleton results in a personalized post-stroke neuro-rehabilitation system that ensures active and engaging exercises and leads to enhanced recovery of the paralyzed upper limbs. Also, to take advantage of MEG's highest spatiotemporal resolution (306 channels, Triux, Elekta, recorded at 1 kHz) among all neuroimaging modalities, the development of an MEG-based BCI controlling an MEG-compatible hand exoskeleton located in a magnetically shielded room (MSR) will be discussed. Finally, the remaining R&D challenges will be highlighted.
Introduction to the statistical methodology for brain connectivity analysis (Jose Sanchez Bornot): Research on brain functional connectivity is critical to improving our understanding of neural information processing. This is a very active area where many different approaches converge, e.g. based on information theory, time series analysis or dynamical systems, and where different conclusions can be reached using different neuroimaging modalities. For example, MRI is used mainly to study anatomical/structural brain changes, whereas fMRI can reflect brain functional connectivity changes with better spatial accuracy; the oscillatory phenomena of neural dynamics, by contrast, cannot be studied without EEG/MEG data. In this lecture, an introduction to the different challenges will be presented while discussing techniques such as Granger causality, imaginary coherence and dynamic causal modelling, as well as the challenges associated with the statistical analyses involving different neuroimaging data.
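As a rough illustration of the idea behind Granger causality, the sketch below compares the residual variances of two autoregressive fits on synthetic data: does the past of x improve prediction of y beyond y's own past? The lag order, coefficients and the simple variance-ratio index are illustrative simplifications of the full statistical (F-test) procedure.

```python
import numpy as np

# Synthetic bivariate data where x drives y with one step of delay.
rng = np.random.default_rng(2)
n = 2000
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + 0.1 * rng.standard_normal()

# Restricted model: y[t] ~ y[t-1] (y's own past only)
A_r = np.column_stack([y[:-1], np.ones(n - 1)])
res_r = y[1:] - A_r @ np.linalg.lstsq(A_r, y[1:], rcond=None)[0]
# Full model: y[t] ~ y[t-1] + x[t-1]
A_f = np.column_stack([y[:-1], x[:-1], np.ones(n - 1)])
res_f = y[1:] - A_f @ np.linalg.lstsq(A_f, y[1:], rcond=None)[0]

# Positive log variance ratio suggests x Granger-causes y.
gc_index = np.log(res_r.var() / res_f.var())
```

In practice one selects the lag order, tests significance formally, and remains wary of confounds such as unobserved common drivers, issues the lecture's statistical discussion addresses.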
Decoding mental imagery from electroencephalography (EEG) and applications of AI-enabled wearable neurotechnology for communication and rehabilitation (Damien Coyle): Research in the field of brain-computer interfaces (BCIs) and neurotechnology has proven that electrical signals in the brain, modulated intentionally by mental imagery, can relay information directly to a computer, where they are translated by intelligent algorithms (some inspired by the brain's neural networks) into control signals that enable communication and control without movement, or can improve self-regulation of brain activity. This talk will present results from research at the Intelligent Systems Research Centre showing that people with restricted abilities resulting from disease, injury or trauma may benefit from neurotechnology, including those who have prolonged disorders of consciousness or locked-in syndrome following traumatic brain injury, spinal injury, stroke and post-traumatic stress disorder.
Neural activity can be modulated by many kinds of mental imagery; e.g., classical motor imagery BCIs distinguish between imagined hand/arm movements. This presentation will also show recent results in decoding imagined three-dimensional limb movements, imagined primitive shapes, emotion-inducing imagery and silent/imagined speech from EEG. The presentation will address the question of whether it is feasible to expect high and robust performance with these types of imagery in EEG-based BCIs, and will highlight results indicating that user proficiency in BCI control is a matter of training time, machine learning/AI capability, application of the technology and maintenance of stable affective states. A number of neurogaming applications that enhance BCI user training will be demonstrated.
Day 4 (28th Oct)
Neuro-inspired computation: Spiking neural networks (Nikola Kasabov): The lecture introduces the third generation of artificial neural networks, spiking neural networks (SNN), as the latest methods and systems for neuro-inspired computation, along with their numerous applications. SNN are not only capable of deep learning on temporal or spatio-temporal data, but also enable the extraction of knowledge representations from the learned data. Similarly to how the brain learns, these SNN models need not be restricted in the number of layers, the number of neurons in each layer, etc., as they adopt the self-organising learning principles of the brain [ref. 1,2].
The lecture consists of 3 parts:
Fundamentals of SNN
Brain-inspired SNN architectures. NeuCube.
Design and implementation of selected applications
The material is illustrated using the exemplar SNN architecture NeuCube (free and open-source software available from www.kedri.aut.ac.nz/neucube). Case studies are presented of brain and environmental data modelling and knowledge representation using incremental and transfer learning algorithms. These include: predictive modelling of EEG and fMRI data measuring cognitive processes and response to treatment; prediction of dementia and Alzheimer's disease (AD) [3]; understanding depression; predicting environmental hazards and extreme events; moving object recognition and control; and brain-inspired audio-visual information processing.
It is also demonstrated that SNN allow for knowledge transfer between humans and machines through building brain-inspired Brain-Computer Interfaces (BI-BCI) [4]. These are used to understand human-to-human knowledge transfer through hyper-scanning and also to create brain-like neuro-rehabilitation robots. This opens the way to building a new type of AI system: open and transparent AI.
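To make the basic SNN computation concrete, here is a minimal sketch of a feedforward spiking layer with leaky integrator neurons. This is a generic toy model, not NeuCube; the sizes, weights and constants are illustrative assumptions.

```python
import numpy as np

# Toy feedforward spiking layer: binary input spike trains drive leaky
# integrator output neurons that emit a spike on threshold crossing.
# All sizes, weights and constants are illustrative.
rng = np.random.default_rng(1)
n_in, n_out, n_steps = 20, 5, 100
weights = rng.uniform(0.0, 0.5, size=(n_out, n_in))
leak, threshold = 0.9, 2.0

v = np.zeros(n_out)                    # membrane potentials
out_spikes = np.zeros((n_steps, n_out))
for step in range(n_steps):
    in_spikes = (rng.random(n_in) < 0.2).astype(float)  # Poisson-like input
    v = leak * v + weights @ in_spikes                  # leaky integration
    fired = v >= threshold
    out_spikes[step] = fired
    v[fired] = 0.0                                      # reset after a spike
```

Because information is carried by spike timing rather than static activations, such layers naturally process temporal and spatio-temporal data streams.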
References:
1. N. K. Kasabov, "NeuCube: A spiking neural network architecture for mapping, learning and understanding of spatio-temporal brain data," Neural Networks, vol. 52, pp. 62-76, 2014.
2. N. Kasabov, Time-Space, Spiking Neural Networks and Brain-Inspired Artificial Intelligence, Springer, 2019, https://www.springer.com/gp/book/9783662577134.
3. M. Doborjeh, …, N. Kasabov, "Personalised Predictive Modelling with Spiking Neural Networks of Longitudinal MRI Neuroimaging Cohort and the Case Study for Dementia," Neural Networks, vol. 144, pp. 522-539, Dec. 2021, https://doi.org/10.1016/j.neunet.2021.09.013 (available from https://authors.elsevier.com/c/1dsCu3BBjKgGro).
4. K. Kumarasinghe, N. Kasabov, D. Taylor, "Deep Learning and Deep Knowledge Representation in Spiking Neural Networks for Brain-Computer Interfaces," Neural Networks, vol. 121, pp. 169-185, Jan. 2020, https://doi.org/10.1016/j.neunet.2019.08.029.
Meta-cognition and learning from high-dimensional streaming data (Savitha Ramasamy): Learning in humans is continuous and does not need repeated retraining or reminding. By contrast, learning in machines is cumbersome, and models do not generalize beyond the tasks stipulated in the training data. In addition, even with a surplus flow of streaming data, a trained model remains fixated on the training data distribution. There is therefore a need to understand learning behaviour in humans and use it as an inspiration for learning from streaming data. The talk will first introduce human principles of meta-cognitive learning and show how these can inspire the development of meta-cognitive learning algorithms for streaming non-stationary data.
Building reliable and secure embedded systems with neuromorphic computing (Jim Harkin): The demand for increasingly more ‘intelligent’ computing systems must be viewed in light of the explosion in their complexity. An important knock-on effect, however, is a degradation in reliability: designing reliable electronic systems is a major challenge. Self-repair is critical in hardware systems where long-term reliable performance is not guaranteed. Increasing gate densities, scaling to sub-nanometer geometries and variations in silicon manufacturing create additional challenges.
Current self-repairing hardware approaches rely on a central controller, with constraints placed on the type and number of faults (e.g. open/short-circuits) and repair granularity. There is a pressing need to progress beyond these concepts and look for inspiration from biology.
While state-of-the-art hardware devices and neuromorphic chips replicate, to an extent, the brain's information processing paradigm, they are not fault-tolerant and can develop faults due to post-manufacturing defects, wear-out failures, or radiation effects. The human brain, however, exhibits high levels of distributed repair, and it has recently emerged that interactions between astrocyte cells and spiking neurons provide a distributed repair paradigm with the potential to establish new approaches to reliable information processing in hardware.
This lecture establishes the current challenges in capturing self-repair capabilities in electronic hardware and outlines progress in addressing the interconnect complexity of communicating vast quantities of information while enabling large-scale hardware implementations of self-repairing neural networks. In addition, methods for accelerating such neural networks in hardware will be discussed, along with the challenges remaining for future deployment. Example applications of SNNs in hardware security, for the detection of anomalous traffic, and in the prediction of traffic congestion will also be presented.
Towards responsible brain research and applications (Arleen Salles): Ethical assessment of scientific research and its applications, including anticipation of societal expectations, is central to participative approaches such as Responsible Research and Innovation (RRI) that have been developed to govern the ethical challenges of science and technology. The goal to identify and reflect upon scientific and technological impacts and to promote engagement with diverse stakeholders is aligned with the idea that neither science nor its products are value neutral. This lecture provides an overview of RRI and how it has been used as a framework to address the ethical, philosophical and societal issues raised by neuroscience and emerging neurotechnologies. I will focus on some of the advantages and limits of this approach and will introduce the notion of Responsibility by Design as a way to progress beyond RRI.
Day 5 (29th Oct)
Neuromorphic vision (Shane Harrigan): This lecture first presents and discusses the growing field of neuromorphic vision, which is concerned with the design and use of neuromorphic vision sensors that emulate the retinal neural behaviours exhibited in biological vision systems. These sensors are bio-inspired both in image acquisition and in communication, producing asynchronous signals at the sensor level akin to retinal neural spikes. The lecture will then discuss these novel asynchronous signals, popularly known as events, and the past, current and emerging trends in their processing for information extraction and more complex operations. The lecture will conclude with a summary of the different aspects discussed, with samples of current research applications undertaken at Ulster University. It will blend the vision neuroscience, biophysics and computer science/engineering elements which form neuromorphic vision.
Understanding the benefits of Knowledge Transfer Partnerships (KTPs) for businesses, academics and graduates (Amanda Fullerton): For 45 years, Knowledge Transfer Partnerships (KTPs) have been helping businesses innovate for growth. They do this by connecting businesses that have an innovation idea with the university expertise to help deliver it. In effect, they link forward-thinking businesses with world-class university researchers to deliver innovation projects led by inspired graduates.
Ulster University has been engaged in KTP since its inception, having continuously regarded the KTP programme as an excellent pathway for generating strategic knowledge transfer opportunities with business partners to improve their performance whilst also demonstrating the impact of the University’s research.
The presentation will demonstrate the key benefits of KTP for businesses, academic researchers and graduates, and will describe the KTP journey and the funding available. The presentation will conclude with a profile of a successful Ulster University KTP, led by the School of Computing, that won the 2019 Innovate UK national award for the “KTP with Best Social Impact”.
Translating AI-enabled neurotechnology research and experiences of developing an award-winning neurotech startup (Damien Coyle): Training over multiple sessions is certainly key to learning how to modulate brain activity via motor imagery, and this involves the collection of large datasets from multiple users. An award-winning AI-enabled wearable neurotechnology platform that may enable this, developed by NeuroCONCISE Ltd, will be presented, along with an overview of the challenges and opportunities of developing a neurotech startup.
Understanding behavior and the brain from the perspective of a dynamical theory of coordination (J. A. Scott Kelso): As the last talk in the Autumn School, participants will be invited to consider the following question: what does it mean to “understand” a phenomenon, regardless of the level of description one chooses to investigate it (e.g., micro-, meso-, macro-, etc.)? Given that the usual categories for describing behavior and cognition are suspect with respect to their neural underpinnings (see, e.g., “The brain doesn’t think the way you think it does”, Quanta, August 24, 2021, https://www.quantamagazine.org/mental-phenomena-dont-map-into-the-brain-as-expected-20210824/?utm_source=pocket-newtab# ), the focus here will be on coordination, assumed to be crucial for complex systems regardless of how we categorize behavioral and cognitive function and their relation to structure. In that context, we will explore some of the main concepts, methods and messages of Coordination Dynamics. I offer a strategy aimed at understanding coordination and show how it can be implemented at both the behavioral and brain levels.
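One of the central models of Coordination Dynamics is the Haken-Kelso-Bunz (HKB) equation for the relative phase between two coordinated components. A minimal Euler simulation, with illustrative parameter values, shows its characteristic bistability: in-phase (phi = 0) is always stable, while anti-phase (phi = pi) is stable only when coupling is strong enough.

```python
import math

# Euler simulation of the HKB relative-phase equation:
#   dphi/dt = -a*sin(phi) - 2*b*sin(2*phi)
# The b/a ratio controls whether anti-phase coordination remains stable;
# parameter values are illustrative.
def simulate_hkb(phi0, a=1.0, b=1.0, dt=0.01, steps=5000):
    phi = phi0
    for _ in range(steps):
        phi += dt * (-a * math.sin(phi) - 2 * b * math.sin(2 * phi))
    return phi

# Strong coupling: a start near anti-phase settles at phi = pi.
near_pi = simulate_hkb(math.pi - 0.1, a=1.0, b=1.0)
# Weak coupling: anti-phase loses stability and phi is drawn to 0 (in-phase).
to_zero = simulate_hkb(math.pi - 0.1, a=1.0, b=0.1)
```

This loss of the anti-phase attractor as a parameter changes is the model's account of the abrupt behavioral transitions (e.g., in bimanual finger movements) that motivated Coordination Dynamics.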