The next talk of Season 11 is scheduled for December 1, 2025, at 20:00 GMT.
CHECK THE UPCOMING EVENTS TOWARDS THE END OF THIS PAGE!
Abstract:
This presentation is based on an investigation undertaken for the UK Defence Science and Technology Laboratory (Dstl) into the transformative potential of Vector Symbolic Architecture (VSA), a.k.a. Hyperdimensional Computing, for advancing cognitive processing capabilities at the network edge. The presentation describes a technology integration experiment demonstrating how the VSA paradigm offers robust solutions for generation-after-next AI deployment at the network edge. Specifically, we show how VSA effectively models and integrates the cognitive processes required to perform intelligence, surveillance, and reconnaissance (ISR) operations. The experiment integrates functions across the observe, orientate, decide and act (OODA) loop, including the processing of sensed data via both a neuromorphic event-based camera and a standard CMOS frame-rate camera; declarative knowledge-based reasoning in a semantic vector space; action planning using VSA cognitive maps (CM); access to procedural knowledge via large language models (LLMs); and efficient communication between agents via highly compact binary vector representations. In contrast to previous ‘point solutions’ showing the effectiveness of VSA for individual OODA tasks, this work takes a ‘whole system’ approach, demonstrating the power of VSA as a uniform integration technology.
Abstract: A key feature of symbolic computation is the productivity of formal grammars. Although vector-symbolic architectures (VSAs) allow the expression of composite symbolic sentences over a vector space, the VSA literature has left the study of formal grammars expressed in terms of VSAs under-explored. Until recently, it had not even been demonstrated that VSAs may express an unrestricted grammar (hence, form a Turing-complete system). However, the correspondence between grammar theory and automata theory permits us to study the computational properties of systems in which the terms of a given language are embedded. Thus, VSA-encoded languages supply a means to reason about computability and learning theory in the systems in which they are found. Accordingly, we present three languages: (1) a subset of Lisp 1.5 described over a generic VSA, demonstrating that any VSA may express an unrestricted grammar; (2) an extension of the prior language for Fourier-domain Holographic Reduced Representations that performs numeric computation efficiently; and (3) a type system described over a generic VSA that expresses only programs that halt in polynomial time. These preliminary contributions illustrate the expressive power of VSAs and provide a fruitful basis for reasoning about recent progress in machine learning and challenges to the present transformer-led paradigm (notably in the case of the ARC-AGI task) as researchers push toward more "general" models. We find that the heuristic-search approach to augmenting transformers so that they can address fluid reasoning tasks has long been known to be unsustainably inefficient, particularly compared to a human baseline, and that such inefficiencies may be rectified using the kinds of structured, productive representations we have demonstrated to be possible with VSAs. By imposing systematic restrictions on a search space via a formal language, we intend to build AI systems that fit a humanlike learning curve.
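As a toy illustration of how a Lisp-like symbolic structure can live in a vector space, the sketch below assumes a simple bipolar VSA with Hadamard (element-wise) binding; the atom and role names are invented for the example, not taken from the talk. It encodes nested cons cells as superpositions of role-filler bindings and recovers components by unbinding followed by cleanup:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality

def hv():
    """Random bipolar hypervector (a fresh, quasi-orthogonal symbol)."""
    return rng.choice([-1, 1], size=D)

bind = np.multiply  # Hadamard binding; self-inverse for bipolar vectors

# Atoms and structural roles (names are illustrative)
atoms = {name: hv() for name in ["lambda", "x", "plus", "one"]}
CAR, CDR = hv(), hv()

def cons(a, b):
    """Encode a cons cell as a superposition of role-filler bindings."""
    return bind(CAR, a) + bind(CDR, b)

# Encode the expression (plus x one) as nested cons cells
expr = cons(atoms["plus"], cons(atoms["x"], atoms["one"]))

def cleanup(v):
    """Nearest atom by dot-product similarity (item-memory cleanup)."""
    return max(atoms, key=lambda n: np.dot(atoms[n], v))

# Unbinding CAR recovers the operator despite the superposed noise
assert cleanup(bind(CAR, expr)) == "plus"
```

Unbinding leaves crosstalk noise from the other role-filler pairs, which the cleanup memory suppresses; it is this combination of composition and reliable decomposition that underwrites the productivity claims in the abstract.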
Abstract: Vector Symbolic Architectures (VSAs) are one approach to developing neuro-symbolic AI, in which two vectors in $\mathbb{R}^d$ are `bound' together to produce a new vector in the same space. VSAs support the commutativity and associativity of this binding operation, along with an inverse operation, allowing one to construct symbolic-style manipulations over real-valued vectors. Most VSAs were developed before deep learning and automatic differentiation became popular, and instead focused on efficacy in hand-designed systems. In this work, we introduce Hadamard-derived Linear Binding (HLB), which is designed to combine favorable computational efficiency, efficacy in classic VSA tasks, and strong performance in differentiable systems. Code is available at: https://github.com/FutureComputing4AI/Hadamard-derived-Linear-Binding
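HLB itself is defined in the paper and repository linked above; as background, the sketch below demonstrates the algebraic properties the abstract lists (commutativity, associativity, an inverse) for the classic Hadamard-product binding over bipolar vectors that HLB is derived from:

```python
import numpy as np

rng = np.random.default_rng(1)
D = 10_000

a, b, c = (rng.choice([-1.0, 1.0], size=D) for _ in range(3))

bind = np.multiply  # Hadamard product: classic element-wise binding

# Commutativity and associativity of binding
assert np.array_equal(bind(a, b), bind(b, a))
assert np.array_equal(bind(bind(a, b), c), bind(a, bind(b, c)))

# Bipolar vectors are self-inverse under Hadamard binding: a * a = 1
assert np.array_equal(bind(a, bind(a, b)), b)

# The bound vector resembles neither operand (quasi-orthogonality)
cos = lambda x, y: x @ y / (np.linalg.norm(x) * np.linalg.norm(y))
assert abs(cos(bind(a, b), a)) < 0.05
```

Because binding is a plain element-wise product, it is trivially differentiable, which is the property the abstract highlights for use inside modern deep-learning systems.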
Abstract: Episodic memories often form after only a single experience, but how this is achieved in the brain is unknown. Current models remain dominated by Hebbian plasticity (“neurons that fire together wire together”) due to its natural ability to bind representations together, support from in vitro experiments, and rich theoretical basis. Modern experiments, however, strongly challenge the role of Hebbian plasticity in vivo. The empirical plasticity rules best matched to one-shot episodic memory timescales have a strikingly different character, for instance depending only on pre-synaptic activity, as in the hippocampal mossy fiber pathway. Yet how these simpler plasticity rules could encode richly structured episodes is unclear. Here we show that, by exploiting high-dimensional neural activity with restricted transitions, these rules are in fact very well suited for encoding episodes as paths through complex state spaces—such as those underlying a world model. The resulting memory traces, which we term path vectors, simply sum the visited state representations and yet are highly expressive and can be decoded with an odor-tracking algorithm. Through theory and simulation we show that path vectors are a robust alternative to Hebbian traces, support one-shot sequential and associative recall in a variety of scenarios, and suggest a natural biological basis for policy learning. Path vectors also reveal a simple, brain-inspired solution to the classic VSA/HDC binding problem, relying on purely element-wise addition and leveraging prior state spaces and active reconstruction to encode and recall relational information. This work sheds light on how specific plasticity rules observed in the brain can support one-shot episodic memory formation and provides new support for the highly reconstructive nature of recall.
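A minimal sketch of the path-vector idea, assuming a toy ring-shaped state space with random bipolar state vectors (the talk's actual model and parameters will differ): the trace is nothing but an element-wise sum of visited states, and sequential recall is reconstructed by walking the restricted transition structure and testing each neighbor against the trace.

```python
import numpy as np

rng = np.random.default_rng(2)
D = 10_000

# A toy state space: a ring of 8 states with restricted transitions
n_states = 8
states = rng.choice([-1, 1], size=(n_states, D))
neighbors = {i: [(i - 1) % n_states, (i + 1) % n_states] for i in range(n_states)}

# One-shot episode: a path stored by simply summing visited state vectors
path = [0, 1, 2, 3, 4]
trace = states[path].sum(axis=0)

def in_trace(s):
    """Similarity test: was state s part of the episode?"""
    return states[s] @ trace / D > 0.5

# Recall the path by following, at each step, the stored neighbor;
# the restricted transition structure does the relational work.
recalled, current = [0], 0
while True:
    nxt = [s for s in neighbors[current] if s not in recalled and in_trace(s)]
    if not nxt:
        break
    current = nxt[0]
    recalled.append(current)

assert recalled == path
```

Note that no binding operation is ever applied: addition alone suffices because the prior state space constrains which relational readings of the trace are possible, which is the abstract's proposed resolution of the VSA/HDC binding problem.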
Abstract: Federated learning is a distributed learning method that trains a model locally across multiple clients, and it has been used in numerous fields. Current convolutional neural network (CNN)-based federated learning approaches face challenges in computational cost, communication efficiency, and communication robustness. Recently, Hyperdimensional Computing (HDC) has been recognized as a promising technique to address these challenges. HDC encodes data as high-dimensional vectors and enables lightweight training and communication through simple parallel vector operations. Several HDC-based federated learning methods have been proposed. Although existing methods reduce computational cost and communication overhead, they struggle with complex learning tasks and are not robust to unreliable wireless channels. In this work, we introduce a synergetic federated learning framework, FHDnn. With the complementary strengths of CNN and HDC, FHDnn achieves strong performance on complex image tasks while maintaining good computational and communication efficiency. Secondly, we demonstrate in detail the convergence of using HDC in a generalized federated learning framework, providing theoretical guarantees for HDC-based federated learning approaches. Finally, we design three communication strategies that further improve the communication efficiency of FHDnn by 32×. Experiments demonstrate that FHDnn converges 3× faster than CNN-based federated learning methods, reduces the communication cost by 2,112×, and reduces local computation and energy consumption by 192×. In addition, it is robust to unreliable communication with bit errors, noise, and packet loss.
Abstract: The talk outlines an account of how the brain might process questions and answers in linguistic interaction, focusing on accessing answers in memory and combining questions and answers into propositions. To enable this, we provide an approximation of the lambda calculus implemented in the Semantic Pointer Architecture (SPA), a neural implementation of a Vector Symbolic Architecture. The account builds a bridge between the type-based accounts of propositions in memory (as in the treatments of belief by Ranta, 1994 and Cooper, 2023) and the suggestion for question answering made by Eliasmith (2013), where question answering is described in terms of transformations of structured representations in memory that provide an answer. We will take such representations to correspond to beliefs of the agent. On Cooper’s analysis, beliefs are considered to be types which have a record structure closely related to the structure which Eliasmith codes in vector representations (Larsson et al., 2023). Thus the act of answering a question can be seen to have a neural base in a vector transformation, translatable in Eliasmith’s system to the activity of spiking neurons, and to correspond to using an item in memory (a belief) to provide an answer to the question.
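The core "question answering as vector transformation" move can be sketched very compactly. The SPA uses circular convolution over real-valued semantic pointers and spiking neurons; the sketch below swaps in element-wise binding over bipolar vectors for brevity, and the belief content and role names are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(4)
D = 10_000

vocab = {w: rng.choice([-1, 1], size=D)
         for w in ["AGENT", "VERB", "OBJECT", "mary", "likes", "pizza"]}

bind = np.multiply  # self-inverse binding for bipolar vectors

# A stored belief: "Mary likes pizza" as a superposition of role-filler pairs
belief = (bind(vocab["AGENT"], vocab["mary"])
          + bind(vocab["VERB"], vocab["likes"])
          + bind(vocab["OBJECT"], vocab["pizza"]))

def answer(role):
    """Answering = transforming the stored representation by the queried role."""
    noisy = bind(vocab[role], belief)
    fillers = ["mary", "likes", "pizza"]
    return max(fillers, key=lambda w: vocab[w] @ noisy)

assert answer("AGENT") == "mary"    # "Who likes pizza?"
assert answer("OBJECT") == "pizza"  # "What does Mary like?"
```

The question supplies the role to unbind, the stored belief supplies the content, and the cleanup step corresponds to settling on a discrete answer: the same structure the talk develops in neural terms.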
Abstract: Interstellar missions will require a high degree of autonomy mediated through artificial intelligence (AI). All interstellar missions are characterised by 50-100-year transits to extrasolar systems. High system availability demands that interstellar spacecraft be self-repairable, imposing significant demands on onboard intelligence. We review the current status of artificial intelligence to assess its capabilities in providing such autonomy. In particular, we focus on hybrid AI methods, as these appear to offer the richest capabilities in offsetting weaknesses inherent in paradigmatic approaches. Symbolic manipulation systems offer logical and comprehensible rationality with predictable behaviours but are brittle beyond their specific applications (a charge that may be levelled at neural networks unless the transfer learning problem can be resolved). More modern approaches to expert systems include Bayesian networks that incorporate probabilistic treatment to accommodate uncertainty. Artificial neural networks are fundamentally different. They are opaque to analysis but potentially offer greater adaptability in application by virtue of their ability to learn. Indeed, deep machine learning is a variation on neural networks with unsupervised neural front ends and supervised neural back ends. Reinforcement learning offers a promising approach for learning directly from the environment. There are inherent weaknesses in neural approaches regarding their hidden mechanisms, rendering their distributed representations opaque to analysis. Hybridising symbolic processing techniques with artificial neural networks appears to offer the advantages of both. Human cognition appears to implement both neural learning and symbolic processing.
There are several approaches to such hybridisation that we explore, including knowledge-based artificial neural networks, fuzzy neural networks, Bayesian methods such as Markov logic networks, and genetic methods such as learning classifier systems. Markov logic networks propose a natural correlation between Bayesian probability and neural weights, but how symbolic representations map onto switching neurons is less clear (though vector symbolic architectures present an approach), while learning classifier systems are reinforcement learning methods that are promising for interacting with the physical world. We conclude that current AI may not yet be up to the task of interstellar transits and flybys, let alone physical interaction with unknown planetary environments. Certainly, AI is incapable of interactive encounters with extraterrestrial intelligence.
Abstract: Many animals possess a remarkable capacity to rapidly construct flexible mental models of their environments. These world models are crucial for ethologically relevant behaviors such as navigation, exploration, and planning. The ability to form episodic memories and make inferences based on these sparse experiences is believed to underpin the efficiency and adaptability of these models in the brain. Here, we ask: Can a neural network learn to construct a spatial model of its surroundings from sparse and disjoint episodic memories? We formulate the problem in a simulated world and propose a novel framework, the Episodic Spatial World Model (ESWM), as a potential answer. We show that ESWM is highly sample-efficient, requiring minimal observations to construct a robust representation of the environment. It is also inherently adaptive, allowing for rapid updates when the environment changes. In addition, we demonstrate that ESWM readily enables near-optimal strategies for exploring novel environments and navigating between arbitrary points, all without the need for additional training.
Abstract: Advances in bioinformatics are primarily due to new algorithms for processing diverse biological data sources. While sophisticated alignment algorithms have been pivotal in analyzing biological sequences, deep learning has substantially transformed bioinformatics, addressing sequence, structure, and functional analyses. However, these methods are incredibly data-hungry, compute-intensive and hard to interpret. Hyperdimensional computing (HDC) has recently emerged as an intriguing alternative. The key idea is that random vectors of high dimensionality can represent concepts such as sequence identity or phylogeny. These vectors can then be combined using simple operators for learning, reasoning or querying by exploiting the peculiar properties of high-dimensional spaces. Our work reviews and explores the potential of HDC for bioinformatics, emphasizing its efficiency, interpretability, and adeptness in handling multimodal and structured data. HDC holds a lot of potential for various omics data searching, biosignal analysis and health applications.
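As a concrete flavor of how HDC represents biological sequences, the sketch below uses a common k-mer encoding (random bipolar base vectors, cyclic permutation for position, binding within each k-mer, bundling across the sequence); it is one standard recipe from the HDC literature, not necessarily the talk's specific pipeline, and the sequences are made up:

```python
import numpy as np

rng = np.random.default_rng(5)
D = 10_000

base = {nt: rng.choice([-1, 1], size=D) for nt in "ACGT"}

def trigram(s):
    """Bind position-permuted nucleotide vectors: rho^2(s0) * rho(s1) * s2."""
    v = np.ones(D)
    for i, nt in enumerate(s):
        v = v * np.roll(base[nt], len(s) - 1 - i)  # roll = cyclic permutation
    return v

def encode_seq(seq, k=3):
    """A sequence hypervector: bundle (sum) all of its k-mer bindings."""
    return sum(trigram(seq[i:i + k]) for i in range(len(seq) - k + 1))

cos = lambda x, y: x @ y / (np.linalg.norm(x) * np.linalg.norm(y))

s1 = "ACGTACGTACGGTACCA"
s2 = "ACGTACGTACGGTACCG"  # near-identical sequence
s3 = "TTGGCCAATTGGCAGGT"  # unrelated sequence

# Sequence similarity survives the encoding
assert cos(encode_seq(s1), encode_seq(s2)) > cos(encode_seq(s1), encode_seq(s3))
```

Because similar sequences map to similar hypervectors, search and classification reduce to cheap dot products over fixed-size vectors, which is the efficiency argument made above for omics-scale data.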
Abstract: This talk presents a transformative framework for artificial neural networks over graded vector spaces, tailored to model hierarchical and structured data in fields like algebraic geometry and physics. By exploiting the algebraic properties of graded vector spaces, where features carry distinct weights, we extend classical neural networks with graded neurons, layers, and activation functions that preserve structural integrity. Grounded in group actions, representation theory, and graded algebra, our approach combines theoretical rigor with practical utility.
We introduce graded neural architectures, loss functions prioritizing graded components, and equivariant extensions adaptable to diverse gradings. Case studies validate the framework's effectiveness, outperforming standard neural networks in tasks such as predicting invariants in weighted projective spaces and modeling supersymmetric systems.
This work establishes a new frontier in machine learning, merging mathematical sophistication with interdisciplinary applications. Future challenges, including computational scalability and finite field extensions, offer rich opportunities for advancing this paradigm.
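To make the "graded" idea concrete, here is a heavily simplified sketch; the talk's precise definitions will differ, and the grades, dimensions, and the tanh-based activation are all assumptions for illustration. A grade-preserving linear layer is block-diagonal (it never mixes components of different grade), and the activation is applied per component with a grade-dependent scaling:

```python
import numpy as np

rng = np.random.default_rng(6)

# A graded vector space V = V_1 (+) V_2 (+) V_3: features carry weights (grades)
grades = {1: 4, 2: 3, 3: 2}  # grade -> dimension of that component

# A grade-preserving linear layer is block-diagonal: it never mixes grades
blocks = {g: rng.normal(0, 0.5, (d, d)) for g, d in grades.items()}

def graded_linear(x):
    """Apply each block to its own graded component of the input."""
    out, i = [], 0
    for g, d in grades.items():
        out.append(blocks[g] @ x[i:i + d])
        i += d
    return np.concatenate(out)

def graded_activation(x):
    """A grade-aware nonlinearity: tanh scaled by the component's grade."""
    out, i = [], 0
    for g, d in grades.items():
        out.append(g * np.tanh(x[i:i + d] / g))
        i += d
    return np.concatenate(out)

x = rng.normal(size=sum(grades.values()))
y = graded_activation(graded_linear(x))
assert y.shape == x.shape
```

The block-diagonal constraint is what "preserving structural integrity" means operationally: a zero grade-2 input component stays zero in the output, so the grading is respected throughout the network.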