The next session of Season 12 of VSAONLINE is on April 27th, 2026.
CHECK THE UPCOMING EVENTS TOWARDS THE END OF THIS PAGE!
Abstract:
The entorhinal-hippocampal formation is the mammalian brain's navigation system, encoding both physical and abstract spaces via grid cells. While this system is well-studied in neuroscience, and its efficiency makes it attractive for robotics and machine learning, integrating continuous spatial computations with abstract cognitive processing remains a challenge. Current approaches, such as Continuous Attractor Networks (CANs), successfully model grid cells for physical space but struggle to unify this with abstract data.
Here, we bridge this gap by proposing a mechanistic model for versatile information processing inspired by CANs and Vector Symbolic Architectures (VSAs). The novel Grid-Cell VSA (GC-VSA) model employs a spatially structured encoding scheme with 3D neuronal modules. These modules mimic the discrete scales and orientations of biological grid cells, reproducing their characteristic hexagonal receptive fields.
In experiments, the model demonstrates versatility across three distinct tasks: (1) accurate path integration for tracking locations, (2) spatio-temporal representation for querying object locations and temporal relations, and (3) symbolic reasoning using family trees to test hierarchical relationships. Finally, we discuss ongoing work utilizing the GC-VSA to model "visually guided path integration". By leveraging the framework’s ability to bind continuous spatial variables with categorical variables (e.g., Object IDs), we demonstrate how visual cues can robustly anchor the grid cell code to the physical environment.
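The binding of continuous spatial variables with categorical variables mentioned above can be illustrated with a generic fractional power encoding sketch in the FHRR (complex phasor) style. This is illustrative only, not the GC-VSA implementation; the dimensionality, base vectors, object names, and example positions are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4096  # hypervector dimensionality (assumed for illustration)

def random_phasor(d=D):
    """Random unit-phasor hypervector (FHRR-style)."""
    return np.exp(1j * rng.uniform(-np.pi, np.pi, d))

def encode_position(base, x):
    """Fractional power encoding: continuous value x -> base**x."""
    return base ** x

def bind(a, b):    # element-wise product
    return a * b

def unbind(a, b):  # multiply by the complex conjugate
    return a * np.conj(b)

def sim(a, b):     # normalized similarity in [-1, 1]
    return np.real(np.mean(a * np.conj(b)))

X = random_phasor()    # base vector spanning one spatial axis
cup = random_phasor()  # a categorical Object ID

# "the cup is at x = 2.7": bind the ID to an encoded position
scene = bind(cup, encode_position(X, 2.7))

# query: where is the cup? Unbind the ID, compare candidate positions
where = unbind(scene, cup)
print(sim(where, encode_position(X, 2.7)))  # ~1.0: correct position
print(sim(where, encode_position(X, 5.0)))  # much lower: wrong position
```

The same bind/unbind pattern extends to multiple objects superposed in one scene vector, which is the kind of spatio-temporal querying the abstract describes.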
Abstract:
Connectionist approaches to machine learning, i.e., neural networks, are enjoying a considerable vogue right now. However, these methods require large volumes of data and produce models that are uninterpretable to humans. An alternative framework that is compatible with neural networks and gradient-based learning, but explicitly models compositionality, is Vector Symbolic Architectures (VSAs). VSAs are a family of algebras on high-dimensional vector representations. They arose in cognitive science from the need to unify neural processing and the kind of symbolic reasoning that humans perform. While machine learning methods have benefited from category-theoretical analyses, VSAs have not yet received similar treatment. In this paper, we present a first attempt at applying category theory to VSAs. Specifically, we generalise from vectors to co-presheaves, and describe VSA operations as the right Kan extensions of the external tensor product. This formalisation involves a proof that the right Kan extension in such cases can be expressed as simple, element-wise operations. We validate our formalisation with worked examples that connect to current VSA implementations, while suggesting new possible designs for VSAs.
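For readers new to VSAs, the "family of algebras on high-dimensional vector representations" can be made concrete with a minimal bipolar (MAP-style) example. This is generic background, not code from the paper; the dimensionality and role/filler names are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
D = 10_000  # dimensionality (assumed for illustration)

def hv():           # random bipolar hypervector
    return rng.choice([-1, 1], D)

def bind(a, b):     # element-wise product; self-inverse in this algebra
    return a * b

def bundle(*vs):    # superposition by majority sign
    return np.sign(np.sum(vs, axis=0))

def sim(a, b):      # normalized dot product
    return a @ b / D

# a role-filler record: colour=red AND shape=square
colour, red = hv(), hv()
shape, square = hv(), hv()
record = bundle(bind(colour, red), bind(shape, square))

# query the colour: since bind is its own inverse, rebinding unbinds
probe = bind(record, colour)
print(sim(probe, red))     # well above chance: "red" is recoverable
print(sim(probe, square))  # near 0: unrelated filler
```

The bind and bundle operations here are the element-wise operations that the paper's Kan-extension formalisation recovers in the general co-presheaf setting.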
Abstract:
We introduce a high-capacity associative memory capable of factorizing compositional representations of variables. The proposed approach is implemented as a continuous-time oscillator neural network. By performing factorization with a continuous-time dynamical system, the proposed Factorizing Oscillator Associative Memory (FOAM) provides efficient solutions to computationally hard problems such as inference in compositional representations and combinatorial optimization. We demonstrate favorable performance compared to existing approaches to factorization, improved interpretability, and relevance to standard tasks such as the subset sum problem.
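The factorization problem that FOAM solves can be illustrated with its standard discrete-time relative, a resonator network: given a composite vector formed by binding one codevector from each factor's codebook, alternately estimate each factor and clean it up through its codebook. This is a generic sketch, not FOAM's continuous-time oscillator dynamics; codebook sizes, dimensionality, and iteration count are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
D, K = 2048, 21  # dimensionality and per-factor codebook size (assumed)

# two codebooks of random bipolar hypervectors (rows are codevectors)
A = rng.choice([-1, 1], (K, D))
B = rng.choice([-1, 1], (K, D))

# composite to factorize: element-wise binding of one entry from each book
i, j = 3, 7
s = A[i] * B[j]

# initialize each estimate as the superposition of all candidates
a_hat = np.sign(A.sum(axis=0))
b_hat = np.sign(B.sum(axis=0))

# resonator iteration: unbind with the other estimate, clean up via codebook
for _ in range(30):
    a_hat = np.sign(A.T @ (A @ (s * b_hat)))
    b_hat = np.sign(B.T @ (B @ (s * a_hat)))

print(np.argmax(A @ a_hat), np.argmax(B @ b_hat))  # should recover (3, 7)
```

FOAM replaces this discrete update loop with a continuous-time oscillator dynamical system, which is what enables its application to combinatorial optimization as well.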
Abstract:
The modular composite representation (MCR) is a computing model that represents information with high-dimensional integer vectors using modular arithmetic. Originally proposed as a generalization of the binary spatter code model, it aims to provide higher representational power while remaining a lighter alternative to models requiring high-precision components. Despite this potential, MCR has received limited attention. Systematic analyses of its trade-offs and comparisons with other models are lacking, sustaining the perception that its added complexity outweighs the improved expressivity. In this work, we revisit MCR by presenting its first extensive evaluation, demonstrating that it achieves a unique balance of capacity, accuracy, and hardware efficiency. Experiments measuring capacity demonstrate that MCR outperforms binary and integer vectors while approaching complex-valued representations at a fraction of their memory footprint. Evaluation on 123 datasets confirms consistent accuracy gains and shows that MCR can match the performance of binary spatter codes using up to 4x less memory. We investigate the hardware realization of MCR by showing that it maps naturally to digital logic and by designing the first dedicated accelerator. Evaluations on basic operations and 7 selected datasets demonstrate a speedup of up to 3 orders of magnitude and significant energy reductions compared to software implementation. When matched for accuracy against binary spatter codes, MCR achieves on average 3.08x faster execution and 2.68x lower energy consumption. These findings demonstrate that, although MCR requires more sophisticated operations than binary spatter codes, its modular arithmetic and higher per-component precision enable lower dimensionality. When realized with dedicated hardware, this results in a faster, more energy-efficient, and high-precision alternative to existing models.
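The core MCR operations described above, integer components combined with modular arithmetic, are easy to state concretely. The following is a minimal illustrative sketch, not the accelerator implementation; the dimensionality and modulus are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
D, m = 2048, 16  # dimensionality and modulus (assumed for illustration)

def hv():           # random MCR hypervector: integers modulo m
    return rng.integers(0, m, D)

def bind(a, b):     # component-wise modular addition
    return (a + b) % m

def unbind(a, b):   # component-wise modular subtraction
    return (a - b) % m

def sim(a, b):      # phasor-based similarity in [-1, 1]
    return np.cos(2 * np.pi * (a - b) / m).mean()

role, filler = hv(), hv()
bound = bind(role, filler)
print(sim(unbind(bound, role), filler))  # 1.0: exact recovery
print(sim(bound, filler))                # ~0: bound vector is dissimilar
```

With m = 2 this reduces to the binary spatter code (addition mod 2 is XOR), which is why MCR is naturally viewed as its generalization to higher per-component precision.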
Abstract:
Graph Neural Networks (GNNs) are the most common approach for learning complex relational data represented using graph data structures. Although GNNs are effective at learning representations of both nodes and graphs for a given task, the learning process is computationally expensive and, as such, time- and energy-inefficient. This paper investigates this challenge within the context of recent work on untrained graph representations that only train the solver model. We present Graph Vector Function Architecture (GVFA), a novel alternative to learning graph representations in GNNs that is based on hyperdimensional computing (HDC) principles. GVFA is a general zero-shot approach for graph and node representations without learning. As such, our representations are not task-specific, and the computational cost of constructing them is substantially lower than for learning-based GNNs. Empirically, we demonstrate the expressiveness and generalization properties of different GVFA configurations. Our experimental results demonstrate that GVFA outperforms several classic GNNs on their benchmark datasets in terms of classification accuracy for both graph and node classification tasks, while also yielding a substantial reduction in training time.
Abstract:
Recent results show that modern Large Language Models (LLMs) are indeed capable of understanding and answering questions about structured data such as graphs. This new paradigm can lead to solutions that require less supervision while, at the same time, providing a model that can generalize and answer questions beyond the training labels. Existing proposals often use some description of the graph to create an "augmented" prompt fed to the LLM. For a chosen class of graphs, if a well-tailored graph encoder is paired with a pre-trained LLM, the model can answer graph-related questions well. Existing solutions to graph-based prompts range from graph serialization to graph transformers. In this work, we show that the use of a parameter-free graph encoder based on Fock space representations, a concept borrowed from mathematical physics, is remarkably versatile in this problem setting. The simple construction, inherited directly from the theory with a few small adjustments, can provide rich and informative graph encodings for a wide range of different graphs. We investigate the use of this idea for prefix-tuned prompts leveraging the capabilities of a pre-trained, frozen LLM. The modifications lead to a model that can answer graph-related questions -- from simple graphs to proteins to hypergraphs -- effectively and with minimal, if any, adjustments to the architecture. Our work significantly simplifies existing solutions and generalizes effortlessly to multiple different graph-based structures.
Abstract: Despite their capabilities, Large Language Models (LLMs) remain opaque, with limited understanding of their internal representations. Current interpretability methods either focus on input-oriented feature extraction, such as supervised probes and Sparse Autoencoders (SAEs), or on output distribution inspection, such as logit-oriented approaches. A full understanding of LLM vector spaces, however, requires integrating both perspectives, something existing approaches struggle with due to constraints on latent feature definitions. We introduce the Hyperdimensional Probe, a hybrid supervised probe that combines symbolic representations with neural probing. Leveraging Vector Symbolic Architectures (VSAs) and hypervector algebra, it unifies prior methods: the top-down interpretability of supervised probes, the SAEs' sparsity-driven proxy space, and output-oriented logit investigation. This allows deeper input-focused feature extraction while supporting output-oriented investigation. Our experiments show that our method consistently extracts meaningful concepts across LLMs, embedding sizes, and setups, uncovering concept-driven patterns in analogy-oriented inference and QA-focused text generation. By supporting joint input–output analysis, this work advances semantic understanding of neural representations while unifying the complementary perspectives of prior methods.
Abstract:
The talk will focus on reservoir computing, which was originally proposed to address the vanishing/exploding gradients problem in training recurrent neural networks. It builds on the idea that a randomly connected recurrent layer, the reservoir, can encode spatiotemporal input signals and enable efficient processing of time-series data. A canonical example where reservoir computing is considered particularly useful – prediction of chaotic dynamical systems – will be discussed. We will look into a novel technique for reservoir computing that uses a memory buffer of recent inputs and expands them into higher-order features. This technique can be interpreted as a polynomial kernel machine, leading to a new approach that combines randomized representations from reservoir computing and the binding operation from hyperdimensional computing with the idea of approximating polynomial kernels. The approach offers competitive predictive performance and better scalability than the direct expansion of higher-order features. Additionally, the approach has an elegant realization based on a recurrent circuit of Sigma-Pi neurons that can iteratively compute randomized representations of higher-order features. This circuit is amenable to implementation on neuromorphic hardware.
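The link between binding and polynomial kernels alluded to in the abstract can be sketched generically: the Hadamard product (binding) of independent random projections of an input yields features whose inner products approximate a polynomial kernel, without materializing the higher-order feature expansion. This is an illustrative sketch, not the speaker's method; the dimensions, buffer length, and polynomial degree are assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
D, k, p = 50_000, 5, 2  # feature dim, buffer length, degree (assumed)

def phi(u, Ps):
    """Degree-p feature map: bind (element-wise multiply) p projections."""
    z = np.ones(D)
    for P in Ps:
        z *= P @ u  # binding of independent random projections of u
    return z

# x and y stand in for two buffers of k recent scalar inputs
x = rng.standard_normal(k)
y = rng.standard_normal(k)

# p independent random projection matrices, shared between inputs
Ps = [rng.standard_normal((D, k)) for _ in range(p)]

approx = phi(x, Ps) @ phi(y, Ps) / D  # randomized kernel estimate
exact = (x @ y) ** p                  # the degree-p polynomial kernel
print(approx, exact)  # the estimate concentrates around the exact value
```

The dimensionality D trades off against estimation variance, which is the scalability advantage over direct expansion: the explicit degree-p feature space has k**p coordinates, while the bound representation stays D-dimensional for any degree.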
Abstract:
Vector-Symbolic Architectures (VSAs) provide a powerful, brain-inspired framework for representing and manipulating complex data across the biomedical sciences. By mapping heterogeneous information, from genomic sequences and molecular structures to clinical records and medical images, into a unified high-dimensional vector space, VSAs enable robust reasoning, classification, and data fusion. Despite their potential, the practical design and implementation of an effective VSA can be a significant hurdle, as optimal choices depend heavily on the specific scientific application. This article bridges the gap between theory and practice by presenting ten tips for designing VSAs tailored to key challenges in the biomedical sciences. We provide concrete, actionable guidance on topics such as encoding sequential data in genomics, creating holistic patient vectors from electronic health records, and integrating VSAs with deep learning models for richer image analysis. Following these tips will empower researchers to avoid common pitfalls, streamline their development process, and effectively harness the unique capabilities of VSAs to unlock new insights from their data.
Abstract: TBA
Abstract:
What does it mean when a brainlike system 'computes'? This is the question of the *semantics* of neuromorphic computing. In classical digital computing, several mutually connected approaches to formalize the 'meaning' of a computational process have been worked out to textbook format. These formal frameworks allow one to characterize, analyse and prove, for instance, whether a computer program actually does what the user meant it to achieve; whether two different programs actually compute 'the same' task; which tasks can be 'programmed' at all; or what hardware requirements must be met to implement a given program. In brief, semantic theory allows one to analyse how abstract models of computational processes interface with reality - both at the bottom level of the physical reality of hardware, and at the top level of user tasks. Neuromorphic computing theory can learn a lot about these things from looking at the digital world, but also needs to find its very own view on semantics.