Season 12 of VSAONLINE is planned to start on January 19th, 2026.
CHECK THE UPCOMING EVENTS TOWARDS THE END OF THIS PAGE!
Abstract:
The entorhinal-hippocampal formation is the mammalian brain's navigation system, encoding both physical and abstract spaces via grid cells. While this system is well-studied in neuroscience, and its efficiency makes it attractive for robotics and machine learning, integrating continuous spatial computations with abstract cognitive processing remains a challenge. Current approaches, such as Continuous Attractor Networks (CANs), successfully model grid cells for physical space but struggle to unify this with abstract data.
Here, we bridge this gap by proposing a mechanistic model for versatile information processing inspired by CANs and Vector Symbolic Architectures (VSAs). The novel Grid-Cell VSA (GC-VSA) model employs a spatially structured encoding scheme with 3D neuronal modules. These modules mimic the discrete scales and orientations of biological grid cells, reproducing their characteristic hexagonal receptive fields.
In experiments, the model demonstrates versatility across three distinct tasks: (1) accurate path integration for tracking locations, (2) spatio-temporal representation for querying object locations and temporal relations, and (3) symbolic reasoning using family trees to test hierarchical relationships. Finally, we discuss ongoing work utilizing the GC-VSA to model "visually guided path integration". By leveraging the framework’s ability to bind continuous spatial variables with categorical variables (e.g., Object IDs), we demonstrate how visual cues can robustly anchor the grid cell code to the physical environment.
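To give a flavour of the binding the abstract refers to, the sketch below pairs a continuous 2D position with a categorical object ID using phasor hypervectors (fractional power encoding). It is a minimal illustration under assumed choices (dimensionality, encoding scheme, names), not the GC-VSA's 3D grid-cell modules:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4096  # hypervector dimensionality (illustrative choice)

# Fractional power encoding: a random phasor "axis" vector raised to a real power
# encodes a continuous coordinate. (A standard VSA technique; the GC-VSA adds
# grid-cell-like module structure on top of ideas like this.)
phi_x = rng.uniform(-np.pi, np.pi, D)   # phases of the x-axis base vector
phi_y = rng.uniform(-np.pi, np.pi, D)   # phases of the y-axis base vector

def encode_pos(x, y):
    """Phasor hypervector for the continuous position (x, y)."""
    return np.exp(1j * (x * phi_x + y * phi_y))

def random_symbol():
    """Random phasor hypervector for a categorical symbol (e.g. an object ID)."""
    return np.exp(1j * rng.uniform(-np.pi, np.pi, D))

def bind(a, b):      # element-wise (Hadamard) binding of phasor vectors
    return a * b

def unbind(a, b):    # inverse of binding: multiply by the complex conjugate
    return a * np.conj(b)

def sim(a, b):       # normalized similarity between two hypervectors
    return np.real(np.vdot(a, b)) / D

# A tiny "scene" memory: object IDs bound to their locations, then bundled.
apple, key = random_symbol(), random_symbol()
scene = bind(apple, encode_pos(1.0, 2.0)) + bind(key, encode_pos(-0.5, 3.0))

# Query: where is the apple? Unbind its ID and compare candidate positions.
where_apple = unbind(scene, apple)
print(sim(where_apple, encode_pos(1.0, 2.0)))   # high (~1)
print(sim(where_apple, encode_pos(-0.5, 3.0)))  # low  (~0)
```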
Abstract:
Connectionist approaches to machine learning, \emph{i.e.} neural networks, are enjoying a considerable vogue right now. However, these methods require large volumes of data and produce models that are uninterpretable to humans. An alternative framework that is compatible with neural networks and gradient-based learning, but explicitly models compositionality, is Vector Symbolic Architectures (VSAs). VSAs are a family of algebras on high-dimensional vector representations. They arose in cognitive science from the need to unify neural processing and the kind of symbolic reasoning that humans perform. While machine learning methods have benefited from category-theoretical analyses, VSAs have not yet received similar treatment. In this paper, we present a first attempt at applying category theory to VSAs. Specifically, we generalise from vectors to co-presheaves, and describe VSA operations as the right Kan extensions of the external tensor product. This formalisation involves a proof that the right Kan extension in such cases can be expressed as simple, element-wise operations. We validate our formalisation with worked examples that connect to current VSA implementations, while suggesting new possible designs for VSAs.
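A familiar concrete instance of a VSA operation reducing to element-wise arithmetic (a reference point for the "current VSA implementations" mentioned above, not the paper's Kan-extension construction): binding in Holographic Reduced Representations is circular convolution, which the discrete Fourier transform turns into an element-wise product,
\[
(x \circledast y)_k \;=\; \sum_{j=0}^{D-1} x_j\, y_{(k-j) \bmod D},
\qquad
\widehat{x \circledast y} \;=\; \hat{x} \odot \hat{y},
\]
where $\hat{\cdot}$ denotes the DFT and $\odot$ the element-wise (Hadamard) product.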
Abstract:
We introduce a high-capacity associative memory capable of factorizing compositional representations of variables. The proposed approach is implemented as a continuous-time oscillator neural network. By performing factorization with a continuous-time dynamical system, the proposed Factorizing Oscillator Associative Memory (FOAM) provides efficient solutions to computationally hard problems such as inference in compositional representations and combinatorial optimization. We demonstrate favorable performance compared to existing approaches to factorization, improved interpretability, and relevance to standard tasks such as the subset sum problem.
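As a rough illustration of what "factorizing a compositional representation" means, here is a discrete-time, resonator-style sketch that recovers two bipolar factors from their element-wise product. FOAM instead realizes this kind of search with continuous-time oscillator dynamics; the codebook sizes and dimensionality below are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
D, M = 2048, 25          # dimensionality and codebook size (illustrative)

# Two codebooks of random bipolar hypervectors.
A = rng.choice([-1, 1], size=(M, D))
B = rng.choice([-1, 1], size=(M, D))

# Composite to factorize: element-wise product of one vector from each codebook.
ia, ib = 3, 7
s = A[ia] * B[ib]

# Resonator-style iteration (discrete-time sketch; FOAM performs the analogous
# search with continuous-time oscillator dynamics).
a_hat = np.sign(A.sum(axis=0))   # initial guesses: superpositions of the codebooks
b_hat = np.sign(B.sum(axis=0))
for _ in range(50):
    a_hat = np.sign(A.T @ (A @ (s * b_hat)))   # infer factor a given current b
    b_hat = np.sign(B.T @ (B @ (s * a_hat)))   # infer factor b given current a

print(np.argmax(A @ a_hat), np.argmax(B @ b_hat))  # expected: 3 7
```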
Abstract:
The modular composite representation (MCR) is a computing model that represents information with high-dimensional integer vectors using modular arithmetic. Originally proposed as a generalization of the binary spatter code model, it aims to provide higher representational power while remaining a lighter alternative to models requiring high-precision components. Despite this potential, MCR has received limited attention. Systematic analyses of its trade-offs and comparisons with other models are lacking, sustaining the perception that its added complexity outweighs the improved expressivity. In this work, we revisit MCR by presenting its first extensive evaluation, demonstrating that it achieves a unique balance of capacity, accuracy, and hardware efficiency. Experiments measuring capacity demonstrate that MCR outperforms binary and integer vectors while approaching complex-valued representations at a fraction of their memory footprint. Evaluation on 123 datasets confirms consistent accuracy gains and shows that MCR can match the performance of binary spatter codes using up to 4x less memory. We investigate the hardware realization of MCR by showing that it maps naturally to digital logic and by designing the first dedicated accelerator. Evaluations on basic operations and 7 selected datasets demonstrate a speedup of up to 3 orders of magnitude and significant energy reductions compared to software implementation. When matched for accuracy against binary spatter codes, MCR achieves on average 3.08x faster execution and 2.68x lower energy consumption. These findings demonstrate that, although MCR requires more sophisticated operations than binary spatter codes, its modular arithmetic and higher per-component precision enable lower dimensionality. When realized with dedicated hardware, this results in a faster, more energy-efficient, and high-precision alternative to existing models.
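For readers new to MCR, a minimal sketch of its core arithmetic: binding and unbinding are component-wise addition and subtraction modulo m, and similarity follows the phasor interpretation of each component. The modulus and dimensionality are illustrative, and bundling, which MCR performs via phasor summation, is omitted:

```python
import numpy as np

rng = np.random.default_rng(2)
D, m = 1024, 16          # dimensionality and modulus (illustrative choices)

def random_mcr():
    """Random MCR vector: D integers drawn from Z_m."""
    return rng.integers(0, m, size=D)

def bind(a, b):          # binding: component-wise addition modulo m
    return (a + b) % m

def unbind(a, b):        # unbinding: component-wise subtraction modulo m
    return (a - b) % m

def sim(a, b):
    """Similarity via the phasor interpretation: component value k maps to
    exp(2*pi*i*k/m); similar vectors have phases that line up."""
    phases = 2 * np.pi * (a - b) / m
    return np.mean(np.cos(phases))

role, filler = random_mcr(), random_mcr()
pair = bind(role, filler)

print(sim(unbind(pair, role), filler))   # ~1.0: recovers the filler exactly
print(sim(random_mcr(), filler))         # ~0.0: unrelated vectors
```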
Abstract:
Graph Neural Networks (GNNs) are the most common approach for learning complex relational data represented using graph data structures. Although GNNs are effective at learning representations of both nodes and graphs for a given task, the learning process is computationally expensive and, as such, time- and energy-inefficient. This paper investigates this challenge within the context of recent work on untrained graph representations that only train the solver model. We present Graph Vector Function Architecture (GVFA), a novel alternative to learning graph representations in GNNs that is based on hyperdimensional computing (HDC) principles. GVFA is a general zero-shot approach for graph and node representations without learning. As such, our representations are not task-specific, and the computational cost of constructing them is substantially lower than that of learning-based GNNs. Empirically, we demonstrate the expressiveness and generalization properties of different GVFA configurations. Our experimental results demonstrate that GVFA outperforms several classic GNNs on their benchmark datasets in terms of classification accuracy for both graph and node classification tasks, while also yielding a substantial reduction in training time.
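The sketch below shows the general idea of an untrained, HDC-style graph representation: random node hypervectors, edges bound by element-wise product, and the graph formed by bundling. It is a generic stand-in under assumed choices, not GVFA's actual construction:

```python
import numpy as np

rng = np.random.default_rng(3)
D = 4096  # hypervector dimensionality (illustrative)

_node_table = {}
def node_hv(node_id):
    """Fixed random bipolar hypervector per node label (no learning)."""
    if node_id not in _node_table:
        _node_table[node_id] = rng.choice([-1, 1], size=D)
    return _node_table[node_id]

def graph_hv(edges):
    """Untrained graph representation: bind the endpoints of each edge
    (element-wise product, hence undirected) and bundle (sum) over edges."""
    return np.sum([node_hv(u) * node_hv(v) for u, v in edges], axis=0)

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

g1 = graph_hv([(0, 1), (1, 2), (2, 0)])          # a triangle
g2 = graph_hv([(0, 1), (1, 2), (2, 3), (3, 0)])  # a 4-cycle

# Such fixed vectors can then be fed to a lightweight trained "solver"
# (e.g. a linear classifier) for graph- or node-level tasks.
print(cos(g1, g2))
```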
Abstract:
Recent results show that modern Large Language Models (LLMs) are indeed capable of understanding and answering questions about structured data such as graphs. This new paradigm can lead to solutions that require less supervision while, at the same time, providing a model that can generalize and answer questions beyond the training labels. Existing proposals often use some description of the graph to create an ``augmented'' prompt fed to the LLM. For a chosen class of graphs, if a well-tailored graph encoder is deployed alongside a pre-trained LLM, the model can answer graph-related questions well. Existing solutions to graph-based prompts range from graph serialization to graph transformers. In this work, we show that the use of a parameter-free graph encoder based on Fock space representations, a concept borrowed from mathematical physics, is remarkably versatile in this problem setting. The simple construction, inherited directly from the theory with a few small adjustments, can provide rich and informative graph encodings for a wide range of different graphs. We investigate the use of this idea for prefix-tuned prompts leveraging the capabilities of a pre-trained, frozen LLM. The modifications lead to a model that can answer graph-related questions -- from simple graphs to proteins to hypergraphs -- effectively and with minimal, if any, adjustments to the architecture. Our work significantly simplifies existing solutions and generalizes effortlessly to multiple different graph-based structures.
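For reference, the Fock space over a complex vector space $V$ is the direct sum of its tensor powers (the standard definition only; the talk's specific adjustments for graph encoding are not reproduced here):
\[
\mathcal{F}(V) \;=\; \bigoplus_{n=0}^{\infty} V^{\otimes n}
\;=\; \mathbb{C} \,\oplus\, V \,\oplus\, (V \otimes V) \,\oplus\, \cdots
\]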
Abstract: TBA
Abstract:
Recent advances in large language models (LLMs) have enabled strong reasoning over both structured and unstructured knowledge. When grounded on knowledge graphs (KGs), however, prevailing pipelines rely on heavy neural encoders to embed and score symbolic paths or on repeated LLM calls to rank candidates, leading to high latency, GPU cost, and opaque decisions that hinder faithful, scalable deployment. We propose PathHD, a lightweight and encoder-free KG reasoning framework that replaces neural path scoring with hyperdimensional computing (HDC) and uses only a single LLM call per query. PathHD encodes relation paths into block-diagonal GHRR hypervectors, ranks candidates with blockwise cosine similarity and Top-K pruning, and then performs a one-shot LLM adjudication to produce the final answer together with cited supporting paths. Technically, PathHD is built on three ingredients: (i) an order-aware, non-commutative binding operator for path composition, (ii) a calibrated similarity for robust hypervector-based retrieval, and (iii) a one-shot adjudication step that preserves interpretability while eliminating per-path LLM scoring. On WebQSP, CWQ, and GrailQA, PathHD (i) attains comparable or better Hits@1 than strong neural baselines while using one LLM call per query; (ii) reduces end-to-end latency by 40-60% and GPU memory by 3-5× thanks to encoder-free retrieval; and (iii) delivers faithful, path-grounded rationales that improve error diagnosis and controllability. These results indicate that carefully designed HDC representations provide a practical substrate for efficient KG-LLM reasoning, offering a favorable accuracy-efficiency-interpretability trade-off.
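To illustrate what an order-aware, non-commutative path binding buys you, here is a permutation-based stand-in (not the block-diagonal GHRR operator used by PathHD; the relation names, dimensionality, and plain cosine ranking are assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
D = 4096
perm = rng.permutation(D)   # fixed random permutation acting as a sequencing operator

_rel_table = {}
def rel_hv(relation):
    """Fixed random bipolar hypervector per KG relation."""
    if relation not in _rel_table:
        _rel_table[relation] = rng.choice([-1, 1], size=D)
    return _rel_table[relation]

def encode_path(relations):
    """Order-aware path encoding: permute the running code before binding the
    next relation, so ('a','b') and ('b','a') get different hypervectors.
    (PathHD instead uses block-diagonal GHRR binding; this is a stand-in.)"""
    hv = np.ones(D)
    for r in relations:
        hv = hv[perm] * rel_hv(r)
    return hv

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

q = encode_path(["born_in", "located_in"])
candidates = {
    "born_in -> located_in": encode_path(["born_in", "located_in"]),
    "located_in -> born_in": encode_path(["located_in", "born_in"]),
    "spouse_of -> born_in":  encode_path(["spouse_of", "born_in"]),
}
# Rank candidate paths by similarity to the query encoding (Top-K pruning);
# only the surviving paths would go to a single LLM adjudication call.
for name, hv in sorted(candidates.items(), key=lambda kv: -cos(q, kv[1])):
    print(f"{cos(q, hv):+.2f}  {name}")
```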
Abstract:
Vector-Symbolic Architectures (VSAs) provide a powerful, brain-inspired framework for representing and manipulating complex data across the biomedical sciences. By mapping heterogeneous information, from genomic sequences and molecular structures to clinical records and medical images, into a unified high-dimensional vector space, VSAs enable robust reasoning, classification, and data fusion. Despite their potential, the practical design and implementation of an effective VSA can be a significant hurdle, as optimal choices depend heavily on the specific scientific application. This article bridges the gap between theory and practice by presenting ten tips for designing VSAs tailored to key challenges in the biomedical sciences. We provide concrete, actionable guidance on topics such as encoding sequential data in genomics, creating holistic patient vectors from electronic health records, and integrating VSAs with deep learning models for richer image analysis. Following these tips will empower researchers to avoid common pitfalls, streamline their development process, and effectively harness the unique capabilities of VSAs to unlock new insights from their data.
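As one example of the kind of guidance the article covers (encoding sequential data in genomics), here is a common k-mer hypervector encoding with permutation-based position binding; the parameters and the trigram choice are illustrative, not quoted from the article:

```python
import numpy as np

rng = np.random.default_rng(5)
D = 4096
base_hv = {b: rng.choice([-1, 1], size=D) for b in "ACGT"}
perm = rng.permutation(D)   # permutation encodes position within a k-mer

def shift(v, times):
    """Apply the positional permutation `times` times."""
    for _ in range(times):
        v = v[perm]
    return v

def encode_seq(seq, k=3):
    """Bundle permutation-bound k-mers: each k-mer binds its bases with
    position-dependent permutations; the sequence sums (bundles) all k-mers."""
    acc = np.zeros(D)
    for i in range(len(seq) - k + 1):
        kmer = np.ones(D)
        for j, b in enumerate(seq[i:i + k]):
            kmer = kmer * shift(base_hv[b], j)
        acc += kmer
    return acc

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

s1 = encode_seq("ACGTACGTAC")
s2 = encode_seq("ACGTACGAAC")   # one substitution: still similar
s3 = encode_seq("TTTTGGGGCC")   # different composition: dissimilar
print(cos(s1, s2), cos(s1, s3))
```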
Abstract: TBA
Abstract:
What does it mean when a brain-like system 'computes'? This is the question of the *semantics* of neuromorphic computing. In classical digital computing, several mutually connected approaches to formalize the 'meaning' of a computational process have been worked out to textbook format. These formal frameworks allow one to characterize, analyze and prove, for instance, whether a computer program actually does what the user meant it to achieve; whether two different programs actually compute 'the same' task; which tasks can be 'programmed' at all; or what hardware requirements must be met to implement a given program. In brief, semantic theory allows one to analyze how abstract models of computational processes interface with reality - both at the bottom level of the physical reality of hardware, and at the top level of user tasks. Neuromorphic computing theory can learn a lot about these things from looking at the digital world, but also needs to find its very own view on semantics.