Season 12 of VSAONLINE is planned to start at the end of January 2026.
CHECK THE UPCOMING EVENTS TOWARDS THE END OF THIS PAGE!
Abstract:
Sketching algorithms form a broad area of research in theoretical computer science and numerical analysis that aims to distill data into a simple summary, called a "sketch," that retains some essential notion of structure while being much more efficient to store, query, and transmit.
Vector-symbolic architectures (VSAs) are an approach to computing on data represented using random vectors, and provide an elegant conceptual framework for realizing a wide variety of data structures and algorithms in a way that lends itself to implementation in highly-parallel and energy-efficient computer hardware.
Sketching algorithms and VSA have a substantial degree of consonance in their methods, motivations, and applications. In this tutorial-style talk, I will discuss some of the connections between these two fields, focusing in particular on the connections between VSA and tensor sketches, a family of sketching algorithms concerned with the setting in which the data being sketched can be decomposed into Kronecker (tensor) products between more primitive objects. This is exactly the situation of interest in VSA, and the two fields have arrived at strikingly similar solutions to this problem.
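As a concrete illustration of this connection (not part of the talk), the numpy sketch below shows that circular-convolution binding, as used in Holographic Reduced Representations, is exactly a fixed compression of the Kronecker (outer) product of its operands; the dimensionality and Gaussian initialization are illustrative choices. Tensor sketches apply essentially the same convolution trick to randomly hashed copies of the inputs, which is where the two fields meet.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 1024                                    # hypervector dimensionality (illustrative)

# Two random HRR-style hypervectors with i.i.d. N(0, 1/d) entries.
x = rng.normal(0.0, 1.0 / np.sqrt(d), d)
y = rng.normal(0.0, 1.0 / np.sqrt(d), d)

# VSA binding by circular convolution, computed via the FFT.
bound = np.fft.irfft(np.fft.rfft(x) * np.fft.rfft(y), n=d)

# The same vector, obtained by compressing the full outer (Kronecker) product:
# entry k collects every product x_i * y_j with (i + j) mod d == k.
outer = np.outer(x, y)
idx = (np.arange(d)[:, None] + np.arange(d)[None, :]) % d
compressed = np.zeros(d)
np.add.at(compressed, idx, outer)

print(np.allclose(bound, compressed))       # True: binding is a sketch of the outer product
```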
Abstract: Hyperdimensional computation (HDC) is a brain-inspired, lightweight computing paradigm that has shown great potential for edge inference and inference on emerging hardware technologies, achieving state-of-the-art accuracy on certain classification tasks. HDC classifiers are inherently error resistant and support early termination of inference tasks to obtain approximate classification results. Practitioners have developed heuristic methods to terminate inference early on a per-input basis, reducing the computational cost of inference at the cost of accuracy. These techniques lack formal guarantees and therefore may unacceptably degrade classification accuracy or terminate inference tasks later than needed.
We present Omen, the first dynamic HDC optimizer that uses inferential statistics to terminate inference early while providing accuracy guarantees. To realize Omen, we develop a statistical view of HDC that reframes HD computations as statistical sampling and testing tasks, enabling the use of statistical tests. We evaluate Omen on 19 benchmark instantiations of four classification tasks. Omen is computationally efficient, delivering up to 7.21-12.18x inference speed-ups over an unoptimized baseline while incurring only a 0.0-0.7% drop in accuracy. Omen outperforms heuristic methods, achieving an additional 1.04-5.85x inference speed-up while maintaining higher or comparable accuracy.
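For intuition only (this is not the authors' algorithm), the sketch below illustrates statistically justified early termination in an HDC classifier: a bipolar query is scanned against class prototypes in chunks, and inference stops once a Hoeffding-style bound on the per-dimension margin says the leading class is unlikely to change. The dimensionality, chunk size, significance level, and the specific bound are all assumptions made for the example.

```python
import numpy as np

def early_terminate_classify(query, prototypes, alpha=0.05, chunk=256):
    """Classify a bipolar query against bipolar class prototypes, scanning
    dimensions in chunks and stopping once a Hoeffding-style bound suggests
    the leading class will not change (illustrative, not Omen itself)."""
    d = query.shape[0]
    scores = np.zeros(prototypes.shape[0])
    for start in range(0, d, chunk):
        end = min(start + chunk, d)
        scores += prototypes[:, start:end] @ query[start:end]  # partial dot products
        n = end                                    # dimensions observed so far
        order = np.argsort(scores)[::-1]
        gap = (scores[order[0]] - scores[order[1]]) / n        # mean per-dim margin
        # each per-dimension margin term lies in [-2, 2] for bipolar vectors,
        # so a rough one-sided Hoeffding bound gives this threshold
        eps = 2.0 * np.sqrt(2.0 * np.log(1.0 / alpha) / n)
        if gap > eps:
            return int(order[0]), n                # confident early decision
    return int(np.argmax(scores)), d               # fell through: used every dimension

# toy usage: 4 random classes, query = a noisy copy of class 2
rng = np.random.default_rng(1)
d, k = 10_000, 4
prototypes = rng.choice([-1, 1], size=(k, d))
query = prototypes[2] * rng.choice([1, -1], size=d, p=[0.8, 0.2])
print(early_terminate_classify(query, prototypes))  # (2, number of dimensions actually used)
```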
Abstract 1: This talk introduces HD-CB, the first application of Hyperdimensional Computing to model and automate sequential decision-making in Contextual Bandits problems. HD-CB maps environmental states into a high-dimensional space, representing actions with dedicated hypervectors that are updated in real-time based on received rewards. By operating directly in the high-dimensional space, HD-CB replaces traditional, computationally expensive methods, like ridge regression, with efficient and highly parallel vector operations, achieving superior performance, faster convergence, and improved scalability.
https://arxiv.org/abs/2501.16863
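As a rough sketch of the kind of loop the abstract describes (a simplification, not the HD-CB algorithm itself: the random-projection encoder and epsilon-greedy exploration stand in for the paper's encoding and exploration strategy):

```python
import numpy as np

class HDContextualBandit:
    """Minimal hyperdimensional contextual bandit in the spirit of HD-CB;
    the encoder and exploration rule are simplifications, not the paper's."""

    def __init__(self, n_features, n_actions, d=10_000, eps=0.1, seed=0):
        self.rng = np.random.default_rng(seed)
        self.proj = self.rng.normal(size=(d, n_features))  # random context encoder
        self.actions = np.zeros((n_actions, d))            # one hypervector per action
        self.eps = eps

    def encode(self, context):
        # project the context into HD space and binarize to a bipolar hypervector
        return np.sign(self.proj @ context)

    def select(self, context):
        h = self.encode(context)
        if self.rng.random() < self.eps:                   # epsilon-greedy exploration
            return int(self.rng.integers(len(self.actions))), h
        return int(np.argmax(self.actions @ h)), h

    def update(self, action, h, reward):
        # bundle the reward-weighted context into the chosen action's hypervector
        self.actions[action] += reward * h

# toy usage: the best action is the index of the largest of the first 3 features
rng = np.random.default_rng(42)
bandit = HDContextualBandit(n_features=5, n_actions=3)
for _ in range(1000):
    ctx = rng.normal(size=5)
    a, h = bandit.select(ctx)
    bandit.update(a, h, reward=float(a == int(np.argmax(ctx[:3]))))

test = rng.normal(size=5)
print(bandit.select(test)[0], int(np.argmax(test[:3])))    # should tend to agree
```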
Abstract 2: With its inherent parallelism, Hyperdimensional Computing is an ideal candidate for hardware implementation. This talk introduces HDCU, a reconfigurable hardware accelerator powered by a novel RISC-V instruction set extension. HDCU accelerates core arithmetic operations on hypervectors and can be configured at synthesis time to balance execution speed and resource usage, adapting to diverse applications. A custom RISC-V Instruction Set Extension is designed to efficiently control the accelerator, with instructions fully integrated into the GCC compiler chain and exposed to the programmer as intrinsic function calls. The dual flexibility coming from hardware configuration and software programmability sets this work apart from application-specific solutions in the literature, offering a unique, versatile accelerator adaptable to a wide range of applications and learning tasks.
Abstract: This presentation is based on an investigation undertaken for the UK Defence Science and Technology Laboratory (Dstl) into the transformative potential of Vector Symbolic Architecture (VSA), a.k.a. Hyperdimensional Computing, for advancing cognitive processing capabilities at the network edge. The presentation describes a technology integration experiment, demonstrating how the VSA paradigm offers robust solutions for generation-after-next AI deployment at the network edge. Specifically, we show how VSA effectively models and integrates the cognitive processes required to perform intelligence, surveillance, and reconnaissance (ISR) operations. The experiment integrates functions across the observe, orientate, decide and act (OODA) loop, including the processing of sensed data via both a neuromorphic event-based camera and a standard CMOS frame-rate camera; declarative knowledge-based reasoning in a semantic vector space; action planning using VSA cognitive maps (CM); access to procedural knowledge via large language models (LLMs); and efficient communication between agents via highly compact binary vector representations. In contrast to previous ‘point solutions’ showing the effectiveness of VSA for individual OODA tasks, this work takes a ‘whole system’ approach, demonstrating the power of VSA as a uniform integration technology.
Abstract: Modern transformer-based encoder-decoder architectures struggle with reasoning tasks due to their inability to effectively extract relational information between input objects (data/tokens) without interference from object-level information. To address this, we propose RESOLVE, a neuro-vector symbolic architecture that first mixes object-level features with relational information in high-dimensional spaces, using fast and efficient operations such as bundling (summation) and binding (Hadamard product). This allows both object-level features and relational representations to coexist within the same symbolic structure without interfering with one another, while maintaining a high level of abstraction. RESOLVE is driven by a novel attention mechanism that operates in a bipolar high-dimensional space, allowing fast attention score computation compared to the state of the art. By leveraging this design, the model achieves both low compute latency and memory efficiency. RESOLVE also offers better generalizability while achieving higher accuracy in tasks such as mathematical reasoning compared to state-of-the-art methods.
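The bundling and binding primitives named in the abstract are easy to make concrete. The snippet below illustrates the primitives only, not the RESOLVE architecture: role-filler bindings are bundled in a bipolar space, and multiplying by a role vector recovers its filler while other objects remain quasi-orthogonal.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10_000                                   # illustrative dimensionality

def hv():
    """Random bipolar hypervector."""
    return rng.choice([-1, 1], size=d)

def cos(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

apple, ball = hv(), hv()                     # object-level (filler) vectors
left, right = hv(), hv()                     # relational (role) vectors

# binding = Hadamard product, bundling = summation, as named in the abstract
scene = left * apple + right * ball

# a bipolar role is its own inverse (role * role = 1), so multiplying unbinds it
retrieved = scene * left
print(cos(retrieved, apple), cos(retrieved, ball))   # ~0.7 vs ~0.0
```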
Abstract: Previous research on feature binding in visual working memory has supported a privileged role for location in binding an object's nonspatial features. However, humans are able to correctly recall feature conjunctions of objects that occupy the same location at different times. In a series of behavioral experiments, we investigated binding errors under these conditions, and specifically tested whether ordinal position can take the role of location in mediating feature binding. We performed two dual report experiments in which participants had to memorize three colored shapes presented sequentially at the screen center. When participants were cued with the ordinal position of one item and had to report its shape and color, report errors for the two features were largely uncorrelated. In contrast, when participants were cued, for example, with an item's shape and reported an incorrect ordinal position, they had a high chance of making a corresponding error in the color report. This pattern of error correlations closely matched the predictions of a model in which color and shape are bound to each other only indirectly via an item's ordinal position. In a third experiment, we directly compared the roles of location and sequential position in feature binding. Participants viewed a sequence of colored disks displayed at different locations and were cued either by a disk's location or its ordinal position to report its remaining properties. The pattern of errors supported a mixed strategy with individual variation, suggesting that binding via either time or space could be used for this task.
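One way to make the indirect-binding model concrete in VSA terms (our illustration, not the authors' formalism): bind each feature to an ordinal-position hypervector rather than to the other feature, as in the sketch below. A position retrieved incorrectly from a shape cue then drags the color report along with it, producing correlated errors, whereas shape and color retrieved from a position cue fail independently.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10_000
hv = lambda: rng.choice([-1, 1], size=d)     # random bipolar hypervector
sim = lambda a, b: float(a @ b) / d

# ordinal-position, color, and shape vectors for three sequentially presented items
pos = [hv() for _ in range(3)]
col = [hv() for _ in range(3)]
shp = [hv() for _ in range(3)]

# indirect binding: each feature is bound to the item's ordinal position,
# but color and shape are never bound to each other directly
memory = sum(pos[i] * col[i] + pos[i] * shp[i] for i in range(3))

# cueing with ordinal position 1 retrieves its color and shape independently
print(sim(memory * pos[1], col[1]), sim(memory * pos[1], shp[1]))   # both ~1.0

# cueing with a shape retrieves its ordinal position; an error here would
# propagate to the color report, giving the correlated errors observed
probe = memory * shp[1]
print([round(sim(probe, p), 2) for p in pos])                       # peaks at position 1
```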
Abstract: Existing models of the basal ganglia assume the existence of separate channels of neuron populations for representing each available action. This type of localist mapping limits models to small, discrete action spaces, since additional actions require additional channels, costing neural resources and imposing new connective tracts. In contrast, evidence suggests that the basal ganglia plays a role in the selection of both discrete action units, and continuously-valued action kinematics. In this work, we model the basal ganglia with distributed action representations, using high-dimensional vectors. This method lends itself to representing both discrete and continuous action spaces. Vectors that represent actions are weighted by a scalar value (their salience to the current task), and bundled together to form a single input vector. This paper provides an overview of the encoding method and network structure, as well as a demonstration of the model solving an action selection task using spiking neurons.
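A minimal sketch of the encoding step described here, with made-up action names and salience values, and with a simple similarity readout standing in for the spiking selection network described in the talk:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 512                                       # illustrative vector dimensionality

def unit(v):
    return v / np.linalg.norm(v)

# one random unit vector per action (a distributed, not localist, representation)
actions = {name: unit(rng.normal(size=d)) for name in ["reach", "grasp", "withdraw"]}

# each action vector is weighted by its salience and bundled into one input vector
salience = {"reach": 0.3, "grasp": 0.9, "withdraw": 0.1}
bundle = sum(w * actions[a] for a, w in salience.items())

# decoding by similarity recovers the most salient action (a stand-in for the
# spiking basal ganglia selection circuit in the talk)
scores = {a: float(v @ bundle) for a, v in actions.items()}
print(max(scores, key=scores.get))            # -> "grasp"
```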
THE VIDEO OF THE PRESENTATION WILL BE AVAILABLE AFTER PUBLICATION OF THE ARTICLE
Abstract: Dense Associative Memories are high-storage-capacity variants of Hopfield networks that are capable of storing a large number of memory patterns in the weights of a network of a given size. Their common formulations typically require storing each pattern in a separate set of synaptic weights, which leads to an increase in the number of synaptic weights when new patterns are introduced. In this work, we propose an alternative formulation of this class of models using random features, commonly used in kernel methods. In this formulation the number of the network's parameters remains fixed. At the same time, new memories can be added to the network by modifying existing weights. We show that this novel network closely approximates the energy function and dynamics of conventional Dense Associative Memories and shares their desirable computational properties.
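A rough numpy illustration of the random-feature idea (the published formulation may use a different feature map and energy convention): random Fourier features approximate the log-sum-exp energy of a Dense Associative Memory, all stored patterns are compressed into one fixed-size vector, and adding a memory is a single vector update that never grows the parameter count.

```python
import numpy as np

rng = np.random.default_rng(0)
n_dim, n_mem, n_feat, beta = 64, 20, 4096, 4.0   # illustrative sizes

# unit-norm memory patterns
patterns = rng.normal(size=(n_mem, n_dim))
patterns /= np.linalg.norm(patterns, axis=1, keepdims=True)

# random Fourier features approximating exp(beta * x.y) for unit-norm x, y,
# using exp(beta * x.y) = e^beta * exp(-beta/2 * ||x - y||^2) (a Gaussian kernel)
W = rng.normal(scale=np.sqrt(beta), size=(n_feat, n_dim))
b = rng.uniform(0.0, 2.0 * np.pi, size=n_feat)
phi = lambda x: np.sqrt(2.0 / n_feat) * np.cos(W @ x + b)

# the entire memory is one fixed-size feature vector; adding a new pattern is
# T += phi(new_pattern) and never increases the number of parameters
T = sum(phi(p) for p in patterns)

def energy_exact(x):
    return -np.log(np.sum(np.exp(beta * patterns @ x))) / beta

def energy_rf(x):
    return -(beta + np.log(max(phi(x) @ T, 1e-12))) / beta

q = patterns[3] + 0.1 * rng.normal(size=n_dim)
q /= np.linalg.norm(q)
print(energy_exact(q), energy_rf(q))              # the two energies should be close
```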
Abstract: Continuous attractor networks (CANs) are widely accepted to model the temporary retention of information in the brain through persistent recurrent activity, but are notoriously sensitive to synaptic and neural nonidealities. Introducing structured synaptic heterogeneity mitigates noise-induced diffusion of the represented variable, but reduces the idealised continuum of states to a set of discrete attractor states. This work demonstrates that periodic grid-cell-like codes support CANs which represent continuous variables with high precision, while remaining robust to noise, by increasing the path length of the line attractor in the neural state space. Together, this provides a description of how biological RNNs could perform robust computation using low-dimensional nonlinear neural manifolds and programmable flows upon them.
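The following toy example (ours, with arbitrarily chosen module periods) shows why a periodic, grid-cell-like code helps: three phase modules with periods 3, 4, and 5 jointly represent a variable unambiguously over a range of 60, so the combined code traces a long path through state space and small phase noise produces only small errors in the decoded variable.

```python
import numpy as np

rng = np.random.default_rng(0)
periods = np.array([3.0, 4.0, 5.0])          # grid-module periods (illustrative)

def encode(x):
    """Each module carries x only as a phase on a ring (a periodic, grid-cell-like
    code); stacking cos/sin of the phases gives the population state."""
    ph = 2.0 * np.pi * x / periods
    return np.concatenate([np.cos(ph), np.sin(ph)])

def decode(state, candidates=np.linspace(0.0, 60.0, 60_001)):
    """Maximum-correlation readout; the joint code is unambiguous up to the
    least common multiple of the periods (60 here), far beyond any single module."""
    ph = 2.0 * np.pi * candidates[:, None] / periods[None, :]
    C = np.concatenate([np.cos(ph), np.sin(ph)], axis=1)
    return candidates[np.argmax(C @ state)]

x_true = 37.25
noisy = encode(x_true) + 0.2 * rng.normal(size=2 * len(periods))   # noisy neural state
print(decode(noisy))                         # close to 37.25 despite the noise
```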
Abstract: Hypercomplex algebras have recently been gaining prominence in the field of deep learning owing to the advantages of their division algebras over real vector spaces and their superior results when dealing with multidimensional signals in real-world 3D and 4D paradigms. This paper provides a foundational framework that serves as a roadmap for understanding why hypercomplex deep learning methods are so successful and how their potential can be exploited. Such a theoretical framework is described in terms of inductive bias, i.e., a collection of assumptions, properties, and constraints that are built into training algorithms to guide their learning process toward more efficient and accurate solutions. We show that it is possible to derive specific inductive biases in the hypercomplex domains, which extend complex numbers to encompass diverse numbers and data structures. These biases prove effective in managing the distinctive properties of these domains, as well as the complex structures of multidimensional and multimodal signals. This novel perspective for hypercomplex deep learning promises both to demystify this class of methods and to clarify their potential under a unifying framework, and in this way promotes hypercomplex models as viable alternatives to traditional real-valued deep learning for multidimensional signal processing.
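As one concrete instance of such an inductive bias (a generic quaternion layer, not a model from the paper), the sketch below implements a quaternion-valued linear map via the Hamilton product: four weight blocks are shared across all sixteen component interactions, coupling the components of a multidimensional signal while using a quarter of the parameters of a real-valued layer of the same size.

```python
import numpy as np

def hamilton_linear(x, W):
    """Quaternion (hypercomplex) linear layer: input and weights are each split
    into 4 components that are mixed according to the Hamilton product. The 4
    weight blocks are reused across all 16 component interactions."""
    xr, xi, xj, xk = x            # each of shape (in_dim,)
    Wr, Wi, Wj, Wk = W            # each of shape (out_dim, in_dim)
    r = Wr @ xr - Wi @ xi - Wj @ xj - Wk @ xk
    i = Wr @ xi + Wi @ xr + Wj @ xk - Wk @ xj
    j = Wr @ xj - Wi @ xk + Wj @ xr + Wk @ xi
    k = Wr @ xk + Wi @ xj - Wj @ xi + Wk @ xr
    return np.stack([r, i, j, k])

rng = np.random.default_rng(0)
in_dim, out_dim = 16, 8
x = rng.normal(size=(4, in_dim))              # e.g., a 4-component (RGB + depth) feature block
W = rng.normal(size=(4, out_dim, in_dim)) / np.sqrt(in_dim)
print(hamilton_linear(x, W).shape)            # (4, out_dim)
# A real-valued dense layer of the same input/output size would need a
# (4*out_dim) x (4*in_dim) weight matrix; here 4 blocks of out_dim x in_dim
# are shared across components, i.e. 4x fewer parameters, and the components mix.
```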