Online Speakers' Corner on Vector Symbolic Architectures and Hyperdimensional Computing

CHECK THE UPCOMING EVENTS TOWARDS THE END OF THIS PAGE!

Welcome to the Spring 2024 session of the online workshop on VSA and hyperdimensional computing. The next webinar of the spring session is on May 6, 2024, at 20:00 GMT.

USE THIS LINK TO ACCESS THE WEBINAR:
https://ltu-se.zoom.us/j/65564790287

Modelling neural probabilistic computation using vector symbolic architectures.  January 29, 2024. 20:00GMT

Michael Furlong, University of Waterloo, Canada

Abstract: Distributed vector representations, and specifically Vector Symbolic Architectures (VSAs), are a key bridging point between connectionist and symbolic representations in cognition. In this talk we will discuss how bundles of symbols in certain Vector Symbolic Architectures (VSAs) can be understood as defining an object that has a relationship to a probability distribution, and how statements using the Holographic Reduced Representation algebra are analogous to probabilistic statements. We will discuss how dot-product similarity between continuous values represented as Spatial Semantic Pointers (SSPs), an example of the technique of fractional binding, induces a kernel function that can be used in density estimation. We will further show how populations of spiking neurons can be used to implement probabilistic operations that are useful in building cognitive models. We also discuss the relationship between our technique and other approaches for modelling uncertainty in high-dimensional vector spaces.
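The fractional-binding idea behind SSPs can be illustrated with a minimal NumPy sketch (an FHRR-style toy, not Furlong's implementation): a base vector of random phases is "raised to the power" of a continuous value, and the dot-product similarity between two encodings decays with their distance, inducing a kernel.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 2048
theta = rng.uniform(-np.pi, np.pi, d)  # random phases of an FHRR-style base vector

def ssp(x):
    # fractional binding: exponentiate the base vector by the scalar x
    return np.exp(1j * theta * x) / np.sqrt(d)

def sim(x, y):
    # dot-product similarity; approximates a sinc-like kernel in |x - y|
    return np.real(np.vdot(ssp(x), ssp(y)))

print(sim(0.0, 0.0))  # 1.0: identical encodings
print(sim(0.0, 3.0))  # near 0: similarity decays with distance
```

Averaging such kernels over a set of encoded samples is what makes the bundle usable for density estimation.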

Compositional Vector Semantics in Spiking Neural Networks.  February 12, 2024. 20:00GMT

Martha Lewis, University of Bristol, Bristol, UK

Abstract: Categorical compositional distributional semantics is an approach to modelling language that combines the success of vector-based models of meaning with the compositional power of formal semantics. However, this approach was developed without an eye to cognitive plausibility. Vector representations of concepts and concept binding are also of interest in cognitive science and have been proposed as a way of representing concepts within a biologically plausible spiking neural network. This work proposes a way for compositional distributional semantics to be implemented within a spiking neural network architecture, with the potential to address problems in concept binding, and gives a small proof-of-concept implementation. We also describe a means of training word representations using labelled images.

MIMONets: Multiple-Input-Multiple-Output Neural Networks Exploiting Computation in Superposition.  February 26, 2024. 20:00GMT

Nicolas Menet, Abbas Rahimi, IBM Research, Switzerland.

Abstract: With the advent of deep learning, progressively larger neural networks have been designed to solve complex tasks. We take advantage of these capacity-rich models to lower the cost of inference by exploiting computation in superposition. To reduce the computational burden per input, we propose Multiple-Input-Multiple-Output Neural Networks (MIMONets) capable of handling many inputs at once. MIMONets augment various deep neural network architectures with variable binding mechanisms to represent an arbitrary number of inputs in a compositional data structure via fixed-width distributed representations. Accordingly, MIMONets adapt nonlinear neural transformations to process the data structure holistically, leading to a speedup nearly proportional to the number of superposed input items in the data structure. After processing in superposition, an unbinding mechanism recovers each transformed input of interest. MIMONets also provide a dynamic trade-off between accuracy and throughput by an instantaneous on-demand switching between a set of accuracy-throughput operating points, yet within a single set of fixed parameters. We apply the concept of MIMONets to both CNN and Transformer architectures resulting in MIMOConv and MIMOFormer, respectively. Empirical evaluations show that MIMOConv achieves ≈ 2 – 4× speedup at an accuracy delta within [+0.68, −3.18]% compared to WideResNet CNNs on CIFAR10 and CIFAR100. Similarly, MIMOFormer can handle 2–4 inputs at once while maintaining a high average accuracy within a [−1.07, −3.43]% delta on the Long Range Arena benchmark. Finally, we provide mathematical bounds on the interference between superposition channels in MIMOFormer.

Paper: https://openreview.net/forum?id=ox7aynitoW

Code: https://github.com/IBM/multiple-input-multiple-output-nets 
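The bind-superpose-unbind data structure that MIMONets build on can be illustrated with a plain HRR toy (not the paper's code): several key-value pairs are bound by circular convolution, added into one vector, and a single item is later recovered by unbinding and a nearest-neighbor cleanup.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4096

def cc(a, b):
    # circular convolution: the HRR binding operation, via FFT
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def inv(a):
    # approximate inverse of a vector under circular convolution
    return np.roll(a[::-1], 1)

keys = rng.normal(0, 1/np.sqrt(d), (3, d))
vals = rng.normal(0, 1/np.sqrt(d), (3, d))

# superposed data structure holding all three bound pairs at once
bundle = sum(cc(k, v) for k, v in zip(keys, vals))

# unbind the first key and clean up against the known values
probe = cc(bundle, inv(keys[0]))
sims = vals @ probe
print(np.argmax(sims))  # → 0: the first value is recovered
```

In MIMONets the network processes the bundle itself, so the cost of the forward pass is shared across the superposed inputs; the toy above only shows the underlying variable-binding data structure.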

Role of sleep in memory and learning: from biological to artificial systems.  March 11, 2024. 20:00GMT

Maxim Bazhenov, UC San Diego, USA

Abstract: Artificial neural networks are known to exhibit a phenomenon called catastrophic forgetting, where their performance on previously learned tasks deteriorates when learning new tasks sequentially. In contrast, human and animal brains possess the remarkable ability of continual learning, enabling them to incorporate new information while preserving past memories. Empirical evidence indicates that sleep plays a crucial role in the consolidation of recent memories and safeguarding against catastrophic forgetting of previously acquired knowledge. In this study, I will begin by elucidating the key features and mechanisms of sleep in the brain. Subsequently, I will present our recent findings on the application of sleep-related concepts in computational models of brain networks and artificial intelligence.

Computing with Residue Numbers in High-Dimensional Representation.  March 25, 2024. 20:00GMT

Chris Kymn,  UC Berkeley, USA

Abstract: We introduce Residue Hyperdimensional Computing, a computing framework that unifies residue number systems with an algebra defined over random, high-dimensional vectors. We show how residue numbers can be represented as high-dimensional vectors in a manner that allows algebraic operations to be performed with component-wise, parallelizable operations on the vector elements. The resulting framework, when combined with an efficient method for factorizing high-dimensional vectors, can represent and operate on numerical values over a large dynamic range using vastly fewer resources than previous methods, and it exhibits impressive robustness to noise. We demonstrate the potential for this framework to solve computationally difficult problems in visual perception and combinatorial optimization, showing improvement over baseline methods. More broadly, the framework provides a possible account for the computational operations of grid cells in the brain, and it suggests new machine learning architectures for representing and manipulating numerical data. 
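The core encoding idea can be sketched in a few lines of NumPy (a toy reading of the construction, not the authors' code): each modulus gets a phasor base vector whose phases are multiples of 2π/m, so exponentiating by an integer x depends only on x mod m, and binding the per-modulus encodings yields a representation that wraps at the product of the moduli.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 1024
moduli = (3, 5, 7)  # dynamic range = 3 * 5 * 7 = 105

# one random phasor base vector per modulus; phases are multiples of 2*pi/m
bases = [np.exp(2j * np.pi * rng.integers(0, m, d) / m) for m in moduli]

def encode(x):
    # bind (multiply) the per-modulus encodings; b**x depends only on x mod m
    v = np.ones(d, dtype=complex)
    for b in bases:
        v = v * b**x
    return v / np.sqrt(d)

def sim(u, v):
    return np.real(np.vdot(u, v))

print(round(sim(encode(17), encode(17)), 2))        # 1.0: same number
print(round(sim(encode(17), encode(17 + 105)), 2))  # 1.0: wraps at 105
```

Distinct residues give near-zero similarity, which is what makes recovery by resonator-style factorization possible.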

Symbolic Disentangled Representations for Images.  April 8, 2024. 20:00GMT

Alexandr Korchemnyi, Alexey Kovalev,  MIPT, AIRI, Russia

Abstract: The idea of disentangled representations is to reduce data to a set of generative factors that produce it. Usually, such representations are vectors in a latent space in which each coordinate corresponds to one of the generative factors. An object represented in this way can then be modified by changing the value of a specific coordinate. But first, we need to determine which coordinate handles the desired generative factor, which can be difficult when the vector dimension is high. In our work, we propose ArSyD (Architecture for Symbolic Disentanglement), which represents each generative factor as a vector of the same dimension as the resulting representation. The object representation is then obtained as a superposition of the vectors responsible for the generative factors. We call such a representation a symbolic disentangled representation. Representation disentanglement is achieved by construction; no additional assumptions about the distributions are made during training, and the model is trained only to reconstruct images. We study our approach on objects from the dSprites and CLEVR datasets and provide a comprehensive analysis of the learned symbolic disentangled representations. We also propose new disentanglement metrics that allow comparison of models with different latent representations.

Generalized Holographic Reduced Representations.  April 22, 2024. 20:00GMT

Calvin Yeung, Zhuowen (Kevin) Zou, and Mohsen Imani, UC Irvine, USA 

Abstract: Hyperdimensional Computing (HDC) acts as a bridge between connectionist and symbolic approaches to artificial intelligence (AI), offering explicit specification of representational structure akin to symbolic approaches while retaining the flexibility of connectionist models. However, HDC's commutative binding operation poses challenges for encoding complex, nested compositional structures. To address this, we propose Generalized Holographic Reduced Representations (GHRR), an extension of Fourier Holographic Reduced Representations (FHRR), a specific HDC implementation. GHRR introduces a flexible, non-commutative binding operation, enabling improved encoding of complex data structures while preserving HDC's desirable properties of robustness and transparency. In this work, we introduce the GHRR framework, prove its theoretical properties and its adherence to HDC properties, explore its kernel and binding characteristics, and perform empirical experiments showcasing its flexible non-commutativity, enhanced decoding accuracy for compositional structures, and improved memorization capacity compared to FHRR.
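The non-commutativity can be illustrated with a minimal sketch, assuming one reading of the generalization: each FHRR phasor is replaced by a small unitary block, and binding becomes a blockwise matrix product, so the order of the operands matters (this is an illustrative toy, not the authors' code).

```python
import numpy as np

rng = np.random.default_rng(3)
d, k = 256, 2  # number of blocks per hypervector, block size

def rand_hv():
    # one random unitary k x k block per component, via QR decomposition
    blocks = []
    for _ in range(d):
        m = rng.normal(size=(k, k)) + 1j * rng.normal(size=(k, k))
        q, _ = np.linalg.qr(m)
        blocks.append(q)
    return np.array(blocks)  # shape (d, k, k)

def bind(a, b):
    return a @ b  # blockwise matrix product: order matters

a, b = rand_hv(), rand_hv()
print(np.allclose(bind(a, b), bind(b, a)))  # False: binding is non-commutative
```

With k = 1 the blocks reduce to unit-magnitude complex numbers and the construction collapses back to commutative FHRR binding.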

Sketching of Polynomial Kernels.  May 6, 2024. 20:00GMT

Rasmus Pagh, University of Copenhagen, Denmark.

Abstract: Approximation of non-linear kernels using random feature mapping has been successfully employed in large-scale data analysis applications, accelerating the training of kernel machines. While previous random feature mappings run in O(ndD) time for n training samples in d-dimensional space and D random feature maps, Tensor Sketch (Pham & P., KDD ‘13) approximates any polynomial kernel in O(n(d + D log D)) time. This talk presents Tensor Sketch and its relation to CountSketch, a fast dimension reduction technique by Charikar et al. that has become a key tool in dealing with high-dimensional data. It will also touch upon some later extensions to Tensor Sketch.
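The degree-2 construction can be sketched in a few lines of NumPy (a simplified toy: hash and sign functions are tabulated random arrays rather than the pairwise-independent families used in practice): two independent CountSketches of the input are circularly convolved via FFT, and inner products of the resulting sketches approximate the degree-2 polynomial kernel.

```python
import numpy as np

rng = np.random.default_rng(4)
d, D = 50, 4096  # input dimension, sketch dimension

# two independent CountSketch hash/sign tables
h = rng.integers(0, D, (2, d))
s = rng.choice([-1.0, 1.0], (2, d))

def count_sketch(x, j):
    out = np.zeros(D)
    np.add.at(out, h[j], s[j] * x)  # scatter-add; collisions are allowed
    return out

def tensor_sketch(x):
    # degree-2 Tensor Sketch: circular convolution of two CountSketches,
    # computed via FFT in O(d + D log D) per input
    f = np.fft.fft(count_sketch(x, 0)) * np.fft.fft(count_sketch(x, 1))
    return np.real(np.fft.ifft(f))

x = rng.normal(size=d)
exact = (x @ x) ** 2                          # degree-2 polynomial kernel k(x, x)
approx = tensor_sketch(x) @ tensor_sketch(x)  # concentrates around exact as D grows
```

The circular convolution is what keeps the sketch dimension at D instead of the D² a naive sketch of the tensor product x ⊗ x would need.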

Modularizing and Assembling Cognitive Map Learners via Hyperdimensional Computing.  May 20, 2024. 20:00GMT

Nathan McDonald, Air Force Research Laboratory, USA

Abstract: Biological organisms must learn how to control their own bodies to achieve deliberate locomotion, that is, predict their next body position based on their current position and selected action. Such learning is goal-agnostic with respect to maximizing (minimizing) an environmental reward (penalty) signal. A cognitive map learner (CML) is a collection of three separate yet collaboratively trained artificial neural networks which learn to construct representations for the node states and edge actions of an arbitrary bidirectional graph. In so doing, a CML learns how to traverse the graph nodes; however, the CML does not learn when and why to move from one node state to another. This work created CMLs with node states expressed as high dimensional vectors suitable for hyperdimensional computing (HDC), a form of symbolic machine learning (ML). In so doing, graph knowledge (CML) was segregated from target node selection (HDC), allowing each ML approach to be trained independently. The first approach used HDC to engineer an arbitrary number of hierarchical CMLs, where each graph node state specified target node states for the next lower level CMLs to traverse to. Second, an HDC-based stimulus-response experience model was demonstrated per CML. Because hypervectors may be in superposition with each other, multiple experience models were added together and run in parallel without any retraining. Lastly, a CML-HDC ML unit was modularized: trained with proxy symbols such that arbitrary, application-specific stimulus symbols could be operated upon without retraining either CML or HDC model. These methods provide a template for engineering heterogenous ML systems. 

Presented slides

VSAONLINE-BLITZ.  June 3, 2024. 20:00GMT

Stefan Reimann, University of Zurich, Switzerland : Memory from Almost Nothing

Josh Cynamon, Indiana University, USA : Permutations in VSA


The last session of the Spring Season 2024 on VSAONLINE features two shorter talks discussing exploratory work.