Online Speakers' Corner on Vector Symbolic Architectures and Hyperdimensional Computing

CHECK THE UPCOMING EVENTS AT THE END OF THIS PAGE!

If you want to credit this webinar series, use the following BibTeX entry when citing.


Welcome to the Winter 2022 session of the online workshop on VSA and hyperdimensional computing. The next webinar of the session will take place on June 20, 2022, at 20:15 GMT.

USE THIS LINK TO ACCESS THE WEBINAR:
https://ltu-se.zoom.us/j/65564790287

Using VSA/HDC for designing image descriptors. January 17, 2022. 20:00 GMT

Peer Neubert, TU Chemnitz, Chemnitz, Germany

Abstract:  In this talk, I will discuss and demonstrate why VSA/HDC can be a very valuable tool for creating powerful image descriptors. For the VSA community, I hope to showcase a promising application area with broad practical impact. Image descriptors are an omnipresent tool in computer vision and in application fields such as mobile robotics. The underlying idea of an image descriptor is to encode information from an image in a vector such that the similarity of different descriptors tells us something about the relation of the corresponding images (e.g. that they show the same scene). Many hand-crafted and in particular (deep) learned image descriptors are numerical vectors with a potentially (very) large number of dimensions. The central idea of this talk is that VSA/HDC can be used to systematically post-process such existing descriptors and combine them with additional information (e.g. geometric or semantic information) to create better descriptors. This builds upon the ability of VSA operations to combine multiple vectors while controlling the similarity between input and output vectors. I will use the example task of visual place recognition in changing environments to demonstrate the versatility and practical value of this approach. The talk will mainly draw on material from the recent computer vision and robotics conference papers [1] and [2]. In particular, I will present a simple VSA/HDC approach (HDC-DELF) from [1] that improved the state-of-the-art performance on a very competitive computer vision task.

[1] Neubert, P. & Schubert, S. (2021) Hyperdimensional computing as a framework for systematic aggregation of image descriptors. In Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR)

[2] Neubert, P., Schubert, S., Schlegel, K. & Protzel, P. (2021) Vector Semantic Representations as Descriptors for Visual Place Recognition. In Proc. of Robotics: Science and Systems (RSS).
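The core mechanism the abstract describes, binding local descriptors to position vectors and bundling the results into one holistic image vector, can be sketched in a few lines. This is a minimal generic illustration with random bipolar hypervectors and hypothetical position roles ("tl", "tr", etc.), not the actual HDC-DELF pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4096  # hypervector dimensionality

def bind(a, b):
    # elementwise multiplication binds two bipolar hypervectors
    return a * b

def bundle(vs):
    # elementwise sum superimposes hypervectors; the result stays similar to its inputs
    return np.sum(vs, axis=0)

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# hypothetical random bipolar "role" vectors for image regions
pos = {p: rng.choice([-1.0, 1.0], size=D) for p in ["tl", "tr", "bl", "br"]}
# hypothetical local descriptors, already mapped to bipolar hypervectors
desc = {p: rng.choice([-1.0, 1.0], size=D) for p in pos}

# holistic image descriptor: bundle of position-bound local descriptors
image_vec = bundle([bind(pos[p], desc[p]) for p in pos])

# unbinding with a position vector recovers a noisy copy of that local descriptor
recovered = bind(pos["tl"], image_vec)
assert cos(recovered, desc["tl"]) > 0.3
assert abs(cos(recovered, desc["br"])) < 0.2
```

The key property exercised here is the one the abstract highlights: the combined vector remains similar to its inputs in a controlled way, so region-specific information stays recoverable from a single fixed-length vector.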

Presented slides: Download

Ontology as manifold: towards symbolic and numerical artificial embedding. January 31, 2022. 20:00 GMT

Chloe Mercier, Frederic Alexandre, Thierry Vieville, INRIA, Antibes, France

Abstract: Some human cognitive tasks may involve tightly interleaved logical and numerical computations. On the one hand, ontology modelling makes it possible to describe structured symbolic knowledge and perform logical inference, providing a rather natural representation of human reasoning as modeled in cognitive psychology. On the other hand, spiking neural networks are a biologically plausible implementation of processing in brain circuits, although they process numeric vectors rather than symbolic data; the Semantic Pointer Architecture (SPA), based on the Vector Symbolic Architecture (VSA) framework, provides a way to manipulate symbols embedded as numeric vectors that carry semantic information.

As a step towards filling the symbolic/numerical gap, we propose to map an ontology onto an SPA-based manifold. More specifically, we focus on ontology standards used in the semantic web, such as the Resource Description Framework Schema (RDFS) and the Web Ontology Language (OWL). We provide a partial implementation in spiking neural networks, using the neural simulator Nengo, to illustrate the case of RDFS entailments based on predicate chaining. We report interesting formal results: our embedding enjoys intrinsic properties that allow semantic reasoning through distributed numerical computing. This preliminary work thus combines symbolic and numerical approaches to cognitive modeling, which might be useful for modeling complex human tasks, such as ill-defined problem solving, that involve neuronal knowledge manipulation.
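The general idea of embedding symbolic structure as numeric vectors can be illustrated with a minimal role-filler sketch: an RDF-style triple encoded as a sum of bindings, queried by unbinding. This is a generic VSA illustration (random bipolar hypervectors, elementwise multiplication as binding), not the authors' SPA/Nengo implementation:

```python
import numpy as np

rng = np.random.default_rng(7)
D = 4096

def hv():
    # random bipolar hypervector
    return rng.choice([-1.0, 1.0], size=D)

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# role vectors for subject, predicate, object
S, P, O = hv(), hv(), hv()
# hypothetical filler symbols for the triple (cat, isA, animal)
cat, isA, animal = hv(), hv(), hv()

# the whole triple lives in one fixed-length vector
triple = S * cat + P * isA + O * animal

# unbinding with a role vector retrieves a noisy copy of its filler
assert cos(P * triple, isA) > 0.3
assert abs(cos(P * triple, animal)) < 0.2
```

In an SPA setting the same algebra is carried out by neural populations, but the distributed-representation principle is the one shown here.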

Presented slides: Download

Large-scale Systems Optimization: From Oscillatory Physics to High-dimensional Vector Memories. February 14, 2022. 20:00 GMT

Cristian Axenie

Huawei Research Center, Munich, Germany

Abstract:  The complexity and dynamic order of controlled engineering systems is constantly increasing. Modeling of these systems often results in high-order models that impose great challenges to their analysis, design, and feedback control. Large-scale urban road traffic is a candidate for such a class of systems. Despite advances in road traffic control systems, the problem of optimal traffic signal timing still resists straightforward solutions. Fundamentally nonlinear, traffic flows exhibit both locally periodic dynamics and globally coupled correlations across scales. The talk will focus on two road traffic optimization methods that exploit either the physics of traffic or its intrinsic causal relations. Both systems use distributed representations and computation of road traffic quantities in spiking neural networks. The first system introduces a method to represent and control the periodic behavior of traffic using high-dimensional vectors. The second system uses a method to represent and store causal action-consequence pairs from traffic dynamics. Our real-world data experiments demonstrate that both systems are promising candidates for real-world deployment.

Presented slides: Download

Neuromorphic Visual-Spatial Processing with VSAs. February 28, 2022. 20:00 GMT

Alpha Renner, Institute of Neuroinformatics (INI), University of Zurich and ETH Zurich, Switzerland

Abstract:  In this talk I will present recent work that is relevant to both the VSA and the neuromorphic communities:

1. How we use VSAs to represent images and objects in space (2D and 3D), including transformations such as translations and rotations. This can be used to process visual input from event-based cameras and to perform other visual-spatial tasks towards visual SLAM.

2. How VSAs can be implemented on neuromorphic hardware using efficient spike-timing-based phasor neurons. 

3. Conventional algorithms cannot readily be converted to spiking networks that can be run on neuromorphic hardware. VSAs can help solve this important issue, as they serve as a layer of abstraction that enables us to implement algorithms such as the recently introduced resonator network on Intel's neuromorphic research processor Loihi.

Similarity-Based Attention for Vector-Symbolic Architectures. March 14, 2022. 20:00 GMT

Wilkie Olin-Ammentorp, Maxim Bazhenov, Bazhenov Lab, UCSD, USA

Abstract:  Attention-based architectures such as Transformers have allowed neural networks to achieve new state-of-the-art results on tasks such as natural language processing. More recently, the 'Perceiver' architecture (Jaegle et al., 2021) has provided a method for scaling attention-based architectures to large input spaces and arbitrary tasks (e.g. audio classification, optical flow, image compression).


At a high level, parallels can be drawn between attention-based and vector-symbolic systems: we posit that the 'score' calculated between key and value vectors in an attention mechanism can be replaced by the similarity calculated between symbols in a vector-symbolic architecture (VSA). This replacement could allow for a natural adaptation of attention-based mechanisms to transform and process VSAs. We present our preliminary work demonstrating this principle and adapting attention-based mechanisms into our previous 'bridge' networks to expand their capabilities.
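The proposed substitution can be sketched directly: a standard attention readout in which the score is the cosine similarity between hypervectors rather than a learned scaled dot product. Keys, values, and the temperature are hypothetical; this is not the authors' bridge-network code:

```python
import numpy as np

rng = np.random.default_rng(1)
D, n = 2048, 5  # hypervector dimensionality, number of stored symbols

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# hypothetical symbol set: random bipolar hypervectors as keys, arbitrary values
keys = rng.choice([-1.0, 1.0], size=(n, D))
values = rng.normal(size=(n, D))

def vsa_attention(query, keys, values, temp=0.1):
    # attention scores are VSA cosine similarities instead of scaled dot products
    sims = keys @ query / (np.linalg.norm(keys, axis=1) * np.linalg.norm(query))
    w = softmax(sims / temp)
    return w @ values

# a query identical to key 2 should attend almost entirely to value 2,
# since random hypervectors are quasi-orthogonal (similarity near 0)
out = vsa_attention(keys[2], keys, values)
assert np.allclose(out, values[2], atol=0.1)
```

The quasi-orthogonality of random hypervectors is what makes the similarity scores sharply selective here, which is the parallel to attention scores that the abstract draws.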


Presented slides: Download

Linguistic semantics and geometry. March 28, 2022. 20:00 GMT

Jussi Karlgren,  Spotify, Sweden

Abstract:  Geometric models are used for modelling meaning in various semantic-space models. They are seductive in their simplicity and their imaginative qualities, and for that reason their metaphorical power risks leading our intuitions astray: human intuition works well in a three-dimensional world but is overwhelmed by higher dimensionalities. This talk will discuss some characteristics of human language semantics, some requirements we would like a knowledge representation to accommodate, and some pitfalls of using high-dimensional geometric representations. I will conclude with an example of how a VSA-inspired Random Indexing model has been used in preliminary experimentation.

Presented slides: Download

Residue Locality-Preserving Encodings. April 11, 2022. 20:00GMT

Christopher Kymn, UC Berkeley, USA.

Abstract:  I will present some preliminary results regarding a number encoding scheme, which I call Residue Locality-Preserving Encodings (RLPEs), that fall under the framework of Vector Symbolic Architectures/Hyperdimensional Computing. RLPEs provide a method for encoding, manipulating, and efficiently decoding integer-valued variables with random high-dimensional vectors. In this talk, I’ll recapitulate the relevant prior work (including resonator networks and vector function architectures), define RLPEs, show some capacity results, and present applications to improving VSAs and solving difficult optimization problems.
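The flavor of a residue-based hypervector encoding can be sketched by combining residue arithmetic with phasor hypervectors whose phases are roots of unity, so that each component is periodic in its modulus. This is a hypothetical illustration of the general idea, not the RLPE scheme as presented in the talk:

```python
import numpy as np

rng = np.random.default_rng(2)
D = 1024
moduli = [3, 5, 7]  # pairwise coprime: unique codes for integers 0..104

# base hypervectors: phasors whose phases are m-th roots of unity,
# so each base is periodic with period m (base**m == 1 elementwise)
bases = {m: np.exp(2j * np.pi * rng.integers(0, m, size=D) / m) for m in moduli}

def encode(x):
    # bind (elementwise multiply) the per-modulus encodings base_m ** x;
    # each factor depends only on x mod m
    v = np.ones(D, dtype=complex)
    for m in moduli:
        v *= bases[m] ** x
    return v

def sim(a, b):
    return float(np.real(np.vdot(a, b)) / D)

# equal integers map to identical hypervectors; distinct ones are quasi-orthogonal
assert sim(encode(17), encode(17)) > 0.99
assert abs(sim(encode(17), encode(42))) < 0.2
# codes repeat with period 3 * 5 * 7 = 105, as residue arithmetic dictates
assert sim(encode(4), encode(4 + 105)) > 0.99
```

Efficient decoding in the talk's setting relies on further machinery (e.g. resonator networks) that factors such a bound vector back into its per-modulus components; the sketch only shows the encoding side.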

PUBLICATION OF THE SLIDES AND THE VIDEO RECORDING IS POSTPONED AT THE REQUEST OF THE AUTHOR. CONTACT cjkymn@berkeley.edu WITH TOPIC-RELATED QUESTIONS.

Fractional Binding in Vector Symbolic Representations for Efficient Mutual Information Exploration. April 25, 2022. 20:00 GMT

Michael Furlong, University of Waterloo, Canada

Abstract:  Mutual information (MI) is a standard objective function for driving exploration. The use of Gaussian processes to compute information gain is limited by time and memory complexity that grows with the number of observations collected. We present an efficient implementation of MI-driven exploration by combining vector symbolic architectures with Bayesian Linear Regression. We demonstrate equivalent regret performance to a GP-based approach with memory and time complexity that is constant in the number of samples collected, as opposed to t^2 and t^3, respectively, enabling long-term exploration.
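Fractional binding (fractional power encoding) of a continuous variable can be sketched as follows: a fixed random phasor vector is raised to a real-valued exponent, so that nearby values get similar hypervectors. This is a generic illustration with assumed parameters; the Bayesian linear regression machinery built on top of it is omitted:

```python
import numpy as np

rng = np.random.default_rng(3)
D = 2048

# fixed random phases define the encoding; raising to the power x
# rotates each component proportionally to x
base_phases = rng.uniform(-np.pi, np.pi, size=D)

def fpe(x):
    # fractional power encoding of a real-valued scalar x
    return np.exp(1j * base_phases * x)

def sim(a, b):
    return float(np.real(np.vdot(a, b)) / D)

# similarity decays smoothly with distance in x (a sinc-like kernel),
# which makes these encodings usable as fixed-size features for
# regression over a continuous input space
assert sim(fpe(1.0), fpe(1.0)) > 0.99
assert sim(fpe(1.0), fpe(1.05)) > 0.9
assert abs(sim(fpe(1.0), fpe(9.0))) < 0.2
```

Because the feature dimension D is fixed, models built on such encodings keep constant memory and time per update regardless of how many samples have been collected, which is the complexity advantage the abstract claims over GP-based exploration.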

 Presented slides: Download

Orthogonal Matrices for MBAT Vector Symbolic Architectures, and a "Soft" VSA Representation for JSON. May 9, 2022. 20:00 GMT

Stephen I. Gallant, Textician, USA

Abstract:  Vector Symbolic Architectures (VSAs) give a way to represent a complex object as a single fixed-length vector, so that similar objects have similar vector representations. These vector representations then become easy to use for machine learning or nearest-neighbor search. We review a previously proposed VSA method, MBAT (Matrix Binding of Additive Terms), which uses multiplication by random matrices for binding related terms. However, multiplying by such matrices introduces instabilities that can harm performance. Making the random matrices orthogonal provably fixes this problem. With respect to larger-scale applications, we show how to apply MBAT vector representations to any data expressed in JSON. JSON is used in numerous programming languages to express complex data, but its native format appears highly unsuited for machine learning. Expressing JSON as a fixed-length vector makes it readily usable for machine learning and nearest-neighbor search. Creating such JSON vectors also shows that a VSA needs to employ binding operations that are non-commutative.

VSAs are now ready to try with full-scale practical applications, including healthcare, pharmaceuticals, and genomics.
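The two properties highlighted above, norm preservation under orthogonal binding and non-commutativity, can be checked in a few lines. This is a sketch using a QR-derived random orthogonal matrix with illustrative dimensions, not the MBAT implementation itself:

```python
import numpy as np

rng = np.random.default_rng(4)
D = 512

# random orthogonal binding matrix via QR decomposition of a Gaussian matrix
# (an arbitrary random matrix would stretch or shrink vectors instead)
M, _ = np.linalg.qr(rng.normal(size=(D, D)))

a = rng.normal(size=D)
a /= np.linalg.norm(a)

# orthogonal binding is norm-preserving, even when applied many times,
# which is the stability property the abstract refers to
v = a.copy()
for _ in range(50):
    v = M @ v
assert abs(np.linalg.norm(v) - 1.0) < 1e-8

# matrix binding is non-commutative: M @ (M2 @ a) differs from M2 @ (M @ a)
M2, _ = np.linalg.qr(rng.normal(size=(D, D)))
left = M @ (M2 @ a)
right = M2 @ (M @ a)
assert np.linalg.norm(left - right) > 0.5

# unbinding uses the transpose, since an orthogonal matrix's inverse is its transpose
assert np.allclose(M.T @ (M @ a), a)
```

Non-commutativity is what lets nested JSON structure (arrays, object nesting) be encoded without different orderings collapsing to the same vector.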

 

Keywords: MBAT (Matrix Binding of Additive Terms), VSA (Vector Symbolic Architecture), HDC (Hyperdimensional Computing), Distributed Representations, Binding, Orthogonal Matrices, Recurrent Connections, Machine Learning, Search, JSON, VSA Applications

 Presented slides: Download

Understanding Hyperdimensional Computing for Parallel Single-Pass Learning. May 23, 2022. 20:00 GMT

Tao Yu, Cornell University, USA

Abstract:  Hyperdimensional computing (HDC) is an emerging learning paradigm that computes with high dimensional binary vectors. It is attractive because of its energy efficiency and low latency, especially on emerging hardware -- but HDC suffers from low model accuracy, with little theoretical understanding of what limits its performance. We propose a new theoretical analysis of the limits of HDC via a consideration of what similarity matrices can be "expressed" by binary vectors, and we show how the limits of HDC can be approached using random Fourier features (RFF). We extend our analysis to the more general class of vector symbolic architectures (VSA), which compute with high-dimensional vectors (hypervectors) that are not necessarily binary. We propose a new class of VSAs, finite group VSAs, which surpass the limits of HDC. Using representation theory, we characterize which similarity matrices can be "expressed" by finite group VSA hypervectors, and we show how these VSAs can be constructed. Experimental results show that our RFF method and group VSA can both outperform the state-of-the-art HDC model by up to 7.6% while maintaining hardware efficiency.
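The random Fourier feature construction mentioned above is a standard technique: high-dimensional cosine features whose inner products approximate an RBF kernel. The parameters below are illustrative, and the sketch shows only the kernel approximation, not the authors' full method:

```python
import numpy as np

rng = np.random.default_rng(5)
d, D = 4, 16384  # input dimension, feature dimension
gamma = 0.5

# random Fourier features approximating the RBF kernel
# k(x, y) = exp(-gamma * ||x - y||^2): frequencies drawn from the
# kernel's spectral density, plus random phase offsets
W = rng.normal(scale=np.sqrt(2 * gamma), size=(D, d))
b = rng.uniform(0, 2 * np.pi, size=D)

def rff(x):
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

x = rng.normal(size=d)
y = x + 0.1 * rng.normal(size=d)

# the feature inner product approximates the exact kernel value
approx = float(rff(x) @ rff(y))
exact = float(np.exp(-gamma * np.sum((x - y) ** 2)))
assert abs(approx - exact) < 0.05
```

The connection to the abstract is that such real-valued features realize similarity structures (kernel matrices) that binary hypervectors provably cannot express, which is how the RFF method approaches the limits of HDC.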


 Presented slides: Download

Bridging neural and symbolic representation for simultaneous localization and mapping. June 20, 2022. 20:15 GMT

Nicole Sandra-Yaffa Dumont, University of Waterloo, Canada 

Abstract:  To navigate in new environments, an animal must be able to keep track of its own position while simultaneously creating and updating an internal map of the environment, a problem known as simultaneous localization and mapping (SLAM). This requires integrating information from different domains, namely self-motion cues and sensory information. Recently, Spatial Semantic Pointers (SSPs) have been proposed as an extension to vector symbolic architectures for representing continuous variables. A key feature of this approach is that these spatial representations can be bound with other features, both continuous and discrete, to create compressed structures containing information from multiple domains (e.g. spatial, temporal, visual, conceptual). In this work, SSPs are used as the basis of a spiking neural network model of SLAM. A recurrent neural network implements the dynamics of SSPs for keeping track of self-position, which is used for online learning of an associative memory between landmarks and their locations, i.e. an environment map. We show that environment maps can be accurately learned and can be used to greatly improve self-position estimation. Furthermore, grid cells, boundary cells, and object vector cells are included in this model.
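The binding of landmark symbols to SSP-encoded locations can be sketched as follows. This is a generic complex-phasor illustration with hypothetical landmark symbols ("tree", "rock") and made-up coordinates, not the spiking Nengo model presented in the talk:

```python
import numpy as np

rng = np.random.default_rng(6)
D = 2048

# SSP-style encoding (sketch): fractional power encoding per axis,
# combined so that one phasor vector represents a 2D position
px = rng.uniform(-np.pi, np.pi, size=D)
py = rng.uniform(-np.pi, np.pi, size=D)

def ssp(x, y):
    return np.exp(1j * (px * x + py * y))

def sim(a, b):
    return float(np.real(np.vdot(a, b)) / D)

# hypothetical landmark symbols bound to their locations and bundled into a map
tree = np.exp(1j * rng.uniform(-np.pi, np.pi, size=D))
rock = np.exp(1j * rng.uniform(-np.pi, np.pi, size=D))
env_map = tree * ssp(1.0, 2.0) + rock * ssp(-3.0, 0.5)

# unbinding the map with a landmark symbol recovers its location encoding
query = np.conj(tree) * env_map
assert sim(query, ssp(1.0, 2.0)) > 0.4
assert abs(sim(query, ssp(-3.0, 0.5))) < 0.2
```

In the model described above, this kind of associative map is learned online in spiking neurons rather than constructed directly, but the bind-and-bundle algebra is the same.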


 Presented slides: Download