Online Speakers' Corner on Vector Symbolic Architectures and Hyperdimensional Computing

CHECK THE UPCOMING EVENTS AT THE END OF THIS PAGE!

If you want to give credit to this webinar series, use the following entry when citing (BibTeX).

Welcome to the summer session of the online workshop on VSA and hyperdimensional computing. The next webinar will start on December 13, 2021, at 20:00 GMT.

USE THIS LINK TO ACCESS THE WEBINAR:
https://ltu-se.zoom.us/j/65564790287

Robust high-dimensional memory-augmented neural networks. May 17, 2021. 20:00GMT

Geethan Karunaratne, IBM, Zurich, Switzerland

Abstract:  Traditional neural networks require enormous amounts of data to build their complex mappings during a slow training procedure that hinders their abilities for relearning and adapting to new data. Memory-augmented neural networks enhance neural networks with an explicit memory to overcome these issues. Access to this explicit memory, however, occurs via soft read and write operations involving every individual memory entry, resulting in a bottleneck when implemented using the conventional von Neumann computer architecture. To overcome this bottleneck, we propose a robust architecture that employs a computational memory unit as the explicit memory performing analog in-memory computation on high-dimensional (HD) vectors, while closely matching 32-bit software-equivalent accuracy. This is achieved by a content-based attention mechanism that represents unrelated items in the computational memory with uncorrelated HD vectors, whose real-valued components can be readily approximated by binary, or bipolar components. Experimental results demonstrate the efficacy of our approach on few-shot image classification tasks on the Omniglot dataset using more than 256,000 phase-change memory devices. Our approach effectively merges the richness of deep neural network representations with HD computing that paves the way for robust vector-symbolic manipulations applicable in reasoning, fusion, and compression.
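
To make the content-based attention idea above concrete, here is a minimal numpy sketch (the names, dimensions, and softmax sharpening are illustrative assumptions, not the paper's exact design): support items are stored as bipolar HD key vectors, and a query is compared against all keys at once, which is the operation the computational memory performs in analog.

```python
import numpy as np

# Illustrative sketch only; keys, values, and sharpening are stand-ins.
d = 512          # HD vector dimensionality
n_classes = 5    # number of stored support items (e.g., few-shot classes)

rng = np.random.default_rng(0)

# Key memory: bipolar {-1, +1} HD vectors, one per stored item.
# In the talk's setting these would come from a neural network encoder;
# here they are random placeholders.
K = rng.choice([-1.0, 1.0], size=(n_classes, d))
V = np.eye(n_classes)            # value memory: one-hot labels (illustrative)

def read(query, sharpen=10.0):
    """Content-based read: cosine similarity -> softmax attention -> weighted value."""
    sims = K @ query / (np.linalg.norm(K, axis=1) * np.linalg.norm(query) + 1e-9)
    attn = np.exp(sharpen * sims)
    attn /= attn.sum()
    return attn @ V

# A noisy query close to key 2 is still read out correctly,
# illustrating the robustness of quasi-orthogonal HD keys.
query = K[2] + rng.normal(scale=0.5, size=d)
print(read(query).round(2))
```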

Presented slides: Download

Neural Representation of Continuous Space using Fractional Binding. May 31, 2021. 20:00GMT

Brent James Komer, University of Waterloo, Canada

Abstract:  In this talk, we present a biologically inspired method of encoding continuous space within a population of neurons. This method provides an extension to the Semantic Pointer Architecture (SPA) to encompass Semantic Pointers with real-valued spatial content in addition to symbol-like representations. We demonstrate how these Spatial Semantic Pointers (SSPs) can be used to generate cognitive maps containing objects at various locations. A series of operations are defined that can retrieve objects or locations from the encoded map as well as manipulate the contents of the memory.

We explore the topology of the SSP vector space and show how it preserves metric information while compressing all coordinates to unit-length vectors. This allows a limitless spatial extent to be represented in a finite region. Neurons encoding space represented in this manner can naturally exhibit firing fields similar to those found in the brain, such as place cells, grid cells, and band cells.

In addition to constructing biologically plausible models of spatial cognition, SSPs are applied to the domain of machine learning. We demonstrate how replacing traditional encoding mechanisms with SSPs can improve performance on both spatial and non-spatial tasks.
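
As a rough illustration of fractional binding (a numpy sketch, not the Nengo/SPA implementation; the base vectors and dimensionality are arbitrary choices): a scalar coordinate x is encoded as the x-th fractional circular-convolution power of a fixed unitary base vector, a 2-D point is the binding of the two axis encodings, and the dot product between SSPs then acts as a similarity kernel that falls off with distance.

```python
import numpy as np

# Illustrative sketch of fractional binding; not the implementation from the talk.
d = 256
rng = np.random.default_rng(1)

def unitary_vector(d, rng):
    """Random unitary base vector: all Fourier magnitudes equal 1,
    so fractional powers neither grow nor shrink."""
    assert d % 2 == 0
    phases = rng.uniform(-np.pi, np.pi, size=d // 2 - 1)
    spectrum = np.concatenate(([1.0], np.exp(1j * phases),
                               [1.0], np.exp(-1j * phases[::-1])))
    return np.real(np.fft.ifft(spectrum))

def power(base, exponent):
    """Fractional circular-convolution power, computed in the Fourier domain."""
    return np.real(np.fft.ifft(np.fft.fft(base) ** exponent))

def bind(a, b):
    """Circular convolution: the binding operator of the SPA."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

X_axis, Y_axis = unitary_vector(d, rng), unitary_vector(d, rng)

def ssp(x, y):
    """Spatial Semantic Pointer for the 2-D point (x, y)."""
    return bind(power(X_axis, x), power(Y_axis, y))

# Similarity to a reference point decays smoothly with distance (sinc-like kernel).
p = ssp(1.0, 2.0)
for dx in (0.0, 0.25, 0.5, 1.0):
    print(dx, round(float(p @ ssp(1.0 + dx, 2.0)), 3))
```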

Presented slides: Download

Learning and compressing Tensor Product Representations for large-scale AI problems. June 14, 2021. 20:00GMT

Paul Smolensky (1,2) and Coleman Haley (1,2,3)

(1) Microsoft Research, (2) Johns Hopkins University, and (3) University of Edinburgh

Abstract:  Recent applications developed at Microsoft Research have explored deep-learning Tensor Product Representations (TPRs) for problems in AI and NLP. After reviewing the theory of TPRs, an example application will be presented. Results will also be presented on the use of highly compressed TPRs to encode parse trees in a large-scale NLP corpus.
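
A minimal numpy sketch of the TPR operations the abstract refers to (illustrative symbols and dimensionalities; not the compressed encoder used in the talk): fillers are bound to role vectors by outer products, the bindings are summed into a single tensor, and with orthonormal roles a filler is recovered exactly by contracting the tensor with its role.

```python
import numpy as np

# Illustrative sketch of basic TPR binding/unbinding; dimensions are arbitrary.
rng = np.random.default_rng(2)
d_f, d_r = 64, 4                     # filler and role dimensionalities

fillers = {s: rng.normal(size=d_f) for s in ["A", "B", "C"]}
roles = np.linalg.qr(rng.normal(size=(d_r, d_r)))[0]   # orthonormal role vectors

# Encode the sequence (A, B, C): sum of filler (x) role outer products.
structure = sum(np.outer(fillers[s], roles[i]) for i, s in enumerate(["A", "B", "C"]))

# Decode position 1 by contracting the tensor with the role vector;
# with orthonormal roles this returns the bound filler exactly.
recovered = structure @ roles[1]
best = max(fillers, key=lambda s: fillers[s] @ recovered)
print(best)   # -> "B"
```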

Presented slides: Download 

Hybrid Neural Architecture for the Linguistic Operator in Decision Support Systems. June 28, 2021. 20:00GMT

Alexander Demidovsky, National Research University Higher School of Economics, Russia

Abstract:  Developing integrated neural-symbolic systems is a real and difficult challenge. These hybrid systems incorporate the benefits of both connectionist and symbolic approaches. Neural-symbolic systems, in particular, are characterized by robust learning and distributed neural computations, and they can also be interpreted, represented, and analyzed in a symbolic form. High interpretability is particularly critical for Decision Support Systems (DSS) that use symbolic constructs to describe the problem situation, stakeholders, and evaluation criteria, and where the reasoning process should be clear to the decision maker. These requirements increase the difficulty of designing integrated neural-symbolic DSSs. In this talk, we will discuss the underlying algorithms of a subset of Decision Support Systems, namely the aggregation of expert assessments for the purpose of selecting between alternative solutions to a given problem. Such vague and fuzzy assessments are frequently represented with the linguistic 2-tuple model or its derivatives, and the algorithms that aggregate them are frequently referred to as operators.

We will describe two approaches to developing such a DSS feature as an assessment aggregation module in a connectionist paradigm. Each approach makes extensive use of Tensor Product Representations (TPRs) for encoding and decoding symbolic structures without information loss or training. We will clarify why a solution solely based on TPRs does not meet selected functional requirements, despite theoretically being sufficient to achieve the goal. We will propose a hybrid architecture that makes use of TPRs for assessment encoding and decoding tasks while offloading aggregation logic to a learnable component built on top of Neural Turing Machines - a type of Memory-Augmented Neural Networks. We will address the advantages and disadvantages of the suggested solution, as well as potential future research directions.
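
For readers unfamiliar with the linguistic 2-tuple model mentioned above, a small sketch may help (a generic 2-tuple aggregation in the style of Herrera and Martínez, with an assumed five-term scale; it is not the DSS or the neural aggregation module discussed in the talk):

```python
# Minimal sketch of the linguistic 2-tuple model with an assumed 5-term scale;
# not the system described in the talk.
terms = ["very poor", "poor", "fair", "good", "very good"]

def to_two_tuple(beta):
    """Convert a numeric value beta in [0, len(terms)-1] to a 2-tuple (term, alpha)."""
    i = int(round(beta))
    return terms[i], beta - i          # alpha lies in [-0.5, 0.5)

def aggregate(assessments):
    """Arithmetic-mean aggregation operator over linguistic 2-tuples."""
    betas = [terms.index(t) + a for t, a in assessments]
    return to_two_tuple(sum(betas) / len(betas))

# Three experts assess one alternative; the aggregated result stays interpretable.
print(aggregate([("good", 0.0), ("fair", 0.2), ("very good", -0.1)]))
```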

Presented slides: Download

Functional representations in VSA. August 23, 2021. 20:00GMT

Peter beim Graben, Bernstein Center for Computational Neuroscience Berlin, Germany

Abstract:  Vector-symbolic architectures (VSA) provide viable techniques for the representation of complex symbolic data structures in high-dimensional embedding spaces (Gayler 2006, Kanerva 2009). In a VSA, symbols and variables are represented as filler and role vectors of some underlying linear spaces, respectively. When a symbol is assigned to a variable, the corresponding filler vector is bound to the corresponding role vector. Different filler-role bindings can be bundled together to form a data structure, such as a list, a tree, or a table. The resulting representation vectors can be recursively bound to other roles and further bundled together to yield arbitrarily complex data structures (Smolensky 1990). In order to avoid the “curse of dimensionality” induced by the binding process, VSAs employ data compression and subsequent clean-up algorithms, which restrict their memory capacity through the signal-to-noise ratio (Plate 1995). However, lossless VSAs can also be devised by making use of infinite-dimensional functional representations (beim Graben & Potthast 2009), such as in dynamic neural field architectures (beim Graben & Potthast 2014). In my presentation, I give an overview of functional representations in infinite-dimensional Banach or Hilbert spaces and their relevance for neural field architectures, which are continuum approximations of neural networks with nice mathematical properties. Specifically, I explain the Fourier representation of lists and the spherical harmonics representation of phrase structure trees of context-free grammars (beim Graben & Potthast 2009).
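
The lossless character of such functional representations can be illustrated schematically (a numpy toy with sampled Fourier modes, not beim Graben & Potthast's continuous construction): list positions are bound to orthonormal Fourier modes, so unbinding a position is an exact projection and no clean-up memory is needed.

```python
import numpy as np

# Schematic illustration of a Fourier representation of a list; all names are toy choices.
rng = np.random.default_rng(3)
n_grid = 128                      # sample points of the underlying function space
d_f = 32                          # filler (symbol) dimensionality

# Role "functions": orthonormal Fourier modes sampled on the grid.
t = np.arange(n_grid) / n_grid
def role(k):
    return np.exp(2j * np.pi * k * t) / np.sqrt(n_grid)

symbols = {s: rng.normal(size=d_f) for s in ["the", "cat", "sleeps"]}

# A list is encoded as a filler-valued function: sum_k filler_k (x) role_k(t).
encoded = sum(np.outer(symbols[s], role(k)) for k, s in enumerate(["the", "cat", "sleeps"]))

# Because the Fourier modes are orthonormal, unbinding position k is an exact projection.
recovered = encoded @ np.conj(role(1))
best = max(symbols, key=lambda s: np.real(symbols[s] @ recovered))
print(best)   # -> "cat"
```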

References

Gayler, R. W. (2006). Vector symbolic architectures are a viable alternative for Jackendoff's challenges. Behavioral and Brain Sciences, 29, 78 – 79.

Kanerva, P. (2009). Hyperdimensional computing: An introduction to computing in distributed representation with high-dimensional random vectors. Cognitive Computation, 1, 139 – 159.

Smolensky, P. (1990). Tensor product variable binding and the representation of symbolic structures in connectionist systems. Artificial Intelligence, 46, 159 – 216.

Plate, T. A. (1995). Holographic reduced representations. IEEE Transactions on Neural Networks, 6, 623 – 641.

beim Graben, P. & Potthast, R. (2009). Inverse problems in dynamic cognitive modeling. Chaos, 19, 015103.

beim Graben, P. & Potthast, R. (2014). Universal neural field computation. In: Coombes, S., beim Graben, P., Potthast, R. & Wright, J. J. (Eds.) Neural Fields: Theory and Applications, Springer, 299 – 318.


Presented slides: Download

Towards a Predictive Processing Implementation of the Common Model of Cognition. September 6, 2021. 20:00GMT

Alexander Ororbia (1) and M. Alex Kelly (2); (1) Rochester Institute of Technology, USA; (2) Bucknell University, USA, and Carleton University, Canada

Abstract:  We present a cognitive architecture that is built from powerful yet simple neural models. Specifically, we describe an implementation of the Common Model of Cognition grounded in neural generative coding and holographic associative memory. The proposed system lays the groundwork for developing agents that learn continually from diverse tasks and for modeling human performance at larger scales than is possible with existing cognitive architectures.
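
The holographic associative memory component can be illustrated generically with Plate-style holographic reduced representations (a self-contained numpy sketch with assumed dimensionality and vocabulary; not the authors' implementation): role-filler pairs are bound by circular convolution, superposed into one trace, and queried by unbinding plus clean-up.

```python
import numpy as np

# Generic HRR-style associative memory sketch; not the system from the talk.
d = 1024
rng = np.random.default_rng(4)

def cconv(a, b):
    """Circular convolution: HRR binding."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def involution(a):
    """Approximate inverse used for HRR unbinding."""
    return np.concatenate(([a[0]], a[1:][::-1]))

vec = lambda: rng.normal(scale=1 / np.sqrt(d), size=d)
items = {name: vec() for name in ["color", "red", "shape", "square"]}

# Holographic memory trace: superposition of role-filler bindings.
trace = cconv(items["color"], items["red"]) + cconv(items["shape"], items["square"])

# Query "what is the color?" -> unbind and clean up against the codebook.
noisy = cconv(trace, involution(items["color"]))
best = max(items, key=lambda name: items[name] @ noisy)
print(best)   # -> "red" (with high probability at this dimensionality)
```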

Presented slides: Download

Generalized Learning Vector Quantization for Classification in Randomized Neural Networks and Hyperdimensional Computing. September 20, 2021. 20:00GMT

Cameron Diao, Rice University, USA

Abstract:  Machine learning algorithms deployed on edge devices must meet certain resource constraints and efficiency requirements. Random Vector Functional Link (RVFL) networks are favored for such applications due to their simple design and training efficiency. We present a modified RVFL network that avoids computationally expensive matrix operations during training, thus expanding the network's range of potential applications. The modification replaces the least-squares classifier with the Generalized Learning Vector Quantization (GLVQ) classifier, which only employs simple vector and distance calculations. The GLVQ classifier can also be considered an improvement upon certain classification algorithms popularly used in the area of Hyperdimensional Computing. The proposed approach achieved state-of-the-art accuracy on a collection of datasets from the UCI Machine Learning Repository - higher than previously proposed RVFL networks. We further demonstrate that our approach still achieves high accuracy while being severely limited in training iterations (using on average only 21% of the computational cost of the least-squares classifier).
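
For reference, a minimal GLVQ classifier looks roughly as follows (a sketch with one prototype per class and a sigmoid cost; in the talk's setting it would be trained on RVFL hidden-layer activations, which are replaced here by raw feature vectors, and all hyperparameters are assumptions). Only prototype vectors, distances, and elementwise updates are involved, which is what makes the classifier attractive for edge devices.

```python
import numpy as np

def glvq_train(X, y, n_classes, lr=0.1, epochs=30, seed=0):
    """Minimal GLVQ sketch: one prototype per class, sigmoid cost on the relative distance."""
    rng = np.random.default_rng(seed)
    # Initialize each prototype at its class mean plus a little noise.
    W = np.stack([X[y == c].mean(axis=0) for c in range(n_classes)])
    W += 0.01 * rng.normal(size=W.shape)
    for _ in range(epochs):
        for x, c in zip(X, y):
            d = ((W - x) ** 2).sum(axis=1)           # squared distances to all prototypes
            dp = d[c]                                 # distance to the correct prototype
            wrong = np.argmin(np.where(np.arange(n_classes) == c, np.inf, d))
            dm = d[wrong]                             # distance to the closest wrong prototype
            mu = (dp - dm) / (dp + dm + 1e-12)        # relative distance in [-1, 1]
            g = np.exp(-mu) / (1 + np.exp(-mu)) ** 2  # derivative of the sigmoid cost
            denom = (dp + dm + 1e-12) ** 2
            W[c]     += lr * g * (dm / denom) * (x - W[c])      # pull correct prototype in
            W[wrong] -= lr * g * (dp / denom) * (x - W[wrong])  # push wrong prototype away
    return W

def glvq_predict(W, X):
    return np.argmin(((X[:, None, :] - W[None]) ** 2).sum(axis=2), axis=1)

# Toy usage: two well-separated Gaussian blobs (illustrative data only).
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(0.0, 1.0, (50, 2)), rng.normal(4.0, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
prototypes = glvq_train(X, y, n_classes=2)
print("accuracy:", (glvq_predict(prototypes, X) == y).mean())
```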


Presented slides: Download

Near-channel classifier: symbiotic communication and classification in high-dimensional space. October 4, 2021. 20:00GMT

Michael Hersche. ETH Zurich, Zurich, Switzerland.

Abstract:  In this talk, we propose to combine channel coding, source coding, and ML classification into a single unified layer by exploiting multifaceted hyperdimensional representations. First, we propose methods to improve hyperdimensional modulation (HDM) in two ways: 1) reducing the complexity of encoding and decoding operations by generating, manipulating, and transmitting bipolar or integer vectors instead of complex vectors; 2) increasing the SNR gain by 0.2 dB using a new soft-feedback decoder, which can also increase the additive superposition capacity of HD vectors by up to 1.7x in noise-free cases. Second, we propose to combine the data encoding/decoding aspects of communication with classification into a single framework by relying on multifaceted HD representations. This leads to a near-channel classification (NCC) approach that avoids transformations between different representations and the overhead of multiple layers of encoding/decoding, hence reducing the latency and complexity of a wireless smart distributed system while providing robustness against noise and interference from other nodes.
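
The superposition idea underlying HDM can be conveyed with a toy example (bipolar vectors, a simple matched-filter decoder, and an AWGN channel are all simplifying assumptions; the actual HDM encoder/decoder and the soft-feedback scheme are more elaborate):

```python
import numpy as np

# Toy superposition-based transmission; not the HDM scheme from the talk.
d, n_symbols, n_slots = 1024, 16, 4
rng = np.random.default_rng(5)

codebook = rng.choice([-1.0, 1.0], size=(n_symbols, d))       # bipolar codewords
slot_keys = rng.choice([-1.0, 1.0], size=(n_slots, d))        # keys marking positions

def encode(message):
    """Superpose position-bound bipolar codewords into one transmitted vector."""
    return sum(slot_keys[i] * codebook[s] for i, s in enumerate(message))

def decode(received):
    """Unbind each position and pick the closest codeword (matched filter)."""
    return [int(np.argmax(codebook @ (slot_keys[i] * received))) for i in range(n_slots)]

message = [3, 7, 0, 12]
tx = encode(message)
rx = tx + rng.normal(scale=1.0, size=d)        # additive white Gaussian noise channel
print(decode(rx), "==", message)
```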

Presented slides: Download

Grounding of symbols in the dynamic neural field architectures. October 18, 2021. 20:00GMT

Yulia Sandamirskaya. Intel Labs. Germany

Presented slides: Download

Computing on Functions Using Randomized Vector Representations. November 1, 2021. 20:00GMT

Fritz Sommer. University of California Berkeley. USA 

Joint work with E. Paxon Frady, Denis Kleyko, Christopher Kymn, and Bruno A. Olshausen

Abstract:  Vector space models for symbolic processing that encode symbols by random vectors have been proposed in cognitive science and connectionist communities under the names Vector Symbolic Architecture (VSA), and, synonymously, Hyperdimensional (HD) computing. Here we generalize VSAs to function spaces by mapping continuous-valued data into a vector space such that the inner product between the representations of any two data points represents a similarity kernel. By analogy to VSA, we call this new function encoding and computing framework Vector Function Architecture (VFA). In VFAs, vectors can represent individual data points as well as elements of a function space (a reproducing kernel Hilbert space).  The algebraic vector operations, inherited from VSA, correspond to well-defined operations in function space.  Furthermore, we study a previously proposed method for encoding continuous data, fractional power encoding (FPE), which uses exponentiation of a random base vector to produce randomized representations of data points and fulfills the kernel properties for inducing a VFA. We show that the distribution from which elements of the base vector are sampled determines the shape of the FPE kernel, which in turn induces a VFA for computing with band-limited functions. In particular, VFAs provide an algebraic framework for implementing large-scale kernel machines with random features, extending Rahimi & Recht, 2007.  Finally, we demonstrate several applications of VFA models to problems in image recognition, density estimation and nonlinear regression.  Our analyses and results suggest that VFAs constitute a powerful new framework for representing and manipulating functions in distributed neural systems, with myriad applications in artificial intelligence.
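
The claim that the sampling distribution of the base vector determines the kernel can be checked numerically in a few lines (an FHRR-style sketch with assumed dimensionality): phases drawn uniformly from (-pi, pi) give an empirical FPE kernel close to a sinc function (band-limited), while Gaussian phases give a Gaussian-shaped kernel.

```python
import numpy as np

# Empirical check of FPE kernel shapes; dimensionality and distributions are illustrative.
d = 10_000
rng = np.random.default_rng(6)

def fpe_kernel(phases, delta):
    """Empirical FPE similarity K(x, x + delta) for base phases `phases`.
    With z(x) = exp(i * x * phases), Re<z(x), z(x + delta)>/d = mean(cos(delta * phases))."""
    return np.cos(delta * phases).mean()

deltas = np.array([0.0, 0.5, 1.0, 2.0])

uniform_phases = rng.uniform(-np.pi, np.pi, size=d)      # -> sinc-shaped kernel
gauss_phases = rng.normal(scale=1.0, size=d)             # -> Gaussian-shaped kernel

print("uniform    :", [round(fpe_kernel(uniform_phases, t), 3) for t in deltas])
print("sinc       :", [round(float(np.sinc(t)), 3) for t in deltas])
print("gaussian   :", [round(fpe_kernel(gauss_phases, t), 3) for t in deltas])
print("exp(-t^2/2):", [round(float(np.exp(-t ** 2 / 2)), 3) for t in deltas])
```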

Presented slides: Download

Hyperdimensional Computing in Distributed Randomized Neural Networks. November 15, 2021. 20:00GMT

Antonello Rosato. Sapienza University of Rome. Italy

Abstract:  In the supervised learning domain, randomized algorithms are receiving more and more attention due to their elementary approach and intrinsically simpler and lighter training. In this webinar, we will explore the use of randomized neural networks in the distributed classification domain. A novel compression approach originating from the hyperdimensional computing framework is applied to the sharing of local classifiers, taking into account the cost of information exchange between agents. We will demonstrate the satisfactory accuracy of the algorithm with experimental results comparing it to the local classifiers themselves and to a centralized benchmark. We will also analyze in depth the effect of this compression, measuring it against conventional algorithms, dimensionality reduction, and quantization techniques.
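
One generic way to compress a local classifier with hyperdimensional operations is sketched below purely for illustration (random bipolar keys and an RVFL-style readout matrix are assumptions; this is not necessarily the exact scheme presented in the talk): each class column is bound to a key and all columns are superposed into a single vector, trading reconstruction crosstalk for a smaller message.

```python
import numpy as np

# Generic key-value compression sketch; not necessarily the talk's exact scheme.
rng = np.random.default_rng(7)
n_hidden, n_classes = 200, 5

# A local agent's readout matrix (e.g., of an RVFL classifier); stand-in values.
W = rng.normal(size=(n_hidden, n_classes))

# Compress: bind each class column to a random bipolar key and superpose.
keys = rng.choice([-1.0, 1.0], size=(n_classes, n_hidden))
compressed = sum(keys[c] * W[:, c] for c in range(n_classes))   # one n_hidden vector

# Approximate reconstruction on the receiving agent: unbind with the same keys.
W_hat = np.stack([keys[c] * compressed for c in range(n_classes)], axis=1)

# The recovered columns correlate with the originals but carry crosstalk noise,
# which is the accuracy/communication trade-off the talk analyzes.
for c in range(n_classes):
    corr = np.corrcoef(W[:, c], W_hat[:, c])[0, 1]
    print(f"class {c}: correlation {corr:.2f}")
```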

Presented slides: Download

Reiterating on main postulates of VSA with Hyperseed: What is English of Sweden. November 29, 2021. 20:00GMT

Evgeny Osipov. Luleå University of Technology. Sweden

Joint work with Sachin Kahawala, Dilantha Haputhanthri, Thimal Kempitiya, Daswin De Silva, Damminda Alahakoon, Denis Kleyko.

Abstract:  Recently, we proposed a holistic VSA pipeline for unsupervised learning called Hyperseed (https://arxiv.org/abs/2110.08343). In Hyperseed, learning utilizes the similarity preservation property of binding. The inputs to the algorithm are VSA constructs of a particular phenomenon (e.g., a set of features, sentences, etc.). In a novel way, the algorithm learns holistic representations of higher-level entities (e.g., types of flowers, languages). The distinctive feature of Hyperseed is learning from few examples. The algorithm uses HRR and FHRR representations, which makes it suitable for implementation on spiking neural architectures. In the talk, we present the algorithm in the context of the main postulates of VSA, highlighting their dual connectionist-symbolic nature with a holistic example involving neural representation at one end and analogical reasoning at the other.
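
The similarity preservation property of binding that Hyperseed exploits is easy to verify numerically (an FHRR toy example only; it does not implement the Hyperseed algorithm): binding two vectors with the same third vector leaves their mutual similarity unchanged.

```python
import numpy as np

# Toy check of similarity preservation under FHRR binding; not the Hyperseed algorithm.
d = 2048
rng = np.random.default_rng(8)

def fhrr(d):
    """Random FHRR vector: i.i.d. unit-magnitude complex phasors."""
    return np.exp(1j * rng.uniform(-np.pi, np.pi, size=d))

def sim(a, b):
    """Normalized similarity between two FHRR vectors."""
    return float(np.real(np.vdot(a, b)) / d)

a = fhrr(d)
noise = fhrr(d)
b = np.where(rng.random(d) < 0.3, noise, a)    # b is a degraded copy of a
s = fhrr(d)                                    # a common "seed" vector

# Binding (elementwise multiplication in FHRR) with the same vector s
# preserves the similarity structure of the bound inputs.
print(round(sim(a, b), 3), "~", round(sim(a * s, b * s), 3))
```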

Presented slides: Download

An Overview of a Comprehensive Survey of Hyperdimensional Computing/Vector Symbolic Architectures. December 13, 2021. 20:00GMT

Denis Kleyko. University of California Berkeley. USA 

Joint work with Dmitri A. Rachkovskij, Evgeny Osipov, and Abbas Rahimi

Abstract:  Recently, we made available preprints of a two-part comprehensive survey on Hyperdimensional Computing/Vector Symbolic Architectures (HDC/VSA). HDC/VSA is a highly interdisciplinary area with connections to computer science, electrical engineering, artificial intelligence, mathematics, and cognitive science. This fact makes it challenging to create a thorough overview of the area. However, with a surge of new researchers joining the area in recent years, a comprehensive survey has become a pressing need. Part I therefore covers, among other aspects, the known computational models of HDC/VSA and the transformations of various input data types into high-dimensional distributed representations. Part II is devoted to applications, cognitive computing and architectures, as well as directions for future work. In this talk, I will provide an overview of the main topics covered by the survey and look forward to receiving feedback on aspects of the survey that can be expanded and/or elaborated.

Links to the preprints:  

A Survey on Hyperdimensional Computing aka Vector Symbolic Architectures, Part I: Models and Data Transformations

A Survey on Hyperdimensional Computing aka Vector Symbolic Architectures, Part II: Applications, Cognitive Models, and Challenges

Presented slides: Download