Online Speakers' Corner on Vector Symbolic Architectures and Hyperdimensional Computing

CHECK THE UPCOMING EVENTS AT THE END OF THIS PAGE!

If you want to give credit to this webinar series, use the following entry when citing (BibTeX).

Welcome to the winter session of the online workshop on VSA and hyperdimensional computing. The session will start on November 29 at 20:00 GMT.

USE THIS LINK TO ACCESS THE WEBINAR:
https://ltu-se.zoom.us/j/65564790287


A Neural-Network-Like Mechanism for Learning with Hyperdimensional Vectors. November 16th, 2020. 20:00 GMT

Peter Sutor, University of Maryland, USA

Abstract: Hyperdimensional Vectors form the basis of Hyperdimensional Computing, enabling many useful properties for computational frameworks. Some of these properties include the ability to map data - even entire data structures - into long binary vectors. Interestingly, such mappings can then serve as a "universal currency" for any type of information, which can be passed to other learning mechanisms. A natural question to ask next is: what can be further achieved solely with hyperdimensional vectors and the properties of hyperdimensional computing? Perhaps an entire end-to-end learning system can be composed entirely of hyperdimensional vectors. In the proposed paper, we explore the use of hyperdimensional vectors in a "neural-network-like" fashion for data-driven, classification-based learning, where a neuron is actually a structured hyperdimensional vector and the connections between neurons are transformations achieved by hyperdimensional computing. In theory, any data that can be effectively represented by hyperdimensional vectors can be fed into this input-output pipeline. (Read an extended abstract)
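As a flavour of what such a pipeline involves, here is a minimal sketch (an illustration added here, not the speaker's exact method): feature/value codes are bound with XOR, bundled by majority vote into class prototypes, and a query is classified by Hamming similarity.

import numpy as np

D = 10_000                          # hypervector dimensionality
rng = np.random.default_rng(0)

def random_hv():
    return rng.integers(0, 2, D, dtype=np.uint8)           # dense binary hypervector

def bind(a, b):
    return a ^ b                                            # binding via XOR

def bundle(hvs):
    return (np.sum(hvs, axis=0) * 2 > len(hvs)).astype(np.uint8)  # elementwise majority

def similarity(a, b):
    return 1.0 - np.mean(a ^ b)                             # 1 - normalized Hamming distance

# Encode a sample (a list of feature values) as a bundle of bound
# (feature-id, feature-value) pairs.
feature_ids = [random_hv() for _ in range(4)]
value_codes = {v: random_hv() for v in range(10)}

def encode(sample):
    return bundle([bind(f, value_codes[v]) for f, v in zip(feature_ids, sample)])

# "Train" class prototypes by bundling encoded samples, then classify a query
# by comparing it with each prototype.
class_a = bundle([encode([1, 2, 3, 4]), encode([1, 2, 3, 5]), encode([1, 2, 4, 4])])
class_b = bundle([encode([9, 8, 7, 6]), encode([9, 8, 7, 5]), encode([9, 8, 6, 6])])
query = encode([1, 2, 3, 6])
print("sim to A:", similarity(query, class_a), "sim to B:", similarity(query, class_b))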

Presented slides: Download

Comparison of Vector Symbolic Architectures. November 16th, 2020. 20:00 GMT

Kenny Schlegel, TU Chemnitz, Germany


Abstract: Vector Symbolic Architectures (VSAs) combine a high-dimensional vector space with a set of carefully designed operators in order to perform symbolic computations with large numerical vectors. Over the past years, VSAs have been applied to a broad range of tasks and several VSA implementations have been proposed. The available implementations differ in the underlying vector space (e.g., binary vectors or complex-valued vectors) and the particular implementations of the required VSA operators - with important ramifications for the properties of these architectures. For example, not every VSA is equally well suited to every task; in some cases an architecture is entirely incompatible with a task. We provide an overview of several available VSA implementations and discuss their commonalities and differences in the underlying vector space, bundling, and binding/unbinding operations. A main part is the experimental comparison of the available implementations on different tasks (including language recognition and visual place recognition). In addition, we demonstrate and explain our implementation based on a MATLAB toolbox.
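For readers unfamiliar with how these implementations differ, the following small sketch (added here for illustration; it is not the MATLAB toolbox from the talk) contrasts binding and unbinding in two common flavours: Binary Spatter Codes (XOR on binary vectors) and the Fourier Holographic Reduced Representation (elementwise multiplication of complex phasors, unbound via the conjugate).

import numpy as np

D = 10_000
rng = np.random.default_rng(1)

# --- Binary Spatter Codes (BSC) ---
a_b = rng.integers(0, 2, D, dtype=np.uint8)
b_b = rng.integers(0, 2, D, dtype=np.uint8)
bound_b = a_b ^ b_b                   # binding = XOR
recovered_b = bound_b ^ a_b           # XOR is its own inverse
assert np.array_equal(recovered_b, b_b)

# --- Fourier Holographic Reduced Representation (FHRR) ---
a_c = np.exp(1j * rng.uniform(0, 2 * np.pi, D))   # unit-magnitude phasors
b_c = np.exp(1j * rng.uniform(0, 2 * np.pi, D))
bound_c = a_c * b_c                   # binding = elementwise multiplication
recovered_c = bound_c * np.conj(a_c)  # unbinding = multiply by the conjugate
print(np.allclose(recovered_c, b_c))  # exact recovery for unit phasors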

Presented slides: Download

HDM: Hyper-Dimensional Modulation for Reliable Wireless Communication. November 30th, 2020, 17:00 GMT

Hun-Seok Kim, University of Michigan, USA

Abstract: This talk introduces hyper-dimensional modulation (HDM), a new class of practical modulation schemes for robust communication among low-power, low-complexity devices. Unlike conventional orthogonal modulations, HDM conveys numerous information bits per symbol by combining hyper-dimensional vectors that are not strictly orthogonal to each other. Information bits are spread across many elements of the hyper-dimensional vector, so HDM is tolerant of element-wise failures in high-noise channels. Evaluation results confirm that uncoded 256-dimensional HDM exhibits a bit error rate (BER) comparable to that of low-density parity check (LDPC) and Polar codes, while HDM demodulation complexity is lower than that of LDPC and Polar decoders for the same block length of 256. HDM provides graceful tradeoffs between data rate and signal-to-noise ratio for robust short-message communications among power- and complexity-constrained devices.
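As a rough illustration of the general idea (a toy sketch, not the published HDM scheme), the snippet below spreads information bits across a 256-dimensional symbol by superposing random bipolar codes and recovers them by correlation after additive channel noise.

import numpy as np

D, n_bits = 256, 16
rng = np.random.default_rng(2)
codes = rng.choice([-1.0, 1.0], size=(n_bits, D))        # codebook shared by sender and receiver

bits = rng.integers(0, 2, n_bits)
symbol = ((2 * bits - 1)[:, None] * codes).sum(axis=0)   # superposition of bit-modulated codes

noisy = symbol + rng.normal(0, 2.0, D)                   # AWGN channel
decoded = (codes @ noisy > 0).astype(int)                # correlate with each code and threshold
print("bit errors:", int(np.sum(decoded != bits)))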

Presented slides: Download

Hyperdimensional Computing for Efficient and Robust Learning. December 14th, 2020, 20:00 GMT

Mohsen Imani, UC Irvine, USA

Abstract: Modern computing systems are plagued with significant issues in efficiently performing learning tasks. In this talk, I will present a new brain-inspired computing system that supports various learning tasks while offering significantly higher computational efficiency and robustness than existing platforms. Our platform utilizes HyperDimensional (HD) computing, an alternative method of computation that implements principles of brain functionality: (i) fast learning, (ii) robustness to noise/error, and (iii) intertwined memory and logic. These features make HD computing a promising solution for today’s embedded devices with limited resources, as well as for future computing systems in deeply nanoscaled technologies that suffer from high noise and variability. To leverage the memory-centric nature of HD computing, I exploit emerging technologies to enable processing in memory, which supports highly parallel computation and reduces data movement.

Presented slides: Download

Compressing Many Subject-specific Brain-Computer Interface Models into One Model by Hyperdimensional Superposition. December 14th, 2020, 20:00 GMT

Michael Hersche, ETH Zurich, Switzerland

Abstract: In this talk, we present a new method to reduce the overall storage requirements of subject-specific deep neural network models in brain-computer interfaces by superposition in hyperdimensional space, which yields a single yet personalized model. Our method makes use of the unexploited capacity of trained models by orthogonalizing parameters in a hyperdimensional space, followed by iterative retraining to compensate for the noisy decomposition. This method can be applied to various layers of deep inference models. We show that our method exploits unutilized capacity for compression and surpasses the accuracy of two state-of-the-art networks: it compresses the smallest EEGNet by 1.9x and the relatively larger Shallow ConvNet by 2.95x at 2.4% and 1.4% higher accuracy, respectively.
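The following conceptual sketch (added here for illustration; the actual method also retrains iteratively to compensate for the decomposition noise) shows how several parameter vectors, each bound with a random bipolar key, can be superposed into a single vector and then approximately recovered by unbinding with the matching key.

import numpy as np

n_models, n_params = 8, 50_000
rng = np.random.default_rng(3)

weights = rng.normal(0, 1, (n_models, n_params))           # per-subject parameter vectors
keys = rng.choice([-1.0, 1.0], size=(n_models, n_params))  # one random key per subject

combined = np.sum(keys * weights, axis=0)                   # single superposed parameter vector

subject = 2
recovered = keys[subject] * combined                        # unbind with the subject's key
corr = np.corrcoef(recovered, weights[subject])[0, 1]
print(f"correlation with original weights: {corr:.3f}")     # roughly 1/sqrt(n_models) ~ 0.35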

Presented slides: Download

Vector Symbolic Architectures for Automotive Scene Representation and Downstream Applications. January 11th, 2021, 17:00 GMT

Florian Mirus, BMW Group and Technical University of Munich, Germany

Abstract: In this talk, we present a first step towards a cognitive environment model for automotive applications using distributed representations. We investigate the use of VSAs for knowledge representation and reasoning in an automotive context. This approach to information encoding is rather generic and can be applied to a variety of tasks with little modification to the representation itself.

Presented slides: Download

Sequence structure in the human brain: Vector symbolic architectures as a neurocomputational lingua franca. January 25th, 2021, 20:00 GMT

Ryan Calmus, Newcastle University, United Kingdom

Abstract: Understanding how the brain segregates and binds complex information distributed in time is a challenging endeavour for the neuroscientific community, requiring computationally and neurobiologically informed approaches to solve. Language is a salient example of the complexity of the binding problem, where hierarchically organized dependencies (for example, nested dependencies between words and phrases in sentences) feature prominently. However, the problem is unique neither to language nor to humans, since binding in the time domain is relevant for auditory cognition more generally and for executing complex action sequences. To understand the contributions of the diverse regions involved in this process, it is increasingly important that we generate specific, testable, and mutually consistent hypotheses. Vector symbolic architectures are ideally placed to serve as a common language for neurocomputational hypothesis generation and triangulation of evidence across domains. Here we provide evidence for a vector symbolic account of the mechanisms supporting sequence representation in the human brain.

Presented slides: Download

Geometry of high-dimensional data and correction of AI errors. February 8th, 2021, 20:00 GMT

Alexander Gorban, University of Leicester, United Kingdom

Abstract:  

All artificial intelligence (AI) systems sometimes make errors and will make errors in the future. These errors must be detected and corrected immediately and locally in the networks of collaborating systems. Real-time re-training is not always viable due to the resources involved, and complete re-training could introduce new mistakes and damage existing skills. The ideal correctors should be simple, should not damage the skills of the legacy system where it is working successfully, should allow fast non-iterative learning, and should allow correction of new mistakes without destroying previous fixes.

If the essential dimension of the data is high enough, then the correction problem can be solved by combinations of simple supervised and unsupervised learning methods even if the data sets are exponentially large with respect to the dimension. This phenomenon is a particular case of the blessing of dimensionality. On the other hand, the ability to correct an AI system also opens up the possibility of an attack on it, and the high dimension induces certain vulnerabilities caused by the same stochastic separability phenomenon. The mathematical foundations of the methods of AI correction and AI attacks are given by stochastic separability theorems that belong to measure concentration theory.

In high-dimensional datasets, under broad assumptions, each point can be separated from the rest of the set by a simple and robust Fisher discriminant (i.e., it is Fisher separable). Errors or clusters of errors can therefore be separated from the rest of the data. To manage errors and analyze vulnerabilities, stochastic separation theorems should evaluate the probability that a dataset will be Fisher separable in a given dimensionality and for a given class of distributions. Explicit and optimal estimates of these separation probabilities are required, and a solution to this problem is presented.
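A small numerical illustration (added here, not taken from the talk): the fraction of points in an i.i.d. sample that are Fisher-separable from the rest, i.e. satisfy <x, y> < alpha <x, x> for every other point y, grows towards one as the dimension increases.

import numpy as np

rng = np.random.default_rng(4)

def fraction_fisher_separable(dim, n_points, alpha=0.8):
    # Points drawn uniformly from the cube [-1, 1]^dim (a centered, log-concave product distribution).
    X = rng.uniform(-1, 1, (n_points, dim))
    separable = 0
    for i in range(n_points):
        x = X[i]
        others = np.delete(X, i, axis=0)
        if np.all(others @ x < alpha * (x @ x)):   # Fisher separability condition
            separable += 1
    return separable / n_points

for dim in (2, 20, 200, 2000):
    print(dim, fraction_fisher_separable(dim, n_points=200))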

The general stochastic separation theorems with optimal probability estimates are obtained for important classes of distributions: log-concave distributions, their convex combinations, and product distributions. The standard i.i.d. assumption was significantly relaxed.

These theorems and estimates can be used both for the correction of high-dimensional data-driven AI systems and for the analysis of their vulnerabilities. A third area of application is the emergence of memories in ensembles of neurons, the phenomena of grandmother cells and sparse coding in the brain, and an explanation of the unexpected effectiveness of small neural ensembles in the high-dimensional brain.

Gorban, A.N., Makarov, V.A.,  & Tyukin, I.Y. (2019). The unreasonable effectiveness of small neural ensembles in high-dimensional brain. Phys. Life Rev., 29, 55--88. https://doi.org/10.1016/j.plrev.2018.09.005 

Grechuk, B., Gorban, A. N., & Tyukin, I. Y. (2020). General stochastic separation theorems with optimal bounds. arXiv preprint arXiv:2010.05241. https://arxiv.org/abs/2010.05241

Presented slides: Download

Links from VSA to connectionism and representation theory, February 22, 2021, 20:00 GMT

Paxon E. Frady, UC Berkeley, USA.


Abstract:

I will present our work that shows how different flavors of VSAs are related by common geometric principles. We use compressed sensing to illustrate how VSA representations are connected to traditional connectionist ideas, such as localist feature representations. This shows that the VSA binding operation is related to the tensor product, as in Smolensky's tensor-product representation, and that the VSA protected sum operation is linked to concatenation. Further, I will describe how VSAs fit into ideas within abstract algebra and representation theory. I will describe how different group structures could be represented by VSAs, and how complex-valued VSAs can be used to represent smooth manifold structures.
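To make the tensor-product connection concrete, here is a brief sketch (an illustration added here, not from the talk) showing that HRR binding by circular convolution equals Smolensky's tensor (outer) product summed along its wrapped anti-diagonals.

import numpy as np

D = 64
rng = np.random.default_rng(5)
a = rng.normal(0, 1 / np.sqrt(D), D)
b = rng.normal(0, 1 / np.sqrt(D), D)

# HRR binding: circular convolution, computed via the FFT.
bound = np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

# The same vector obtained by compressing the full tensor product:
# sum the entries of outer(a, b) whose indices satisfy (i + j) mod D == k.
tensor = np.outer(a, b)                         # D x D tensor-product representation
i, j = np.meshgrid(np.arange(D), np.arange(D), indexing="ij")
compressed = np.array([tensor[(i + j) % D == k].sum() for k in range(D)])

print(np.allclose(bound, compressed))           # True: binding is a compressed tensor product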


Presented slides: Download

Combining Vector Symbolic Architecture and Semiotic Approach to Solve Visual Question Answering Task, March 1, 2021, 20:00 GMT

Alexey Kovalev, HSE University, Russia.


Abstract:

In this talk, we present how a combination of Vector Symbolic Architecture and the Semiotic Approach is applied to solve the symbol grounding problem. The version of the Semiotic Approach we use is based on the consciousness theory proposed by Lev Vygotsky, Alexander Luria, and Aleksei Leontiev. We demonstrate this combined approach on the Visual Question Answering task, in which an intelligent system is asked to answer a question posed in natural language about an input image. The proposed architecture bridges the gap between the standard neural network approach to CV/NLP tasks and symbolic systems. We also show preliminary results on the well-known diagnostic dataset CLEVR.


Presented slides: Download

Quantum Mathematics in Artificial Intelligence, March 15, 2021, 20:00 GMT

Dominic Widdows, Kirsty Kitto, Trevor Cohen

Abstract: 

In the decade since 2010, successes in artificial intelligence have been at the forefront of computer science and technology, and vector space models have solidified a position at the forefront of artificial intelligence. At the same time, quantum computers have become much more powerful, and announcements of major advances are frequently in the news.

The mathematical techniques underlying both these areas have more in common than is sometimes realized. Vector spaces took a position at the axiomatic heart of quantum mechanics in the 1930s, and this adoption was a key motivation for the derivation of logic and probability from the linear geometry of vector spaces. Quantum interactions between particles are modelled using the tensor product, which is also used to express objects and operations in artificial neural networks.

This paper describes some of these common mathematical areas, including examples of how they are used in artificial intelligence (AI), particularly in automated reasoning and natural language processing (NLP). Techniques discussed include vector spaces, scalar products, subspaces and implication, orthogonal projection and negation, dual vectors, density matrices, positive operators, and tensor products. Application areas include information retrieval, categorization and implication, modelling word-senses and disambiguation, inference in knowledge bases, and semantic composition.
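As a small taste of one technique from this list (a toy sketch with made-up vectors, not an example taken from the paper), "negation" can be modelled as orthogonal projection: the component of a query vector lying along an unwanted concept is removed, in the style of quantum-logic retrieval.

import numpy as np

rng = np.random.default_rng(6)
d = 300
suit_clothing = rng.normal(0, 1, d)           # hypothetical embedding of the clothing sense
suit_lawsuit = rng.normal(0, 1, d)            # hypothetical embedding of the legal sense
query = 0.7 * suit_clothing + 0.7 * suit_lawsuit   # ambiguous query for "suit"

def negate(q, unwanted):
    u = unwanted / np.linalg.norm(unwanted)
    return q - (q @ u) * u                    # project onto the orthogonal complement

cleaned = negate(query, suit_lawsuit)         # "suit NOT lawsuit"
cos = lambda x, y: x @ y / (np.linalg.norm(x) * np.linalg.norm(y))
print(cos(cleaned, suit_lawsuit))             # ~0: legal sense removed
print(cos(cleaned, suit_clothing))            # still high: clothing sense retained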

Some of these approaches can potentially be implemented on quantum hardware. Many of the practical steps in this implementation are in early stages, and some are already realized. Explaining some of the common mathematical tools can help researchers in both AI and quantum computing further exploit these overlaps, recognizing and exploring new directions along the way.

The presentation will give some edited highlights of the paper and discuss some opportunities this brings for VSAs to be even more in the spotlight.


Presented slides: Download

Boundary and Normal States of Consciousness, March 29, 2021, 20:00 GMT

Hedda R. Schmidtke

Abstract: 

Consciousness is both present in every thought and most elusive to track down by introspection. When we try to think about what it is, to "picture" it, something strange happens: we get into an endless regress. For this reason, some claim that consciousness is simply an illusion. This talk will present recent work on the Activation Bit Vector Machine (ABVM) as it comes increasingly closer to consciousness, in the original sense matching our subjective experience. The ABVM is a logical VSA based on the Context Logic (CL) framework of logical languages. CL has an inherent image semantics that is nevertheless purely logical. For this reason, it is capable of providing an intuitive and direct solution for looking behind the scenes of some of the hardest problems in AI, including meaning and consciousness.

Presented slides: Download