Online Speakers' Corner on Vector Symbolic Architectures and Hyperdimensional Computing

CHECK THE UPCOMING EVENTS TOWARDS THE END OF THIS PAGE!

Welcome to the Spring 2023 session of the online workshop on VSA and hyperdimensional computing. The next webinar of the session will take place on June 15th, 2023, at 20:00 GMT.

USE THIS LINK TO ACCESS THE WEBINAR:
https://ltu-se.zoom.us/j/65564790287

Deploying Convolutional Networks on Untrusted Platforms Using 2D Holographic Reduced Representations. January 23rd, 2023. 20:00 GMT

Mohammad Alam, University of Maryland, USA

Abstract: Due to the computational cost of running inference for a neural network, it is common to deploy the inference steps on a third party's compute environment or hardware. If the third party is not fully trusted, it is desirable to obfuscate the nature of the inputs and outputs so that the third party cannot easily determine what specific task is being performed. Provably secure protocols for leveraging an untrusted party exist but are too computationally demanding to run in practice. We instead explore a different strategy of fast, heuristic security that we call Connectionist Symbolic Pseudo Secrets. By leveraging Holographic Reduced Representations (HRRs), we create a neural network with a pseudo-encryption-style defense that empirically shows robustness to attack, even under threat models that unrealistically favor the adversary.
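For context, a minimal NumPy sketch of the HRR binding such a defense builds on: circular convolution with a secret random vector renders data unintelligible to a third party, while the key holder can approximately invert it. This is generic HRR machinery, not the authors' actual Connectionist Symbolic Pseudo Secrets protocol.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 1024  # hypervector dimensionality

def hrr_bind(a, b):
    # HRR binding: circular convolution, computed via FFT.
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def hrr_unbind(c, a):
    # Approximate unbinding: circular correlation, i.e. convolution
    # with the involution of a (an approximate inverse of a).
    a_inv = np.concatenate(([a[0]], a[1:][::-1]))
    return hrr_bind(c, a_inv)

x = rng.normal(0, 1 / np.sqrt(d), d)   # the data to hide
s = rng.normal(0, 1 / np.sqrt(d), d)   # secret random key vector
obfuscated = hrr_bind(x, s)            # what the untrusted party sees
recovered = hrr_unbind(obfuscated, s)  # only the key holder can do this

cos = recovered @ x / (np.linalg.norm(recovered) * np.linalg.norm(x))
print(f"cosine(recovered, original) ≈ {cos:.2f}")  # ≈ 0.7: x recovered up to noise
```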

Presented slides: Download.

Preliminary: VSA library in Lava. February 6th, 2023. 20:00 GMT

Paxon Frady, Intel, USA

Presented slides

Hyperdimensional Feature Fusion for Out-of-Distribution Detection. February 20th, 2023. 20:00 GMT

Samuel Wilson, Queensland University of Technology, Australia 

Abstract: We introduce powerful ideas from Hyperdimensional Computing into the challenging field of Out-of-Distribution (OOD) detection. In contrast to most existing work, which performs OOD detection based on only a single layer of a neural network, we use similarity-preserving semi-orthogonal projection matrices to project the feature maps from multiple layers into a common vector space. By repeatedly applying the bundling operation ⊕, we create expressive class-specific descriptor vectors for all in-distribution classes. At test time, a simple and efficient cosine similarity calculation between descriptor vectors consistently identifies OOD samples with better performance than the current state of the art. We show that the hyperdimensional fusion of multiple network layers is critical to achieving the best general performance.
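To make the pipeline concrete, here is a rough NumPy sketch of the descriptor construction described above; the dimensionality, layer sizes, and scoring rule are illustrative placeholders, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 2048                       # common hypervector dimensionality (placeholder)
layer_dims = [256, 512, 1024]  # pooled feature sizes of fused layers (placeholder)

# One random semi-orthogonal projection per layer: orthonormal columns
# preserve inner products, hence similarities, of the projected features.
projections = [np.linalg.qr(rng.normal(size=(D, f)))[0] for f in layer_dims]

def fuse(features):
    # Project each layer's feature vector into the common space and bundle.
    return sum(p @ feat for p, feat in zip(projections, features))

def class_descriptor(samples):
    # Bundle the fused vectors of all training samples of one class.
    return sum(fuse(feats) for feats in samples)

def ood_score(features, descriptors):
    # Cosine similarity to the closest class descriptor; a low maximum
    # means the sample matches no in-distribution class, i.e. likely OOD.
    z = fuse(features)
    return max(z @ c / (np.linalg.norm(z) * np.linalg.norm(c)) for c in descriptors)
```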

Presented slides: Download.

Unpaired Image Translation via Vector Symbolic Architectures. March 13th, 2023. 20:00 GMT

Justin Theiss, Meta, USA 

Abstract: Image-to-image translation has played an important role in enabling synthetic data for computer vision. However, if the source and target domains have a large semantic mismatch, existing techniques often suffer from source-content corruption, also known as semantic flipping. To address this problem, we propose a new paradigm for image-to-image translation using Vector Symbolic Architectures (VSA), a theoretical framework that defines algebraic operations in a high-dimensional vector (hypervector) space. We introduce VSA-based constraints on adversarial learning for source-to-target translations by learning a hypervector mapping that inverts the translation to ensure consistency with source content. We show both qualitatively and quantitatively that our method improves over other state-of-the-art techniques.

Presented slides: Download.

Hyperdimensional Computing with Applications. March 20th, 2023. 20:00 GMT

Tajana Simunic Rosing, UC San Diego, USA

Abstract: In today's world, technological advances are continually creating more data than we can cope with. Much of data processing will need to run at least partly on devices at the edge of the internet, such as sensors and smartphones. However, running existing machine learning on such systems would drain their batteries and be too slow. Hyperdimensional (HD) computing is a class of learning algorithms motivated by the observation that the human brain operates on a lot of simple data in parallel. It has been proposed as a lightweight alternative to state-of-the-art machine learning. HD computing uses high-dimensional random vectors (e.g., ~10,000 bits) to represent data, making the model robust to noise and hardware faults. It uses search, along with three base operations: permutation, addition (or bundling/consensus sum), and multiplication (circular convolution/XOR). Addition allows us to represent sets, multiplication expresses conjunctive variable binding, and permutation enables encoding of causation and time series. Hypervectors are compositional: they enable computation in superposition, unlike standard neural representations. Systems that use HD computing to learn can run directly in memory and have been shown to be accurate, fast, and very energy efficient. Most importantly, such systems can explain how they made decisions, resulting in devices that can learn directly from the data they obtain without the need for the cloud. In this talk I will present some of my team's recent work on hyperdimensional computing software and hardware infrastructure, including: i) novel algorithms supporting key cognitive computations in high-dimensional space, such as classification, clustering, regression, and others; ii) novel systems for efficient HD computing on sensors and mobile devices, covering hardware accelerators such as GPUs, FPGAs, and PIM, along with the software infrastructure to support them. I will also present the prototypes my team built and tested, along with exciting results and some ideas for next steps.
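The three base operations are easy to state concretely. A minimal NumPy sketch using dense binary hypervectors (one of several VSA flavours; the dimensionality and similarity measure are chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10_000  # "hyper" dimensionality, as in the abstract

a, b, c = (rng.integers(0, 2, d, dtype=np.uint8) for _ in range(3))

# Multiplication / binding: XOR for binary vectors; it is its own inverse.
bind = lambda x, y: x ^ y

# Addition / bundling: elementwise majority vote over an odd number of inputs;
# the result stays similar to every input (it represents the set).
def bundle(*vs):
    return (np.sum(vs, axis=0) * 2 > len(vs)).astype(np.uint8)

# Permutation: a fixed cyclic shift; used to encode order and time steps.
permute = lambda x, k=1: np.roll(x, k)

sim = lambda x, y: 1 - np.mean(x != y)  # 0.5 for unrelated, 1.0 for identical

s = bundle(a, b, c)
print(sim(s, a))                               # ≈ 0.75: s resembles each input
print(sim(bind(a, b), a))                      # ≈ 0.5: binding yields a dissimilar vector
print(np.array_equal(bind(bind(a, b), b), a))  # True: unbinding by re-binding
```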

Presented slides: Download.

Vector Symbolic Finite State Machines in Attractor Neural Networks. April 3rd, 2023. 20:00 GMT

Madison Cotteret, University of Groningen, Netherlands 

Abstract: Hopfield attractor networks are robust distributed models of human memory. We propose construction rules such that an attractor network may implement an arbitrary finite state machine (FSM), where states and stimuli are represented by high-dimensional random bipolar vectors and all state transitions are enacted by the attractor network's dynamics. Numerical simulations show the capacity of the model, in terms of the maximum size of the implementable FSM, to be linear in the size of the attractor network. We show that the model is robust to imprecise and noisy weights, and hence a prime candidate for implementation with high-density but unreliable devices. By endowing attractor networks with the ability to emulate arbitrary FSMs, we propose a plausible path by which FSMs may exist as a distributed computational primitive in biological neural networks.
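As background, a minimal NumPy sketch of the substrate: a Hopfield attractor network storing random bipolar vectors as fixed points. The paper's actual contribution, the construction rules that additionally wire stimulus-driven transitions between such attractors, is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_states = 1000, 5

# FSM states as high-dimensional random bipolar vectors, as in the abstract.
states = rng.choice([-1, 1], size=(n_states, d))

# Hebbian (outer-product) storage makes each state vector a fixed point.
W = (states.T @ states).astype(float) / d
np.fill_diagonal(W, 0)

def settle(x, steps=10):
    # Synchronous attractor dynamics: repeated thresholded recurrence.
    for _ in range(steps):
        x = np.where(W @ x >= 0, 1, -1)
    return x

# Flip 20% of one state's entries; the dynamics clean the pattern up.
noisy = states[2].copy()
noisy[rng.choice(d, size=d // 5, replace=False)] *= -1
print(np.mean(settle(noisy) == states[2]))  # ≈ 1.0: the attractor restores the state
```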

Presented slides

A Neuro-vector-symbolic Architecture for Solving Raven's Progressive Matrices. April 24th, 2023. 20:00 GMT

Michael Hersche, IBM Zurich, Switzerland 

Abstract: Neither deep neural networks nor symbolic artificial intelligence (AI) alone has approached the kind of intelligence expressed in humans. This is mainly because neural networks are not able to decompose joint representations to obtain distinct objects (the so-called binding problem), while symbolic AI suffers from exhaustive rule searches, among other problems. In this talk, we show that the two problems can be addressed with our proposed neuro-vector-symbolic architecture (NVSA) by exploiting its powerful operators on high-dimensional distributed representations that serve as a common language between neural networks and symbolic AI. The efficacy of NVSA is demonstrated by solving Raven's progressive matrices datasets. Compared with state-of-the-art deep neural network and neuro-symbolic approaches, end-to-end training of NVSA achieves a new record of 87.7% average accuracy on the RAVEN dataset and 88.1% on I-RAVEN. Moreover, compared with the symbolic reasoning within the neuro-symbolic approaches, the probabilistic reasoning of NVSA, with less expensive operations on the distributed representations, is two orders of magnitude faster.

Paper: https://rdcu.be/c7fwW

Code: https://lnkd.in/eZ47Dmep 

Presented slides

In-memory factorization of holographic perceptual representations. May 1st, 2023. 20:00 GMT

Geethan Karunaratne, IBM Zurich, Switzerland 

Abstract: Disentangling the attributes of a sensory signal is central to sensory perception and cognition and hence is a critical task for future artificial intelligence systems. Here we present a non-deterministic, non-von Neumann compute engine capable of efficiently factorizing high-dimensional holographic representations of combinations of such attributes. The compute engine combines the emerging paradigm of in-memory computing with an enhanced variant of a resonator network. We introduce a threshold-based nonlinear sparse activation between the two key operations of the resonator network and a threshold-based convergence detection, and we exploit the intrinsic stochasticity of the memristive devices employed for in-memory computing. The stochastic in-memory factorizer is shown to solve problems at least five orders of magnitude larger than those solvable by conventional resonator networks, while substantially lowering the computational time and space complexity. We present a large-scale experimental demonstration of the factorizer, employing two in-memory compute chips based on phase-change memristive devices. The dominant matrix-vector multiplication operations take constant time, irrespective of the size of the matrix, thus reducing the computational time complexity to merely the number of iterations. Moreover, we experimentally demonstrate the ability to reliably and efficiently factorize visual perceptual representations generated by modern CNNs.
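For orientation, a minimal NumPy sketch of a conventional, deterministic resonator network, the baseline that the stochastic in-memory factorizer improves upon; codebook sizes and the iteration count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
d, m = 1024, 15  # dimensionality and per-factor codebook size (illustrative)

# Three codebooks of random bipolar hypervectors; the composite to factorize
# is the elementwise-product binding of one vector from each codebook.
A, B, C = (rng.choice([-1, 1], size=(m, d)) for _ in range(3))
ia, ib, ic = 3, 7, 12
s = A[ia] * B[ib] * C[ic]

sign = lambda v: np.where(v >= 0, 1, -1)

# Start each factor estimate in the superposition of its whole codebook,
# then iterate: unbind with the other estimates, project onto the codebook,
# and re-binarize. Bipolar vectors are their own inverses under *.
xa, xb, xc = sign(A.sum(0)), sign(B.sum(0)), sign(C.sum(0))
for _ in range(100):
    xa = sign(A.T @ (A @ (s * xb * xc)))
    xb = sign(B.T @ (B @ (s * xa * xc)))
    xc = sign(C.T @ (C @ (s * xa * xb)))

# Read out the recovered indices (convergence is probabilistic but typical
# at these sizes): expected output 3 7 12.
print(np.argmax(A @ xa), np.argmax(B @ xb), np.argmax(C @ xc))
```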


Manuscript: https://www.nature.com/articles/s41565-023-01357-8

Presented slides

Hyper-Dimensional Function Encoding: Enabling Neural Networks to Process Continuous Objects (work in progress). May 15th, 2023. 20:00 GMT

Dehao Yuan, University of Maryland, USA.

Abstract: We propose Hyper-Dimensional Function Encoding (HDFE). Given samples of a function mapping, HDFE produces an explicit vector representation of the function that is invariant to the sample distribution and sample density. Sample invariance enables HDFE to consistently encode continuous objects and therefore allows neural networks to receive continuous objects as inputs for machine learning tasks such as classification and regression. In addition, the encoding produced by HDFE is decodable, which enables neural networks to regress continuous objects by regressing their encodings. Furthermore, the HDFE transformation is both distance-preserving (isometric) and universally approximable, which enhances the ability of neural networks to manipulate or predict continuous objects. HDFE therefore serves as an interface for processing continuous objects.
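Since this is work in progress, the abstract does not spell out HDFE's construction. The NumPy sketch below shows only a generic VSA-style function encoding (fractional power encoding of inputs bound to outputs and averaged) that exhibits a similar insensitivity to sample density; it should not be read as HDFE itself.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4096  # encoding dimensionality (illustrative)

# Random phase vectors define a fractional power encoding of scalars:
# enc(x) raises d random unit phasors to the power x.
theta_x = rng.uniform(-np.pi, np.pi, d)
theta_y = rng.uniform(-np.pi, np.pi, d)

def encode_function(xs, ys):
    # Bind each input's encoding to its output's encoding (phasor product)
    # and average over samples. Averaging makes the code insensitive to the
    # number of samples; handling arbitrary sample *distributions*, as HDFE
    # claims, requires machinery beyond this sketch.
    return np.mean(np.exp(1j * (np.outer(xs, theta_x) + np.outer(ys, theta_y))), axis=0)

cos = lambda u, v: np.abs(np.vdot(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v))

xs1 = rng.uniform(0, 2 * np.pi, 200)    # sparse sampling of sin
xs2 = rng.uniform(0, 2 * np.pi, 2000)   # dense, independent sampling of sin
v1 = encode_function(xs1, np.sin(xs1))
v2 = encode_function(xs2, np.sin(xs2))
v3 = encode_function(xs1, np.cos(xs1))  # a different function, for contrast

print(f"same function, different samples: {cos(v1, v2):.2f}")  # high
print(f"different functions:              {cos(v1, v3):.2f}")  # lower
```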

Presented slides

Efficient Decoding of Compositional Structure in Holistic Representations. May 29th, 2023. 20:00GMT

Denis Kleyko, RISE, Sweden.

Abstract: The talk will be devoted to our recent article, to appear in Neural Computation. We investigate the task of retrieving information from compositional distributed representations formed by hyperdimensional computing/vector symbolic architectures and present novel techniques that achieve new information-rate bounds. First, we provide an overview of the decoding techniques that can be used to approach the retrieval task; the techniques are categorized into four groups. We then evaluate these techniques in several settings involving, for example, external noise and storage elements with reduced precision. In particular, we find that the decoding techniques from the sparse coding and compressed sensing literature (rarely used for hyperdimensional computing/vector symbolic architectures) are also well suited for decoding information from compositional distributed representations. Combining these decoding techniques with interference cancellation ideas from communications improves previously reported bounds (Hersche et al., 2021) on the information rate of the distributed representations from 1.20 to 1.40 bits per dimension for smaller codebooks and from 0.60 to 1.26 bits per dimension for larger codebooks.
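To illustrate the retrieval task, a small NumPy sketch comparing the baseline matched-filter decoder with a simple successive interference-cancellation decoder; parameters are illustrative, and the sparse-coding and compressed-sensing decoders studied in the article are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, k = 1024, 64, 8  # dimension, codebook size, number of bundled items

codebook = rng.choice([-1, 1], size=(m, d))
keys = rng.choice([-1, 1], size=(k, d))  # one role/position vector per slot

# Compose: bind each chosen codeword to its slot key, then superimpose.
chosen = rng.choice(m, size=k, replace=False)
s = np.sum(keys * codebook[chosen], axis=0)

# Baseline matched-filter decoding: unbind each key, take the nearest codeword.
naive = [int(np.argmax(codebook @ (s * keys[i]))) for i in range(k)]

# Interference cancellation: subtract each decoded component from the trace
# before decoding the next slot, reducing crosstalk from the superposition.
residual, cancel = s.astype(float), []
for i in range(k):
    j = int(np.argmax(codebook @ (residual * keys[i])))
    cancel.append(j)
    residual -= keys[i] * codebook[j]

print(chosen.tolist())  # ground truth
print(naive)            # matched filter: reliable at this load
print(cancel)           # with cancellation: at least as reliable
```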

Presented slides

MIDNIGHTSUN SESSION. June 15th, 2023. 20:00 GMT

Abstract: The topic for this event is HD/VSA as a scientific/engineering discipline.

After more than three decades of active research on HD/VSA, there are strong indicators of their unique place within the AI landscape. A whole generation of PhDs has graduated with HD/VSA at the heart of their projects. Several high-profile academic projects have been running over the past few years. We see the emergence of stable constellations of researchers all over the world focusing on different aspects of HD/VSA. In other words, HD/VSA is on its way to becoming a discipline of its own within the AI landscape.

However, several challenges need to be addressed in order to make HD/VSA a discipline that is both accepted and recognized by other communities and attractive to a new generation of researchers. These challenges, as we see them, fall along several dimensions:

1. How to make HD/VSA attractive for academic and industrial employers?

a. What unique skills do researchers trained within HD/VSA possess?
b. How do these skills match those common in today's mainstream AI?

2. How to make HD/VSA attractive to newcomers: undergraduate students, graduate students, postdocs, and researchers from other disciplines?

a. What are the prerequisite skills?
b. Is there a clear career path, including guidelines on the publication track?
c. Why bother at all? (the selling point)

3. How to make HD/VSA attractive to and understood by other disciplines, e.g., from the applications point of view?

4. What are these OTHER disciplines that would benefit from, or be beneficial to, HD/VSA?

a. Neuroscience?
b. Deep learning?
c. Material science?
d. Electrical engineering?
e. Neuromorphic computing?
f. Cognitive science?
g. Computer science?
h. Robotics?
i. What else? 

5. What kinds of outreach activities (in addition to the ongoing webinars, website, community workshops, and tutorials at conferences for other communities) are needed? What are the relevant conferences to be present at?


Presented slides