Online Speakers' Corner on Vector Symbolic Architectures and Hyperdimensional Computing

CHECK THE UPCOMING EVENTS TOWARDS THE END OF THIS PAGE!

If you want to credit this webinar series, use the following entry when citing (BibTeX).


Welcome to the Fall 2023 session of the online workshop on VSA and hyperdimensional computing. The last webinar of the fall session was on December 11th, 2023, 20:00 GMT. See you soon in 2024!

USE THIS LINK TO ACCESS THE WEBINAR:
https://ltu-se.zoom.us/j/65564790287

Mappings between lower-dimensional and hyperdimensional embedding spaces. September 11th, 2023. 20:00GMT

Tony Plate, USA

Many moderate-dimensional embeddings are readily available, e.g., from trained deep and transformer-style neural networks, with dimensionality in the range of 16 to 256. However, hyperdimensional vector techniques require much higher dimensionality, e.g., in the thousands, so techniques for mapping from lower- to higher-dimensional spaces can be useful. In this talk I look at what is needed from such mappings in order to take full advantage of the properties of hyperdimensional computing, i.e., binding, bundling, and robustness to noise. I describe several such mappings and report on experimental studies of their properties.
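
One simple family of such mappings (not necessarily among those covered in the talk) is sign-thresholded random projection, which approximately preserves cosine similarity between inputs. A minimal numpy sketch, with all sizes and names illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
D_LOW, D_HIGH = 64, 10_000  # illustrative dimensionalities

# A fixed random projection, generated once and shared across all items.
P = rng.standard_normal((D_HIGH, D_LOW))

def to_hypervector(x):
    """Sign-thresholded random projection: maps a low-dimensional embedding
    to a bipolar hypervector while approximately preserving cosine
    similarity (a random-hyperplane / SimHash-style construction)."""
    return np.sign(P @ x)

a = rng.standard_normal(D_LOW)
b = a + 0.1 * rng.standard_normal(D_LOW)   # a slightly noisy copy of a

ha, hb = to_hypervector(a), to_hypervector(b)
print("bitwise agreement:", np.mean(ha == hb))   # close to 1 for similar inputs
```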

Presented slides: Access here

Local prediction-learning in high-dimensional spaces enables neural networks to plan. September 18th, 2023. 20:00GMT

Wolfgang Maass, Christoph Stöckl, Yukun Yang. TU Graz, Austria 

Being able to plan a sequence of actions in order to reach a goal is a common challenge in machine learning. We show that simple local learning rules enable a neural network to create high-dimensional representations of actions and sensory inputs that encode salient information about their relationship. In fact, it can create a high-dimensional space which reduces planning to a simple geometric problem that can easily be solved by a neural network. We evaluate our model on a variety of tasks, ranging from navigation on abstract graphs to motor control. Because our approach requires neither backpropagation of errors nor giant datasets for learning, it is suitable for implementation in highly energy-efficient neuromorphic hardware.
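
The authors' model is defined by its local learning rules; purely as an illustration of how planning can become geometric in a high-dimensional space, the toy below hand-constructs state codes whose similarity decays with graph distance, so that greedily stepping to the neighbor most similar to the goal yields a valid plan. The construction and all names are hypothetical, not the paper's model:

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(1)
D = 10_000  # hypervector dimensionality (illustrative)

# Toy environment: an undirected graph of states.
graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}

def bfs_dist(src):
    """Breadth-first-search distances from src to every node."""
    dist, queue = {src: 0}, deque([src])
    while queue:
        n = queue.popleft()
        for m in graph[n]:
            if m not in dist:
                dist[m] = dist[n] + 1
                queue.append(m)
    return dist

# Hand-built stand-in for the learned representations: each state's code is
# a bundle of random node vectors weighted by exp(-graph distance), so that
# vector similarity decays with distance in the environment.
base = {n: rng.standard_normal(D) for n in graph}
dist = {n: bfs_dist(n) for n in graph}
code = {n: sum(np.exp(-dist[n][k]) * base[k] for k in graph) for n in graph}

def plan(start, goal, max_steps=10):
    """Planning as geometry: repeatedly step to the neighbor whose code has
    the largest inner product with the goal's code."""
    path, node = [start], start
    while node != goal and len(path) <= max_steps:
        node = max(graph[node], key=lambda m: code[m] @ code[goal])
        path.append(node)
    return path

print(plan(0, 4))  # e.g. [0, 1, 3, 4]
```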

Presented slides

Luke Yi. Stanford University. October 2nd, 2023. 20:00GMT

Hardware-Aware Static Optimization of Hyperdimensional Computations 

Abstract: Binary spatter code (BSC)-based hyperdimensional computing (HDC) is a highly error-resilient approximate computational paradigm suited for error-prone, emerging hardware platforms. In BSC HDC, the basic datatype is a *hypervector*, a typically large binary vector, where the size of the hypervector has a significant impact on the fidelity and resource usage of the computation. Typically, the hypervector size is dynamically tuned to deliver the desired accuracy; this process is time-consuming and often produces hypervector sizes that lack accuracy guarantees and produce poor results when reused for very similar workloads. We present Heim, a hardware-aware static analysis and optimization framework for BSC HD computations. Heim analytically derives the minimum hypervector size that minimizes resource usage and meets the target accuracy requirement. Heim *guarantees* that the optimized computation converges to the user-provided accuracy target in expectation, even in the presence of hardware error. Heim deploys a novel static analysis procedure that unifies theoretical results from the neuroscience community to systematically optimize HD computations. We evaluate Heim against dynamic tuning-based optimization on 25 benchmark data structures. Given a 99% accuracy requirement, Heim-optimized computations achieve 99.2%-100.0% median accuracy, up to 49.5% higher than dynamic tuning-based optimization, while achieving 1.15x-7.14x reductions in hypervector size compared to HD computations that achieve comparable query accuracy, and finding parametrizations 30.0x-100167.4x faster than dynamic tuning-based approaches. We also use Heim to systematically evaluate the performance benefits of using analog CAMs and multiple-bit-per-cell ReRAM over conventional hardware while maintaining iso-accuracy -- for both emerging technologies, we find usages where the emerging hardware imparts significant benefits.
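
Heim's static analysis is the paper's contribution and is not reproduced here; to give a flavor of what analytically deriving a minimum hypervector size can look like, here is a back-of-the-envelope sketch under assumed simplifications (a bit-flip noise model, Gaussian approximation of Hamming distances, a union bound over distractors), not Heim's procedure:

```python
import math

def min_dimension(p_flip, n_items, target_acc):
    """Smallest BSC hypervector size n such that a noisy query (each bit
    flipped independently with probability p_flip) stays closer in Hamming
    distance to its stored vector than to every one of n_items - 1 random
    distractors, with probability >= target_acc.

    Gaussian approximation: the margin (distractor distance minus match
    distance) has mean (0.5 - p_flip) * n and variance
    p_flip * (1 - p_flip) * n + n / 4; errors are union-bounded.
    """
    for n in range(64, 100_000, 16):
        mu = (0.5 - p_flip) * n
        sigma = math.sqrt(p_flip * (1 - p_flip) * n + n / 4)
        p_one_err = 0.5 * math.erfc(mu / (sigma * math.sqrt(2)))
        if 1 - (n_items - 1) * p_one_err >= target_acc:
            return n
    return None

print(min_dimension(p_flip=0.2, n_items=1000, target_acc=0.99))
```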

Presented slides

Ali Safa. KU Leuven, Belgium. October 16th, 2023. 20:00GMT

SupportHDC: Hyperdimensional Computing with Scalable Hypervector Sparsity 


Abstract: Hyperdimensional Computing (HDC) is an emerging brain-inspired machine learning method that has recently been gaining much attention for performing tasks such as pattern recognition and bio-signal classification with ultra-low energy and area overheads when implemented in hardware. HDC relies on encoding input signals into binary or few-bit Hypervectors (HVs) and performs low-complexity manipulations on HVs in order to classify the input signals. In this context, the sparsity of HVs directly impacts energy consumption, since the sparser the HVs, the more zero-valued computations can be skipped. This short paper introduces SupportHDC, a novel HDC design framework that can jointly optimize system accuracy and sparsity in an automated manner, in order to trade off classification performance against hardware implementation overheads. We illustrate the inner workings of the framework on two bio-signal classification tasks: cancer detection and arrhythmia detection. We show that SupportHDC can reach a higher accuracy than the conventional spatter-code architectures used in many works, while enabling the system designer to choose the final design solution from the accuracy-sparsity trade-off curve produced by the framework. We release the source code for reproducing our experiments in the hope that it will be beneficial to future research.
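
SupportHDC's joint accuracy-sparsity optimization is not reproduced here; the sketch below only illustrates the cost side of the trade-off, i.e., why sparser hypervectors let more computations be skipped. All names and sizes are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
D = 10_000  # hypervector dimensionality (illustrative)

def sparse_hv(density):
    """Random binary hypervector with the given fraction of ones."""
    hv = np.zeros(D, dtype=np.uint8)
    hv[rng.choice(D, size=int(density * D), replace=False)] = 1
    return hv

# The overlap (dot product) between a query and a class prototype only has
# to touch positions where the query is 1, so every zero in the query is a
# skipped operation: sparser hypervectors mean cheaper classification.
for density in (0.5, 0.1, 0.01):
    query, proto = sparse_hv(density), sparse_hv(density)
    active = np.flatnonzero(query)        # positions actually computed
    overlap = int(proto[active].sum())    # sparse dot product
    print(f"density={density}: touched {active.size} of {D} positions, "
          f"overlap={overlap}")
```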


Presented slides

Anthony Thomas. UCSD, USA. October 30th, 2023. 20:00GMT

A Tour Through Learning with High-Dimensional Representations: VSAs, Kernel Methods, and Randomized Embeddings 

Abstract: One of the most fundamental ideas in information processing is the notion that high-dimensional embeddings can expose structure in data in such a way that problems, which were complex when posed on the original representation of the data, become simpler, in some sense, when posed on the embeddings. This basic idea lies at the heart of a wide variety of approaches to information processing. Notable examples include: kernel methods from statistics and machine learning, randomized embeddings and sketching from computer science, and hyperdimensional computing (HDC) from cognitive science. The literature on this topic is vast and interdisciplinary. In this talk, I will survey some of its key results with attention to what they tell us about the capabilities of HDC and its relationship to other techniques in the broader literature. I will focus, in particular, on discussing some of the analytic tools that have been developed for analyzing such representations for use in learning settings, and how they can be applied to gain insight into HDC and VSAs.
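
As one concrete, well-known instance of the idea that randomized high-dimensional embeddings simplify problems, the sketch below uses random Fourier features, whose inner products approximate a Gaussian kernel; all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
d, D, sigma = 8, 10_000, 1.0  # input dim, embedding dim, kernel bandwidth

# Random Fourier features (Rahimi & Recht): a randomized high-dimensional
# embedding whose inner products approximate the Gaussian (RBF) kernel.
W = rng.normal(0.0, 1.0 / sigma, size=(D, d))
b = rng.uniform(0.0, 2 * np.pi, size=D)

def embed(x):
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

x, y = rng.standard_normal(d), rng.standard_normal(d)
exact = np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))
estimate = embed(x) @ embed(y)
print(f"RBF kernel: exact={exact:.4f}, embedding estimate={estimate:.4f}")
```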

Presented slides

Kenneth L. Clarkson. IBM Almaden Research. November 13th, 2023. 20:00GMT

Capacity Analysis of Vector Symbolic Architectures
(Joint work with Shashanka Ubaru and Elizabeth Yang) 


Abstract: Hyperdimensional computing (HDC) is a biologically-inspired framework which represents symbols with high-dimensional vectors and uses vector operations to manipulate them. The ensemble of a particular vector space and a prescribed set of vector operations (including one addition-like operation for "bundling" and one outer-product-like operation for "binding") forms a *vector symbolic architecture* (VSA). While VSAs have been employed in numerous applications and have been studied empirically, many theoretical questions about VSAs remain open. We analyze the *representation capacities* of four common VSAs: MAP-I, MAP-B, and two VSAs based on sparse binary vectors. "Representation capacity" here refers to bounds on the dimensions of the VSA vectors required to perform certain symbolic tasks, such as testing for set membership i ∈ S and estimating set intersection sizes |X ∩ Y| for two sets of symbols X and Y, to a given degree of accuracy. We also analyze the ability of a novel variant of a Hopfield network (a simple model of associative memory) to perform some of the same tasks that are typically asked of VSAs. In addition to providing new bounds on VSA capacities, our analyses establish and leverage connections between VSAs, "sketching" (dimensionality reduction) algorithms, and Bloom filters.
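
A minimal numpy illustration of the two tasks mentioned, using a MAP-I-style integer bundle of random bipolar vectors; the dimension and set sizes are illustrative, and the talk's contribution is the capacity bounds, not this construction:

```python
import numpy as np

rng = np.random.default_rng(4)
D = 10_000  # vector dimensionality (illustrative)

# MAP-I-style encoding: symbols are random bipolar vectors, and a set is
# represented by the integer sum ("bundle") of its members' vectors.
symbols = {s: rng.choice([-1, 1], size=D) for s in "abcdefghij"}
X = sum(symbols[s] for s in "abcde")   # bundle encoding the set {a,b,c,d,e}

# Membership test i in S: the normalized dot product concentrates near 1
# for members and near 0 for non-members.
for s in "aej":
    print(f"{s} in set? score = {(symbols[s] @ X) / D:+.3f}")

# Intersection size |X ∩ Y|: the dot product of two bundles scales with
# the number of shared symbols (here {d, e}, so the estimate is ~2).
Y = sum(symbols[s] for s in "defgh")
print("estimated |X ∩ Y| =", round((X @ Y) / D, 2))
```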


Hussam Amrouch. TU München, Germany. November 27th, 2023. 20:00GMT

HW/SW Codesign for Brain-Inspired Hyperdimensional In-Memory Computing 


Abstract: Breakthroughs in deep learning consistently drive innovation. However, deep neural networks (DNNs) tend to overwhelm conventional computing systems. Hyperdimensional Computing (HDC) is rapidly gaining prominence as a potent method for rapid learning from a relatively small amount of data. It also holds the promise of offering energy-efficient, lightweight computation. This talk will provide a comprehensive overview of the major shortcomings of existing von Neumann architectures and the growing need for innovative designs that fundamentally reduce memory latency and energy consumption by enabling data processing within the memory itself. Additionally, the talk will delve into the immense potential of beyond-von-Neumann architectures, which utilize both emerging beyond-CMOS devices such as Ferroelectric Field-Effect Transistors (FeFETs) and conventional CMOS-based SRAM memories.

Yeseong Kim. DGIST, Republic of Korea. December 11th, 2023. 20:00GMT

Brain-Inspired Hyperdimensional Computing in the Wild: Lightweight Symbolic Learning for Sensorimotor Controls of Wheeled Robots 


Abstract: Efficiency and performance pose significant challenges when integrating Machine Learning (ML) into robotics, particularly in energy-limited real-world scenarios.

Hyperdimensional Computing (HDC) presents a promising, energy-efficient alternative, yet its application in robotics remains largely unexplored.

We introduce ReactHD, a novel framework grounded in Hyperdimensional Computing (HDC), crafted to facilitate advanced perception-action learning for sensorimotor control tasks. This framework is specifically tailored for deploying robots in complex real-world environments. ReactHD utilizes hypervectors to encode LiDAR sensor data and learn appropriate high-dimensional patterns for robot actions. Additionally, ReactHD incorporates two HD-based, streamlined symbolic learning methods: HDC-based Interactive Learning by Demonstration (HDC-IL) and HD-Reinforcement Learning (HDC-RL). These enable robots to exhibit precisely situated reactive behaviors in complex environments. Our empirical evaluations on a wheeled robot powered by a low-power Raspberry Pi show that ReactHD achieves learning outcomes that are both robust and accurate, comparable to state-of-the-art deep learning, while improving performance and energy efficiency by two orders of magnitude.
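
ReactHD's exact encoder is not reproduced here; the sketch below shows the generic HDC recipe such a system might use to encode a range scan: bind each beam's position code with a quantized-distance code, then bundle across beams. All names and parameters are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(5)
D, N_BEAMS, N_LEVELS = 10_000, 17, 8  # odd beam count avoids ties in sign()

# One random bipolar code per beam position and per quantized distance level.
pos = rng.choice([-1, 1], size=(N_BEAMS, D))
lvl = rng.choice([-1, 1], size=(N_LEVELS, D))

def encode_scan(ranges, max_range=5.0):
    """Encode a range scan: bind each beam's position code with its
    quantized-distance code (elementwise product), then bundle all beams
    into a single hypervector (sum + sign)."""
    levels = np.clip((ranges / max_range * N_LEVELS).astype(int), 0, N_LEVELS - 1)
    bound = pos * lvl[levels]          # bind: one row per beam
    return np.sign(bound.sum(axis=0))  # bundle

scan = rng.uniform(0.2, 5.0, size=N_BEAMS)
hv = encode_scan(scan)

# Downstream, an action could then be chosen by similarity to learned class
# prototypes, e.g. np.argmax(prototypes @ hv) -- "prototypes" is hypothetical.
```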