Video Recordings

Keynote 1. Ross Gayler. Thinking about Vector Symbolic Architectures.

Abstract: Vector Symbolic Architectures are defined in terms of a very small set of operators acting on a vector space. The task of the VSA researcher is to discover the implications that follow from the definition in terms of the systems that can be implemented with VSAs. The VSA definitions are the researcher’s raw materials, but they also need tools to transform those raw materials into useful hypotheses and system designs. One important tool for a researcher is a conceptual framework, which specifies how the researcher thinks about VSAs and relates them to the other things they know. It is the researcher’s mental model of how VSAs work. The primary requirement for a conceptual framework is that it is productive; it should make it easy for the researcher to generate interesting hypotheses and designs. These hypotheses and designs don’t have to be correct, just plausible. Beating them into shape is a different part of the research process. Most VSA research papers contain a statement of the VSA definition. Very few mention the researcher’s conceptual framework. In this talk I will sketch out my conceptual framework - how I think about Vector Symbolic Architectures - in the hope that it might be interesting and useful to other researchers.

Keynote 2. Dmitri Rachkovskij. Hierarchies and Hypervectors

Agenda

1. Part-whole and generalization-specialization hierarchies.

2. Hypervectors. Superposition and binding operations (a minimal sketch follows this agenda).

3. Architecture of Associative-Projective Neural Networks.

4. Analogical reasoning with hypervectors: retrieval, mapping, inference.

5. Hypervectors for spatial data and Visual Place Recognition.

6. Some topics for future research.
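
As a companion to item 2, here is a minimal sketch of the two core operations on random bipolar hypervectors: binding as element-wise multiplication and superposition as element-wise majority. The dimensionality, the toy record, and the similarity measure are illustrative choices, not part of the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality (illustrative)

def random_hv():
    """Random bipolar {-1, +1} hypervector."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """Binding: element-wise multiplication (self-inverse for bipolar vectors)."""
    return a * b

def bundle(*vs):
    """Superposition: element-wise majority vote (sign of the sum)."""
    return np.sign(np.sum(vs, axis=0))

def sim(a, b):
    """Normalized dot-product similarity in [-1, 1]."""
    return a @ b / D

# Encode a toy record {colour: red, shape: square, size: big} and query one field.
colour, shape, size = random_hv(), random_hv(), random_hv()
red, square, big = random_hv(), random_hv(), random_hv()
record = bundle(bind(colour, red), bind(shape, square), bind(size, big))
print(sim(bind(record, colour), red))  # ~0.5: a noisy but recognizable copy of 'red'
print(sim(bind(record, colour), big))  # ~0.0: unrelated filler
```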

Keynote 3. Tony Plate. What can transformers and VSA learn from each other?

Abstract: 

Systematicity and compositionality in transformer networks. Various transformer neural network architectures have demonstrated remarkable linguistic and even reasoning abilities. This has been done without explicit support for variable binding. In this talk I survey some of the work that demonstrates reasoning abilities, looking at how robust and extensive those capabilities really are. I hope to generate discussion on what kinds of tasks can be useful for probing reasoning abilities, and on whether (or not) transformer networks might be made more powerful and transparent by incorporating binding techniques from hyperdimensional computing.

Keynote 4. Jeff Orchard. Spikes for Cognition: Hyperdimensional Computing with Spiking Phasors 

Abstract: Hyperdimensional (HD) computing offers a powerful framework for representing compositional reasoning. Such algorithms lend themselves to neural-network implementations, allowing us to create neural networks that can perform cognitive functions like spatial reasoning, arithmetic, and symbolic logic. But the vectors involved can be quite large. Advances in neuromorphic hardware hold the promise of reducing the running time and energy footprint of neural networks by orders of magnitude. In this talk, I will outline a spiking implementation of Fourier Holographic Reduced Representation (FHRR), one of the most versatile VSAs. The phase of each complex number of an FHRR vector is encoded as a spike time within a cycle. Neuron models derived from these spiking phasors can perform the requisite vector operations to implement an FHRR. We demonstrate the power and versatility of our networks in several foundational problem domains, including spatial memory, function representation, and memory (i.e. signal delay), all on a substrate of spiking neurons. 
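
To make the FHRR encoding concrete, here is a brief non-spiking sketch: each component is a unit-magnitude complex number, binding multiplies components (so phases add), and each phase can be read off as a spike time within a cycle of assumed period T. The dimensionality and cycle period are illustrative; the talk's spiking neuron models are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 2048   # hypervector dimensionality (illustrative)
T = 0.01   # assumed cycle period in seconds (illustrative)

def random_fhrr():
    """FHRR hypervector: unit-magnitude complex numbers with random phases."""
    return np.exp(1j * rng.uniform(-np.pi, np.pi, size=D))

def bind(a, b):
    """FHRR binding: element-wise complex multiplication (phases add)."""
    return a * b

def unbind(a, b):
    """Unbinding: multiply by the complex conjugate (phases subtract)."""
    return a * np.conj(b)

def to_spike_times(v):
    """Map each component's phase to a spike time within one cycle of length T."""
    return (np.angle(v) + np.pi) / (2 * np.pi) * T

def sim(a, b):
    """Real part of the normalized inner product."""
    return float(np.real(np.vdot(a, b))) / D

x, y = random_fhrr(), random_fhrr()
xy = bind(x, y)
print(sim(unbind(xy, y), x))   # ~1.0: FHRR binding is exactly invertible
print(to_spike_times(xy)[:5])  # first five spike times within a cycle
```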

Lightning talk (15/6 13:30). Kylie Huch. Superconducting Hyperdimensional Associative Memory for Scalable Machine Learning

Abstract: We propose a generalized architecture for the first rapid-single-flux-quantum (RSFQ) associative memory circuit. The circuit employs hyperdimensional computing (HDC), a machine learning (ML) paradigm utilizing vectors with dimensionality in the thousands to represent information. HDC designs have small memory footprints, simple computations, and simple training algorithms compared to superconducting neural network accelerators (SNNAs), making them a better option for scalable SFQ ML solutions. The proposed superconducting HDC (SHDC) circuit uses entirely on-chip RSFQ memory that is tightly integrated with logic, operates at 33.3 GHz, is applicable to general ML tasks, and is manufacturable at practically useful scales given current SFQ fabrication limits. Tailored to a language recognition task, SHDC consists of ∼2–20 M Josephson junctions (JJs) and consumes up to three times less power than an analogous CMOS HDC circuit while achieving 78–84% higher throughput. SHDC is capable of outperforming the state-of-the-art RSFQ SNNA, SuperNPU, by 48–99% for all benchmark NN architectures tested while occupying up to 90% less area and consuming up to nine times less power. To the best of the authors’ knowledge, SHDC is currently the only superconducting ML approach feasible at practically useful scales for real-world ML tasks and capable of online learning.
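
The circuit itself is hardware, but the associative-memory operation it accelerates can be sketched in software: store one prototype hypervector per class and answer queries by nearest Hamming distance. The dimensionality, noise level, and language labels below are illustrative assumptions, not figures from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
D = 10_000  # hypervector dimensionality (illustrative)

def random_binary_hv():
    return rng.integers(0, 2, size=D, dtype=np.uint8)

class AssociativeMemory:
    """Software analogue of an HDC associative memory: one prototype hypervector
    per class, queried by nearest Hamming distance."""
    def __init__(self):
        self.labels, self.prototypes = [], []

    def store(self, label, hv):
        self.labels.append(label)
        self.prototypes.append(hv)

    def query(self, hv):
        dists = [np.count_nonzero(hv ^ p) for p in self.prototypes]
        return self.labels[int(np.argmin(dists))]

# Illustrative use: one prototype per language, queried with a noisy copy.
mem = AssociativeMemory()
protos = {lang: random_binary_hv() for lang in ["en", "de", "fr"]}
for lang, hv in protos.items():
    mem.store(lang, hv)

noisy = protos["de"].copy()
flip = rng.choice(D, size=D // 4, replace=False)  # flip 25% of the bits
noisy[flip] ^= 1
print(mem.query(noisy))  # 'de': nearest prototype despite heavy noise
```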

Lightning talk (15/6 14:00). Rachel StClair. Hardware accelerator for hyperdimensional computing

Abstract:

Dr Rachel StClair will present Simuli, a fabless design company creating a commercially available accelerator for hyperdimensional computing. Simuli will discuss its accelerator design and application while inviting the community to collaborate on its implementation.

Lightning talk (15/6 14:30). Alexander Serb. Low-Energy Adiabatic Computing

Abstract:

Computing is an integral part of modern life, and the world's demand for compute capacity only keeps growing as ever more complex and powerful algorithms are developed. On the other side of the equation, a community of hardware engineers is working hard to ensure that all this computational demand can continue to be met without damaging power budgets or the environment. In this talk we will discuss a more exotic circuit design technique, adiabatic computing, and how we are progressing towards maturing it at the University of Edinburgh. Luckily, it is especially well suited to massively parallel computation, and that includes neural networks. We hope you enjoy hearing about ‘the hardware perspective’.

Lightning talk (15/6 14:30). Madison Cotteret. Vector Symbolic Finite State Machines in Attractor Neural Networks

Abstract:  Hopfield attractor networks are robust distributed models of human memory. We propose construction rules such that an attractor network may implement an arbitrary finite state machine (FSM), where states and stimuli are represented by high-dimensional random bipolar vectors, and all state transitions are enacted by the attractor network's dynamics. Numerical simulations show the capacity of the model, in terms of the maximum size of implementable FSM, to be linear in the size of the attractor network. We show that the model is robust to imprecise and noisy weights, and so a prime candidate for implementation with high-density but unreliable devices. By endowing attractor networks with the ability to emulate arbitrary FSMs, we propose a plausible path by which FSMs may exist as a distributed computational primitive in biological neural networks. 
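
The following is not the authors' Hopfield attractor construction; it is a minimal sketch of the underlying VSA idea, assuming random bipolar vectors for states and stimuli, a superposition of bound transition triples, and a cyclic shift to mark the next-state role.

```python
import numpy as np

rng = np.random.default_rng(3)
D = 10_000  # hypervector dimensionality (illustrative)

def hv():
    """Random bipolar hypervector."""
    return rng.choice([-1, 1], size=D)

# Codebooks for FSM states and input stimuli.
states = {name: hv() for name in ["S0", "S1", "S2"]}
stimuli = {name: hv() for name in ["a", "b"]}

# Toy transition table: (state, stimulus) -> next state.
table = {("S0", "a"): "S1", ("S0", "b"): "S0",
         ("S1", "a"): "S2", ("S1", "b"): "S0",
         ("S2", "a"): "S2", ("S2", "b"): "S1"}

# Encode the whole FSM as one superposition; the next state is marked with a
# cyclic shift (np.roll) so it cannot be confused with the current state or stimulus.
T = np.sum([states[s] * stimuli[x] * np.roll(states[n], 1)
            for (s, x), n in table.items()], axis=0)

def step(state, stim):
    """Unbind the (state, stimulus) pair, undo the shift, clean up against the codebook."""
    noisy_next = np.roll(T * states[state] * stimuli[stim], -1)
    return max(states, key=lambda name: noisy_next @ states[name])

print(step("S0", "a"))  # 'S1'
print(step("S2", "b"))  # 'S1'
```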

Lightning talk (15/6 15:00). Kenny Schlegel. Encoding time series data using HDC

Abstract:

In this talk I will give an overview of our work on using HDC to incorporate explicit knowledge into time series analysis. For example, we were able to improve an existing time series classification model by including global time as an HDC timestamp in the encoding mechanism. 

In addition to this approach, called HDC-MiniRocket, I will also give a brief overview of some ongoing work. For example, how to find, based on HDC superposition, the optimal hyper-parameters that affect the graded similarity of timestamps in HDC time coding. Another ongoing work relates to a specific application of time series modelling in the context of Advanced Driver Assistance Systems (ADAS), where we can use HDC to encode scenarios as a temporal series of scenes. The goal of such a method is to find known scenarios in data streams. In doing so, we can use HDC to efficiently encode not only the temporal dependencies but also the spatial relationships between objects.
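
One common way to obtain graded similarity between timestamps is fractional power encoding, sketched below. Whether this matches HDC-MiniRocket's exact mechanism is not claimed here; the 'bandwidth' parameter is an illustrative stand-in for the kind of hyper-parameter whose tuning the talk discusses.

```python
import numpy as np

rng = np.random.default_rng(4)
D = 4096                                          # hypervector dimensionality (illustrative)
base_phases = rng.uniform(-np.pi, np.pi, size=D)  # one fixed random base vector

def timestamp_hv(t, bandwidth=1.0):
    """Fractional power encoding of a scalar time t: similarity between two
    timestamps decays smoothly as |t1 - t2| grows, at a rate set by 'bandwidth'."""
    return np.exp(1j * base_phases * t / bandwidth)

def sim(a, b):
    return float(np.real(np.vdot(a, b))) / D

t0 = timestamp_hv(0.0)
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(t, round(sim(t0, timestamp_hv(t)), 3))
# With uniform random base phases the similarity kernel is sinc-shaped: nearby
# timestamps stay similar while distant ones become near-orthogonal.
```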

WIP talks (16/6 13:30).

Tom Glover. Evolutionary algorithm optimisation of cellular automata for vector expansion.

Adam Vandervorst. Programming library for binary hyperdimensional computing.

Dilantha Haputhanthrige. Sparse Reservoir Computing.

Sachin Kahawala. VSA for manifold learning.

Lightning talk (16/6 14:15). Laura Smets. Training a Hyperdimensional Computing Classifier using a Threshold on its Confidence

Abstract:

Hyperdimensional computing (HDC) has become popular for lightweight and energy-efficient machine learning, suitable for wearable Internet-of-Things (IoT) devices and near-sensor or on-device processing. HDC is computationally less complex than traditional deep learning algorithms and achieves moderate to good classification performance. This article proposes to extend the training procedure in HDC by taking into account not only wrongly classified samples, but also samples that are correctly classified by the HDC model but with low confidence. As such, a confidence threshold is introduced that can be tuned for each dataset to achieve the best classification accuracy. The proposed training procedure is tested on the UCIHAR, CTG, ISOLET and HAND datasets, for which the performance consistently improves compared to the baseline across a range of confidence threshold values. The extended training procedure also results in a shift towards higher confidence values for the correctly classified samples, making the classifier not only more accurate but also more confident about its predictions.
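
A schematic, hypothetical rendering of the described training extension: class hypervectors are updated not only for misclassified samples but also for correctly classified samples whose confidence falls below a threshold. The margin-based confidence and the additive update rule below are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

def train_with_confidence_threshold(class_hvs, samples, labels, threshold, lr=1.0):
    """One adaptive training pass (sketch): update class hypervectors for samples
    that are misclassified, or correctly classified but with low confidence.
    'Confidence' is taken here as the margin between the best and second-best
    cosine similarity; the paper's exact definition and update rule may differ."""
    for hv, y in zip(samples, labels):
        sims = np.array([hv @ c / (np.linalg.norm(hv) * np.linalg.norm(c) + 1e-12)
                         for c in class_hvs])
        pred = int(np.argmax(sims))
        margin = sims[pred] - np.partition(sims, -2)[-2]  # best minus runner-up
        if pred != y:                    # classic mistake-driven HDC update
            class_hvs[y] += lr * hv
            class_hvs[pred] -= lr * hv
        elif margin < threshold:         # correct, but below the confidence threshold
            class_hvs[y] += lr * hv
    return class_hvs

# Usage sketch (hypothetical data): encoded samples and integer labels are assumed
# to exist already; class hypervectors start at zero and are refined per pass.
# class_hvs = [np.zeros(D) for _ in range(num_classes)]
# class_hvs = train_with_confidence_threshold(class_hvs, encoded_train, y_train, threshold=0.05)
```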

Lightning talk (16/6 14:45 - 15:15). Christopher Kymn. Efficient visual scene inference with resonator networks and sparse coding

Abstract:

Factorization is a central problem for understanding visual scenes: examples include separating the effects of form from motion, and of lighting from surface reflectance. The resonator network is an HDC/VSA-based algorithm for performing factorization with distributed representations, with strong performance relative to gradient-based methods. However, an unsolved problem is how to learn useful representations of visual scenes as input for the resonator.

In this talk, we propose methods for integrating resonator networks with the latent representations produced by sparse coding, a well-known unsupervised learning framework for signal representation. We show how this integration helps with capacity limits of distributed representations and reduces collisions in the combinatorial search space. Conversely, we also show that the resonator network can perform inference for so-called Lie Group sparse coding, since, remarkably, they share the same generative model. Time permitting, we’ll briefly discuss how the proposed scheme maps onto neural circuits of the visual cortex.
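
For readers unfamiliar with resonator networks, here is a minimal sketch of the basic iteration for factorizing a three-way bound bipolar vector against fixed codebooks; the sparse-coding integration and Lie Group model from the talk are not reproduced. The dimensionality and codebook sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
D, M = 4096, 11  # dimensionality and codebook size per factor (illustrative)

# Three codebooks of random bipolar hypervectors, one per factor.
X, Y, Z = (rng.choice([-1, 1], size=(M, D)) for _ in range(3))

# Composite vector to factorize: element-wise product of one item from each codebook.
ix, iy, iz = 3, 7, 9
s = X[ix] * Y[iy] * Z[iz]

def cleanup(v, codebook):
    """One resonator cleanup step: project onto the codebook and re-binarize."""
    return np.sign(codebook.T @ (codebook @ v))

# Initialize each estimate as the superposition of its entire codebook.
x_hat, y_hat, z_hat = (np.sign(C.sum(axis=0)) for C in (X, Y, Z))

for _ in range(50):  # iterate the coupled estimates until they settle
    x_hat = cleanup(s * y_hat * z_hat, X)
    y_hat = cleanup(s * x_hat * z_hat, Y)
    z_hat = cleanup(s * x_hat * y_hat, Z)

# Read out the winning index in each codebook (expected: 3 7 9).
print(int(np.argmax(X @ x_hat)), int(np.argmax(Y @ y_hat)), int(np.argmax(Z @ z_hat)))
```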