Vector Symbolic Architectures (VSAs) are a family of connectionist computational models. The best-known VSA frameworks are Holographic Reduced Representations, Binary Spatter Codes, and Multiply Add Permute. VSAs are one of the most promising paradigms for algorithmic approximation and belong to the category of computing with randomness. Though inspired by observations of key aspects of human memory, perception, and cognition, VSAs are well interpretable through the mathematical properties of high-dimensional spaces, such as the concentration of measure phenomenon.
It has been advocated that VSAs are a suitable candidate for implementing the functionality of general AI. An exciting property of VSAs is their strategic position on the map of AI research: on the one hand, they can implement purely symbolic operations, while on the other their distributed representations fit naturally with artificial neural models. Thus, VSAs have the potential to take the best of both paradigms and bridge the gap between the symbolic and ANN paradigms, which makes them especially appealing for building novel AI systems.
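As a minimal sketch of how symbolic operations arise from distributed representations, the following toy example uses the Multiply Add Permute style: binding by elementwise multiplication, bundling by addition, and retrieval by cosine similarity against a codebook. The role/filler names and the dimensionality are illustrative assumptions, not taken from any particular VSA implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # high dimensionality is what makes the crosstalk noise tolerable

def rand_hv():
    """Random bipolar hypervector, serving as an atomic symbol."""
    return rng.choice([-1, 1], size=D)

def cos(a, b):
    """Cosine similarity between two hypervectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Atomic symbols for roles and fillers (names are hypothetical).
colour, shape = rand_hv(), rand_hv()
red, circle = rand_hv(), rand_hv()

# Bind each role to its filler by elementwise multiplication,
# then bundle the pairs into a single record by addition.
record = colour * red + shape * circle

# Unbind: multiplication is self-inverse for bipolar vectors, so
# record * colour equals red plus noise from the other bound pair.
noisy = record * colour
codebook = {"red": red, "circle": circle, "colour": colour, "shape": shape}
best = max(codebook, key=lambda k: cos(noisy, codebook[k]))
```

Querying the record for the `colour` role recovers `red` as the nearest codebook entry, despite the record being a single fixed-width vector holding both pairs.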
Recent developments in the area of hyperdimensional computing and VSAs hold great promise for practical applications ranging from robotics to wireless communication systems. The mission of this workshop is to gather the most prominent contributors to the area of VSAs and consolidate the research community. The talks in this workshop are by invitation only, but the workshop is open to all people interested in the field. For further enquiries, please contact the organisers.
Who should attend
- Data science practitioners
- Academics interested in alternative computing models
- Developers of alternative computing hardware (neuromorphic processors and memory)
- Companies searching for energy-efficient ways of implementing machine learning / AI functionality