Abstract: Neuroscience and psychology research has shown that, like electronic memory, biological memory in humans and animals comes in many shapes and forms.
In this talk we discuss some aspects of biological memory that have not yet received much consideration in AI models, but could perhaps make AI models more powerful and efficient. We will discuss both synaptic models and network memory phenomena. We will also discuss the energy costs of biological memory formation and how biology might have found ways to minimize the energy needed for memory formation.
Abstract: With billions of sensors making their way into smart devices each year, the need for energy-efficient AI has never been more pressing. The inherent sparsity, event-driven nature, and temporal processing abilities of spiking neural networks make neuromorphic computing a promising solution to the challenges of AI at the edge. Yet, the key to the success of this revolutionary computing concept lies elsewhere: the rest of the system.
This talk will explore the case for neuromorphic computing at the sensor edge, and examine the design considerations that went into creating Innatera's Spiking Neural Processor Pulsar - the world's first neuromorphic microcontroller that brings truly brain-like AI to sensors. Using real-world applications, the talk will dive into the impact of Pulsar's neuromorphic technology on the battery life, performance, and architecture of modern smart devices, and explore future avenues for research.
Abstract: Throughout history, humans have harnessed matter to perform tasks beyond their biological limits. Initially, tools relied solely on shape and structure for functionality. We progressed to responsive matter, which reacts to external stimuli, and are now challenged by adaptive matter, which can alter its response based on environmental conditions. A major scientific goal is creating matter that can learn, where behavior depends on both the present and its history. Such matter would have long-term memory, enabling autonomous interaction with its environment and self-regulation of its actions. We may call such matter ‘intelligent’.
Here, we introduce a number of experiments towards ‘intelligent’ disordered nanomaterial systems, in which we make use of “material learning” to realize functionality. We have earlier shown that a ‘designless’ network of gold nanoparticles can be configured into Boolean logic gates using artificial evolution. We later demonstrated that this principle is generic and can be transferred to other material systems. By exploiting the nonlinearity of a nanoscale network of dopants in silicon, referred to as a dopant network processing unit (DNPU), we can significantly facilitate handwritten digit classification. An alternative material-learning approach first maps our DNPU onto a deep-neural-network model, which allows standard machine-learning techniques to be applied in finding functionality. We can also optimize DNPUs by using gradient descent in materia, based on experimental gradient extraction. Finally, we show that our devices are not only suitable for solving static problems but can also be applied to highly efficient real-time processing of temporal signals at room temperature.