Emerging Memory Devices for Unconventional Computing
The scaling of CMOS feature sizes down to atomic dimensions has resulted in high device leakage and performance instability. Moreover, data-intensive tasks in conventional von Neumann computing systems incur significant latency, bandwidth, and energy costs due to the memory-wall problem. Unconventional computing architectures such as in-memory computing, neuromorphic computing, stochastic/probabilistic/approximate computing, and quantum computing are promising solutions to these issues. The development of emerging memory devices such as spintronic devices, resistive random-access memory (ReRAM), phase-change memory (PCM), and ferroelectric field-effect transistors (FeFETs) has satisfied the ever-increasing requirements of low energy, high scalability, good CMOS compatibility, high reliability, and high performance for these unconventional computing systems. Increasing demand for unconventional computing paradigms has driven the growth of the global emerging memory market. According to Verified Market Research, the market for emerging non-volatile memory technologies is projected to reach $138.48 billion by 2028, growing at a compound annual growth rate (CAGR) of 8.9% from 2021 to 2028. Despite substantial progress, crucial challenges remain before such emerging devices and computing systems can replace the traditional von Neumann architecture.
The goal of this summer school is to provide participants with academic exposure to recent developments, challenges, and future trends in the field of unconventional computing with emerging devices. Its main objective is to foster communication among physicists, engineers from academia and industry, and researchers working on nanodevice-based computing systems. During the event, participants and experts will exchange ideas that will inspire new applications in this exciting field. Through this summer school, participants will have the opportunity to collaborate on research, network with colleagues, and participate in future conferences organized by IEEE NTC. The IEEE NTC Student Chapter, IIT Roorkee, will help coordinate and publicize activities occurring in the region. A membership drive will be conducted during the summer school to enhance the sense of inclusivity and engagement among professionals, researchers, and enthusiasts in nanotechnology and nanoscience. This will help establish stronger connections among researchers, facilitate interdisciplinary collaborations, and promote the exchange of ideas, ultimately advancing the field of nanotechnology and driving innovation forward.
RESEARCH PROJECTS:
1. Image processing using spin devices: Image pre-processing techniques play an important role in the analysis of visual data. Convolution serves as the basic computational primitive for associative computing tasks ranging from edge detection to image matching. The primary objective of this project is the design and implementation of an efficient image-processing architecture using spin devices, covering edge detection, filtering, segmentation, and image restoration for binary, grayscale, and RGB image formats. This will be extended to object-recognition applications such as pupil detection. Specific aims include: lowering the power consumption (read/write/reset) of spin-based devices during image-processing operations; SOT/DW-device-based feature extraction for edge detection using a crossbar array; designing the IP core of a video edge-detection system on the EDK, with the hardware circuit realized in Xilinx Vivado and simulated in ModelSim; evaluating the energy consumption of edge-detection algorithms using spin-based devices and comparing it with conventional methods; implementing edge-detection algorithms using hybrid spin networks that exploit swarm intelligence through memristive-network concepts; and studying the impact of various spin-device parameters on the efficacy of the implementation.
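The crossbar-based edge detection described above can be illustrated with a small, idealized sketch (an assumption for exposition, not a device-level model): each flattened image patch is applied as a voltage vector, Sobel kernel weights are stored as signed column conductances, and each output pixel is one analog multiply-accumulate via Ohm's and Kirchhoff's laws.

```python
import numpy as np

# Illustrative sketch: edge detection as a crossbar matrix-vector product.
# Each 3x3 patch is flattened into a "voltage" vector; the Sobel kernels are
# stored as signed "conductances" in two crossbar columns, so each output
# pixel is a single analog multiply-accumulate.

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def crossbar_edge_magnitude(img):
    """Gradient magnitude via two crossbar columns (Gx, Gy) per patch."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    # Crossbar columns: flattened kernel weights act as conductances.
    g = np.stack([SOBEL_X.ravel(), SOBEL_Y.ravel()], axis=1)  # shape (9, 2)
    for i in range(h - 2):
        for j in range(w - 2):
            v = img[i:i + 3, j:j + 3].ravel()   # input voltages (one patch)
            gx, gy = v @ g                      # summed column currents
            out[i, j] = np.hypot(gx, gy)
    return out
```

In a hardware realization, the two kernel columns would be programmed once and all patches streamed through the same array; the Python loops stand in for that streaming.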
2. Efficient spin-based neuromorphic computing system (NCS) design: Image classification tasks using spin devices can achieve state-of-the-art accuracy at very low energy consumption. This project targets a compact, energy-efficient magnetic neuron that can be directly cascaded with a spin-based crossbar array of synapses, eliminating additional interfacing circuitry, together with spintronic neurons that compute multiple critical convolutional neural network functionalities simultaneously and in parallel, saving space and time. A device-to-system-level behavioral model will be developed to underscore the applicability of the system to image recognition applications. The third objective is achieving the desired accuracy with minimal loss in F-score; such improved performance can be extended to more complex motion detection. Algorithm-level performance will be evaluated with respect to device-level variations. Further aims include: design and simulation of a CNN accelerator using spin devices; implementation of a high-density cross-point spintronic synapse array with improved synaptic characteristics for EEG and other pattern recognition; a LUT-based simulation framework for achieving optimum performance in image processing and machine learning applications; training a spin-based (SOT/DW) XNOR network for optimized, energy-efficient image processing; effective software implementation of complex neuromorphic networks, with device variability sufficiently low to allow operation of integrated neural networks; design of a hybrid memristive-device framework for machine learning tasks; and co-design for stochastic computing that exploits the stochastic switching characteristics of nanoscale non-volatile devices together with matching algorithms.
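The cross-point synapse array above can be sketched with a minimal, idealized model (an assumption for illustration, not a device-accurate simulator): each signed synaptic weight is stored as a differential conductance pair, and a layer's outputs are the column currents of the array.

```python
import numpy as np

# Minimal idealized model of a crossbar synapse array: a signed weight w is
# stored as a differential pair (g_plus, g_minus) of non-negative device
# conductances, and the layer output currents are I = V @ (G_plus - G_minus).

G_MAX = 1.0  # normalized maximum device conductance (illustrative value)

def weights_to_conductances(w):
    """Split signed weights into two non-negative conductance matrices."""
    scale = G_MAX / np.max(np.abs(w))
    g_plus = np.clip(w, 0, None) * scale     # positive weights
    g_minus = np.clip(-w, 0, None) * scale   # negative weights
    return g_plus, g_minus, scale

def crossbar_layer(v, w):
    """Matrix-vector product realized as differential column currents."""
    g_plus, g_minus, scale = weights_to_conductances(w)
    i_out = v @ (g_plus - g_minus)   # Kirchhoff current summation per column
    return i_out / scale             # rescale currents back to weight units
```

In this ideal (noise-free, linear) limit the array exactly reproduces the matrix-vector product; device variability and conductance quantization would perturb it, which is precisely what the behavioral model in this project is meant to capture.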
3. Performance optimization for spin-based hardware accelerators: This project targets the hardware implementation of efficient neural network architectures for image classification. Convolution serves as the basic computational operation for various image-processing tasks, but CMOS implementations of such computations face area and energy bottlenecks due to the large number of multiplication and addition operations involved. An ultra-low-power, compact hybrid spintronic-CMOS design for the convolution computing unit is the second objective of this work, along with the hardware mapping of a feed-forward DNN comprising multiple hidden layers of neurons connected in an all-to-all fashion to an output layer. Spintronic synapses sit at each cross-point of the array, and the conductance of each device encodes the value of the corresponding synaptic weight. The backpropagation and feed-forward algorithms trade off weight precision against accuracy; the aim is to realize backpropagation and weight-update schemes in hardware for a generalized crossbar array of spin devices. Further objectives include: a low-power, scalable spiking-network accelerator for edge detection, utilizing a potential-divider arrangement of spin devices to indicate the difference in frequencies between spike trains; a complete framework for training and inference using all-spin non-spiking neural networks; contributions toward the realization of future spin-based ANN applications using SOT-DMI-based CIDWM-MTJs; a micromagnetic-framework implementation of a spin-orbit-torque-driven domain-wall synaptic device for on-chip learning of FCNNs; and ex-situ training approaches for spin-device crossbars that account for sneak paths and the stochasticity inherent in device switching.
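The crossbar weight-update scheme mentioned above can be sketched as an in-situ outer-product update (a hedged illustration; the step size and quantization are assumptions, not measured device values): the rank-1 backpropagation update is applied in one pulse-coincidence step across all cross-points, with conductance changes limited to discrete increments.

```python
import numpy as np

# Sketch of an in-situ outer-product weight update on a crossbar: the
# backprop update dW = -lr * outer(x, delta) is applied in parallel over all
# cross-points. Real devices change conductance in discrete steps, modeled
# here by rounding to a quantization step g_step (illustrative value).

def outer_product_update(w, x, delta, lr=0.1, g_step=0.01):
    """One backpropagation weight update, quantized to the device step.

    w:     current weight (conductance) matrix, shape (n_in, n_out)
    x:     layer input activations, shape (n_in,)
    delta: backpropagated error at the layer output, shape (n_out,)
    """
    dw = -lr * np.outer(x, delta)              # ideal analog update
    dw_quantized = np.round(dw / g_step) * g_step  # finite-resolution device
    return w + dw_quantized
```

The quantization step is where weight precision trades off against accuracy, as noted above: a coarser g_step means cheaper devices but larger deviation from the ideal gradient step.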
4. Design and implementation of spiking neural networks: For recognition and classification tasks, biological systems are far superior in terms of performance and energy efficiency. The building blocks of all such biological systems are neurons and synapses. Various neuron models and synaptic learning algorithms can be mapped to spin devices by exploiting their noisy device characteristics. The aim is to design brain-inspired stochastic models using spin devices and to develop learning algorithms for spiking neural networks. Other targeted features include spike-count-based homeostasis and adaptive synaptic plasticity (ASP) for efficient feature learning.
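The mapping of noisy device characteristics to a neuron model can be illustrated with a toy sketch (an assumption for exposition, not a calibrated device model): a leaky integrate-and-fire neuron whose firing probability follows a sigmoid of the membrane potential, mimicking the thermally activated stochastic switching of an MTJ free layer.

```python
import numpy as np

# Toy stochastic spiking neuron: the "membrane potential" integrates and
# leaks, and the probability of firing follows a sigmoid of the potential,
# analogous to the stochastic switching probability of an MTJ near its
# threshold. Parameter values are illustrative assumptions.

def stochastic_lif(inputs, leak=0.9, gain=4.0, threshold=1.0, rng=None):
    """Return a binary spike train for a sequence of input currents."""
    rng = rng or np.random.default_rng(0)
    v, spikes = 0.0, []
    for i in inputs:
        v = leak * v + i                                    # leaky integration
        p = 1.0 / (1.0 + np.exp(-gain * (v - threshold)))   # switching prob.
        fired = rng.random() < p
        spikes.append(int(fired))
        if fired:
            v = 0.0                                         # reset on switch
    return spikes
```

Stronger inputs raise the switching probability and hence the firing rate, which is the basic mechanism such brain-inspired stochastic models exploit.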
5. Framework design for spin-device-based bio-plausible systems: The final targeted work includes a stochastic STDP algorithm and its circuit-level implementation for Hebbian/anti-Hebbian learning, together with a forced-learning methodology for achieving efficient synaptic plasticity in such stochastic SNNs. Synaptic plasticity is achieved through stochastic switching of the MTJ conductance states, based on the temporal correlation between the spiking activities of the interconnecting neurons. Long-term and short-term synaptic plasticity will be exploited to achieve learning efficiency for supervised, unsupervised, and partially supervised learning approaches. This will be used for a complete paradigm design for a spin-based spiking CNN architecture and its applications in edge computing devices.
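The stochastic STDP rule described above can be sketched as follows (an assumed exponential timing window with illustrative parameters): instead of an analog weight change, the binary MTJ synapse switches with a probability that decays with the pre/post spike time difference, potentiating for pre-before-post pairs and depressing otherwise.

```python
import numpy as np

# Sketch of stochastic STDP for a binary MTJ synapse: the device switches
# with a probability set by the timing difference dt = t_post - t_pre.
# p_max and tau are illustrative assumptions, not measured device values.

def stdp_switch_probability(dt, p_max=0.8, tau=20.0):
    """Switching probability for a spike pair separated by dt (in ms)."""
    if dt > 0:    # pre before post: potentiation window
        return p_max * np.exp(-dt / tau)
    elif dt < 0:  # post before pre: depression window
        return p_max * np.exp(dt / tau)
    return 0.0

def stochastic_stdp_step(weight, dt, rng):
    """Flip the binary weight (0/1) with the timing-dependent probability."""
    if rng.random() < stdp_switch_probability(dt):
        return 1 if dt > 0 else 0   # potentiate to high / depress to low state
    return weight                   # no switching event this pair
```

Averaged over many spike pairs, the expected weight change reproduces the classic exponential STDP curve, which is how stochastic binary switching can stand in for analog plasticity.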