Research Highlights

In-memory Implementation of On-chip Trainable and Scalable ANN for AI/ML Applications

Abhash Kumar, Jawar Singh, Sai Manohar Beeraka, and Bharat Gupta

Abstract—Processors based on the traditional von Neumann architecture are inefficient in terms of energy and throughput because their processing and memory units are separate, a bottleneck known as the memory wall. The memory wall problem is further exacerbated when massive parallelism and frequent data movement between processing and memory units are required for real-time implementation of artificial neural networks (ANNs), which enable many intelligent applications. One of the most promising approaches to address the memory wall problem is to carry out computations inside the memory core itself, which enhances memory bandwidth and energy efficiency for extensive computations. This paper presents an in-memory computing architecture for ANNs enabling artificial intelligence (AI) and machine learning (ML) applications. The proposed architecture utilizes a deep in-memory architecture based on a standard six-transistor (6T) static random access memory (SRAM) core for the implementation of a multi-layered perceptron. Our novel on-chip training and inference in-memory architecture reduces energy cost and enhances throughput by simultaneously accessing multiple rows of the SRAM array per precharge cycle and eliminating frequent data accesses. The proposed architecture realizes backpropagation, the keystone of network training, using newly proposed building blocks for weight update, analog multiplication, error calculation, signed analog-to-digital conversion, and other necessary signal control units. The proposed architecture was trained and tested on the IRIS dataset and is ≈46× more energy efficient per MAC (multiply and accumulate) operation than earlier classifiers.
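The multi-layered perceptron training loop that such an architecture maps onto the SRAM array can be sketched in plain software. The following is a minimal NumPy illustration of forward MAC operations and backpropagation, not the authors' circuit-level implementation; the layer sizes, learning rate, and the toy two-class dataset (standing in for IRIS, with the same four features) are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class dataset standing in for IRIS (4 features per sample).
X = rng.normal(size=(100, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Single-hidden-layer perceptron: 4 -> 8 -> 1 (sizes are arbitrary).
W1 = rng.normal(scale=0.5, size=(4, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))
lr = 0.5

for epoch in range(1000):
    # Forward pass: each dot product is a batch of MAC operations,
    # which the in-memory architecture evaluates inside the SRAM core.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # Backpropagation: error calculation and weight-update terms.
    d_out = out - y                        # output-layer error signal
    d_h = (d_out @ W2.T) * h * (1 - h)     # hidden-layer error signal

    W2 -= lr * h.T @ d_out / len(X)        # weight update
    W1 -= lr * X.T @ d_h / len(X)

accuracy = float(((out > 0.5) == y).mean())
```

The split into a forward MAC phase, an error-calculation phase, and a weight-update phase mirrors the building blocks listed in the abstract (analog multiplication, error calculation, weight update), which the paper realizes as analog in-memory circuits rather than software.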

Simulation-Based Ultralow Energy and High-Speed LIF Neuron Using Silicon Bipolar Impact Ionization MOSFET for Spiking Neural Networks

Alok Kumar Kamal and Jawar Singh, Senior Member, IEEE

Abstract—The silicon bipolar impact ionization MOSFET offers potential for realizing a leaky integrate-and-fire (LIF) silicon neuron owing to the parasitic bipolar junction transistor (BJT) present in its floating body. In this article, we propose an L-shaped gate bipolar impact ionization MOS (L-BIMOS) with a reduced breakdown voltage (VB = 1.68 V) and demonstrate LIF neuron operation based on the positive feedback mechanism of the parasitic BJT. Using 2-D TCAD simulations, we show that the proposed L-BIMOS exhibits a low threshold voltage (0.2 V) for firing a spike, and the minimum energy required to fire a single spike is calculated to be 0.18 pJ, making the proposed device 194 times more energy efficient than the PD-SOI MOSFET silicon neuron and 5 × 10³ times more energy efficient than conventional analog/digital circuit-based neurons. Furthermore, the proposed L-BIMOS silicon neuron exhibits spiking frequencies in the gigahertz range when the drain is biased at VDG = 2.0 V.
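The LIF behavior the L-BIMOS device realizes in silicon follows the standard leaky integrate-and-fire model: the membrane potential integrates input current with a leak, fires when it crosses a threshold, and resets. A minimal software sketch of that model is shown below; it reuses the 0.2 V firing threshold quoted above, while the time step, membrane time constant, resistance, and input current are arbitrary values chosen for illustration, not device parameters from the paper.

```python
import numpy as np

def simulate_lif(i_in, dt=1e-4, tau=0.02, r_m=1.0,
                 v_rest=0.0, v_th=0.2, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron.

    i_in : array of input-current samples (arbitrary units)
    v_th : firing threshold (0.2 V, matching the figure quoted above)
    Returns the membrane-voltage trace and the spike-time indices.
    """
    v = v_rest
    trace, spikes = [], []
    for t, i in enumerate(i_in):
        # Leaky integration: dV/dt = (-(V - V_rest) + R_m * I) / tau
        v += dt * (-(v - v_rest) + r_m * i) / tau
        if v >= v_th:           # threshold crossing -> emit a spike
            spikes.append(t)
            v = v_reset         # reset after firing
        trace.append(v)
    return np.array(trace), spikes

# Constant suprathreshold input produces a regular spike train.
v_trace, spike_times = simulate_lif(np.full(1000, 0.5))
```

In the paper this integrate-leak-fire-reset cycle is produced by the device physics itself (the positive feedback of the parasitic BJT), which is what allows a single transistor structure to replace the multi-component analog/digital neuron circuits the abstract compares against.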