Research

Threshold Logic and Artificial Neuron Design

Motivation: Digital design using conventional CMOS technology has been optimized over four decades to improve its performance, power, and area (PPA), leaving little to no room for further improvement. Therefore, the investigation of non-CMOS technologies and the development of radically different computing primitives are warranted to design future digital systems with high energy efficiency and high throughput.

We investigate threshold logic as an alternative to Boolean logic, and its hardware implementation in the form of an Artificial Neuron (AN), for the design of digital ASICs and custom hardware accelerators.
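For context, a threshold gate computes a linear threshold function: it outputs 1 exactly when a weighted sum of its binary inputs reaches a threshold. The short Python sketch below illustrates this standard definition; the function name, weights, and threshold are illustrative choices, not a model of the specific flash-based or weight-tunable neuron circuits described in the publications below.

```python
# Minimal sketch of a threshold (linear threshold) gate, the Boolean
# function computed by an artificial neuron with binary inputs.
# Weights and threshold here are illustrative, not taken from the papers.

def threshold_gate(inputs, weights, threshold):
    """Return 1 if the weighted sum of the binary inputs meets the threshold, else 0."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

# Example: with unit weights and threshold 2, a 3-input threshold gate
# implements the majority function MAJ(a, b, c).
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            print(a, b, c, "->", threshold_gate((a, b, c), (1, 1, 1), 2))
```

By choosing the weights and threshold appropriately, a single such gate can realize functions like 3-input majority, AND, or OR that would otherwise require several Boolean gates, which is what makes it a candidate replacement for clusters of conventional standard cells.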

Related Publications:

[1] Ankit Wagle, Gian Singh, Jinghua Yang, Sunil Khatri, and Sarma Vrudhula. “Threshold Logic in a Flash.” In 2019 IEEE 37th International Conference on Computer Design (ICCD), 550–58. Abu Dhabi, United Arab Emirates: IEEE, 2019.

[2] Ankit Wagle, Gian Singh, Sunil Khatri, and Sarma Vrudhula. “A Novel ASIC Design Flow Using Weight-Tunable Binary Neurons as Standard Cells.” IEEE Transactions on Circuits and Systems I: Regular Papers 69, no. 7 (July 2022): 2968–81.

Processing in Memory (PIM)

Motivation: The latency and energy consumption of data-intensive or data-centric applications are dominated by the movement of data between the processor and memory. In modern CPU/GPU systems, about 60% of the total energy is consumed by data movement over the limited-bandwidth channel between the processor and memory. Processing in Memory (PIM) is an effective technique to overcome this memory bottleneck and provide an energy-efficient platform for executing data-intensive applications.

Related Publications: 

[1] Gian Singh, Ankit Wagle, Sarma Vrudhula, and Sunil Khatri. “CIDAN: Computing in DRAM with Artificial Neurons.” In 2021 IEEE 39th International Conference on Computer Design (ICCD), 349–56. Storrs, CT, USA: IEEE, 2021. [Video]

[2] Gian Singh, Ankit Wagle, Sunil Khatri, and Sarma Vrudhula. “CIDAN-XE: Computing in DRAM with Artificial Neurons.” Frontiers in Electronics 3 (February 18, 2022): 834146.

[3] Ayushi Dube, Ankit Wagle, Gian Singh, and Sarma Vrudhula. “Tunable Precision Control for Approximate Image Filtering in an In-Memory Architecture with Embedded Neurons.” In Proceedings of the 41st IEEE/ACM International Conference on Computer-Aided Design, 1–9. San Diego, California: ACM, 2022.

[4] Gian Singh, Sanmukh R. Kuppannagari, and Sarma Vrudhula. “PARAG: PIM Architecture for Real-time Acceleration of GCNs.” In 30th IEEE International Conference on High Performance Computing, Data, and Analytics (HiPC). Goa, India: IEEE, 2023.

[5] Gian Singh and Sarma Vrudhula. “A DRAM-based Near-Memory Architecture for Accelerated and Energy-Efficient Execution of Transformers.” In 34th ACM Great Lakes Symposium on VLSI (GLSVLSI), Tampa Bay Area, FL, USA, June 12–14, 2024.

[6] Gian Singh, Ayushi Dube, and Sarma Vrudhula. “Energy-Efficient and Low-Latency Computation of Transcendental Functions in a Precision-Tunable PIM Architecture.” In IEEE Computer Society Annual Symposium on VLSI (ISVLSI), Knoxville, TN, USA, July 1–3, 2024. (Accepted paper)

[7] Gian Singh, Ayushi Dube, and Sarma Vrudhula. “A High Throughput, Energy-Efficient Architecture for Variable Precision Computing in DRAM.” In IFIP/IEEE International Conference on Very Large Scale Integration (VLSI-SoC), 2024. (Accepted paper)


CNN Hardware Accelerator

Motivation: Deep neural networks (DNNs) have become the dominant framework for machine learning and artificial intelligence applications. The significant energy consumption and the massive environmental impact of DNN training and inference may soon overshadow their societal benefits. Deploying these networks on battery-constrained edge devices requires a highly energy-efficient platform.

Related Publications:

[1] Ankit Wagle, Gian Singh, Sunil Khatri, and Sarma Vrudhula. “An ASIC Accelerator for QNN with Variable Precision and Tunable Energy-Efficiency.”