Research
Project 1: University Grant Commission (UGC), Gov. of India
Title: Design Exploration and VLSI Implementation of Deep Neural Network Accelerator with Configurable Architecture
Organizations/Collaborators: Independent Research Fellowship Awardee
Collaboration Duration: July 2017 to June 2023
Achievements and Contribution:
Hardware Optimization: The presented DNN architecture reduces hardware-resource requirements and power consumption while maintaining the computational rate. It employs hardware-reuse strategies and novel computational units, such as the MAC and AF units, significantly reducing resource overhead.
Flexible Design: The proposed architecture supports various precisions, arithmetic formats, and activation functions, offering design flexibility.
System-Level Performance: The architecture achieves better performance through hardware-reuse techniques, such as iterative MAC and AF computation, resource sharing across the neural network, and a layer-multiplexed architecture, saving resources without sacrificing throughput.
Edge Computing Solutions: The embedded system design with lower hardware resources and power consumption is suitable for edge computing. The efficient memory addressing scheme and optimized design parameters contribute to improved performance and power efficiency.
Future Work: Further exploration of near-memory computing techniques, improvements in MAC design, and investigation of data quantization and approximation at CORDIC computing elements can be pursued. Additionally, runtime programmable features for dynamic configuration and adaptation can be considered.
Supporting Material: Publications, Startups, Products, and Solutions for efficient hardware design of DNN accelerator
Project 2: Jožef Stefan Institute, Slovenia
Title: Data multiplexed and hardware reused architecture for efficient deep neural network accelerators
Organizations/Collaborators: Prof. Gregor Papa, Prof. Anton Biasizzo
Collaboration Duration: June 2020 to Nov. 2021
Achievements and Contribution: Potential for edge-AI computing solutions
Reduced hardware-resource requirements and power consumption.
Efficient activation function design for higher accuracy.
Improved memory access scheme and throughput.
Implemented the DNN on hardware and overcame SoC throughput limits.
Supporting Material: Publication 1, Publication 2
Project 3: New York University Abu Dhabi, United Arab Emirates
Title: Memristor-Based Neuromorphic Hardware Leveraging 3D Architecture
Organizations/Collaborators: Dr. Johann Knechtel, Nikhil Rangarajan
Collaboration Duration: Jan. 2022 to July 2022
Achievements and Contribution: Solutions for Neuromorphic computing
SCRAMBLE: A configurable neuromorphic architecture with memristive memory cells for secure weight and activation storage.
Reconfigurable design thwarts model stealing and hardware IP stealing attacks.
Quantified security-performance trade-offs benchmarked against prior art in neuromorphic architectures.
Supporting Material: Publication 1
Project 4: Center for Development of Advanced Computing (C-DAC) & MeitY, Govt. of India
Title: Semi-Autonomous Drones for Terrain Monitoring and Identification of Terrain Modifying Actors
Organizations/Collaborators: Funding received for independent research on product development using the VEGA Processor
Collaboration Duration: Sept. 2020 to Dec. 2021
Achievements and Contribution:
Indian Microprocessor-based Drone: An innovative forest-monitoring solution built on the VEGA Processor, with fault handling for reliable operation in difficult terrain.
Long-Range Monitoring and Communication: The drone supports satellite communication for control over vast distances using NRF modules.
ML/AI Algorithms: Statistical models and ML/AI algorithms enable the controller to identify deforestation-related activities.
Supporting Material: Finalist in All-India competitions, Startup, Product
Project 5: Center for Advancing Electronics Dresden (cfaed), TU Dresden, Germany
Title: Area- and Power-Efficient Hardware Chip Design and Implementation of a Neuromorphic Computing Accelerator Using Emerging Reconfigurable Nanotechnologies
Organizations/Collaborators: Prof. Akash Kumar, Prof. Koen De Bosschere
Collaboration Duration: Nov. 2019 - Jan. 2020
Achievements and Contribution:
We propose a CORDIC-based design of an unsigned/signed computational unit that can compute both the MAC operation and non-linear activation functions.
We demonstrate how CORDIC is configured within RECON to operate in linear or hyperbolic rotation mode to solve arithmetic operations (for MAC) and trigonometric/hyperbolic operations (for AF), respectively.
We optimize the proposed CORDIC-based architecture in terms of area, power, and delay. We further employ a power-gating approach and evaluate it at the 45 nm technology node to demonstrate power savings. We also analyse and discuss the impact of technology scaling on the circuit's physical parameters, such as area, power, and delay.
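The two rotation modes mentioned above can be sketched in software: linear mode drives the angle residue to zero to realize a multiply-accumulate, while hyperbolic mode yields tanh from the cosh/sinh pair. A minimal floating-point model (the actual hardware uses fixed-point shift-add iterations; all names here are illustrative, not the project's RTL):

```python
import math

def cordic_linear_mac(x, z, acc=0.0, n=16):
    # CORDIC linear rotation mode: each step drives z toward 0 while
    # accumulating d * x * 2^-i, so y converges to acc + x*z (for |z| < 2).
    y = acc
    for i in range(n):
        d = 1.0 if z >= 0 else -1.0
        y += d * x * 2.0 ** -i
        z -= d * 2.0 ** -i
    return y

def cordic_tanh(theta, n=16):
    # CORDIC hyperbolic rotation mode with indices 1,2,3,4,4,5,...,13,13,...
    # (indices 4, 13, 40, ... repeated for convergence). Since tanh = y/x,
    # the CORDIC gain cancels and needs no compensation. |theta| <~ 1.1.
    x, y, z = 1.0, 0.0, theta
    i, rep, steps = 1, 4, 0
    while steps < n:
        t = 2.0 ** -i
        a = math.atanh(t)
        for _ in range(2 if i == rep else 1):
            if steps >= n:
                break
            d = 1.0 if z >= 0 else -1.0
            x, y = x + d * y * t, y + d * x * t
            z -= d * a
            steps += 1
        if i == rep:
            rep = 3 * rep + 1
        i += 1
    return y / x
```

Both functions share the same shift-add iteration structure, which is what makes a single hardware datapath reusable for MAC and AF computation.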
Supporting Material: Webpage, Publication 1
Project 6: Technische Universität Dresden (TUD), Germany
Title: Design and Implementation of Approximate Artificial Neural Network Accelerator
Organizations/Collaborators: Prof. Akash Kumar
Collaboration Duration: April 2019 - Sept. 2019
Achievements and Contribution:
A custom IP was designed for a fully connected neural network, and the complete network was implemented on FPGA hardware using an embedded system design approach.
Supporting Material: Webpage, Publication 1
Project 7: NSDCS Lab, IIT Indore, India
Title: Design Exploration and VLSI Implementation of Deep Neural Network Accelerator with Configurable Architecture
Organizations/Collaborators: Prof. Santosh Kumar Vishvakarma
Collaboration Duration: July 2017 to present
Achievements and Contribution:
CORDIC-based DNN Computation: Investigating efficient MAC and AF designs using CORDIC algorithm, addressing hardware complexity, precision, and power.
Scalability and Configurability: Proposed designs offer scalable area and power, computing both MAC and AFs in a single block with configurable activation functions.
Hardware Reuse and Efficiency: CORDIC-based approach maximizes hardware reuse, minimizing unused blocks for improved efficiency in resource utilization.
Non-linear AFs and Performance: Exploring exponential, sigmoid, and tanh AFs for higher gradient values and better weight updates, benefiting fully connected and recurrent neural networks.
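Computing several AFs in one block rests on standard hyperbolic identities: a single core that produces tanh(x/2) can serve tanh and sigmoid outputs. A minimal sketch of that hardware-reuse idea (math.tanh stands in for the shared CORDIC core; this is an illustration, not the project's RTL):

```python
import math

def configurable_af(x, mode):
    # One shared hyperbolic core evaluates t = tanh(x/2); the selected
    # activation is then derived from t by cheap identities.
    t = math.tanh(x / 2.0)                 # output of the shared core
    if mode == "tanh":
        return 2.0 * t / (1.0 + t * t)     # tanh(x) = 2*tanh(x/2) / (1 + tanh^2(x/2))
    if mode == "sigmoid":
        return 0.5 * (1.0 + t)             # sigmoid(x) = (1 + tanh(x/2)) / 2
    raise ValueError("unsupported mode")
```

Because both outputs come from the same core evaluation, switching the activation function at runtime costs only a small amount of post-processing logic.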
Supporting Material: PhD Thesis Work, Publications, Startup incubated at IIT Indore
Project 8: Oriental University, Indore, India
Title: DNN Optimization Using Resource Optimal MAC Architecture and Dynamically Configurable AFs
Organizations/Collaborators: Prof. Dhruva Ghai
Collaboration Duration: July 2022 to present
Achievements and Contribution:
Efficient MAC Unit: Resource and power savings through bias pre-loading and a data-resize function for DNNs.
Compatibility and Results: Reduced resource usage, power consumption, and critical delay in experimental evaluation.
Configurable Activation Function: ROM- and CORDIC-based architectures achieve accurate inference with reduced memory, benefiting IoT applications.
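The bias pre-loading idea above can be shown in a few lines: initializing the accumulator with the bias removes the separate bias-addition stage after the MAC loop (an illustrative sketch with hypothetical names, not the project's RTL):

```python
def mac_bias_preload(weights, inputs, bias):
    # Bias pre-loading: the accumulator starts at the bias value, so the
    # neuron output needs no extra adder stage once the MAC loop finishes.
    acc = bias                      # pre-load instead of post-add
    for w, a in zip(weights, inputs):
        acc += w * a                # multiply-accumulate
    return acc
```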
Supporting Material: Publication 1, Publication 2 (under review)
Project 9: ScAs Technologies, Incubated at IIT Indore, India
Title: Semi-Autonomous Drones for Terrain Monitoring and Identification of Terrain Modifying Actors
Organizations/Collaborators: C-DAC India, MeitY, Govt. of India
Collaboration Duration: July 2020 to present
Achievements and Contribution:
ScAs Technologies is a startup that emerged from the NSDCS Research Lab at IIT Indore. It is currently incubated at the Advanced Centre for Entrepreneurship (ACE) Foundation at IIT Indore. We are developing semi-autonomous AI-enabled UAVs for tracking and monitoring.
As a finalist (top 25 teams) in the prestigious Swadeshi Microprocessor Challenge (SMC), ScAs Technologies was awarded a prize of 4 lakh rupees in recognition of its achievements in the field.
Supporting Material: Startup Webpage