Research
Published Papers
Journals (11)
Raut, Gopal, Jogesh Mukala, Vishal Sharma, and Santosh Kumar Vishvakarma. "Designing a Performance-Centric MAC Unit with Pipelined Architecture for DNN Accelerators." Circuits, Systems, and Signal Processing (2023): 1-27. Impact Factor: 2.311
Mehra, Sumiran, Gopal Raut, Ribhu Das, Santosh Kumar Vishvakarma, and Anton Biasizzo. "An Empirical Evaluation of Enhanced Performance Softmax Function in Deep Learning." IEEE Access (2023). Impact Factor: 3.476
Raut, Gopal, Saurabh Karkun, and Santosh Kumar Vishvakarma. "An Empirical Approach to Enhance Performance for Scalable CORDIC-based Deep Neural Networks." ACM Transactions on Reconfigurable Technology and Systems (2023). Impact Factor: 2.837
Raut, Gopal, Anton Biasizzo, Narendra Dhakad, Neha Gupta, Gregor Papa, and Santosh Kumar Vishvakarma. "Data multiplexed and hardware reused architecture for deep neural network accelerator." Neurocomputing 486 (2022): 147-159. Impact Factor: 5.779
Rajput, Gunjan, Shashank Agrawal, Gopal Raut, and Santosh Kumar Vishvakarma. "An accurate and noninvasive skin cancer screening based on imaging technique." International Journal of Imaging Systems and Technology 32, no. 1 (2022): 354-368. Impact Factor: 2.177
Chhajed, Harsh, Gopal Raut, Narendra Dhakad, Sudheer Vishwakarma, and Santosh Kumar Vishvakarma. "BitMAC: Bit-Serial Computation-Based Efficient Multiply-Accumulate Unit for DNN Accelerator." Circuits, Systems, and Signal Processing (2022): 1-16. Impact Factor: 2.311
Rajput, Gunjan, Gopal Raut, Mahesh Chandra, and Santosh Kumar Vishvakarma. "VLSI implementation of transcendental function hyperbolic tangent for deep neural network accelerators." Microprocessors and Microsystems 84 (2021): 104270. Impact Factor: 3.503
Sharma, Vijay Pratap, Hemant Patidar, Gopal Raut, Vikas Maheshwari, and Rajib Kar. "Low power resource efficient CORDIC enabled neuron architecture using 45 nm CMOS technology." e-Prime - Advances in Electrical Engineering, Electronics and Energy 4 (2023): 100157.
Gupta, Neha, Ambika Prasad Shah, Rana Sagar, Gopal Raut, Narendra Dhakad, and Santosh Kumar Vishvakarma. "Soft error hardened voltage bootstrapped Schmitt trigger design for reliable circuits." Microelectronics Reliability 117 (2021): 114013. Impact Factor: 1.418
Raut, Gopal, Shubham Rai, Santosh Kumar Vishvakarma, and Akash Kumar. "RECON: resource-efficient CORDIC-based neuron architecture." IEEE Open Journal of Circuits and Systems 2 (2021): 170-181. Impact Factor: --
Raut, Gopal, Ambika Prasad Shah, Vishal Sharma, Gunjan Rajput, and Santosh Kumar Vishvakarma. "A 2.4-GS/s power-efficient, high-resolution reconfigurable dynamic comparator for ADC architecture." Circuits, Systems, and Signal Processing 39, no. 9 (2020): 4681-4694. Impact Factor: 2.311
Conferences (8)
Edavoor, Pranose J., Aneesh Raveendran, David Selvakumar, Vivian Desalphine, and Gopal Raut. "Design and Analysis of Posit Quire Processing Engine for Neural Network Applications." In 2023 36th International Conference on VLSI Design and 2023 22nd International Conference on Embedded Systems (VLSID), pp. 252-257. IEEE, 2023.
Rangarajan, Nikhil, Satwik Patnaik, Mohammed Nabeel, Mohammed Ashraf, Shubham Rai, Gopal Raut, Heba Abunahla et al. "SCRAMBLE: A Secure and Configurable, Memristor-Based Neuromorphic Hardware Leveraging 3D Architecture." In 2022 IEEE Computer Society Annual Symposium on VLSI (ISVLSI), pp. 308-313. IEEE, 2022.
Bhatnagar, Varun, Gopal Raut, and Santosh Kumar Vishvakarma. "Loading Effect Free MOS-only Voltage Reference Ladder for ADC in RRAM-crossbar Array." In Proceedings of the Great Lakes Symposium on VLSI 2022, pp. 199-202. 2022.
Raut, Gopal, Shubham Rai, Santosh Kumar Vishvakarma, and Akash Kumar. "A CORDIC based configurable activation function for ANN applications." In 2020 IEEE Computer Society Annual Symposium on VLSI (ISVLSI), pp. 78-83. IEEE, 2020. Best Paper Nominee
Rajput, Gunjan, Gopal Raut, Sajid Khan, Neha Gupta, Ankur Beohar, and Santosh Kumar Vishvakarma. "ASIC Implementation of Biologically Inspired Spiking Neural Network." In 2019 9th International Conference on Emerging Trends in Engineering and Technology-Signal and Information Processing (ICETET-SIP-19), pp. 1-5. IEEE, 2019.
Raut, Gopal, Vishal Bhartiy, Gunjan Rajput, Sajid Khan, Ankur Beohar, and Santosh Kumar Vishvakarma. "Efficient low-precision CORDIC algorithm for hardware implementation of artificial neural network." In VLSI Design and Test: 23rd International Symposium, VDAT 2019, Indore, India, July 4–6, 2019, Revised Selected Papers 23, pp. 321-333. Springer Singapore, 2019.
Beohar, Ankur, Gopal Raut, Gunjan Rajput, Abhinav Vishwakarma, Ambika Prasad Shah, and Santosh Kumar Vishvakarma. "Compact spiking neural network system with SiGe based cylindrical tunneling transistor for low power applications." In VLSI Design and Test: 23rd International Symposium, VDAT 2019, Indore, India, July 4–6, 2019, Revised Selected Papers 23, pp. 655-663. Springer Singapore, 2019.
Khan, Sajid, Neha Gupta, Gopal Raut, Gunjan Rajput, Jai Gopal Pandey, and Santosh Kumar Vishvakarma. "An ultra low power AES architecture for IoT." In VLSI Design and Test: 23rd International Symposium, VDAT 2019, Indore, India, July 4–6, 2019, Revised Selected Papers 23, pp. 334-344. Springer Singapore, 2019.
Book Chapters (3)
Raut, Gopal, Vishal Bhartiy, Gunjan Rajput, Sajid Khan, Ankur Beohar, and Santosh Kumar Vishvakarma. "Efficient low-precision CORDIC algorithm for hardware implementation of artificial neural network." In VLSI Design and Test: 23rd International Symposium, VDAT 2019, Indore, India, July 4–6, 2019, Revised Selected Papers 23, pp. 321-333. Springer Singapore, 2019.
Beohar, Ankur, Gopal Raut, Gunjan Rajput, Abhinav Vishwakarma, Ambika Prasad Shah, and Santosh Kumar Vishvakarma. "Compact spiking neural network system with SiGe based cylindrical tunneling transistor for low power applications." In VLSI Design and Test: 23rd International Symposium, VDAT 2019, Indore, India, July 4–6, 2019, Revised Selected Papers 23, pp. 655-663. Springer Singapore, 2019.
Khan, Sajid, Neha Gupta, Gopal Raut, Gunjan Rajput, Jai Gopal Pandey, and Santosh Kumar Vishvakarma. "An ultra low power AES architecture for IoT." In VLSI Design and Test: 23rd International Symposium, VDAT 2019, Indore, India, July 4–6, 2019, Revised Selected Papers 23, pp. 334-344. Springer Singapore, 2019.
Conference Presentations
"A CORDIC based configurable activation function for ANN applications" (G. Raut, S. Rai, S. K. Vishvakarma, A. Kumar, 2020 IEEE Computer Society Annual Symposium on VLSI, ISVLSI-2020) Best Paper Nominee
"Design and Analysis of Posit Quire Processing Engine for Neural Network Applications" (PJ. Edavoor, A. Raveendran, D. Selvakumar, V. Desalphine, G. Raut, 36th International Conference on VLSI Design, VLSID-2023)
"Loading Effect Free MOS-only Voltage Reference Ladder for ADC in RRAM-crossbar Array" (V. Bhatnagar, G. Raut, S. K. Vishvakarma, Proceedings of the Great Lakes Symposium on VLSI, GLSVLSI-2022) Poster
Preprints or Under Review Manuscripts
Journals (6)
Preprint 1: "Powering the AI Revolution: SoC Design for Enhanced DNN Accelerators" (IEEE Transactions on Circuits and Systems I: Regular Papers (TCAS-I), 2023) Under Preparation
Preprint 2: "Cost-effective MAC Architecture for Hardware Implementation of Deep Neural Networks" (S. Vishwakarma, G. Raut, S. K. Vishvakarma, and D. Ghai, Microprocessors and Microsystems, Elsevier, 2023) Under Review
Preprint 3: "QuantMAC: A Quantized Multiply-Accumulate Computation Unit for DNN Accelerator" (N. Asar, G. Raut, V. Trivedi, and S. K. Vishvakarma, Neurocomputing, 2023) Under Review
Preprint 4: "Enhancing Performance for a CORDIC-based Configurable Activation Function Implementation" (G. Raut, S. Rai, R. Das, S. K. Vishvakarma, and A. Kumar, IEEE Access, 2023) Under Review
Preprint 5: "Hybrid ADDer: A Viable Solution for Efficient Design of MAC in DNNs" (V. Trivedi, K. Lalvani, G. Raut, A. Khomane, N. Asar and SK. Vishvakarma, Circuits, Systems, and Signal Processing, 2023) Minor Revision Submitted
Preprint 6: "An Adaptive Multi-Operator Logic Design for Efficient In-Memory Advanced Computing with 6T SRAM" (N. Dhakad, I. Chittora, G. Raut, V. Sharma and SK. Vishvakarma, Circuits, Systems, and Signal Processing, 2023) Accepted with Minor Revision
Conferences (2)
Preprint 1: "Hardware-Accelerated Arbitrary Precision Integer Activation Functions for Deep Neural Networks" (G. Raut, P. J. Edavoor, R. Thakur, D. Selvakumar, 36th International Conference on VLSI Design, VLSID-2023) To be submitted
Abstract: This article presents approaches for implementing arbitrary-precision integer activation functions (AFs) in deep neural networks (DNNs). We focus on two design strategies, both using a fixed<N,q> representation: a ROM-based approach for lower precision and a CORDIC-based approach for higher precision. The ROM-based approach uses a read-only memory to implement AFs such as sigmoid and tanh while supporting dynamic fixed-point representations. The CORDIC-based approach employs a configurable design for sigmoid and tanh AFs with arbitrary integer fixed-point representation, optimizing hardware resources. Both designs are evaluated through software-based accuracy analysis and hardware-based performance measurements on an FPGA. The results demonstrate effective precision-aware optimization in DNN hardware and provide insight into AF implementation with arbitrary integer fixed-point processing elements.
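The CORDIC-based strategy outlined in the abstract can be illustrated with a minimal behavioral sketch. This is a floating-point Python model of hyperbolic CORDIC in rotation mode, not the paper's fixed-point hardware design; the function name, iteration count, and implementation details are illustrative assumptions.

```python
import math

def cordic_tanh(z, n_iter=16):
    """Approximate tanh(z) with hyperbolic CORDIC in rotation mode.

    The angle accumulator z is driven toward zero while (x, y) converge to
    K*(cosh z, sinh z); since tanh z = y/x, the CORDIC gain K cancels and
    needs no correction. Valid for roughly |z| <= 1.11 (the convergence
    range of the hyperbolic micro-rotations).
    """
    # Hyperbolic CORDIC requires iterations 4, 13, 40, ... to be repeated
    # for convergence; build the iteration schedule first.
    iters, i, repeat = [], 1, 4
    while len(iters) < n_iter:
        iters.append(i)
        if i == repeat:
            iters.append(i)        # repeated iteration
            repeat = 3 * repeat + 1
        i += 1

    x, y = 1.0, 0.0
    for i in iters[:n_iter]:
        d = 1.0 if z >= 0 else -1.0          # rotate toward z == 0
        x, y = x + d * y * 2.0**-i, y + d * x * 2.0**-i
        z -= d * math.atanh(2.0**-i)          # elementary angle table entry
    return y / x
```

In hardware, the shift-add micro-rotations replace the multipliers, and the `atanh` values come from a small constant table, which is what makes the approach resource-efficient at higher precision.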
Preprint 2: "A Configurable Activation Function for Variable Bit-Precision in Hardware DNN Implementation" (S. Vishwakarma, G. Raut, N. Dhakad, S. K. Vishvakarma, D. Ghai, 6th IFIP International Internet of Things Conference, IFIP-IoT 2023) Accepted
Abstract: This paper presents a configurable activation function (AF) based on a ROM/CORDIC architecture for generating sigmoid and tanh at varying bit precision: a ROM-based design is used for low bit precision and a CORDIC-based design for high bit precision. The proposed configurable AF is evaluated on the LeNet and VGG-16 DNN models and achieves inference accuracy comparable to the tensor-based model, with an accuracy loss of less than 1.5%. The architecture delivers high accuracy at both low and high bit precision; its performance is evaluated by simulating the AF model for a fixed<9,6> arithmetic representation. For low bit precision, the ROM-based approach reduces memory requirements compared with a CORDIC-based design (86.66% LUT savings at 4-bit precision and 80.95% at 8-bit precision).
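The ROM-based low-precision path can be sketched as a behavioral Python model: one ROM word per N-bit input code, each entry storing the activation value quantized to the same fixed<N,q> format. The function names and the <8,5> format choice here are illustrative assumptions, not taken from the paper.

```python
import math

def build_sigmoid_rom(n_bits=8, q_frac=5):
    """Precompute a 2**n_bits-entry sigmoid table in fixed<n_bits,q_frac>."""
    scale = 2 ** q_frac
    rom = []
    for code in range(2 ** n_bits):
        # Interpret the ROM address as a two's-complement fixed-point input.
        signed = code - 2 ** n_bits if code >= 2 ** (n_bits - 1) else code
        x = signed / scale
        y = 1.0 / (1.0 + math.exp(-x))
        rom.append(round(y * scale))      # store output in the same format
    return rom

def sigmoid_lut(x, rom, n_bits=8, q_frac=5):
    """Quantize x to an N-bit code and read the activation from the ROM."""
    code = round(x * 2 ** q_frac) & (2 ** n_bits - 1)  # wrap to N-bit address
    return rom[code] / 2 ** q_frac
```

Table size doubles with every extra input bit, which is why the abstract pairs the ROM path with a CORDIC path: the lookup is cheap at 4-8 bits but uneconomical at higher precision.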