Research

Published Papers

Journals (11)

Conferences (8)

Book Chapters (3)

Conference Presentations

Preprints and Manuscripts Under Review

Journals (6)

Conferences (2)

Abstract: This research article presents novel approaches for implementing arbitrary-precision integer activation functions (AFs) in deep neural networks (DNNs). We focus on two design strategies: a ROM-based approach for lower precision and a CORDIC-based approach for higher precision using the fixed<N,q> format. The ROM-based approach uses a read-only memory (ROM) to implement AFs such as sigmoid and tanh, supporting dynamic fixed-point representations. The CORDIC-based approach employs a configurable design for the sigmoid and tanh AFs with arbitrary integer fixed-point representation, optimizing hardware resources. Our designs are evaluated for accuracy in software and for performance on an FPGA. The results demonstrate effective precision-aware optimization in DNN hardware and provide insights into AF implementation with arbitrary integer fixed-point processing elements.
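The following minimal Python sketch illustrates the ROM-based strategy under stated assumptions (it is not the paper's implementation; the fixed<8,4> format and the function names are illustrative): the activation is precomputed for every N-bit two's-complement input code in fixed<N,q> format and stored in a table, so evaluation on the datapath reduces to a single ROM read.

```python
import math

def rom_sigmoid_table(n_bits=8, q_bits=4):
    """Precompute a sigmoid ROM for fixed<N,q> two's-complement inputs.

    Each N-bit code is read as a signed integer scaled by 2^-q; the
    output is re-quantized to the same fixed<N,q> format.
    """
    scale = 1 << q_bits
    table = []
    for code in range(1 << n_bits):
        # Interpret the code as two's complement.
        signed = code - (1 << n_bits) if code >= (1 << (n_bits - 1)) else code
        y = 1.0 / (1.0 + math.exp(-signed / scale))
        table.append(round(y * scale))  # re-quantize to fixed<N,q>
    return table

rom = rom_sigmoid_table()

def sigmoid_fixed(code):
    # The "hardware" evaluation: a single ROM read at the input code.
    return rom[code & 0xFF]

# Input 1.5 in fixed<8,4> is code 24; output is ~0.8125 vs. exact ~0.8176.
print(sigmoid_fixed(24) / 16.0)
```

In this reading, dynamic fixed-point support corresponds to regenerating or selecting a table per <N,q> configuration; the ROM grows as 2^N entries, which is why this strategy suits lower precisions.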


Abstract: This paper presents a configurable activation function (AF) based on a ROM/CORDIC architecture for generating sigmoid and tanh at varying bit precisions. A ROM-based design is used for low-bit precision and a CORDIC-based design for high-bit precision. The proposed configurable AF has been evaluated on the LeNet and VGG-16 DNN models. Results show that the proposed AF achieves inference accuracy comparable to the tensor-based model (with an accuracy loss of less than 1.5%). The proposed configurable architecture gives high accuracy at both low and high bit precision. The architecture's performance is evaluated by simulating the AF model for a fixed<9,6> arithmetic representation. At low bit precision, the ROM-based approach reduces memory requirements relative to a CORDIC-based one (86.66% LUT savings for 4-bit precision and 80.95% LUT savings for 8-bit precision).
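For the high-precision path, the CORDIC-based strategy can be sketched with a hyperbolic CORDIC in rotation mode: the iterations accumulate gain-scaled cosh(z) and sinh(z), the CORDIC gain cancels in the ratio tanh(z) = sinh(z)/cosh(z), and sigmoid follows from the identity sigmoid(x) = (1 + tanh(x/2))/2. This floating-point Python sketch is an illustrative assumption, not the paper's design; a hardware version would replace the multiplies by 2^-i with bit shifts and take the atanh constants from a small ROM, all in fixed<N,q> arithmetic.

```python
import math

def cordic_tanh(z, iters=16):
    """Hyperbolic CORDIC, rotation mode: drive the angle residue z to zero
    while x, y accumulate gain-scaled cosh and sinh; tanh = y / x, so the
    gain cancels. Iterations 4 and 13 are repeated, as hyperbolic CORDIC
    requires; convergence holds for |z| < ~1.118.
    """
    x, y = 1.0, 0.0
    for i in range(1, iters + 1):
        for _ in range(2 if i in (4, 13) else 1):
            d = 1.0 if z >= 0.0 else -1.0
            # In hardware these multiplies are shifts by i bit positions.
            x, y = x + d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
            z -= d * math.atanh(2.0 ** -i)  # constant from a small ROM
    return y / x

def cordic_sigmoid(x):
    # sigmoid(x) = (1 + tanh(x/2)) / 2 keeps the CORDIC argument in range
    # for |x| < ~2.23; larger inputs need range reduction or saturation.
    return 0.5 * (1.0 + cordic_tanh(x / 2.0))

print(cordic_tanh(0.5), math.tanh(0.5))                    # ~0.46212
print(cordic_sigmoid(1.0), 1.0 / (1.0 + math.exp(-1.0)))   # ~0.73106
```

Because each iteration is only shift-add work with one stored constant, cost grows roughly linearly with the target bit precision rather than exponentially as in the ROM table, which is the usual rationale for switching from ROM to CORDIC at higher precisions.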