FPGA-based Hardware Accelerator Design for Machine Learning Models


Deep Neural Networks (DNNs) have become a highly popular choice for embedded vision applications because of their accuracy and versatility. Beyond end-node inference, incremental or continual learning is gaining traction in vision hardware due to its online learning capabilities. Although a large body of research has been carried out to optimize DNN models, executing DNN models for both inference and learning on edge devices remains a major challenge due to their power-hungry computation and memory bandwidth requirements. The focus of this research is to propose novel software-hardware approaches for ultra-low-power DNN inference and edge learning deployment in vision end nodes.

Publications

2023

Energy Efficient DNN Compaction for Edge Deployment

Bijin Elsa Baby, Dipika Deb, Benuraj Sharma, Kirthika Vijayakumar, Satyajit Das

ARC 2023