Created an automated pipeline on the Eyes-Defy-Anemia dataset to detect anemia from eye-conjunctiva images via hemoglobin-level prediction
This can serve as technical assistance to clinical staff in rural areas where blood testing is not easily accessible
Applied digital image processing techniques such as denoising and thresholding for pre-processing, and used image augmentation to improve performance
Built the model shown in the adjacent figure, which reuses CNN layers from a U-Net pre-trained on the HAM10000 dataset, and achieved a best MSE of 1.95 for hemoglobin-level prediction
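The denoising, thresholding, and augmentation steps above can be sketched in plain NumPy. This is a minimal illustration, not the project's actual pipeline: the mean-filter kernel size, Otsu thresholding, and the specific augmentations are assumptions for demonstration.

```python
import numpy as np

def denoise(img, k=3):
    """Simple mean-filter (box blur) denoising on a 2-D grayscale image."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def otsu_threshold(img):
    """Otsu's method: pick the threshold maximising between-class variance."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    total = img.size
    mean_all = (hist * np.arange(256)).sum() / total
    best_t, best_var = 0, -1.0
    cum, cum_mean = 0.0, 0.0
    for t in range(256):
        cum += hist[t]
        cum_mean += t * hist[t]
        if cum == 0 or cum == total:
            continue
        w0 = cum / total
        m0 = cum_mean / cum
        m1 = (mean_all * total - cum_mean) / (total - cum)
        var = w0 * (1 - w0) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def augment(img):
    """Basic augmentations: identity, horizontal flip, 90-degree rotation."""
    return [img, np.fliplr(img), np.rot90(img)]
```

In practice a library such as OpenCV or Albumentations would replace these hand-rolled loops, but the sketch shows the pre-processing order: denoise, threshold to isolate the conjunctiva region, then augment before training.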
The goal of this task was to design a motion-planning algorithm for the robot such that it can intercept a moving target in minimum time
Implemented motion planning with the weighted A* algorithm from scratch in Python using heapq
Developed a "skip" variant of weighted A* for faster execution: compute a path, follow it for several time steps, then replan, instead of replanning at every time step
This can be helpful in motion planning for autonomous vehicles, drones, etc. that must intercept a moving target
The map shows the trajectory followed by the robot (blue squares), planned with the weighted A* algorithm, and the final positions of the target and the robot (yellow squares)
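The skip-replan idea above can be sketched as follows. This is a minimal grid-world illustration under assumed conventions (4-connected grid, unit step cost, Manhattan heuristic, a static goal standing in for the target's latest known position); the function names are hypothetical.

```python
import heapq

def weighted_astar(grid, start, goal, eps=2.0):
    """Weighted A* on a 4-connected grid: f = g + eps * h (Manhattan heuristic).
    eps > 1 trades optimality for speed. grid[r][c] == 1 marks an obstacle."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap = [(eps * h(start), 0, start)]
    g = {start: 0}
    parent = {start: None}
    while open_heap:
        _, gc, cur = heapq.heappop(open_heap)
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        if gc > g[cur]:
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = gc + 1
                if ng < g.get(nxt, float("inf")):
                    g[nxt] = ng
                    parent[nxt] = cur
                    heapq.heappush(open_heap, (ng + eps * h(nxt), ng, nxt))
    return None  # no path exists

def skip_replan(grid, start, goal, skip=3):
    """'Skip' variant: compute a path, follow it for `skip` steps, then replan,
    instead of replanning at every time step. For a moving target, `goal`
    would be refreshed at each replan."""
    pos, trajectory = start, [start]
    while pos != goal:
        path = weighted_astar(grid, pos, goal)
        if path is None:
            return None
        for step in path[1:skip + 1]:
            pos = step
            trajectory.append(pos)
            if pos == goal:
                break
    return trajectory
```

Replanning every `skip` steps amortises the search cost while still letting the planner react to target motion at each replan.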
Designed state space, control space, cost function, motion model, stage and terminal costs, etc. in Python for the door-key environment
Implemented the Deterministic Shortest Path (DSP) algorithm in Python from scratch to find the minimum-cost path taking the robot from its initial position, through the door, to the goal
This can be helpful in path planning for cleaning robots, warehouse automation robots, etc.
The GIF shows how the robot plans its path in a randomly generated environment to collect the key, open the nearest door, and reach the goal via the shortest path using the DSP algorithm that I implemented
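A minimal sketch of the DSP idea via Bellman value iteration over an abstract state graph (the states, costs, and helper names below are illustrative stand-ins for the door-key environment's actual state space, not the project's code):

```python
def dsp(nodes, edges, goal):
    """Deterministic Shortest Path by value iteration to a fixed point:
    V(x) = min over controls u of [c(x, u) + V(f(x, u))].
    edges: dict mapping node -> list of (next_node, cost)."""
    INF = float("inf")
    V = {x: INF for x in nodes}
    V[goal] = 0.0
    policy = {}
    for _ in range(len(nodes) - 1):  # at most |nodes| - 1 sweeps needed
        changed = False
        for x in nodes:
            for nxt, c in edges.get(x, []):
                if c + V[nxt] < V[x]:
                    V[x], policy[x] = c + V[nxt], nxt
                    changed = True
        if not changed:
            break
    return V, policy

def extract_path(policy, start, goal):
    """Follow the greedy policy from start until the goal is reached."""
    path = [start]
    while path[-1] != goal:
        path.append(policy[path[-1]])
    return path
```

In the actual environment each node would encode position, orientation, key possession, and door status, with stage costs from the motion model and a terminal cost at the goal.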
Developed an Attention-on-Attention network (AoANet) model-based image-captioning framework backed by Optical Character Recognition (OCR), which can help visually impaired people
Introduced a novel approach for image captioning that combines features from the Bottom-Up attention model and the Keras-OCR deep learning model to include tiny details from images, such as text and numbers, in the generated captions
Achieved an improvement of 30.83 points in CIDEr score with this approach over the state-of-the-art AoANet-based baseline
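The feature-combination step can be sketched as below. This is an assumption-laden illustration, not the project's implementation: it assumes region features and OCR token embeddings arrive as NumPy arrays, and uses random matrices as placeholders for the learned projection layers.

```python
import numpy as np

def combine_features(region_feats, ocr_token_embs, d_model=1024):
    """Project region features (e.g. from Bottom-Up attention) and OCR token
    embeddings (e.g. from an OCR model) to a common dimension and stack them,
    so a caption decoder can attend over visual regions and scene text alike.
    W_r and W_o are random placeholders for trained linear layers."""
    rng = np.random.default_rng(0)
    W_r = rng.standard_normal((region_feats.shape[1], d_model)) * 0.02
    W_o = rng.standard_normal((ocr_token_embs.shape[1], d_model)) * 0.02
    return np.vstack([region_feats @ W_r, ocr_token_embs @ W_o])
```

Stacking both feature sets in one attention pool is what lets the decoder surface scene text (signs, numbers) that pure region features miss.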
The main motive behind this project is to reduce the spread of the novel coronavirus as much as possible. It is a working prototype of the idea to sanitize a door handle automatically every time someone touches it, built using an Arduino microcontroller, a servo motor, a water sprinkler, and an ultrasonic sensor. The project was appreciated by the MLA of our constituency, Mr. Harsh Sanghvi, and by the founder of Fair Observer, Prof. Atul Singh. It helped me utilise my COVID-19 lockdown time efficiently.
This project was done as part of the Analog Circuits course's lab project. The main motive was to build something useful while getting hands-on experience with analog circuit elements such as 555 timers, op-amps, and Hall effect sensors. The project can make stroke patients' tedious exercises fun, so that they won't skip their exercise sessions and their recovery will be faster. It was implemented without any microcontroller, both to reduce cost and to take a deeper dive into analog circuits.
In this project, a prototype of wireless home automation using different gestures of the hand was implemented. Wireless communication was established between two Arduino microcontrollers using the Master-Slave configuration of two Bluetooth modules. Gestures were identified using the interaction between magnets and Hall effect sensors.
Deduced that "the combination of all DOFs of the wrist joint is involved in almost all daily activities" through experiments using a VICON setup and markers in the robotics lab. Designed a one-of-its-kind compact wearable device that actuates 2 DOFs of the wrist simultaneously to assist stroke patients, and 3-D printed its prototype.
Built an Android app using Simulink to access the phone's sensors and play songs accordingly. A song is played on the client phone based on the location of the server phone, and when the client phone is shaken, the shake is detected via the accelerometer and a filtered version of the song is played.
Developed a contact microphone utilizing Fibre Bragg Gratings, achieving approx. 30% improvement in PESQ score over a conventional microphone for speech recorded in a noisy environment
Further applied digital signal processing techniques (adaptive filtering, high-pass filtering, and singular value decomposition) to the recorded signal using MATLAB and Simulink to make it more intelligible
Gained hands-on experience of sensor development and digital signal processing
Published a research paper along with two Ph.D. students at OFS 2020, a top conference in the field
ML project predicting the appliance-wise energy breakdown from the mains reading, running on edge devices to avoid dependency on a cloud server
This removes the dependence on internet connectivity and reduces threats to data privacy
Evaluated the accuracy and inference time of edge-compression algorithms such as pruning, tensor decomposition, and quantisation to make the state-of-the-art NILM model deployable on edge devices like the Raspberry Pi
Built a novel multi-task learning framework based on the state-of-the-art sequence-to-point (seq2point) neural network to reduce model size and inference time when handling multiple appliances
Research paper published at the ACM BuildSys 2020 conference
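Two of the compression techniques evaluated above, magnitude pruning and uniform quantisation, can be sketched on a raw weight matrix. This is a conceptual NumPy illustration, not the project's deployment code; the sparsity level and bit width are assumed values.

```python
import numpy as np

def prune(weights, sparsity=0.5):
    """Magnitude pruning: zero out the smallest-magnitude fraction of weights."""
    thresh = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) < thresh, 0.0, weights)

def quantize(weights, bits=8):
    """Uniform affine quantisation: map floats to integers on a fixed grid.
    Returns the integer codes and the dequantised floats, so the accuracy
    impact of the reduced precision can be measured directly."""
    lo, hi = weights.min(), weights.max()
    scale = (hi - lo) / (2 ** bits - 1)
    q = np.round((weights - lo) / scale).astype(np.int32)
    return q, q * scale + lo
```

Pruning shrinks the model by introducing sparsity that sparse kernels can exploit, while quantisation cuts both memory and inference time by replacing 32-bit floats with low-bit integers; the trade-off in each case is a bounded loss in disaggregation accuracy.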
As AI assistants and chatbots become ubiquitous, we feel that making chatbots empathetic is really necessary, a quality currently lacking in assistants like Siri, Google Assistant, and Alexa
Developed a chatbot that generates empathetic replies using not only plain text but also emojis, making responses more relatable
Generated the novel SENTEMOJI dataset containing 79,000 utterances classified into 10 emotion classes with corresponding emojis
Used NLP concepts and models such as BERT, Emoji2Vec, and Word2Vec for the implementation
Performed a user study for evaluation; the results showed that SentEmojiBot replied more empathetically than the baseline chatbot
Published a research paper in the Young Researchers' Symposium track of ACM CODS-COMAD 2020