I'm a second-year Master's student in Computer Engineering at NYU Tandon School of Engineering.
I am currently working at the Emerge Lab at NYU under Prof. Eugene Vinitsky.
Previously, I interned with the Computational Linguistics (CLAUSE) group at Bielefeld University under Prof. Sina Zarrieß and Dr. Özge Alaçam, working on multimodal machine learning.
I have also interned under Prof. Shital Chiddarwar at IvLabs, the robotics and AI lab at VNIT, exploring machine learning through the lens of mathematics.
Here's my Résumé for further information.
Aug 2024: Starting as a TA for Applied Matrix Theory under Prof. Z.P. Jiang!
Aug 2024: Ending my internship at Sov.ai. Starting as an RA under Prof. Eugene Vinitsky!
May 2024: I have joined Sov.ai as a Quant Developer Intern!
Sept 2023: Started my MS in Computer Engineering at NYU.
May 2022: Started my internship under Prof. Sina Zarrieß and Dr. Özge Alaçam.
Oct 2021: Our paper Enhancing Context Through Contrast was accepted at the NeurIPS Pre-registration Workshop.
May 2020: Started working at IvLabs under Prof. Shital Chiddarwar
Discovering RL agents in the Code Space:
Project Overview: Improving the ability of LLMs to write code that solves RL tasks. We draw inspiration from quality-diversity algorithms (MAP-Elites) and DreamCoder to make the FunSearch algorithm more effective at searching the code space. Done in collaboration with Prof. Eugene Vinitsky (NYU Tandon).
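To make the search loop concrete, here is a minimal MAP-Elites-style sketch over LLM-proposed programs, in the spirit of FunSearch. Everything here (the evaluator, the behavior descriptor, the `llm_mutate` stub) is illustrative, not the project's actual code:

```python
import random

def evaluate(program: str) -> tuple[float, int]:
    """Stub: run the candidate program on an RL task and return
    (episode return, behavior cell). Replace with a real evaluator."""
    return random.random(), random.randrange(4)

def llm_mutate(parent: str) -> str:
    """Stub: ask an LLM to rewrite the parent program (replace with an API call)."""
    return parent + f"  # variant {random.randrange(1000)}"

# MAP-Elites archive: behavior cell -> (best score, program)
archive: dict[int, tuple[float, str]] = {}
seed = "def policy(obs): return 0"
score, cell = evaluate(seed)
archive[cell] = (score, seed)

for _ in range(100):
    # Quality-diversity selection: sample an elite from a random cell.
    _, parent = random.choice(list(archive.values()))
    child = llm_mutate(parent)
    score, cell = evaluate(child)
    # The child survives only if it beats the incumbent elite in its cell.
    if cell not in archive or score > archive[cell][0]:
        archive[cell] = (score, child)
```

Keeping one elite per behavior cell is what lets the search maintain diverse candidate programs instead of collapsing onto a single local optimum.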
Enhancing Context Through Contrast:
Project Overview: Proposed a novel post-pretraining step to enhance neural machine translation performance, based on the idea that languages are transformations of an abstract meaning space. Done in collaboration with Kshitij Ambilduke (Université Paris-Saclay), Rishika Bhagwatkar (Université de Montréal), Khurshed P. Fitter (EPFL), Prasad Vagdargi (JHU), and Prof. Shital Chiddarwar (VNIT).
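As a rough illustration of the "shared meaning space" intuition, here is a minimal InfoNCE-style contrastive loss that pulls embeddings of parallel sentences together. This is a hedged sketch of the general technique, not necessarily the paper's exact objective; the function name and dimensions are illustrative:

```python
import torch
import torch.nn.functional as F

def contrastive_loss(src_emb: torch.Tensor, tgt_emb: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """src_emb, tgt_emb: (batch, dim) embeddings of aligned sentence pairs."""
    src = F.normalize(src_emb, dim=-1)
    tgt = F.normalize(tgt_emb, dim=-1)
    logits = src @ tgt.t() / temperature   # (batch, batch) similarity matrix
    labels = torch.arange(src.size(0))     # i-th source matches i-th target
    # Symmetric InfoNCE: match source->target and target->source.
    return (F.cross_entropy(logits, labels) +
            F.cross_entropy(logits.t(), labels)) / 2

# Random embeddings standing in for encoder outputs on parallel sentences:
loss = contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
```

Each sentence's translation serves as its positive, and the rest of the batch as negatives, which encourages translations to land near each other in the shared embedding space.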
Medical Visual Question Answering:
Project Overview: Benchmarked a number of multimodal transformer architectures on the ImageCLEF 2019 dataset using Meta AI Research's (then Facebook AI Research) Multi-Modal Framework (MMF); a schematic version of the benchmarking loop is sketched below. Done in collaboration with Prasad Vagdargi (final-year PhD student, Johns Hopkins University).
[Code]
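For flavor, a toy version of such a benchmarking loop might look like the following. The model class and data are stand-ins (the actual experiments used MMF's configs and training pipeline, and real architectures such as MMBT or VisualBERT):

```python
import torch
import torch.nn as nn

class TinyFusionModel(nn.Module):
    """Toy stand-in for a multimodal transformer: fuse image features
    with a bag-of-words question embedding and classify the answer."""
    def __init__(self, img_dim=128, vocab=1000, n_answers=10):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, 64)
        self.txt_emb = nn.EmbeddingBag(vocab, 64)
        self.head = nn.Linear(128, n_answers)

    def forward(self, image_feats, question_ids):
        fused = torch.cat([self.img_proj(image_feats),
                           self.txt_emb(question_ids)], dim=-1)
        return self.head(fused)

def accuracy(model, loader):
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for image_feats, question_ids, answers in loader:
            preds = model(image_feats, question_ids).argmax(dim=-1)
            correct += (preds == answers).sum().item()
            total += answers.numel()
    return correct / total

# Fake batch standing in for ImageCLEF 2019 VQA-Med data.
loader = [(torch.randn(4, 128), torch.randint(0, 1000, (4, 12)),
           torch.randint(0, 10, (4,)))]
for name, model in {"tiny-fusion": TinyFusionModel()}.items():
    print(name, accuracy(model, loader))
```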
Dynamic Discretization for Multimodal Transformers:
Proposed a novel architecture for soft discretization of images in image-text pairs to increase their representational quality in multimodal transformers.
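One way to read "soft discretization" is as a temperature-controlled, softmax-weighted assignment of each image patch to a learned codebook, rather than a hard nearest-neighbour token. A minimal sketch under that assumption, with all names and sizes illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftDiscretizer(nn.Module):
    def __init__(self, dim: int = 256, codebook_size: int = 512, tau: float = 1.0):
        super().__init__()
        self.codebook = nn.Parameter(torch.randn(codebook_size, dim))
        self.tau = tau  # temperature: lower -> closer to hard discretization

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        """patches: (batch, n_patches, dim) -> soft tokens of the same shape."""
        logits = patches @ self.codebook.t()            # similarity to each code
        weights = F.softmax(logits / self.tau, dim=-1)  # soft assignment
        return weights @ self.codebook                  # convex mix of codes

tokens = SoftDiscretizer()(torch.randn(2, 196, 256))
```

Unlike hard vector quantization, the soft assignment keeps the mapping fully differentiable, so gradients flow to both the codebook and the patch encoder.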
Information Bottleneck view of Transformers:
Performed experiments to understand the information flow through the transformer architecture as used in text-to-text tasks.
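A toy version of the kind of measurement such experiments involve: a binning-based estimate of the mutual information between a layer's activations and the labels, in the style of Shwartz-Ziv and Tishby's information-plane analysis. This is an assumption about the methodology, not the project's code:

```python
import numpy as np

def mutual_information(t: np.ndarray, y: np.ndarray, bins: int = 30) -> float:
    """Binning estimator of I(T; Y). t: (n, d) layer activations, y: (n,) labels."""
    # Discretize activations, then hash each activation vector to one symbol.
    t_binned = np.digitize(t, np.linspace(t.min(), t.max(), bins))
    t_ids = np.unique(t_binned, axis=0, return_inverse=True)[1].reshape(-1)
    joint: dict[tuple[int, int], int] = {}
    for ti, yi in zip(t_ids, y):
        joint[(ti, yi)] = joint.get((ti, yi), 0) + 1
    n = len(y)
    pt = np.bincount(t_ids) / n   # marginal over binned activations
    py = np.bincount(y) / n       # marginal over labels
    return sum((c / n) * np.log2((c / n) / (pt[ti] * py[yi]))
               for (ti, yi), c in joint.items())

print(mutual_information(np.random.randn(1000, 8), np.random.randint(0, 2, 1000)))
```

Tracking such estimates per layer over training is what produces the "information plane" picture of compression and fitting.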
Convolutional Language Modelling:
Proposed a new architecture for language modelling based on convolutional neural networks.
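For context, a minimal causal convolutional language model (left-padded convolutions so position t never attends to tokens after t, cf. the gated ConvNets of Dauphin et al.) could look like this; the hyperparameters and layer structure here are illustrative, not the proposed architecture:

```python
import torch
import torch.nn as nn

class ConvLM(nn.Module):
    def __init__(self, vocab: int = 1000, dim: int = 128,
                 kernel: int = 5, layers: int = 4):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.pad = kernel - 1  # left-pad so position t only sees tokens <= t
        self.convs = nn.ModuleList(nn.Conv1d(dim, dim, kernel)
                                   for _ in range(layers))
        self.head = nn.Linear(dim, vocab)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        x = self.emb(tokens).transpose(1, 2)   # (batch, dim, seq)
        for conv in self.convs:
            # Causal convolution with a residual connection.
            x = torch.relu(conv(nn.functional.pad(x, (self.pad, 0)))) + x
        return self.head(x.transpose(1, 2))    # (batch, seq, vocab) logits

logits = ConvLM()(torch.randint(0, 1000, (2, 32)))
```

The receptive field grows linearly with depth, so stacked layers let the model condition on progressively longer context without recurrence or attention.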