About me:

I am a third-year PhD student at the University of Washington, Seattle, where I work with Prof. Jeffrey A. Bilmes. I work on improving the efficiency of large-scale models, for both training and inference, with the help of submodular optimization. In Summer 2024, I am interning with the Applied Deep Learning Research team at NVIDIA.

Before joining the PhD program, I completed my undergraduate degree at IIT Delhi, where I worked with Dr. Sumeet Agarwal. I'm also an avid photographer (check out my Unsplash) and hiker! When not working, I can be found messing around with my camera, exploring Google Earth, or hiking the beautiful mountains of the North Cascades. (Most Recent Hike: Colchuck Lake)

Recent Updates

Preprints


COBRA: COmBinatorial Retrieval Augmentation for Few-Shot Learning

Arnav Das*, Gantavya Bhatt*, Lilly Kumari, Sahil Verma, Jeff Bilmes


[Poster] In DMLR workshop at ICML'24


Under Review


pdf / openreview

Comparing Bad Apples to Good Oranges: Aligning Large Language Models via Joint Preference Optimization

Hritik Bansal*, Ashima Suvarna*, Gantavya Bhatt*, Nanyun Peng, Kai-Wei Chang, Aditya Grover


[Oral] In DMLR workshop at ICML'24

[Poster] In MHFAI Alignment workshop at ICML'24


arxiv / openreview (DMLR) / openreview (MHFAI)

Deep Submodular Peripteral Networks

Gantavya Bhatt*, Arnav Das*, Jeff Bilmes


Under Review 


arxiv

Effective Backdoor Mitigation Depends on the Pre-training Objective

Sahil Verma, Gantavya Bhatt, Avi Schwarzschild, Soumye Singhal, Arnav Das, Chirag Shah, John P Dickerson, Jeff Bilmes

[Best Paper Award 🏆] In BUGS workshop at NeurIPS'23 

Under Review

pdf / arxiv 

Conference and Workshop Publications


An Experimental Design Framework for Label-Efficient Supervised Finetuning of Large Language Models

Gantavya Bhatt*, Yifang Chen*, Arnav Das*, Jifan Zhang*, Sang Truong, Stephen Mussmann, Yinglun Zhu, Jeff Bilmes, Simon Shaolei Du, Kevin Jamieson, Jordan P Ash, Robert D Nowak

Accepted at ACL'24 (Findings)


arxiv

LabelBench: A Comprehensive Framework for Benchmarking Label-Efficient Learning

Jifan Zhang*, Yifang Chen*, Gregory Canal, Arnav Das†, Gantavya Bhatt†, Stephen Mussmann, Yinglun Zhu, Jeff Bilmes, Simon Shaolei Du, Kevin Jamieson, Robert D Nowak

In Adaptive Experimental Design and Active Learning in the Real World workshop at NeurIPS'23.

Accepted at DMLR'24

pdf / arxiv / code

Accelerating Batch Active Learning Using Continual Learning Techniques

Arnav Das*, Gantavya Bhatt*, Megh Manoj Bhalerao, Vianne R. Gao, Rui Yang, Jeff Bilmes

In Transactions on Machine Learning Research (TMLR), December edition

In DMLR workshop at ICML'23

pdf / arxiv / code

RadarHD: Demonstrating Lidar-like Point Clouds from mmWave Radar

Akarsh Prabhakara, Tao Jin, Arnav Das*, Gantavya Bhatt*, Lilly Kumari, Elahe Soltanaghei, Jeff Bilmes, Swarun Kumar, Anthony Rowe

In ACM Annual International Conference on Mobile Computing and Networking (MobiCom'23)

pdf / arxiv / code 

High Resolution Point Clouds from mmWave Radar

Akarsh Prabhakara, Tao Jin, Arnav Das*, Gantavya Bhatt*, Lilly Kumari, Elahe Soltanaghei, Jeff Bilmes, Swarun Kumar, Anthony Rowe

In IEEE International Conference on Robotics and Automation (ICRA'23) 


pdf / arxiv / code 

Matryoshka Representations for Adaptive Deployment

Aditya Kusupati*, Gantavya Bhatt*, Aniket Rege*, Matthew Wallingford, Aditya Sinha, Vivek Ramanujan, William Howard-Snyder, Kaifeng Chen, Sham Kakade, Prateek Jain, and Ali Farhadi

In Neural Information Processing Systems (NeurIPS'22)

pdf / arXiv / code

Tighter m-DPP Coreset Sample Complexity Bounds

Gantavya Bhatt, Jeff Bilmes

In SubsetML workshop at the International Conference on Machine Learning (ICML'21)

pdf / arXiv

Systematic Generalization in Neural Networks-based Multivariate Time Series Forecasting Models

Hritik Bansal*, Gantavya Bhatt*, Pankaj Malhotra and Prathosh AP

In International Joint Conference on Neural Networks (IJCNN'21)

pdf / arXiv / code



Can RNNs trained on harder subject-verb agreement instances still perform well on easier ones?

Hritik Bansal*, Gantavya Bhatt* and Sumeet Agarwal

In Proceedings of the Society for Computation in Linguistics: Vol. 4, Article 38.

pdf / arXiv / code



How much complexity does an RNN architecture need to learn syntax-sensitive dependencies?

Gantavya Bhatt*, Hritik Bansal*, Rishubh Singh* and Sumeet Agarwal

In Proceedings of the Society for Computation in Linguistics: Vol. 4, Article 38.

pdf / arXiv / code