Welcome to my homepage!
I have been a Senior Applied Scientist at Amazon since September 2022. Some of my recent projects include leveraging Large Language Models (LLMs) to assess dialogue quality and to enhance job recommendations.
Previously, I was a Staff Research Scientist in Natural Language Processing at DiDi AI Labs, Los Angeles, where I worked with Kevin Knight. Prior to that, I held postdoctoral positions at the Computational Language Understanding Lab at the University of Arizona and the Information Extraction and Synthesis Lab at the University of Massachusetts, Amherst.
I received a joint PhD in Computer Science from IIT Bombay and Monash University in 2015.
Research Interests:
Large Language Models
Deep Learning
Machine Translation
Task-oriented Dialog
Semi-supervised Learning
Representation Learning
Information Extraction
Updates:
Our paper Leveraging LLMs for dialogue quality measurement has been accepted at NAACL 2024 in the industry track.
I will be joining Amazon Alexa AI as a Senior Applied Scientist in September 2022.
Our latest work is a multilingual videoconference translator! Accepted at EMNLP 2021.
Organized the Triangular Machine Translation shared task at WMT 2021.
Our paper titled "Parallel Corpus Filtering via Pre-trained Language Models" was accepted at ACL 2020.
Organized the Open Domain Translation challenge task at IWSLT 2020.
Our papers were accepted at *SEM 2019 and at the SPNLP workshop (co-located with NAACL-HLT 2019).
Our system Eidos was accepted as a system demonstration at NAACL-HLT 2019, Minneapolis, USA.
Joined DiDi AI Labs in Los Angeles in Jan 2019 as a Research Scientist in NLP.
Attended COLING 2018 in Santa Fe. Link to my presentation.
Our paper titled "Visual Supervision in Bootstrapped Information Extraction" has been accepted to EMNLP 2018.
Our paper titled "An Exploration of Three Lightly-supervised Representation Learning Approaches for Named Entity Classification" has been accepted to COLING 2018.
Attended NAACL-HLT 2018 in New Orleans, LA. [poster]
Our paper titled "Keep your bearings: Lightly-supervised Information Extraction with Ladder Networks that avoids Semantic Drift " has been accepted to NAACL-HLT 2018. [link]