I am a postdoctoral researcher at the Allen Institute for AI in Seattle. 

I will join the Ohio State University's CS Department as an Assistant Professor in Fall 2024. I will be recruiting multiple PhD students this cycle. If you are interested, apply here (and mention my name in your application).

I obtained my Ph.D. at the Language Technologies Institute at Carnegie Mellon University (CMU), spending the final two years of my Ph.D. visiting the University of Washington in Seattle.

My research interests broadly include topics in Machine Learning for Natural Language Processing (NLP). In particular, I am interested in building NLP solutions for real use cases that work for all people: systems that can uniformly support diverse languages, domains, populations, and individuals.

Contact: sachink [at] allenai [dot] org, kumar1145 [at] osu [dot] edu

Google Scholar | Twitter | Github | LinkedIn

News:

[May 2024] Dolma is accepted at ACL 2024!

[May 2024] Presented Gen-Z at ICLR 2024 in Vienna.

[April 2024] Invited talk at MilaNLP. 

[March 2024] Two papers accepted at NAACL 2024!

[March 2024] RewardBench paper is on arXiv!

[March 2024] Gave a guest lecture on mitigating societal harms of LLMs at KAIST.

[February 2024] Dolma is on arXiv!

[January 2024] Paper on Generative zero-shot classification accepted at ICLR 2024! 

[December 2023] Giving an invited talk at the IndoML symposium on December 22nd, 2023. Come say hi!

[November 2023] I am co-teaching a tutorial on mitigating societal harms in LLMs at EMNLP 2023. See you in Singapore!

[November 2023] New preprint out on zero-shot text classification.

[November 2023] New preprint out on preserving author perspectives in news summarization.

[October 2023] I’m co-organizing a workshop on Customizable NLP at EMNLP 2024! Details forthcoming.

[October 2023] A paper accepted at EMNLP 2023! Updated camera-ready version coming soon.

[August 2023] Successfully defended my thesis 🥳. 

[August 2023] Started at AI2 as a Young Investigator. 

[July 2023] Our paper Minding Language Models' (Lack of) Theory of Mind: A Plug-and-Play Multi-Character Belief Tracker received Outstanding Paper Awards at ACL 2023 and the ICML 2023 Theory of Mind Workshop 🎉.

[July 2023] I will join Ohio State University as an Assistant Professor in the CS Department in Fall 2024!

[July 2023] I will spend a year as a postdoc at the Allen Institute for AI working with Hanna Hajishirzi and Noah Smith!

[May 2023] Two new preprints on arXiv: Do All Languages Cost the Same? Tokenization in the Era of Commercial Language Models and SSD-2: Scaling and Inference-time Fusion of Diffusion Language Models.

[May 2023] 3 papers accepted at ACL 2023! Camera-ready versions out soon.

[May 2023] Vidhisha presented our survey paper at EACL in Croatia!

[April 2023] New preprint out on arXiv: Assessing Language Model Deployment with Risk Cards.

[January 2023] Our survey paper got accepted at EACL 2023! Camera-ready version coming soon.

[December 2022] New preprint out on arXiv: On the Blind Spots of Model-Based Evaluation Metrics for Text Generation.

[October 2022] New preprint out on arXiv: SSD-LM: Semi-autoregressive Simplex-based Diffusion Language Model for Text Generation and Modular Control.

[October 2022] Passed my thesis proposal. I am now a Ph.D. candidate!

[October 2022] Gave an invited talk about my latest research at Google.

[October 2022] New preprint out on arXiv: "Language Generation Models Can Cause Harm: So What Can We Do About It? An Actionable Survey".

[October 2022] Two papers accepted at EMNLP 2022! arXiv versions out soon. 

[June 2022] My research is now funded by a Google PhD Fellowship!

[May 2022] New preprint out on arXiv: "Constrained Sampling from Language Models via Langevin Dynamics in Embedding Spaces".

[April 2022] Gave a tutorial at TheWebConf 2022 on "Mitigating Societal Harms of Large Language Models: A Case Study in Language Generation".

[September 2021] Paper on "Controlled Text Generation as Continuous Optimization with Multiple Constraints" accepted at NeurIPS 2021!

[September 2021] Short paper on "Improving the Diversity of Unsupervised Paraphrasing with Embedding Outputs" accepted at MRL@EMNLP 2021!

[May 2021] Short paper on "Machine Translation into Low Resource Language Varieties" accepted at ACL 2021! arXiv preprint coming soon. 

[March 2021] Paper on "An Exploration of Data Augmentation Techniques for Improving English to Tigrinya Translation" accepted at AfricaNLP@EACL 2021. Preprint coming soon.  

[November 2020] Paper on "End-to-End Differentiable GANs for Text Generation" accepted at the ICBINB@NeurIPS 2020. 

[November 2020] Invited Talk on "Language Generation with Continuous Outputs" at G-Research, London. 

[August 2020] Teaching Assistant for the brand-new course on Multilingual NLP (Fall 2020) at CMU.

[May 2020] Paper on "A Deep Reinforced Model for Cross-Lingual Summarization with Bilingual Semantic Similarity Reward" accepted at WNGT@ACL 2020.

[December 2019] Going to Facebook AI Research, Seattle (virtually) for the summer.

[November 2019] Presented two posters at EMNLP 2019 in Hong Kong.

[September 2019] Paper on "A Margin-based Loss with Synthetic Negative Samples for Continuous-output Machine Translation" accepted at EMNLP-WNGT workshop!

[September 2019] Teaching Assistant for Algorithms for NLP (Fall 2019).

[August 2019] Paper on "Topics to Avoid: Demoting Latent Confounds in Text Classification" accepted at EMNLP 2019!

[May 2019] Headed to Facebook for the summer as a research intern in their conversational AI team in Menlo Park. 

[December 2018] Paper on "Von Mises-Fisher Loss for Training Sequence to Sequence Models with Continuous Outputs" accepted at ICLR 2019!

[October 2018] Gave my first-ever lecture, on Structural Classification, in Algorithms for NLP.

[September 2018] Teaching Assistant for Algorithms for NLP (Fall 2018).

[August 2018] Gave a talk about my research on Machine Translation with Continuous Outputs at LTI's Student Research Symposium.

[August 2017] Headed to CMU LTI to start my PhD.