Lajanugen Logeswaran
I am a Research Scientist at LG AI Research. I received my PhD from the Computer Science Department at the University of Michigan. My research interests lie in Machine Learning and Natural Language Processing, with a focus on representation learning, learning from limited supervision, and language grounding.
News
03/13/24: 3 papers accepted at NAACL 2024
10/10/23: Invited talk at UM on Task Planning with Language Models
10/07/23: 4 papers accepted at EMNLP 2023
05/02/23: 3 papers accepted at ACL 2023
Talks
May 2024: Guiding Language Models to be Better Agents (Frontiers of AI in Business and Society @ UIC)
Oct 2023: Task Planning with Large Language Models (University of Michigan AI Seminar)
Jul 2022: Few-Shot Subgoal Planning with Language Models (talk starts at 49:16)
Aug 2019: Zero-Shot Entity Linking by Reading Entity Descriptions
Feb 2019: Ann Arbor Deep Learning Event
Selected Publications
Code Models are Zero-shot Precondition Reasoners [paper]
Lajanugen Logeswaran, Sungryull Sohn, Yiwei Lyu, Anthony Zhe Liu, Dong-Ki Kim, Dongsub Shim, Moontae Lee, Honglak Lee
NAACL 2024 (Also at NeurIPS FMDM Workshop 2023)
You don't need a personality test to know these models are unreliable: Assessing the Reliability of Large Language Models on Psychometric Instruments
Bangzhao Shu, Lechen Zhang, Minje Choi, Lavinia Dunagan, Lajanugen Logeswaran, Moontae Lee, Dallas Card, David Jurgens
NAACL 2024
Understanding the Capabilities and Limitations of Large Language Models for Cultural Commonsense
Siqi Shen, Lajanugen Logeswaran, Moontae Lee, Honglak Lee, Soujanya Poria, Rada Mihalcea
NAACL 2024
Prospector: Improving LLM Agents with Self-Asking and Trajectory Ranking
Byoungjip Kim, Youngsoo Jang, Lajanugen Logeswaran, Geon-Hyeong Kim, Yu Jin Kim, Honglak Lee, Moontae Lee
NeurIPS FMDM Workshop 2023
A Picture is Worth a Thousand Words: Language Models Plan from Pixels [paper]
Anthony Liu, Lajanugen Logeswaran, Sungryull Sohn, Honglak Lee
EMNLP 2023
TOD-Flow: Modeling the Structure of Task-Oriented Dialogues
Sungryull Sohn, Yiwei Lyu, Anthony Zhe Liu, Lajanugen Logeswaran, Dong-Ki Kim, Dongsub Shim, Honglak Lee
EMNLP 2023
GRACE: Discriminator-Guided Chain-of-Thought Reasoning [paper]
Muhammad Khalifa, Lajanugen Logeswaran, Moontae Lee, Honglak Lee, Lu Wang
Findings of EMNLP 2023
Merging Generated and Retrieved Knowledge for Open-Domain QA [paper]
Yunxiang Zhang, Muhammad Khalifa, Lajanugen Logeswaran, Moontae Lee, Honglak Lee, Lu Wang
EMNLP 2023
Unsupervised Task Graph Generation from Instructional Video Transcripts [paper]
Lajanugen Logeswaran, Sungryull Sohn, Yunseok Jang, Moontae Lee, Honglak Lee
Findings of ACL 2023
Also at ACL WNU Workshop 2023
Multimodal Subtask Graph Generation from Instructional Videos [paper]
Yunseok Jang, Sungryull Sohn, Lajanugen Logeswaran, Tiange Luo, Moontae Lee, Honglak Lee
ICLR MRL Workshop 2023
Few-shot Reranking for Multi-hop QA via Language Model Prompting [paper]
Muhammad Khalifa, Lajanugen Logeswaran, Moontae Lee, Honglak Lee, Lu Wang
ACL 2023
Knowledge Unlearning for Mitigating Privacy Risks in Language Models [paper]
Joel Jang, Dongkeun Yoon, Sohee Yang, Sungmin Cha, Moontae Lee, Lajanugen Logeswaran, Minjoon Seo
ACL 2023
Exploring the Benefits of Training Expert Language Models over Instruction Tuning [paper]
Joel Jang, Seungone Kim, Seonghyeon Ye, Doyoung Kim, Lajanugen Logeswaran, Moontae Lee, Kyungjae Lee, Minjoon Seo
ICML 2023
Learning Compositional Tasks with Language Instructions [paper]
Lajanugen Logeswaran, Wilka Carvalho, Honglak Lee
AAAI 2023
Also at NeurIPS Workshop on DeepRL 2021
Few-shot Subgoal Planning with Language Models [paper]
Lajanugen Logeswaran, Violet Fu, Moontae Lee, Honglak Lee
NAACL 2022
Also at ACL CSRR workshop 2022
Few-shot Sequence Learning with Transformers [paper]
Lajanugen Logeswaran, Ann Lee, Myle Ott, Honglak Lee, Marc'Aurelio Ranzato, Arthur Szlam
NeurIPS Workshop on Meta-Learning (MetaLearn 2020)
Zero-Shot Entity Linking by Reading Entity Descriptions [paper][code]
Lajanugen Logeswaran, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, Jacob Devlin, Honglak Lee
ACL 2019
Nominated for best paper
Content Preserving Text Generation with Attribute Controls [paper]
Lajanugen Logeswaran, Honglak Lee, Samy Bengio
NIPS 2018
An Efficient Framework for Learning Sentence Representations [paper][code]
Lajanugen Logeswaran, Honglak Lee
ICLR 2018
Sentence Ordering and Coherence Modeling using Recurrent Neural Networks [paper][data]
Lajanugen Logeswaran, Honglak Lee, Dragomir Radev
AAAI 2018
Generative Adversarial Text-to-Image Synthesis [paper]
Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, Honglak Lee
ICML 2016
Professional Experience
Research Scientist, LG AI Research (Ann Arbor), Jul 2021 - Present
Research Intern, Facebook AI Research (New York), May - Aug 2019
Research Intern, Google Research (Seattle), May 2018 - Jan 2019
Research Intern, Google Brain (Mountain View), Feb - Jun 2017
Awards & Honors
Rackham Conference Travel Grant (ICML 2016, AAAI 2018, NIPS 2018)
Scholarship to attend Deep Learning Summer School (2016)
Prof KKYW Perera award for highest GPA in final year (2014)
IEEEXtreme 24-hour Programming Competition - 24th place (2013)
INexus International Robot Competition - 3rd place (2012)
Bronze medal at the 50th International Mathematical Olympiad (2009)
Gold medal at Sri Lankan Mathematics Olympiad (2007)