SustaiNLP 2023

Fourth Workshop on Simple and Efficient Natural Language Processing


Organizing Committee


Nafise Sadat Moosavi is an assistant professor in the Department of Computer Science at the University of Sheffield, working on developing simple, robust, and efficient models. She co-founded SustaiNLP and co-organized its first three editions.

Iryna Gurevych is a professor of Computer Science and director of the Ubiquitous Knowledge Processing (UKP) Lab at the Technical University of Darmstadt (TU Darmstadt) in Germany. Her main research interest is machine learning for large-scale language understanding, including text analysis for the social sciences and humanities. She is one of the co-founders of the field of computational argumentation. Her work has received numerous awards, including a highly competitive Lichtenberg Professorship Award from the Volkswagen Foundation and an ERC Advanced Grant in 2022. Iryna was President of SIGDAT and is ACL President in 2023. She served as Program Co-Chair of ACL 2018 and as General Chair of *SEM 2020.

Yufang Hou is a research scientist at IBM Research and was a member of the IBM Project Debater team. Her research interests include anaphora resolution, computational argumentation, and information extraction from scientific literature. She served as an area chair at EACL 2021 and is a member of the standing review committee of TACL.

Gyuwan Kim is a computer science Ph.D. student at UC Santa Barbara, working on machine learning for natural language processing. Previously, he worked at NAVER as a research scientist and studied at Seoul National University. His main research interest is improving the efficiency and robustness of NLP models through better algorithmic solutions. He won the best paper award at the SustaiNLP 2021 workshop.

Young Jin Kim is a Principal Researcher at Microsoft, where he develops machine learning models with state-of-the-art techniques. His recent research focuses on designing efficient and effective algorithms and model architectures for large-scale language models. He received his Ph.D. from the Georgia Institute of Technology for his research in deep learning and high-performance computing.

Tal Schuster is a Senior Research Scientist at Google Research. In his work, Tal develops machine learning models that are more robust, trustworthy, and efficient. He is currently focusing on adaptive compute methods: dynamically modifying a model's behavior with per-example confidence measures. Tal has published several papers at top NLP and ML conferences on adaptive-compute language models for classification and text generation, introducing uncertainty-based early-exit gates with controllable efficiency gains. Tal completed his Ph.D. at MIT, advised by Regina Barzilay, and his M.Sc. at Tel Aviv University, advised by Lior Wolf.

Ameeta Agrawal is an Assistant Professor in the Department of Computer Science at Portland State University, where she leads the Natural Language Processing lab. Prior to this, she obtained her Ph.D. from York University. Her research focuses on developing efficient and effective NLP models that work for diverse communities. Her work on corpus ordering for pretraining language models received an honorable mention at SustaiNLP 2021.