SustaiNLP 2022

Third Workshop on Simple and Efficient Natural Language Processing


Organizing Committee


Nafise Sadat Moosavi is an assistant professor in the Computer Science Department at the University of Sheffield, working on developing simple, robust, and efficient models. She co-founded SustaiNLP and co-organized the first two workshops.

Iryna Gurevych is a professor of Computer Science and director of the Ubiquitous Knowledge Processing (UKP) Lab at the Technical University (TU) of Darmstadt in Germany. Her main research interest is machine learning for large-scale language understanding, including text analysis for the social sciences and humanities. She is one of the co-founders of the field of computational argumentation, which has many applications, such as the identification of fake news and decision-making support. Iryna's work has received numerous awards, including a highly competitive Lichtenberg-Professorship Award from the Volkswagen Foundation and a DFG Emmy Noether Young Researcher's Excellence Career Award. Iryna was elected President of SIGDAT, one of the most important scientific bodies in the ACL community. She was program co-chair of ACL 2018, the Annual Meeting of the Association for Computational Linguistics, and General Chair of *SEM 2020, the 9th Joint Conference on Lexical and Computational Semantics.


Angela Fan is a Ph.D. student at INRIA Nancy and a researcher at Facebook AI Research Paris, working on text generation and efficient inference. She previously chaired the annual FAIR conference. She co-founded and co-organized the first SustaiNLP workshop.


Yufang Hou is a research scientist at IBM Research. She was a member of IBM Project Debater. Her research interests include anaphora resolution, computational argumentation, and information extraction from scientific literature. She served as an area chair at EACL 2021 and is a member of the standing review committee of TACL.

Zornitsa Kozareva leads and manages language efforts at Facebook AI Research. Before that, Dr. Kozareva led and managed search intelligence groups at Google, and managed the Amazon AWS Deep Learning group that built and launched the Amazon Comprehend and Amazon Lex cloud services. She was also a Senior Manager at Yahoo!, leading the Query Processing group for Mobile Search and Product Ads. Before moving to industry, Dr. Kozareva wore an academic hat as a Research Professor in the Computer Science Department of the University of Southern California, where she spearheaded multi-million-dollar research grants funded by DARPA and IARPA. Her work has been featured in press outlets such as Forbes, VentureBeat, GizBot, and NextBigWhat. Dr. Kozareva is a recipient of the John Atanasoff Award, given by the President of the Republic of Bulgaria in 2016 for her contributions and impact in science, education, and industry; the Yahoo! Labs Excellence Award (2014); and the RANLP Young Scientist Award (2011).

Sujith Ravi is Founder & CEO of SliceX AI. Previously, he was the Director of Amazon Alexa AI, where he led efforts to build multimodal neural conversational AI at scale. Prior to that, Dr. Ravi founded and headed multiple ML and NLP teams in Google AI spanning large-scale semi-supervised learning, graph & deep learning, and on-device machine learning for products used by billions of people in Search, Ads, Assistant, Gmail, Photos, Android, Cloud, and YouTube. These technologies power conversational AI (e.g., Smart Reply), Web and Image Search, on-device ML in Android and Assistant, Neural Structured Learning in TensorFlow, Learn2Compress for Google Cloud, and TensorFlow Lite for edge devices. Dr. Ravi's work has been featured in the press (e.g., Wired, TechCrunch, the New York Times, New Scientist) and won Best Paper Awards at SIGDIAL 2019 and KDD 2014. Dr. Ravi was a mentor for Google Launchpad startups, Co-Chair (AI & Deep Learning) for the 2019 National Academy of Engineering (NAE) symposium, and Co-organizer and Senior/Area Chair for top-tier ML and NLP conferences such as ACL, EMNLP, NeurIPS, ICML, and AAAI.


Sasha Luccioni is a researcher working with Yoshua Bengio and others on climate change-related initiatives at Mila, including projects that aim to estimate the environmental impact of Machine Learning and to analyze financial disclosures from a climate standpoint. Her work sits at the intersection of AI and the environment, and her goal is to find ways to maximize the positive impacts of AI while minimizing the negative ones, from both a research and an application perspective. Sasha's work has been featured in various news and media outlets such as MIT Technology Review, WIRED, and the Wall Street Journal, covering both her projects on estimating the environmental impact of AI and those on reducing it. She is also a 2020 National Geographic Explorer and holds an IVADO postdoctoral scholarship.



Gyuwan Kim is a computer science Ph.D. student at UC Santa Barbara, working on machine learning for natural language processing. Previously, he worked at NAVER as a research scientist and studied at Seoul National University. His main research interest is improving the efficiency and robustness of NLP models through better algorithmic solutions. He won the best paper award at the SustaiNLP 2021 workshop.



Roy Schwartz is a senior lecturer at the School of Computer Science and Engineering at The Hebrew University of Jerusalem (HUJI). Roy studies natural language processing and artificial intelligence. Prior to joining HUJI, Roy was a postdoc (2016-2019) and then a research scientist (2019-2020) at the Allen Institute for AI and the University of Washington, where he worked with Noah A. Smith. Roy completed his Ph.D. in 2016 at the School of Computer Science and Engineering at HUJI, where he worked with Ari Rappoport. Roy's work has appeared on the cover of CACM and has been featured in, among others, the New York Times, MIT Technology Review, and Forbes.



Andreas Rücklé is an applied scientist at Amazon Search in Berlin, working at the intersection of NLP and IR. Before joining Amazon in 2021, he completed his Ph.D. at the UKP Lab, TU Darmstadt. His research interests include building efficient and scalable models and transferring models across different tasks, domains, and languages.