SustaiNLP 2021
Second Workshop on Simple and Efficient Natural Language Processing
Accepted Papers
BioCopy: A Plug-And-Play Span Copy Mechanism in Seq2Seq Models
Yi Liu, Guoan Zhang, Puning Yu, Jianlin Su and Shengfeng Pan
Combining Lexical and Dense Retrieval for Computationally Efficient Multi-hop Question Answering
Georgios Sidiropoulos, Nikos Voskarides, Svitlana Vakulenko and Evangelos Kanoulas
Countering the Influence of Essay Length in Neural Essay Scoring
Sungho Jeon and Michael Strube
Distiller: A Systematic Study of Model Distillation Methods in Natural Language Processing
Haoyu He, Xingjian Shi, Jonas Mueller, Sheng Zha, Mu Li and George Karypis
Efficient Domain Adaptation of Language Models via Adaptive Tokenization
Vin Sachidananda, Jason Scott Kessler and Yi-An Lai
Evaluating the carbon footprint of NLP methods: a survey and analysis of existing tools
Nesrine Bannour, Sahar Ghannay, Aurélie Névéol and Anne-Laure Ligozat
Hyperparameter Power Impact in Transformer Language Model Training
Lucas Høyberg Puvis de Chavannes, Mads Guldborg Kjeldgaard Kongsbak, Timmie Rantzau and Leon Derczynski
Improving Synonym Recommendation Using Sentence Context
Maria Glenski, William I. Sealy, Kate Miller and Dustin Arendt
Learning to Rank in the Age of Muppets: Effectiveness–Efficiency Tradeoffs in Multi-Stage Ranking
Yue Zhang, ChengCheng Hu, Yuqi Liu, Hui Fang and Jimmy Lin
Length-Adaptive Transformer: Train Once with Length Drop, Use Anytime with Search
Gyuwan Kim and Kyunghyun Cho
Limitations of Knowledge Distillation for Zero-shot Transfer Learning
Saleh Soltan, Haidar Khan and Wael Hamza
Logistic Regression Trained on Learner Data Outperformed Neural Language Models in Unsupervised Automatic Readability Assessment
Yo Ehara
Low Resource Quadratic Forms for Knowledge Graph Embeddings
Zachary Zhou, Jeffery Kline, Devin Conathan and Glenn Fung
Memory-efficient Transformers via Top-k Attention
Ankit Gupta, Guy Dar, Shaya Goodman, David Ciprut and Jonathan Berant
On the Role of Corpus Ordering in Language Modeling
Ameeta Agrawal, Suresh Singh, Lauren Schneider and Michael Samuels
Semantic Categorization of Social Knowledge for Commonsense Question Answering
Gengyu Wang, Xiaochen Hou, Diyi Yang, Kathleen McKeown and Jing Huang
Shrinking Bigfoot: Reducing wav2vec 2.0 footprint
Zilun Peng, Akshay Budhkar, Ilana Tuil, Jason Levy, Parinaz Sobhani, Raphael Cohen and Jumana Nassour
Simple and Efficient ways to Improve REALM
Vidhisha Balachandran, Ashish Vaswani, Yulia Tsvetkov and Niki Parmar
Speeding Up Transformer Training By Using Dataset Subsampling - An Exploratory Analysis
Lovre Torbarina, Velimir Mihelčić, Bruno Šarlija, Lukasz Roguski and Željko Kraljević
Unsupervised Contextualized Document Representation
Ankur Gupta and Vivek Gupta