List of Accepted Papers

A Tale Of Two Laws Of Semantic Change: Predicting Synonym Changes With Distributional Semantic Models

Bastien Lietard, Mikaela Keller and Pascal Denis


Adverbs, Surprisingly

Dmitry Nikolaev, Collin Baker, Miriam R. L. Petruck and Sebastian Padó


Analyzing Syntactic Generalization Capacity Of Pre-Trained Language Models On Japanese Honorific Conversion

Ryo Sekizawa and Hitomi Yanaka


Are Language Models Sensitive To Semantic Attraction? A Study On Surprisal

Yan Cong, Emmanuele Chersoni, Yu-Yin Hsu and Alessandro Lenci


Arithmetic-Based Pretraining – Improving Numeracy Of Pretrained Language Models

Dominic Petrak, Nafise Sadat Moosavi and Iryna Gurevych


Can Pretrained Language Models Derive Correct Semantics From Corrupt Subwords Under Noise?

Xinzhe Li, Ming Liu and Shang Gao


Can Sequence-To-Sequence Transformers Naturally Understand Sequential Instructions?

Xiang Zhou, Aditya Gupta, Shyam Upadhyay, Mohit Bansal and Manaal Faruqui


CRAPES: Cross-Modal Annotation Projection For Visual Semantic Role Labeling

Abhidip Bhattacharyya, Martha Palmer and Christoffer Heckman


Does Character-Level Information Always Improve DRS-Based Semantic Parsing?

Tomoya Kurosawa and Hitomi Yanaka


Empirical Sufficiency Lower Bounds For Language Modeling With Locally-Bootstrapped Semantic Structures

Jakob Prange and Emmanuele Chersoni


Estimating Semantic Similarity Between In-Domain and Out-Of-Domain Samples

Rhitabrat Pokharel and Ameeta Agrawal


Evaluating Factual Consistency Of Texts With Semantic Role Labeling

Jing Fan, Dennis Aumiller and Michael Gertz


Event Semantic Knowledge In Procedural Text Understanding

Ghazaleh Kazeminejad and Martha Palmer


Functional Distributional Semantics At Scale

Chun Hei Lo, Hong Cheng, Wai Lam and Guy Emerson


Generative Data Augmentation For Aspect Sentiment Quad Prediction

An Wang, Junfeng Jiang, Youmi Ma, Ao Liu and Naoaki Okazaki


Guiding Zero-Shot Paraphrase Generation With Fine-Grained Control Tokens

Teemu Vahtola, Mathias Creutz and Jörg Tiedemann


How Are Idioms Processed Inside Transformer Language Models?

Ye Tian, Isobel James and Hye Son


Improving Toponym Resolution With Better Candidate Generation, Transformer-Based Reranking, and Two-Stage Resolution

Zeyu Zhang and Steven Bethard


Including Facial Expressions In Contextual Embeddings For Sign Language Generation

Carla Viegas, Mert Inan, Lorna Quandt and Malihe Alikhani


Is Shortest Always Best? The Role Of Brevity In Logic-To-Text Generation

Eduardo Calò, Jordi Levy, Albert Gatt and Kees van Deemter


JSEEGraph: Joint Structured Event Extraction As Graph Parsing

Huiling You, Lilja Øvrelid and Samia Touileb


KGLM: Integrating Knowledge Graph Structure In Language Models For Link Prediction

Jason Youn and Ilias Tagkopoulos


Language Models Are Not Naysayers: An Analysis Of Language Models On Negation Benchmarks

Thinh Hung Truong, Timothy Baldwin, Karin Verspoor and Trevor Cohn


Leverage Points In Modality Shifts: Comparing Language-Only and Multimodal Word Representations

Lisa Bylinina, Denis Paperno and Alexey Tikhonov


Leveraging Active Learning To Minimise SRL Annotation Across Corpora

Skatje Myers and Martha Palmer


LEXPLAIN: Improving Model Explanations Via Lexicon Supervision

Orevaoghene Ahia, Hila Gonen, Vidhisha Balachandran, Yulia Tsvetkov and Noah A. Smith


Limits For Learning With Language Models

Nicholas Asher, Swarnadeep Bhar, Akshay Chaturvedi, Julie Hunter and Soumya Paul


Monolingual Phrase Alignment As Parse Forest Mapping

Sora Kadotani and Yuki Arase


Not All Counterhate Tweets Elicit The Same Replies: A Fine-Grained Analysis

Abdullah Albanyan, Ahmed Hassan and Eduardo Blanco


PCFG-Based Natural Language Interface Improves Generalization For Controlled Text Generation

Jingyu Zhang, James Glass and Tianxing He


Pets Can Be Vets and Get Meds: Experiments With Vague Euphemistic Terms and Multilingual Euphemistic Disambiguation

Patrick Lee, Iyanuoluwa Shode, Alain Chirino Trujillo, Yuan Zhao, Olumide Ojo, Diana Cuervas Plancarte, Anna Feldman and Jing Peng


Probing Neural Language Models For Understanding Of Words Of Estimative Probability

Damien Sileo and Marie-Francine Moens


Probing Out-Of-Distribution Robustness Of Language Models With Parameter-Efficient Transfer Learning

Hyunsoo Cho, Choonghyun Park, Junyeob Kim, Hyuhng Joon Kim, Kang Min Yoo and Sang-Goo Lee


Query Generation Using GPT-3 For CLIP-Based Word Sense Disambiguation For Image Retrieval

Xiaomeng Pan, Zhousi Chen and Mamoru Komachi


Representation Of Lexical Stylistic Features In Language Models' Embedding Space

Qing Lyu, Marianna Apidianaki and Chris Callison-Burch


Revisiting Syntax-Based Approach In Negation Scope Resolution

Asahi Yoshida, Yoshihide Kato and Shigeki Matsubara


Robust Integration Of Contextual Information For Cross-Target Stance Detection

Andreas Waldis, Tilman Beck and Iryna Gurevych


Scalable Performance Analysis For Vision-Language Models

Santiago Castro, Oana Ignat and Rada Mihalcea


Seeking Clozure: Robust Hypernym Extraction From BERT With Anchored Prompts

Chunhua Liu, Trevor Cohn and Lea Frermann


Semantically-Informed Hierarchical Event Modeling

Shubhashis Roy Dipta, Mehdi Rezaee and Francis Ferraro


Syntax and Semantics Meet In The "Middle": Probing The Syntax-Semantics Interface Of LMs Through Agentivity

Lindia Tjuatja, Emmy Liu, Lori Levin and Graham Neubig


Testing Paraphrase Models On Recognising Sentence Pairs At Different Degrees Of Semantic Overlap

Qiwei Peng, David Weir and Julie Weeds


True Detective: A Challenging Benchmark For Deep Abductive Reasoning In Large Language Models

Maksym Del and Mark Fishel


When Truth Matters – Addressing Pragmatic Categories In Natural Language Inference

Reto Gubelmann, Aikaterini-Lida Kalouli, Christina Niklaus and Siegfried Handschuh


„Mann“ Is To "Donna" As「国王」Is To « Reine »: Adapting The Analogy Task For Multilingual and Contextual Embeddings

Timothee Mickus, Eduardo Calò, Léo Jacqmin, Denis Paperno and Mathieu Constant