Extended submission deadline: Jun 30
Constructionist approaches to language posit that all linguistic knowledge needed for language comprehension and production can be captured as a network of form-meaning mappings, called constructions. Construction Grammars (CxGs) do not distinguish between words and grammar rules, but allow for mappings between forms and meanings of arbitrary complexity and degree of abstraction. CxGs are thereby able to uniformly capture the compositional and non-compositional aspects of language use, making the theory particularly attractive to researchers in the field of Natural Language Processing (NLP). CxG theories, for example, can serve as a valuable ‘lens’ to assess and investigate the abilities of today’s large language models, which lack explicit, theoretically grounded linguistic insights. At the same time, techniques from the field of NLP are often employed for the further development and scaling of CxG theories and applications.
Prof Adele Goldberg, Professor of Psychology, Princeton University
Prof Thomas Hoffmann, Professor of English Language and Linguistics, Catholic University of Eichstätt-Ingolstadt
Prof Laura A. Michaelis, Professor of Linguistics, University of Colorado Boulder
This workshop aims to bring together researchers across theory and practice from the two complementary perspectives of Construction Grammar and NLP to explore how CxG approaches can both inform and benefit from NLP methods, with an emphasis on LLMs. To this end, we invite original research papers on a broad spectrum of topics, including but not limited to:
Contributions to CxG linguistic theory
Formalisms for Construction Grammars
Natural Language Understanding (NLU)
Opinion pieces on the interplay between CxGs and NLP
Constructions and Language Models (Mechanistic Interpretability, BERTology, probing and evaluation of LLMs)
Resources: Constructicons and corpora annotated for Construction Grammar
Construction Grammar learning and adaptation
Applications
The 2nd CxGs+NLP workshop will be co-located with the 16th International Conference on Computational Semantics (IWCS), organized by the Heinrich Heine University (HHU) in Düsseldorf, Germany. The workshop will be a full day on 24 September 2025. Additionally, we will be hosting a community-building event in Düsseldorf on 25 September 2025, including panel discussions and breakout sessions on how to organize CxG community resources.
We expect the workshop to be in-person only, but are awaiting details on a possible hybrid presentation option.
Jun 30: extended submission deadline
Jun 06: original submission deadline
Aug 01: notification of acceptance, registration opens
Aug 22: camera-ready papers due
Sep 22-23: IWCS main conference
Sep 24: workshop
Sep 25: community-building event
Two types of submission are solicited: long papers and short papers. Long papers should describe original research and must not exceed 8 pages. Short papers (typically system or project descriptions, or ongoing research) must not exceed 4 pages. Acknowledgments, references, a limitations section (optional), an ethics statement (optional), and a technical appendix (optional, not subject to reviewing) do not count towards the page limit.
Accepted papers will be granted one extra page in the camera-ready version and will be published in the conference proceedings in the ACL Anthology. Additionally, non-archival submissions will be considered for acceptance into the workshop as in-person poster presentations only.
CxGs+NLP 2 papers should be formatted following the common two-column structure used by IWCS 2021 (borrowed from ACL 2021). Please use the style files or the Overleaf template linked below.
Style files: https://iwcs2021.github.io/download/iwcs2021-templates.zip
Overleaf template: https://www.overleaf.com/latex/templates/instructions-for-iwcs-2021-proceedings/fpnsyxqqpfbw
Double submission policy: We will accept submissions that are under consideration elsewhere, but require that authors notify us of where else the work has been submitted and inform us if it is accepted for publication elsewhere.
Submission site: https://openreview.net/group?id=IWCS/2025/Workshop/CxGs_NLP
As reviewing will be double blind, papers must not include authors’ names and affiliations. Furthermore, self-references or links (such as GitHub repositories) that reveal the authors’ identity, e.g., “We previously showed (Smith, 1991) …”, must be avoided. Instead, use citations such as “Smith previously showed (Smith, 1991) …”. Papers that do not conform to these requirements will be rejected without review. Papers should not refer, for further detail, to documents that are not available to the reviewers. For example, do not omit or redact important citation information to preserve anonymity. Instead, use third person or named reference to this work, as described above (“Smith showed” rather than “we showed”). If important citations are not available to reviewers (e.g., awaiting publication), these papers should be anonymised and included in the appendix. They can then be referenced from the submission without compromising anonymity. Papers may be accompanied by a resource (software and/or data) described in the paper, but these resources should also be anonymized.
Claire Bonial is a computational linguist specializing in the murky world of event semantics. In her efforts to make this world computationally tractable, she has collaborated on and been a foundational part of several important NLP lexical resources, including PropBank, VerbNet, and Abstract Meaning Representation. A focused contribution to these projects has been her theoretical and psycholinguistic research on both the syntax and semantics of English light verb constructions (e.g., “take a walk”, “make a mistake”). Bonial received her Ph.D. in Linguistics and Cognitive Science in 2014 from the University of Colorado Boulder and began her current position in the Content Understanding Branch at the Army Research Laboratory (ARL) in 2015. Since joining ARL, she has expanded her research portfolio to include multi-modal representations of events (text and imagery/video), human-robot dialogue, and misinformation detection.
Harish's long-term research goals focus on investigating methods of incorporating high-level cognitive capabilities into models. In the short and medium term, his research focuses on the infusion of world knowledge and common sense into pre-trained language models to improve performance on complex tasks such as multi-hop question answering, conversational agents, and social media analysis. Harish completed his PhD in Question Answering at the University of Birmingham in 2019 and began his current post as Lecturer in Artificial Intelligence at the University of Bath in 2022. In the interim, he worked on research related to MWEs, Construction Grammar and language modelling and was the principal organiser of the SemEval 2022 Task on MWEs.
Katrien Beuls is an assistant professor in artificial intelligence at the University of Namur (Belgium). For many years, she has been a leading figure in the development of the Fluid Construction Grammar (FCG) framework. Apart from her contributions to the core of FCG, she has been involved in the application of computational construction grammars across many domains, including language tutoring, intelligent cooking assistants, and online opinion observatories. Her ongoing research is mainly concerned with agent-based models of the emergence, evolution, and acquisition of human-like languages in machines, adopting a construction grammar perspective. She has previously organised several workshops, including the 2017 AAAI Spring Symposium on Computational Construction Grammar and Natural Language Understanding and the IJCAI 2022 workshop on Semantic Techniques for Narrative-Based Understanding.
Paul Van Eecke (°1990) obtained master's degrees in Artificial Intelligence (2013, summa cum laude) and Linguistics (2012, summa cum laude) from KU Leuven, and a PhD degree in Computer Science (2018) from the Artificial Intelligence Laboratory of the Vrije Universiteit Brussel. From 2014 until 2017, he worked as an assistant researcher at Sony Computer Science Laboratories Paris, where he carried out research on implementing construction grammars and became one of the main developers of the Fluid Construction Grammar (FCG) system.
He is currently a tenure track research professor (Senior Research Fellow) in ‘humanlike communication in autonomous agents’ at the VUB Artificial Intelligence Laboratory. His main research interests include the emergence and evolution of language through communicative interactions, computational construction grammar and its applications, and the use of a combination of symbolic and subsymbolic AI techniques for solving advanced perception, reasoning and communication tasks.
I am a computational linguistics and natural language processing researcher at the Army Research Lab in Adelphi, Maryland. Previously, I have worked in the NERT lab, and been a member of GUCL at Georgetown. My research focuses on computational semantics and semantic parsing. My interests include deep learning, natural language understanding, and mathematical linguistics. I am particularly interested in strategies for computationally modelling word and sentence meaning in frameworks like Abstract Meaning Representation.
I am a postdoc at UT Austin Linguistics, working with Kyle Mahowald, funded by a Fellowship from the German Academic Exchange Service (DAAD).
I completed my PhD at the Center for Information and Language Processing at LMU Munich where my thesis was about Computational Approaches to Construction Grammar and Morphology. My supervisor was Hinrich Schütze. Previously, I completed my B.Sc. and M.Sc. degrees in Computational Linguistics and Computer Science at LMU, with scholarships from the German Academic Scholarship Foundation and the Max Weber Program. My M.Sc. thesis, supervised by Hinrich Schütze, was on the application of Complementary Learning Systems Theory to NLP. I spent the final year of my bachelor's degree as a visiting student at Homerton College, University of Cambridge, where I wrote my B.Sc. thesis on Character-Level RNNs under the supervision of Anna Korhonen.
Researching the challenges and risks of using large language models in Natural Language Processing (NLP), focusing on addressing these issues through the use of explainable NLP.
There has been significant progress in NLP in recent years, particularly with the development of pre-trained and large language models. These models are based on neural networks and are trained on massive amounts of text data, allowing them to learn the patterns and structure of language in a way that mimics human understanding. However, the increased use of these types of models has highlighted many challenges in NLP that still need to be addressed. Issues such as bias and lack of model interpretability are important considerations in the development and deployment of NLP models.