Construction Grammars and NLP (CxGs+NLP) Workshop

The first CxGs+NLP Workshop, 2023

Motivation and Call for Papers

Construction Grammar (CxG) approaches recognize all levels of linguistic structure as contributing meaning, which makes them a powerful tool for addressing a wide variety of linguistic problems, from determining parts of speech to characterizing aspectual properties. How we approach these problems affects a variety of related NLP and NLU applications, including parsing, question answering, interactive information extraction, machine translation, and language grounding in robotics. For many applications in the traditional NLP pipeline, the assumption that meaning is tied to individual lexical items and composed according to rules leaves some language phenomena unaccounted for. CxGs offer theoretical solutions to such phenomena and have made headway in the development of computational resources such as constructicons, but there is more to do in fruitfully bringing CxG theories to NLP applications.

At the same time, recent advances in NLP, driven in large part by the introduction of pre-trained language models, have led to the development of computational methods that are independent of any linguistic grounding. Although some work attempts to understand the cognitive and linguistic plausibility of these models, such work remains in its infancy.

Given this dichotomy between the recent direction of NLP research and the closely related field of CxGs, we are excited to announce the first CxGs + NLP workshop, aimed at bringing together researchers in the fields of Natural Language Processing and Construction Grammar so as to jump-start what we believe is an important conversation between these two complementary yet currently disparate fields.

Our aim is to bring together theoretical and computational researchers interested in CxG approaches and to encourage work examining how theoretical research can inform computational approaches and applications, whether existing or needed in the future. We therefore invite original research papers on a range of topics, including but not limited to:

Venue

The Georgetown University Round Table on Linguistics (GURT) is a peer-reviewed annual linguistics conference held continuously since 1949 at Georgetown University in Washington, DC, with topics and co-located events varying from year to year. Under the overarching theme of ‘Computational and Corpus Linguistics’, GURT 2023 will feature four events: the Universal Dependencies Workshop (UDW), Depling, Treebanks and Linguistic Theory (TLT), and CxGs+NLP. These workshops and conferences focus on computational and corpus approaches to syntax while also covering theoretical issues. All talks from all events will take place in a single (non-parallel) plenary session, with the papers from each event presented contiguously. The goal of co-locating these events is to promote cross-fertilization of ideas across subcommunities. Proceedings will be published separately for each event and will be available in the ACL Anthology.

Please see the GURT website here: https://gurt.georgetown.edu/

In order to support rich discussions and networking with minimal overhead and cost, GURT will be primarily an in-person event; we will, however, accommodate a limited number of live/synchronous remote presentations, prioritizing those with circumstances that prevent travel. University policies regarding COVID safety will be in force during the event.

Georgetown University is located in a historic neighborhood in the heart of the nation’s capital. The city is a premier tourist destination, and the region is served by Reagan National (DCA), Dulles (IAD), and Baltimore-Washington (BWI) airports.

Important dates:



All deadlines are 11:59 pm UTC-12 ("anywhere on Earth").

Submissions

We accept two types of submissions, long papers and short papers, following the ACL policy on submission, review, and citation: https://www.aclweb.org/adminwiki/index.php?title=ACL_Policies_for_Submission,_Review_and_Citation

All papers accepted for presentation at the workshop will be included in the CxGs + NLP 2023 proceedings volume, which will be part of the ACL Anthology. Additionally, non-archival short papers will be considered for acceptance into the workshop as in-person poster presentations only; these should be submitted by email directly to the organizers rather than through the OpenReview conference management system, and they will not undergo double-blind review.

Long papers may consist of up to eight (8) pages of main content; short papers may consist of up to four (4) pages of main content. Final versions will be given one additional page of content so that reviewers' comments can be taken into account. Limits on main content do not apply to references or (optional) ethics statements. After the references, the submission may include appendices for supplementary content not necessary for evaluating the contributions of the paper (reviewers will not be required to review the appendices). Submissions should be made electronically through the OpenReview conference management system: https://openreview.net/group?id=georgetown.edu/GURT/2023/Conference

Submissions are open to all, and are to be submitted anonymously. All papers will be refereed through a double-blind peer review process with final acceptance decisions made by the workshop organizers.  Submissions may be selected for publication in a GURT venue other than CxGs + NLP at the discretion of the organizers.

Paper Submission and Templates: 

Submission is electronic, using the OpenReview conference management system (see the link above). Both long and short papers must follow the ACL two-column format, using the supplied official style files: https://github.com/acl-org/acl-style-files

Please do not modify these style files, nor should you use templates designed for other conferences. 

Double submission policy: We will accept submissions that have been or will be submitted elsewhere, but require that the authors notify us, including information on where else they are submitting. We also require that authors withdraw work that will be published elsewhere (no double publication).

Submissions that violate these requirements will be rejected without review.

Instructions For Double-Blind Review:

As reviewing will be double-blind, papers must not include authors’ names and affiliations. Furthermore, self-references or links (such as GitHub repositories) that reveal the authors’ identity, e.g., “We previously showed (Smith, 1991) …”, must be avoided. Instead, use citations such as “Smith previously showed (Smith, 1991) …”. Papers that do not conform to these requirements will be rejected without review. Papers should not refer, for further detail, to documents that are not available to the reviewers. For example, do not omit or redact important citation information to preserve anonymity. Instead, use third-person or named reference to this work, as described above (“Smith showed” rather than “we showed”). If important cited work is not available to reviewers (e.g., it is awaiting publication), it should be anonymized and included in the appendix; it can then be referenced from the submission without compromising anonymity. Papers may be accompanied by a resource (software and/or data) described in the paper, but these resources should also be anonymized.


CxGs+NLP Workshop Chairs

Claire Bonial

Claire Bonial is a computational linguist specializing in the murky world of event semantics. In her efforts to make this world computationally tractable, she has collaborated on and been a foundational part of several important NLP lexical resources, including PropBank, VerbNet, and Abstract Meaning Representation. A focused contribution to these projects has been her theoretical and psycholinguistic research on both the syntax and semantics of English light verb constructions (e.g., “take a walk”, “make a mistake”). Bonial received her Ph.D. in Linguistics and Cognitive Science in 2014 from the University of Colorado Boulder and began her current position in the Content Understanding Branch at the Army Research Laboratory (ARL) in 2015. Since joining ARL, she has expanded her research portfolio to include multi-modal representations of events (text and imagery/video), human-robot dialogue, and misinformation detection.

https://www.researchgate.net/profile/Claire-Bonial

Harish Tayyar Madabushi

Harish's long-term research goals focus on investigating methods for incorporating high-level cognitive capabilities into models. In the short and medium term, his research focuses on the infusion of world knowledge and common sense into pre-trained language models to improve performance on complex tasks such as multi-hop question answering, conversational agents, and social media analysis. Harish completed his PhD on Question Answering at the University of Birmingham in 2019 and began his current post as Lecturer in Artificial Intelligence at the University of Bath in 2022. In the interim, he worked on research related to MWEs, Construction Grammar, and language modelling, and was the principal organiser of the SemEval 2022 shared task on MWEs.

https://www.harishtayyarmadabushi.com/

Questions?

Don't hesitate to post questions to our Google Group.