NLPhD Speaker Series

This is an NLP speaker series organized by PhD students and featuring PhD students. The talks are open to everyone!

The talks are held via MS Teams. You can join via this link. If you don't have a Saarland University email address, you are still very welcome, but you might encounter issues with joining. Please contact the organizers and we will add you.

Upcoming Talks:

Ben Peters (Instituto de Telecomunicações)

Date/Time: 04 May, time TBA (CET)

Title: TBA

Abstract:

TBA


Jean-Baptiste Cordonnier (EPFL)

Date/Time: 01 June at 14:00 (CET)

Title: TBA

Abstract:

TBA


Organizers:

This series is organized by members of LST at Saarland University and Saarland Informatics Campus. If you have questions or suggestions, feel free to contact us.

Past Talks:

Lieke Gelderloos (Tilburg University)

Date/Time: 13 April at 2.30 PM (CET)

Title: Active word learning through self-supervision

Abstract:

Models of cross-situational word learning typically characterize the learner as a passive observer. However, a language-learning child can actively participate in verbal and non-verbal communication. In a computational study of cross-situational word learning, we investigate whether a curious word learner that actively selects its input has an advantage over a learner that has no influence over the input it receives. We present a computational model that learns to map words to objects in images through word comprehension and production. The productive and receptive parts of the model can operate independently, but can also feed into each other. This introspective quality enables the model to learn through self-supervision and to estimate its own word knowledge, which is the basis for curious selection of input. We examine different curiosity metrics for input selection and analyze the impact of each method on the learning trajectory. A formulation of curiosity that relies on both subjective novelty and plasticity yields faster learning, robust convergence, and the best eventual performance.



Brielen Madureira (University of Potsdam)

Date/Time: 23 March at 3.00 PM (CET)

Title: Incremental Processing in the Age of Non-Incremental Encoders

Abstract:

While humans process language incrementally, the best language encoders currently used in NLP do not. Both bidirectional LSTMs and Transformers assume that the sequence to be encoded is available in full, to be processed either forwards and backwards (BiLSTMs) or as a whole (Transformers). In this talk, I will present the results of an investigation of how they behave under incremental interfaces, where partial output must be provided based on the partial input seen up to a certain time step, as may happen in interactive systems. We tested five models on various NLU datasets and compared their performance using three incremental evaluation metrics. The results support the possibility of using bidirectional encoders in incremental mode while retaining most of their non-incremental quality.
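
To make the setup concrete, here is a minimal sketch of one such incremental interface (restart-incrementality): the non-incremental tagger is simply re-run on every prefix, and the evolving partial outputs can then be scored with an incremental metric. The tag function and the edit_overhead metric below are illustrative placeholders rather than the exact models and metrics evaluated in the talk.

    # Sketch of a restart-incremental interface: re-run a full-sequence tagger
    # on every prefix of the input and collect the partial outputs.
    from typing import Callable, List

    def incremental_outputs(tokens: List[str],
                            tag: Callable[[List[str]], List[str]]) -> List[List[str]]:
        """Label every prefix of the input with a (non-incremental) tagger."""
        return [tag(tokens[:t + 1]) for t in range(len(tokens))]

    def edit_overhead(partial_outputs: List[List[str]]) -> float:
        """Share of label edits that revise earlier output rather than extend it
        (one simple incremental metric; the talk's metrics may differ)."""
        if not partial_outputs:
            return 0.0
        necessary = len(partial_outputs[-1])   # each final label needs one edit
        total, prev = 0, []
        for out in partial_outputs:
            # an edit = a position that is new or whose label changed this step
            total += sum(1 for i, lab in enumerate(out)
                         if i >= len(prev) or prev[i] != lab)
            prev = out
        return (total - necessary) / total if total else 0.0

Here, tag could wrap any pretrained sequence labeller that treats each prefix as if it were a complete input.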


Shauli Ravfogel (Bar-Ilan University)

Date/Time: 26 January at 14:30 (CET)

Title: Identifying and manipulating concept subspaces

Abstract:

While LM representations are highly nonlinear in the input text, the probing literature has demonstrated that linear classifiers can recover various human-interpretable concepts from the representations, from notions of gender to part of speech. I will present Iterative Nullspace Projection (INLP), a method for identifying subspaces within the representation that correspond to arbitrary concepts. The method is data-driven and identifies those subspaces by training multiple orthogonal classifiers to predict the concept in focus. I will give an overview of some of our recent work, which demonstrates the utility of these concept subspaces for different goals: mitigating social bias in static and contextualized embeddings; assessing the influence of concepts on the model's behavior; and identifying syntactic features that explain the internal organization of the representation space with respect to specific phenomena, such as relative clause structures.
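
As a rough illustration of the idea (a sketch, not the speaker's reference implementation), the snippet below removes a concept's linear subspace by repeatedly training a linear classifier and projecting the representations onto the nullspace of its weights. It assumes numpy and scikit-learn; X holds the representations and z the concept labels.

    # Illustrative sketch of iterative nullspace projection (INLP-style).
    # X: (n_samples, dim) representations; z: concept labels
    # (e.g. a binary protected attribute).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def nullspace_projection(W):
        """Projection matrix onto the nullspace of the rows of W."""
        _, s, Vt = np.linalg.svd(W, full_matrices=False)
        basis = Vt[s > 1e-10]               # directions used by the classifier
        return np.eye(W.shape[1]) - basis.T @ basis

    def inlp(X, z, n_iterations=10):
        """Iteratively remove linearly decodable information about z from X."""
        P = np.eye(X.shape[1])              # accumulated guarding projection
        X_proj = X.copy()
        for _ in range(n_iterations):
            clf = LogisticRegression(max_iter=1000).fit(X_proj, z)
            P = nullspace_projection(clf.coef_) @ P
            X_proj = X @ P.T                # re-project the original data
        return X_proj, P

A common sanity check is that a freshly trained linear classifier on the projected representations predicts z at roughly chance level.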

Meetings:

Shauli will be available for meetings after the talk. Please reach out to Marius (see Organizers) to schedule a meeting.