NLPhD Speaker Series
This is an NLP speaker series organized by PhD students and featuring PhD students. The talks are open to everyone!
The talks are held via MS Teams. You can join via this link. If you don't have a Saarland University email address, you are still very welcome, but you might encounter issues with joining. Please contact the organizers and we will add you.
Elizabeth Salesky (Center for Language and Speech Processing, Johns Hopkins University)
Date/Time: 25 January at 14:30 (CET)
Title: Looking beyond Unicode for Open-Vocabulary Text Representations
Models of text typically have finite vocabularies, and commonly use subword segmentation techniques to achieve an 'open vocabulary.' This approach relies on consistent and correct underlying Unicode sequences, and degrades when presented with common types of noise, variation, and rare forms. In this talk, I will discuss an alternative approach in which we render text as images and learn open-vocabulary representations along with the downstream task, which in our work is machine translation. We show that models using visual text representations approach or match the performance of traditional Unicode-based models for several language pairs and scripts, with significantly greater robustness. I will also discuss several open questions and avenues for future work.
Rabeeh Karimi Mahabadi (EPFL and Idiap Research Institute)
Date/Time: 08 March at 14:30 (CET)
William N. Havard (LSCP laboratory at ENS Ulm)
Date/Time: 14 December at 14:30 (CET)
Title: Lexical Emergence from Context: Exploring the Role of Attention
In recent years, deep learning methods have allowed the creation of neural models that are able to process several modalities at once. Neural models of Visually Grounded Speech (VGS) are one such kind of model, able to jointly process a spoken input and a matching visual input. Such models have sparked interest among linguists and cognitive scientists, as they are able to model complex interactions between two modalities --- speech and vision --- and can be used to simulate child language acquisition and, more specifically, lexical acquisition. In this talk, I present how an RNN-based model of VGS uses its attention mechanisms to highlight key segments in the speech signal, and how this compares to child language acquisition. In order to gain a better understanding of the learning patterns of VGS models, I present the results of experiments conducted on two typologically different languages, English and Japanese.
Ratish Puduppully (University of Edinburgh)
Date/Time: 23 November at 14:30 (CET)
Title: Data-to-text Generation with Neural Planning
Recent approaches to data-to-text generation have adopted the very successful encoder-decoder architecture or variants thereof. These models generate fluent (but often imprecise) text and perform quite poorly at selecting appropriate content and ordering it coherently. In this talk, I will discuss my attempts at overcoming these issues by integrating content planning with neural models. I will first present work on integrating fine-grained or micro plans with data-to-text generation. Such micro plans can take the form of a sequence of records highlighting which information should be mentioned and in which order. I will then discuss how coarse-grained or macro plans can be beneficial for data-to-text generation. Macro plans represent the high-level organization of important content such as entities, events, and their interactions; they are learnt from data and given as input to the generator. In conclusion, I show that planning makes data-to-text generation more interpretable, improves the factuality and coherence of the generated documents, and reduces redundancy in the output text.
Ece Takmaz (University of Amsterdam)
Date/Time: 27 October at 14:30 (CET)
Title: Generating image descriptions guided by sequential speaker-specific human gaze
When we describe an image, certain visual and linguistic processes take place, acting in concert with each other. For instance, we tend to look at an object just before mentioning it, but this may not always be the case. Inspired by these processes, we have developed the first models of image description generation informed by the sequential cross-modal alignment between language and human gaze. We build our models on a state-of-the-art image captioning model, which itself was inspired by the visual processes in humans. Our results show that aligning gaze with language production helps generate more diverse and more natural descriptions that are sequentially and semantically more similar to human descriptions. In addition, such findings could also help us shed light on human cognitive processes by comparing different ways of encoding the gaze modality and aligning it with language production.
Mor Geva (Tel Aviv University)
Date/Time: 14 September at 14:30 (CET)
Title: Down the Rabbit Hole: Debugging the Inner Workings of Transformer Models
The NLP field is dominated by transformer-based language models that are pre-trained on huge amounts of text. A large body of research has focused on understanding what knowledge and language features are encoded in the representations of these models. However, very little is known about how the models reason over the knowledge they encode. In this talk, I'll share two recent findings on the inner workings of transformer models, and show how they can be harnessed for interpretability of model predictions. First, we will analyze one of the fundamental yet underexplored components in transformers - the feed-forward layers - and show their role in constructing the model's predictions. Then, I will describe an interesting phenomenon of emergent behaviour in multi-task transformer models that can be harnessed for interpretability and extrapolation of skills.
Maria Ryskina (CMU)
Date/Time: 22 June at 14:30 (CET)
Title: Unsupervised decipherment of informal romanization
Informal romanization is an idiosyncratic way of typing non-Latin-script languages in the Latin alphabet, commonly used in online communication. Although the character substitution choices vary between users, they are typically grounded in shared notions of visual and phonetic similarity between characters. In this talk, I will focus on the task of converting such romanized text into its native orthography for Russian, Egyptian Arabic, and Kannada, showing how a similarity-encoding inductive bias helps in the absence of parallel data. I'll also share some insights into the behaviors of the unsupervised finite-state and seq2seq models for this task and discuss how their combinations can leverage their different strengths.
Jean-Baptiste Cordonnier (EPFL)
Date/Time: 01 June at 14:00 (CET)
Title: Transformers for Vision
The Transformer architecture has become the de facto standard for natural language processing tasks. In vision, recent trends of incorporating attention mechanisms have led researchers to reconsider the supremacy of convolutional layers as a primary building block. In this talk, we will explore the continuum of architectures mixing convolution and attention. I will share what is specific to the image domain and what lessons could be transferred to NLP.
Ben Peters (Instituto de Telecomunicações)
Date/Time: 04 May at 14:30 (CET)
Title: Down with softmax: Sparse sequence-to-sequence models
Sequence-to-sequence models are everywhere in NLP. Most variants employ a softmax transformation in both their attention mechanism and output layer, leading to dense alignments and strictly positive output probabilities. This density is wasteful, making models less interpretable and assigning probability mass to many implausible outputs. One alternative is sparse sequence-to-sequence models, which replace softmax with a sparse function from the 𝛼-entmax family. In this talk, we show how these models can shrink the search space by assigning zero probability to bad hypotheses, reducing length bias and sometimes enabling exact decoding.
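To make the softmax-vs-sparse contrast concrete, here is a minimal NumPy sketch of sparsemax, the α=2 member of the entmax family (this is an illustrative implementation, not the speaker's code): unlike softmax, it can assign exactly zero probability to low-scoring outputs.

```python
import numpy as np

def softmax(z):
    """Standard softmax: every output is strictly positive."""
    e = np.exp(z - z.max())
    return e / e.sum()

def sparsemax(z):
    """Sparsemax (alpha=2 entmax): Euclidean projection of z onto the
    probability simplex. Outputs below a threshold become exactly zero."""
    z_sorted = np.sort(z)[::-1]
    cumsum = np.cumsum(z_sorted)
    k = np.arange(1, len(z) + 1)
    support = 1 + k * z_sorted > cumsum      # which sorted entries stay nonzero
    k_z = k[support][-1]                     # size of the support
    tau = (cumsum[support][-1] - 1) / k_z    # threshold subtracted from scores
    return np.maximum(z - tau, 0.0)

scores = np.array([3.0, 1.0, 0.2, -1.0])
print(softmax(scores))    # dense: all four hypotheses get probability mass
print(sparsemax(scores))  # sparse: [1. 0. 0. 0.] - implausible outputs pruned
```

Replacing softmax with such a function in the attention mechanism and output layer is what allows these models to assign zero probability to bad hypotheses and shrink the search space.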
Lieke Gelderloos (Tilburg University)
Date/Time: 13 April at 14:30 (CET)
Title: Active word learning through self-supervision
Models of cross-situational word learning typically characterize the learner as a passive observer. However, a language learning child can actively participate in verbal and non-verbal communication. In a computational study of cross-situational word learning, we investigate whether a curious word learner which actively selects input has an advantage over a learner which has no influence over the input it receives. We present a computational model that learns to map words to objects in images through word comprehension and production. The productive and receptive parts of the model can operate independently, but can also feed into each other. This introspective quality enables the model to learn through self-supervision, and also to estimate its own word knowledge, which is the basis for curious selection of input. We examine different curiosity metrics for input selection, and analyze the impact of each method on the learning trajectory. A formulation of curiosity which relies on both subjective novelty and plasticity yields faster learning, robust convergence, and the best eventual performance.
Brielen Madureira (University of Potsdam)
Date/Time: 23 March at 15:00 (CET)
Title: Incremental Processing in the Age of non-Incremental Encoders
While humans process language incrementally, the best language encoders currently used in NLP do not. Both bidirectional LSTMs and Transformers assume that the sequence to be encoded is available in full, to be processed either forwards and backwards (BiLSTMs) or as a whole (Transformers). In this talk, I will present the results of an investigation of how they behave under incremental interfaces, when partial output must be provided based on the partial input seen up to a certain time step, as may happen in interactive systems. We tested five models on various NLU datasets and compared their performance using three incremental evaluation metrics. The results support the possibility of using bidirectional encoders in incremental mode while retaining most of their non-incremental quality.
Shauli Ravfogel (Bar-Ilan University)
Date/Time: 26 January at 14:30 (CET)
Title: Identifying and manipulating concept subspaces
While LM representations are highly nonlinear in the input text, the probing literature has demonstrated that linear classifiers can recover various human-interpretable concepts from the representations, from notions of gender to part-of-speech. I will present Iterative Nullspace Projection (INLP), a method to identify subspaces within the representation that correspond to arbitrary concepts. The method is data-driven and identifies those subspaces by training multiple orthogonal classifiers to predict the concept of interest. I will give an overview of some of our recent work, which demonstrates the utility of these concept subspaces for different goals: mitigating social bias in static and contextualized embeddings; assessing the influence of concepts on the model's behavior; and identifying syntactic features that explain the internal organization of representation space with respect to specific phenomena, such as relative clause structures.
Shauli will be available for meetings after the talk. Please reach out to Marius (see organizers) for scheduling meetings.