Keynote speakers

Marco Baroni (University of Trento), joint *SEM / SemEval keynote speaker. 

Playing ficles and running with the corbons: What (multimodal) distributional semantic models learn during their childhood.

Joint work with Angeliki Lazaridou, Marco Marelli (Trento), Raquel Fernández (Amsterdam), Grzegorz Chrupała (Tilburg).

Abstract: Distributional semantic methods have some a priori appeal as models of human meaning acquisition, because they induce word representations from contextual distributions naturally occurring in corpus data, without the need for supervision. However, learning the meaning of a (concrete) word also involves establishing a link between the word and its typical visual referents, which is beyond the scope of classic, text-based distributional semantics. Recently, several proposals have been put forward about how to induce multimodal word representations from linguistic and visual contexts, so it is natural to ask whether this line of work, besides its practical implications, can help us develop more realistic, grounded models of human word learning within the distributional semantics framework.
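To make the starting point concrete, here is a minimal sketch of the classic, text-based approach the abstract refers to: word vectors built from raw co-occurrence counts in a fixed context window and compared by cosine similarity. The toy corpus, window size, and count-based representation are illustrative assumptions, not the models used in the talk.

```python
# Minimal sketch of text-based distributional semantics:
# represent each word by its co-occurrence counts with context
# words in a fixed window, then compare words by cosine similarity.
# Toy corpus and window size are illustrative, not from the talk.
from collections import Counter, defaultdict
from math import sqrt

corpus = [
    "the cat chased the mouse".split(),
    "the dog chased the cat".split(),
    "the mouse ate the cheese".split(),
]
WINDOW = 2  # number of context words on each side

vectors = defaultdict(Counter)
for sentence in corpus:
    for i, word in enumerate(sentence):
        for j in range(max(0, i - WINDOW), min(len(sentence), i + WINDOW + 1)):
            if j != i:
                # Unsupervised: the representation is just corpus statistics.
                vectors[word][sentence[j]] += 1

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in u)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

# Words that share contexts end up with higher similarity.
print(cosine(vectors["cat"], vectors["mouse"]))
print(cosine(vectors["cat"], vectors["cheese"]))
```

Even this tiny example shows the unsupervised character of the method: the vectors fall out of corpus statistics alone, with no labeled data.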

In my talk, I will report on two studies in which we used multimodal distributional semantics (MDS) to simulate human word learning. In the first study, we measured subjects' ability to link a nonce word to relevant linguistic and visual associates when prompted only by exposure to minimal corpus evidence about it. We then simulated the same task with an MDS model, finding its behavior remarkably similar to that of the subjects. In the second study, we constructed a corpus in which child-directed speech is aligned with real-life pictures of the objects mentioned by caregivers. We then trained our MDS model on these data and inspected the generalizations it learned about the words in the corpus and the objects they might denote.
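The abstract does not spell out the model internals, but one common way to build a multimodal word representation, offered here only as a rough sketch of what an MDS model combines, is to fuse a corpus-derived text vector with image-derived visual features. Everything below (the dimensions, the random stand-in features, concatenation-based fusion) is an illustrative assumption, not the specific model from these studies.

```python
# Sketch of a multimodal word representation: L2-normalize a
# text-based vector and a visual feature vector for the same word,
# then concatenate them. Nearest neighbours in the joint space can
# then link a word to both linguistic and visual associates.
import numpy as np

def l2_normalize(v):
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def multimodal_vector(text_vec, visual_vec):
    # Concatenation-based fusion; weighting the two modalities
    # differently is another common variant.
    return np.concatenate([l2_normalize(text_vec), l2_normalize(visual_vec)])

# Random stand-ins for corpus-derived and image-derived features.
rng = np.random.default_rng(0)
text_vec = rng.random(300)     # e.g. a distributional vector
visual_vec = rng.random(4096)  # e.g. features of referent images
word_rep = multimodal_vector(text_vec, visual_vec)
print(word_rep.shape)  # (4396,)
```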

The results highlight interesting issues not only for distributional semantics (can we build meaningful word representations from very limited contexts? are such representations reasonably human-like?), but also for the study of human language acquisition (are we "done" with learning a word once we associate it with a referent? do we incrementally refine our word representations? is an explicit cross-situational mechanism really necessary?). 


Preslav Nakov (Qatar Computing Research Institute), *SEM keynote speaker. 

60 Years Ago People Dreamed of Talking with a Machine. Are We Any Closer?

Joint work with Marti Hearst (UC Berkeley). 

Abstract: The 60-year-old dream of computational linguistics is to make computers capable of communicating with humans in natural language. This has proven hard, and thus research has focused on sub-problems. Even so, the field was stuck with manual rules until the early 90s, when computers became powerful enough to enable the rise of statistical approaches. Eventually, this shifted the main research attention to machine learning from text corpora, thus triggering a revolution in the field.

Today, the Web is the biggest available corpus, providing access to quadrillions of words; and in corpus-based natural language processing, size does matter. Unfortunately, while there has been substantial research on the Web as a corpus, it has typically been restricted to using page hit counts as an estimate for n-gram word frequencies; this has led some researchers to conclude that the Web should only be used as a baseline.

In this talk, I will reveal some of the hidden potential of the Web that lies beyond the n-gram, with a focus on the syntax and semantics of English noun compounds. I will further show how these ideas apply to a number of NLP problems, including syntactic parsing and machine translation. Finally, I will share some thoughts about the future of lexical semantics and machine translation in view of the ongoing deep learning revolution.
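As one concrete illustration of what working with noun compound syntax involves, here is the classic adjacency model for bracketing three-word compounds from bigram frequency estimates. The counts below are made-up stand-ins for web-derived statistics, and the work discussed in the talk draws on richer evidence than plain counts; this is a sketch of the baseline idea, not the speaker's method.

```python
# Adjacency model for three-word noun compound bracketing:
# choose left bracketing ((w1 w2) w3) if the (w1, w2) bigram is
# more frequent than (w2, w3), else right bracketing (w1 (w2 w3)).

def bracket(w1, w2, w3, bigram_count):
    left = bigram_count.get((w1, w2), 0)
    right = bigram_count.get((w2, w3), 0)
    if left >= right:
        return f"(({w1} {w2}) {w3})"
    return f"({w1} ({w2} {w3}))"

# Hypothetical frequency estimates standing in for web counts.
counts = {
    ("liver", "cell"): 1400,
    ("cell", "antibody"): 200,
    ("cell", "line"): 3800,
}

print(bracket("liver", "cell", "antibody", counts))  # ((liver cell) antibody)
print(bracket("liver", "cell", "line", counts))      # (liver (cell line))
```

A common alternative is the dependency model, which instead compares the (w1, w2) count against (w1, w3).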