Invited Speakers

Mehrnoosh Sadrzadeh

Mehrnoosh Sadrzadeh is a Senior Lecturer at Queen Mary, University of London. She holds a BSc in Computer Software Engineering and an MSc in Logic from Sharif University of Technology, Tehran, Iran. After obtaining her PhD in 2006, under an Ontario Graduate Scholarship and jointly with Oxford CS, she held two postdoctoral positions, in Southampton and Paris. In 2008, she was awarded an EPSRC Postdoctoral Fellowship in Oxford and a Junior Research Fellowship at Wolfson College; in 2011, she received an EPSRC Career Acceleration Fellowship in Oxford, which she transferred in 2013 to the School of Electronic Engineering and Computer Science, Queen Mary University of London. She was promoted to Senior Lecturer in 2016. She co-leads the school's Computational Linguistics Lab and has just completed a Royal Academy of Engineering Industrial Fellowship (January 2019) in partnership with the BBC R&D Data Team. Mehrnoosh works on developing high-level logical and mathematical models of natural language, learning the model parameters from data, and applying the results to mainstream tasks.


Ellie Pavlick

Ellie Pavlick is an Assistant Professor of Computer Science at Brown University. She received her PhD from University of Pennsylvania in 2017, under the supervision of Chris Callison-Burch, where her focus was on paraphrasing and lexical semantics. Ellie's current research is on cognitively-plausible language acquisition, focusing on grounded language learning, in collaboration with the Robotics and Visual Computing groups at Brown, as well as on pragmatic inference, in collaboration with the Department of Cognitive, Linguistic, and Psychological Sciences.



Raffaella Bernardi

Raffaella Bernardi is an Assistant Professor at DISI (Department of Information Engineering and Computer Science) and CIMeC (Center for Mind/Brain Science), University of Trento. She studied at the Universities of Utrecht and Amsterdam, specializing in Logic and Language. After defending her PhD on Categorial Type Logic, she worked within the Network of Excellence in Computational Logic and served for several years on the Management Board of FoLLI (European Association for Logic, Language and Information). Her research interests took a computational turn in 2002, when she moved to the Free University of Bozen-Bolzano and started working on Interactive Question Answering. In 2011, she began working on Distributional Semantics, investigating its compositional properties and its integration with Computer Vision models. She was the PI of the EU project "CACAO" and a member of the University of Trento team for the EU projects "Galateas", LiMoSINe, and CogNET. She was part of the team that won the ERC 2011 Starting Independent Research Grant COMPOSES. She has been a member of the Management Board of the COST Action "The European Network on Integrating Vision and Language", and she serves on the Executive Boards of the Special Interest Group on Computational Semantics (SIGSEM) and of Formal Grammar.

Invited Talk Abstracts

Mehrnoosh Sadrzadeh "Ellipsis in Compositional Distributional Semantics"

Ellipsis is a natural language phenomenon in which part of a sentence is missing and its information must be recovered from the surrounding context, as in "Cats chase dogs and so do foxes." Formal semantics offers different methods for resolving ellipsis and recovering the missing information, but the problem has not been considered for distributional semantics, where words have vector embeddings and combinations thereof provide embeddings for sentences. In elliptical sentences these combinations go beyond the linear, as copying of elided information is necessary. I will talk about recent results in our NAACL 2019 paper, joint with G. Wijnholds, where we develop different models for embedding VP-elliptical sentences using modal sub-exponential categorial grammars. We extend existing verb disambiguation and sentence similarity datasets to ones containing elliptical phrases and evaluate our models on these datasets for a variety of linear and non-linear combinations. Our results show that resolving ellipsis indeed improves the performance of vectors and tensors on these tasks, and it also sheds some light on disambiguating their sloppy and strict readings.


Raffaella Bernardi "TBA"


Ellie Pavlick "What should constitute natural language "understanding"?"

Natural language processing has become indisputably good over the past few years. We can perform retrieval and question answering with purported super-human accuracy, and can generate full documents of text that seem good enough to pass the Turing test. In light of these successes, it is tempting to attribute the empirical performance to a deeper "understanding" of language that the models have acquired. Measuring natural language "understanding", however, is itself an unsolved research problem. In this talk, I will discuss recent work that attempts to illuminate what it is that state-of-the-art models of language are capturing. I will describe approaches that evaluate the models' inferential behavior, as well as approaches that inspect the models' internal structure directly. I will conclude with results on humans' linguistic inferences, which highlight the challenges involved in developing prescriptivist language tasks for evaluating computational models.