Yoav Artzi, Cornell University

Context and Non-compositional Phenomena in Language Understanding [Slides]

Sentence meaning can be recovered by composing the meanings of words following the syntactic structure. However, robust understanding requires considering non-compositional and contextual cues as well. For example, a robot following instructions must consider its observations to accurately complete its task. Similarly, to correctly map temporal expressions within a document to standard time values, a system must consider previously mentioned events. In this talk, I will address such phenomena within compositional approaches, and focus on the non-compositional parts of the reasoning process.

Joint work with Kenton Lee and Luke Zettlemoyer.




 



Alexander Koller, University of Potsdam

Top-down and bottom-up views on success in semantics [Slides]

As participants of *SEM, all of us are excited about the resurgence of research in computational semantics over the past few years. There is a general feeling that modern data-driven approaches to semantics, especially distributional ones, are great success stories. This is in contrast to classical knowledge-based approaches, which are widely accepted as respectable and pretty, but not useful in practice.

In my talk, I will challenge this perception by asking what the measure of success of research in semantics should be. I will distinguish between bottom-up and top-down views on linguistic theories, and argue that we count (computational) truth-conditional semantics as failed for top-down reasons, but data-driven semantics as a success for bottom-up reasons. I will argue that identifying top-down goals for modern computational semantics would help us understand the relationship between classical and modern approaches to semantics, and distinguish research directions in modern semantics that are useful from those that are merely fun.

In the second part of the talk, I will focus on one candidate for a top-down goal that is mentioned frequently, namely similarity of arbitrary phrases based on distributional methods. I will ask whether our evaluation methods for similarity are appropriate, and whether similarity is even a meaningful concept if the task and context are left unspecified. I will conclude with some thoughts on how we might obtain top-down goals by taking a more task-based perspective.








Bonnie Webber, University of Edinburgh

Exploring for Concurrent Discourse Relations [Slides]

Discourse relations are an element of discourse coherence, indicating how the meaning and/or function of clauses in a text make sense together. Evidence for discourse relations can come from a range of sources, including explicit discourse connectives such as coordinating and subordinating conjunctions and discourse adverbials. While some clauses may require an explicit connective to provide evidence for a discourse relation, other clauses don't.

This talk starts from the observation that there may be more than one piece of explicit evidence for how a clause relates to the rest of the discourse. I first consider why this may be so, before considering the related questions of why there may be only one piece of explicit evidence or none at all. The amount of explicit evidence, however, does not constrain the possibility that a clause bears more than one relation to the previous discourse, what we have called "Concurrent Discourse Relations".

Since we don't fully understand concurrent discourse relations, I present work we have been doing on exploring for evidence from corpora and on getting evidence from crowdsourcing experiments. The goal is to be able to use such evidence to help automatically annotate concurrent relations in corpora and improve the ability of systems to extract information from text by recognizing more of the relations underlying text coherence.