Keynote talk by Shalom Lappin

Deep learning and sentences in context

University of Gothenburg, Queen Mary University of London & King's College London

Abstract

In earlier work, colleagues and I explored the accuracy of LSTMs in predicting mean human judgments of sentence acceptability, both in and out of context. In this talk I present the results of recent experiments that extend this research. We explore two factors that produce a puzzling compression effect for human acceptability ratings in context, relative to ratings of the same sentences out of context. We also use more powerful transformer models to predict these judgments across context types. We find that bidirectional transformers approach, and on at least one plausible metric surpass, human performance on this task.

lappin_geko20.mp4

Thank you for watching! What's next?

  • Attend the live discussion session for this keynote on May 18th! Check the program and consider registering in advance.
  • Leave comments/questions on our keynote talks in the #keynotequestions channel on geckosympo.slack.com. Note that to access it you need to register for GeCKo; you will then receive an invitation to Slack.
  • Explore! Watch the other keynote talks and check out the posters.