Keynote talk by Shalom Lappin
Deep learning and sentences in context
University of Gothenburg, Queen Mary University of London & King's College London
In earlier work, colleagues and I explored the accuracy of LSTMs in predicting mean human judgments of sentence acceptability, both in and out of context. In this talk I present the results of recent experiments that extend this research. We explore two factors that produce a puzzling compression effect for human acceptability ratings in context, relative to ratings of the same sentences out of context. We also use more powerful transformer models to predict these judgments across context types. We find that bidirectional transformers approach human performance on this task and, on at least one plausible metric, surpass it.