Invited Speakers

Pieter Abbeel (UC Berkeley)

Getting Signal in RL from Auxiliary Losses, Unpaired Observations, Single Snapshot of the World

Jeffrey Bilmes (U of Washington)

Machine Education

Machine learning enables computers to solve tasks too complex to program directly, and this is achieved by optimizing objectives parameterized by large amounts of data. Many machine learning strategies are inspired by human learning, as humans are uncannily successful at learning complex tasks. Viewing existing machine learning paradigms through the lens of human learning, however, leaves a lot to be desired. In this talk, we'll look at some novel mathematical objectives for machine education that are inspired by human education. Some of these involve simultaneous discrete and continuous optimization, continuous over parameter space and discrete over sample space. Since information diversity is often important in human education, some of these objectives utilize submodular functions. This includes forms of single-learner and ensemble-based curriculum learning. Others will use submodularity to render machine learning more computationally efficient, by producing either smart mini-batches or good core sets.

The above is work performed jointly with Tianyi Zhou, Shengjie Wang, Wenruo Bai, Baharan Mirzasoleiman, and Jure Leskovec.
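
To make the flavor of these objectives concrete, here is one simplified, illustrative form that such a joint discrete/continuous problem can take; the notation (V, \ell, F, \lambda, k) is introduced here for exposition and is not the formulation from the talk:

\[
\min_{\theta} \; \max_{S \subseteq V,\; |S| \le k} \; \sum_{i \in S} \ell(x_i; \theta) \;+\; \lambda\, F(S)
\]

Here V is the training set, \ell(x_i; \theta) the per-sample loss, F a submodular function (e.g., facility location) that rewards diversity of the selected subset S, and \lambda \ge 0 a trade-off. The inner discrete problem selects a diverse set of currently difficult examples, such as a mini-batch or core set, and the outer continuous problem updates the parameters on that set; alternating the two steps yields a simple curriculum.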

Yejin Choi (U of Washington)

Cracking Commonsense AI with Knowledge Modeling and Generative Reasoning

Despite considerable advances in deep learning, AI remains narrow and brittle. One fundamental limitation is its lack of commonsense intelligence: reasoning about everyday situations and events, which, in turn, requires a broad spectrum of background knowledge about how the physical and social world works.

In this talk, I will discuss a new integration of learning paradigms --- knowledge modeling and generative reasoning --- that shows promise toward commonsense intelligence.

Tom Griffiths (Princeton)

Bridging metalearning and metareasoning: Towards heterogeneous and compositional task distributions

Metalearning provides an effective way for learners to transfer experience from one task to another. However, most metalearning problems assume that tasks are sampled i.i.d. from a relatively unstructured distribution. This may be appropriate for describing some of the problems that human learners face, but in many ways the most impressive feats of human metalearning involve not simply transferring experience but recognizing that a new problem can be decomposed into parts that are themselves solutions to parts of other problems. The problems that people face are heterogeneous and compositional, which requires a different approach to metalearning. I will outline how this can be addressed by bridging metalearning and metareasoning.
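
As a rough sketch of this contrast (the notation below is assumed for exposition, not taken from the talk), standard metalearning optimizes expected post-adaptation performance over tasks drawn i.i.d. from a single distribution,

\[
\min_{\theta} \; \mathbb{E}_{\tau \sim p(\tau)} \Big[ \mathcal{L}_{\tau}\big(A(\theta, D_{\tau})\big) \Big],
\]

where A(\theta, D_{\tau}) is the adaptation procedure applied to the data D_{\tau} of task \tau. In the heterogeneous, compositional setting, a new task is instead better modeled as a composition of reusable parts, \tau = g(\tau_1, \ldots, \tau_m), so transfer means recognizing which previously solved subproblems those parts correspond to and recombining their solutions, rather than averaging experience over an unstructured task distribution.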

Raia Hadsell (DeepMind)

Learning with Rich Experience

Tom Mitchell (CMU)

Conversational Machine Learning

If we wish to predict the future of machine learning, all we need to do is identify ways in which people learn but computers don't, yet. Humans often learn from natural language instruction. Now that computers are finally able to have conversations (i.e., we routinely have simple conversations with our phones), it is time to explore how users might use those conversations to teach their computers to perform new tasks. Today, fewer than 1% of phone users can program their phones, but if this line of research succeeds we might change that to 99%. This talk will describe our recent research in this direction, including the development of a prototype personal agent that Android phone users can teach to perform new action sequences in response to new commands, using natural language interaction together with demonstrations. This line of research presents a potentially significant new paradigm for machine learning, complementing current data-intensive statistical approaches. This talk covers joint work with Igor Labutov, Forough Arabshahi, Brad Meyers, Shashank Srivastava, and Toby Li.