Interactive Assignments for Teaching Structured Neural NLP

Overview

These assignments were created for UC Berkeley's graduate NLP course (cs288) in Spring 2020 and Spring 2021. They are designed to be interactive and easily gradable, and to give students hands-on experience with several key types of structure (sequences, tags, parse trees, and logical forms), modern neural architectures (LSTMs and Transformers), inference algorithms (dynamic programs and approximate search), and training methods (full and weak supervision). Our aim was to let students incrementally and interactively develop models and observe their effectiveness on real NLP datasets. The assignments build incrementally, both within each assignment and across assignments, with the goal of enabling students to undertake graduate-level NLP research by the end of the course.
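
To give a flavor of the hands-on workflow, below is a minimal sketch of the kind of PyTorch model students build and train inside the notebooks, here a tiny LSTM language model. This sketch is not taken from the assignments themselves; the class name, vocabulary size, and dimensions are all illustrative.

    import torch
    import torch.nn as nn

    class TinyLSTMLanguageModel(nn.Module):
        """Illustrative LSTM language model (not from the assignments)."""

        def __init__(self, vocab_size=1000, embed_dim=64, hidden_dim=128):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
            self.out = nn.Linear(hidden_dim, vocab_size)

        def forward(self, tokens):
            # tokens: (batch, seq_len) integer token ids
            hidden_states, _ = self.lstm(self.embed(tokens))
            return self.out(hidden_states)  # logits over the next token

    model = TinyLSTMLanguageModel()
    tokens = torch.randint(0, 1000, (2, 10))  # stand-in batch of token ids
    logits = model(tokens[:, :-1])            # predict each following token
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, logits.size(-1)), tokens[:, 1:].reshape(-1)
    )
    loss.backward()  # gradients for one training step

The assignments wrap exercises like this in Colab cells with provided data loaders and automatic checks, so students can iterate on a model and immediately see its effect on a real dataset.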

Assignments

All assignments are interactive Google Colab notebooks.

Project 0: Intro to PyTorch Mini-Project

Project 1: Language Modeling

Project 2: Neural Machine Translation

Project 3: Parsing and Transformers

Project 4: Semantic Parsing

Paper

Link: http://nlp.cs.berkeley.edu/pubs/Gaddy-Fried-Kitaev-Stern-Corona-DeNero-Klein_2021_TeachingNLP_paper.pdf

Presented at the Teaching NLP Workshop at NAACL 2021.

Authors

David Gaddy, Daniel Fried, Nikita Kitaev, Mitchell Stern, Rodolfo Corona, John DeNero, and Dan Klein