Instructor
Giorgio Magri
magrigrg@gmail.com
https://sites.google.com/site/magrigrg/home
Goals
This course offers a perspective on phonological theory focused on computation. Questions addressed include the following: What are the formal properties of various phonological frameworks? What statistics underlie probabilistic phonological models? What are the typological predictions of categorical and probabilistic phonological models? How should the problems of learning, production, and interpretation be properly formulated within these various phonological models? To what extent do these computational problems admit provably efficient and correct solution algorithms? How do these algorithms depend on architectural properties versus phonological substance? Methods explored include formal language theory, statistics, machine learning, and convex geometry.
Topics for this year
Rather than providing a broad overview of the field, this course focuses each year on a few specific, related topics. The current 2018 installment of the course will focus on two topics: typological structure in probabilistic constraint-based phonology (and in particular MaxEnt grammars), and deep neural networks in phonological theory (with an eye on the formal foundations).
The basic idea is that constraint-based phonology grew out of the connectionist literature of the eighties. We thus want to understand what is new in the recent revival of neural networks and how it might impact phonology. This issue might be particularly relevant for the probabilistic strand of phonology that has kept the field busy in recent years.
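As a minimal illustration of the probabilistic constraint-based grammars at stake (MaxEnt grammars in particular), the sketch below computes candidate probabilities as proportional to exp(-H), where H is the weighted sum of a candidate's constraint violations. The candidates, violation counts, and weights are invented for the example and are not taken from the course materials.

```python
import math

def maxent_probabilities(candidates, weights):
    """Toy MaxEnt grammar: map each candidate to its probability.

    candidates: dict from candidate name to a tuple of violation counts,
                one count per constraint.
    weights:    tuple of non-negative constraint weights.
    """
    # Harmony of a candidate = weighted sum of its constraint violations.
    harmonies = {
        cand: sum(w * v for w, v in zip(weights, violations))
        for cand, violations in candidates.items()
    }
    # Probabilities are proportional to exp(-harmony), normalized to sum to 1.
    z = sum(math.exp(-h) for h in harmonies.values())
    return {cand: math.exp(-h) / z for cand, h in harmonies.items()}

# Two hypothetical output candidates, with violation counts for two
# hypothetical constraints (say, one markedness and one faithfulness).
candidates = {"[pat]": (0, 1), "[pad]": (1, 0)}
weights = (2.0, 1.0)  # a higher weight makes a constraint more influential
probs = maxent_probabilities(candidates, weights)
```

With these made-up numbers, the candidate violating only the lower-weighted constraint receives the higher probability, illustrating how weighting turns categorical constraint interaction into a gradient preference.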
Practicalities
This course is taught within the M2 program Linguistique informatique at the University of Paris Diderot. In fall 2018, the course meets on Tuesdays from 13:30 to 15:30 in room 309. The course is covered by lecture notes, which are made available shortly after each class through a link in the syllabus below. Grading is based on class presentations at the end of the course. The course is taught in English, and all teaching materials are in English as well (student presentations can of course be delivered in French).
Prerequisites
Apart from some familiarity with phonological theory (as presented, for instance, in B. Hayes' textbook), there are no prerequisites for attending the course. The pace of the course is calibrated to the audience. Please contact the instructor with any questions concerning prerequisites.
Syllabus
There is no predefined set of topics that needs to be covered, and the syllabus will therefore develop as we proceed, depending on our pace. The rough idea is to spend September and October on the first topic, typological structure in probabilistic constraint-based phonology (and in particular MaxEnt grammars), and then November and December on the second, deep neural networks in phonological theory (with an eye on the formal foundations). The reference text for the latter topic is Goodfellow, Bengio, and Courville's recent textbook.