Phonology has been modeled using rules, constraints, finite-state machines, exemplars, and many other approaches. Recent advances in deep learning have prompted researchers to explore how deep neural architectures (e.g., seq2seq models, transformers, RNNs, LSTMs, VAEs, GANs) can shed new light on human sound systems. This workshop aims to bring together scholars who use, or are interested in using, deep learning to model phonological processes and phenomena.
We welcome abstracts that address any aspect of deep learning in phonology, including (but not limited to) theoretical implications, empirical results, modeling and architectural choices, and evaluation methods. The workshop will include presentations, followed by a dinner and an open discussion of future directions for modeling phonology with deep learning.
We are proudly supported by the Institute of Cognitive and Brain Sciences at UC Berkeley.
The invited speaker will be Stephan Meylan, Ph.D. (UC Berkeley), whose work explores the intersection of deep learning, cognitive science, and linguistics.