Artificial grammar learning (AGL) studies have been widely used to test the learnability of phonological patterns. Behavioral work has shown that learners can extract both adjacent and non-adjacent dependencies after relatively short training. Less is known about how lab-learned patterns are encoded at the neurophysiological level. The aim of this project is to examine the neurophysiological correlates of the different mechanisms involved in learning non-adjacent phonotactic patterns. We believe that understanding phonological processing will illuminate the learning mechanisms (domain-specific vs. domain-general) used to acquire language.
Using both behavioral and EEG/ERP measures, we are interested in the following questions:
Do domain-specific (linguistic) or domain-general mechanisms support the learning of new phonological patterns?
Are there reliable neurophysiological correlates of processing sound patterns?
Do different learning mechanisms lead to different neural signatures?
In a behavioral experiment, we showed that phonological patterns that fall within the range attested in human languages can be learned, whereas patterns that fall outside that range are much harder to learn. This provides direct evidence that the domain-specific phonological learning mechanism is limited by linguistic constraints.
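To make the notion of a non-adjacent phonotactic dependency concrete, the short sketch below generates toy C1V1C2V2 words in which the two sibilant consonants must match regardless of the intervening vowel, loosely in the spirit of sibilant harmony. The inventory, word shape, and pattern are illustrative assumptions, not the stimuli used in our experiments.

```python
# A toy illustration of a non-adjacent phonotactic dependency, loosely
# modelled on sibilant harmony. The inventory, word shape, and pattern are
# assumptions chosen for clarity, not the stimuli used in this project.
import itertools

SIBILANTS = ["s", "sh"]   # the two consonants that must agree
VOWELS = ["a", "i", "u"]  # intervening material the learner must skip over

def follows_pattern(c1, v1, c2, v2):
    """A C1V1C2V2 word follows the pattern if its two sibilants match,
    no matter which vowels intervene (the dependency is non-adjacent)."""
    return c1 == c2

words = list(itertools.product(SIBILANTS, VOWELS, SIBILANTS, VOWELS))
legal = ["".join(w) for w in words if follows_pattern(*w)]
illegal = ["".join(w) for w in words if not follows_pattern(*w)]

print(f"pattern-following items ({len(legal)}):  e.g. {legal[:3]}")
print(f"pattern-violating items ({len(illegal)}): e.g. {illegal[:3]}")
```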
We also look for neurophysiological correlates of phonological computation that can be detected during word processing with EEG. We find that the P3 component, reflecting the categorization of phonotactically well-formed vs. ill-formed words, indexes how quickly the brain computes the phonotactic difference between words that follow the pattern and words that violate it. In addition to the P3, we find a Late Positive Component (LPC), indicating that violations of non-adjacent phonotactic constraints also influence later stages of cognitive processing.
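As a rough illustration of how such ERP effects can be quantified, the sketch below uses MNE-Python to average preprocessed epochs by condition and extract mean amplitudes of the violation-minus-pattern difference wave in typical P3 and LPC time windows at a parietal electrode. The file name, condition labels, electrode, and time windows are assumptions for illustration, not our actual analysis pipeline.

```python
# A minimal sketch of a condition-wise ERP comparison with MNE-Python.
# The epochs file, event labels, channel, and time windows are illustrative
# assumptions, not the pipeline actually used in this project.
import mne

# Load preprocessed, epoched EEG data (hypothetical file name).
epochs = mne.read_epochs("sub-01_task-agl_epo.fif")

# Average separately for words that follow vs. violate the trained pattern
# (hypothetical event labels).
evoked_legal = epochs["pattern_following"].average()
evoked_illegal = epochs["pattern_violating"].average()

# Difference wave: violation minus pattern-following.
diff = mne.combine_evoked([evoked_illegal, evoked_legal], weights=[1, -1])

def mean_amplitude(evoked, tmin, tmax, channel="Pz"):
    """Mean amplitude (in microvolts) in a time window at one electrode."""
    picked = evoked.copy().pick(channel).crop(tmin, tmax)
    return picked.data.mean() * 1e6

# Assumed windows: P3 ~300-500 ms, LPC ~600-900 ms after word onset.
p3_effect = mean_amplitude(diff, 0.300, 0.500)
lpc_effect = mean_amplitude(diff, 0.600, 0.900)
print(f"P3 effect at Pz:  {p3_effect:.2f} uV")
print(f"LPC effect at Pz: {lpc_effect:.2f} uV")
```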
We also compare implicit (domain-specific) and explicit (domain-general) learning strategies. Implicit learning is how we learn our first language: it is cue-based, effortless, unconscious, and requires no feedback. Explicit learning is how we often learn second languages: it is rule-based, effortful, conscious, and requires feedback. Our results show that explicit learning works: our participants showed high behavioral sensitivity to the pattern. However, while implicit learning leads to a measurable neural learning response, explicit learning leaves the brain silent.
Embedding phonological processing within cognitive neuroscience can reveal new insights into critical learning-related questions. Our observations about learning and computing phonological non-adjacent dependencies suggest a complex interplay between domain-general and domain-specific learning and processing mechanisms.