Phonological features have been considered the building blocks of language, as they allow parsimonious descriptions of sound inventories as well as of phonological patterns and alternations (Chomsky & Halle, 1968; Hall, 2001). Indeed, phonological patterns often involve sound classes; for example, the realization of the English Saxon genitive s can be predicted as follows: “the suffix agrees in the feature [voice] with the preceding (non-sibilant) sound”, e.g., cat[s] and dog[z], and even Ba[xs] music (according to Halle, 1964). Furthermore, given that listeners/speakers can learn and generalize a phonological rule in perceptual experiments (Finley & Badecker, 2009; Wilson, 2006), there is reason to believe that phonological features are psychologically real. However, it remains unclear how learners arrive at these abstract, feature-based generalizations. To address this question, we assessed French listeners’ generalization to untrained consonants after training on an artificial phonotactic pattern instantiated in a subset of a natural class. Each study comprised two phases: exposure and test. The test phase always used new words, to ensure that responses were not simply due to memory for the words heard during exposure. In separate series of studies, we tested infants and adults.
Results from the infant studies show that 6-month-olds encode patterns directly at the level of the feature. In Exp. 1, infants were tested on their preference between pseudo-words with trained sounds and pseudo-words with legal sounds (within-class but untrained sounds). They showed no significant preference, suggesting that they did not treat the words with legal sounds as more novel than those with the trained sounds. In Exp. 2, the test compared pseudo-words with legal versus illegal sounds; the latter did not belong to the natural class used in training. Here, infants showed a robust preference for the illegal pseudo-words.
1) the data
2) the BU Proceedings paper
In contrast, adults’ generalization is based on individual consonants (rather than on the whole class of onsets). Nonetheless, it does appear to operate on discrete features and/or articulatory representations, rather than on raw acoustic distances drawn directly from the stimuli. Thus, there is some support for phonotactic patterns being stored in a form abstracted from the raw signal, using phonological features or other experientially grounded (e.g., articulatory) units.
1) the data
2) final results (supplementary analyses txt; figures are too heavy to be posted, so please do email if you'd like to see them!)
3) the supplementary materials
4) a draft of the JLabPhon paper