One of the major questions in cognitive science is ‘what makes us human?’ To me, the answer comes largely from humans’ unique ability to use language. No other species has a communication system as complex, abstract, and creative as human language. To understand how human languages make us who we are, we need to understand the core, universal properties of language – those underlying properties that are shared across all languages of the world. We also need to understand the cognitive and social capacities that allow humans to learn complex human languages. To the language scientist, how languages are learned, represented, and processed is the key to understanding the core of human cognition. I am particularly interested in word formation, including phonology, morphology, and the interfaces between the two. My goal is to use the patterns found among the languages of the world as a basis for hypotheses about how the mind must work in order to produce such patterns. In particular, my research makes use of the artificial grammar learning paradigm, in which participants are trained on a miniature version of a novel, made-up (artificial) language and then tested on their learning and generalization of that language, in order to test predictions about learning and representation.
Since Fall 2011 (with a maternity leave in Spring and Fall 2019), I have served as a psychology faculty member at primarily undergraduate institutions, with limited resources to support faculty research and professional development. Despite this, I have continued to publish in top venues in cognitive science and linguistics (e.g., Journal of Memory and Language, Language and Speech, Language) and have been invited to write comprehensive review articles (e.g., Language and Linguistics Compass, WIREs) and commentaries, including two contributions (one coauthored with Dr. Anne Pycha) to the Oxford Encyclopedia of Vowel Harmony. In 2017, I won the K.T. Tang Award for Excellence in Research at PLU for my contributions to the field. My most influential contributions, discussed below, concern the nature of locality in phonology and morphology.
Much of my research has focused on vowel harmony, a phonological pattern that requires all vowels within a word to share a particular phonetic feature value. This pattern is of particular interest to phonologists because it involves long-distance dependencies, which are represented in terms of complex, hierarchical structures. In order to understand how the mind represents these complex patterns, I have conducted dozens of experiments on the learnability of vowel harmony. Many of these experiments tap into linguistic representations by comparing how adults learn patterns that are very similar on the surface but require very different abstract linguistic representations. One type of pattern (opaque vowels) requires only adjacent, flat-structured representations (e.g., aXb, where X and b are required to share a feature). A similar type of pattern (transparent vowels) is represented with non-adjacent dependencies (e.g., aXb, where a and b are required to share a feature). In Finley (2015, Language), adult English-speaking learners were exposed to a vowel harmony pattern that required either adjacent representations (opaque vowels) or non-adjacent representations (transparent vowels). Participants in both conditions were able to learn the basic harmony pattern, but only participants in the opaque vowel condition learned the behavior of the non-participating vowel. A series of follow-up experiments demonstrated that increasing training in terms of types (the number of words containing the relevant non-adjacent sequences) as well as tokens (the number of times each type was heard) is sufficient for participants to learn the non-adjacent pattern. These results suggest that the information required for learning a language depends on the complexity of the representation. I have since conducted several follow-up studies on learning transparent vowels in vowel harmony.
One set of studies explored the role of anti-harmony in learning transparent vowels. Anti-harmony occurs in languages with transparent vowels, where certain stems always select the opposite harmonic feature value (e.g., front vowel stems in Hungarian that always take back vowel suffixes). The monotonicity theory of transparency (Rebrus & Törkenczy, 2015) suggests that only languages with anti-harmony can permit transparent vowels. Consistent with this, exposure to stems that selected a disharmonic affix created a bias towards the transparent vowel (Finley 2019, Proceedings of AMP). I have also explored the role of coarticulation in learning transparent vowels. Several theories of vowel harmony suggest that transparent vowels are a result of increased coarticulation (e.g., front neutral vowels are produced further back in the mouth in back vowel contexts) (Benus & Gafos, 2007). I manipulated coarticulation by cross-splicing highly coarticulated tokens (confirmed with F2 measurements), so that some participants heard items consistent with a highly coarticulated neutral vowel and others did not. This manipulation had no major effect on the learnability of transparent vowels (Proceedings of the LSA), suggesting that differences in learnability are likely due to representational rather than phonetic differences.
Learners are biased towards locality for non-participating vowels in vowel harmony, but there does not appear to be a bias for locality in exceptions to vowel harmony (Finley 2021, Language and Speech). Participants exposed to a vowel harmony pattern in which one affix alternated for back/round harmony ([me]/[mo]) and another affix failed to alternate ([go]) showed no bias towards locality when presented with novel items containing both affixes (e.g., no preference for local [bede-go-mo] over non-local [bede-go-me]), raising questions about the nature of the locality of exceptions shown in my previous work (Finley 2010, Lingua). While there was no bias towards locality, there was a bias towards harmony: participants were more likely to select the non-alternating (back vowel) affix when the stem vowels were also back. These results have been replicated and simulated computationally, comparing different instantiations of MaxEnt for variation (Hughto et al., 2019), and extended by Stella Wang, who, as an undergraduate research fellow in my lab, explored the interaction between exceptions and directionality in vowel harmony (Proceedings of the Penn Linguistics Conference, in press).
While the majority of my research (including my dissertation) has focused on vowel harmony, I have also explored locality in consonant harmony, showing that learners are biased towards local over non-local consonant harmony, in line with the implicational universal that languages with non-local harmony always allow local dependencies, but not vice versa (Finley 2011, Journal of Memory and Language). However, if learners are exposed to a non-local pattern across two syllables, they will generalize to three syllables (Finley 2012, Cognitive Science), supporting a theory of non-local representation that goes beyond n-grams. Additional evidence for representations of non-local phenomena beyond statistics has come from studies on morpheme segmentation. Using a statistical learning paradigm, we showed that learners can parse complex words into stems and suffixes without any reference to semantics (Finley and Newport, 2010, BUCLD), but are unable to parse a language with infixes, even when the general statistics are the same (Finley and Newport, in preparation). In addition, learners can parse a language with non-concatenative morphology, but with restrictions; both children and adults generally rely on word edges and similarity of structures to parse non-concatenative morphology (Finley and Newport, 2021a, b).
Additional work on learning biases in morphology has focused on gender and number systems. In syncretism, morphological forms merge, such that the same form can have multiple meanings. Syncretism can both benefit and challenge the learner: while it creates fewer forms to learn, it also increases ambiguity. When syncretism is systematic, ambiguity is decreased and learning improves (Finley and Wiemers, 2015, WCCFL). Learners are also more likely to benefit from syncretism if it occurs across a ‘marked’ category, such as dual (Finley, in press). Another study on how biases shape the learning of morphology showed that learners make use of gender stereotypes when making inferences about gender marking in a novel language (Finley et al., in press, Language and Cognition). All of these studies demonstrate the role of cognitive biases in shaping linguistic structure, and they show how learners can use limited information to make inferences about how a language should work. Using a cross-situational learning paradigm, I showed that learners can simultaneously acquire noun meanings and their gender, and then use that morphological knowledge to infer words (Finley, in submission).
I have built a research program in which experimental techniques are used to address theoretically driven questions in language and cognition. I am firmly committed to interdisciplinary research that uses learning as a way to approach linguistic representations. As my research continues to address more nuanced questions in linguistic theory (e.g., morphophonology, exceptions, representations, and the time course of learning), we will gain a better understanding of language, representation, and cognition. I have several exciting plans for my research, and I am eager to recruit graduate and undergraduate students to take part in the development and implementation of these projects. I plan to continue my research on the nature of linguistic representations using artificial language learning experiments, including work on locality in phonology and morphology as well as learning biases for linguistic structures. This type of work involves developing formal, axiomatic approaches to hypothesis testing, allowing for clear, concise links between data and theory. I am also eager to develop computational models (e.g., Bayesian and neural network models) to better understand the nature of linguistic representations.