
Convergence of error-driven ranking algorithms

Paper:
available here


Abstract: 
According to the OT error-driven ranking model of language acquisition, the learner performs a sequence of slight re-rankings triggered by mistakes on the incoming stream of data, until it converges to a ranking that makes no more mistakes. This learning model is very popular in the OT acquisition literature, in particular because it predicts a sequence of rankings that models gradualness in child acquisition. Two implementations of this learning model have been developed in the OT computational literature, namely Tesar & Smolensky's (1998) Error-Driven Constraint Demotion (EDCD) and Boersma's (1997) Gradual Learning Algorithm (GLA). Yet EDCD performs only constraint demotion, and is thus shown to predict a ranking dynamics that is too simple from a modeling perspective. The GLA performs both constraint demotion and promotion, but has been shown not to converge. This paper therefore develops a complete theory of convergence for error-driven ranking algorithms that perform both constraint demotion and promotion. In particular, it shows that convergent constraint promotion can be achieved (with an error bound that compares well with that of EDCD) through a proper calibration of the amount by which constraints are promoted.
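
As a rough illustration of the update step described above, the following Python sketch (the paper's own code is in MATLAB) applies one mistake-triggered re-ranking over numerical ranking values. The names ranking, winner_prefs, and loser_prefs, the demotion amount of 1, and the promotion amount l/(w+1) are illustrative assumptions in the spirit of calibrated promotion, not the paper's exact formulation; see the paper for the precise calibration condition that guarantees convergence.

def error_driven_update(ranking, winner_prefs, loser_prefs):
    """One error-driven re-ranking step (illustrative sketch).

    ranking      -- dict mapping constraint names to numerical ranking
                    values (higher value = higher-ranked constraint)
    winner_prefs -- set of constraints that prefer the intended winner
    loser_prefs  -- set of constraints that prefer the learner's
                    mistaken output on the current piece of data
    """
    l = len(loser_prefs)     # number of constraints to be demoted
    w = len(winner_prefs)    # number of constraints to be promoted
    promotion = l / (w + 1)  # calibrated promotion amount (assumed form)
    for c in loser_prefs:
        ranking[c] -= 1.0          # demote loser-preferring constraints
    for c in winner_prefs:
        ranking[c] += promotion    # promote winner-preferring constraints
    return ranking

# Hypothetical usage on a toy constraint set:
ranking = {"Max": 0.0, "Dep": 0.0, "NoCoda": 0.0}
error_driven_update(ranking, winner_prefs={"NoCoda"}, loser_prefs={"Max"})

The point of the calibration is that promotion is scaled down when many winner-preferring constraints are promoted at once, which is what blocks the non-convergent behavior attributed to the GLA in the abstract.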


Remarks: 
This paper is also available as ROA 1105-1010. The papers available here and here contain shorter presentations of two of the results in this paper. The MATLAB code mentioned in the paper is available here.


BibTeX entry:
@ARTICLE{Magri(2012Convergence),
  AUTHOR  = {Magri, Giorgio},
  TITLE   = {Convergence of error-driven ranking algorithms},
  JOURNAL = {Phonology},
  VOLUME  = {29},
  NUMBER  = {2},
  PAGES   = {213--269},
  YEAR    = {2012}
}