Die Konsonantenschwächung, das Anlautgesetz, the Spelling Patterns of Notker, and the de Nuptiis System


Below are four papers scanned into Adobe PDF.  They came out of a Germanic Philology program at the University of Minnesota.

-Die Konsonantenschwächung, Word Final <f> and <h> in Notker, 1997

-Das Anlautgesetz, Word Initial <s> and <h> in Notker, 1997

-The Spelling Patterns of Notker, 1996
pages 1 to 39
pages 40 to 79
pages 80 to 118
pages 119 to 161

-The de Nuptiis System (with focus on the Anlautgesetz), 1989
pages 1 to 40
pages 41 to 80

The first two papers are derived from the third, and I think another half dozen could be too.  These are posted simply for the common good (or common ill, I guess; it's up to you to decide).  There is no need to cite them.  If you do cite them, however, please break with convention and clean up the spelling mistakes (the documents are being uploaded without editing, as I have them only in paper form).

I hesitated to post the fourth paper, since its presentation is so rough.  It was written in a word processor that did not allow special characters, so formulas and non-ASCII letters had to be written in by hand.  I do not remember whether the paper was ever handed in for a class.  Although this fourth paper serves as source material for the third, it also includes material not found in the third: an analysis of sonorants, plus a statistical analysis of the Anlautgesetz in de Nuptiis, showing it inhibited not just by boundary and by sonority, but by these in gradation.  The Anlautgesetz was joined by an Inlaut- and an Auslautgesetz, forming a three-part fortition rule in Notker:

All obstruents, not just stops, become fortis
[-sonorant] --> [+fortis] 

a) syllable internally, when next to another obstruent.
/ [-sonorant]

b) syllable finally, when preceded by sonority level of nasal or less.
/ [nasal sonority or lower] __ $

c) syllable initially, unless preceded by a sonorant.  However, the effect of the sonorant depends both on its degree of sonorancy (vocalic sonorants are more likely than consonantal ones to inhibit the Anlautgesetz) and on the degree of boundary before the obstruent (syllable boundary plus high point is more likely to allow the Anlautgesetz after a sonorant than syllable boundary plus low point, and both more so than syllable boundary plus only word boundary).
/ ([-sonorant]) $ __

/ ([+sonorant] ([+boundary])) $ __

As you can see, a binary feature system, as used in generative phonology rules, does not depict gradations elegantly, nor an element that inhibits the inhibiting of another element.  But it was the system most familiar to readers at the time.
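For readers who prefer code to feature matrices, the gradations in (c) can be sketched procedurally.  This is a toy illustration only; the sonority scale, the boundary-strength values, and the function name are my own assumptions, not taken from the papers:

```python
# Toy sketch of the three-part fortition rule.  The sonority scale,
# boundary-strength values, and segment labels are assumptions for
# illustration, not taken from the papers.

SONORITY = {"stop": 0, "fricative": 1, "nasal": 2,
            "liquid": 3, "glide": 4, "vowel": 5}

def fortition(seg, prev, position, boundary_strength=0):
    """Return True if an obstruent becomes fortis.

    seg, prev         -- sonority-class labels ("stop", "nasal", ...);
                         prev is None when nothing precedes
    position          -- "internal", "final", or "initial" in the syllable
    boundary_strength -- 0 = word boundary only, 1 = low point, 2 = high point
    """
    if SONORITY[seg] > 1:                 # the rule targets obstruents only
        return False
    if position == "internal":            # (a) next to another obstruent
        return prev is not None and SONORITY[prev] <= 1
    if position == "final":               # (b) preceded by nasal sonority or less
        return prev is not None and SONORITY[prev] <= SONORITY["nasal"]
    if position == "initial":             # (c) unless a sonorant precedes;
        if prev is None or SONORITY[prev] <= 1:
            return True
        # a stronger boundary weakens the sonorant's inhibiting effect
        return boundary_strength >= 2
    return False
```

A numeric scale makes the gradation explicit: a stronger boundary can override a preceding sonorant, something the binary notation above can only hint at with nested parentheses.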

It has been 15 to 20 years since I last read these papers, and I did so only after cleaning out our library this winter.  If I remember correctly, however, their goal was to design two computer programs.

The first program, and the easier of the two, would generate Notker's de Nuptiis text.  That necessitated including substantial formulas (in the generative tradition, which back then seemed easiest to integrate into a computer program, although nowadays other traditions could be coded in too).  A phonological component would be followed by a graphological one.  Phonemes would undergo phonological rules to generate allophones.  These would map to graphemes, which would undergo graphological rules to generate allographs.  The input was the strings of phonemes in words/morphemes; the output was the string of graphs in the manuscript.  The rules required positing provisional syllable boundaries for the phonological rules and word boundaries for the graphological rules, with those boundaries revisited at the end (for example, in this text, syllable boundaries by the syllable-boundary-driven fortitions, and word boundaries by the spelling changes also conditioned at morpheme boundary after prefixes).
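The pipeline just described (phonemes through phonological rules to allophones, then mapped to graphemes and run through graphological rules to allographs) could be organized roughly as follows.  The rules and mappings here are invented placeholders, not Notker's actual rules; they only show the shape of such a program:

```python
# Sketch of the two-component generation pipeline.  The example rule and
# the phoneme-to-grapheme table are placeholders for illustration.

def apply_rules(segments, rules):
    """Apply an ordered list of rewrite rules left to right.  Each rule
    is a function of (segment, next_segment_or_None) returning a segment."""
    out = []
    for i, seg in enumerate(segments):
        nxt = segments[i + 1] if i + 1 < len(segments) else None
        for rule in rules:
            seg = rule(seg, nxt)
        out.append(seg)
    return out

# placeholder phonological rule: /d/ hardens to [t] before an obstruent
phonological = [lambda seg, nxt: "t" if seg == "d" and nxt in {"t", "s"} else seg]

# placeholder phoneme-to-grapheme mapping and (identity) graphological rule
to_grapheme = {"t": "t", "d": "d", "s": "s", "a": "a"}
graphological = [lambda seg, nxt: seg]

def generate(phonemes):
    """Phonemes -> allophones -> graphemes -> allographs -> output string."""
    allophones = apply_rules(phonemes, phonological)
    graphemes = [to_grapheme[a] for a in allophones]
    return "".join(apply_rules(graphemes, graphological))
```

With real rule lists in place of the placeholders, the same skeleton would take morpheme-level phoneme strings in and emit the graph string of the manuscript.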

This concept wasn't new (at least for the phonological component).  People had been doing it on paper for a couple of decades, if only with isolated spelling patterns, and not for an entire manuscript.

The second program was more ambitious.  It was an attempt to rethink the methods of reconstructing a language via ancient texts, and to code those methods into an algorithm.  Input would be the ancient text, the ancient source writing system, and related modern languages.  Output would be the manuscript system, both phonological (morphemes, phonemes, allophones, phonological rules, syllable boundaries, any rule orderings) and graphological (graphemes, allographs, graphological rules, word boundaries, etc.).  As each new text was run through the system, the methods could be refined, and then run against previous texts for regression testing.  Such automation would bring accountability to our field.
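The refine-and-rerun loop at the end of that paragraph could be organized along these lines.  A minimal sketch, with all names hypothetical:

```python
# Sketch of the regression-testing loop: after refining a reconstruction
# method, re-run it over previously analyzed texts and flag any text whose
# output no longer matches the accepted analysis.

def regression_test(reconstruct, corpus, accepted):
    """reconstruct -- the (refined) reconstruction function
    corpus        -- {text_name: tuple_of_inputs} for each analyzed text
    accepted      -- {text_name: previously accepted output}
    Returns the names of texts whose analysis has regressed."""
    failures = []
    for name, inputs in corpus.items():
        if reconstruct(*inputs) != accepted[name]:
            failures.append(name)
    return failures
```

An empty result means the refinement preserved all earlier analyses; a non-empty one tells you exactly which texts to revisit.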

In the late 1980s, this automation was cost-prohibitive to program, and alien to most scholars in the field.  Both things have changed by now, and perhaps quite a few readers will be able to sketch out an entity diagram of phonemes, graphemes, allographs, and so on, and then play with the input/output via a different list of phonological and graphological algorithms.  That first computer program, the one to generate de Nuptiis, is quite doable.  Feel free to contact me if you have questions.  The second is harder, and comes with much larger data sets.

Alles Gute with your studies,
Christian Nederloe