The EP-Memory records the sequence of linguistic categories extracted from each processed sentence, building a chain of tokens named Parsed Structure (PS). Every PS is weighted according to its usage in previously processed real dialogs. As an example, for the sentence "Hola" the related EP is:
"ER = {<0.0152009> <0.0183416> [SUSTANTIVO] # {{Hola}}}".
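To make the PS and weighting idea concrete, the following is a minimal Python sketch, assuming a simple relative-frequency weighting and hypothetical names (EPMemory, observe, weight) that are not taken from the paper; the actual EP encoding, with its two weight fields, is richer than this.

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class EPMemory:
    """Hypothetical sketch: an EP-Memory that counts each category
    sequence (PS) and derives its weight from relative usage."""
    counts: Counter = field(default_factory=Counter)

    def observe(self, categories):
        # A PS is the chain of linguistic categories from one sentence.
        self.counts[tuple(categories)] += 1

    def weight(self, categories):
        # Weight = relative frequency of this PS among all observed PS.
        total = sum(self.counts.values())
        return self.counts[tuple(categories)] / total if total else 0.0

memory = EPMemory()
memory.observe(["SUSTANTIVO"])            # e.g. the sentence "Hola"
memory.observe(["SUSTANTIVO", "VERBO"])   # a hypothetical two-token PS
print(memory.weight(["SUSTANTIVO"]))      # 0.5
```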
As the memory learns from experience, it reinforces parts of its knowledge relative to its initial state, changing the relative relevance of the involved structures. After a certain number of occurrences, the weighting stabilizes and subsequent changes are minor.
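This stabilization can be illustrated under the same relative-frequency assumption: after n observations, one more occurrence moves a weight by at most 1/n, so updates shrink as experience accumulates. This is a sketch only; the paper does not state its exact update rule here.

```python
def weight_trajectory(stream, target):
    """Relative-frequency weight of `target` after each observed PS.
    The n-th observation changes the weight by at most 1/n, so the
    weighting stabilizes once enough dialogs have been processed."""
    hits, weights = 0, []
    for n, ps in enumerate(stream, start=1):
        hits += (ps == target)
        weights.append(hits / n)
    return weights

# Early weights swing widely; later ones change only marginally.
print(weight_trajectory(["A", "B", "A", "A", "B", "A", "A", "B"], "A"))
```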
In [1], EP tests show the ability of the WIH prototype to process dialogs, derive weightings, and generate category sequences.
The first EP training was performed with random structures to show that the prototype prefers no particular category sequence, since the dataset was constructed without biased use of any parsed structure. Consequently, the weighting values should be approximately evenly distributed over the domain.
The results obtained, shown in Figure 3, verify this hypothesis.
The second test in that paper deals with real test cases. Figure 4 shows the 3D histogram with a marked change in the overall bar distribution: the curve is strongly biased, sloping down from the highest values.
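Both outcomes can be reproduced qualitatively in simulation: sampling PS uniformly at random gives a roughly flat weight histogram (the Figure 3 scenario), while sampling with a skewed, Zipf-like usage, closer to real dialogs, gives a sloping distribution like Figure 4. The Zipf choice below is an illustrative assumption, not the paper's data.

```python
import random
from collections import Counter

def weight_distribution(draw, n_structures=20, n_samples=10_000):
    """Train on n_samples PS indices produced by `draw` and return the
    usage-based weight of each PS type, largest first."""
    counts = Counter(draw(n_structures) for _ in range(n_samples))
    return sorted((counts[i] / n_samples for i in range(n_structures)),
                  reverse=True)

# Random training set: every PS equally likely (Figure 3 scenario).
uniform = weight_distribution(lambda k: random.randrange(k))
# Zipf-like skew as a stand-in for real category usage (Figure 4 scenario).
zipfish = weight_distribution(
    lambda k: random.choices(range(k),
                             weights=[1 / (i + 1) for i in range(k)])[0])

print(uniform)  # weights clustered near 1/20: roughly even distribution
print(zipfish)  # a few large weights sloping down to many small ones
```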
Since the weighting distribution roughly follows the real usage of linguistic categories, the EP-Memory is able to model that usage. However, for the prototype to handle language, it must be complemented with a model of the contexts in which each type of linguistic EP structure is used; that task is distributed mostly between the ER-Memory and the MCT-Memory.
References
[1] López De Luise, M. D., Hisgen, D., Soffer, M., "Automatically Modeling Linguistic Categories in Spanish", CISSE 2009.