Even though we seem happy with the goal structure for our fable, the lower-right panel shows hints of electronic dissent (you may have to click on bare canvas again to see it):
Graph Validation is a built-in check that the system runs whenever you make a change to the graph. To take a step back: the interpretative-layer annotations in Scheherazade are geared toward a particular model of narrative discourse called the Story Intention Graph. The SIG schema imposes certain rules about how connected the graph needs to be. This panel is red if the graph isn't connected enough, and green if it is. When the panel is red, it tells you specifically which parts of the graph need additional work.
Each error in the list is associated with a particular node, and you can click through to jump to that part of the graph. For example, click the button on the right side of the first error, the one that reads "the crow is sitting on the branch of the tree":
The canvas view automatically selects and highlights the corresponding timeline node (note the red border). Okay, now that we know exactly what part of the graph has an issue, what exactly is wrong? No Interpretation. This means that there is no interpretative-layer content associated with the timeline node. The interpretative layer wants completeness with respect to the timeline: for every timeline predicate, we must either annotate that predicate with an agentive reading (that is, connect the node with some arc to a goal or plan) or mark that predicate as having no agentive reading.
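In other words, the completeness requirement is mechanically checkable. Here is a minimal sketch of what such a check might look like -- hypothetical Python with invented class and field names, not Scheherazade's actual implementation:

    from dataclasses import dataclass, field

    # Hypothetical model of a timeline node, invented for illustration.
    @dataclass
    class TimelineNode:
        label: str
        interpretative_arcs: list = field(default_factory=list)  # outgoing arcs to goals/plans
        no_interpretation: bool = False  # set by the "No Interpretation" button

    def validate_completeness(timeline):
        """Flag every timeline node that is neither connected to the
        interpretative layer nor explicitly marked as uninterpreted."""
        return [f"No Interpretation: {node.label}"
                for node in timeline
                if not node.interpretative_arcs and not node.no_interpretation]

    timeline = [
        TimelineNode("the crow is sitting on the branch of the tree"),
        TimelineNode("the fox snatches the cheese",
                     interpretative_arcs=["interpreted as"]),
    ]
    print(validate_completeness(timeline))
    # ['No Interpretation: the crow is sitting on the branch of the tree']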
We connected most of the timeline predicates in this fable to interpretative-layer (agentive) nodes, thereby annotating them with interpretative content, but we left the first three predicates hanging without any outgoing arcs:
Scheherazade is asking, "Do these three predicates relate to any goals?"
Well, the first one, "the crow is sitting on the branch of the tree", could be interpreted as an attempt to satisfy an over-arching goal of living one's life. That is a perfectly valid interpretation of the story. Unfortunately, it sets a bad precedent. By that logic, we could also say that it relates to the goal of being above the ground, for safety's sake; of being more visible to potential male suitors, for the sake of reproduction; and so on, and so forth. We could easily find ourselves sitting here all day enumerating the various rationales the crow might have for sitting there, none of which has an explicit basis in the text.
Scheherazade will happily let you do that. But the annotation guidelines I used for my experiments said, in essence, that only those goals that have at least one explicit or implicit mention should be modeled. Note that there are multiple arcs that can connect a blue timeline node to a tan goal node:
"interpreted as", for a direct or very close equivalence ("the fox snatches the cheese" is interpreted as "the fox obtains the cheese")
"implies", for an equivalence that the author probably implied but did not state directly
"actualizes", for when a timeline node implies a positive actualization status for an interpretative node, but the author probably did not state the action for the purpose of communicating that implication
The "ceases" arc is like "actualizes" but for a negative actualization status. So while the underlying story-world may have the crow motivated by a web of goals, only two -- to have the cheese, and to satisfy her vanity -- are relevant to the author's telling of the story, because they are explicitly or implicitly affected later on. These are the only two goals that have a basis in the set of sentences we have modeled as predicates.
The same strategy extends to plans, too. It's semantically plausible to say that the fox has a hundred-step plan that involves minute changes in the position of the cheese as it leaves the crow's beak and enters his paw. But none of these plan steps is important enough for the author to include in the story's telling. They would have incoming and outgoing "would cause" arcs to other interpretative-layer nodes (other steps in the plan), but no arcs giving them a basis by connecting them to timeline-layer nodes. For this reason we do not include them. The automatic graph validator (the one giving the errors) probably should include a check that every interpretative-layer node has a textual basis, but for the time being you'll have to keep this guideline in mind.
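If a future version of the validator did include such a check, it might look roughly like this -- again a hypothetical Python sketch with invented names:

    from dataclasses import dataclass, field

    @dataclass
    class InterpretativeNode:
        label: str
        arcs_from_timeline: list = field(default_factory=list)  # incoming basis arcs

    def check_textual_basis(interpretative_layer):
        """Flag goals and plan steps that no timeline node points to --
        that is, nodes with no basis in the sentences we modeled as
        predicates. The current validator does not perform this check."""
        return [f"No textual basis: {node.label}"
                for node in interpretative_layer
                if not node.arcs_from_timeline]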
Does "the crow is sitting in the branch of the tree" relate to either of the two goals I described for the crow? No. This particular predicate is to set the scene. We can indicate this by clicking the button in the lower-right panel called "No Interpretation":
Click it, and the interface will shut off this node to outgoing arcs:
The timeline node and corresponding text node get black borders to indicate that they have no interpretation.
Click on bare canvas to bring back the validation panel. It no longer complains about "the crow is sitting on the branch of the tree".
Well, that's one down. We can go down the list this way, but keep in mind that graph validation is only an aid -- fixing all these errors is necessary for properly annotating the interpretative layer, but having no errors doesn't mean your interpretative-layer annotation is complete. (Case in point: the validation algorithm would be satisfied if every timeline node were marked with "No Interpretation".) To be complete, the annotation also has to include all the goals and plans with a textual basis, and only those.
Still, the validation routine was useful for prompting us to consider all the goals that have a textual basis. Only because of the validation error for "the cheese is in the beak of the crow" did we consider modeling the crow's goal to retain her cheese. But this goal is actually crucial when you think about it. It's not a complete reading to say that the fox wins but nobody loses, yet this is how we have our goal structure set up at the moment. In fact the fable describes a zero-sum game and the crow needs to be shown as the loser.
So let's make the goal: first set up a Goal box for the crow, and then, inside it, place a property "the crow has the cheese":
Since the crow has that desire, and also has the cheese, we can model it as satisfied at the story's beginning.
We connected "the cheese is in the beak of the crow" to the goal box with "implies" and to the goal content with "interpreted as".
Does this goal have any coreferent mentions (is it referred to elsewhere)? Of course -- when the crow loses the cheese. We draw a "ceases" arc between "the cheese falls" and the goal content.
Now we have encoded an act of harm to another: the fox hurt the crow by acting on a plan that ultimately ceased the content of one of the crow's goals. Did the fox do this with the intention of hurting the crow? Or only with the intention of helping himself? You can encode your own interpretation. At the moment, we have encoded that the fox acted only with the intention of helping himself, and the harm to the crow was a side effect he hadn't really considered. Let's instead model the notion that the fox acted with an intention to harm the crow, by adding a goal content node in the fox's goal box that would prevent the crow's goal:
Now the fox cares about two goals at once -- to get the cheese, and to defeat the crow's goal by making her lose the cheese. He cares about both of these goals individually. By snatching the cheese from the crow, he kills two birds with one stone, so to speak: he gets food and humiliates his enemy. Another reader might say instead that the food is the only rationale, or that the humiliation is the only rationale. We've now said that both are his rationale.
This "loses" action is then actualized by "the cheese falls". When we place an "interpreted as" arc from "the cheese falls" to "the crow loses the cheese", and click on "the cheese falls", we get the following actualization status shadings:
This means that at the point in time when the cheese falls, the actualization scenario looks like this:
"the crow loses the cheese" is solid green, for "actualized", because we pointed to it with "interpreted as"
"the crow has the cheese" is solid red, for "ceased", because we pointed to it with "ceases"
"the cheese drops" is solid green, because we pointed to it with "interpreted as"
"the fox obtained the cheese" is light green, which means "expected to be actualized", because:
"the cheese drops" is actualized, and
"the cheese drops" points to "the fox obtains the cheese" with "precondition for". The system calculates that if X is a precondition for Y, or would cause Y, and X is true, then Y is expected to be true in the present or the near future. It expresses this expectation with a light green.
Try something for me -- remove the arc where we linked the timeline node "the cheese falls" to the crow's goal with "ceases". To do that, first click on "the crow has the cheese" to get this detail panel showing all the arcs adjacent to that node.
We see on this panel the three nodes that are connected to this one and the arcs that connect them. Click the "x" on the "ceases this" arc to delete it, for now. Then click on the blue timeline node "the cheese falls" to examine the actualization status again for that point in the story.
"The crow has the cheese" has gone from solid red to light red. Light read means "expected to be ceased", as a mirror image to light green meaning "expected to be actualized". But why does the system think the crow has likely lost out in her desire to have the cheese, if we didn't explicitly say so? Because:
"the crow loses the cheese" is actualized, and
"the crow loses the cheese" links to "the crow has the cheese" with "would prevent".
So if X would prevent Y, and X is true, then Y is probably already ceased or soon will be. This is why it's light red.
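Putting the two mirror-image rules together, here is a minimal sketch of how the expectation shadings could be computed. The arc names come from what we've seen so far; everything else (the triple representation, the function, the simplifying assumption that we already know which nodes are actualized) is my own invention, not Scheherazade's code:

    # Arcs as (source, arc_type, target) triples; `actualized` is the set of
    # interpretative nodes already actualized at this point in the story.
    def expectation_status(node, actualized, arcs):
        if node in actualized:
            return "solid green"          # actualized
        for src, arc_type, dst in arcs:
            if dst == node and src in actualized:
                if arc_type in ("precondition for", "would cause"):
                    return "light green"  # expected to be actualized
                if arc_type == "would prevent":
                    return "light red"    # expected to be ceased
        return "unshaded"

    arcs = [("the cheese drops", "precondition for", "the fox obtains the cheese"),
            ("the crow loses the cheese", "would prevent", "the crow has the cheese")]
    actualized = {"the cheese drops", "the crow loses the cheese"}

    print(expectation_status("the fox obtains the cheese", actualized, arcs))  # light green
    print(expectation_status("the crow has the cheese", actualized, arcs))     # light red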
Press "undo" to replace the link where we do, in fact, confirm that the crow loses the cheese.
Why are these light red and green "expectation" shadings relevant? They can serve as another check for whether the system is correctly understanding your interpretation of the story's goal structure. If the story is supposed to elicit suspense, for instance, then there is likely a node somewhere that is expected to soon flip to "actualized", colored light green, as one or more antecedents come to pass -- but stays light green for some time, leaving the reader in a state of anticipation.
With me? Awesome. Take five, then we'll finish up.
Okay, let's take another look at our validation panel.
We have two more cases of "No Interpretation". For the first one, "the fox observes the crow", let's confirm that yes, there is no interpretative-layer implication here. Click the "No Interpretation" button for that timeline node.
That leaves the very last timeline node: "the fox says to the crow that the crow is obligated to want to own wits". Does this have a goal interpretation?
I think so. The fox is insulting the crow by calling her stupid. He wouldn't be doing this if he didn't have an intention to hurt the crow. So let's say that this clause implies that the fox wants to insult the crow, and that it is "interpreted as" a successful fulfillment of this goal:
Great! Now every timeline predicate has a goal-centric interpretation or a confirmation that it does not relate to any goal. Are we done?
Hrm. Almost, but not quite. What is a "drain"?
To explain this, we need to think about the current encoding from the perspective of the computer. Right now, we know that the fox aims to achieve something, and to prevent the crow from having something. As humans we know the verbs, nouns and adjectives relating to those things (like "have", "cheese", and "insult"), but the computer doesn't understand human nature well enough to contextualize them. Is it good or bad to have cheese? What if the cheese were a radioactive poison, and the crow were hurting herself and putting the entire society at risk by holding it high up in the tree? Maybe the fox is a hero trying to save the crow's life by running away with the cheese before it poisons everyone? If the fox had to trick the crow to get the cheese away, such a story encoding would look nearly identical to this one. That's a problem. Without a real understanding of what is good or bad for an agent -- things that have an affectual impact on a character -- it isn't possible to really know whether stories are similar or analogous to each other. We need a way for all story encodings to express, in a common language, what is good and what is bad with respect to each character.
A special kind of node called the Core Goal node (named the "Affect" node in the thesis) is introduced to serve that purpose. It is called "core" here because it is designed to go at the ends of plans. A Core Goal node placed at the end of a plan serves to orient the entire plan as either positive or negative with respect to a character, as if each plan step were suddenly magnetized as positive or negative. Like this:
At the core, the fox wants health. This is an axiom. We can all agree that health is a basic need for a character and is good for that character.
The fox wants to obtain the cheese, and obtaining the cheese would provide for his health.
Therefore, the fox wants the cheese because achieving this would help himself, as opposed to harming himself, helping another, or harming another.
The fox wants the cheese to drop. This is a precondition for the fox obtaining the cheese.
Therefore, the fox wants the cheese to drop because this gets him closer to helping himself.
And so on, back to the beginning of the plan. By adding a single special node to a plan, we now know for every plan step whether the actualization of that step would be good or bad for at least one character. It's a form of inductive reasoning, with the Core Goal serving as the "base case" (hence the name "Core") and the chain of arcs leading up to it supplying the inductive steps.
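As a sketch, the induction amounts to a backward walk from the Core Goal over the plan's arcs. Below is hypothetical Python, not necessarily how Scheherazade implements it; the step names and arc types are the ones we modeled for the fox's plan, anticipating the "provides for" hookup we're about to make in the interface:

    def orient_plan(core_goal, arcs):
        """Every plan step whose actualization leads, directly or
        transitively, to the Core Goal inherits its orientation."""
        oriented = {core_goal}
        changed = True
        while changed:
            changed = False
            for src, arc_type, dst in arcs:
                if dst in oriented and src not in oriented and \
                   arc_type in ("would cause", "precondition for", "provides for"):
                    oriented.add(src)
                    changed = True
        return oriented - {core_goal}

    arcs = [("the crow opens the beak of the crow", "would cause", "the crow drops the cheese"),
            ("the crow drops the cheese", "would cause", "the cheese drops"),
            ("the cheese drops", "precondition for", "the fox obtains the cheese"),
            ("the fox obtains the cheese", "provides for", "THE FOX: HEALTH")]

    # All four plan steps come out oriented as good for the fox:
    print(orient_plan("THE FOX: HEALTH", arcs))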
Let's make this concrete in the interface. Select the fox's large goal box, the one with the multi-step plan. Then, from the panel at the top, select "health" of "the fox":
Click "create" and place the resulting black node near the end of the plan:
Connect "the fox obtains the cheese" to "THE FOX: HEALTH" with a "provides for" arc:
Now look again at our list of validation errors:
It's much shorter now! In fact, we fixed four errors at once. Which four? The four plan steps leading up to the new Core Goal node ("the fox obtains the cheese", "the cheese drops", "the crow drops the cheese", "the crow opens the beak of the crow"). All four of these are now understood by Scheherazade to be things that the fox ultimately wants to happen because they would be good for his health.
Now let's give an affectual orientation to the fox's secondary plan, to humiliate the crow. Set up a Core Goal about the crow's "ego", inside the box holding the crow's opening goal:
Add a "provides for" link from "the crow has the cheese" to "THE CROW: EGO":
Bingo! Now the fox's secondary plan is affectually oriented. We've encoded symbolically that the crow's state of having the cheese is good for her, and the fox sets out to end that good state, thereby harming the crow. Our hypothesis that the fox may have been saving the crow from radioactive destruction is now disproven; nay, this is a sordid tale of harm intend'd and foxdom's inhumanity to crowdom.
Even though the validation error list did not detect it, we are also missing a Core Goal node to orient the fox's deceptive goal. The plan that he wants the crow to have must feature its own affectual orientation. The fox wants the crow to have a two-step plan: the crow sings, and the fox thinks the crow's voice is beautiful. What should be the Core Goal here? How about CROW.EGO?
Here we are saying that the fox wants the crow to have a self-serving plan, as opposed to just any plan. Doesn't this make the story richer? Suddenly it's about "bad advice" and a hidden agenda: the fox misled the crow into believing that she was acting toward her own interests, when in fact she was acting against them. Due to the typing of the Core Goal -- ego -- it's also now about the folly of vanity. The crow acted vainly, and she was punished. Voilà -- the moral of the story.
Of course, we jumped over one key point: where do "ego", "health" and the other types of Core Goals come from? I compiled the list from work in motivation theory by Maslow and Max-Neef (two separate theories). I don't make any claim that this is a particularly good or bad list of types of human needs. You can make up your own for your annotation project, if you like. (This will require a recompile of Scheherazade.) My only claim is that core goal typing is a representational feature for adding more granularity to the classification of plans by their affectual impacts on the story's characters. Others in my list include justice, honor, wealth and enlightenment -- all available, as you've seen, in the node type dropdown.
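For reference, that controlled vocabulary could be rendered as a simple enumeration like the one below -- purely illustrative; the actual list compiled into Scheherazade may differ:

    from enum import Enum, auto

    class CoreGoalType(Enum):
        """Types of basic needs available for Core Goal nodes, drawn from
        Maslow's and Max-Neef's motivation theories (illustrative only)."""
        HEALTH = auto()
        EGO = auto()
        JUSTICE = auto()
        HONOR = auto()
        WEALTH = auto()
        ENLIGHTENMENT = auto()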
We still need a Core Goal set up for one more plan: the one we added a few minutes ago, motivating the fox's taunting of the crow. Why does the fox taunt the crow? Well, we said before that the fox had an intention to hurt the crow. He probably also had an intention to boost his own ego at the same time. So let's set up two Core Goal nodes, one for the fox's ego and one for the crow's ego, inside the fox's goal box.
Set up arcs to each one from the fox's sole plan step. The actualization of this step "provides for" the fox's ego, and "damages" the crow's ego.
Great! Now click on bare canvas once more to run the validation algorithm. Here's what we see instead of a list of validation errors:
An expanse of blessed green, telling us that the graph seems to be complete! As I've noted, in the future you'll need to check for things that the validation routine does not pick up, like Core Goals in nested plans.
Let's hit the "zoom out" button and take a look at what we've made:
This is a single unified graph with a ton of information about the fable. On the left is a column of nodes indicating surface text, divided into relevant clauses. In the middle is a column of temporally sequenced propositions that tell us the who, what, when and where of the story timeline. And on the right half is a network of frames, nodes and arcs that tell us the why of the story: the underlying goals, plans and affectual impacts. The arcs that traverse the graph tell us about equivalences, attempts and outcomes. We see a long plan formed and then gradually executed over time; we see a character intending to hurt another through deception by appealing to the second character's vanity; and we see who the winners are and who the losers are -- all linguistically grounded in verb frames, noun frames and a few controlled vocabularies.
And that's it! You've just finished your first Scheherazade annotation. After you save your work, click ahead to the conclusion.