At the very top of the editor, you'll see a button for a third major panel called Interpretations.
Here's what it looks like:
What's going on here? Well, I've indicated once or twice before that the underlying data structure for our story representation is a semantic network. This means there are different kinds of nodes, and different kinds of arcs that go between those nodes, and so the graph structure encodes a cumulative meaning. The Interpretations screen gives you a more direct view of the graph structure of our representation of story meaning. So far there are two kinds of nodes:
red nodes, with spans of original story text, and
blue nodes, with timeline propositions (actions, properties and modifiers)
There are, so far, three kinds of arcs:
red ones, which indicate an equivalence between a red node (of original story text) and a blue node (representing a timeline proposition)
dark purple ones, which indicate temporal ordering among timeline nodes (standing in for the timeline itself)
light purple ones, which attach modifiers to the actions or properties that they modify
What we're going to do here is draw directly on this canvas with a few more kinds of nodes that represent the "whys" of the story (a sketch of the combined node-and-arc vocabulary follows this list):
Purple nodes, drawn as boxes, which represent goals or beliefs of characters
Tan nodes, which represent actions and statives that exist as parts of character goals or beliefs, drawn inside the goal or belief boxes
Black nodes, which represent affectual impact on a character, such as, "this goal, if satisfied, would be good for character X"
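To make the structure concrete, here is a minimal Python sketch of the node and arc vocabulary described above. Scheherazade is a GUI tool, not a Python library, so every name here (NodeKind, ArcKind, Node, Arc) is invented purely for illustration:

```python
from dataclasses import dataclass
from enum import Enum, auto

class NodeKind(Enum):
    TEXT = auto()          # red: span of original story text
    TIMELINE = auto()      # blue: timeline proposition (action, property, modifier)
    GOAL = auto()          # purple box: a character's goal or belief
    GOAL_CONTENT = auto()  # tan: action/stative drawn inside a goal or belief box
    AFFECT = auto()        # black: affectual impact on a character

class ArcKind(Enum):
    EQUIVALENT = auto()    # red: text span <-> timeline proposition
    TEMPORAL = auto()      # dark purple: ordering among timeline nodes
    MODIFIES = auto()      # light purple: modifier -> modified action/property

@dataclass
class Node:
    kind: NodeKind
    label: str
    parent: "Node | None" = None  # GOAL_CONTENT nodes live inside a GOAL box

@dataclass
class Arc:
    kind: ArcKind
    source: Node
    target: Node

# A text span, its timeline proposition, and the red arc between them:
span = Node(NodeKind.TEXT, "<span of original story text>")
action = Node(NodeKind.TIMELINE, "the fox observes the crow")
arcs = [Arc(ArcKind.EQUIVALENT, span, action)]
```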
The objective of Interpretative mode is to annotate each blue timeline node with a goal that indicates why the agent did what it did in that timeline node. An individual timeline node can be marked as having "no interpretation" if it has no relationship with a character goal. The idea is to get a sense of how various actions fit together as parts of a larger goal or system of goals that motivate the story as a whole. Our theoretical basis is that a typical narrative is one in which discrete agents operate with internal goals and beliefs, and the actions in the story are a projection of the interactions of those agents as they strive to reach those goals. If you like, you can read much, much more about the design of the Interpretative "layer" of Scheherazade in Chapter 3 of my thesis.
Here is how you can indicate a goal in this layer: Suppose we want to encode an interpretation that the crow has an underlying goal to keep the cheese (otherwise, why would she care that the fox would snatch it up?) and the fox has an underlying goal to take the cheese. Click on the white part of the screen -- what I'll call "bare canvas" -- and select "Goal" of "the fox" from the dropdowns at the top of the screen. Then, click "Create". You should see a purple Goal node appear. This is actually an empty goal "box".
Click on the goal now to highlight it. We can put nodes inside of Goal and Belief boxes by first clicking on them to highlight them, then creating the inside node from the top panel. Note the red border after we click on "THE FOX: GOAL":
Once the goal is highlighted, create a new action about the fox:
You should see the familiar form creation panel come up. Create an action that indicates what the fox's goal is -- to get the cheese, of course.
Note that unlike in the Timelines screen, here the agent of the action (the main doer) is determined by the "about" dropdown next to the Create button. You won't be asked, in other words, to click a red prompt button to answer the question of who is doing some action.
When you click the accept button, you'll get a tan action node inside the purple goal box.
Note that you can click both nodes and boxes to drag them around. (You can drag around a goal or belief box by dragging its purple label.)
It's easy to create arcs. Just click to highlight the node that you want the arc to travel from, so it gets a red outline. Then, in the panel on the bottom right, select the new arc type from the dropdown. In this case, we want to say that the timeline action "the fox thinks how to obtain the cheese" can be interpreted as a declaration of an overarching goal for the story, namely, that the fox wants to get the cheese. There is an arc called "interpretedAs" that is meant to indicate this kind of literal equivalence between a timeline node and a goal node.
When you click "interpretedAs" as your arc type, the canvas will go into a mode where you can click on the destination for your new arc. This "select a destination node" mode is indicated by a red color on the canvas. The boxes and nodes that are eligible (that is, the ones that can be in the receiving end of the arc) are highlighted in cyan, like this:
Both the goal box and the node inside the box are separately highlighted. We can point to either of them. What's the difference?
The goal box (typically purple) refers to the agent's mental state of having a goal, e.g., the fox's desire for the cheese
The goal content (anything inside the goal box) refers to the desired thing itself, e.g., the act of the fox getting the cheese
These two are quite different. One is always a state of mind; the other is an event that is the subject of the agent's thinking. In this case, we want to say that "the fox thinks how to obtain the cheese" is equivalent to the goal box, in that both are indications of the fox's state of mind with respect to the desirability of cheese acquisition. Thus, we click on the goal box label ("THE FOX: GOAL") and not, in this case, the goal content ("the fox obtains the cheese"). If instead we clicked the tan node "the fox obtains the cheese", we would be indicating that when the timeline says "the fox thinks how to obtain the cheese", that is equivalent to the fox actually obtaining the cheese -- which is not what we want to do. When we click the purple goal box label, the colors return to normal and the arc appears:
This is what we want. One caveat: You may have seen colors like this instead:
What's happening here is that when a blue timeline node is clicked on and highlighted, the colors of the interpretative nodes are coded to indicate each node's actualization status with respect to that point in the story. Actualization status means:
green, for actualized, or in effect, actually existing in the story-world
red, for prevented or ceased, decidedly not existing in the story world
grey, for hypothetical, neither actualized nor decidedly prevented in the story-world, just imaginary
The "interpretedAs" arc that we added means that at the point in the story where the fox thinks how to obtain the cheese, the Fox does, in actual point of fact, have that desire. So the goal box label is green, or actualized, but the goal content is grey, or hypothetical -- the fox does in fact want to obtain the cheese at this point in the story, but the fox has neither obtained the cheese, nor been decisively blocked from getting the cheese.
For comparison, try clicking on the previous timeline node, "the fox observes the crow":
The goal box label is grey. When the fox observed the crow, the fox had not yet conceptualized the goal to obtain the cheese. But at the point in time when "the fox thinks about how to obtain the cheese" --
Boom. The goal box label goes from grey to green, because it is actualized, or flipped to "true", at that point in the story.
Again, the goal content remains grey because it's still hypothetical. The fox has not yet achieved or lost his goal to get the cheese. If we want to say he achieves it, we send an "interpretedAs", "implies" or "actualizes" arc to that node itself; if we want to say he loses it, we send a "ceases" arc to that node. Do we actually want to do that in this story? Sure -- at the end, the fox actually gets the cheese:
I've zoomed out to the point where the text is nearly illegible, but it's the graph structure that matters. I've added an "interpretedAs" arc from "the fox snatches the cheese" at the bottom of the story up to the goal content "the fox obtains the cheese". At the point where the fox snatches the cheese, both the goal box label and the goal content are green -- the fox not only has a conception of his goal, but the goal has itself come to pass at that point in the story. This is the basic pattern of an achieved goal (my main research intent being to find patterns such as goal achievement, goal loss, backfire, side effect, and so on, based on a descriptive representation of the goal structure of a story).
If you click on the bare canvas, away from any node, the colors of the goal box labels and goal content nodes will revert back to their usual states (purple and tan, respectively).
Plans are represented here by connecting multiple nodes of goal content with the "wouldCause", "preconditionFor" or "wouldPrevent" arc, depending on the intended causal relationship between the plan steps. For instance, the fox here has a plan to make the crow want to sing, so that she will open her mouth, so that the cheese will drop, so that he can pick it up. If we put a second node of goal content inside the fox's goal box -- selecting "no particular character" for the "about" dropdown and then modeling the action "the cheese drops" -- we can connect the two with a green "preconditionFor" arc:
The interpretation of this is that "the fox has a goal to obtain the cheese, and he plans for the cheese to drop which will allow this to happen."
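In graph terms, then, a plan is a chain of goal-content nodes joined by causal arcs. Continuing the illustrative (again, invented) sketch from above:

```python
from dataclasses import dataclass
from enum import Enum, auto

class ArcKind(Enum):
    WOULD_CAUSE = auto()       # causality with no further action by the agent
    PRECONDITION_FOR = auto()  # enables a step the agent must still perform
    WOULD_PREVENT = auto()     # would block a later step

@dataclass
class PlanStep:
    label: str

@dataclass
class Arc:
    kind: ArcKind
    source: PlanStep
    target: PlanStep

# Inside the fox's goal box: the cheese dropping enables, but does not by
# itself cause, the fox obtaining it -- hence preconditionFor.
drop = PlanStep("the cheese drops")
obtain = PlanStep("the fox obtains the cheese")
plan_arcs = [Arc(ArcKind.PRECONDITION_FOR, drop, obtain)]
```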
Let's consider the fox's entire plan, though. I would personally argue that it is this:
Getting from the last figure to this one looks like a bit of a leap, but I just kept adding more of the fox's plan steps as actions inside his goal box, and connecting them with orange wouldCause arcs. (The rule of thumb is that wouldCause indicates causality without any further intervention by the agent with the goal; preconditionFor indicates that once the cheese drops, it's up to the fox to go ahead and grab it.)
The plan starts with the fox flattering the crow. This causes the crow to develop a goal of its own -- a goal within a goal. I inserted a goal concerning the crow inside the goal box concerning the fox. This indicates that the fox's plan is for the crow to develop its own plan. From there, I can put goal content inside the crow's goal -- actions that the fox would like the crow to want to happen.
The fox wants the crow to develop its own two-step plan. Step 1: Sing. Step 2: Trigger a belief on the part of the fox (easily drawn by selecting Belief from the node type dropdown) that the crow has a beautiful voice. But ah, here's the rub: The crow's goal itself will also cause the crow to open her beak, which triggers an orthogonal causal chain leading to the snatching of the cheese. It's like one row of falling dominoes forking into two. Think of this as a visual representation of a hidden agenda: The same action of the crow singing triggers two different causal chain reactions, one belonging to the crow's plan (within the fox's plan -- the false intention), and the other belonging directly to the fox's plan (the true intention).
Once that is modeled in the abstract, we can tie in the timeline nodes as "supporting evidence" for our interpretation of the fox's plan. There is an arc type called attemptToCause which we use to annotate which of the fox's actions are really attempts to actualize the first plan step, to flatter the crow. We can defensibly say that all the fox's actions from "the fox approaches the tree" through "the crow would be the queen of every bird" are gradual attempts to flatter the crow into developing its goal to sing. We can connect all these nodes to "flatter" with the pink-orange "attempt to cause" arc.
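In the sketch's terms, this is a fan of attemptToCause arcs from several timeline nodes into a single plan step -- something like the following, with only the endpoints of the run spelled out (the intervening timeline actions are not listed here):

```python
# Timeline actions that are attempts to actualize the plan step
# "the fox flatters the crow":
attempts = [
    "the fox approaches the tree",
    # ... the intervening flattery actions from the timeline ...
    "the crow would be the queen of every bird",
]
flatter = "the fox flatters the crow"
attempt_arcs = [("attemptToCause", source, flatter) for source in attempts]
```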
Then, the payoff: The flattery action is actualized. Plan Step 1 is achieved!
"The crow caws" plugs into the same plan representation too, simultaneously actualizing both the deceptive plan step "the crow sings" and the hidden plan step "the crow opens her beak" -- this is where the dominoes fork:
And when the author/translator makes it explicit that the crow's goal is based on vanity, we can draw the interpretedAs arc between the "in order to demonstrate" modifier and the goal box label of the crow's plan. Even though the crow's goal was merely a step in the fox's plan, it can be independently actualized as something which occurred at this point in the story. In a sense, the crow's goal box is just a special kind of goal content: the crow's mental state of wanting something is a property that serves as a plan step.
We can then say that "the cheese stops being in the crow's beak" is an actualization of the next step in the fox's plan, for the crow to drop the cheese, and "the cheese falls" is an actualization of the following step, "the cheese drops". The fox's plan is coming true in sequence. We had already drawn an arc to indicate that the fox's ultimate goal, to obtain the cheese, was actualized by the timeline action "the fox snatches the cheese".
What we've done here is to come up with a couple of carefully designed actions and properties that tie all the timeline actions (and their source text spans) together as parts of a greater whole. Of course, not every story is as goal-based as this one, but given an agent-intentive reading of narrative, the Interpretations part of Scheherazade is intended to help guide the annotation of a story into its underlying goals, plans and beliefs.
There are some other features, though, of Interpretative-panel annotation...