Language mapping

This is my own extract from: O'Keefe, J. and Nadel, L., The Hippocampus as a Cognitive Map, Oxford University Press, 1978.


14.3.2(c). LANGUAGE

The assumption that some abstract deep-structure base characterizes long-term memory first arose within psycholinguistics. We can briefly review this development, concentrating on evidence demonstrating that there are two stages in language processing, one involving the serial ordering of a linear string of symbols, the other consisting of an underlying non-linear structure from which this ordered string is generated. The similarities between these processes and those just described for imagery will become clear in the course of our discussion. As there is considerably more information available concerning deep structure in language than there is for imagery, we can go into somewhat more detail here; the features of deep structure demonstrated for language are probably also to be found in the imagery system.

The existence of a deep structure. One approach to language, as to any form of behaviour, is to attempt to explain it solely on the basis of its observable linear structure. This approach, associated with the behaviourist school, suggests that each element of a sentence is generated in response to preceding elements, or in response to a stimulus in the environment, and that the whole sentence can be thought of as a Markov chain. Given the first word of a sentence, any other word has a finite probability of being produced, depending upon the number of times in the past that the word followed the first. Thus, a word like `smelly' would be followed quite often by `feet' or `cheese', less frequently by `music' or `airplane', and virtually never by `for' or `thinks'. Higher-order Markov chains would take into account not just the previous word but the previous two words, three words, and so on. Language, on this model, is generated solely by a system which produces strings of symbols in an ordered left-to-right linear sequence. Highly practised sequences would be run off without recourse to decisions, ideas, etc. From this point of view there is nothing unusual about Lashley's colleague who claimed that '... he could arise before an audience, turn his mouth loose and go to sleep. He believed in the peripheral chain theory of language' (Beach et al. 1960, pp. 510-11).
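The first-order version of this word-chain model is easy to make concrete. The following sketch is purely illustrative (the toy corpus and all names are our own, not anything proposed by the writers under discussion): each next word is picked only from counts of what followed the previous word.

```python
import random
from collections import defaultdict

def train_bigram(corpus):
    """Count word-to-word transitions in a list of sentences."""
    table = defaultdict(lambda: defaultdict(int))
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            table[prev][nxt] += 1
    return table

def next_word(table, prev):
    """Choose a successor with probability proportional to past frequency."""
    followers = table.get(prev)
    if not followers:
        return None
    words = list(followers)
    counts = [followers[w] for w in words]
    return random.choices(words, weights=counts)[0]

toy_corpus = ["the smelly cheese sat on the table",
              "his smelly feet were under the table"]
table = train_bigram(toy_corpus)
print(next_word(table, "smelly"))   # 'cheese' or 'feet', never 'for'
```

The model's deficiencies, discussed next, follow directly from this structure: nothing outside the left-to-right chain of transitions is available to it.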

* There was some evidence in Lea (1975) that certain starting sites in a spatial array are more easily accessed than others; this seemed related either to top-to-bottom scanning methods or to some subjective impact of the objects located at those sites. This is, as Lea points out, an area requiring further research. Lea also failed to find any relationship between the reaction time required to scan from one site to another and the real-world distance captured by the image of those sites; this is in disagreement with results reported by Kosslyn (1973). It is not necessarily the case, however, that a failure to find an increase in reaction times indicates that the image does not represent increased distances. There is no reason to assume, within a neural mapping structure, that real-world distances would be correlated with neural distances in a fashion which would produce orderly changes in reaction times with changes in imaged distances.
 
Another aspect of language emphasized by the behaviourists is the referential nature of meaning; that is, the way words refer to things or events and appear to derive their meanings from this reference. This connection comes about as the result of a simple conditioning process; a sound experienced in the presence of an object will, when later heard by itself, call up the same, or some of the same, responses as the object itself. The meaning of a sequence of sounds or words would then be given by the sum total of the conditioned meanings of each individual sound or word. In the face of harsh criticisms, to be mentioned below, this strong position has been progressively modified and weakened. In one recent formulation (Osgood 1971) meaning was seen as dependent upon some sort of internal response (rm) which was derived from the total external response to the object. Words are not conditioned to the external responses but to these rms. The meanings of more abstract words, such as justice, are derived in a secondary fashion from the rms associated with actual objects or events.

This simple behaviourist approach, which emphasizes the observable aspects of language, does seem to explain adequately many of the stereotyped features of language and some of the simpler referential features of meaning. It fails, however, in the language sphere in exactly those places where it fails in its explanation of behaviour in general, human or infra-human; it ignores or denies the purposeful variability and originality of behaviour, the novel behaviour not obviously due to generalization, the flexible use of behaviour learned in one situation but applied for a different purpose in another, and the underlying similarity amongst superficially different behaviours. These aspects of behaviour become acutely obvious in language and it is here that the deficiencies of the behaviourist account are most glaring. As pointed out by Lashley (1951), Chomsky (1957 and elsewhere), and Fodor (1965):

(1) Novel sentences constitute a large proportion of all utterances. 
(2) The related words in a sentence often are not contiguous. The sentence 'the man who lived in the house sneezed' derives its meaning from the noncontiguous elements 'the man ... sneezed' and not from the contiguous elements 'the house sneezed'. 
(3) Superficially different sentences such as the active `the boy hit the dog' and the passive `the dog was hit by the boy' have the same meaning. 
(4) The same sound can have more than one meaning.*

It is hard to see how reference to any response, or partial response, or hidden partial response will remove the ambiguity associated with the use of sounds with two different meanings. Disambiguation almost always depends on the context within which the sound occurs.

* It is not clear whether one should speak of one word with two meanings or two words which sound alike (homophones). 

It is to explain the existence and importance of these features of language that Chomsky and most subsequent writers on linguistics have postulated the presence of a deep structure in addition to the more superficial one which generates the left-to-right temporally ordered pattern of the observed behaviour.

The form of the deep structure. The deep structure was designed to account for the creative aspects of language, the connectedness between non-contiguous surface elements, the different meanings of a sound depending on its context, and the relationship between such superficially different sentences as the active and passive forms. In Chomsky's (1957, 1965) systems the deep structure consists of a complex set of rules which operate on symbols or strings of symbols, as well as the structures which the application of the rules generates. Chomsky's grammar has three sets of rules: the most important of these are syntactic, the others are semantic and phonological. The syntactic rules were seen as the creative part of the system, generating the basic sentence structure; the other rules acted passively on the inputs they received from the syntactic component to generate meaning and phonological representations. 

Let us briefly consider the rules of the syntactic component of Chomsky's grammar and the structures that they generate. There are two different types of syntactic rule, phrase-structure rules and transformational rules. The phrase-structure rules operate upon symbols for grammatical categories such as noun phrase (NP) and verb phrase (VP), rewriting them as strings of symbols. Fig. 33 gives some examples of the operation of these rules. Note that these rules operate on individual symbols, and that no account is taken of the history of that symbol or of the derivation of the string in which it is embedded. A noun phrase receives the same treatment wherever it appears. The structure generated by the operation of the phrase-structure rules is often portrayed as a tree diagram called a phrase marker (see Fig. 34); this seems to be viewed by Chomsky as a static structure, all parts of which exist simultaneously in the base component of the grammar. Transformational rules were introduced when it was seen that although phrase-structure rules could generate simple active sentences, by themselves they could not account for such things as passive sentences or questions. The transformational rules are applied to the deep structures generated by the base component and differ from the phrase-structure rules in that they apply to strings of symbols, are applicable only in a fixed order, and take into account the history or derivation of the string upon which they operate. This last property means that, in a sense, they operate upon whole phrase markers. By allowing for the optional (Chomsky 1957) or obligatory (Chomsky 1965) addition, deletion, or re-ordering of elements within a string, they easily provide for the transformation of sentences from, for instance, active to passive or declarative to interrogative (see Fig. 35).


 
Rewrite SENTENCE as NOUN PHRASE (NP) + VERB PHRASE (VP)
    S → NP + VP

Rewrite NOUN PHRASE as DETERMINER (DET) + NOUN (N)
    NP → DET + N

Rewrite VERB PHRASE as VERB (V) + NOUN PHRASE (NP)
    VP → V + NP

Rewrite NOUN (N) as dog, boy
    N → dog, boy

Rewrite DETERMINER (DET) as the
    DET → the

FIG. 33. Examples of phrase-structure rules for the sentence `The dog bites the boy'.

       
       
FIG. 34. An example of a phrase marker for the same sentence as in Fig. 33. [Tree diagram not reproduced.]
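A minimal sketch of how such rewrite rules operate top-down may help here. This is our own illustration, not Chomsky's formalism; we add an assumed rule V → bites, which the example sentence requires but the extract of Fig. 33 omits.

```python
# The rewrite rules of Fig. 33 as data; the rule V -> bites is assumed.
RULES = {
    "S":   [["NP", "VP"]],
    "NP":  [["DET", "N"]],
    "VP":  [["V", "NP"]],
    "N":   [["dog"], ["boy"]],
    "V":   [["bites"]],
    "DET": [["the"]],
}

def expand(symbol, lexical):
    """Rewrite a symbol top-down. Each symbol is rewritten the same way
    wherever it occurs: no account is taken of its derivational history.
    'lexical' supplies the word chosen at each choice point (here, N)."""
    options = RULES.get(symbol)
    if options is None:              # a terminal word
        return [symbol]
    if len(options) > 1:             # a lexical choice point
        chosen = [next(lexical)]
    else:
        chosen = options[0]
    words = []
    for child in chosen:
        words += expand(child, lexical)
    return words

print(" ".join(expand("S", iter(["dog", "boy"]))))
# -> "the dog bites the boy"
```

The recursion trace of expand is exactly the phrase marker of Fig. 34: each call corresponds to a node of the tree.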


DET1 + N1  + V     + DET2 + N2
The    dog   bites   the    boy

DET2 + N2  + BE + V+EN   + BY + DET1 + N1
The    boy   is   bitten   by   the    dog

FIG. 35. An example of the transformation rule for converting active to passive sentences. 
Same sentence as in Fig. 33. 
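The transformation of Fig. 35 can likewise be sketched as an operation on a whole labelled string; again this is purely illustrative, and the mini-lexicon for past participles is our own assumption.

```python
# Each surface element of the active sentence tagged with its category.
active = [("DET1", "the"), ("N1", "dog"), ("V", "bites"),
          ("DET2", "the"), ("N2", "boy")]

PAST_PARTICIPLE = {"bites": "bitten"}    # assumed mini-lexicon

def passivize(string):
    """Apply DET1+N1+V+DET2+N2 -> DET2+N2+BE+V+EN+BY+DET1+N1. The rule
    re-orders whole labelled constituents of the string, using the
    categories from its derivation rather than the words alone."""
    item = dict(string)
    return [("DET2", item["DET2"]), ("N2", item["N2"]),
            ("BE", "is"), ("V+EN", PAST_PARTICIPLE[item["V"]]),
            ("BY", "by"), ("DET1", item["DET1"]), ("N1", item["N1"])]

print(" ".join(word for _, word in passivize(active)))
# -> "the boy is bitten by the dog"
```

Note that, unlike a phrase-structure rule, this rule needs the categories of the entire string at once; this is the sense in which transformations operate on whole phrase markers.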

In Chomsky's theory the meaning of a sentence is assigned to it by a separate component, the semantic component. This acts passively on the terminal string of the phrase marker, fitting meanings to each of the elements. Meanings are fully determined by the nature of the input from the syntactic component to the semantic component. Katz and his colleagues (Katz and Fodor 1963, Katz and Postal 1964, Katz 1972) have constructed a theory for this kind of semantic component, envisaging it as composed of two parts: a dictionary of meanings,* and a set of projective rules allowing for, and providing meanings of, combinations of items.

Chomsky's system succeeds in doing what it set out to do. It accounts for many of the interesting features of languages which fall outside the province of simple behaviourist models; it permits the generation of an infinite number of sentences from a finite number of rules; it explains why distant elements of a sentence can have strong relationships; it answers to our intuitive feeling of a similarity between syntactically different sentences by identifying a common deep structure. However, as Chomsky himself noted (1965, p. 162), it fails in one important respect; it does not capture the still deeper semantic relationships which can exist between syntactically different sentences. Thus, the grammar fails to capture the similarity between sentences A and B, or C and D: 

(A) I liked the play. 
(B) The play pleased me. 
(C) John bought the book from Bill. 
(D) Bill sold the book to John. 

This failure to account for paraphrases would appear to be due to the narrow definition of the semantic component of the system. As we have seen, it is purely a passive feature of the grammar, whose function is to ascribe meaning to the deep structures generated by the base component. Intuitively, this seems to be an unnecessary restriction on the role of the semantic component. The meaning of a sentence is not only the sum total of the meanings of the words but includes the way in which they are put together. In this broader sense of semantic the base structure itself should be included in the semantic, and not in the syntactic, component.
This broader usage of semantic requires some elaboration, for it embodies an important shift in thinking about language comprehension. The behaviourist emphasis upon the elements of speech meant that most research was concerned with individual items: how they were processed, stored, interpreted, and generated.

* The dictionary operates on the basis of componential analysis, specifying the meaning of a lexical element as the set of categories within which it is included (semantic markers) together with those features which separate it from other lexical items in the same categories (distinguishers). The technique of componential analysis has also been applied to verbs, in particular by Bendix (1966, 1971). He examined a number of verbs and showed that they could all be paraphrased by combinations of a few basic verbs such as have, cause, change, etc.
 
Thus, standard experiments involved the learning of lists of words or paired associates. When organizational factors were allowed (cf. Mandler 1967), they were generally restricted to the meaning relationships between isolated words. This accounts for the notion of categories and the host of experiments on the role of categorial relations in the learning of word lists and paired associates. Chomsky's critique partly embodied the notion that lexical items were not the central elements in language comprehension. However, in moving to higher-order units Chomsky did not expand the semantic component to include the meaning of these larger units.
Most of the recent work concerned with semantic deep structures, then, concentrates upon the mechanisms for comprehending and storing these higher-order verbal units, beginning with the recognition that what is remembered of sentences, paragraphs, or even stories is the sense of the discourse as a whole. Before turning to a brief discussion of some of this work it is worth digressing momentarily to discuss Tulving's (1972) notion of semantic memory. In view of the shift we have just described, it is unfortunate that Tulving chose to apply the term semantic in its older usage to a system representing the meaning of individual words, independent of context. The confusion arising from this usage has led some (e.g. Schank 1975) to reject the notion of semantic memory entirely in favour of a system including only the lexicon and episodic memory. According to Schank, the meaning of individual words is stored in the lexicon, while any relations between individual items must be stored in terms of some event in which they took part. We cannot agree with Schank on this point, though we find his model for semantic deep structure (see p. 398) one of the most attractive in the field. While we do not accept Tulving's separation in toto, we think there is strong evidence for a separation between some form of context-free memory, using (in the old sense) semantic categories, and a context-dependent memory, using something like a spatio-temporal framework. As we shall see in the next chapter, the data from amnesic patients support this distinction.

Semantic deep structure. The work of Bransford, Franks, and their colleagues (e.g. Bransford and Franks 1971, Bransford, Barclay, and Franks 1972, Bransford and Johnson 1973, Bransford and McCarrell 1974, Franks 1974) provides important clues to the nature of memory for higher-order verbal units. Their early work demonstrated that, given a set of related sentences, subjects formed something like a prototype sentence which, though never actually seen, was more readily recognized as familiar than sentences which had been seen. Later work extended this observation by showing that the remembered representation for a sentence depended upon the context within which it was seen, as well as upon various inferences and assumptions the subjects could make about the material, presumably based on some prior knowledge of the contexts within which the events described could obtain. In fact, given an inappropriate context, a sentence which would have been understood in isolation was often judged incomprehensible. Similarly, they argued, some sentences which would be meaningless in isolation can be given some sense by the context within which they occur.
This work on sentence comprehension requires a model which provides for some deep structure that codes the relationship between the various elements in the sentence or between several sentences. Studies of semantic deep structure have concentrated upon such models, in the hope of specifying the form within which these relationships could be coded such that the meaning of a sentence as a whole could be stored, paraphrases of that sentence recognized, sentences could influence one another's representations, and prior information could be brought to bear on comprehension of inputs (and, hence, the meaning attached to these). Early work on the basis for a semantic deep structure (e.g. Bendix 1966, 1971, Fillmore 1968, 1971, McCawley 1968, 1971, Lakoff 1971) spoke primarily to the first two of these requirements, concentrating upon sentence comprehension in isolation. Though superseded by later models, we shall describe Fillmore's system, as it presents some of the basic features of those which followed it. According to Fillmore, a case system, in which items were unordered though identified as to function, would provide a more appropriate base than the ordered set of grammatical categories proposed by Chomsky. In Fillmore's system the sentence is represented by its modality, which specifies such conditions as tense, negation, and mood of the sentence as a whole, and proposition, which identifies the verb and its permissible cases. These latter are given as an unordered set, with each case defining the relationship between the item in that case and the verb. Fillmore specified eight deep-structure cases:

(a) Agent―the instigator of the event
(b) Counter-agent―the resistance against which the action occurs
(c) Object―the entity acted upon or under consideration
(d) Result―the entity that ensues from the action
(e) Instrument―the immediate cause of the action
(f) Source―the place from which some entity moves
(g) Goal―the place to which some entity moves
(h) Experiencer―the entity receiving, accepting, undergoing, or experiencing the effect of an action

In the sentence John opened the door with the key, John is the agent, door the object, and key the instrument. The deep structure of each simple sentence would consist in a verb plus its obligatory and optional cases. Open, for example, always requires an object, but takes an agent and an instrument as options. The transformational rules in Fillmore's system, as in Chomsky's, are concerned with generating surface sentences from deep structures. However, since the cases in Fillmore's semantic deep structure are unordered, there is no need for rules which transpose elements. Instead, the rules establish a hierarchy amongst the cases associated with a verb, specifying which grammatical role each case will play in the surface sentence. For open, the instrument is the subject if it occurs alone, but the object of a prepositional phrase (with the key) if there is an agent. Fillmore's grammar does require deletion rules, because cases are represented in the deep structure by prepositional phrases which, in most circumstances, do not appear in the surface structure. Thus, the agent in our example would be represented in deep structure as by John. The preposition would survive in the surface sentence only in the passive case; in the active form the by would be deleted by a transformation rule.
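A rough sketch of such a case frame, with the subject-selection hierarchy just described, might look as follows. The structure and names are our own illustrative guesses, not Fillmore's notation.

```python
# Hypothetical case frame for 'open': the object is obligatory,
# agent and instrument are optional.
OPEN_FRAME = {"verb": "opened",
              "obligatory": ("object",),
              "optional": ("agent", "instrument")}

def choose_subject(cases):
    """Surface-subject hierarchy for 'open': an agent outranks an
    instrument, which outranks the object (the instrument is the
    subject only when no agent is present)."""
    for role in ("agent", "instrument", "object"):
        if role in cases:
            return role

def realize(frame, cases):
    """Read an unordered case set out as an ordered surface string."""
    subject = choose_subject(cases)
    words = [cases[subject], frame["verb"]]
    if subject != "object" and "object" in cases:
        words.append(cases["object"])
    if subject != "instrument" and "instrument" in cases:
        words.append("with " + cases["instrument"])  # preposition survives
    return " ".join(words)

print(realize(OPEN_FRAME, {"agent": "John", "object": "the door",
                           "instrument": "the key"}))
# -> "John opened the door with the key"
print(realize(OPEN_FRAME, {"instrument": "the key", "object": "the door"}))
# -> "the key opened the door"
```

The point of the sketch is that the deep structure itself is an unordered set; all ordering is imposed by the hierarchy when the surface sentence is generated.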

This type of semantic deep structure, important for its emphasis upon functions and actions, can account for many of the facts of sentence comprehension. However, it remains silent on the more complex problems delimited by Bransford, Franks, and others, and those represented by the retention of the sense of paragraphs or entire stories. Three recent models which are specifically pitched at this level seem particularly interesting: those of Schank (1972, 1975), Norman and Rumelhart (1975), and Jackendoff (1976). Common to these approaches is the assumption that the deep-structure representation for language is some form of propositional or conceptual network which codes meaning through the interaction of elements. Thus, for Schank (1975) the basis of human memory is the conceptualization, which is 'action-based with certain specified associative links between actions and objects' (p. 259). Similarly, for Norman and Rumelhart (1975) the basis is the active structural network, which is a semantic network representing the underlying propositions in any stored event. Both systems rely on a set of primitives which define the forms of interaction between the elements in the memory structures; here they follow in the path of Bendix's componential analysis of verbs. Further, both argue that sentence after sentence can be 'added' to the memory structure, in some cases being influenced by what is already there, in other cases influencing it. Thus, they provide models for the comprehension of sets of sentences. More recent work by Rumelhart (1975) attempts to provide the basis for a representational network which would describe the structure of an entire story without building sentence upon sentence.
While we cannot explore these models in detail, it is worth emphasizing the fact that they insist upon a network-like propositional representation where the elements within the network are related to one another through the action of a primitive set of operators. The meaning of such a network, or conceptualization, is the totality of the relationships embodied within it. We find it particularly heartening that Norman and Rumelhart emphasize the essential non-linguistic character of their networks; they apply their analysis to imagery phenomena as well as to linguistic deep structure. Here, they also stress the view that imagery depends, not on a pictorialization within memory, but rather upon some propositional deep structure which captures the relationships embodied in the image and from which the image can be reconstructed. We will conclude this section on deep-structure models with a discussion of Jackendoff's system (Jackendoff 1976) which is, for us, the most interesting and exciting of the recent proposals. Jackendoff, expanding an original suggestion by Gruber (1965), has proposed that all sentences have deep semantic structures which are formally analogous to the subset of sentences describing events or states of affairs in physical space. First he shows how an analysis similar to the one by Fillmore described above will provide a deep structure for sentences about the location and movement of entities in physical space and, second, he shows how modifications and extensions of this purely spatial system can account for the meanings of non-spatial sentences.

   In his analysis of spatial sentences, he starts with examples like:

(1) The train travelled from Detroit to Cincinnati
(2) The hawk flew from its nest to the ground
(3) The rock fell from the roof to the ground

and shows how their meanings can be captured by a deep structure which specifies the thematic relations between the verb and the nouns or noun phrases. Thus (1) would be represented by the deep-structure function, GO; the theme of the function, train; the source, or place from which the movement starts, Detroit; and the goal, or place where the movement ends, Cincinnati. Notice the similarity to Fillmore's case system described above. Spatial sentences (2) and (3) above would have similar deep structures, with suitable additional information such as the manner of the motion. Other spatial sentences such as

(4) Max is in Africa
(5) The cat lay on the couch
(6) The bacteria stayed in his body
(7) Bill kept the book on the shelf

describe not the motion of the object or theme but its location, and are represented by the deep-structure function BE (4 and 5) or STAY (6 and 7). Thus all states of affairs and events in physical space can be represented in Jackendoff's system by three functions, GO, BE, and STAY, together with the things and places which these functions relate. Agency and causation are added to the deep structure by the higher-order functions CAUSE and LET, which apply not to entities but to events. Thus, if sentence (3) above is represented by GO (THE ROCK, THE ROOF, THE GROUND), then

(8) Linda threw the rock from the roof to the ground
(9) Linda dropped the rock from the roof to the ground
would be represented as CAUSE (LINDA, GO (THE ROCK, THE ROOF, THE GROUND)) and LET (LINDA, GO (THE ROCK, THE ROOF, THE GROUND)) respectively.
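Because these functions take ordered arguments and nest inside one another, they translate directly into data structures. A minimal sketch follows; the class names echo Jackendoff's function names, but the representation itself is our own illustration.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class GO:        # movement of a theme from a source to a goal
    theme: str
    source: str
    goal: str

@dataclass
class BE:        # unbounded location of a theme in a place
    theme: str
    place: str

@dataclass
class STAY:      # location of a theme in a place over a bounded period
    theme: str
    place: str

@dataclass
class CAUSE:     # an agent brings an event about
    agent: str
    event: "Event"

@dataclass
class LET:       # an agent permits an event
    agent: str
    event: "Event"

Event = Union[GO, BE, STAY, CAUSE, LET]

fell = GO("the rock", "the roof", "the ground")   # sentence (3)
threw = CAUSE("Linda", fell)                      # sentence (8)
dropped = LET("Linda", fell)                      # sentence (9)
```

CAUSE and LET take a whole event as an argument; this is what makes them higher-order in Jackendoff's sense.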

At this point Jackendoff takes a crucial step. He claims that non-spatial sentences have exactly the same representation except that the functions GO, BE, and STAY do not refer to the spatial location of entities but to the possessive, identificational, or circumstantial 'location' of entities. Let us look at possessive GO. While spatial or positional GO signifies the movement of an entity from one physical location to another, possessive GO signifies the movement of an entity from one possessive location to another. The sentence

(10) Harry gave the book to the library
is represented as possessive GO (THE BOOK, HARRY, THE LIBRARY). Similarly

(11) The book belonged to the library

(12) The library kept the book

are represented by possessive BE (THE BOOK, THE LIBRARY) and possessive STAY (THE BOOK, THE LIBRARY). The analysis of sentences about continuing states of identity or changes of identity, or continuing or changing circumstances, is given a similar treatment. Jackendoff shows, for example, that the semantic analysis of the sentence

(13) Linda kept Laura from screaming

is exactly parallel to the sentence about physical prevention

(14) Linda kept Laura (away) from the cookie jar

except that the avoided location is a circumstance in example (13). Thus, for example, the same rules of inference allow us to conclude that Laura did not scream (13) nor did she get to the cookie jar (14).

Jackendoff has not extended his analysis into the domain of completely abstract concepts nor to verbs referring to internal states or beliefs, but he sees no insurmountable obstacle to such a programme, nor do we. It is also reasonable to assume that this type of analysis can be extended to deal with units of speech longer than a sentence, thus incorporating the recent work on discourse and narrative comprehension. In summary, Jackendoff says:

'I consider it a striking property of the present system that simple principles, framed in terms of physical space, can be stated formally in such a way as to generalize to domains that bear no a priori relation to physical space' (Jackendoff 1976, p. 121).

On linguistic grounds, then, it appears necessary to postulate the existence of several different mechanisms underlying the production and understanding of language. In addition to those mechanisms which select the appropriate words and sentence frames to produce the temporal left-to-right ordered structure of the surface aspects of language, there must be a deeper, more abstract level which carries the sense of a sentence or a set of sentences.

The common element that all deep structures share is their non-temporal aspect; put another way, they can all be represented by purely static spatial structures. The sense of an item is derived from its relation to other items within the structure, the overall sense of the sentence follows from the total configuration, while the interaction between such configurations allows for higher-order messages such as stories. In terms of our model all of these deep-structure elements are identified with maps in the locale system (or their activation). The surface structure of the grammar, transformational processes, the syntactic structures, and the lexicon are those parts of the taxon system which provide the means by which maps in one person's locale system are transferred to another's.** These taxon systems are analogues of the route systems of lower animals. They are based on the operations of categorization and the formation of linkages between frequently associated items to yield route statements. These routes, which take discourse from one substantive to another, would appear to be the basis for the tone groups described by Laver (1970, see footnote below). The rules which govern the generation of a set of routes from the underlying semantic map are analogous to the transformational component of a Chomsky-type grammar. As we shall see, these rules can be much simpler than those specified by Chomsky or Fillmore, since the form of the surface sentence can be read directly from the directions traversed in the map.

Types of semantic maps. In developing the notion of a semantic map we shall build on Jackendoff's insights and follow his general methods. First we shall describe how semantic maps can be used to provide semantic deep structures for sentences about entities and events in physical space.

* Studies of language generation, which occurs in small segments called tone groups according to Boomer and Laver (1968) and Laver (1970), indicate that there are three separable processes involved (Goldman-Eisler 1968): (1) an idea or determining tendency; (2) the transformation of this idea into a sequential chain of symbols; (3) the selection of appropriate lexical items. One function for the verbal short-term memory system noted before (p. 388) could be the retention of a group of surface elements during the elaboration of the entire tone group. In contrast to the appearance of lexical elements in a short-term holding system, both at input and output, long-term memory for language is clearly concerned with the meaning of an utterance (e.g. Johnson-Laird 1970, Bransford and McCarrell 1974).

** From an evolutionary point of view, language could have developed as a means of transferring information about the spatial aspects of the environment: how to get somewhere, how to find food, etc.

Using one of these spatial semantic maps as an illustration, we shall outline some of the syntactic transformation rules for transcribing all or part of this map into a surface 'route' sentence or phrase. Following Jackendoff's method, the next step will involve a discussion of how non-spatial maps similar to these spatial maps can be formed and how the same transformation rules operate to generate sentences about non-spatial entities and events. Instead of mapping physical space, these non-spatial maps depict surfaces on which the locations represent possession (or, as we shall call it, influence), identity, and circumstances. In this section we will also introduce the notion that maps or parts of maps can be 'named' and these names can be entered into locations on other maps.

A semantic map for spatial sentences. Sentences about the location of objects or the occurrence of events in physical space have an obvious and natural representation within a spatial map structure. Let us use Jackendoff's sentence (3) as an example:

(3) The rock fell from the roof to the ground

The map of this event has three phases (see Fig. 36). The first (a) depicts the unstated presupposition that an entity (E) the rock was in a place (P) the roof up to some unknown past time t1. 
[FIG. 36. Semantic map of 'The rock fell from the roof to the ground', built up in phases (a)-(d); diagram not reproduced.]
The second (b) depicts the action at time t1 in which an entity the rock moves from the place the roof to the place the ground. In the third phase (c), the rock is in the location the ground from time t1 onwards. Thus there are two places, an entity which either stays in these places or moves from one to the other, and time markers which specify the time of the movement and the beginning and end of the period spent in a location. These time markers may refer to times attributed to the external world or they may be entirely internal to the map. The entire map is shown in Fig. 36(d).

Notice how the mapping system incorporates the three fundamental functions of Jackendoff's system, GO, BE, and STAY. BE is represented by the location of an entity in a place without a time marker. If there were no time marker on the first phase (a) of our three-phase representation, this would depict the BE function: the rock is on the roof and, as far as we know, always was and always will be. The STAY function is represented by the third phase (c), where the time marker t1 limits the duration of the state in the past direction but there is no time marker to limit it in the future direction. The second phase of our semantic map (b) represents Jackendoff's GO function, the movement of an entity from one location to another at some specific time.
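The three-phase map lends itself to a simple encoding. The following sketch is our own illustration of the structure just described; all field names are invented.

```python
# Illustrative encoding of the three-phase map of Fig. 36. A missing
# time marker (None) leaves a state open-ended in that direction.
semantic_map = {
    "entity": "the rock",
    "places": ("the roof", "the ground"),
    "phases": [
        {"place": "the roof",  "start": None, "end": "t1"},   # (a)
        {"move": ("the roof", "the ground"), "time": "t1"},   # (b)
        {"place": "the ground", "start": "t1", "end": None},  # (c)
    ],
}

def jackendoff_function(phase):
    """Read GO/BE/STAY off a phase from its time markers: a movement
    is GO, a location with no markers at all is BE, and a location
    bounded by a marker on one side only is STAY-like."""
    if "move" in phase:
        return "GO"
    if phase["start"] is None and phase["end"] is None:
        return "BE"
    return "STAY"

print([jackendoff_function(p) for p in semantic_map["phases"]])
# -> ['STAY', 'GO', 'STAY']; remove phase (a)'s end marker to get 'BE'
```

The functions are thus not stored anywhere in the map; they fall out of the arrangement of entities, places, and time markers.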
A variety of sentences can be generated from our simple spatial semantic map by a set of transformation rules. We will assume that there are no obligatory points of entry into a semantic map, nor are there obligatory directions of movement within the map. Thus, although the map may have been originally constructed from a simple active sentence, it can be entered at any entity, event, place, or movement and read in part or in whole in any direction. The order of reading and the relationships between the successive items read determine the syntactic role of each item in the surface sentence. Maps containing nothing but spatial entities and events can only generate sentences in the active voice. We shall discuss the passive voice shortly. If the map of our example (Fig. 36) is entered at the entity rock and this is read first, it becomes the syntactic subject of an active sentence. If the movement is read next it becomes the verb, and the place of origin and the place of termination of the movement are made the objects of the prepositions from and to respectively. With this order of selection, we have regenerated the active sentence from which the map was created:

(15) The rock fell from the roof to the ground

If, after reading the entity the rock, we had read first the place of origin the roof and then the movement and place of termination, our sentence would read:

(16) The rock was on the roof and (then) it fell to the ground (from there)

Similarly:

(17) The rock was on the ground where it had fallen from the roof

The natural expression of the relationship between an entity and its location is

(Entity) is in/on (location)

We might have entered the map at one of the places and read the entity next:

(18) The roof had a rock (on it) and (then) the rock fell to the ground

(19) The ground has a rock (on it) which fell from the roof

The natural expression of the relationship between a location and its entity is

(Location) have (entity)

Similarly:

(20) The roof had something fall from it on to the ground and that was the rock

(21) From the roof, the rock fell to the ground

Finally, the map can be entered at the movement itself:

(22) The falling of the rock from the roof to the ground

in which case the movement is nominalized as a gerund or a noun (fall) and the entity becomes the object of the preposition of as a subjective genitive. It is clear, therefore, that any of the parts of a map can be the subject of the sentence derived from that map and that the parts of the map can be read in any order or direction. Finally, the syntactic function of a representation in the map is determined partly by its role in the map and partly by the order in which the parts of the map are read.

The simple physical semantic map can embody all three of the deep-structure functions (GO, STAY, BE) which were discussed in the previous section. Note that no rules are necessary to derive inferences from the deep structure, since these are built into the map when it is constructed in the first place. Our map of sentence (15) contains the unstated information that the rock was on the roof for some undetermined time, that it fell through the places between the roof and the ground before reaching the ground, and that it remained on the ground for some undetermined time after the event.
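The principle that the order of reading fixes each item's syntactic role can be sketched as a reader that walks the map in a chosen order. The templates below are our own toy renderings of sentences (15), (16), and (18); nothing here is meant as a serious grammar.

```python
# Reading the map of Fig. 36 in different orders; the order in which
# the parts are read determines each part's syntactic role.
MAP = {"entity": "the rock", "movement": "fell",
       "origin": "the roof", "goal": "the ground"}

TEMPLATES = {
    ("entity", "movement", "origin", "goal"):                        # (15)
        "{entity} {movement} from {origin} to {goal}",
    ("entity", "origin", "movement", "goal"):                        # (16)
        "{entity} was on {origin} and then it {movement} to {goal}",
    ("origin", "entity", "movement", "goal"):                        # (18)
        "{origin} had {entity} on it and then it {movement} to {goal}",
}

def read(order):
    """Enter the map and read its parts in the given order."""
    return TEMPLATES[order].format(**MAP).capitalize()

print(read(("entity", "movement", "origin", "goal")))
# -> "The rock fell from the roof to the ground"
```

One map, many readings: the same four map entries yield the different surface sentences simply by varying the order of traversal.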

Semantic maps for non-spatial sentences. Sentences about entities and events in physical space constitute only a small proportion of the language. Non-spatial sentences represent notions such as possession, causation, responsibility, identity, and category inclusion, to name a few. Jackendoff introduced the higher-order functions CAUSE and LET into his linguistic system to deal with causation and permission in spatial sentences. More importantly, he suggested that non-spatial sentences conveying information about possession, identity, and circumstances had the same formal structure as spatial sentences. We will draw upon this insight and introduce the notion of non-spatial maps. Following Jackendoff, we will consider three types of non-spatial map: maps of influence or possession surfaces, maps of identity surfaces, and maps of circumstance surfaces. We do not have room here to go into great detail about each of these non-spatial maps, so we will concentrate on the influence or possession type and only briefly comment on the other two.
One important concept not captured by a purely spatial mapping system is that which is common to the notions of causation, control, power, instrumentality, and possession. These notions represent a relationship between entities and/or events in which one is under the influence of the other. Some of these relationships are represented in semantic systems such as Fillmore's by the deep semantic cases agent and instrument (see above). We will postulate that all of these relations are represented on one surface which we shall call an influence surface. Influence relations on this surface are represented by entities in particular locations, and changes in influence are portrayed as movements between places. Expanding on our previous notation, places in our influence map will be labelled Pinfl while places in our physical spatial map will be labelled Pphys.
Entries in different maps (entities, places, and movements) which have the same name are considered to be connected, so that the activation of one entry also activates all the other entries. For example, if Harry were the name of a place in influence space and an entity in physical space, activation of one would activate the other.
Before we discuss an example of a map portraying relations and events in an influence space, it will be useful to introduce the concept of map nesting or embedding. Maps or parts of maps can be labelled with names, and these names can then be represented as entities or locations in other maps. The names of maps can appear not only in maps of the same type but in those of other types as well. Thus the name of an influence map could appear as an entity in a physical spatial map. This notion of map embedding will become clear in the next example, which illustrates both the movement of an entity in an influence space and the embedding of this influence event in a second influence map. Our example is sentence (10) taken from Jackendoff (see above).

(23) Harry gave the book to the library

This means that the book moved into the possession of the library and that this event was caused by Harry. In the system we are proposing, both the transfer of possession and the causation of that event would be represented in an influence map. The transfer of possession is represented as a movement from some unknown location into the location (the library). This event is given the name transfer of possession and entered into the location (Harry) in the influence map. The interpretation given to the relationship in an influence map between a location and its content depends on the nature of the content. When the content is a primitive entity drawn from a taxon store, such as the rock or the book, then the relationship is interpreted as a possession where the location possesses the content. When the content is the name of another map (i.e. an event), then the relationship is one of causation: the event is caused by the location. Entities possess other entities; entities or events cause events.*
We will leave open for the moment what the interpretation of an entity in an influence location of an event might be. Fig. 37 shows the influence map of sentence (23). Notice that the sentence does not specify whether the book belonged to Harry before the event described or whether the book actually physically moved to the library. These are left ambiguous.

Consider the related sentences:

(24) Harry gave his own book to the library

which disambiguates the book's former possessor.

(25) Harry gave the book to the library after it had been displayed on loan there for several years.

and

(26) Harry gave the book to the library but won't be sending it to them until next year.

The map representation of the event in sentence (23) is given in Fig. 37(a)-(d). Notice the similarity to Fig. 36 except that the relationships and the event take place in an influence space. In order to represent the rest of the sentence, namely, that Harry caused the event portrayed in Fig. 37(d), this influence map is given a name and entered into the influence location called Harry (Fig. 37(e) and (f)). Fig. 37(g) shows the map of the whole sentence. As we stated earlier, entities are possessed by influence locations (e.g. Fig. 37(c)) but events are caused by influence locations (e.g. Fig. 37(f)).
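The embedding of a named event in an influence location can also be sketched in code. The structure and names below are our own illustration of Fig. 37, not a formalism from the text.

```python
# Illustrative influence map for sentence (23) / Fig. 37. The inner
# event map is named, then entered into the influence location 'Harry',
# where an event is read as 'caused by' that location.
transfer = {"name": "transfer of possession",
            "theme": "the book",
            "source": "unknown possessor",
            "goal": "the library",
            "time": "t1"}

influence_map = {
    "the library": ["the book"],   # entity in a location: possession
    "Harry":       [transfer],     # named event in a location: causation
}

def interpret(location, content):
    """Entities are possessed by influence locations; events (maps
    entered by name) are caused by them."""
    if isinstance(content, dict):            # an embedded event map
        return f"{location} caused '{content['name']}'"
    return f"{location} possesses {content}"

for location, contents in influence_map.items():
    for content in contents:
        print(interpret(location, content))
# -> the library possesses the book
# -> Harry caused 'transfer of possession'
```

The same location-plus-content device thus carries both possession and causation, with the type of the content deciding which reading applies.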
More than one entity can be represented in an influence location. For example, if our influence map had represented sentence (24) instead of sentence (23), then the location Harry would have been substituted for the unknown location of the book in Fig. 37(a), and this location would have contained both the book, until time t1, and the transfer, at time t1.

* Strictly speaking, only events cause other events. When an event is entered into the influence space of an entity this is interpreted as an instrumental relationship. We will not go into this complication but assume for the present that agentive entities can cause events. It does not change the basic arguments set out here.


 
FIG. 37. Schematic for an influence map of the sentence 'Harry gave the book to the library'.

Let us look at some of the transformation rules for reading route sentences from influence maps. Here we will continually refer to the rules developed for physical spatial maps to show the similarities. As with physical spatial maps, influence maps can be entered at any point and read in any direction. Let us start with the event in influence space and compare it with the event in physical space mapped in Fig. 36.

Consider the following pairs of sentences:

(15) The rock fell from the roof to the ground
(27) The book went from (the possession of) someone to (the possession of) the library
(16) The rock was on the roof and (then) it fell to the ground
(28) The book was in the possession of someone (or was someone's) and (then) it went to the library
(17) The rock was on the ground where it had fallen from the roof
(29) The book was in the possession of the library whence it had come from some unknown person

The natural syntactic expression of the relationship between an entity and its location in influence space is the same as in physical space. Conversely, the natural expression of the relationship between an influence location and its content is

(Location) have (entity)

(18) The roof had a rock (on it) and (then) the rock fell to the ground
(30) Someone had a book and (then) the book went to the library
(31) Harry had the book given to the library
(32) Harry had someone give his book to the library

As examples (31) and (32) show, the have transformation holds irrespective of whether the content of the influence location is an entity possessed or an event caused.
Two important syntactic features which must be introduced into the transformation rules for influence maps do not exist for physical spatial maps. These are the active/passive voice distinction and the genitive of possession. Both of these are necessary to transcribe readings in which the contents of an influence location are read before the name of the location itself. Thus, in our example, reading the locations first in Fig. 37(c) and (g) gives, respectively

(33) The library's book
(34) Harry had the book given to the library

while reading the contents of those locations first gives

(35) The book of the library
(36) The book was given to the library by Harry

If the influence location is read after its contents, it becomes the object of the preposition by in a passive sentence or the object of the preposition of. Notice that instead of (34) one could have read (37) or (38):

(37) Harry's giving (of) the book to the library
(38) Harry's gift of the book to the library

In addition to a mode expressing possessional relations and changes in possessional relations, Jackendoff postulated identificational and circumstantial modes whose syntax was analogous to the physical spatial mode in much the same way that that of the possessional mode was. We have not attempted to work this out in detail or to construct maps of these spaces* but see no insurmountable problem in doing so.

Note, for example, the similarities between the identificational sentence (39), the circumstantial sentence (40), and the physical sentence (15) above.

(39) The rock went from smooth to pitted
(40) The librarian went from laughing to crying

To conclude this section, let us briefly mention what seems to us the main weakness of the semantic map idea. Maps of physical space which use a Euclidean metric allow inferences to be drawn about the relationships amongst entities in those maps which go beyond the usual laws of logic. This was, of course, what Kant meant when he called space a synthetic a priori. Thus we can say that the rock in its fall from the roof to the ground must pass through the intervening places. Our map did not specify whether these places were occupied or not. If it had, then we could conclude that the rock must bump into, or pass through, whatever occupied those places. Some of the work of Bransford and Franks and their colleagues seems to be based on inferences of this type. Now it is not immediately obvious that the same kinds of inferences can be drawn from our non-physical semantic maps. Part of our difficulty here is that we do not know what the axes of the non-physical semantic maps are, or even whether the maps can be considered Euclidean. These objections notwithstanding, we think that the work of Jackendoff opens up exciting possibilities for investigating the use of maps of physical and non-physical spaces as the basis for deep semantic structures.

14.3.3. CONCLUSION
The above sections outline a possible way in which the cognitive-mapping system could function as a deep structure for language. The representations in this deep structure have all the properties we have attributed to maps.

* Whether these maps are irreducible primitives or will on further examination be found to be reducible to various combinations of physical spatial and influence maps remains to be seen.

Items are entered into a semantic map not on the basis of the order in which they are received or their position in a left-to-right linear string but in accordance with their semantic relationships to other items in the map. Maps do not incorporate information about the way that they were constructed or the 'route' surface sentence from which they were constructed. Items in the map and their relationship can be read in any order and a large number of different sentences can be generated from the same map. Once a semantic map exists, additional information can be added or changes in the existing information can be made. The resulting large semantic structures embody the ideas expressed in paragraphs or whole stories. We locate these semantic maps in the left human hippocampus. Taken together with the representations of physical occurrences in the right hippocampus they form the basis for what is generally referred to as long-term, context-specific memory for episodes and narratives. In the next chapter, this assertion will be 'tested' against the known facts of the amnesic syndrome.
