INVITED SESSIONS
Tuesday (18th of May)
From semantic primitives to conceptual structure: Experimental investigations into the role of meaning in grammar
Jennifer Culbertson (University of Edinburgh)
In this talk I will report results from a series of artificial language learning experiments which highlight the critical role that meaning plays in explaining why languages look the way they do. The first study targets the semantic space of grammatical person, and the second two target word and morpheme order. Person systems -- typically exemplified by pronominal paradigms (e.g. ‘I’, ‘you’, ‘she’) -- are found in every language, but some are much more common than others. I discuss a series of experiments aimed at investigating which person partitions are more natural from the perspective of learning. I then turn to word order and discuss a series of studies using artificial language learning and a related paradigm in which participants use gesture to spontaneously create a new linguistic system. The studies all suggest that a cognitive bias favoring certain types of word orders is at work in the nominal domain. Most theories of nominal word order argue that constraints on syntactic structure and/or movement underlie this bias. However, I present new evidence which suggests the bias may ultimately derive from meaning, or more specifically, conceptual knowledge. Finally, I relate these findings to morpheme ordering generalisations, again showing the role that meaning plays in determining which orders learners prefer, and which are more common across languages. These three series of studies illustrate at a more general level how experimental methods can complement traditional theoretical investigations into linguistic typology.
Connectedness: a cognitive primitive as revealed by language, and found elsewhere (namely, with baboons)
Emmanuel Chemla (CNRS, LSCP, Ecole Normale Supérieure)
Imagine a word, say 'blicket', that would mean "apple or banana": apples are blickets, and bananas are blickets. Intuitively, 'blicket' is a strange word: it refers to a concept that is unnatural. Why? It has been claimed that words must correspond to "connected" concepts: if apples are blickets and bananas are blickets, then anything in between an apple and a banana should also be a blicket; so if 'blicket' were to be a more traditional word, it might have to include all fruits, not only apples and bananas. By and large, simple "content words", concrete nouns and adjectives, have connected meanings (cf. extensive philosophical work by Gärdenfors, and much work in other domains such as computational psychology, language acquisition, or computer science).
Starting from there, we will formalize a notion of connectedness that applies to any type of word, not only content words. We will find that logical words (in particular quantifiers, such as 'all', 'some', 'none' in English) also appear to be connected across languages. We will provide evidence that non-human animals (specifically, baboons, Papio papio) tend to form categories that are connected in the same sense, and argue that this tendency may reveal which classes of objects (content-word-like) or classes of patterns (function-word-like) are natural.
Wednesday (19th of May)
A Meaning First Approach to Generative Grammar
Uli Sauerland (Leibniz-Zentrum Allgemeine Sprachwissenschaft)
In a recent paper, I presented a Meaning-First approach (MFA) to grammar (Sauerland & Alexiadou 2020). In this talk, I introduce the core assumptions of the approach, namely 1) that complex thought-structure generation is independent of language, and 2) that human language can communicate thoughts via compression into a transmissible form. I then survey the empirical support for the proposal, focusing on phenomena in child language where children create sentences with extra words in them.
How words structure our concepts
Gary Lupyan (University of Wisconsin-Madison)
Does language reflect the categories of our mind or does it help create them? On one widespread view, learning a language involves mapping words onto pre-existing categories, leaving little room for language to affect the conceptual landscape. Alternatively, many of our concepts — including some that seem very basic — may derive from our experience with and use of language. I will argue in favor of this second view and present evidence for the causal role of language in categorization and reasoning.
Thursday (20th of May)
Logical Connectives in Prelinguistic Thought and Early Language Acquisition: Case Studies of not, or, and possible
Susan Carey (Harvard University)
Ever since Descartes, philosophers have speculated that natural language is necessary for deductive reasoning based on abstract, logically structured thought. This hypothesis cannot be confirmed or falsified from the armchair, on a priori considerations alone (see Donald Davidson, who argues in favor of Descartes’ conjecture, and Jerry Fodor, who argues against it). A recent explosion of scientific work seeks to evaluate the hypothesis that such reasoning arises in phylogenesis only with the evolution of natural language and in ontogenesis only with the acquisition of language. I will present the current state of the art, as I see it, on two related case studies in this recent scientific work: seeking evidence, in infancy and in non-linguistic thought, for reasoning according to the disjunctive syllogism, which requires logically structured thoughts (A or B; not B), and, within the same paradigms, for representation of the modal concept possible, as in possibly A and possibly B.
I will argue that in spite of well confirmed phenomena in the animal and infant literature that are certainly consistent with Fodor’s position, well confirmed failures also favor Davidson’s. I will suggest avenues of future research that might resolve the contradictions in the current data.
Foundations of meaning in infancy: the case of abstract relations
Jean-Rémy Hochmann (CNRS, Institut des Sciences Cognitives Marc Jeannerod, Université Lyon 1)
Abstract relations are considered the pinnacle of human cognition, allowing analogical and logical reasoning, and possibly setting humans apart from other animal species. Such relations cannot be represented in a perceptual code but can easily be represented in a propositional language of thought, where relations between objects are represented by abstract discrete symbols.
Focusing on the abstract relations same and different, I will show that (1) there is a discontinuity along ontogeny with respect to the representations of abstract relations, but (2) young infants already possess representations of same and different. Finally, (3) I will investigate the format of representation of abstract relations in young infants, arguing that those representations are not discrete, but rather built by assembling abstract representations of entities.
Friday (21st of May)
Culture shapes the expression of meaning in language
Asifa Majid (University of York)
Cross-linguistic studies show substantial differences in how languages package meaning into words and grammar. English makes a distinction between ‘hand’ and ‘arm’, for example, but it is estimated that a third of the world’s languages collapse this distinction, and refer to both with a single term. English has a single verb ‘to cut’ that can be used regardless of whether the action involves a knife or scissors; in Dutch, however, you must specify whether you ‘snijden’ (cut with a knife) or ‘knippen’ (cut with scissors). Most recently, a large-scale study of 20 diverse cultures has shown that even simple sensory experiences of colours, smells, and tactile textures are expressed differently across languages. This linguistic variation raises the question of whether the underlying cognition of people is also variable across cultures or whether diverse languages interface with a universal bedrock of cognition instead. Recent data suggests the answer may vary across domains, such that some aspects of cognition are more malleable to language effects than others.
Insights into how language transforms the mind and brain from studies with blind individuals
Marina Bedny (Johns Hopkins University)
Empiricist philosophers emphasized the role of sensory experience in knowledge acquisition. Locke reasoned that a person born blind could never grasp visual meanings (e.g., yellow, sparkle). Empirical studies with people who are blind contradict this assertion. Language enables sharing of ‘visual’ meaning across sighted and blind people and transforms the brain. First, I will briefly present evidence that blindness enables expansion of higher-cognitive functions, including language, into ‘visual’ cortices, revealing the remarkable capacity of human cortex to change its representational content. In contrast to the large differences in ‘visual’ cortex representations, people who are blind and those who are sighted share conceptual content of visual phenomena. Blind and sighted people have common cognitive and neural representations of ‘visual’ words (e.g., sparkle). Blind and sighted people also share structured causal knowledge of visual perception, light and color that goes well beyond the meanings of individual words. Indeed, language is more effective at transmitting such structured causal models of visual phenomena than at transmitting verbalizable factoids (e.g., bananas are yellow). Studies with people who are blind illustrate how language provides fodder for inference, enabling sharing of meaning across individuals.
The Extension Dogma
Paul Pietroski (Rutgers University)
In studies of meaning, linguists and philosophers have often followed Donald Davidson and David Lewis in assuming that whatever meanings are--if there are any--they determine extensions, at least relative to contexts. After reviewing some reasons for rejecting this assumption, which is especially unfriendly to mentalistic conceptions of meaning, I'll suggest that this assumption became prevalent for bad reasons. As time permits, I'll conclude by reviewing some work which suggests that even if we focus on quantificational determiners, mentalistic conceptions of meaning are motivated and The Extension Dogma should be abandoned.
Composition, comparison, and cognition
Alexis Wellwood (University of Southern California)
The meaning of a sentence depends on the meanings of its parts. Morphological and syntactic theories tell us what the parts are, and semantic theories aim to tell us why those parts and their observed combination mean just as they do. In this talk, I relate contemporary event semantic analyses to event representation in the psychologist's sense. In my case study, I focus on sentences with more X, emphasizing two critical factors for their interpretation: (i) the conceptual class that X points to, and (ii) the semantic commitments imposed by X's syntactic environment. Research in linguistics and cognitive psychology has plainly borne out the relevant facts for nouns: given nonce gleeb, Ann has more gleeb is about volume or weight if gleeb points to a kind of substance, while Ann has more gleebs is about number whatever gleeb means. My exploration concerns occurrences of X as an adjective (Ann was gleeb more) or verb (Ann gleebed more). Here, the relevant conceptual distinction for (i) will be that between events and processes, and the relevant syntactic details for (ii) are considerably more subtle. In the talk, I present my syntactic-semantic theory and the results of novel experiments bearing out its predictions. If successful, such a study provides evidence for a closely-knit relationship between compositional semantic description and structures in non-linguistic cognition, ultimately grounding an understanding of language as a window into the rest of the mind.
SUBMITTED SESSIONS
Tuesday (18th of May): REFERENTS
Remarking on the atypical: Implications for language learning and modeling
Claire Bergey, Benjamin C. Morris (University of Chicago) & Daniel Yurovsky (Carnegie Mellon University)
Does language reflect regularities in world knowledge? Statistical models trained on language alone approximate human semantic judgments (Mikolov et al., 2013) and co-occurrence patterns in language predict children’s memory of word pairs (Unger, Savic & Sloutsky, 2020), suggesting that language statistics capture and perhaps shape knowledge about the world. However, pragmatic principles predict that language should systematically deviate from reflecting world knowledge, instead selectively noting what is remarkable. For instance, speakers more often mention atypical features of things (e.g., “the purple carrot”) than their typical features (e.g., “the [orange] carrot”) in lab tasks (Rubio-Fernández, 2016). This implies that co-occurrence patterns between nouns and adjectives deviate from knowledge about features of objects, and that using associative mechanisms to learn about object features from language may lead the learner astray. In this study, we first ask whether parents selectively mention atypical features in their speech to children, and then examine whether language embedding models capture feature typicality. We examined parents’ speech to children (ages 14–58 months) in a large longitudinal corpus (Goldin-Meadow et al., 2014), extracting co-occurring concrete adjective–noun combinations (e.g., wooden — shoe). 444 MTurkers rated the typicality of these adjective–noun pairs (e.g., “How common is it for [a shoe] to be a [wooden shoe]?”). We found that parents’ description reliably highlights atypical features of concrete concepts over typical ones. To examine whether embedding models capture feature typicality, we asked whether three models (word2vec trained on parent speech, word2vec pre-trained on Wikipedia, and pre-trained BERT) captured the typicality of adjective–noun pairs.
These models’ judgments show low correlations with human judgments (highest: Wikipedia word2vec, r = 0.22), and do not reliably represent nouns as more similar to their typical descriptors than their atypical ones. Overall, regularities in parents’ description seem to depart from world knowledge, with implications for language learning and modeling.
Is Semantic Processing Grounded in Mentalization?
Bálint Forgács (ELTE), Judit Gervain (Università Padua), Eugenio Parise (Lancaster University), György Gergely (Central European University), Zsuzsanna Üllei Kovács, Lívia Elek & Ildikó Király (ELTE)
A number of studies have recently reported an N400 event-related potential (ERP) effect not only when adults or infants experience semantic violations, but also when they follow the language comprehension of communicative partners. The N400 has been argued to indicate first-person language processes. In contrast to this account, the social N400 can be observed when not the participant but only a communicative partner experiences a semantic incongruity, i.e., in third-person language tracking. In a series of EEG experiments we explore the mentalistic nature of the social N400: is this effect social in general, or specifically mentalistic, i.e., based on the attribution of beliefs (meanings as intended) to others? Unlike prior studies with adults, in which a misunderstanding was created by providing less information to an observer, we induced false beliefs to create one. Adult participants were presented with toys that were mislabelled from the perspective of an observer, because of an unseen object change, but were correctly labelled from their own perspective. In contrast to previous studies, we recorded a social N400 only when we instructed participants to track the language comprehension of the observer. Intriguingly, participants showed no frontal effect, reported earlier to accompany mentalization, as if the N400 coded belief attribution as well. In a second paradigm we found that the N400 was modulated by the sheer presence of another person. We suggest that our findings are consistent with the hypothesis that semantic comprehension, and specifically communicative, on-line meaning construction, is profoundly mentalistic. Prior studies could have overlooked the mentalistic nature of language processing due to the isolated nature of their experimental paradigms. Even though it has been shown previously that the N400 is sensitive to social factors, we believe that such observations are a consequence of the integration of language comprehension into mental state attribution.
The Interpretation of External Symbols at the Interface Between Vision and Communication
Barbu Revencu (Central European University)
Mainstream thought on external object representations, both theoretical and empirical, has assumed that reference to an actual object—something we could in principle bump into on the street—is a default component of representations. Against this view, we argue that stand-for relations between external object symbols and discourse referents are the core of external representations, which allows for a straightforward account of non-referring representations. We introduce a simple computational structure consisting of two sets of mental representations (one for objects, one for discourse referents) and two functions operating over the two sets: (i) tokening, which uses conceptual knowledge to generate novel tokens; and (ii) assignment, which establishes local links between objects and discourse referents. We illustrate the ubiquity of this structure across many communicative devices in which the visual system of the interlocutor is part of the interpretive process (puppet shows, animations, drawings, graphs, memes), and draw on early object substitution pretense to argue that the ability develops early and reliably in human ontogeny.
From object files to discourse files: neural support for a common referential index system in scene and sentence comprehension
Ellen Lau (University of Maryland)
Working from cognitive neuroscience evidence, I’ll argue that a ‘missing link’ in our understanding of the language-thought interface in online sentence comprehension is the crucial role of an inferior parietal system for referential indexes—what’s known as the ‘object file’ system in cognitive psychology. In psycholinguistics, our explanations for sentence processing difficulty historically have been biased towards linguistic computations of syntax and logical form, and we’ve emphasized the role of verbal working memory as a constraint on performance. But when we neuroimage sentence comprehension, in addition to the temporal lobe areas that support linguistic analysis and conceptual knowledge, studies observe additional activity in the inferior parietal region of angular gyrus—an apparent puzzle, given the association of inferior parietal cortex with *visual* working memory. This puzzle is resolved when we remember that both visual scene perception and language comprehension depend on the same non-linguistic, core cognition capacity for referential indexing. Visual neuroscience studies show that it is indeed inferior parietal cortex that supports these indexes (object files), which act as pointers to visual and conceptual properties in inferior temporal cortex. I show that, extended to language, this referential indexing account straightforwardly explains the pattern of inferior parietal activity in sentence comprehension. I also observe that neural activity in sentences that was classically attributed to the cost of maintaining syntactic dependencies actually bears a striking resemblance to neural activity associated with the cost of maintaining object files in vision.
I’ll conclude that, rather than seeing reference as a ‘nuisance’ that prevents us from learning about language-specific computations of interest, psycholinguists should embrace the referential index system as central to one computational goal of language comprehension—appropriate belief update—and should welcome the chance to build on what is known about this system from the visual working memory and object file traditions.
Wednesday (19th of May): PROPERTIES
Why grass is green and not yellow: Intuitions about object colors in signed and congenitally blind adults
Judy Kim (Yale University) & Marina Bedny (Johns Hopkins University)
Informative descriptions of the world we perceive depend on a common ground understanding of what the descriptions convey. When asked ‘What color is grass?’ nearly 100% of English speakers respond ‘green,’ even though grass can look yellow in bright sunlight, grey in moonlight, and brown when withered. One possibility is that we all label grass ‘green’ merely because we have frequently seen grass appearing green. Here, we tested an alternative hypothesis: people share an understanding of abstract causal principles which they use to assign color labels, and these principles develop independently of direct sensory access. Specifically, we hypothesized that sighted and blind adults would use “typical viewing conditions” (daylight rather than night, outside rather than inside) and objects’ causal histories when describing object color. We tested the color labeling intuitions of sighted (n=15) and congenitally blind (n=20) individuals for novel objects. Novel objects were introduced in an “explorer on an island” scenario. Objects were described as having two colors: one color on the inside vs. outside or one during daylight vs. nighttime. On some day vs. night trials, objects had nighttime-intended functions (causal history manipulation). Participants were asked to pick one color to describe the object to a friend in a letter (or texture, in a control condition). Sighted and blind individuals alike chose observer-centric outside and daytime colors by default, but switched to nighttime colors when objects had nighttime functions. These results suggest that when assigning color labels to objects, people take into account normative viewing conditions (daylight/outside) and the causal history of the objects (how is it intended to be seen?). These intuitions develop independently of visual experience.
People use intuitive theories of perception to produce mutually comprehensible linguistic descriptions of sensory phenomena, and these intuitive theories are shared via linguistic communication.
Meanings of body part terms: Cross-linguistic colexifications between body parts and objects
Annika Tjuka (Max Planck Institute for the Science of Human History)
In semantic typology, the human body has been a popular domain of study for cross-linguistic comparisons for many decades. Most studies focused on how languages segment the body into linguistic units (e.g., Enfield et al. 2006). They showed that three types of salience are important in segmenting the body into parts: spatial alignment, perceptual salience, and functional salience (Andersen 1978; Morrison and Tversky 2005; Majid and van Staden 2015). As yet, the extension of meaning from body part terms to objects has not been systematically studied across multiple languages, although isolated examples of cross-linguistic polysemy (e.g., eye/seed) exist, as do languages that consistently extend body part terms to objects (Brown and Witkowski 1983; Levinson 1994). In my talk, I discuss a study of colexification patterns based on a database with colexifications of 2,906 concepts across 2,940 languages (CLICS, Rzymski et al. 2020). The goal of the study is to determine which body parts are frequently colexified with objects and whether there are universal patterns. The results show 411 colexifications between body parts and objects. The three most common body parts that are colexified with objects are HEAD, SKIN, and EYE. In the case of HEAD, either perceptual salience (round shape) or spatial alignment leads to colexifications with objects such as GOURD or ROOF. The most common colexification between a body part and an object, found in 213 languages, is SKIN-BARK, but several colexifications occur only within one language family, e.g., HEART-FLOWER. The study sheds light on the general principle of using salience as the basis for meaning extensions and the different forms of salience that establish particular colexifications. The present results may provide predictions to test hypotheses, for example, related to embodiment. In addition, the observed language variation is a reminder of cognitive diversity.
Privativity as a window to lexical-conceptual structure
Joshua Martin (Harvard University)
The relationship between domain-general conceptual representations and domain-specific lexical representations is a core interface question in the cognitive science of language. Phenomena like polysemy and coercion have motivated fine-grained, enriched lexical entries, e.g., Generative Lexicon and Modern Type Theories. However, such theories are rightfully criticized for stipulative featural architectures – what evidence exists that any particular qualia or types are reasonable semantic primitives? As the number of phenomena we describe with these theories grows, so does the number of posited basic features, and so as empirical coverage increases, claims to parsimony and cognitive reality decrease. We need motivated metrics for which components of concepts we lexically encode, accessible to specifically linguistic meaning calculation. Here, I propose one such metric: a conceptual feature is represented linguistically if it is involved in cross-linguistically regular meaning shifts sensitive to the syntax of modification. In short, if a conceptual feature is consistently targeted by a class of modifiers cross-linguistically, it is a good candidate for a semantic primitive, and if the meaning shifts it is involved in are sensitive to syntactic structure, it must be visible to the compositional system, narrowly conceived, not just post-compositional pragmatic processes. I illustrate with privative adjectives, a restricted class of modifiers that coerce their arguments to lack a feature. The same features are targeted across languages: we consistently find ‘counterfeit’ and ‘mock’ adjectives, which remove their argument’s origin or telic features, while plenty of logically possible privatives are never found. I synthesize existing data on word order in Romance and novel data concerning adjective movement in Bangla and morphosyntactic alternations in Slavic to show that this privation is syntactically sensitive.
The structure of modification determines coercion: ‘fake’ + ‘N’ is nonsubsective in one configuration, subsective in another. Thus, privatives provide one diagnostic for lexically represented features of nominal concepts.
A bias for cross-category harmony is sensitive to semantic similarity
Fang Wang, Simon Kirby & Jennifer Culbertson (University of Edinburgh)
The tendency for heads and dependents to be correlated across different phrase types is called cross-category harmony. Typological research suggests that this tendency is strong for certain combinations of heads and dependents but weaker for others. For example, head-dependent order in verb phrases (VP) tends to harmonise with adpositional phrase (PP) order but not adjective phrase (AdjP) order (Dryer 1992). This observation is supported by a recent study using artificial language learning: learners trained on an artificial language showed a strong preference for harmonic orders between VP and PP, but no such preference between VP and AdjP (Wang et al. 2021). Here we ask why the strength of harmony bias might differ across phrase types. We hypothesized that ordering of constituents across phrases is not just about aligning heads, but is sensitive to the semantic similarity of the elements to be aligned. We tested this by comparing learners’ preference for harmony between VP and AdjP (where adjectives are assumed to potentially align with verbs, as in previous work) across two distinct classes of adjectives: stative and less verb-like (e.g., ‘red’) vs. active and more verb-like (e.g., ‘broken’). English native speakers were trained and tested on order of verbs and object nouns (VO/OV), then trained on adjectives in isolation. In the critical test, they were then asked to produce phrases requiring an adjective and a noun. We found that participants who were taught verb-object order tended to extrapolate harmonic adjective-noun order regardless of whether adjectives were active or stative (likely reflecting participants’ L1). However, participants who were taught object-verb order tended to extrapolate harmonic noun-adjective order only when adjectives were active. Our results suggest that semantic similarity of elements across phrases may play a role in driving cross-category harmony.
Thursday (20th of May): EVENTS
Children are sensitive to the internal temporal profiles of events
Yue Ji (Beijing Institute of Technology) & Anna Papafragou (University of Pennsylvania)
Language distinguishes bounded events which are developments leading to an inherent endpoint (e.g., eat a sandwich) from unbounded events which have a homogeneous structure without an inherent endpoint (e.g., eat cheese). Four- to five-year-olds have not fully acquired aspectual bounded/unbounded contrasts. Linguistic aspect is frequently assumed to build on pre-linguistic conceptual notions, but little research has explored children’s sensitivity to the bounded/unbounded distinction in cognition. Here we fill this gap. Based on the finding that children encode event endpoints as a critical component in memory and language, we hypothesize that the salience of endpoints depends on the non-homogeneous structure of bounded events; in unbounded events with a homogeneous structure, endpoints should be treated largely similarly to other points. We created videos of bounded events and closely related unbounded events. Each video was edited twice, once to introduce a mid-interruption and once to introduce an end-interruption. In a “picky-puppet” task, 4- to 5-year-olds and adults were assigned to the Bounded or the Unbounded condition, depending on the event category that they were exposed to. During training, participants watched 8 pairs of videos. Each pair showed the same event but differed in the placement of interruption. After each video, participants heard “The girl likes the video”, or “The girl doesn’t like the video”. Within each condition, half of the participants heard that the picky girl liked the video with a mid-interruption but did not like the video with an end-interruption (“Likes mid-interruption” version), and half heard the reverse (“Likes end-interruption”). At test, participants watched new events and decided whether the girl would like them. A significant interaction between Condition and Version was found (z=3.37, p<.001).
Both age groups watching bounded events had more difficulty accepting that the girl liked end- compared to mid-interruptions but no such difference was detected among viewers of unbounded events.
Prelinguistic grounding of event structure. The case of giving and taking
Denis Tatone (Central European University)
Give and take verbs differ with respect to their syntactic requirements: the former requires the patient to be made explicit in the sentence structure; the latter does not. This has been argued to reflect differences in the distribution of semantic roles: in giving, the roles of agent and patient necessarily refer to distinct participants, whereas in taking these can be borne by a single participant. Here I will argue that this asymmetry is rooted in prelinguistic assumptions about the number of obligatory participants that each action concept entails. I will review three lines of evidence corroborating this claim: firstly, preverbal infants interpreted giving actions as patient-directed (‘A gives X to B’), but kinematically identical taking events as object-directed (‘A takes X’); secondly, adults exposed to abstract animations of transfer events produced stronger alpha-band suppression (an EEG signature of action understanding sensitive to the perceived interactivity of observed actions) for giving over nonsocial acts of object disposal, but not for taking over nonsocial acts of object acquisition; thirdly, adults showed evidence of agent-patient binding when presented with giving, but not taking, events in a change-detection task. Taken together, these findings suggest that differences in argumenthood between giving and taking reflect deeper structural asymmetries in their corresponding prelinguistic schemas: if giving is obligatorily represented in a three-place structure, insofar as the agent’s object-directed action can be meaningfully interpreted only in relation to its effects on the patient (making its inclusion mandatory), taking is only facultatively so, insofar as the agent’s action can be apprehended as directed to the goal of object acquisition without having to consider its effects on the object’s original possessor (the patient, whose inclusion becomes thus accessory).
Where word and world meet: Intuitive correspondence between visual and linguistic symmetry
Alon Hafri (Johns Hopkins University), Lilia Gleitman (University of Pennsylvania), Barbara Landau (Johns Hopkins University) & John Trueswell (University of Pennsylvania)
Symmetry is ubiquitous in nature, in logic and mathematics, and in perception, language, and thought. Although humans are exquisitely sensitive to visual symmetry (e.g., of a butterfly’s wings), linguistic symmetry goes far beyond sensory experience, to social situations (e.g., marry, conspire) and even to the abstractions pervasive in scientific reasoning (e.g., humans know that x equals y entails that y equals x). This raises a question: how might a language learner discover which terms map onto such abstract concepts? Here, we asked whether an intuitive correspondence exists between visual and linguistic representations of symmetry, in ways that could prove instrumental for acquiring symmetrical terms. To address this question, we used a cross-modal matching paradigm. On each trial, adult participants observed a visual stimulus (either symmetrical or non-symmetrical) and had to choose between two English predicates that were unrelated to the visual stimulus, one symmetrical and one non-symmetrical (e.g. “negotiate” vs. “propose”). In a first study with visual events (symmetrical collisions and asymmetrical launches), participants reliably chose the predicate consistent with the visual event’s symmetry. A second study showed that this “matching” effect generalized to static objects, and was weakened when the visual stimuli’s binary nature was made less apparent (i.e., one object with a symmetrical contour, rather than two symmetrically configured objects). This suggests that the mapping of symmetry across cognitive systems is most obvious when it is “relational”, i.e., when it holds for binary relations in both systems. Taken together, the visual/linguistic correspondence we have identified suggests a possible avenue for acquisition of word-to-world mappings for the seemingly inaccessible logical symmetry of linguistic terms. 
In particular, we speculate that there might exist perceptual “gems” (e.g., shaking-hands, hugging) from which more concrete symmetrical words are acquired; other processes (e.g., syntactic bootstrapping) may then enable acquisition of more abstract ones.
Linguistic and nonlinguistic event categories have similar prototype structure
Lilia Rissman & Gary Lupyan (University of Wisconsin-Madison)
Are linguistic and nonlinguistic event categories structured in the same way? We focus on event roles: in ‘Jan eats sorbet,’ Jan is an “Agent” and the sorbet is a “Patient” (Rissman & Majid, 2019). In linguistic theory, event roles have been analyzed in terms of prototypes, e.g., being intentional and playing a causative role are properties of proto-Agents whereas being affected is a property of proto-Patients (Dowty, 1991). We asked whether the same role prototypes guide category learning in a non-linguistic task. English speakers (n=202) saw 20 images of one figure acting on another (e.g., one figure kicking another). A salient red dot marked the Agent (or Patient) in each scene. Participants had to learn to group the pictures into Agent and Patient categories using accuracy feedback on each trial. Participants then completed 52 test trials containing all new scenes. Some participants found this task difficult: 38% failed to learn the Agent/Patient distinction during training. Among people who learned the distinction, we analyzed whether Dowty’s proto-Properties predicted participants’ categorization accuracy, confidence, and reaction time at test. We found that participants were more accurate, more confident, and faster when the Agent was more intentional (b = .27, SE = .10, p < .01; b = .07, SE = .03, p < .05; b = -.07, SE = .02, p < .01). Participants were also more accurate when the Agent caused the event (b = .24, SE = .08, p < .01). Finally, the affectedness of the Patient predicted confidence and reaction time (b = -.05, SE = .02, p < .05; b = .05, SE = .02, p < .05). These results demonstrate similarities in event structure across linguistic and nonlinguistic domains, suggesting that event roles are domain general. Nonetheless, for many people, event roles are surprisingly inaccessible to conscious reasoning.
ASYNCHRONOUS PRESENTATIONS
Tuesday (18th of May)
Investigating memory specificity for semantic features of concepts using a vector-based semantic network model
Alex Ilyés, Borbála Paulik, Attila Keresztes
Contemporary theories of human memory have been based on the distinction between two main neurocognitive memory systems, one representing events of our lives, i.e., episodic memory, and the other representing knowledge about the world, i.e., semantic memory. The hippocampus, a bilateral brain region in the medial temporal lobe, has been suggested to specifically support the episodic memory system. This classic distinction and the exclusive role of the hippocampus in episodic memory has recently been challenged by computational models of memory. One such model – also supported by animal and human findings – posits that hippocampal pattern separation, the process of orthogonalizing highly similar input representations, supports specificity of memory traces and plays a crucial role in the encoding and retrieval of episodic memories. However, several findings suggest a broader role of the hippocampus, and potentially hippocampal computations, in manipulating semantic representations. The present study aims to investigate hippocampal contributions to semantic memory by examining its role in establishing the specificity of semantic traces. To this end, we have developed a task that tests mnemonic discrimination, a behavioural proxy for pattern separation, as a function of semantic similarity – assessed by a vector-based semantic network model – between interfering memory traces. Using this task, we will measure behavioural outcomes of hippocampal pattern separation on semantic memory traces. In this poster, we will present pilot results, and a plan to investigate hippocampal contributions to performance on the same task using high-resolution magnetic resonance imaging (MRI).
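A rough sketch of how similarity between interfering traces might be scored under such a vector-based model (the vectors, words and values below are illustrative assumptions standing in for a trained model's output, not the study's actual model):

```python
# Toy sketch: cosine similarity between word vectors as a proxy for the
# semantic similarity of interfering memory traces. All vectors and
# words are illustrative, not the study's actual semantic network model.
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical vectors in place of a trained model's representations.
vectors = {
    "dog":  [0.9, 0.8, 0.1],
    "wolf": [0.8, 0.9, 0.2],
    "sofa": [0.1, 0.2, 0.9],
}

def lure_similarity(target, lure):
    """Similarity between a studied item and a lure; on the study's
    logic, higher values should make mnemonic discrimination harder."""
    return cosine(vectors[target], vectors[lure])
```

On this sketch, "wolf" would be a high-interference lure for "dog" while "sofa" would be a low-interference one, which is the kind of graded manipulation the task described above requires.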
The role of novel labels in resource allocation: A developmental framework to investigate the impact of language on attention from infants to adults
Giulia Calignano, Eloisa Valenza, Francesco Vespignani, Sofia Russo, Simone Sulpizio
Do novel linguistic labels have privileged access to attentional resources compared to non-linguistic labels and to visual information alone? We explored this fundamental question by looking at the time-course of resource allocation towards visual stimuli at two cognitive developmental stages. By means of a training-test and an attentional overlap task, in Experiments 1 and 2 we shed light on how novel labels and object-only stimuli influence pupil size variations and saccade latency, as indices of resource allocation and attention disengagement, in both infants (N = 24) and adults (N = 60). Experiment 3 tests the impact of linguistic information on visual attention in adults (N = 50) by comparing tones and novel labels. Since attentional deployment is affected by both the saliency and the degree of familiarity of the perceptual stimulus, we created three test conditions: (i) consistent with (i.e., identical to) the training, (ii) inconsistent with the training (i.e., with an altered feature), and (iii) deprived of one perceptual feature. The 2x2 within-participants design of each experiment was modeled using Generalized Mixed-effects Models to account for individual variability in looking times, saccade latency and pupil dilation. Our results suggest that: (i) the paradigm is sensitive enough to detect memory-based abilities in both 12-month-olds and adults, as indicated by the analysis of pupil size variations over time, and (ii) infants allocate fewer attentional resources (i.e. shorter looking times and reduced pupil dilation) when familiarizing with a visual object presented with a label, compared to an unlabeled object, whereas the reverse pattern was shown by adults.
We interpret these results by discussing the role of novel linguistic labels in resource allocation at different stages of lexical development, stressing the importance of investigating how and when linguistic labels shape visual object processing and guide attention deployment from early infancy onwards.
Cognitive bases of semantic compositionality: the case of esoteric vs. exoteric languages
Antonio Benítez-Burraco, Candy Cahuana, David Gil, Ljiljana Progovac
Here we consider how language structure can influence cognition in a rather tangible way, focusing on the distinction often invoked in the literature on language evolution between esoteric vs. exoteric language types. Roughly speaking, esoteric languages are characterizable as exhibiting simpler (less layered) syntaxes, and less semantic compositionality, but larger, more complex phonologies and morphologies, with more irregularity, and with more formulaic/memorized language (e.g. Wray and Grace, 2007). In contrast, exoteric languages are characterizable as involving less complex phonologies and morphologies, but more complex and more layered syntaxes, with more specialized (obligatory) grammaticalized distinctions, correlated with a higher degree of semantic compositionality. Our hypothesis is that predominantly esoteric languages rely more on declarative memory, while predominantly exoteric languages, in comparison, rely more on procedural memory, consistent with Ullman’s (2015) claim that greater complexity of grammatical rules or constraints may lead to a greater relative dependence on procedural memory. While both memories are essential for language, partly overlapping/redundant in their functions, when it comes to language, declarative memory is typically implicated in vocabulary learning and irregular phenomena across domains, including memorized, opaque, formulaic chunks of language (e.g. idioms and proverbs), while procedural memory is implicated in compositional, automated, rule-governed aspects of language (Ullman, 2004, 2015; also Heyselaar et al., 2017; Elyoseph et al., 2020).
Ultimately, because cognitive biases can be linked to (epi)genetic modifications, we expect this differential reliance to be detectable in differences in the allele frequencies of specific genes, and we are in the process of testing this hypothesis by seeking correlations between this linguistic dimension, implicating different degrees of semantic compositionality, on the one hand, and the prevalence of certain gene alleles in the relevant populations, on the other.
The Bidirectional Relationship between Theory of Mind and Language in Infancy and Toddlerhood
Szabolcs Kiss
This study is a state-of-the-art literature review on the bidirectional relationship between theory of mind (ToM) and language in infancy and toddlerhood. ToM refers to the attribution of different mental states to the self and other people in order to interpret, explain and predict behaviour. First, we discuss the grasping of different mental states in infancy and toddlerhood, including the understanding of false belief. We identify a few explanations for the gap between grasping false belief in infancy and passing the traditional verbal false-belief task at the age of four or five. Next, we enumerate different versions of modularism. The paper not only presents different perspectives on the mind-reading prerequisites for language acquisition but also examines the linguistic preconditions for ToM, including the various social-constructivist views on the formative role of language in ToM. The interplay between ToM and language is a complex one. In this paper, we see that certain ToM abilities, such as understanding seeing, intention, emotion, attention, pretence and false belief, occur earlier than the emergence of language, and, as a corollary of this, one could argue for a cognition-first position in the debate over the developmental relationship between thought and language. In addition to the aforementioned mental states, there are other mind-reading prerequisites for language, such as understanding gaze information, goals, agency, other minds and referential intention. This is only one side of the coin; the other is the different linguistic abilities necessary for the ontogenetic emergence of ToM. On one end of the spectrum is the view according to which language creates mental states. On the opposite end it is stated that words are just linguistic labels for previously existing concepts. In the middle is the view that language as a mental representational medium can support ToM reasoning.
Unusual Perceptual Experiences and Beliefs Are Associated with Amplified Mnemonic Discrimination and Attenuated Generalization
Ágota Vass, Melinda Becske, Ágnes Szőllősi, Mihály Racsmány, Bertalan Polner
According to the hippocampal dysfunction theory, positive symptoms of schizophrenia such as hallucinations and delusions might be attributable to impaired pattern separation and overactive pattern completion, which are hippocampal computations underpinning episodic memory. Previous behavioural studies that tested the theory bear significant limitations commonly encountered in schizophrenia research, such as small samples and the potential confounding effects of medication. The present study aimed to overcome these limitations by focusing on schizotypal traits in the general population. More specifically, we focused on positive schizotypy, that is, a proneness to unusual experiences that is analogous to hallucinations and delusions. We tested the predictions of the hippocampal dysfunction theory in a sample of healthy individuals (N=71), oversampled for unusual experiences (as measured by the short Oxford-Liverpool Inventory of Feelings and Experiences (O-LIFE) questionnaire). Mnemonic discrimination and generalization, which are putative behavioral indicators of pattern separation and pattern completion, respectively, were assessed with the Mnemonic Similarity Task. Participants were presented with images of everyday objects and were requested to correctly categorize objects as old, new or visually similar in a subsequent recognition memory test. Contradicting the predictions of the hippocampal dysfunction theory, we showed that positive schizotypy was associated with enhanced mnemonic discrimination and attenuated generalization, over and above the effects of perceptual discrimination, other dimensions of schizotypy and associated psychopathology. Our findings do not support the hippocampal dysfunction theory; however, they are in line with previous data showing that the positive symptom dimension is rather associated with impaired generalization and fragmentation of experiences.
It is also possible, however, that different memory alterations are associated with positive schizotypy and positive symptoms. We argue that high powered studies are needed to clarify how symptom dimensions in patients with schizophrenia relate to specific cognitive alterations.
Who has shown you this object? - investigating source memory of preschoolers in an intergroup context
Andrási Krisztina, Oláh Katalin, Király Ildikó
In this study designed for preschoolers, we investigate whether the cultural group membership of a person providing information influences how well children remember the source. The hypothesis is that children’s source memory performance could be superior for out-group sources. Information obtained from an out-group model would not necessarily be encoded as generalizable, but could be useful in certain contexts in the future. In order to correctly identify these contexts, the original source of the information needs to be retained. During the procedure, an experimenter introduces 4 characters via videos appearing on a screen, 2 of whom belong to the same cultural group as the participant, while 2 belong to another group. Following this, participants see all four people appearing in a series of videos in which they demonstrate the functions of different objects (8 in total). In the test phase, children see an image of each previously introduced object one by one, as well as all four characters, and are instructed to indicate by pointing which person had previously shown them the object. Our online data collection is still ongoing. Currently, source memory performance seems to be slightly better for out-group members (IG = 1.25, OG = 1.83, n = 12), but this difference is not significant.
Multilevel fMRI adaptation for spoken word processing in the awake dog brain
Anna Gábor, Márta Gácsi, Dóra Szabó, Ádám Miklósi, Enikő Kubinyi, Attila Andics
Human brains process lexical meaning separately from emotional prosody of speech at higher levels of the processing hierarchy. Recently we demonstrated that dog brains can also dissociate lexical and emotional prosodic information in human spoken words. To better understand the neural dynamics of lexical processing in the dog brain, here we used an event-related design, optimized for fMRI adaptation analyses on multiple time scales. We investigated repetition effects in dogs’ neural (BOLD) responses to lexically marked (praise) words and to lexically unmarked (neutral) words, in praising and neutral prosody. We identified temporally and anatomically distinct adaptation patterns. In a subcortical auditory region, we found both short- and long-term fMRI adaptation for emotional prosody, but not for lexical markedness. In multiple cortical auditory regions, we found long-term fMRI adaptation for lexically marked compared to unmarked words. This lexical adaptation showed right-hemisphere bias and was age-modulated in a near-primary auditory region and was independent of prosody in a secondary auditory region. Word representations in dogs’ auditory cortex thus contain more than just the emotional prosody they are typically associated with. These findings demonstrate multilevel fMRI adaptation effects in the dog brain and are consistent with a hierarchical account of spoken word processing.
Understanding the role of linguistic distributional knowledge in cognition: A systematic comparison of tasks, models and parameters
Cai Wingfield, Louise Connell
Through exposure to natural language, humans learn patterns of linguistic distributional information which aid in cognitive tasks of varying conceptual complexity. Linguistic distributional models (LDMs), a type of computational model, learn representations of words from statistical patterns in large text corpora, and can predict semantic relationships from these representations. Typical LDMs fall into three classes: "count-vector" and "n-gram" models, which count collocations of words, and "predict" models, which use artificial neural networks to learn relationships between words and their contexts. Distributional semantics research, which often optimises for performance on tasks driven by semantic similarity, has tended to recommend predict models trained on the largest corpora. By contrast, cognitive psychology research frequently employs a broader range of cognitive tasks relying on complex conceptual relationships. Here, relatively simple context-counting models (e.g., n-gram and count-vector models) have proven effective predictors, despite being discounted in contemporary distributional semantics research. The present study comprises a systematic evaluation of LDMs from all families on a wide range of common cognitive psychology tasks involving conceptual relationships, ranging from simple and similarity-based (e.g., synonym detection) to more complex, abstracted relationships (e.g., concrete/abstract semantic decision), and from tasks that measure semantic processing explicitly (e.g., similarity judgement) to those that measure it implicitly (e.g., response time). Using Bayesian model comparisons, we make recommendations for the optimal LDM when modelling tasks with particular features. Our results show that when modelling human conceptual processes, different tasks require different LDMs: no one model, or even family of models, does well at all tasks, and models optimised for peak performance in one domain may not excel elsewhere.
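As a minimal illustration of the simplest model family described above, a count-vector LDM can be built from windowed co-occurrence counts; the toy corpus and window size below are assumptions for exposition only, not the study's settings:

```python
# Toy count-vector LDM: represent each word by its co-occurrence counts
# within a symmetric window, then compare words by cosine similarity.
# Corpus and window size are illustrative assumptions.
from collections import Counter, defaultdict
import math

corpus = "the cat sat on the mat the dog sat on the rug".split()
WINDOW = 2  # context words counted on each side

counts = defaultdict(Counter)
for i, w in enumerate(corpus):
    for j in range(max(0, i - WINDOW), min(len(corpus), i + WINDOW + 1)):
        if i != j:
            counts[w][corpus[j]] += 1

vocab = sorted(counts)

def vector(word):
    """The word's count vector over the whole vocabulary."""
    return [counts[word][c] for c in vocab]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u))
                  * math.sqrt(sum(b * b for b in v)))
```

Even on this tiny corpus, distributionally similar words ("cat" and "dog" share the contexts "sat on") end up closer than distributionally dissimilar ones, which is the signal similarity-based tasks exploit; predict models learn comparable structure with neural networks instead of raw counts.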
Contributions of language and sensory experience to thinking about seeing: Evidence from blindness
Elizabeth Musz, Arielle Silverman, Marina Bedny
How do we learn about other people’s perceptual experiences? While we can make inferences based on our own firsthand sensory experiences, linguistic communication can also richly convey what other people can see, hear, and know. However, the relative contributions of these two routes to knowledge are not well understood. Here, we explored how sighted and blind adults reason about the perceptual experiences of others. Sighted (n=18) and congenitally blind (n=18) participants listened to scenarios in which sighted or blind observers either look at or hear another person (the target). Participants rated the likelihood that observers would know various features of the target (e.g., their age, gender, eye/hair color, etc.) based on what the observer saw or heard. We manipulated the distance between the observer and the target (nearby versus far away) and the duration of perceptual experience (extended versus brief). Blind and sighted groups agreed on the features of a target that are easiest to discern (e.g., hair color is easier to see than eye color), although blind participants’ judgments about vision were more variable. In addition, both groups judged that nearby distances and extended durations are more likely to result in knowing. However, the relative weights that participants placed on these factors varied systematically by subject group. Sighted participants’ judgments were more impacted by the distance of the seeing event, while blind participants’ ratings depended more on duration. This differential weighting suggests that blind people can effectively draw upon verbally-acquired knowledge to understand the basic variables that govern visual experiences (e.g., distance and duration), but first-person sensory experience fine-tunes and calibrates the relative importance of these variables.
For reasoning about perception, language and sensory experience may provide distinct and complementary information: linguistic communication conveys the logically relevant variables, while direct experience enables the specific continuous parameter settings.
Does Irony Understanding Decline With Age?
Greta Mazzaggio, Hortense de Bettignies, Diana Mazzarella
The use of non-literal language is deeply embedded in everyday communication, and the ability to comprehend it changes across the lifespan. Research indicates that older adults sometimes struggle to understand pragmatic aspects of language, such as presupposition (Domaneschi & Di Paola 2019), humor (Bischetti et al. 2019) or sarcasm (Phillips et al. 2015). The present study aims to broaden our understanding of these age-related changes by focusing on irony understanding. To understand irony (e.g., ‘The weather is great!’ uttered under a pouring rain), one needs to recognize that the speaker is expressing a dissociative attitude towards a proposition that is blatantly irrelevant or false, and which echoes an attributed thought or statement (e.g., the proposition ‘The weather is great’ attributed to the mistaken weather forecaster). Previous research shows that the ability to process irony is closely related to Theory of Mind (ToM) and working memory (WM). As there is evidence of an age-related decline in both cognitive abilities, this decline may impact irony understanding in late adulthood. In our ongoing study, we test the effect of age on irony processing by comparing self-paced reading times of ironic and literal statements across two age groups (young adults: 19-25 years old; older adults: 65-74 years old). Crucially, we manipulate the degree of explicitness of the statement echoed by the ironic speaker. We predict that the difference between the reading times for ironic and literal statements will be modulated by age. Moreover, we predict that reading times will be faster when the echo is explicit compared to when it is implicated, and that this effect will be stronger for older adults. Finally, we expect that ToM and WM will both be significant predictors, and that WM will play a crucial role when the implicitness of the echo poses higher cognitive demands.
An attachment-related approach to differences in emotion perception and face memory
Karolin Suri, Kornél Németh
Attachment theory (Bowlby, 1969/1982) attempts to explain how our childhood relationships (primarily with parents) can have a lasting effect on our personalities. Attachment style may influence the processing of emotionally significant stimuli, possibly through attention orientation or stimulus coding. As facial expressions are one of the most important sources of emotional information, we often rely on them to understand our partner’s intentions and to adapt our responses accordingly. Therefore, we examined whether there is an association between attachment style (ECR questionnaire), facial emotion perception and face memory. We also studied the association of attachment style with depression (BDI-13) and anxiety (STAI-T/S), and their relation to emotion recognition and memory performance. In addition, eye-tracking was used during the emotion decision task (“happy” vs. “sad” faces) and the subsequent facial memory task; performance and reaction time were also measured. The study included neurotypical individuals (19-36 years, N=50, 24 female). Based on correlation analyses, the degree of trait anxiety is significantly correlated with the degree of attachment anxiety, and situation-dependent anxiety is significantly correlated with depressive characteristics. Furthermore, age is negatively correlated with the length of first fixations to the nasal area during the memory task, for both previously seen and new faces. Based on eye movement data, a general difference can be observed during the memory task in the length of fixations to previously seen and new faces, and in the order of fixations to the areas of interest (right eye, left eye, nose, mouth). These differences may result from alternative fixation patterns in individuals with different attachment orientations, suggesting that different viewing patterns might develop in connection with individual attachment styles.
The results of our research may contribute to a more accurate understanding of human relationships in the light of attachment styles.
Does Increased Coordination in Joint Action Increase Young Children’s Commitment or Decrease Their Need to Social Reference?
Melissa Reddy, Sotaro Kita, John Michael, Barbora Siposova
In adult studies, increased levels of interpersonal action coordination between two actors can signal increased commitment to observers (Michael et al., 2016). But do young children see coordination as a cue to commitment? The current study investigated the impact of coordination – with or without ostensive cues – on children’s commitment to a joint activity. In a between-subjects design with 3 conditions (N=72), we compared 4-year-olds’ responses when their adult play partner used: A) low coordination; B) high coordination; or C) high coordination with ostensive cues. We measured children’s commitment by recording if and when they left the game to play an attractive alternative, and their verbal and nonverbal acknowledgements (e.g., social referencing). Results failed to support the ‘coordination creates commitment’ hypothesis: children were more likely to leave the game in the high coordination conditions than in the low coordination condition (z = 3.834, p<0.001) and were less likely to show acknowledgements (z = -2.159, p<0.05). There were no significant differences between conditions regarding the stage at which children left the game. Rather than inferring that coordination reduced children’s commitment, we conclude that children needed to socially reference the experimenter more in the low coordination condition, to check that it was OK to leave.
Computing Long-Distance Dependencies in Phonology: A Strong Procedural Model
Sayantan Mandal
We propose a strong procedural model for computing long-distance phonological relationships and illustrate its effectiveness with a discussion of vowel harmony. Traditionally, phonologists have tried to explain away long-distance relationships in phonology by assuming that phonological relations hold under segmental adjacency. This has unfortunate side-effects, ranging from iterative rule application to positing special properties inherent to neutral vowels and/or special constraints that apply uniquely to neutral vowels. In contrast, we propose a recipient-initiated SEARCH & COPY algorithm that works by linearly scanning phonological strings (SEARCH-ing) for specific valued features on donor segments, and then COPY-ing said feature onto the recipient. Following Raimy’s (2000) arguments, we assume that phonological strings are ordered sets of timing slots associated with feature bundles (Σ = 〈X, ≤〉, with the expression a ≤ b read as ‘timing slot a precedes b’), and that all ordering of features is induced from this order. Following standard mathematical practice, we define immediate precedence as a special sub-case of precedence (a < b ⇔ a ≤ b & ∀c ∉ {a, b}: c ≤ b ⇒ c ≤ a). Crucially, all locality conditions are derived from the syntax of the SEARCH algorithm. Further, such derived locality conditions are argued to be strictly asymmetric. Arbitrary conditions are allowed to be imposed on both SEARCH and COPY, and we show that this affords us the luxury of providing a unified account of all neutral vowels that (a) eschews any ad hoc assumptions regarding said vowels, and (b) reduces labels like transparent and opaque to the syntax of phonological rules themselves. We provide evidence from Turkish, Kirgiz and Bangla, and argue that our model is further capable of generalizing to all cases of consonant-vowel interactions in assimilatory processes, while taking important steps towards establishing cross-modular structural parallelism.
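A recipient-initiated SEARCH & COPY step can be sketched minimally as follows; the segment inventory, feature values and suffix shape are illustrative assumptions (a Turkish-style backness alternation), not the paper's actual formalism:

```python
# Hedged sketch of recipient-initiated SEARCH & COPY for backness
# harmony. Inventory, features and suffix shape are illustrative only.

VOWELS = {"a": "+back", "o": "+back", "u": "+back",
          "e": "-back", "i": "-back"}  # toy [back] specifications

def search(string, start):
    """SEARCH: scan leftward from the recipient's position for the
    nearest donor segment bearing a valued [back] feature."""
    for k in range(start - 1, -1, -1):
        if string[k] in VOWELS:
            return k
    return None  # no donor found: SEARCH fails

def copy_value(value, front_form, back_form):
    """COPY: realize the recipient according to the donor's value."""
    return back_form if value == "+back" else front_form

def harmonize(stem):
    """Attach an alternating plural suffix (-lar/-ler) whose vowel is
    valued by a single SEARCH & COPY step, not iterative rules."""
    donor = search(stem, len(stem))
    value = VOWELS[stem[donor]]
    return stem + "l" + copy_value(value, "e", "a") + "r"
```

On this sketch, conditions imposed on SEARCH (e.g., instructing it to skip, or halt at, particular segments) would derive transparent and opaque behaviour of neutral vowels from the rule's syntax itself, without attributing special properties to the vowels.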
The meaning of food: the development and the variety of conceptual knowledge in the food domain
Abigail Pickard, Jean-Pierre Thibaut, Jérémie Lafraire
There are many different conceptual structures available in the food domain, such as taxonomic, script, and thematic structures, which serve important roles in guiding our reasoning, decision-making, and behavior. Whilst taxonomic structures are heavily emphasized in guiding food decisions (e.g., ‘five fruits and vegetables a day’), individuals spend much time organizing their food experiences by identifying temporal, functional, or spatial relations, such as expecting cereal at breakfast or soup to be served with a spoon. For children who are only beginning to master such conceptual structures, it is important to determine which structures guide their understanding of and appropriate interaction with food. This research disentangles conceptual structures in the food domain by investigating the specific knowledge available to children, as well as their ability to appropriately apply such concepts. In the first two studies, 3-6-year-old children (48 US children and 129 French children, respectively) participated in a forced-choice triad task depicting four common structures within the food context: functional (e.g. soup-spoon), thematic (e.g. bread-butter), meal scripts (e.g. breakfast-cereal), and event scripts (e.g. Halloween-candy). Results revealed that meal script knowledge shows greater cultural specificity and significantly later development than thematic knowledge in the food domain. Using framed scenarios, study 3 aimed to understand 4- to 7-year-olds’ ability to appropriately switch conceptual structures in contextual settings. The results showed that appropriate application of script structures in the food domain occurs significantly later than that of thematic concepts. This research not only concludes that thematic concepts in the food domain are available earlier than script concepts, but also that the development of thematic reasoning outpaces that of script reasoning in 4- to 7-year-old children.
A formal model to characterize how we perceive linguistic evaluative expressions
Adrià Torrens Urrutia
Formally modelling evaluative expressions and capturing or extracting the sentiment (or appraisal) behind them is one way to establish how we deal with such expressions. I have found that even though machine learning applications can extract sentiment, they rarely capture both semantic intension and sentiment in gradient terms. With these techniques, the interpretability of evaluative expressions is therefore usually reduced to a sentiment score produced by an algorithm. Moreover, such systems always need more training data to improve, and a particular erroneous case cannot be fixed independently of the whole system. Evaluative expressions present many exceptional/borderline cases, both in their semantics and in their sentiment. An alternative approach to characterizing them is therefore to annotate their prototypical and borderline properties manually in a lexicon. To do so, we first used a corpus of evaluative expressions already classified by sentiment polarity with machine learning and deep learning techniques, and then re-characterized the corpus manually within a fuzzy-logic formal-grammar framework to better capture these expressions’ gradient semantic intension, orientation, and sentiment. This talk thus presents a new approach that combines various interdisciplinary methods to introduce a formal model characterizing the fuzziness and vagueness in the cognitive perception of evaluative expressions as linguistic constructions. The model combines a formal characterization of gradient phenomena in language through a Fuzzy Property Grammar with Fuzzy Natural Logic. Through our work, we acknowledge that these linguistic expressions have the following main cognitive traits:
• They are gradient.
• They can be associated with a semantic prime.
• They have a sentiment value.
• Their structure depends on a natural language grammar.
In some languages, such as Spanish, these expressions have prototypical structures and borderline structures, displaying different degrees of grammaticality. In some cases, less grammatical (non-prototypical) structures trigger equivalent meanings without compromising the final processing of their meaning.
The ontogeny of meaning: infants recruit kind-based concepts to interpret nonverbal communication
Barbara Pomiechowska, Barbu Revencu, Iulia Savos, Gergely Csibra
The idea that preverbal conceptual representations may be a source of word meaning (Macnamara, 1982) has recently started to elicit experimental interest and received support in word-learning experiments. Infants have been shown to interpret new words as expressing preverbal concepts, mapping them not onto the labeled entities or their surface features but directly onto the concepts these entities represent (Yin & Csibra, 2015; Pomiechowska & Gliga, 2019). A recent study suggested that infants recruit kind-based concepts to derive meaning in nonverbal communication as well, while failing to do so in its absence (Pomiechowska et al., in press). Conceptual representations may therefore be used as a source of meaning for any signals deemed communicative, whether verbal or not. Here, we discuss this proposal in the context of a successful online replication of the work by Pomiechowska and colleagues. We tested 12- to 15-month-olds (n = 32) in a Zoom-based looking-while-listening study. Infants saw pairs of objects (one familiar and one unfamiliar, e.g., shoe - hourglass) and heard labelling phrases containing either a familiar-kind label (e.g., “Where’s the shoe?”) or an unfamiliar pseudo-word (e.g., “Where is the moxi?”). Prior to labelling, on half of the trials the familiar object was highlighted via nonverbal communication in the form of pointing. Infants recognised familiar words regardless of the highlighting manipulation, but only succeeded in disambiguating novel words following nonverbal communication. This pattern of results directly replicates the previous lab-based study and suggests that nonverbal communication prompted infants to set up kind-based conceptual representations of familiar objects, providing access to the associated lexical information necessary for excluding them as referents of new words. Critically, infants showed no evidence of using kind-based object representations in the absence of nonverbal communication.
Effects of L2 proficiency on L1 lexical property evaluations of 600 words
Elif Altin, Nurdem Okur, Esra Yalcin, Asude Eracikbas, Asli Erciyes
The present study investigates the effects of L2-English proficiency on L1-Turkish lexical property evaluations. According to dual-coding theory, any input is encoded through two main systems: verbal and visual. In bilingual representation, the visual system is shared, while there are two separate but interacting verbal systems for L1 and L2. This suggests that for any concept encoded in the visual system there must be at least two different verbal representations, which may lead to an enhancement of the image for bilinguals. Based on this, we asked whether L2 proficiency has an effect on concreteness and imageability ratings of 600 words. Data collection is ongoing. We first hypothesized that imageability and concreteness ratings would be positively associated with L2 proficiency. Second, we expected that the correlation between imageability ratings and L2 proficiency would be more pronounced than that between concreteness and L2 proficiency. Last, for low-frequency words, the correlation between imageability and L2 proficiency would be higher than for high-frequency words. Preliminary data from 38 participants (L1-Turkish, L2-English) were collected via online testing platforms. Participants were given two tasks: first they rated the concreteness and imageability of 600 words selected from the Word Frequency Dictionary of Written Turkish, and then they completed the PPVT-IV. Results indicate a positive relationship between L2 proficiency and both imageability, r(37) = .414, and concreteness, r(37) = .327, ratings, p’s < .05. Additionally, as expected, the correlation between L2 proficiency and imageability ratings is higher for low-frequency words, r(37) = .442, than for high-frequency words, r(37) = .384, p’s < .05. The findings have implications for the literature investigating the relationship between L2 proficiency and linguistic outcomes.
Additionally, the findings point to the importance of considering participants’ L2 proficiency when lexical tasks involving cue words or word lists are used.
Towards a Two-Factor Approach to the Cross-Race Effect
Greyson Abid
The cross-race effect is standardly characterized as the finding that individuals are generally better at recognizing previously observed faces of members of their own race than faces of members of other races. While the cross-race effect is a well-replicated finding, there is little agreement about the mechanisms underlying it. After outlining existing theories of the cross-race effect, I argue that they all face a similar problem: they at most explain our difficulty in recognizing other-race faces relative to own-race faces. However, a complete explanation of the cross-race effect must account for our difficulty in recognizing other-race faces along with our limited metacognitive awareness of this difficulty. A two-factor approach is needed to explain the cross-race effect. I sketch one specific version of such a two-factor approach, and conclude by discussing how a two-factor theory sheds light on discussions concerning the epistemological significance of the cross-race effect.
The Relationship among Language, Memory, and Emotion in L1 and L2
Levent Emir Özder, Tilbe Göksun
Emotions considerably influence various cognitive processes, two of which are language and memory. To understand the relationship among emotion, language, and memory, we examined whether (1) people’s recall performance differed between L1-Turkish and L2-English when they were presented with short emotional narratives, and (2) the emotional properties of the recalled narratives differed by language (L1 vs. L2) and emotional valence. In an online study, 73 participants read eight short emotional narratives (half positive, half negative) and typed everything they could remember from those narratives in each language, in a counterbalanced order. We coded structure-preserving recall (remembering a sentence’s structural position), meaning-preserving recall (remembering a sentence verbatim), and the use of emotional phrases, manifestations, and inferences for each narrative. Each of these measures was coded proportionally: for each narrative, we divided the number of words/scores that participants used/received during recall by the total word count/score of the narrative. We hypothesized that people’s recall performance and their use of negative emotion words during recall would be higher in L1. We also predicted that participants’ structure-preserving recall scores would be higher than their meaning-preserving recall scores. Results showed no difference between L1 and L2 in narrative recall. However, as hypothesized, participants had higher structure-preserving scores than meaning-preserving scores in both languages. Furthermore, the emotional properties of recall differed significantly between languages and types of emotion. Participants engaged in more positive emotional word use and emotional inference in both languages, whereas negative manifestation was higher in L1. These findings suggest that although recall does not differ between languages, emotions play a vital role in this process.
Comparative Illusion processing: Evidence against syntax-only, deep processing
Maria Goldshtein, Kiel Christianson
Comparative Illusions (CIs) (e.g. More people have been to Russia than I have) are judged as grammatical despite being uninterpretable. Previous work relying on offline data proposes that the mechanisms involved in processing CI items in a meaningful way are syntactic in nature and that individual differences in processing and interpretation are negligible. This study asks: Does processing CIs rely on comparisons to similar structures (i.e. is it syntactic in nature)? Are other factors involved? Do interpretations vary across individuals? Do individuals extract meaning from CIs, and if so, do all individuals rely on the same processes/heuristics to do so? On MTurk, 182 participants completed an experiment with a 3x2x2 design combining three types of stimuli (CI sentence, matched interpretable pair sentence, filler sentence), direct vs. indirect speech (quotation marks vs. reported speech), and open-ended vs. yes/no responses, with response times recorded for both reading and responding. Quotation marks have been shown to activate representations of direct speech in readers’ minds, and we are less likely to notice ill-formed sentences in natural discourse, so the quotation-mark condition allows us to test whether variables other than syntax are involved in the processing of CI sentences. Results suggest that multiple cues influence CI processing. Responses to direct-speech CI items were faster than to pair items, while being slower in the indirect-speech condition. Open-ended responses showed evidence of different depths of processing (ranging from none to some) and of different interpretations. The differences between identical CIs in direct vs. indirect speech contradict claims that the interpretation of CIs is driven purely by syntax. Overall, CIs do not appear to be treated uniformly across individuals or within items. Qualitative data detailing interpretations and the reasoning behind them support the quantitative results.
How child interpretations can inform us about semantic theories of gradable adjectives
Merle Weicker, Petra Schulz
Drawing comparisons among objects is part of human cognition and is often expressed by gradable adjectives (GAs) like ‘tall’ or ‘clean’. The current study demonstrates how children’s interpretation of these adjectives can contribute to evaluating two competing semantic theories. Semantic approaches agree that context-sensitivity is part of the meaning of relative GAs (‘tall’): the standard of comparison varies with the context (tall for a Chihuahua ≠ tall for a Rottweiler). They disagree, however, regarding the nature of absolute GAs (‘clean’, ‘dirty’): although their standard is fixed to the maximal/minimal degree (‘clean’/‘dirty’), it can be susceptible to context (clean for a shirt ≠ clean for a tuxedo). The context-sensitivity of absolute GAs has been proposed (A) to be part of their meaning, or (B) to result from pragmatic reasoning about imprecision. In situations where the discourse context permits a precise standard, according to Approach (B) child learners should not deviate from the fixed standard for absolute GAs, while according to Approach (A) they may allow variable standards. An object-choice task, favoring precise standards, tested children’s (n=43, age: 3-5) interpretation of absolute (‘clean’/‘dirty’), compared to relative (‘big’/‘small’), GAs. Children saw arrays of objects showing a property to different extents and were asked to select objects that matched the verbal description (e.g., ‘Give me the clean teddies’). The context (linguistic and/or visual) changed across conditions to investigate whether participants adjusted their choices (i.e., their standard). Our results revealed that children did not adjust their standard to changes in either context for absolute GAs, but they did so for relative GAs. We argue that this finding supports Approach (B), which claims that, unlike for relative GAs, context-sensitivity is not part of the meaning of absolute GAs.
This difference between relative and absolute GAs is reflected in children’s interpretations early in acquisition, by age 3, suggesting that it results from existing pre-verbal concepts.
Organisational Teleosemantics
Milan Ney
In my talk, I present a novel form of teleosemantics. Teleosemantics is the view that intentional representations, including linguistic and mental representations, have some state of affairs as their meaning or content partly in virtue of possessing biological or proper functions that are realised only if the content obtains. Classical versions of teleosemantics assume that the proper functions of a trait are those functions it has been selected for, e.g. through natural selection. As a rival to that account, an account of proper function has recently emerged that is based instead on the synchronic contributions a trait makes to an organism’s self-maintenance: the _Organisational Account_. I propose that a viable form of teleosemantics may be based on it: an _Organisational Teleosemantics_. According to Organisational Teleosemantics, roughly, an intentional representation has some state of affairs as its content because it is the product of a mechanism that contributes to the organism’s self-maintenance by producing representations of that type only when states of affairs of that type occur. Organisational Teleosemantics solves problems related to swampman-type cases, neuroplasticity and the extended mind that threaten classical forms of teleosemantics. It further reveals a surprising connection between teleosemantics and inferential-role semantics; in fact, Organisational Teleosemantics can be understood as a form of inferential-role semantics. Unlike other naturalistic approaches to inferential-role semantics, Organisational Teleosemantics offers a plausible account of the seeming normativity of meaning and of twin-earth cases.
How prosody helps auditory stream segregation and selective attention in a multi-talker situation
Petra Kovács, Brigitta Tóth, Orsolya Szalárdy, István Winkler
To process speech in a multi-talker environment, listeners need to segregate the mixture of incoming speech streams and focus their attention on one of them. Potentially, speech prosody could aid either of these processes, but the contribution of prosody to the processing of one speech stream out of several is still largely unknown. To address these issues, in the present study we extracted functional networks connecting brain regions from brain electric signals while participants listened to two concurrent speech streams. The prosody manipulation was applied to the attended speech stream for one group of participants and to the ignored speech stream for the other group. Prosody was synthetically flattened, naturally flattened, or intact. Our results showed that naturally flattened speech is difficult to focus on and highly susceptible to distraction from an unattended intact speech stream, while prosody manipulation of the ignored stream has no effect on target detection, although it disturbs memory for the content of the attended speech. The difference in brain electric activity between attending to naturally flattened speech and to intact speech was reflected in a frontoparietal attention network in the delta (0.5-4 Hz) and theta (4-8 Hz) bands. Further, suppressing naturally flattened vs. intact speech differed in a temporoparietal network operating in the high alpha band (10-13 Hz). Combining the behavioral and EEG data, it appears that speech prosody facilitates both attentional selection and stream segregation. These conclusions will be discussed in detail in the presentation.
What is special about words in infant conceptualization? Action and language cues may serve similar mechanisms in early object categorization
Ricarda Bothe, Nivedita Mani
Language is suggested to play an important role in how infants direct their attention to object commonalities, supporting processes that enable infants to internalize external categories. However, non-verbal cues such as actions and gestures are equally salient for infants. Do action and word cues facilitate object categorization to a similar extent across development? And are the mechanisms underlying such learning independent of the nature of the input? To answer these questions, we designed and pre-registered a study (https://osf.io/jc7kv/) to examine how words and arbitrary actions may shape categorization processes differently across the first two years of life, when infants do not have direct access to the meaning of such associations. Based on looking times, we investigate infants’ object categorization success at 12 and 24 months (n = 120) across three conditions (no cue, word cue, action cue) in a novelty-preference task. Power analyses were based on an earlier study investigating infant categorization success and revealed 90% power with a sample size of 20 participants per condition in each age group. In line with open science practices, data analysis will proceed when data collection is completed. During familiarization, we present infants with videos of single-category objects that vary in color and other perceptual features, accompanied either by a word, by an action being performed on the object, or by no additional cue. At test, infants see a novel object of the just-learned category and a novel object from an unknown category side by side on the screen. Increased observation of the latter at test is typically interpreted as evidence for category formation and generalization of the objects from the just-learned category.
Systematic differences in the extent to which input influences early category formation will allow us to draw conclusions about the mechanisms underlying object extension processes at different points in early development, even when the meaning behind such associations is not intuitive.
The polygrammaticalisation network of Early Middle Chinese zì: Linguistic evidence for a grammar-as-cues approach
Ryan Ka Yau Lai
Under the growing words-as-cues approach in cognitive science (Elman 2004, 2009, 2011, Casasanto & Lupyan 2015, Lupyan & Lewis 2019), words do not map to predefined, context-independent meanings, but serve as contextual cues to mental activity. An important consequence is infinite polysemy: meaning is dynamically constructed in every utterance, and word senses are not discrete, predefined primitives but statistical generalities over tokens of use (cf. Kilgarriff 1997). Unfortunately, the linguistic study of grammaticalisation, which may be cognitively regarded as the historical development of words and constructions from expressing more conceptual to more procedural information (Nicolle 1998), seldom interacts with this approach, despite its emphasis on gradience (McClelland & Bybee 2007, Traugott & Trousdale 2010). Semantic changes are traditionally framed as movement between predefined semantic poles like BODY > REFLEXIVE (e.g. Heine & Kuteva 2002). Promising grounds for convergence between the two traditions may come from Mainland Southeast Asia, where grammaticalised forms often possess numerous highly context-dependent and often non-mutually-exclusive meanings (Bisang 2008). Through the Early Middle Chinese (EMC) text Shìshuō Xīnyù, we argue that EMC zì is best treated as a cue activating various semantic and discourse meanings in different contexts. Originally a reflexive/emphatic with non-reflexive meanings such as ‘naturally’, ‘separately’ and ‘have always been’ in Old Chinese, its semantics broadened considerably in EMC, encompassing other aspectual meanings (‘already’, ‘continue to’) and textual and (inter)subjective meanings (e.g. self-evident truth, concession, and both ‘as expected’ and counter-expectation).
Extending Craig’s (1991) notion of polygrammaticalisation, we argue that zì can cue a network of semantic properties, associated through metaphoric links based on force dynamics (Talmy 1988) and metonymic links based on contextual contiguity, that cannot be modelled under traditional chain-based approaches to grammaticalisation. The present study hopes to pave the way for tighter collaboration between cognitive psycholinguistics and diachronic studies of grammatical meaning.
The role of prior discourse in the context of action: Insights from real-time referential processing
Tiana V. Simovic, Craig G. Chambers
Pronoun comprehension involves linking semantically impoverished expressions (she, it, …) to entities in the comprehender’s mental model of the discourse. These entities have often been previously mentioned and are characterized as “linguistic antecedents”. Pronouns need not "match" antecedents with the same surface form (e.g., "I need a knife, where do you keep them?"), yet the notion of retrieval is often invoked in psychological frameworks. So, what exactly is the content of the relevant mental models? Experiment 1 explored whether the semantics of antecedent expressions are "retrieved" during pronoun interpretation. We created a 3×4 grid with numbered grid squares, containing six object images. Critical trials included two same-category objects (e.g., two houses in adjacent squares 9/10). Eye fixations were measured as participant-listeners heard a pair of instructions, starting with, e.g., “Move the house on the left to area 12”. This movement can entail that "the house on the left" now refers to the unmoved/unmentioned house. If so, when a subsequent instruction contains a pronoun (e.g., "Now move it to…"), the antecedent in memory no longer accurately describes the intended referent. If retrieving antecedent semantics is important for interpretation, we should observe processing costs (e.g., momentary consideration of the house that now matches the antecedent semantics) relative to when initial movements do not invalidate antecedent semantics. However, fine-grained fixation measures showed the same pattern of effortless, immediate interpretation upon encountering pronouns, regardless of potential semantic mismatch. Experiment 2 elicited participant-talkers’ descriptions of objects in the displays. This confirmed spontaneous use of descriptions like "the house on the left" as well as other usage patterns, further supporting the notion that antecedent semantics are not retrieved in the course of language use.
The results show that retrieval is not a meaningful concept for understanding pronoun interpretation, and that frameworks grounded in attention rather than memory provide a more rewarding theoretical foundation.
An eye-tracking study of thematic role identification heuristics in toddlers
Anna Babarczy, Tamás Káldi, Bence Kas
We investigate Hungarian-speaking children’s use of heuristics in identifying thematic roles in sentences. Various sources of information may be available to comprehenders in establishing agent-patient relations, including animacy, word order and morphological marking. Children learning typologically different languages tend to rely on these cues to differing degrees. Investigating Hungarian-speaking preschoolers, MacWhinney et al. (1985) argued that since Hungarian word order is flexible and thus statistically unreliable, while morphological marking is both available and consistent, children should learn to rely on the latter very early on. Our study examines the development of the relative importance of animacy and word order in younger cohorts of Hungarian-speaking children (24, 30 and 36 months) using the looking-while-listening eye-tracking procedure. The linguistic materials comprise transitive sentences with a subject, an object clearly marked for accusative case, and a verb. There are two word-order conditions, SVO and OVS, and four animacy conditions with subject and object animacy (animate vs. inanimate) varied. The children listen to the sentences and watch two simultaneous animated scenes: the target scene depicting the meaning of the sentence and the distractor depicting the same action with the thematic roles of the participants reversed. Analyses of the proportion of looks to the target scene reveal no effect of word order, but there is a strong effect of animacy that gradually decreases with age: children look less at the target scene if the subject is inanimate and the object is animate. The time course of fixations shows that for 24-month-olds the animacy effect appears as soon as the animation begins, for 30-month-olds the effect appears with a slight delay, and the oldest children transfer their attention from the target to the prototypical animate-subject inanimate-object distractor only after about five seconds.
A possible explanation is that as morphosyntactic marking fades in memory, children revert to the animacy heuristic.
Straight paths on curved shapes: Euclidean intuitions in the generalization of a geometrical primitive
Charlotte Barot, Véronique Izard
Historically, Euclidean geometry has been regarded as the most natural geometry. To explore this idea, we studied people’s intuitions about one fundamental concept of Euclidean geometry: the concept of a straight line. We probed how people generalize this concept when lines are traced on curved surfaces. Crucially, the trajectories traced by objects going straight without turning on curved surfaces (called “geodesics” in mathematics) do not possess all the properties of planar straight lines. In particular, a geodesic may intersect itself, and more importantly it does not necessarily correspond to a planar intersection. Non-mathematician participants (N=23) were asked to identify “straight lines” in 26 trials showing lines traced on four surfaces (cone, cube, cylinder, sphere). The task systematically crossed the two factors of straightness and planarity. Our findings indicate that participants tended to use a planar heuristic. First, the factor of planarity was strongly predictive of participants’ responses (F(1,22)=53.32, p<.0001), even though it was orthogonal to straightness. Second, when comparing pairs of (non-straight) curves matched for length and curvature, participants were more likely to identify planar intersections as straight lines (t(23)=5.14, p<.0001). Third, participants’ answers were strongly correlated with the answers of another group of participants (N=11) asked instead to identify whether the lines were planar or not (r=0.81, t(24)=6.79, p<.0001), which suggests that the two concepts coincide. We conclude that the intuitive concept of a straight line is biased to fit the case of a planar straight line. This bias may help explain why so-called “non-Euclidean geometries”, which rely on a more general concept of straight line, appeared so late in the history of mathematics.
Wednesday (19th of May)
Sensorimotor and Linguistic Distributional Knowledge in Semantic Category Production: An Empirical Study and Model
Briony Banks, Cai Wingfield, Louise Connell
The human conceptual system comprises linguistic distributional and sensorimotor information, but the relative importance of each in conceptual processing is debated. We hypothesized that accessing semantic concepts during a category production task (a.k.a. verbal fluency) would rely on both, but particularly on linguistic distributional information, which may provide a computationally cheaper shortcut. We tested this hypothesis in a pre-registered behavioral study of category production and a computational model of sensorimotor–linguistic knowledge. In the behavioral study, participant responses were predicted by a measure of sensorimotor similarity (based on an 11-dimensional representation of sensorimotor experience) and linguistic proximity (based on word co-occurrences derived from a large corpus), calculated for each named concept and its category. Earlier and more frequent responses were similar in sensorimotor experience to, and often shared linguistic contexts with, the category concept. Critically, category production was better predicted when linguistic proximity was included in regression models compared to sensorimotor similarity alone. We further tested the combined role of linguistic and sensorimotor information, and the role of indirect relationships between a category and named concept, using a computational model in which an initial concept would activate neighbouring concepts based on either sensorimotor or linguistic proximity. When only direct neighbours were accessible, the model was insufficient to predict participant responses. By allowing indirect activations, i.e. chains of activations between successive neighbours to reach distant concepts, both linguistic and sensorimotor information provided better predictions of participant responses. A model incorporating both linguistic and sensorimotor information with indirect activations achieved the best fit overall.
Our results suggest that the category production task, and conceptual processing more broadly, utilise both sensorimotor and linguistic distributional information, and rely on indirect activations of concepts.
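The indirect-activation mechanism described above can be sketched, purely illustratively, as spreading activation over a similarity graph. The graph, edge weights, multiplicative decay rule, and threshold below are hypothetical placeholders, not the authors' implementation:

```python
# Illustrative sketch (not the authors' model): spreading activation over a
# similarity graph, comparing direct-only vs indirect (chained) activation.
from collections import defaultdict

def spread(graph, start, threshold=0.2, max_hops=1):
    """Activate neighbours of `start`; activation decays multiplicatively
    along each edge and propagates while it stays above `threshold`."""
    activation = {start: 1.0}
    frontier = [start]
    for _ in range(max_hops):
        next_frontier = []
        for node in frontier:
            for neigh, weight in graph[node].items():
                a = activation[node] * weight
                if a > threshold and a > activation.get(neigh, 0.0):
                    activation[neigh] = a
                    next_frontier.append(neigh)
        frontier = next_frontier
    activation.pop(start)  # report only the activated neighbours
    return activation

# Toy graph: weights stand in for linguistic or sensorimotor proximity.
g = defaultdict(dict)
g["animal"]["dog"] = 0.9
g["dog"]["dalmatian"] = 0.8

direct = spread(g, "animal", max_hops=1)    # only direct neighbours
indirect = spread(g, "animal", max_hops=2)  # activation chains one step further
```

With a single hop, only direct neighbours of the starting concept become active; allowing a second hop lets activation chain through intermediate concepts to reach distant ones, mirroring the indirect activations that improved the model's fit.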
Swearwords as a window onto the interplay between emotions and social norms in bilinguals
Michał B. Paradowski & Marta Gawinkowska
In an ideal world, reactions and answers to ethical problems should be consistent irrespective of the medium through which the question or situation is presented. Yet recent research (Costa et al. 2014; Geipel, Hadjichristidis & Surian 2015, 2016; Cipolletti, McFarlane & Weissglass 2016; Corey et al. 2017; Hayakawa et al. 2017; Ćavar & Tytus 2018; Brouwer 2019; Karataş 2019; Dylman & Champoux-Larsson 2019; Driver 2020) has shown that the same dilemma may elicit different moral judgements depending on the language in which it is described. Using a covert 2×2×2 experiment in which 61 bilinguals were asked to translate (L1↔L2) a passage peppered with swearwords, we show that the picture is much more complex. While the results ostensibly corroborate the Emotion-Related Language Choice theory (according to which bilinguals find their L2 an easier medium for conveying content that evokes strong emotional reactivity; Kim & Starks 2008), the effect was only observed in the case of ethnophaulisms, that is, expletives directed at social (out)groups. This indicates that the key factor modulating response strength is not so much the different emotional power associated with the respective languages as social and cultural norms. Slurs thus open a new window onto bilinguals’ cognition. At the same time, the orthogonal influence of the language medium on decisions, judgments and reactions has far-reaching consequences in our multilingual and multicultural world (not limited to such high-stakes scenarios as legal contexts).
Exploring the neural dynamics of semantic categories within and across images, first and second language words
Alexia Dalski, Gyula Kovács, Géza Gergely Ambrus
The representation of perceptual and conceptual categories in the human brain is a broadly studied but still poorly understood phenomenon. In the current study we set out to investigate whether the patterns of neural activation for semantic categories (living vs. non-living), and for concepts within these categories, are similar across different presentation modalities. For this purpose, we recorded the EEG (n=50) during the presentation of 3 stimulus modalities: images, words in one’s mother tongue (German, L1), or words in a foreign language (English, L2). Fourteen trial-unique stimuli were presented for five concepts within each of two categories (living: chicken, duck, goat, horse, rabbit; non-living: carpet, chair, shelf, table, and window). Multivariate pattern analysis was used to investigate the neuronal dynamics of the representations of semantic information both within and across the different presentation modalities. Our results show that whole-epoch classification yielded above-chance decoding accuracies when classifying stimulus animacy within a given presentation modality, but not across modalities. As expected, time-resolved representational similarity analyses (RSA) differentiated between presentation modalities (L1 vs. L2 vs. images, and words vs. images), peaking around 200 ms after stimulus onset. We also observed significant living/non-living category separation within the image modality. Contrary to our expectations, we observed no significant effect of language (L1 vs. L2), or of modality-independent concepts. These results may serve as a contribution to future studies exploring how language processing and semantic representation might interact.
What’s in a name, and when can a [beep] be the same?
Jill Lany, Abbie Thompson, Ariel Aguero
Words influence cognition well before infants know their meanings. For example, three-month-olds are more likely to form visually-based categories when exemplars are paired with words than with sine-wave tones, a likely precursor to learning symbolic relations between words and their referents. However, it is unclear why words have these effects. We hypothesized that the exaggerated “showing” gestures caregivers use when naming objects, and the resultant synchrony between a sound and object motion, promote object categorization, as auditory-visual synchrony strongly impacts visual perception. We first showed that words and tones have different effects on 3-month-olds’ categorization. Infants (N=38) were familiarized to exemplars from one animal category (e.g., dinosaurs), with each exemplar paired with a word (Word condition) or a sine-wave tone (Tone condition). Infants in both conditions were tested by presenting a novel exemplar from the familiarized category side-by-side with an exemplar from a different category (e.g., a fish) in silence. Infants in the Word condition showed a greater preference for the exemplar from the familiar category than those in the Tone condition. To test how sound-object synchrony affects categorization, 4 additional groups of 3-month-olds (N=81) received pretraining with either tones or words before being familiarized and tested as in the Word and Tone conditions above. In the Synchronous conditions, they viewed an object being moved in synchrony with the presentation of either words or tones. In the Asynchronous conditions, infants viewed videos that were identical except that object motion and sounds were not synchronous. Pretraining infants with tone-object synchrony led tones to influence categorization as words do. Moreover, pretraining with word-object synchrony enhanced the facilitatory effects of words on categorization. Pretraining with asynchronous videos did not impact infants’ categorization performance.
Thus, temporal structure within caregivers’ communicative behaviors may lead words to facilitate categorization, and ultimately to forming symbolic representations.
Exploring Task Co-representations and Theory of Mind Among Children Aged 3-5
Anna Kispál, Katalin Oláh
While engaging in joint actions, humans form task co-representations of their partner’s complementary part of the task, which often appears as interference in task performance. The current study investigated whether task co-representations during a joint action can be observed among children aged 3-5. A crucial aim was to conduct a study targeting the younger age group, as joint action seems to appear at an early age, yet the methods used in previous studies found evidence of task co-representations only after the age of 4. To test whether children can form task co-representations while solving a go/no-go reaction time task, we designed a new paradigm called the Tower task. Instead of parallel task solving, this task involves a clear common goal, which helps children understand the jointness of the situation. The study also investigated whether forming co-representations is connected to individuals’ theory of mind (ToM) abilities. According to our results, children in both the 4-5-year-old and the 3-year-old age groups were able to form task co-representations in the Tower task. Furthermore, our results indicate that better ToM abilities helped children avoid the interference caused by forming co-representations. To explore the relationship between co-representations and ToM more deeply, we are currently conducting an online version of the study. Supported by the ÚNKP-20-2 New National Excellence Program of the Ministry for Innovation and Technology from the source of the National Research, Development and Innovation Fund.
KNOW First, without Knowledge First
Christopher A Vogel
The ontological status of knowledge as a mental kind has received increased attention with the advent of novel methodologies in the study of social cognition. This Knowledge First view involves two related claims: a conceptual claim that human and non-human primates represent minds as having primitive, factive mental states of knowledge; and an ontological claim that such representations are accurate, tracking a distinct mental-state kind. In a series of recent papers, Jennifer Nagel contends that findings from mind-reading experiments supporting the conceptual claim bolster philosophical considerations for the ontological claim. But these sources of argument for the Knowledge First view depend on a troubled externalist theory of linguistic meaning. In citing both the failure of epistemologists to respond to Gettier-style cases and the grammatical properties of mental state verbs, Nagel assumes the troubled externalist theory that the meanings of linguistic expressions determine their truth-conditions. Likewise, in applying results from mind-reading studies with human children and primates to the conceptual claim, Nagel assumes a direct mapping from meanings to concepts that belies the flexibility of natural language. I argue that this externalist assumption is troubled, and that in assuming it so too are Nagel's arguments for the Knowledge First view. Nagel is not alone among philosophers and cognitive scientists in adopting this externalist assumption, but she serves as a vivid example of how central this assumption is to even naturalist philosophical inquiry.
Turkish children’s conceptualizations of belief-related verbs and their implications for the Theory of Mind scale
Feride Nur Haskaraca Kızılay, Hande Ilgaz
Wellman and Liu’s (2004) seminal work on scaling Theory of Mind (ToM) tasks for English-speaking American children paved the way for subsequent scaling studies in different cultures, e.g., China, Iran, Australia, Germany, and Turkey. Apart from revealing inter- and intra-cultural variability in the order in which children passed the ToM tasks, the studies conducted with Turkish children have also hinted that Turkish children score lower on belief-related tasks [Diverse Belief (DB) and False Belief (FB) tasks] than their peers from other cultures. The current study investigates whether Turkish-speaking preschoolers’ performance on the belief-related tasks of the ToM battery is affected by the appropriateness (vs. inappropriateness) of the mental state verbs (MSVs) used in these tasks. To this end, 60 Turkish-speaking preschoolers (age range: 3 to 5 years) were tested on both the original ToM battery and pragmatically modified versions of the DB and FB tasks. The DB and FB tasks were modified by either a) replacing the MSV used in the task (i.e., “think”) with a pragmatically and semantically more appropriate one (e.g., “guess” or “falsely think”); or b) changing the epistemological circumstances of the task by adding an evidential basis for the belief, so that the MSV used in the task (i.e., “think”) conformed with the pragmatics of Turkish. Results revealed that Turkish-speaking children only benefited from the modification that manipulated the epistemological basis for the MSV. This version also correlated better with children’s executive functioning and language abilities, and it scaled more consistently. These results support the argument that the pragmatic and semantic features of ToM tasks may affect children’s performance on these tasks, and therefore underscore the importance of studying ToM in linguistically more diverse and accurate ways.
The role of social interactions and the inversion effect when processing expressions conveyed by emotional female bodies
Gyopárka Barbara Lázár, Beatrix Lábadi
The aim of the current study was to examine the role of the inversion effect, as well as of perceived social interactions, in the processing of affective female body stimuli. Twenty-two adults participated: 6 men (age M = 24.17, SD = 3.31) and 16 women (age M = 24.00, SD = 1.93). Grayscale photos of women with their faces masked were taken, edited, and standardized for this study. The stimuli varied based on three conditions: 1) the orientation of the figures (upright vs. inverted); 2) the type of social context (single bodies vs. pairs of bodies); 3) the relative positioning of the two bodies (interacting vs. noninteracting). Two emotions, namely anger and fear, were presented. We hypothesized that the recognition of emotions would be most accurate when single-body stimuli were presented, and that processing bodies in an upside-down position would impair the perception of affective expressions. Most of our results showed a reversed inversion effect, which indicates altered configural processing depending on the conveyed emotion. The recognition of anger was more rapid when noninteracting bodies were presented upside down. Fear, on the other hand, showed similar results when facing, inverted bodies were shown. In some cases single affective bodies were recognized more accurately than interacting or nonfacing pairs of bodies, demonstrating the importance of social context in the processing of bodily expressions. Furthermore, noninteracting bodies were perceived more accurately than interacting pairs. Our results suggest that the rapid and accurate recognition of emotions is primarily supported by part-based processing of bodies rather than configural processing, especially when bodies are inverted. Significant differences in the perception of emotions also support the adaptive nature of detecting threatening social signals.
Infants’ perception of speaker selection in multi-party conversations
Lilla Magyari, Bálint Forgács, Ildikó Király
A number of previous studies have shown that infants, already around their first birthday or even earlier, expect speech to evoke contingent responses between social partners during verbal interactions. However, it is not well understood how these expectations mature with developing linguistic and paralinguistic knowledge. Therefore, we studied infants’ expectations regarding communicative interactions, specifically turn-taking behavior in multi-party conversations, from a third-person perspective using eye-tracking. We aimed to study whether infants between 16 and 20 months of age expect a response from a person who is addressed by direct eye-gaze during a multi-party conversation, and whether such an expectation is facilitated by linguistic information conveyed during the speaker’s turn. In our experiment, infants watched short video clips in which three persons interacted communicatively. At the beginning of each clip, one person either uttered a two-syllable novel word (speech) or coughed. While she vocalized, she simultaneously either gazed at one of the other persons (speaker selection) or looked down (no selection). Data gathering is still ongoing, but preliminary analyses suggest that infants fixated more on the interlocutor who was selected by the speaker’s eye-gaze in the speech as well as in the cough condition. However, it is not clear whether this effect reflects infants’ expectation of a response or whether infants simply followed the gaze of the speaker without any anticipation of a turn exchange.
Physical pragmatics: Inferring the communicative meaning of objects
Michael Lopez-Brau, Julian Jara-Ettinger
Beyond language and gesture, people also have the remarkable capacity to communicate through objects: A hat on a chair means it is occupied, stanchions across an entrance mean we should not cross, and, during snowy winters in the northeastern United States, a plastic chair on a shoveled parking spot means it is not up for grabs. How do people embed communicative meaning into these objects such that others may readily interpret them? How do we, as observers, extract this meaning? We propose that this capacity emerges from our ability to reason about the mental states underlying other agents' actions, also known as our Theory of Mind. We show that a computational model that infers mental states based on how agents manipulate their environment spontaneously gives rise to the ability to communicate through objects. As predicted by our model, in Experiment 1 (n=80) we reveal that objects that impose a low cost on observers are judged as more likely to be communicative than those that do not impose a cost. In Experiment 2, we found that people from both industrialized (n=160) and non-industrialized (n=150) societies can infer the communicative meaning of objects, even in the absence of a pre-existing convention, as also predicted by our model. Moreover, in Experiment 3 (n=160), we show that, after inferring the meaning of an unconventional object, people can store and retrieve its meaning in subsequent encounters, revealing a mechanism behind how communicative objects may become quickly conventionalized. Our model sheds light on how humans use their ability to reason about other people's behavior to embed and extract social meaning from the physical world.
Auditory Figure-Ground Segregation is impaired by aging and age-related hearing loss
Péter Velősy, Ádám Boncz, István Winkler, Brigitta Tóth
Listening in noisy environments relies on the ability to extract structure from noisy sensory input while integrating sound elements into a meaningful object (figure-ground segregation, FGS). To assess whether aging is accompanied by impairments in FGS, event-related brain potentials were recorded from normal-hearing young adults, normal-hearing older adults, and hearing-impaired elderly adults during auditory recognition. The FG stimuli consisted of a figure (a rising sound stream) embedded in a spectrotemporally overlapping background. The listeners’ task was to report whether they detected a figure at either a high or a low background noise level (signal-to-noise ratio, SNR). An adaptive threshold detection method was used to determine the high and low SNR levels corresponding to 80% and 60% figure detection accuracy for each participant. Relative to the young and the normal-hearing elderly groups, the hearing-impaired group needed a higher number of coherent figure tones to reach the same performance level in both the low and high SNR conditions. Generally, correct figure perception elicited an object-related negativity (ORN), followed by a P600 response. Source localization revealed that the ORN was generated in the auditory, parietal, and midline frontal cortices, while the P600 was linked with activity in the lingual and cingulate cortices. Compared to the young group, the latency of the ORN was delayed (200 ms) in the normal-hearing elderly group, with a decreased P600 amplitude, which indicates that older adults may compensate for hearing loss by taking more time for perceptual evaluation and applying higher decision criteria. The hearing-impaired elderly showed even more delayed ORN activity, especially in the low SNR condition, relative to the young and normal-hearing elderly groups, which indicates that for elderly listeners with hearing impairment, perceptual evaluation may take longer and yield a less certain outcome even when the sensory input is less noisy.
Our results may provide evidence that age-related difficulty in listening under adverse conditions is caused by impaired FGS processes in aging.
Problems in Audiovisual Filtering for Children with Special Educational Needs
Stephanie Armstrong-Gallegos, Rod Nicolson
There is pervasive evidence that problems in sensory processing occur across a range of developmental disorders, but their aetiology and clinical significance remain unclear. The present study investigated the relation between sensory processing and literacy skills in children with and without a background of special educational needs (SEN). Twenty-six children aged between 7 and 12 years old, from both regular classes and SEN programmes, participated. Following baseline tests of literacy, fine motor skills and naming speed, two sets of instruments were administered: the carer-assessed Child Sensory Profile-2 and a novel Audiovisual Animal Stroop (AVAS) test. The SEN group showed significantly higher ratings on three Child Sensory Profile-2 quadrants, together with body position ratings. The SEN participants also showed a specific deficit when required to ignore an accompanying incongruent auditory stimulus on the AVAS. Interestingly, AVAS performance correlated significantly with literacy scores and with the sensory profile scores. It is proposed that the children with SEN showed a specific deficit in “filtering out” irrelevant auditory input. The results highlight the importance of including analysis of sensory processes within theoretical and applied approaches to developmental differences and suggest promising new approaches to the understanding, assessment, and support of children with SEN.
More is not necessarily better – how different aspects of sensorimotor experience affect recognition memory for words
Agata Dymarska, Louise Connell, Briony Banks
We investigated the contribution of semantic information to word memory using imageability and sensorimotor strength as predictors. Semantic richness theory predicts that a semantic variable should facilitate performance on a memory task, but different types of semantic variables warrant separate investigation. For example, sensorimotor strength represents multi-dimensional experience with a concept, while imageability focuses on conscious information biased towards visual experience; they could therefore show diverging effects. Data from a mega-study of word recognition memory (Cortese et al., 2010; 2015), as well as from an online memory task, were analysed in a series of hierarchical linear regressions. Both sensorimotor strength and imageability had an effect on word memory performance, though not as strong as reported in the previous literature. Moreover, the effects were smaller when the memory task was unexpected, suggesting that the semantic effects depend on memory strategies (or context). Most importantly, we found that sensorimotor strength had varying effects on different memory measures, which was not in line with the predictions of semantic richness theory. The findings highlight the importance of a multi-dimensional approach to measuring and testing semantic experience and its effect on cognitive processing, and provide implications for the use of semantic variables in memory research.
Children’s use of collective versus individual personal pronouns to detect social alliances
Antonia Misch, Markus Paulus
From early on, children face the challenge of navigating an incredibly complex social world, which consists of multiple layers of social groups and categories. Often, social groups are not visibly marked or explicitly labeled, and therefore children have to rely on other, more subtle cues to infer others’ group membership. Language, in particular the use of certain personal pronouns such as “we” vs. “I”, can be a valuable indicator of social identity and belongingness, but no experimental research has investigated whether children infer others’ social affiliation based on incidental pronoun use. In the current study we investigate this question by presenting children in two age groups (6-8 and 10-12 years old) and adults with different social scenarios (e.g., a village, a campsite). Each scenario is presented by an individual using either collective (e.g., “we”, “our”) or individual pronouns (e.g., “I”, “my”). We then ask children to rate these individuals’ relationship with their group on measures of belonging, cohesion, solidarity, and preference. Based on previous related research, we expect that adults will use personal pronouns to make inferences regarding the individuals’ group characteristics. Research on children’s pronoun use, however, shows that while younger children produce and comprehend pronouns correctly, they sometimes fail to incorporate the perspective of another person. We thus expect that younger children will show pronoun-based inferences to a lesser extent than older children. Descriptive pilot data (N=19; full data collection about to start) suggest that both adults and older children rate groups presented with collective pronouns as higher in cohesion (e.g., for belonging: M=6.69; M=6.75) than groups presented with individual pronouns (M=4.06; M=4.88), whereas younger children do not differentiate between these two contexts (M=5.92 vs. M=6.17).
These results suggest that despite children’s early emerging group mindedness and categorization skills, inferences based on pronoun use seem to emerge several years later.
An epistemic theory of Fregean sense
Cheung Wai Lok
According to Gottlob Frege, a name such as ‘Hesperus’ refers via its associated description, ‘the astronomical object occupying the so-and-so position in the evening sky’, and ‘Hesperus’ refers to Venus because Venus satisfies the description. Saul Kripke points out that names refer without such mediation, leaving the associated description not a semantic but an epistemic role. For Frege, ideas about an object differ from concepts of the object in that different epistemic agents may associate different ideas with it, but not different concepts; every agent’s concept of Hesperus is the same, yet agents may have different ideas about it, since some think it beautiful and some do not. The associated description, instead of contributing to the semantic reference of the name, in fact expresses what the epistemic agent knows about the object. When the properties involved are essential to the object, such an associated description is necessarily true of it, but some descriptions, such as ‘the astronomical object occupying the so-and-so position in the evening sky’, are only contingently true of the semantic referent of the name. Therefore, if I am correct, Frege would have permitted describing being the astronomical object occupying the so-and-so position in the evening sky as a mere idea of Hesperus, rather than its concept. It is his starting point in mathematical objects that biases his theory toward concepts consisting mostly of essential properties.
Cross-linguistic regularities in perception verb lexicons: a study of 100 languages
Elisabeth Norcliffe, Asifa Majid
Languages vary considerably in the distinctions they encode in words, even in their expression of the most basic and universal of human phenomena—sensory experiences. Although it seems obvious to distinguish what appear to be conceptually basic notions such as ‘hear’ and ‘smell’, or ‘taste’ and ‘touch’, some languages colexify two or more sense modalities with a single perception verb. Previous research suggests there might be underlying regularities in colexification patterns, despite surface crosslinguistic variation, pointing to our shared biology and cognition as possible constraints on the lexical expression of perceptual meanings (Viberg 1984; Evans and Wilkins 2000). Drawing on a genealogically and geographically stratified sample of 100 languages, we investigated whether colexification patterns in the perceptual domain demonstrate the predicted cross-linguistic regularities. We examined the relative frequencies of perception verbs that colexify two or more sense modality meanings using a weighted semantic network. The network revealed strong cross-linguistic regularities for some combinations of sense modalities that tended to colexify. The pairings {hear-touch}, {touch-taste} and {hear-smell} were the most frequent cross-modal combinations encoded by perception verbs; meanwhile {taste-smell}—although predicted to be common—was rarely colexified. Vision stood apart among the senses in showing a strong tendency to be lexicalized as a distinct concept. We suggest two independent constraints function in concert to give rise to these patterns: conceptual similarity, i.e., the tendency for similar concepts to colexify (Xu et al., 2020; Youn et al., 2016) and communicative need, i.e., the tendency for colexified meanings to occur in the most distinct contexts, given some minimal relatedness, in order to minimize ambiguity (Kemp and Regier 2012; Piantadosi et al., 2011). 
Overall, our results challenge simplistic notions that presume semantic categories can simply be read off shared biology and cognition.
What are the semantic dimensions of word meaning? Moving beyond the Nijmegen method
Guillermo Montero-Melis, Tanita Duiker
We can perceive an infinite number of distinctions in virtually any domain, but words encode only a subset of those. A fundamental question concerns what cognitive dimensions words delineate. The “Nijmegen method” has been an influential approach where speakers label visual stimuli (e.g., different actions) and relevant meaning dimensions are inferred from naming patterns. However, the method has downsides. First, its visual nature biases it toward visual dimensions; second, because stimuli depict potential word referents, they may lead to undersampling of the relevant semantic space. We adopt an alternative approach in which native speakers make semantic judgments between words in a specific domain, after which meaning dimensions are extracted through statistical analyses. In two online studies, native Dutch speakers judged the meaning of the main 31 Dutch manner-of-motion verbs (assessed by lexical frequency and norming). In Study 1, 45 participants carried out a multiple arrangement task yielding a continuous measure of semantic similarity. Results suggest that verb meaning is not only structured along biomechanical visual dimensions as previously argued, but also along non-visual dimensions such as inner states (e.g., emotions, effort) and functions (e.g., goals). Study 2 verified our interpretation of the dimensions: 26 new participants grouped the same verbs into distinct categories and labelled each category. Similarity matrices extracted from this task correlated strongly with those in Study 1, suggesting robustness of the results (Mantel test: Spearman’s rho=.83, p<.001). Crucially, participant-generated category labels confirmed that verb semantics encode not only biomechanical visual features (51% of categories), but also inner states (23%), function (14%) and other criteria (11%). We are currently collecting data from additional tasks to assess how consistent semantic representations are across speakers. 
Our approach reveals semantic dimensions difficult to capture with the Nijmegen method and provides new insights into how linguistic meaning relates to perception and cognition.
Metacognition as a multimodal semiotic phenomenon
Henrique T. Perissinotto, João Queiroz
Metacognition ("thinking about thinking") depends on language and representation. Those who investigate metacognition have tended to approach language and representation as internal knowledge structures, rather than as externally oriented semiotic processes. It is difficult to avoid being deceived into seeing language as symbolic words and discrete sentences. It is proposed here that semiosis (the action of signs), in a rich, physically and culturally distributed multimodal form, is crucial for metacognitive tasks. We base our approach on Peirce's mature semiotic. Metacognition is treated as semiosis – the communication of a habit from an Object (a first-order cognitive process) to an Interpretant (a cognitive behavior) through a Sign (a second-order cognitive process), so as to constrain the interpreter’s behavior. By applying Peirce’s model of semiosis, the phenomenon of metacognition is seen to be essentially triadic, and both interpreter- and context-dependent. It connects Sign, Interpretant, and Object, where the form communicated in the first-order cognitive process is embodied as a constraining factor on interpretative behavior. We explore how multimodal patterns of semiotic activity (rather than monomodal symbol-based processes) can provide a more accurate description of metacognition. To develop our ideas, we examine the multimodal phenomenon of marking in dance, with a focus on marking-for-self. To mark is to perform a dance phrase in a simplified, schematic, or abstract way. When marking, dancers use their bodies in motion to represent some aspect of the complete phrase they are thinking of. Marking-for-self is a specific type in which dancers mark in their own idiosyncratic manner, a process that potentiates real-time reflection through external representations. Marking is a diagrammatic gesture. Diagrams signify through the arrangement of relations between their parts, which are analogous to the arrangement between parts of their objects.
As such, the object of a diagrammatic hypoicon is always an intelligible relation.
Exploring children’s difficulties with the exhaustivity inference of focusing: The role of focus identification
Lilla Pintér, Balázs Surányi
Languages employ various devices to structure sentence content into parts with higher and lower relevance (focus and background, respectively, as in [BILL]focus [won the race]background). One aspect of sentence interpretation that emerges relatively late in L1 development involves inferences regarding the nature of the relevance of the delineated focus. Previous research found that a key inference of this type, the exhaustivity inference (namely that replacing the focus with any of its possible alternatives yields false propositions) is not computed at adult-like levels before 7 years of age. This contrasts with essentially similar and better researched scalar inferences, which appear to be acquired earlier. We hypothesized that children have difficulty generating the exhaustivity inference not because the derivation of the inference itself poses a special challenge, but because they have trouble identifying the focus with which the inference could be associated. A two-part experiment, currently running with 4-to-6-year-olds, was designed to test this hypothesis. Subexp1, tapping into focus-identification, requires children to correct any false sentence by replacing the focus with an alternative that makes the sentence true. Subexp2, which investigates sentences with a focus that is interpreted highly exhaustively by adults, requires children to judge the truth of sentences in relation to a picture that licenses either an exhaustive or a non-exhaustive interpretation. As a within-subject factor, target sentences are presented on their own in a first session, while sentences in the second session are preceded by an information question that helps identify the focus. 
We expect that (i) those children who can correctly identify focus in the NO-QUESTION condition of Subexp1 will also compute the correct exhaustivity inference in the NO-QUESTION condition of Subexp2; further, (ii) those children whose focus-identification is effectively helped by the preceding question in Subexp1 will derive more exhaustivity inferences in the QUESTION condition of Subexp2.
The Effects of Sentence Context on Imageability and Concreteness of Metaphors
Márton Munding, Bálint Forgács, Alex Ilyés
Figurative expressions occur regularly and naturally in everyday language; however, studies on the semantic processing of metaphors rarely manipulate sentence context variability. Broadening and narrowing of meaning is thought to be a basic semantic operation, yet its effect on metaphors and on the semantic factors that could influence the processing of figurative meaning has not been investigated systematically. Moreover, there seems to be a delicate interplay between the semantic variables relevant for figurative language: words with high emotional value can have high imageability but low concreteness ratings. To address the above questions from a new angle, we constructed a large set of Hungarian sentences that ended on metaphorical or literal expressions, where the meaning of sentence-final target words was either broadened or narrowed (using high and low cloze probability as a proxy measure). Two norming studies were carried out: one to confirm the achieved cloze probability, and another to see its effect on six semantic variables: concreteness, imageability, interpretability, naturalness, emotional arousal, and valence. We hypothesized that imageability ratings would be less correlated with concreteness, due to emotional ratings, in metaphorical sentences but not in literal ones, where imageability and concreteness were expected to correlate positively. We found that imageability and concreteness did indeed diverge in metaphorical sentences, but intriguingly this was not due to the moderating effect of emotions. We also found that sentences with higher cloze probability ratings received higher ratings for all non-emotional semantic variables, such as concreteness and imageability. Our results suggest that when meaning construction moves towards narrowing, sentence meaning is conceived as more imageable and concrete – which raises the possibility that it is not broadening that evokes imagination, but narrowing.
We don’t need no education: A case study in using artificial language learning to investigate negative dependencies
Mora Maldonado, Jennifer Culbertson
Languages vary with respect to whether sentences with two negative elements give rise to double negation (DN) or negative concord (NC) meanings. For example, in Dutch, combining the negative indefinite ‘niemand’ with the negative marker ‘niet’ results in a double negation meaning:

(1) Niemand rent niet.
    n-body run NEG
    “Nobody doesn’t run” → “Everybody runs”

By contrast, despite involving two negative elements, the Serbian sentence in (2) contains only one semantic negation, yielding a negative concord interpretation:

(2) Niko ne trci.
    n-word NEG run
    “Nobody runs”

We explore an influential hypothesis about what governs this variation, namely, that whether a language exhibits DN or NC is partly determined by the phonological and syntactic nature of its negative marker (Jespersen 1917; Zeijlstra 2004). In particular, one version of this hypothesis argues that languages with affixal negation must be negative concord (Zeijlstra 2004). We ran three artificial language experiments to investigate whether learners are sensitive to this hypothesized correlation between negative marker and interpretation. Experiment 1 tests whether adult English speakers find it easier to learn a DN language when the negative marker is an adverb than when it is an affix. Experiment 2 replicates Experiment 1 but accentuates the dissimilarity between negative markers by further manipulating their morpho-phonological properties. Experiment 3 translates this question into inference patterns: are learners more likely to infer that a language is DN or NC as a function of the type of negative marker? Altogether, our findings fail to provide evidence supporting the connection between interpretation and negative markers. Instead, our results suggest that learners find it easier to learn NC languages than DN languages, independently of the properties of the negative marker. This is in line with evidence from natural language acquisition (Thornton, Notley, Moscati, & Crain, 2016).
False Memory to Reduce Dissonance: A Narrative Reproduction Study
Rabia Evgin, Mahmut Kurupınar, Salih Can Özdemir, Ali İzzet Tekcan
There have been a limited number of studies addressing the link between cognitive dissonance and false memory. These studies indicate that the feeling of dissonance created by induced-compliance and free-choice paradigms may lead to increased false memories. The current study adds to the literature by introducing a novel story paradigm to induce dissonance and by investigating the relationship between vicarious dissonance and false memory. Participants read a story and were asked to rewrite it after a ten-minute filler task. The story consisted of two parts: the first part was identical for both conditions and established the main character to be a good person. In the second part, the main character’s behavior was either dissonant or consonant with the positive impression given in the first part of the story. Participants’ reproductions were coded for correct recall, omissions, changes, and additions, the last two constituting false memory. The results show that the participants in the dissonant condition produced more false memories than those in the consonant condition. Furthermore, they made more memory errors in favor of the character, justifying his/her inconsistent acts. In line with previous studies, the current study shows that the feeling of dissonance can cause false memory. Moreover, it demonstrates that a narrative paradigm can induce vicarious dissonance, with its readers falsely remembering its events in a way that could reduce that feeling of dissonance, suggesting that dissonance-induced memory errors may serve to resolve the feelings of dissonance. These results can have important implications for eyewitness testimony, as remembering past experiences that could elicit the feeling of dissonance may be particularly susceptible to the emergence of false memories.
The Expression of Agency and Causality in Intentional and Accidental Dynamic Events
Zeynep Adıgüzel, Salih Can Ozdemir, Tilbe Göksun
Intentionality is an important cue in describing events, evident both in our conceptualization of the agent (the performer of an action) and in how we express actions. Intentional events are usually described with agentive language (e.g., “She broke it.“), while accidental events alternate between agentive and non-agentive language (e.g., “The vase broke.“) across languages. An agent’s intentional action on an object is mostly considered causal. However, earlier research focused either on the expression of agency or on the expression of causality in a language. Unlike previously studied languages, Turkish has an additional way of expressing agency: pronoun dropping (prodrop). For causality, in addition to lexical causatives (i.e., encoding both cause and effect in its semantics, e.g., “break”), Turkish has morphological causatives (i.e., connoting causality by changing a verb, e.g., uyu “to sleep” to uyu-T “to make someone sleep”). The present study examined the relation between agency and causality, focusing on Turkish-specific linguistic features. Participants (N=134) watched 13 videos of events in which an agent performed everyday actions intentionally and accidentally, and were asked to describe what happened. We conducted a 2✕3✕2 (Intentionality✕Agency✕Causality) repeated-measures ANOVA. The results showed that both intentional and accidental events were most often described with agentive clauses, followed by prodrop clauses, with non-agentive clauses being the least used (p’s<.05). Moreover, intentional events had more agentive descriptions than accidental events, and accidental events had more non-agentive descriptions than intentional events (p’s<.05) (Agency✕Intentionality interaction), with use of prodrop elements not differing by event type (n.s.). Furthermore, participants used more causal verbs when describing intentional events than accidental events, and lexical causatives were used more than morphological causatives (p<.05) (Causality main effect).
Thus, intentional events were more agentive and causal, whereas accidental events were more non-agentive and less causal. These findings from Turkish suggest that both causality and agency are prominent cues in intentional event conceptualization.
Do Mandarin nouns and classifiers individuate? Experimental evidence from a quantity judgment task
Ziling Zhu, Kristen Syrett
Languages differ in whether and how they encode a count/mass distinction between objects and substances. While English has count syntax, Mandarin does not. However, previous researchers have argued that Mandarin has ways to highlight an ontological object/substance distinction. Cheng & Sybesma (1998, 1999) argue that while Mandarin nouns are mass syntactically, some can be count semantically, and count classifiers select an instance of the nominal denotation to individuate. Zhang (2013) argues that while all nouns are mass by default, some are delimitable, and certain classifiers reflect this individuability. In addition, Mandarin nominals have been claimed either to have a built-in count/mass semantic partitioning (Cheng & Sybesma 1998, Chierchia 2019) or to carry no such semantic distinction (Borer 2005, Pelletier 2012). Thus, while Mandarin does not have syntactic frames that cue the count/mass distinction, it may have other means to individuate. We experimentally investigate whether nominals, classifiers, or both in tandem, individuate, thereby bringing empirical evidence to bear on theoretical claims about the cross-linguistic count/mass distinction and the role of the syntax-semantics mapping. We implemented a quantity judgment task (Barner & Snedeker, 2005, 2006) with adult native Mandarin speakers via Prolific. Participants were introduced to two characters, each with a set of novel objects, and asked “whose is more.” We systematically manipulated linguistic prompts across four between-subject conditions: [+/- nominals, +/- classifiers]. The dependent measure is the percentage of responses reflecting a count/individual-object interpretation. We therefore build upon previous research attempting to address this issue, but diverge from their methodologies and avoid potential confounds (Chien et al. (2003) used familiar stimuli and known nouns, Li et al. (2008) covaried solidity/substance of stimuli with the count/mass distinction, and Lin & Schaeffer (2018) focused solely on nouns). Results (in progress) indicate that Mandarin nominals can individuate, while count classifiers do not exclusively select objects, nor mass classifiers substances.
Thursday (20th of May)
Acquiring the sociocognitive meaning of the complement-clause construction: Mutual influences between linguistic and sociocognitive development
Ditte Boeg Thomsen, Birsu Kandemirci, Anna Theakston, Silke Brandt
Across the languages of the world, the semantic content of a wide range of constructions is sociocognitive: designating persons’ perspectives on ideas. One crosslinguistically widespread perspective-marking tool is the complement-clause construction (e.g. “Mum thinks [it’s a slowworm]“), where the complement clause expresses an idea (“it’s a slowworm”) which is anchored explicitly in a person with a specific perspective (“mum thinks/I hope/he knows…“). How do children acquire the sociocognitive meaning of such constructions? A close relationship with sociocognitive development could be expected, but studies disagree on whether complement-clause acquisition depends on or supports sociocognitive abilities to represent and reason about people’s invisible beliefs. To examine direction of causality, we therefore conducted a longitudinal study and a training study with English-speaking two- and three-year-olds. The longitudinal study investigated whether children’s proficiency with complement clauses predicted their belief reasoning six months later and/or vice versa. Testing 45 children (2;9-3;5 years), we found a clear bidirectional relationship: Initial complement-clause proficiency explained unique variance in later belief reasoning, while initial belief reasoning explained unique variance in later complement-clause proficiency. To investigate directly whether complement-clause experience promotes sociocognitive development, we then conducted a training study with 76 three-year-olds (3;0-3;10 years). Children who had activities with mental-state contrasts mediated linguistically with complement-clause constructions (“I think that the book is on the table”) advanced significantly more in belief reasoning than children trained with simple clauses (“The book is on the table”). 
Together, the two studies provide strong evidence that children draw on both linguistic cues and general sociocognitive development when they acquire the complex sociocognitive meanings of complement-clause constructions: experience with complement-clause constructions explicitly spelling out relationships between persons and ideas helps children learn from situations with mental-state contrasts, while a nascent understanding of mental states helps children to discern the meaning of the complement clauses they hear.
Encoding time without tense
Roumyana Pancheva, Maria Luisa Zubizarreta
Tense is one of the key means of grammatically encoding the concept of time in language. Given the importance of time in cognition, it might be expected that tense is part of the grammar of all languages. Many languages indeed have tense morphemes, and cross-linguistic research has uncovered remarkable similarity in the types of meanings that they express, giving support to the view that tense is universal. Yet, there are also languages that do not have to mark tense overtly: they either do not have overt tense morphemes, or their tense morphemes are optional. Such languages come from a number of different families, suggesting that the lack of overt tense is widely attested. Can tense still be considered a linguistic universal? The answer, within formal semantics, has so far been “yes”. The formally explicit semantic analyses that have been proposed for languages without obligatory overt tense all posit tense in one form or another. The analyses differ along two dimensions: how they accomplish reference to time intervals (e.g., via a syntactically represented covert pronoun or a purely semantic rule), and how they restrict the location of those time intervals (e.g., via covert lexical features or pragmatic constraints). We develop a different type of account that does not rely on tense for temporal reference. We propose that evaluation time shift, a mechanism independently attested in the narrative present in languages with tense, can be more widely used for encoding temporal meaning in the absence of tense. We illustrate this account for Paraguayan Guarani and identify several empirical advantages over accounts that employ tense. The broader consequence of our proposal is an enriched typology of temporal systems: some languages have tense, whether overt or covert, and others do not. Most notably, tense is revealed not to be a linguistic universal.
Representing the exhaustivity of collective and individual actions: an investigation of universal quantification in adults and infants
Nicolò Cesana-Arlotti, Tyler Knowlton, Jeffrey Lidz, Justin Halberda
Universal quantifiers are pervasive across languages, supporting representations that allow us to generalize over an infinite number of entities (e.g., every natural number has a successor). Yet, it is unknown whether learning the terms for universal quantification is a prerequisite for representing the logical concepts. To shed light on the cognitive basis of universal quantification and its developmental origins, we study how adults and infants represent exhaustivity in visual scenes. We found that adults (N=36) spontaneously use “all” to describe movies of exhaustive-collective actions (e.g., all the agents chasing the same ball together), and use “each” to describe exhaustive-individual actions (e.g., each agent chasing a ball individually). Furthermore, we found that the probability that a participant used “each” for individual actions dropped when the number of agents surpassed the multiple object tracking limit (>4; Scholl 2001), while the probability of using “all” did not, even in scenes with 11 chasers. This finding, corroborated in a large-scale conceptual replication (N=270), suggests a cognitive signature of two distinct representations of exhaustivity in visual scenes: collective exhaustivity – a property of objects grouped in a visual ensemble – and individual exhaustivity – a property of individuals tracked independently. Next, in two habituation experiments, we presented 10-month-old infants (N=48) with the same types of movies. We found that they can discriminate collective-exhaustive chasing from individual-exhaustive chasing. In a follow-up experiment (N=28), we are probing whether infants’ representation of individual-exhaustive chasing shows a four-agent limit, as in adults. By doing so, we will identify potential precursors of natural language quantification in infancy.
Our findings align with our recent proposal that “all” expresses exhaustivity among members of a set, while “each” expresses exhaustivity across non-grouped individuals (Knowlton et al., under review). Moreover, they open the question of whether language acquisition is required for these two types of logical computation.
Extending mind and self into the world: hybrid selves
Amanda Luiza Stroparo, Léo Peruzzo Júnior
This work argues that one of the first consequences of the extended mind argument is that the distinction between perception, action, and cognition is decreasingly able to show, on the one hand, the need for a “self-manager” and, on the other, that the task of cognitive processes is exclusively internal. Hypothesis: the self, too, should be extended. Nevertheless, perhaps its reconsidered boundaries need more than extension to account for its social and action-oriented character. Methods: Through a discussion of Andy Clark’s work on the extended mind, which draws on philosophy of mind, computational models, robotics, developmental psychology, and neuroscience, it is possible to ground a theory of cognition that extends into the world, including, for example, language and culture as cognitive artifacts, and whose structure is roughly organized around predicting the best actions in the world. On the predictive processing (PP) story, it is argued that the mind is a system constituted by perception-action cycles that engage the world without “accurate internal representations”. We thus discuss how the delicate border between the user’s mind and their instruments allows the environment to be recruited to benefit cognitive processes. Results: We argue that social, affective, and embodied interactions foster the birth of a social, affective, embodied, and extended self. Simultaneously, the self emerges as narrative and symbolic. It is through this scaffolded mixture, jointly with self-consciousness, that the narrative and symbolic selves arise. The self, then, is not only extended but also hybrid. In this way, extraneural elements can perform functions similarly to internal elements, which turns the methodological game of monism and dualism into illusory expressions for the purposes of Cognitive Science.
Finally, we seek to discuss the epistemic implications of these assumptions for the dissolution of the concept of “self”, mental plasticity, and rational action.
Bilinguals’ Logical Inference-Making and Language Tagging
Asli Yurtsever, Sami Gulgoz, Tilbe Goksun
Inference-making is a crucial process in understanding and processing information daily. Psycholinguistic approaches and fuzzy trace theory in memory research suggest that people synthesize inputs into a whole and retain that whole (gist) rather than its specific parts (verbatim); false recognition of inferred information offers evidence for this. We conducted two studies to examine whether false memories are induced similarly when bilinguals are tested concurrently in two languages, and whether bilinguals remember the language in which the information was received (language tag). In Experiment 1, we tested thirty-four Turkish-English bilingual students with Turkish, English, and dual-language sentence groups that induce spatial inferences about objects. Inferences were falsely recognized in the Turkish and English conditions, but not in the dual-language condition. L2 proficiency, executive function (EF) abilities, and mental imagery did not predict these differences. In Experiment 2, we found that lower EF and higher L2 proficiency predicted more false recognition of inferred sentences. Tag identification was accurate overall and partially predicted by higher EF. We conclude that inferring in L2 induces false memories and that inferences are tagged with the language of encoding.
Reconciling the roles of angular gyrus and temporal pole in concreteness and semantic relatedness
Dominick DiMercurio II, Chaleece W. Sandberg
After brain injury, some people develop anomia and need word retrieval therapy, which relies on an inadequately understood semantic system. The juxtaposition of some findings suggests the temporal pole as an abstract, taxonomic semantic hub and the angular gyrus as a concrete, thematic semantic hub, yet these ideas conflict with the different representational frameworks (DRF) hypothesis, which posits opposing concrete-taxonomic versus abstract-thematic organizations. These conflicting ideas impede the refinement of theoretically based aphasia therapy. Evidence for the conflicting ideas comes from behavioral, brain imaging, and clinical research; however, some ideas are debated. In the present study, 33 neurologically intact participants judged word relatedness during brain imaging. Pairs of concrete or abstract words of varying relatedness were presented. Stimuli were assigned concreteness and relatedness scores based on semantic space and semantic network measures. Regression coefficients for concreteness, relatedness, and their interaction on the blood-oxygen-level-dependent signal in the proposed semantic hubs were analyzed. The angular gyrus showed a small effect of concreteness and a large effect of taxonomic relatedness compared to the temporal pole. Neither region alone showed a remarkable interaction; however, a secondary analysis showed a modest correlation of concreteness and relatedness when the regions were examined jointly. The findings weakly support concrete word preferences in the angular gyrus, abstract word preferences in the temporal pole, and a neural basis for the DRF hypothesis, but strongly refute the dual hub theory. Thus, a proposed reconciliation of the roles of the angular gyrus and temporal pole in concreteness and semantic relatedness is to reverse the dual hub theory, with a taxonomic preference in the angular gyrus and a thematic preference in the temporal pole.
Further research will confirm whether these findings generalize across demographic variables such as age, race, gender, handedness, language experience, and education history. The present paper comments on existing aphasia therapy and reaffirms the need for personalized medicine to match patients to therapies more optimally through individualized network profiles.
Beyond the implicit/explicit distinction: the pragmatics of accountability and plausible deniability
Francesca Bonalumi, Johannes Mahr, Pauline Marie, Nausicaa Pouscoulous
Implicit communication is considered to decisively affect speakers’ commitment: when the message conveyed is unreliable (false or otherwise not satisfied), implicit communication offers the speaker room for plausible deniability, and thus a way to eschew potential social repercussions. However, most approaches assume that (1) the implicit/explicit distinction reflects a dichotomy, and thus that (2) implicit communication always offers an opportunity to plausibly deny an intended content, provided that it was implicitly conveyed. To challenge these assumptions, we conducted an experiment in which we presented participants with scenarios describing social interactions involving a broken implied commitment. We manipulated the level of meaning (its degree of explicitness) and the strength (degree of manifestness) of the implied content, and whether the implied content was denied by the speaker. Our results indicate that participants blame the speaker more often when the implied content was strongly rather than weakly implied, and also that denial is successful only when the implied content is weak (but not when it is strong) and when it is an implicature (but not when it is an enrichment). This suggests that to strategically manipulate the extent of their accountability, speakers must not only deploy implicit formulations but also balance additional pragmatic factors: strategic communication is a complex phenomenon that goes well beyond the implicit/explicit distinction.
Mind over Body: Investigating Cognitive Control of Cycling Performance with Dual-Task Interference
Johanne Nedergaard, Mikkel Wallentin
In cognitive psychology, dual-task investigations have indicated that internal language plays a role in a variety of cognitive functions. This preregistered study investigated whether physical endurance, as exemplified by cycling performance, depends on internal language and internal visual experience. A sample of 50 physically active participants performed 12 cycling trials, each lasting one minute, during which they were required to cycle as fast as possible while remembering either a sequence of letters and numbers (verbal interference) or locations on a grid (visuospatial interference). We found that participants cycled a numerically shorter distance in the verbal interference condition than in the no-interference and visuospatial interference conditions, although only the difference from the no-interference condition was significant. Further, participants who reported that self-talk helps their sports performance were more negatively affected by verbal interference. Our study suggests that the inner voice plays a causal role in top-down control of sustained physical effort.
Dynamic model of interpretation of metaphoric language use in therapeutic storytelling
Melinda Papp, Lívia Ivaskó
Our presentation attaches particular importance to a specific subtype of storytelling: therapeutic tale-telling. In understanding tales as specific stories, the use of creative fairy-tale metaphors has proved to be of great importance; such metaphors might even trigger therapeutic processes in a listener. We will present analyses of examples of metaphoric language use in discussions based on fairy-tale therapy. The metaphorical term, as a linguistically formed meaning-forming element, is a unit of particularly important meaning for participants in tale-telling situations, one that requires the co-operation of several cognitive functions. During the processing of metaphorical forms of language use in therapeutic tale-telling, the individual makes cognitive efforts to find the interpretation that is optimally relevant to him or her. In this view, metaphorical language use can thus be interpreted as a specific tool serving the functions of therapeutic discourse based on tale-telling. A dynamic model will be presented that is suitable for interpreting the metaphorical elements that appear in therapeutic tale-telling situations. Metaphor processing appears in several relations in a given situation, where the participants have different roles and perspectives: a) in the interactions between the therapist and the client; b) in the relation between the client and the chosen story; c) in the connection between the client’s own interpretations and the changes due to therapeutic application; d) in the relations between group members in the case of therapeutic groups. During the processing of therapeutic stories, contextual information can influence how the meaning of the given metaphorical language elements develops; this can be described in a dynamic model that allows a broader or a narrower meaning to be found, depending on the relevance to the participants and the recipient’s attitudes.
The research was supported by the project EFOP-3.6.1-16- 2016-00008 co-financed by the EU.
The origin of human communication in pedagogy
Nima Mussavifard, Gergely Csibra
The different characterizations of human communication (often called ostensive communication) have led to radically diverging adaptive scenarios for explaining the evolutionary emergence of this unique system. We believe this is partly due to approaches that are committed to specific underlying cognitive mechanisms instead of specifying a genuinely ultimate, functional characterization of ostensive communication. Defining ostension with regard to its function of marking behavior as communicative allows for a straightforward assessment of evolutionary hypotheses. We argue that the pedagogical hypothesis provides the strongest explanation for the peculiarities of human communication. In particular, our distinctive capacity to mark an open-ended range of novel behaviors as communicative is best explained by the active transmission of cultural knowledge, which by definition goes beyond what natural selection endowed humans with. The pedagogical theory accounts not only for the ‘uniqueness’ of ostensive communication but also for the other limiting criteria that are applied in assessing the validity of evolutionary hypotheses about the origin of human communication. Crucially, teaching through demonstration allows for communication without conventional symbols – thus meeting the ‘immediate utility’ criterion. Finally, the pedagogical hypothesis is backed by empirical studies in developmental psychology, which point to the early emergence of cognitive mechanisms that facilitate the transmission of generic knowledge, mainly through action demonstrations, from parent to child. We argue that the predicate-argument structure that is foundational to the semantics of sentences, as well as to the pragmatics of speech acts, likely originated in these demonstrations. Whereas objects serve as arguments (exemplifying their kind), actions directed at them function as predicates expressing the properties and relations ascribable to the object kinds.
This structure creates a productive mechanism for the iconic communication of generic proposition-like content and has arguably been present, both in ontogeny and phylogeny, even before mastery of conventional, linguistic predication.
Different mechanisms drive the change in the size effect in symbolic and non-symbolic numbers
Petia Kojouharova, Attila Krajcsi
In numerical cognition, the size effect is observed when two numbers are compared (e.g., which one is larger?): participants make fewer errors and are faster for smaller than for larger numbers. One widespread explanation for the effect is that it is a consequence of an innate, continuous numerical representation (the Analog Number System, ANS) that works according to Weber’s Law. An alternative account (the Discrete Semantic System, DSS) states that while for nonsymbolic numbers the size effect stems from the ANS, in the case of symbolic numbers the effect is rooted in their frequency. According to the DSS, manipulating the frequency of the numbers will modify the size effect for symbolic but not for nonsymbolic numbers, because in the latter case it depends on the psychophysical properties of their representation. In our study, participants compared Indo-Arabic numbers presented with everyday, uniform, or reversed-everyday frequency. The size effect was modified as expected: its slope was smallest in the reversed-everyday frequency condition and largest in the everyday frequency condition. We repeated the experiment with nonsymbolic numbers (sets of dots). Unexpectedly, the size effect was modified similarly to that for Indo-Arabic numbers. To investigate whether the frequency manipulation impacts the same parameters for symbolic and nonsymbolic numbers, we applied the EZ-diffusion model. The model allows for the recovery of three unobserved variables: drift rate (quality of information), threshold (response conservativeness), and nondecision time. The results showed that, most importantly, the drift rate was modified only for symbolic but not for nonsymbolic numbers. This is in line with both the DSS explanation for symbolic numbers and the ANS account for nonsymbolic numbers. Changes were observed in the threshold and the nondecision time for both notations. These results further support the existence of different mechanisms behind symbolic and nonsymbolic numbers.
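The EZ-diffusion model used in this abstract has closed-form estimators that map each condition's accuracy, response-time variance, and mean response time onto drift rate, threshold, and nondecision time (Wagenmakers, van der Maas, & Grasman, 2007). A minimal sketch of those estimators follows; the function name and example values are illustrative, not taken from the study above, and accuracies of exactly 0, .5, or 1 are assumed to have been edge-corrected beforehand:

```python
import math

def ez_diffusion(pc, vrt, mrt, s=0.1):
    """Closed-form EZ-diffusion estimators (Wagenmakers et al., 2007).

    pc:  proportion correct (must not be exactly 0, .5, or 1)
    vrt: variance of correct response times, in s^2
    mrt: mean of correct response times, in s
    s:   scaling parameter (conventionally 0.1)
    Returns (drift rate v, threshold a, nondecision time ter).
    """
    l = math.log(pc / (1.0 - pc))                      # logit of accuracy
    x = l * (l * pc**2 - l * pc + pc - 0.5) / vrt
    v = math.copysign(1.0, pc - 0.5) * s * x**0.25     # drift rate (quality of information)
    a = s**2 * l / v                                   # threshold (response conservativeness)
    y = -v * a / s**2
    mdt = (a / (2.0 * v)) * (1.0 - math.exp(y)) / (1.0 + math.exp(y))  # mean decision time
    return v, a, mrt - mdt                             # nondecision time = mean RT - decision time

# Illustrative input values (cf. the worked example in Wagenmakers et al., 2007):
v, a, ter = ez_diffusion(pc=0.802, vrt=0.112, mrt=0.723)
# v ≈ 0.0999, a ≈ 0.140, ter ≈ 0.300
```

Comparing these three recovered parameters across frequency conditions is what lets the authors separate changes in information quality (drift rate) from changes in response caution and motor time.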
Investigating the role of multiple scripts during semantic access
Yoolim Kim
More than half of the Korean lexicon comprises words of Chinese origin. Sino-Korean words can be transcribed in both Hangul, the native Korean alphabet, and Hanja, the set of Chinese characters borrowed into Korean according to native pronunciation. Sino-Korean words are often compounds, each syllable standing for a Hanja character written in Hangul. Recent literature has shown native speakers’ ability to intuit whether a Korean word presented in Hangul is Sino-Korean or not. The dominance of Sino-Korean in the language, offset by the primary use of Hangul in written Korean, raises interesting questions regarding the extent to which the Sino-Korean stratum in the mental lexicon is represented independently of the presence or the knowledge of Hanja. Through a large-scale visual intra-modal lexical decision task, we set out to investigate empirically the status of Hanja within the Korean mental lexicon. We measured 64 adult native Korean speakers’ reaction times to Sino-Korean Hangul targets, preceded by one of three different Hangul primes. The prime-target combinations varied according to the strength of the semantic relationship (Directly Related, Indirectly Related, Unrelated), the degree of relatedness crucially dependent upon knowledge of the appropriate Hanja. We found that reaction times to targets in both the Directly and Indirectly Related conditions were not significantly faster than reaction times to targets in the Unrelated condition. Most striking was the variation in priming effect sizes within the Directly Related condition, for which we had predicted robust priming. These findings appear somewhat consistent with the hypothesis that the mental representation of Sino-Korean exists independently of Hanja, as well as with previous research showing speakers’ ability to discern Sino-Korean from pure Korean. Building on the latter, our findings suggest that the mental representation of Sino-Korean potentially abstracts away from the orthographic representation of Hanja, encoding only its semantic contributions.
Knowledge-‘wh’ and Intermediate Exhaustivity
Ahmad Jabbar
Cremers and Chemla’s (2016) recent psycholinguistic experiments suggest that intermediately exhaustive (IE) readings for third person knowledge-‘wh’ ascriptions exist. When considered in conjunction with Groenendijk and Stokhof’s earlier observation — that owing to the introspection requirement for knowledge, only strongly exhaustive readings for first person knowledge-‘wh’ ascriptions exist — we encounter a puzzle: while IE readings for third person ascriptions exist, they don’t for first person ones. As a solution, Theiler et al. propose that ‘know’ is ambiguous such that the interpretation for the external reading doesn’t require introspection, while the interpretation for the internal reading does. While parsimony is a theoretical virtue, and might be reason enough to resist ambiguity, in our paper we present further arguments and data against the ambiguity of ‘know’. To take one data point, note that an external observer, Mary, who ascribes knowledge to John under IE conditions, can felicitously utter the following: “I know which squares are blue and so does John”. Here, the elided VP in the second conjunct is supposed to have the same interpretation as the first one. Also note that Mary is self-ascribing knowledge here. Our other data concern agreement and disagreement. As our positive proposal, we present a novel semantics for ‘know’ that takes the interpretation of ‘know’ to be sensitive to an information state parameter provided by the context. In our semantics for ‘know’, we introduce a conditional such that introspection must be met if the information states of the subject and the speaker are identical. A more general upshot of our proposal is that sensitivity to an information state is a property of a broader class of epistemic vocabulary, not just epistemic modals.
The Relationship of Executive Function, Theory of Mind and Social Communication in preschool children with High Functioning Autism
Boglárka Bűdi, Anna Babarczy
Autism spectrum disorder (ASD) is a highly heterogeneous and complex neurodevelopmental disorder that affects at least 1 in 54 children (CDC, 2020). Individuals with ASD exhibit impaired social interaction and communication, and repetitive and restricted interests and behaviors. Studies have reported various associations between Executive Function (EF), Theory of Mind (ToM) and Social Communication (SC) in ASD. There is evidence of notable EF deficits across development in certain aspects such as inhibition and cognitive flexibility, and disruption in EF possibly contributes to impairments in ToM and broader social cognition. It is this relationship that the current study investigates. 5- to 6-year-old Hungarian-speaking children (n=15) with high functioning autism (HFA) completed three tasks measuring inhibition (non-verbal Stroop task), processing speed (Cancellation task), and ToM (non-verbal Intention Attribution task). Verbal and non-verbal social communication skills were measured using a parental questionnaire (an adaptation of the Assessment of Social and Communication Skills for Individuals with Autism Spectrum Disorder (ASCS-2), Quill, 2000). It was hypothesized that EF (inhibition and processing speed) and ToM (intention attribution) performance would correlate with each other and both would be significant predictors of social communication performance. As predicted, a correlation was found between the EF and ToM measures, between intention attribution and verbal and non-verbal social communication abilities, and between processing speed and non-verbal social communication performance. No significant relationship was found, however, between inhibition and either verbal or non-verbal social communication ability, or between processing speed and verbal social communication skills. The results suggest that for the HFA population, ToM ability rather than the measured components of EF is the dominant factor in predicting social communication skills. The findings highlight the importance of further investigation into the role of EF and ToM in Social Communication in ASD.
Modularity and Pragmatics: a reopened issue
Edoardo Vaccargiu
A shared assumption in cognitive pragmatics is that language comprehension requires Theory of Mind (ToM), i.e., the ability to attribute mental states and to use those attributions to make hypotheses about others’ verbal behaviour. Relevance theorists developed this assumption into a modular view of the mind, arguing that pragmatics is a sub-module of the ToM module. Recently, the modular view of pragmatics has been questioned, on both empirical and theoretical grounds. This work takes a stance on the issue by making two original claims: (1) that the present debate is vitiated by a “coarse formulation” of the modular hypothesis, and (2) that the current state of the art allows us to tackle the issue in a more empirically informed way. To defend these claims, I first spell out the relevance-theoretic account of the link between ToM and language comprehension by focusing on Sperber’s theoretical tripartition of interpretative strategies. Then, I discuss two different perspectives through which the tripartition has been read: a developmental one and a normative one. The developmental reading was encouraged by Wilson to defend the modular hypothesis in the light of data from developmental psychology. In that regard, my claim is that Wilson’s reading lacks empirical support. However, I argue that empirical data from the Natural Pedagogy framework could support a “finer formulation” of the modular hypothesis, which I try to sketch. The normative reading is upheld by Kissine to integrate clinical data from autistic individuals within an ‘anti-modular’ approach to pragmatics. I claim that Kissine’s reading fits well with recent data from experimental pragmatics. However, I argue that his stance against the modular hypothesis is in fact addressed to its “coarse formulation”. Finally, I offer some tentative suggestions for spelling out more precisely the empirical predictions of the modular hypothesis in its “finer formulation”.
Language-Dependent Recall in the Recall of Fictional Stories
Ezgi Bilgin, Tilbe Göksun, Zeynep Adıgüzel, Sami Gülgöz
Previous research has demonstrated that people experience a memory boost when they learn and share information in matching languages. In contrast, learning and retrieving in different languages decreases memory performance, an effect called the language-dependent recall effect. In the present study, we investigated this effect in terms of accuracy and false memory in the free recall of fictional stories. We also examined how language-dependent memories were related to self-reported language proficiency and vividness of visual imagery. A total of 137 native Turkish (L1) speakers who were second language learners of English (L2) were assigned to one of four groups according to the language of reading and recall: 1) L1 reading – L1 recall, 2) L2 reading – L2 recall, 3) L1 reading – L2 recall, 4) L2 reading – L1 recall. Our hypotheses were as follows: 1) recall would be more accurate when the reading and recall languages matched (the language-dependent recall effect), 2) accuracy would be higher when the match of languages occurred in the L1 compared to the L2, 3) vividness of recall would be higher when languages matched, and 4) differences in the accuracy scores between matching and non-matching language conditions would be related to L2 proficiency levels and vividness of visual imagery, with lower differences expected at higher proficiency and imagery levels. False memory analyses were exploratory. The results showed a language-dependent recall effect: accuracy was higher when participants read and recalled the stories in the same language than when they did so in different languages. False memory was higher when participants read the stories in L2 but recalled them in L1 compared to the other groups. Accuracy was not related to L2 proficiency or vividness of visual imagery. The findings demonstrate that the language-dependent recall effect is a robust phenomenon that occurs in both the L1 and the L2.
Understanding the role of hand gestures in creative idea generation and across domains of creativity
Gyulten Hyusein, Sarp Özdemir, İrem Türkmen, Melek Öyküm Yalçın, Tilbe Göksun
Research findings from the past decade have emphasised hand gestures’ beneficial role in creative thinking processes. For example, children who naturally gestured more told stories more creatively (Laurent et al., 2020). When encouraged to use their hands while trying to come up with alternative uses of objects, children also generated more ideas compared to children who were not prompted to gesture with their hands (Kirk & Lewis, 2017). Research conducted with adults showed that spontaneous gesture use was related to enhanced verbal improvisation (Lewis et al., 2015) and benefited both one’s own and one’s partner’s idea generation in a group brainstorming session (Liao & Wang, 2020). In the current study, we hypothesised that encouraging gesture use would facilitate idea generation on Guilford’s alternative uses task in a sample of young adults (N = 80). Half of the participants completed the task first in a gesture-spontaneous condition (no mention of gestures) and then in a gesture-encouraged condition, while the other half only completed the task in a gesture-encouraged condition. Preliminary analysis of fluency (number of ideas generated), flexibility (number of different categories in their ideas), and elaboration (number of details added) in their responses showed that the group that was initially exposed to the spontaneous gesture condition had significantly higher elaboration scores during their gesture-encouraged condition, both compared to their gesture-spontaneous condition and to the group that was exposed to the gesture-encouraged condition only. Furthermore, fluency of ideas in the gesture-spontaneous condition was significantly predicted by the scholarly domain of Kaufman’s Domains of Creativity; however, this was not the case in the gesture-encouraged conditions.
Further analysis in terms of originality of the generated ideas, gesture use frequency and types of gestures used should shed light on the within- and between-group differences in the current preliminary results.
Bi-directional form-meaning mappings: unifying competence with performance
Hedde Zeijlstra
Any theory of language must predict how every well-formed and no ill-formed expression in a particular language can be created. Since every well-formed expression in natural language is a form-meaning pair (a pairing of sound or sign with meaning, i.e., a signal-thought pair), any theory of grammar must thus predict what the possible constraints are on mappings between form and meaning. Mainstream generative grammar proposes a Y-model where numerations of lexical elements are structurally combined and form the input for phonological and semantic interpretation (cf. Chomsky 1995 et seq.). Those derivations that are fully interpretable by the sound and meaning systems of human cognition are grammatical; all others are not. As successful as this model has been empirically, it comes with two major shortcomings. First, it is incompatible with adequate models of language production and perception (where meanings are assigned a particular form and vice versa), a shortcoming often obscured by the competence-performance dichotomy. Second, it is theoretically redundant, as it requires a lexical selection operation whose presence is independent of the two levels of representation that feed interpretation. In this paper, I present a model of grammar that overcomes these shortcomings. In short, it takes every well-formed expression to be a form-meaning pair (P a phonological form; L a semantic form), such that the rules of grammar enable mapping P to L, and L to P. That is, only if a particular meaning can be mapped to a particular form and vice versa is a sentence well-formed; otherwise not.
Such mappings ultimately look similar to traditional Y-model structures, as they decompose one form all the way into lexical units that can be recomposed into the other form, but this model no longer suffers from the earlier-mentioned shortcomings and can thus unify theories of language competence with theories of language performance.
Do body movements influence lexical choices in a language production task?
Isabel Ganter, Anne Vogt, Rasha Abdel Rahman
The investigation of sensorimotor experiences in language comprehension is already advanced, but their role in language production is less well examined. In two studies we asked participants to verbally complete sentences, presented in an ascending or descending direction, with suitable nouns of their own choice. After running the sentence completion study, we obtained ratings of the spatial orientation of the produced words from an independent group of raters. The results showed that the produced nouns were not influenced by the presentation direction of the sentence. However, the produced nouns matched the spatial characteristics of the sentence context, such that the location of the situation predicted the spatial orientation of the produced nouns: the higher in space the situation described in the sentence took place, the higher up in the world the referents of the produced nouns were located. These results indicate an influence of the experiential domain of space on language production. In a follow-up study we examine whether up- and downward bodily movements influence the lexical choices of participants. To this end, a downward or upward head movement is executed while listening to audio presentations of the sentences to be completed. Furthermore, we explore whether participants’ susceptibility to this experimental manipulation depends on their interoceptive sensibility. Two facets of the Multidimensional Assessment of Interoceptive Awareness (MAIA) questionnaire are used to investigate interoceptive sensibility (noticing, attention regulation). We aim to replicate previous findings and explore whether body movements have an influence on the spatial characteristics of produced nouns. In addition, we investigate whether interoceptive sensibility mediates or moderates this relation. Data are currently being collected and will be available by the time of the conference.
Can correr más mean “run faster”? (Non-)monotonicity in Spanish verbal comparatives
Luis Miguel Toquero-Pérez
It is typically claimed that VPs (and NPs, e.g. (pseudo-)partitives) can only give rise to dimensions for measurement and comparison that track the part-whole structure of their domain. This is known as the Monotonicity Constraint (MC) (Schwarzschild 2006; Nakanishi 2007; Wellwood et al. 2012 a.o.). The MC allows comparison along a dimension like distance but prevents interpretations in terms of speed (1). However, new data from Peninsular Spanish challenge this claim: atelic manner of motion predicates allow the speed interpretation (2). The claim has been verified with an acceptability study of sentences in context. (1) Al runs more than Bill. [√Distance/ *Speed] (2) Al corre más que Bill durante una hora [*Distance/ √Speed] ‘Al runs more than Bill for an hour’. I argue that, rather than reformulating the MC, the data are best explained if the Spanish comparative morpheme más ‘more’ combines with an underspecified measure function and should not be restricted to only combine with quantity-denoting measure functions. I propose an account of where the null measure function can be quantity-denoting (i.e. monotonic) and where it cannot (non-monotonic): the choice of function is determined structurally by the syntactic position that the comparative occupies in the VP. In fact, I argue that there are three different syntactic positions that the comparative can occupy. While an argument position and a low adjunction site are loci for monotonic measure functions, a high adjunction site is not. The MC applies only within a particular syntactic domain in the VP, much like Schwarzschild (2006) showed that it is syntactically constrained in the NP. The results illuminate our understanding of the set of dimensions for measurement encoded in natural language predicates, and suggest that there is a role for syntax in determining the type of measurement.
Communication and action predictability: two interacting strategies for successful cooperation
Mateusz Wozniak, Guenther Knoblich
Making one’s actions predictable and communicating what one intends to do are two basic methods through which people can facilitate coordination with each other. However, how these two methods interact has been less investigated. Here, across three experiments, we investigated how people coordinate their joint decision making if they are not allowed to communicate at all (Experiment 1), allowed to communicate by sending a single 1-bit signal (Experiment 2), or allowed to communicate fully (speak to each other; Experiment 3). We found that if participants were not allowed to communicate, they attempted to coordinate with each other by maximizing the predictability of their behaviour. On the other hand, if they were allowed to communicate fully and verbally, their behaviour stopped being predictable, as they could disambiguate any unpredictability of their actions using language. Finally, when they were given the possibility to use 1-bit communication, they both relied on behavioural predictability and developed simple communication systems using the limited reciprocal one-bit communication channel to succeed in the task. Overall, our study demonstrates that people adaptively use both behavioural predictability and communication to facilitate successful coordination with each other.
Epistemically Trivial Disjunctions
Murali Ramachandran
A disjunction [A v B] is (epistemically) trivial for X if X’s grounds for holding [A] are her sole grounds for holding [A v B]. So, e.g., if X knows that Annie went to the party (A), but has no grounds for thinking that Bill went to the party (B), X will (rightly) take [A v B] to be true, but it will be epistemically trivial for X. If [A v B] is trivial for X, X cannot legitimately use disjunctive syllogism to support (or acquire knowledge of) either of its disjuncts. For the same reason, if [A v B] is trivial for X, X cannot legitimately use [~A > B] in a modus ponens or modus tollens inference to support (or acquire knowledge of) [B] or [A], respectively. Epistemically trivial disjunctions are little discussed, but they have significant ramifications for various philosophical issues. In this talk, I highlight two:
(1) They usher in a novel objection against the material conditional reading of indicative conditionals, whereby [A v B] = [~A > B]. Previous objections appeal simply to disparate intuitive verdicts on truth values. The novel objection I shall pursue is that the very explanation of why one should hold a trivial disjunction explains why one should deny the corresponding indicative conditional.
(2) The ‘prediction’ paradoxes (including e.g. the surprise examination) supposedly generate a paradoxical regress from the supposition that a group of individuals X are apprised of:
• an existence premise, (E), which is equivalent to an exclusive disjunction, [D1 v … v Dn], and
• a ‘surprise’ premise, (S), stating that X cannot know which disjunct is true within certain parameters.
I shall argue that no regress arises when the knowledge of (E) is trivial for X.
You can’t see me now! - Testing the role of gaze in reputation management at an early age
Réka Schvajda, Ildikó Király
As humans we are constantly watched by others and judged on the basis of our behaviour. Children tend to be more generous if someone observes them (Leimgruber et al., 2012), and according to the watching eyes effect the eyes play a special role in this. Children were more generous when they were exposed to a pair of open eyes rather than a neutral picture (Kelsey et al., 2018). When children were exposed to a different cue (a mouth), they were less generous, but this difference was not significant. This result suggests that the mouth cue may have ambiguously indicated an observer’s potential presence. Our goal was to replicate the study of Kelsey and colleagues (2018) online and to test whether the mouth cue introduced ambiguity about a potential agent being present, or whether it is rather interpreted as a non-watching agent; for this reason we tested children with the presentation of a pair of closed eyes. After the training phase children participated in a resource-allocation task while being exposed to a pair of open or closed eyes or to a neutral picture (flowers). We wanted to explore how children behave in the presence of the different cues. If the mere presence of another person triggers prosocial behaviour, children should distribute resources in the presence of closed and open eyes alike. However, if open eyes, and being seen, play a special role in situations where others could evaluate us, children will behave differently in the presence of the two types of eyes.
Four ways to metaphor comprehension: towards a bidimensional account of metaphor
Stefana Garello, Marco Carapezza
In this paper we will discuss the role of literal meaning and mental imagery in metaphor comprehension, showing their link and the problematic nature of these notions in pragmatics (Wilson & Carston 2019). We will try to overcome these problems by bringing into dialogue the typology of metaphors offered by Carston (2010, 2018), based on the parameter of literal meaning, and the typology offered by Green (2017), based on the parameter of mental imagery. Carston (2018) recognizes the existence of two kinds of metaphors: (1) local metaphors such as “Giulio is a professor”, in which a single lexical item - PROFESSOR - is modulated pragmatically; (2) metaphors such as “The yellow fog that rubs its back upon the window-panes”, in which it is necessary to resort to the literal meaning of the sentence, metarepresenting it and deriving the metaphorical meaning as implicatures. In this kind of metaphor, mental imagery can be activated, playing a role in the derivation of metaphorical meaning. At the same time, Green (2017) distinguishes between (1) local metaphors such as “Giulio is a professor”, which require local pragmatic modulation and do not activate mental imagery (image-permitting metaphors), and (2) novel metaphors such as “the snow is a winter closet”, understood through pragmatic modulation but in which the activation of mental imagery is necessary for metaphorical comprehension (image-demanding metaphors). We will analyse the potential and limits of these two typologies of metaphor comprehension and, combining the two accounts, we will recognize four kinds of metaphor and four ways to metaphor comprehension (instead of two). Finally, we will organize our proposal into a bidimensional account of metaphor, covering the full range of cases.
Language in the Head: Thinking About and Thinking Through Language
Wade Munroe
People frequently report that their thought has, at times, a vocal character (Heavey and Hurlburt 2008). Conscious thought commonly appears to be accompanied by silently ‘talking’ to oneself in inner speech. I join Alderson-Day and Fernyhough in (roughly) defining inner speech as, “the subjective experience of language in the absence of overt…articulation” (2015, 1). In this paper, I focus on the epistemic role played by inner speech in (i) thinking about an expression, e.g., thinking about the English number word, “five,” and (ii) thinking through an expression to its semantic content, e.g., internally using, “five,” in inner speech as a means of thinking about the number five itself. As I argue, inner speech can be used in both thinking about a linguistic expression, E, and thinking through E, thus thinking about E’s semantic content. I focus on inner speech in mathematical cognition in virtue of the wealth of literature on how we reason with numbers and how various forms of aphasia affect mathematical cognition. I argue that to think about a number or quantity, N, is just to token an internal representation of an expression in a semiotic system that refers to N. In other words, to think about a number is just to internally articulate its name. If I am right, it becomes challenging to distinguish when one is thinking about, say, the word, “five,” from when one is thinking through the expression, thus thinking about the number five itself. I offer a tentative means of drawing the thinking about/thinking through distinction by analogy to how one and the same overt utterance of an expression can be used to refer to the expression itself or used to refer to the expression’s semantic content.
Verb-Argument Structure and Semantics in Contextual Word Embeddings
Tianhu Chen, Timothy J. O'Donnell, Joshua Hartshorne
There is a long-running debate about the nature of syntax: is it a standalone system, specific to language, or is it a reflection of conceptual structure itself? A central test case is verb argument structure. Specifically, verbs vary in the kinds of sentences that they can appear in (A hit/broke/saw B; A hit/*broke/*saw at B). It has been argued that verb semantics determine which sentence structures verbs can appear in. Indeed, when verbs are grouped according to this structure (“verb classes”), it is often possible to identify semantic similarities between verbs within a single class. While suggestive, this evidence is limited. In particular, we do not know whether semantics is sufficient or determinative for class membership. Moreover, only a handful of the verb classes in English alone have been systematically investigated. We take a different approach, utilizing large contextual word embedding models such as BERT. These models, trained on large-scale corpora to predict (sub)word co-occurrence, have been shown to capture elements of both syntax and word meaning. First, we show that verb classes can be successfully classified from BERT’s intermediate representations, outperforming other embedding methods. Accuracy is highest at the middle layers, consistent with previous literature. We then use a linear probe to disentangle verb classes, and find that they span a very low-dimensional (four-dimensional) subspace. This enables our critical analysis. Following previous work, we construct additional syntactic and word-sense probes to analyze this “verb argument” subspace. We find that verbs projected using our verb-argument probe correlate more strongly with verbs projected from the word-sense probe than with those projected from the syntax probe, suggesting the defining features of verb classes are semantic, not syntactic. Implications for theories of language, semantics, and concepts are discussed.
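The linear-probe step can be illustrated with a toy sketch. The arrays below are random stand-ins for contextual-embedding activations, and the three “verb classes” are hypothetical clusters rather than the study’s data; the probe itself is an ordinary least-squares linear map from embeddings to class scores:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for contextual embeddings: three "verb classes",
# each a Gaussian cluster in a 768-dimensional space (BERT-base hidden size).
n_classes, n_per_class, dim = 3, 50, 768
centers = rng.normal(size=(n_classes, dim))
X = np.vstack([c + 0.1 * rng.normal(size=(n_per_class, dim)) for c in centers])
y = np.repeat(np.arange(n_classes), n_per_class)

# Linear probe: least-squares weights W mapping embeddings to one-hot class
# scores; argmax over the scores gives the predicted class.
Y = np.eye(n_classes)[y]                    # one-hot labels
W, *_ = np.linalg.lstsq(X, Y, rcond=None)   # (dim, n_classes) probe weights
pred = (X @ W).argmax(axis=1)
accuracy = (pred == y).mean()

# The probe's weight matrix spans at most n_classes dimensions, so class
# structure recoverable this way necessarily lives in a low-dimensional
# subspace of the embedding space.
rank = np.linalg.matrix_rank(W)
```

On a linearly separable training set like this one, such a probe classifies perfectly; probing real BERT activations layer by layer follows the same recipe, with learned embeddings in place of the synthetic ones.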
Effects of L2 on causal structures in L1 frog story narratives in children
Ayşe Doğan, Büşra Sena Özcan, Ceyda Özkan, Emre Lale, Aslı Aktan-Erciyes
The present study investigates the effect of learning a second language (L2-English) with different causal constructions from the first language (L1-Turkish) on the causal language produced during L1 narrative construction. While causal language input can support causal reasoning, there are crosslinguistic differences in how causal verbs are expressed. Turkish uses both morphological causatives (changing a verb by adding suffixes to signal causality, e.g., yap ‘do’, yap-tır ‘make someone do something’) and lexical causatives (a verb that encodes cause and effect within itself, e.g., yakala ‘catch’). English, on the other hand, only uses lexical ones. Both languages also use causal conjunctions (e.g., çünkü ‘because’). In the present study, 5- and 7-year-old monolingual (L1-Turkish) and bilingual (L1-Turkish; L2-English) high-SES children (N = 111; 60 monolingual, 51 bilingual) produced narratives in L1 elicited with the picture book ‘Frog, Where Are You?’. For the narratives, we coded the following causal structures: causal links, and morphological and lexical causative verbs. Given the difference in causal structure types between Turkish and English, we hypothesized that monolingual children would produce more causal structures than bilingual children. Our results showed that 5-year-old monolinguals used causal links more than 5-year-old bilinguals (p’s < .05) and 7-year-old monolinguals used lexical causatives more than 7-year-old bilinguals (p’s < .05). These findings suggest that monolingual children, who are more exposed to morphological causatives, may express causality better, as reflected in causal links and lexical causatives, because L1-Turkish provides transparent cues for causality (e.g., causative suffixes). Unexpectedly, there were no differences in morphological causatives between the two groups. This might be because the frog story may not elicit morphological causatives to a great extent. Future studies should address differences in experimental tasks together with narrative constructions.
Friday (21st of May)
Polysemy and ambiguity in indirect evidential use around children
Emily Sadlier-Brown, Carla Hudson Kam
Indirect evidentials (IEs) indicate that a speaker’s evidence for what they are saying is inferred or second-hand. IEs are acquired late (Fitneva, 2018). It is not clear whether this is due solely to IEs’ conceptual complexity or to the input: IEs often have multiple pragmatic functions (e.g., mirativity/new-to-speaker information) (DeLancey, 2001) and children may not hear all of them or get information consistent with all of them. Here, we characterize children’s input for the English IE ‘apparently’, for 1) speaker’s evidence source; 2) whether uses are accompanied by overt cues to the speaker’s evidence source; and 3) whether other potential functions (irony, new information, surprisal) are present. Methods: All tokens of ‘apparently’ (n=24) from the Providence corpus (Demuth et al., 2006, chosen because it has video, enabling cue-availability assessment) were coded by the 1st author for: evidence type, presence of cues to speaker evidence, irony/humour, and new-to-speaker information. Surprise was assessed by 110 participants, who rated the 24 tokens (plus 24 tokens with ‘actually’, a word not associated with surprise) for how surprised the speaker seemed, using a scale where 1 = ‘extremely unsurprised’ and 7 = ‘extremely surprised’. Results: Evidence type: 20/24 tokens indicated indirect evidence, 1/24 direct evidence, and 3/24 were unclear. Cues to evidence source: 14/20 IE uses were accompanied by a visible/audible indication of the evidence source. Irony/humour: 12/24 uses. New information: 14/24 uses. Mean surprise rating: ‘apparently’ = 4.36/7, ‘actually’ = 4.09/7 (ns). Conclusion: In this corpus, ‘apparently’ is mostly consistent with indirect evidence, and is frequently accompanied by evidence that could be used to infer this meaning. However, many tokens are also consistent with irony and with new information, though they are not generally associated with surprise.
Thus, ‘apparently’ as produced around children is often ambiguous, which, if common to other IEs, could partly explain observed delays in their acquisition.
The role of odour in meaning
Laura Speed, Johan Lundström, Asifa Majid
According to some accounts, meaning is grounded in sensory simulation: the perceptual systems of the brain are recruited for meaning making. Yet little attention has been directed to the proximal sense of olfaction. To directly test for a role of olfaction in meaning, we compared performance on a set of pre-registered language tasks in 57 participants with no sense of smell (anosmics) and 56 matched controls with an intact sense of smell. Participants completed a lexical decision task with odour (e.g., lavender), taste (e.g., basil), and vision-related nouns (e.g., brick). We found no difference in response time or accuracy between anosmics and controls. Next, participants completed a semantic similarity judgment task with odour-, taste-, and vision-related words. Participants had to judge which of two words was more similar in meaning to a target word (e.g., is patchouli or vinegar more similar to menthol?). Anosmics were overall slower and more accurate in the task, but this did not differ across word type. The lexical decision and semantic similarity judgment results suggest olfactory representations do not play a crucial role in the meaning of odour-related language. However, in an implicit memory task, anosmics remembered more odour-related nouns than control participants. Anosmics also rated odour- and taste-related nouns as more positively valenced on a seven-point valence scale than control participants did. Together, these results suggest that simulation of olfactory representations does not play a crucial role in the meaning of odour-related language, but that odour-related language is more salient and emotional to anosmic participants, which could reflect the emotional experience of losing the sense of smell. Since no detriment to olfactory language was found in anosmics, this suggests odour-related language is not grounded in odour perception. On the contrary, some aspects of odour-related language are enhanced.
The Role of Source Monitoring and Evidential Markers in Turkish and British Children’s False-Belief Understanding: A cross-linguistic study
Silke Brandt, Birsu Kandemirci, Anna Theakston, Ditte Boeg Thomsen
In this study, we investigate the linguistic and non-linguistic factors that might support children’s false-belief understanding (FBU) and how these compare in two structurally different languages, Turkish and English. We look at children’s source monitoring abilities (SMA), use of evidential markers in Turkish, where they are obligatory, and use of non-obligatory, but related constructions in English (modal and mental-state verbs). No study to date has investigated the relationship between FBU, SMA and evidentials cross-linguistically. We compared Turkish-speaking (N = 50, mean age = 50.1 months) and English-speaking (N = 50, mean age = 50.6 months) 42- to 59-month-olds’ performance in three false-belief tasks. As factors that might impact children’s performance, we measured their SMA, using the Mode of Knowledge Access Task (Gopnik & Graf, 1988), evidential-marker competency, using the Direct Experience and Changed State of Objects tasks (Ögel, 2007; Aksu-Koç et al., 2009), receptive vocabulary, short-term memory, as well as age and gender. Age, receptive vocabulary, and short-term memory significantly correlated with FBU in both language groups (all ps < .05). Additionally, SMA significantly correlated with FBU for Turkish-speaking children, rs(48) = .53, p < .001. Turkish-speaking and English-speaking children’s performances were also analysed together, using a generalised linear mixed effects model and following the principle of backwards selection. The final model suggests that the language children speak, their short-term memory, and SMA significantly predicted their FBU. In line with Lucas et al. (2013), acquiring Turkish put children in an advantageous position in terms of their FBU. These cross-linguistic and language-specific results will be discussed in detail and the implications of these findings will be outlined.
We will discuss whether this advantage was due to the mastery of evidential markers in Turkish, and to what extent a comparison of two languages with different grammatical structures might be informative.
Neural and Linguistic Considerations for Assessing Moral Intuitions Using Text-based Stimuli
Brandon L. Bretl
Assessing rapid moral intuition processing using text-based stimuli creates unique challenges and opportunities. This review takes a focused look at neural and linguistic considerations for assessing moral intuitions using text-based stimuli. Relevant time-courses and neural correlates of moral salience, emotional processing, moral emotions (shame and guilt), semantic processing, implicit stereotype activation (e.g., gender, age, and race stereotypes), and functional brain network development (the default mode network and salience network) are considered insofar as they relate to unique considerations for text-based instruments. What emerge are not only key considerations for researchers assessing moral intuitions using text-based stimuli but also considerations for the study of moral psychology more broadly, especially in developmental and educational contexts.
Beyond the Map: Nontopographic and associative neural signaling in olfaction and the implications for its cognitive handles
Ekin Erkan
In olfaction, “smell-images” do not have discrete spatial properties like visual objects do, and this must prompt us to re-theorize perception, if perception is to be more than a theory of vision. Contra vision (and audition), olfaction does not compute odors in a topographic manner—the brain recognizes smells via pattern recognition, not combinatorial coding and topographic mapping. What gives smell meaning is bound to the sensory process in which its perception partakes, and such processes define olfactory sensations as targets of experience. Smells are an interpretation of physical information in the context of continuous operations, physiological and cognitive—thus the same stimulus can have various interpretations and be processed into different odor-images. Looking at case studies of wine sommeliers and perfumists, we demonstrate how an odor image is specified by the process in which it partakes, investigating what this implies for representational theories of perception while motivating a theory of perception as a measuring device. In everyday perception, smells act like “experiential tags” for the brain to use as a background-and-pointer. Contra representationalism, we ask what makes an odor image an accurate representation of its source. Our answer links to the mechanisms that create an odor image, physiological and psychological. Whether the categorization of information from the physical stimulus into a perceptual schema is accurate depends on the processes it serves. Many odorants, like sulfurol, are ambiguous and, thus, allow for multiple semantic attributions. The perceptual interpretation of a stimulus like sulfurol into a semantic object can involve various conceptualizations—thus, representational "accuracy" depends on the affordance of a stimulus, as well as the conditions of its interpretation.
Veridicality in perceptual representation is misleading if connected to the idea of a designated universal percept—odorants can have multiple interpretations and their perception is based on an individual’s trained receptor repertoire.
Evidence for a general signature of face familiarity
Alexia Dalski, Géza Gergely Ambrus, Gyula Kovács
An open question in person perception research is the existence of neural signatures that reliably flag unknown and/or known faces, irrespective of the mode of acquisition or depth of encoding. Here, we explored the neural signatures of face familiarity using cross-experiment decoding of event-related potentials (EEG ERPs). Data came from three experiments from our laboratory that utilized different familiarization conditions (Perceptual, n = 42; Media, n = 24; and Personal, n = 23). Using a hitherto less explored multivariate cross-classification (MVCC) method, we iteratively trained and tested on data from each experiment, combining elements of cross-modal, cross-participant, and leave-one-participant-out approaches. Time-resolved MVCC and temporal generalization analyses were carried out to investigate the temporal organization of information-processing stages. Predominantly over posterior and central regions of the right hemisphere, we observed significant cross-experiment familiarity decoding involving all three experiments, most prominently for the Media-to-Personal and Perceptual-to-Personal pairs, overlapping in the 270–630 ms time window. This effect is similar in temporal and spatial characteristics to electrophysiological components reported recently, suggesting that the same effect was observed in these previous studies. Cross-experiment decodability makes this component a strong candidate for a general neural signature of face familiarity. Furthermore, the sustained pattern of temporal generalization suggests that it reflects a single automatic processing cascade that is maintained over time.
Patterns of semantic variation differ across body parts: Evidence from the Japonic languages
John Huisman, Roeland van Hout, Asifa Majid
Human conceptual structure is grounded in the body, so it can be surprising to find that the parts of the body singled out for naming vary across languages. Previous research suggests that although diverse languages differ in their body part lexicon, closely related languages show less variability. However, this conclusion may be premature as it is only based on a single study of the Germanic languages. The current study investigates the body part lexicon across the Japonic languages through both a body part naming task (Study I) and a body colouring-in task (Study II). Data from six Japonic languages show that body part terminology can vary within a language family in substantive ways, and that this is reflected in semantics too. Novel application of cluster analysis to the naming data revealed different structuring principles for parts of the face and parts of the body: there was a relatively flat hierarchical structure for parts of the face, whereas parts of the body were organised with deeper hierarchical structure. Lexical similarity did not differ between parts of the face and body, despite the earlier suggestion that face parts may be more stable. Within parts of the face, we see the highest similarity for bounded parts (i.e., mouth, ear and eye). The body colouring data confirmed that parts of the face were for the most part highly differentiated with little to no overlap, whereas there was clear evidence of more hierarchical relationships between subparts of the body. In addition, rather than finding clear differences between the face and the body, the colouring data again revealed that bounded parts show more stability than unbounded parts. Our study demonstrates that there might not be a single universal conceptualisation of the body as is often assumed, and that in-depth, multi-method explorations of under-studied languages are urgently required.
Consistent verbal labels promote odor category learning
Norbert Vanek, Márton Sóskuthy, Asifa Majid
Recent research shows that speakers of most languages find smells difficult to abstract and name. Can verbal labels enhance the human capacity to learn smell categories? Few studies have examined how verbal labeling might affect non-visual cognitive processes, and thus far very little is known about word-assisted odor category learning. To address these gaps, we tested whether different types of training change learning gains in odor categorization. After four intensive days of training to categorize odors that were co-presented with arbitrary verbal labels, people who learned odor categories with more consistent odor-label pairs were significantly more accurate than people with the same perceptual experience but less consistent odor-label pairs. Both groups' accuracy scores improved, but the learning curves differed. Consistent linguistic cuing supported an increase in correct responses from the third day of training, whereas inconsistent linguistic cuing delayed approximation to the target odor categories until after the fourth day. These results show that associations formed between odors and novel verbal labels facilitate the formation of odor categories. We interpret this as showing a causal link between language and olfactory perceptual processing in supporting categorization.
On words and meanings: what can vocabulary impairment in dementia tell us about semantics and cognition
Olga Ivanova, Juan José García Meilán
Alzheimer’s disease (AD), as the most common form of dementia, is clinically defined by a vocabulary deficit, typically referred to as anomic aphasia. Anomia leads to impairment in naming (speakers cannot name an object they see, or one they are asked to name from a definition), and it remains controversial whether anomia is due to disruptions in lexical access or in semantic access. Current research suggests that the etiology of naming problems makes the qualitative difference between healthy aging and dementia: while in the former problems with word retrieval are due to difficulties in lexical access, conceptual disruptions would explain vocabulary impairment in the latter. We tested 212 older speakers (non-pathological aging, n=126; MCI, n=48; AD, n=38), matched for age and educational level, on a semantic verbal fluency task (Isaac’s Set-Test) covering four semantic categories: colors, animals, fruit and cities. The minute allotted to each semantic category was divided into four intervals of 15 seconds. The results showed that speakers with AD indeed show disruption in semantic networks, since their performance on semantic verbal fluency tests drops significantly after the first 15 seconds, coinciding with the shift from automatic cognitive processes (0-15 seconds) to more extensive reliance on executive functions (15-60 seconds). In view of these results, we offer a theoretical consideration of how semantic knowledge disruption in dementia mirrors impairment in different cognitive functions (episodic, working and semantic memory, and executive functions) and reflects the organization of semantic networks in the human mind. We discuss the implications of word and concept typology disruption in AD for understanding semantic network construction and functioning, and for characterizing its evolution during the lifespan.
Understanding the neural mechanisms underlying the learning of visuo-manual gestures
Sahal Alotaibi, Georg Meyer, Sophie Wuerger
It is conventionally thought that speech processing and articulation are mainly supported by Broca's area in the left inferior frontal gyrus. Recent neuroimaging studies confirm this role with newly learnt speech sounds. While vocal articulation is a unique feature of spoken languages, signed languages use facial expressions together with hand shapes and movements. In the present study, we aim to investigate the role of the left inferior frontal gyrus in visuo-manual gestures in learners of British Sign Language (BSL). Functional magnetic resonance imaging (fMRI) brain images were taken from twenty healthy native English-speaking volunteers at two time points: before and after the intervention. During the training course, participants were taught to sign basic sentences in BSL over three consecutive days (one hour per day). Behavioural performance was assessed every day on two variables: signing performance (teacher evaluation) and sign discrimination (automated test). The overall behavioural results show significant improvements in both variables after training. The fMRI results show a significant increase in the blood-oxygen-level-dependent (BOLD) signal in the left inferior frontal gyrus. These results reveal a high degree of similarity in the neural processing underlying signed and spoken languages. Broca’s area seems to be involved not only in verbal articulation but also in the processing of visual signs.
Belief is ambiguous
Jazlyn T. Cartaya
Philosophers have had a lot to say about what kind of attitude corresponds to our basic concept of belief. Some contemporary epistemologists think that the attitude that corresponds to our basic concept of belief is a strong doxastic attitude. A strong doxastic attitude, according to them, is a mental state akin to sureness or certainty. This mental state is often called outright or full belief. Other philosophers argue that belief is a much weaker doxastic attitude, or a mental state akin to thinking or supposing. Hawthorne et al. (2015), for example, present evidence against the assumption that the role of ‘believe’ is to express the sort of state epistemologists intend by outright or full belief. They accept that there may be a theoretical notion of outright or full belief that is strong, but this does not correspond to our basic concept of belief. This debate rests on the idea that there is just one basic concept of belief. I argue, instead, that there are two basic concepts of belief, one that corresponds to a weak doxastic attitude and one that corresponds to a stronger doxastic attitude. Additionally, I wish to suggest that when belief is stronger it corresponds to an entirely different mental state than sureness or certainty. When belief is stronger, I argue that it corresponds to a mental state akin to trust or faith. Thus, my view is that belief is ambiguous.
Language Influences Cognition, But Not Content
Andrew Knoll
I argue that recent neo-Whorfian claims that human natural language (HNL) alters the contents available to thought are incorrect. Such claims come in three strengths. The weakest is that natural language allows composition of thought contents not otherwise composable in a non-HNL Language of Thought. I argue that these weaker theories fail by their own lights. A stronger claim has it that HNL alters the typicality and similarity relationships amongst otherwise HNL-independent conceptual contents. Strongest of all is the claim that HNL imposes a categorical structure on thought contents, and thus creates contents that would not otherwise be available. I argue that the empirical evidence for these stronger claims is better explained without supposing that HNL alters the contents of thought. Instead, HNL alters cognition by directing attention and making available syntactic structures not otherwise available. Supposing that it alters thought content is not only unnecessary but also undermines the explanatory benefits of attributing contents to thoughts in the first place.
Humans at birth appreciate the communicative power of language
Bálint Forgács, Tibor Tauzin, György Gergely, Judit Gervain
Newborns show greater brain activation to pseudowords with a repetition (ABB: “mufefe”) as opposed to no repetition (ABC: “selagu”), which suggests a sensitivity for linguistic structure from birth. We investigated, using fNIRS, whether newborns’ brains are further activated if linguistic stimuli are embedded in a structured communication of two voices. 1- to 3-day-olds were presented with pseudowords containing a repetition pattern (ABB) auditorily in three conditions: 1) different pseudoword tokens (ABB-CDD) produced by a female and a male voice taking turns, suggesting information transmission; 2) pairs of pseudowords repeated by a female and a male voice identically (ABB-ABB), and thus not allowing transmission of information; 3) different pseudowords (ABB-CDD) as in condition 1, but produced by a single speaker, i.e. lacking turn-taking. Of the three conditions, only condition 1 satisfies the two criteria for communication: the presence of multiple social agents and information transfer. Fronto-temporal areas of newborns responded bilaterally with greater activation to the first, communicative condition than to either of the other two conditions. Our results suggest that newborns are sensitive to the communicative function of language, that is, that it can transmit information – and not merely to its physical properties or its abstract structure. The findings further demonstrate that newborns register exchanges between third party agents, outside the dyadic interactions between themselves and caretakers.
Meaning constrains grammar: Experimental evidence from eight languages
Ben Ambridge
Many approaches to language assume that at least one layer of grammatical (syntactic) representation is more or less impervious to meaning; that “syntactic representations do not contain semantic information” (Branigan & Pickering, 2017, p. 8). In this presentation, I summarize evidence in favour of the opposite view: that many, perhaps most, lexical verb restrictions that are often treated as arbitrary in fact have semantic motivations. This evidence comes from experimental studies of English, Balinese, Hebrew, Indonesian, Mandarin, Hindi, Japanese, and K’iche’ Mayan. In each case, native-speaking adult participants rated verbs for semantic properties sometimes argued to be characteristic of one or both constructions in each pair (or “alternation”). For example:
--Prepositional/double-object dative: A acts on B, causing it to go (either literally or metaphorically) to C; A has B and then causes it to enter into the possession of C.
--Figure/ground locative: The word describes the particular manner/way in which the action occurs; The word describes the end-state of an action.
--Active/passive transitive: A causes (or is responsible for) some effect/change involving B; B changes state or circumstances.
--Transitive/periphrastic causative: “B's ACTION/EVENT/CHANGE and A's causing of it are two separate events that could happen at different times and/or at different points in space” vs. “B's ACTION/EVENT/CHANGE and A's causing of it merge into a single event that happens at a single time and a single point in space”.
Highly correlated properties were then combined into composite factors using principal components analysis. Finally, the resulting composite predictors were correlated with adult speakers’ judgments of the acceptability of each verb in each construction of the pair. For each alternation, a different set of semantic properties predicted verbs’ relative acceptability in each construction of the pair.
Indexical Information in Perceptual Representation
Catherine Hochman
There is a robust philosophical literature on ‘I’-thoughts. Put loosely, ‘I’-thoughts are thoughts that a thinker has about herself, such as "I am hungry" and "my pants are on fire". Described at the level of conceptual representation, they are said to contain a constituent representation of the self. It is theorized that this self representation functions analogously to ‘I’ in the English language, insofar as both representations are indexicals that refer to the individual who uses them. Moreover, the representation is said to have unique cognitive significance: its presence or absence partially determines the behavioral profile of a complex conceptual representation. While it has received considerable attention in the philosophical literature regarding conceptual thought (and language), the unique role of ‘I’ remains under-explored with respect to other representational forms. The first question that I will address is whether complex perceptual representations include information that functions analogously to how the representation ‘I’ functions in complex conceptual representations. After answering this question in the affirmative, I will investigate how this information is stored. More specifically, I will ask whether indexical information about the individual in perceptual representation is architecturally encoded or explicitly represented, and then argue in favor of the former possibility. If not intractable, my two questions are certainly abstract. So, rather than tackling them head on, I will ground my discussion in an analysis of spatial indexical information. By examining and modeling how one stores information about her spatial location, I will bootstrap my way up to an analysis of indexical information about the individual.
Recursive Processing in Language and Beyond
Edward Ruoyang Shi, Qing Zhang
The issues of the domain-general functions of the hippocampus and basal ganglia have been addressed from both clinical and evolutionary perspectives (Shi & Zhang 2020; Zhang & Shi 2021). In line with the insights of these studies, we suggest that the functions of the hippocampus and basal ganglia in categorical perception and recursive processing in cross-modality systems pave the way for the recursion observed in language. Specifically, we argue that the units to which recursion applies, namely lexical items, are discrete, which in turn we attribute to categorical perception (CP). CP is a widespread phenomenon detected across species, and seems to be a combination of nature and nurture (Zhang, Lei & Gong, submitted). We emphasize that the nurture part is realized by statistical learning during the critical period across species, which plays a crucial role in both development and evolution. Concerning recursion itself, by reviewing comparative evidence in auditory and visual perception (Gentner et al., 2006; Van Heijningen et al., 2009; Rey et al., 2012; Abe & Watanabe, 2011), as well as motor production in nonhuman animals (Johnson-Pynn et al., 1999; Herman et al., 1984), we argue that in cross-modality sensorimotor systems, recursion has appeared without the existence of language. The hippocampus plays a key role in statistical learning and lexical learning (Covington et al., 2018; Ullman, 2004), whereas the basal ganglia underlie implicit recursive processing and learning (Ullman, 2004; Progovac et al., 2018). Studies have also suggested a cooperative and competitive relation between statistical and implicit learning, supported by the hippocampus and basal ganglia respectively (Batterink et al., 2019). Hence, from both theoretical and empirical perspectives, recursion is not unique to the human language faculty (Hauser et al., 2002). Instead, we argue that recursion could have emerged without language.
Predictability as a Determinant of Young Children’s Selective Epistemic Trust
F. Ece Özkan, Aylin C. Küntay, Bahar Köymen
As selective social learners, children critically evaluate the information they are given (e.g., the relevance of an explanation). In this online study, we examined 5- to 6-year-old children’s (N=48, Mage=5.54) expectation of relevance in an explanation and the match/mismatch of their expected explanation with the explanation provided by an informant, i.e., predictability of the information/informant, as a determinant of selective epistemic trust. Participants were introduced to an informant who was to explain a given topic (e.g., what makes cars go). Before they heard the informant’s explanation, they chose one explanation among three, i.e., two relevant (e.g., pressing the gas pedal makes a car go) and one irrelevant explanation (e.g., when I am in the car, I watch outside the window), as their prediction of how the informant would explain the topic. In 92% of the trials, they chose one of the relevant explanations. We manipulated whether the informant’s explanation matched that of the child, resulting in three conditions: 1) relevant-match, the informant provided the same relevant explanation; 2) relevant-mismatch, the informant offered the relevant explanation that was different from the participant’s; and 3) irrelevant-mismatch, the informant provided the irrelevant explanation. The participant’s epistemic trust was measured with three questions: 1) Did the informant explain the topic well or could s/he have explained it better? 2) Would you ask the informant or someone else to learn about [the relevant topic]? and 3) Would you partner with the informant in a quiz game? Each participant went through 6 trials (2 in each condition). Overall, participants were more likely to trust the informant in the relevant-match condition compared to the two mismatch conditions. There was no significant difference between the two mismatch conditions in terms of preschoolers’ epistemic preferences.
Thus, our results suggest that predictability is a significant predictor of young children’s selective epistemic trust.
Irony and cognition: How need for cognitive closure and need for cognition impact verbal irony comprehension
Katarzyna Branowska
Verbal irony is a type of ambiguous utterance based on the difference between what is said and what is meant. The most recent approaches emphasize its connection with cognitive functions and the need for cognitive effort in using and understanding irony. The aim of the research was to check whether certain features of cognitive functioning have an impact on verbal irony understanding. The chosen cognitive characteristics were need for cognitive closure and need for cognition, due to their influence on dealing with ambiguous stimuli. The main hypotheses were: 1) A high need for cognitive closure correlates negatively with verbal irony understanding. 2) A high need for cognition correlates positively with verbal irony understanding. Additional hypotheses: 3) Blame-by-praise irony (as more prototypical) is better understood than praise-by-blame irony (as less prototypical). 4) Non-figurative statements are better understood than verbal irony in general. Responses were collected from 250 Polish-speaking participants aged 18 to 30. The study was conducted via Google Forms and consisted of four parts: demographic data, an Irony Understanding Questionnaire (created by the author), the Need for Cognition Scale (Polish adaptation) and the Need for Closure Scale (Polish adaptation). Participants were recruited through social media advertising. For H1 and H2, the results were insignificant – no relationship between need for cognitive closure/need for cognition and irony comprehension was found. The result for H3 was t(176) = 15.92, p < .001; the difference in mean scores was 1.42, i.e., blame-by-praise irony was better understood than praise-by-blame irony. The analysis for H4 showed a significant difference between literal and ironic statements, t(173) = 12.66, p < .001; the mean difference was 1.33, meaning that literal statements were better understood than ironic ones.
Children’s representation of complex generics
Magdalena Roszkowski, György Gergely, Ernő Téglás
The exact representation of generics like ‘Zebras are striped’ or ‘Lions have manes’ has been the subject of a longstanding debate. Most semantic theories analyze generics in terms of a hidden operator with some sort of quantificational force; others treat them as simple kind predication. We propose an alternative approach which involves pluralities, i.e. sums of entities with an accessible part-whole structure, and hypothesize that, since plural morphology – analogous to generics – is mastered earlier than explicit quantifiers, children will be able to form a plurality-based generic representation. In addition, we argue that such an analysis can potentially account for some of the puzzling properties of generics. In contrast to previous experimental work, which has almost exclusively focused on simple generics, the present study uses generic sentences which contain conjoined predicates, as in ‘Wugs are green and have wings’, to test for the availability of cumulative readings – a hallmark of pluralities. Our sentence-picture matching task allows us to take a closer look at preschool children’s prioritization of distributive and cumulative readings of such sentences and to reveal to what extent generics differ from quantified sentences with respect to cumulativity and non-maximality.
Attributive Descriptions, Benacerraf’s Identification Problem and the Expansion of the Causal Theory of Designation
Matthew Menchaca
This paper presents a new dilemma for theories of linguistic meaning. I argue that some attributive descriptions have designational chains linking terms to objects, specifically non-deictic uses of attributive descriptions which have a rich history of conventional use (like the description “number”). This claim is a response to Michael Devitt’s development of Saul Kripke’s causal/historical theory of reference, developed in Kripke’s Naming and Necessity and Devitt’s dissertation. I show how the reference of attributive descriptions (which depends on the Russellian definition of denotation) can be made consistent with Benacerraf’s identification problem (i.e., if abstract objects are not non-spatiotemporal). The upshot is that the reference of some abstract attributive descriptions is fixed by d-chains. Thus the same mechanism which explains our capacity to refer using genuinely designational terms (an initial perceptual link embedded in a causal nexus) also explains our capacity to refer when using abstract descriptions. I contend that those attributive descriptions which have a conventional use are closer to indefinite descriptions than to definite descriptions, and thus their capacity to designate further supports the idea that there is a convention for using indefinites to genuinely refer. Like designating terms which link a speaker to an object via a perceptual/causal nexus, I argue that some abstract terms linked to objects via attributive descriptions depend on the causal/historical nexus characteristic of thought; thus some attributive descriptions designate and denote regardless of the speaker’s ignorance or error, while others require skilled mental activity (for example, reference to the Goldbach Conjecture, an attribute of the integers).
Finally, I show how a modified conception of rapport with an object (causal-conventional instead of causal-perceptual) explains the permissibility of exporting seemingly non-designational descriptions to other contexts.
Brain networks for speech and stone tool-use: a neuroarchaeology study
Natalie Uomini, Larry Barham, Michal Paradysz, Georg Meyer
The possible co-evolution of language and tool-use in human ancestors has been the focus of intense debate in recent years. Functional neuroimaging data can help us identify areas of co-evolution (exaptation) or, by providing evidence for the absence of functional overlap, separate evolution in the brain areas supporting these two skills. Recent work on word recognition in the visual word form area, which is a result of exaptation, has shown that exapted skills utilise highly individual representations. It is therefore imperative that neuroimaging data on both language and stone tool-use are collected from the same participants. We hypothesized that both skills would show separable activations reflecting separate evolution. We used fMRI with 14 participants to directly compare brain activations in the same individuals, with matched action observation paradigms for stone-age tool-use, modern tool-use, and speech syllable identification. In two action observation experiments, we found significant increases in functional activation in bilateral frontal mirror neuron regions (IFG and PMC), as well as the classic lateralised areas in left SMG and IPL. Two univariate analyses revealed consistent overlapping activation clusters for language and tool-use in the mirror neuron network. A multivariate analysis, MVPA, showed overlapping but separable activation patterns for speech and tool-use in these areas. Our study failed to find clearly separable networks for these two skills, suggesting a tight relationship between language and cognition more broadly. Taken together, our findings support the hypothesis of co-evolution (exaptation) for tool-use and language functions. 
In combination with fossil evidence from human brain evolution, we speculate that increased processing demands on mirror neuron regions for both stone toolmaking and language skills could explain why these bilateral frontal regions showed the earliest trace of brain reorganization in our fossil ancestors starting 3 million years ago.
Linguistic distributional information about object labels affects ultrarapid object categorisation
Rens van Hoef, Dermot Lynott, Louise Connell
When given unrestricted time to process an image, people are faster and more accurate at making categorical decisions about a depicted object (e.g., Labrador) if it is close in sensorimotor and linguistic distributional experience to its category concept (e.g., dog). Here, we investigated whether sensorimotor and linguistic distributional information affect object categorisation differently as a function of the time available for perceptual processing. We tested 128 participants using an ultra-rapid categorisation paradigm with backwards masking, in which we systematically varied the onset timing (17 to 133 ms) of a post-stimulus mask following a briefly displayed (17 ms) object. Our results suggest that linguistic distributional distance, but not sensorimotor distance, between an object and its category affects categorisation accuracy and response times even in ultra-rapid categorisation. Participants were faster and more accurate in recognising an object when its name (e.g., Labrador) overlapped more in linguistic distributional experience with the target category (e.g., dog), but these effects did not systematically vary with exposure duration. Overall, these findings support the role of a linguistic shortcut (i.e., using linguistic distributional information in place of sensorimotor information) in rapid object categorisation.
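The "linguistic distributional distance" at the heart of this study is typically operationalised as cosine distance between word vectors derived from co-occurrence statistics. A minimal sketch of that measure, using made-up toy vectors (the counts and context labels below are illustrative, not the study's materials):

```python
import numpy as np

def cosine_distance(u, v):
    """1 - cosine similarity between two word vectors."""
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Toy co-occurrence vectors (values: counts with hypothetical context words)
vectors = {
    "labrador": np.array([8.0, 6.0, 1.0]),
    "dog":      np.array([9.0, 7.0, 0.5]),
    "truck":    np.array([0.5, 0.2, 9.0]),
}

# An object label is distributionally closer to its category label ("dog")
# than to an unrelated label ("truck")
print(cosine_distance(vectors["labrador"], vectors["dog"]))
print(cosine_distance(vectors["labrador"], vectors["truck"]))
```

Smaller distance to the category label corresponds to the faster, more accurate categorisation the study reports.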
The effect of data filtering methods on reliability
Tamás Szűcs, Attila Krajcsi
In the current study, we examined ways to increase the reliability of variables using data filtering techniques. We analyzed the reliability of three numerical effects in the number comparison task: the comparison distance effect (worse performance when the numerical distance is smaller), the size effect (worse performance when the numbers are larger), and the priming distance effect (worse performance when the numerical distance between the prime and target is larger). The effects were measured using three metrics: reaction time, error rate, and drift rate. The acquired data were then filtered according to three criteria and their combinations: an error in the current trial, an error in the previous trial, and whether the data point is an outlier (z-score greater than 3). Reliability was calculated using even-odd split-half reliability and two bootstrapping techniques. The results show that data filtering techniques do not increase the reliability of variables; in fact, the effect of such techniques depends on the metric used to measure the given effect and the method used to calculate its reliability. Our results suggest that current data filtering practices should be re-evaluated.
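The two analysis steps named above, z-score outlier filtering and even-odd split-half reliability, can be sketched as follows. This is a generic illustration on simulated data, not the authors' code; the Spearman-Brown correction and the simulation parameters are standard assumptions:

```python
import numpy as np

def zscore_filter(rts, threshold=3.0):
    """Drop trials whose value lies more than `threshold` SDs from the mean."""
    z = (rts - rts.mean()) / rts.std(ddof=1)
    return rts[np.abs(z) <= threshold]

def split_half_reliability(trial_effects):
    """Even-odd split-half reliability with Spearman-Brown correction.
    `trial_effects`: (participants x trials) array of per-trial effect scores."""
    even = trial_effects[:, ::2].mean(axis=1)
    odd = trial_effects[:, 1::2].mean(axis=1)
    r = np.corrcoef(even, odd)[0, 1]
    return 2 * r / (1 + r)  # Spearman-Brown: reliability of the full-length measure

# Simulated data: 50 participants, 80 trials, a stable individual effect plus noise
rng = np.random.default_rng(0)
true_effect = rng.normal(40, 10, size=(50, 1))
data = true_effect + rng.normal(0, 30, size=(50, 80))
print(split_half_reliability(data))
```

Comparing this reliability estimate before and after applying a filter like `zscore_filter` is the kind of contrast the study evaluates.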
Not Not Interesting: Interpreting Double Negation
Yechezkel Shabanov, Einat Shetreet
What do we mean when we use double negation, e.g. “not not interesting”? Logically, it should mean the same as the affirmative (e.g. “interesting”). However, intuitively, we understand that the two expressions do not convey the same meaning. This study aims to examine the hypothesis that the two negations lead to a weaker statement than the logically equivalent affirmative, by compelling an unexcluded middle. If so, this means that “not not interesting” and “interesting” occupy different ranges of meaning on the same scale. Furthermore, we compared doubly-negated expressions (“not not interesting”) with approximators (“kind of interesting”), under the assumption that they serve similar functions and should therefore occupy similar ranges of meaning on the same scale. In the experiment, participants had to determine the range that various expressions occupy on a given adjective scale (e.g., “not not interesting” on the scale between interesting and boring). From their responses, we extracted (i) the range’s size, (ii) its central point, and (iii) inclusion of the edge (e.g. interesting for “not not interesting”). Doubly-negated expressions differed from affirmatives on all three parameters: ranges for affirmatives were smaller, located closer to the edge, and included the edge more often than those of doubly-negated expressions. Additionally, the ranges of doubly-negated expressions were larger and their centers closer to the edge than those of the approximators. These results confirm the hypothesis that double negation allows for a weaker interpretation than the logically equivalent affirmative, while still retaining the possibility of being interpreted logically. They also suggest that double negations afford a wider range of interpretation than approximators.
A non-lexical approach to NEG-RAISING
Zahra Mirrazi, Hedde Zeijlstra
Pragma-semantic approaches to Neg-Raising (NR) take NR readings to be the result of an excluded-middle inference, either in terms of a presupposition or in terms of scalar implicatures, which is special to a certain group of predicates known as Neg-Raising Predicates (NRPs), like ‘think’. While successful in accounting for many aspects of NRPs, these approaches face some non-trivial problems: (i) there are contexts in which NRPs receive a non-NR reading without resulting in a presupposition failure; (ii) some non-NRPs (e.g. non-factive ‘know’) can get a NR reading. We propose a new implementation of a scalar implicature account of NR. Our analysis has two components: duality and strengthening of subdomain alternatives. We take the basic reading of negated NRPs to involve existential quantification over worlds where not-p holds, as a result of equivalence with the basic meaning of negated NRPs, which involves a negated universal quantifier over worlds where p holds (duality). Parallel to contemporary implicature accounts of Free Choice and Homogeneity, this existential reading can be strengthened to a universal one via the application of an exhaustification operator. Under this view, the (un)availability of NR readings for duality-allowing modals is reduced to whether exhaustification applies to the whole set of subdomain alternatives (yielding the strengthened reading) or to a subset after pruning singleton sets (yielding the weak reading). We take (i)-(ii) to show that the ability to trigger a NR reading is not a lexical property of NRPs. Our approach to NR is the only approach that can account for this; all other theories of NR take NRPs to carry some unique lexically-encoded property. Since the application of exhaustification is context-dependent, we allow every negated universal modal whose presuppositions do not block duality to yield a NR reading, provided that the whole set of subdomain alternatives is contextually relevant.
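The duality-plus-strengthening step described in the abstract can be written schematically as follows (the modal notation is a standard gloss assumed here, not the authors' own formalism):

```latex
\begin{align*}
\neg\,\mathrm{think}(p) \;\equiv\; \neg \Box p \;&\equiv\; \Diamond \neg p
  && \text{(duality of the universal and existential modal)}\\
\mathrm{Exh}\big(\Diamond \neg p\big) \;&\leadsto\; \Box \neg p
  && \text{(exhaustification over the full set of subdomain alternatives)}
\end{align*}
```

The strengthened output $\Box \neg p$ is the NR reading (‘think not-p’); pruning singleton subdomain alternatives before exhaustifying leaves only the weak reading $\Diamond \neg p$.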