Talks

We are sorry to announce that our talks have been cancelled due to the increasingly alarming epidemic of the novel coronavirus.

Prof. Yohei OSEKI

(15:00-16:00, Feb. 7)

Construction and evaluation of neurocomputational models of natural language

Natural language processing (a branch of artificial intelligence) and the neurobiology of language (a branch of brain science) have traditionally been divorced. In natural language processing, on the one hand, computational bases of language have been developed, driven by deep learning techniques, but the question of how those computational bases are biologically realized in the human brain has not been sufficiently addressed. In the neurobiology of language, on the other hand, neural bases of language have been revealed thanks to neuroimaging techniques, but the question of how those neural bases are algorithmically implemented with neural computations has been largely neglected. However, despite being proposed relatively independently, those computational and neural bases show a striking resemblance in that both constitute complex networks of various modules, so a happy marriage of the two fields is highly desirable. To this end, in this research project, we will investigate the computational and neural bases of language by constructing neurocomputational models based on symbolic automata and neural networks, and by evaluating them with neurophysiological measurements from human magnetoencephalography (MEG) and electrocorticography (ECoG).

Prof. Shu-Kai HSIEH

(16:00-17:00, Feb. 7)

Retrofitting the (Chinese) Lexicon: where we are standing now

In this talk, I'll present a critical review of the development of the computational lexicon. Following up on that, I will present our newly proposed DeepLEX framework, a computational lexicon model implemented from the perspective of functional linguistics and related approaches. The underlying assumption is that form-meaning pairing is granular in nature. I will argue that the design architecture should be tolerant of multimodal linguistic sign data fusion on the form side, and of retrofitting of lexical and sublexical representations on the meaning side. The alignments/pairings evolve and are maintained through semi-supervised learning and human annotation. Finally, I will propose an extended Generative Lexicon model that integrates symbolic linguistic forms at different levels of granularity, and discuss a plan for extrinsic evaluation via a dialogue system.

Prof. Chenhao CHIU

(17:00-18:00, Feb. 7)

Uncovering syllable-final nasal merging in Taiwan Mandarin

Syllable-final nasals /n/ and /ŋ/ in Taiwan Mandarin are known to be undergoing merging. Earlier perceptual studies have reported that the merging is context-sensitive and that the merging directions are vowel-dependent. More aggressive merging has been found between /in/ and /iŋ/ and between /ən/ and /əŋ/ than between /an/ and /aŋ/. These findings have been attributed to dialectal and social factors, such as influences from Taiwan Southern Min. In this talk, I will summarize two production experiments that uncover tongue postures of these syllable-final nasals in different contexts. The results of the first experiment confirm that the degree of merging and the merging directions of syllable-final nasals are vowel-dependent. Crucially, for some speakers, although the nasals were merged in tongue posture, the degrees of nasalization of the preceding vowel remained contrastive, suggesting that the merging process is incomplete. The second experiment examines whether the gestural and acoustic contrast of the underlying /n/ and /ŋ/ is enhanced in prosodically prominent positions (i.e., under prosodic focus). The results show that in the /i/N context, the contrast between prosodic conditions is manifested more in the acoustics of the pre-nasal vowels (i.e., the degree of nasalization) than in the gestures. That is, the degree of nasalization is enhanced under prosodic focus, but this enhancement is absent in tongue postures. Taken together, these experimental results suggest that the syllable-final nasal merger resides in articulation and that the merging may be vowel-dependent.

Prof. Michiru MAKUUCHI

(09:30-10:00, Feb. 8)

Hierarchical structure in drawing

Prof. Yao-Ying LAI

(10:00-10:30, Feb. 8)

Comprehension of “unstated” meaning and the associated neurocognitive mechanisms

In this talk, I will discuss how comprehenders obtain sentence meaning that is not morpho-syntactically expressed yet fully understood—a case of semantic underspecification. Processing such “enriched” semantic compositions in real time engenders additional cost and localizable brain activity compared to their transparent counterparts. I will present behavioral and neurological results that implicate the neurocognitive mechanisms underlying such meaning comprehension, with morpho-syntactic components factored out. In addition, we found that online meaning processing is impacted by nonlinguistic cognition, such as individuals’ autistic tendencies. Overall, the findings are consistent with a lexically-driven, contextually-constrained system for sentence meaning comprehension.