As humans, we can effortlessly understand new combinations of concepts that we have previously seen. AI systems are not always able to do this. This research area examines the limitations of current AI systems in composing concepts, and builds compositional models of meaning.
Humans use abstraction to move from specific instances ("a small red bicycle") to more general features ("red"). Abstraction helps us to generalize from familiar situations to new contexts. We explore abstraction through analogical reasoning and metaphor understanding.
Image credit: Beth Pearson
Binary, classical computers form the basis of modern technologies, but other forms of computing offer their own benefits. We use quantum methods to model the linguistic and conceptual capabilities of humans across multiple modalities. We also use neurosymbolic methods to implement compositional, linguistically motivated models of meaning in spiking neural architectures.
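As a flavour of what a compositional model of meaning can look like, the sketch below shows one well-known tensor-based approach (in the style of categorical compositional distributional semantics), where nouns are vectors and adjectives are linear maps acting on them. The vectors, matrices, and vocabulary here are toy values chosen for illustration, not learned representations or part of our models.

```python
import numpy as np

# Toy tensor-based composition: nouns are vectors in a small "meaning space",
# adjectives are matrices (linear maps) that transform noun vectors.
# All numbers below are illustrative, not learned from data.

nouns = {
    "bicycle": np.array([1.0, 0.0, 0.2]),
    "apple":   np.array([0.1, 1.0, 0.3]),
}

# "red" as a linear map: it boosts a hypothetical "redness" dimension (index 2)
# while leaving the other dimensions of the noun unchanged.
red = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.5, 0.5, 1.0],
])

def compose(adjective: np.ndarray, noun: np.ndarray) -> np.ndarray:
    """Compose an adjective (matrix) with a noun (vector) by matrix-vector product."""
    return adjective @ noun

# "red bicycle" is a new vector in the same noun space, so it can be
# compared (e.g. by cosine similarity) with other noun meanings.
red_bicycle = compose(red, nouns["bicycle"])
print(red_bicycle)  # → [1.  0.  0.7]
```

Because the composed phrase lives in the same space as plain nouns, the same machinery extends to longer phrases, which is one reason this style of model is attractive for studying compositionality.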
Image credit: Mina Abbaszadeh