L&T24: Language and thought 

Topic leaders

Co-organizers and invited speakers:

Ivana Kajić, Google DeepMind

Guido Zarrella, MITRE

Nicole Sandra-Yaffa Dumont, University of Waterloo
Goals

The goal of this topic area is to explore how cognitive-science-inspired techniques can improve the performance, efficiency, creativity, and evaluation of large foundation models. Large language models (LLMs) have achieved remarkable results on a wide range of natural language processing tasks, but they also face significant challenges and limitations: high computational cost, a lack of generalization and robustness, difficulty incorporating or updating prior knowledge and reasoning, and the ethical and social implications of their use. This topic area aims to address these challenges by applying neuromorphic, brain-inspired computing principles that push the frontiers of today’s AI toward better power efficiency, faster learning, lower latency, and improved cognitive abilities. It also connects neuromorphic computing and hardware engineering to the other mainstream areas driving the future of artificial intelligence research.

Projects

Recommended reading: