L&T24: Language and thought 

Topic leaders


Co-organizers and invited speakers:


Ivana Kajić, Google DeepMind

Guido Zarrella, MITRE

Nicole Sandra-Yaffa Dumont, University of Waterloo


Goals

The goal of this topic area is to explore how cognitive-science-inspired techniques can improve the performance, efficiency, creativity, and evaluation of large foundation models. Large language models (LLMs) have achieved remarkable results on a wide range of natural language processing tasks, but they also face significant challenges and limitations: high computational cost, limited generalization and robustness, difficulty incorporating or updating prior knowledge and reasoning, and the ethical and social implications of their use. This topic area aims to address these challenges by applying neuromorphic, brain-inspired computing principles that push the frontiers of today’s AI towards better power efficiency, faster learning, lower latency, and improved cognitive abilities. It also connects neuromorphic computing and hardware engineering to the mainstream areas driving the future of artificial intelligence research.

Projects

Materials, Equipment, and Tutorials:

Hardware: 

Software: 

Preparation material, literature:

The field of LLMs is extremely fast-moving, but the organizers work directly at the frontier of the field and will provide preparatory and tutorial materials informed by the state of the art as of June 2024. The lectures will cover the background, concepts, and methods of the topic area, including the basics of LLMs, cognitive science, and neuromorphic computing, as well as the open challenges and opportunities laid out above. There will be ample opportunity for hands-on learning thanks to potential cloud-computing credits and the organizers' ability to provide locally accessible models that can be run and updated on premises. Because the topic area relies minimally on on-site hardware, it is likely to be a remote-friendly portion of the workshop, and every effort will be made to engage virtual participants in working with the same models.
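
As a taste of the on-premises workflow described above, the minimal sketch below loads a small open model and generates text entirely on local hardware using the Hugging Face transformers library. The model name ("gpt2") and the generation settings are illustrative placeholders, not the models the organizers will supply.

# Minimal sketch of local, on-premises LLM inference with Hugging Face
# transformers. "gpt2" is a small placeholder model; the workshop models
# will be announced by the organizers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder: any locally cached causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Language and thought are related because"
inputs = tokenizer(prompt, return_tensors="pt")

# Generate a short continuation; no network calls are needed at inference
# time once the model weights are cached locally.
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))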