CALM
Compositionality and Abstraction in Language and other Modalities
We investigate how humans and AI systems form abstractions, understand concepts, and compose them. We are interested both in understanding the abilities and limitations of frontier AI models, and in building smaller, more interpretable, neurosymbolic systems. To find out about our research, please take a look at our projects and associated publications.
You can find out more about joining the lab here. We currently have a PhD vacancy and would love to hear from you if you are interested.
If you are interested in our research, we would love to hear from you! Feel free to get in touch with any of the team members, or contact us at m.a.f.lewis at uva.nl.
Our research has been featured in Communications of the ACM, IEEE Spectrum, and Science News.
We gratefully acknowledge funding from the Dutch Research Council, Research Councils UK, the China Scholarship Council, Research Innovation Scotland, and the Royal Netherlands Academy of Arts and Sciences.
March 2026: 🎓 2nd Workshop on Compositional Learning: Safety, Interpretability, and Agents accepted to ICML 2026! We have an amazing lineup of speakers and panellists. Please consider submitting!
Feb 2026: The CALM lab will be visiting CardiffNLP in June! Come and meet us there 👋
Dec 2025: Martha, together with collaborators Claire Stevenson and Lucia Donatelli, was awarded a NIAS-Lorentz Theme Group Fellowship to work on Abstraction, Broad Generalization, and Composition: The ABCs of Analogy. We are so excited to work together on this!
Nov 2025: Beth is presenting her work Evaluating Compositional Generalisation in VLMs and Diffusion Models at *SEM 2025, and Zhijin is presenting his work Quantifying Compositionality of Classic and State-of-the-Art Embeddings at EMNLP (Findings). Congratulations Beth and Zhijin!
Nov 2025: Our paper Compositional Concept Generalization with Variational Quantum Circuits is presented at IEEE Quantum AI!