LLMs as GNNs (to understand how they generalize)
recording available on YouTube
Problems such as efficient (length) generalization are highly relevant for building agentic systems, yet they have puzzled LLM researchers for several years. However, they can sometimes become obvious through the lens of graph representation learning. This is because LLMs are, by design, vulnerable to generalization issues from a variety of geometric perspectives.
A typical GNN researcher will likely already have a solid understanding of most of these perspectives! In this talk, we will use this knowledge to attempt to scale the mountain of LLM generalization ⛰️
Discussion Session on the Future of Graph Learning Research
We want to use the February session to discuss what kind of research we want to do in graph learning in the coming years and where we can make the most impact. Similar questions have already been discussed at length on big stages at various conferences. However, we think it is time to have this discussion with all graph learning enthusiasts, in particular (incoming) PhD students and postdocs who are actively shaping the field and whose research will comprise the majority of graph learning work in the coming years. Whether you are excited, skeptical, curious, or frustrated about graph learning, we want to hear what you have to say!
Snir Hordan, Tal Amir, Nadav Dym
Jianan Zhao, Hesham Mostafa, Mikhail Galkin, Michael Bronstein, Zhaocheng Zhu, Jian Tang