Graph Foundation Models:
A New Era for Graph Machine Learning
Date: July 10th/11th | Location: Seoul, South Korea
Graph machine learning (GML) applies ML to structured data represented as nodes and edges, and underpins applications such as fraud detection, recommendation, drug discovery, weather prediction, and traffic forecasting. The prevailing practice trains a task-specific GNN per dataset: a data-hungry, compute-intensive workflow brittle to shifts in structure, features, or labels. In contrast, foundation models in language, vision, and audio transfer broadly with minimal tuning.
Graph foundation models (GFMs) extend this paradigm to graphs: pre-train once, then adapt across node/edge/graph tasks, feature/label spaces, and dynamic or multimodal settings. Methodological explorations span GNN-based backbones, Transformers, and LLM pipelines. Industry momentum signals practical readiness and a shift toward graph-centric modeling. Despite progress, key gaps remain:
Methodology: no consensus on model families, objectives, or pretraining for heterogeneous, dynamic, and multimodal graphs;
Scaling: limited understanding of data regimes, generalization, and transferability to (large) graphs;
Evaluation: fragmented benchmarks, metrics, and protocols hinder fair comparison;
Transfer: unclear pathways to adapt language/vision/table techniques and generalize across schemas, label spaces, and tasks.
The goals of our workshop are to: build a connected academia–industry community; identify challenges and opportunities for GFMs, e.g., handling heterogeneity, scaling to massive graphs, and effective pretraining; standardize evaluation via shared datasets, metrics, protocols, and baselines; and catalyze cross-domain transfer from language, vision, and tables, adapted to graph structure.
We welcome contributions on topics central to graph foundation models. The list below is not exhaustive, and we encourage submissions in related and emerging areas.
Graph learning with LLMs: LLMs for graph tasks; GFMs that interface with language models; retrieval, agentic planning, and tool use for graph reasoning.
Graph learning with Tabular Foundation Models (TFMs): TFMs for graph and relational database tasks; tabularization techniques; training TFMs with graph priors.
New GFMs: New architectures, training objectives, scaling laws and strategies, graph tokenization, structural and positional encodings, prompting and in-context learning for graphs.
Domain-specific GFMs: GFMs for knowledge, temporal, and heterogeneous graphs; biological and molecular discovery; relational databases; recommendation systems; social networks; traffic networks; cybersecurity; and finance.
Theoretical advancements: Transferability, generalization, and expressivity analysis; interpretability and explainability of GFM representations.
Benchmarking GFMs: Standardized benchmarks, metrics, and evaluation protocols; leaderboards, open-source libraries, and reproducible pipelines.
We invite long papers (up to 9 pages) for mature, full-length contributions with extensive results and analysis, and position papers (2–4 pages) offering concise, opinionated perspectives on defining and deploying GFMs, GFM–LLM/GFM–TFM integration, benchmarking, and lessons from other foundation model domains.
We will manage paper submissions through OpenReview; the review process is double-blind. Please use the ICML 2026 LaTeX style files. We will select outstanding papers for oral talks, and a best paper award will be announced at the workshop. Any LLM involvement must be explicitly disclosed; human authors and reviewers remain fully responsible for all content. AI-generated papers will not be accepted.
May 3rd, 2026 Paper Submission Deadline
May 25th, 2026 Author Notification
July 10th/11th, 2026 Workshop@ICML
Jure Leskovec
Stanford University & Kumo.ai
Michael Galkin
Google Research
Marinka Zitnik
Harvard University
Stefanie Jegelka
TUM & MIT
Leman Akoglu
CMU
Neil Shah
Snap Inc.
Ismail Ilkan Ceylan
TU Wien & AITHYRA
Ron Levie
Technion
Xavier Bresson
NUS
Shenyang Huang
University of Oxford
08:00 Opening Remarks
08:15 Invited Talks 1 & 2
09:15 Coffee Break
09:35 Poster Session 1
10:35 Oral Presentations 1 & 2
11:15 Invited Talks 3 & 4
12:15 Lunch Break
13:15 Coffee Break
13:30 Oral Presentations 3 & 4
14:10 Poster Session 2
15:10 Invited Talks 5 & 6
16:10 Panel Discussion
16:40 Closing Remarks
To support the growth of junior researchers in the GFM community, we will run a mentorship program at the workshop. Early-career participants, especially those from historically underrepresented groups, will be matched with senior researchers. Mentor-mentee pairs will have an in-person mentorship lunch during the workshop day, and we will encourage continued interaction via Slack afterward.
Please fill in the form on the left to enroll.
Xingyue Huang
University of Oxford
Ben Finkelshtein
University of Oxford
Charilaos Kanatsoulis
Stanford University
Xiaoxin He
Meta & NUS
Xueying Ding
CMU
Reihaneh Rabbany
McGill University & Mila
Michael Bronstein
University of Oxford & AITHYRA