Workshop date: August 4th, 2025, 1 pm – 5 pm
Submission deadline: May 25th, 2025
Decision notification: June 15th, 2025
Camera-ready deadline: July 11th, 2025
Temporal Graph Learning (TGL) is a rapidly growing field of research, driven by the prevalence of dynamic and interconnected data across various domains and applications. TGL methods represent data as graphs and model how the encoded relationships evolve and influence the observed dynamics at the node level. TGL has countless real-world applications, ranging from the analysis of social and sensor networks to air quality monitoring and epidemiology. The temporal characteristics of nodes and relations introduce substantial challenges compared to learning on static graphs. For example, in temporal graphs the time dimension must be modeled jointly with graph features and structure, whereas in time series analysis forecasts for one time series need to be conditioned on the observations of related time series. Nevertheless, recent studies demonstrate that incorporating temporal information can improve the predictive power of graph learning methods, creating new opportunities in applications such as recommendation systems, event forecasting, fraud detection, and more.
With this workshop, we aim to bring together researchers interested in different aspects of processing dynamic relational data to present and discuss the most recent developments. We want to encourage cross-fertilization among the different sub-areas of TGL research, making the community more permeable to novel ideas from adjacent fields and associated application domains; particular emphasis will be placed on the connections between TGL and time series processing. Finally, we aim to spotlight foundational works and impactful applications of TGL models, ensuring that these contributions reach a broad audience of researchers throughout the ML community.
The workshop encourages discussion by hosting a poster session and a panel discussion, and by welcoming position papers and extended abstracts. The venue is non-archival.
Kumo.AI
Speaker Bio: Federico Lopez is a Machine Learning Engineer at Kumo.ai, bridging research and practice in graph transformers and in-context learning for relational data. He earned his PhD summa cum laude from Heidelberg University, where he studied neural graph representations in non-Euclidean spaces. His work integrates geometry, graph theory, and deep learning, with an emphasis on developing models that can reason about structured and temporal data.
Georgia Institute of Technology
Speaker Bio: Yao Xie is the Coca-Cola Foundation Chair Professor in the H. Milton Stewart School of Industrial and Systems Engineering at the Georgia Institute of Technology and Associate Director of the Machine Learning Center. She received her Ph.D. in Electrical Engineering (minor in Mathematics) from Stanford University in 2012 and was a Research Scientist at Duke University. Her research lies at the intersection of statistics, machine learning, and optimization, providing theoretical guarantees and developing computationally efficient and statistically powerful methods for problems motivated by real-world applications. She received the National Science Foundation (NSF) CAREER Award in 2017, was an INFORMS Wagner Prize Finalist in 2021, and received the INFORMS Gaver Early Career Award for Excellence in Operations Research in 2022. She is currently an Associate Editor for IEEE Transactions on Information Theory, Journal of the American Statistical Association: Theory and Methods, Operations Research, Sequential Analysis: Design Methods and Applications, and the INFORMS Journal on Data Science, and an Area Chair of NeurIPS, ICML, and ICLR.
University of Munich
Speaker Bio: Yunpu Ma is a Postdoc at the University of Munich, working with Prof. Volker Tresp and Prof. Thomas Seidl on multimodal foundation models and dynamic graphs. Additionally, Yunpu is a research scientist at Siemens, specializing in quantum machine learning. Before joining Siemens, he spent three years as an AI researcher at LMU, where he earned his Ph.D., focusing on temporal knowledge graphs. Yunpu's research interests encompass structured data learning, multimodal foundation models, and quantum machine learning. His ultimate research goal is to advance general AI.
PyG 2.0: Scalable Learning on Real World Graphs
Matthias Fey, Jinu Sunil, Akihiro Nitta, Rishi Puri, Manan Shah, Blaž Stojanovič, Ramona Bendias, Alexandria Barghi, Vid Kocijan, Zecheng Zhang, Xinwei He, Jan Eric Lenssen, Jure Leskovec
Empowering Interdisciplinary Insights with Dynamic Graph Embedding Trajectories
Yiqiao Jin, Andrew Zhao, Yeon-Chang Lee, Meng Ye, Ajay Divakaran, Srijan Kumar
Base3: a simple interpolation-based ensemble method for robust dynamic link prediction
Emma Kondrup
DyGMamba: Efficiently Modeling Long-Term Temporal Dependency on Continuous-Time Dynamic Graphs with State Space Models
Zifeng Ding, Yifeng Li, Yuan He, Antonio Norelli, Jingcheng Wu, Volker Tresp, Michael M. Bronstein, Yunpu Ma
Efficient Optimization of Inertial Energy Networks Using Physics-Informed Spatio-Temporal Model
Taha Boussaid, François Rousset, Marc Clausse, Vasile-Marian Scuturici
Enhancing Deep Learning with Statistical Inference for Node Classification in Temporal Graphs
Jan von Pichowski, Vincenzo Perri, Lisi Qarkaxhija, Ingo Scholtes
Between Linear and Sinusoidal: Rethinking the Time Encoder in Dynamic Graph Learning
Hsing-Huan Chung, Shravan S Chaudhari, Xing Han, Yoav Wald, Suchi Saria, Joydeep Ghosh
Relational Graph Transformer
Vijay Prakash Dwivedi, Sri Jaladi, Yangyi Shen, Federico Lopez, Charilaos I. Kanatsoulis, Rishi Puri, Matthias Fey, Jure Leskovec
Are We Really Measuring Progress? Transferring Insights from Evaluating Recommender Systems to Temporal Link Prediction
Filip Cornell, Oleg Smirnov, Gabriela Zarzar Gandler, Lele Cao
Confidence First: Reliability-Driven Temporal Graph Neural Networks
Jayadratha Gayen, Himanshu Pal, Naresh Manwani, Charu Sharma
Multi-Modal Interpretable Graph for Competing Risk Prediction with Electronic Health Records
Munib Mesinovic, Peter Watkinson, Tingting Zhu
Charilaos Kanatsoulis
Stanford University
Federico Lopez
Kumo.AI
Yunpu Ma
University of Munich
Vagelis Papalexakis
UC Riverside
McGill University/Mila
McGill University/Mila
Mannheim University
Università della Svizzera italiana, IDSIA
SNSF Postdoc, University of Oxford
McGill University/Mila
We invite researchers, practitioners, and industry experts to submit their original contributions to the Temporal Graph Learning workshop. All accepted papers will be presented at the poster session. All submissions should follow the workshop's LaTeX submission template.
In addition to standard papers, we welcome the submission of position papers and extended abstracts about ongoing or recent research that fosters discussions and collaborations within the community. Exceptional contributions will also be featured as spotlight talks.
Join us and share your latest findings!
The workshop focuses on advances, challenges, and applications in temporal graph learning and spatio-temporal time series analysis. The workshop aims to bring together a diverse community working on dynamic graphs, time-evolving networks, relational deep learning and their applications in various domains such as social networks, recommendation systems, biology, finance, and more.
We welcome submissions on a broad range of topics, including but not limited to:
Frontiers: new learning paradigms, connections to time series modeling, temporal graph foundation models, use of LLMs on temporal graphs, uncertainty quantification with dynamic relations, graph structure learning, connections with graph signal processing, temporal graph learning in reinforcement learning.
Applications: brain networks, temporal knowledge graphs, relational deep learning, video analysis, sensor networks, social networks, pandemic modeling, smart cities, financial markets, molecular dynamics, cyber security, misinformation detection, and question answering.
Theory: learning guarantees, expressiveness, explainability, fairness, privacy, spectral analysis, causal reasoning, adversarial robustness, and network properties.
Models: continuous time and space models, scalable and efficient models, state-space models, probabilistic models, generative models, neuro-symbolic models, and statistical models.
Methods: temporal graph clustering, community detection, link prediction, forecasting, anomaly detection, change detection, model adaptation, online learning, multimodal temporal graph learning.
Evaluation: novel benchmarks, visualization tools, evaluation protocols, software libraries, data augmentation.
Please don't hesitate to reach out if you have any questions, including the relevance of a particular topic to the workshop. You can contact us at temporalgraphlearning@gmail.com.
Submission Guidelines
Authors should prepare their articles following the KDD template and adhere to the page limits: up to 8 pages for full papers and up to 4 pages for position papers and extended abstracts; references and appendices are excluded from the page limit.
All papers must be submitted via OpenReview, preferably anonymized for double-blind review.
The deadline is May 25th, 2025.