Temporal Graph Learning Workshop
@ NeurIPS 2023
Important Information
Workshop Date: Dec. 16, 2023
Workshop Room: 203-205
Workshop Time Zone: New Orleans (GMT-5)
Contact Email: temporalgraphlearning@gmail.com
Twitter: https://twitter.com/tgl_workshop
Theme
Graphs are prevalent in many diverse applications, including social networks, natural language processing, computer vision, the World Wide Web, political networks, computational finance, recommender systems, and more. Graph machine learning algorithms have been successfully applied to various tasks, including node classification, link prediction, and graph clustering. However, most methods assume that the underlying network is static, which limits their applicability to real-world networks that naturally evolve over time. On the one hand, temporal characteristics introduce substantial challenges compared to learning on static graphs. For example, in temporal graphs, the time dimension needs to be modelled jointly with graph features and structures. On the other hand, recent studies demonstrate that incorporating temporal information can improve the predictive power of graph learning methods, creating new opportunities in applications such as recommender systems, event forecasting, fraud detection, and more.
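To make the point about modelling the time dimension jointly with structure concrete, here is a minimal, illustrative sketch (the data and function names are hypothetical, not from any particular library): a temporal graph stored as a timestamped edge list, from which static snapshots over a time window can be derived. Two windows over the same node set can yield very different static graphs, which is exactly the information a purely static method discards.

```python
from collections import defaultdict

# Toy temporal graph: each event is (source, destination, timestamp).
events = [
    ("a", "b", 1), ("b", "c", 2), ("a", "c", 3),
    ("c", "d", 4), ("b", "d", 5),
]

def snapshot(events, t_start, t_end):
    """Adjacency of the static graph induced by events with t_start <= t < t_end."""
    adj = defaultdict(set)
    for u, v, t in events:
        if t_start <= t < t_end:
            adj[u].add(v)
    return dict(adj)

# A static view discards timestamps; two snapshots can differ substantially.
early = snapshot(events, 0, 3)  # {"a": {"b"}, "b": {"c"}}
late = snapshot(events, 3, 6)   # {"a": {"c"}, "c": {"d"}, "b": {"d"}}
```

Temporal graph learning methods operate on the full event stream rather than on any single snapshot.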
The study of temporal graphs underpins the analysis of many tasks and domains, including anomaly and fraud detection, disease modeling, recommender systems, traffic forecasting, biology, social media, and many more. Hence, there has been a surge of interest in developing temporal graph learning methods across diverse fields spanning machine learning, artificial intelligence, data mining, network science, public health, and beyond.
This workshop bridges conversations across areas such as temporal knowledge graph learning, graph anomaly detection, and graph representation learning. It aims to share understanding and techniques that facilitate the development of novel temporal graph learning methods. It also brings together researchers from academia and industry, connecting different fields across theories, methodologies, and applications.
Schedule
Keynote Talks
Daniele Zambon
Swiss AI Lab IDSIA
Speaker Bio: Daniele Zambon is a postdoctoral researcher at the Swiss AI Lab IDSIA at the Università della Svizzera italiana (Switzerland). He received his Ph.D. degree from the Università della Svizzera italiana with a thesis on anomaly and change detection in sequences of graphs. He holds Master's and Bachelor's degrees in mathematics from the Università degli Studi di Milano (Italy). Daniele has been a visiting researcher/intern at the University of Florida (US), the University of Exeter (UK), and STMicroelectronics (Italy). His main research interests encompass graph representation learning and learning in non-stationary environments. He is a member of the IEEE CIS Task Force on Learning for Graphs and has co-organized special sessions and tutorials on deep learning and graph data. He regularly publishes and reviews for top-tier journals and conferences in the field, including IEEE TNNLS, IEEE TSP, IEEE TPAMI, JMLR, NeurIPS, ICLR, ICML, and CVPR.
Kelsey Allen
Senior Research Scientist at DeepMind
Speaker Bio: Kelsey Allen is currently a Senior Research Scientist at DeepMind. She received her PhD from MIT in the Computational Cognitive Science group, and her BSc from the University of British Columbia in physics. Her work has received awards including the international Glushko award for best dissertation in cognitive science, a best paper award from Robotics: Science and Systems (R:SS), and an NSERC PhD fellowship. Spanning robotics, machine learning, and cognitive science, her work aims to elucidate the mechanisms that give rise to adaptive and efficient learning in humans and machines, especially in the domain of physical problem-solving.
Talk Title: Simulation from states and sensors
“Intuitive physics”, or the ability to imagine how the future of physical systems will unfold, is a hallmark of human intelligence. This capacity supports many complex behaviors in humans including planning, problem-solving, and tool creation. In this talk, I will describe some of our work aiming to learn physical simulators from data in order to support these downstream behaviors. The first half of the talk will focus on what we can learn from state-based data. I will present a new type of graph neural network for learning realistic rigid body simulation by taking inspiration from classic approaches in graphics. I will show that these learned simulators can capture rigid body dynamics with unprecedented accuracy, including modeling real world dynamics significantly better than system identification with analytic simulators from robotics. The second half of the talk will focus on what we can learn from sensor data. I will present Visual Particle Dynamics, a method which connects neural radiance fields with graph neural networks to enable learning directly from RGB-D data. Unlike existing 2D video prediction models, we show that VPD's 3D structure enables scene editing and long-term predictions.
Ingo Scholtes
Julius-Maximilians-Universität Würzburg and University of Zurich
Speaker Bio: Ingo Scholtes is a Full Professor for Machine Learning in Complex Networks at the Center for Artificial Intelligence and Data Science of Julius-Maximilians-Universität Würzburg, Germany, as well as SNSF Professor for Data Analytics at the Department of Computer Science at the University of Zürich, Switzerland. He has a background in computer science and mathematics and obtained his doctorate degree from the University of Trier, Germany. At CERN, he developed a large-scale data distribution system, which is currently used to monitor particle collision data from the ATLAS detector. After finishing his doctorate degree, he was a postdoctoral researcher at the interdisciplinary Chair of Systems Design at ETH Zürich from 2011 until 2016. In 2016 he held an interim professorship for Applied Computer Science at the Karlsruhe Institute of Technology, Germany. In 2017 he returned to ETH Zürich as a senior assistant and lecturer. In 2019 he was appointed Full Professor at the University of Wuppertal. Since 2021 he has held the Chair of Computer Science XV - Machine Learning for Complex Networks at Julius-Maximilians-Universität Würzburg, Germany.
Talk title: De Bruijn Goes Neural: Towards Causality-Aware Graph Neural Networks for Time Series Data
Graph Neural Networks (GNNs) have become a cornerstone for the application of deep learning to data on complex networks. However, we increasingly have access to time-resolved data that capture not only which nodes are connected to each other, but also when and in which temporal order those connections occur. A number of works have shown how the timing and ordering of links shape the causal topology of networked systems, i.e. which nodes can possibly influence each other over time. Moreover, higher-order network models have been developed that allow us to model patterns in the resulting causal topology. Building on these works, we introduce De Bruijn Graph Neural Networks (DBGNNs), a novel time-aware graph neural network architecture for time-resolved data on dynamic graphs. Our approach accounts for temporal-topological patterns that unfold via causal walks, i.e. temporally ordered sequences of links by which nodes can influence each other over time. This enables us to learn patterns in the causal topology of time series data on complex networks, which in turn facilitates learning tasks on temporal graphs.
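As a rough, hedged illustration of the higher-order idea behind this line of work (this is not the authors' implementation), one can build the edge set of a k-th order De Bruijn graph from causal walks: nodes are length-k sub-walks, and two sub-walks are connected when one extends the other by a single step.

```python
from collections import Counter

def debruijn_edges(walks, k=2):
    """Count edges between overlapping length-k sub-walks of causal walks."""
    edges = Counter()
    for walk in walks:
        for i in range(len(walk) - k):
            src = tuple(walk[i:i + k])
            dst = tuple(walk[i + 1:i + 1 + k])
            edges[(src, dst)] += 1
    return edges

# Causal walks: temporally ordered node sequences observed in a temporal graph.
walks = [["a", "b", "c"], ["d", "b", "e"], ["a", "b", "c"]]
edges = debruijn_edges(walks, k=2)
# ("a","b") -> ("b","c") occurs twice; ("d","b") -> ("b","e") once.
```

Note the contrast with a first-order (static) graph, which would connect b to both c and e regardless of how a walk arrived at b; the second-order De Bruijn graph keeps the two continuations separate, preserving information about the causal topology.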
Rex Ying
Yale University
Speaker Bio: I'm an assistant professor in the Department of Computer Science at Yale University. My research focuses on algorithms for graph neural networks, geometric embeddings, and explainable models. I am the author of many widely used GNN algorithms such as GraphSAGE, PinSAGE and GNNExplainer. In addition, I have worked on a variety of applications of graph learning in physical simulations, social networks, knowledge graphs and biology. I developed the first billion-scale graph embedding services at Pinterest, and the graph-based anomaly detection algorithm at Amazon.
Talk abstract: Temporal graphs are widely used to model dynamic systems with time-varying interactions. In real-world scenarios, the underlying mechanisms of generating future interactions in dynamic systems are typically governed by a set of recurring substructures within the graph, known as temporal motifs. Despite the success and prevalence of current temporal graph neural networks (TGNN), it remains unclear how to explain such temporal graph predictions. To address this challenge, we propose a novel approach to explain temporal graph predictions via temporal motifs. The method, called Temporal Motifs Explainer (TempME), uncovers the most pivotal temporal motifs guiding the prediction of TGNNs. Derived from the information bottleneck principle, TempME extracts the most interaction-related motifs while minimizing the amount of contained information to preserve the sparsity and succinctness of the explanation. Events in the explanations generated by TempME are verified to be more spatiotemporally correlated than those of existing approaches, providing more understandable insights.
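To give a flavour of what a temporal motif is (a hedged illustration only, not TempME itself), the sketch below counts one simple motif: a directed triangle whose three events occur in temporal order within a time window delta. Events are (source, destination, timestamp) tuples; the brute-force triple loop is for clarity, not efficiency.

```python
def count_temporal_triangles(events, delta):
    """Count ordered event triples (u->v, v->w, w->u) with t1 < t2 < t3
    and t3 - t1 <= delta."""
    count = 0
    for u1, v1, t1 in events:
        for u2, v2, t2 in events:
            for u3, v3, t3 in events:
                in_order = t1 < t2 < t3 and t3 - t1 <= delta
                closes = v1 == u2 and v2 == u3 and v3 == u1
                if in_order and closes:
                    count += 1
    return count

events = [("a", "b", 1), ("b", "c", 2), ("c", "a", 3), ("c", "a", 10)]
count_temporal_triangles(events, delta=5)  # -> 1
```

The same three edges arriving outside the window (here, the ("c", "a", 10) event) do not form the motif, which is the key difference from static motif counting.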
Marinka Zitnik
Harvard University
Speaker Bio: Marinka Zitnik is an Assistant Professor at Harvard University with appointments in the Department of Biomedical Informatics, Broad Institute of MIT and Harvard, and Harvard Data Science. Dr. Zitnik is a computer scientist studying applied machine learning with a focus on challenges in scientific discovery and medicine. Her algorithms and methods have had a tangible impact, which has garnered interests of government, academic, and industry researchers and has put new tools in the hands of practitioners. Some of her methods are used by major biomedical institutions, including Baylor College of Medicine, Karolinska Institute, Stanford Medical School, and Massachusetts General Hospital. Her work received best paper and research awards from International Society for Computational Biology, Bayer Early Excellence in Science Award, Amazon Faculty Research Award, Roche Alliance with Distinguished Scientists Award, Rising Star Award in Electrical Engineering and Computer Science (EECS), and Next Generation Recognition in Biomedicine. She is an ELLIS Scholar, a member of the Science Working Group at NASA Space Biology, co-founder of Therapeutics Data Commons, and faculty lead of the AI4Science initiative. She is also the recipient of the 2022 Young Mentor Award at Harvard Medical School, and was named Kavli Fellow 2023 by the National Academy of Sciences.
Talk title: Towards foundation models for time series
General-purpose foundation models have revolutionized deep learning, enabling the adaptation of a single model to a wide array of tasks with minimal additional training, eliminating the need for separate models for each task. While this paradigm has proven successful in vision and language domains, extending it to time series data poses significant challenges due to the diverse temporal dynamics, semantic variations, irregular sampling, system-related factors (e.g., different devices or subjects), and shifts in feature and label distributions inherent to time series data. These factors are not inherently compatible with the next-token prediction objective in large language models. In this talk, I describe our research efforts to realize crucial capabilities for time-series foundation models. We start with TF-C (NeurIPS 2022), a time-series pre-training strategy that leverages a self-supervised consistency objective, modeling both temporal and frequency representations within the time-frequency space. We then explore Raincoat (ICML 2023), the first approach for closed-set and universal domain adaptation that is robust to both feature and label shifts, allowing model transfer between source and unlabeled target domains, even in scenarios with no label overlap. To facilitate the analysis of time series model behavior, our TimeX approach (NeurIPS 2023) introduces an interpretable surrogate model, ensuring model behavior consistency, providing discrete attribution maps, and enhancing interpretability. Lastly, Raindrop (ICLR 2022) is an approach for learning from irregularly sampled multivariate time series, employing a graph neural network to capture time-varying dependencies among sensors and outperforming state-of-the-art methods in classification and temporal dynamics interpretation. Collectively, these approaches shed light on the evolving landscape of time series representation learning, offering a roadmap for future advancements in temporal learning.
Organizers
McGill University/Mila
McGill University/Mila
McGill University/Mila
Imperial College London
NEC Laboratories Europe / University of Mannheim
Panelists
Imperial College London
Intel Labs
Swiss AI Lab IDSIA
Accepted Papers
Duc Thien Nguyen, Tuan Nguyen, Truong Son Hy, Risi Kondor
Amirmohammad Farzaneh
Spatial-Temporal DAG Convolutional Networks for End-to-End Joint Effective Connectivity Learning and Resting-State fMRI Classification
Rui Yang, Wenrui Dai, Huajun She, Yiping P. Du, Dapeng Wu, Hongkai Xiong
Predicting COVID-19 Pandemic by Spatio-Temporal Graph Neural Networks: A New Zealand's study
Bach Nguyen, Truong Son Hy, Long Tran-Thanh, Nhung Nghiem
Hierarchical Joint Graph Learning and Multivariate Time Series Forecasting
JuHyeon Kim, HyunGeun Lee, Seungwon Yu, Ung Hwang, Wooyul Jung, Miseon Park, Kijung Yoon
Temporal Graph Models Fail to Capture Global Temporal Dynamics
Michal Daniluk, Jacek Dabrowski
Leveraging Temporal Graph Networks Using Module Decoupling
Or Feldman, Chaim Baskin
Effective Non-Dissipative Propagation for Continuous-Time Dynamic Graphs
Alessio Gravina, Giulio Lovisotto, Claudio Gallicchio, Davide Bacciu, Claas Grohnfeldt
Learning Temporal Higher-order Patterns to Detect Anomalous Brain Activity
Ali Behrouz, Farnoosh Hashemi
BitGraph: A Framework For Scaling Temporal Graph Queries on GPUs
Alexandria Barghi
Mitigating Cold-start Problem Using Cold Causal Demand Forecasting Model
Zahra Fatemi, Minh Huynh, Elena Zheleva, Zamir Syed, Xiaojun Di
DURENDAL: Graph deep learning framework for temporal heterogeneous networks
Manuel Dileo, Matteo Zignani, Sabrina Gaito
DspGNN: Bringing Spectral Design to Discrete Time Dynamic Graph Neural Networks for Edge Regression
Leshanshui Yang, Clément Chatelain, Sébastien Adam
Graph-based Time Series Clustering for End-to-End Hierarchical Forecasting
Andrea Cini, Danilo Mandic, Cesare Alippi
SAUC: Sparsity-Aware Uncertainty Calibration for Spatiotemporal Prediction with Graph Neural Networks
Dingyi Zhuang, Yuheng Bu, Guang Wang, Shenhao Wang, Jinhua Zhao
Exploring Time Granularity on Temporal Graphs for Dynamic Link Prediction in Real-world Networks
Xiangjian Jiang, Yanyi Pu
Continuous-time Graph Representation with Sequential Survival Process
Abdulkadir Celikkanat, Nikolaos Nakis, Morten Mørup
Using Causality-Aware Graph Neural Networks to Predict Temporal Centralities in Dynamic Graphs
Franziska Heeg, Ingo Scholtes
STGraph: A Framework for Temporal Graph Neural Networks
Nithin Puthalath Manoj, Joel Cherian, Kevin Jude Concessao, Unnikrishnan Cheramangalath
Marked Neural Spatio-Temporal Point Process Involving a Dynamic Graph Neural Network
Alice Moallemy-Oureh, Silvia Beddar-Wiesing, Rüdiger Nather, Josephine Thomas
Deep Graph Kernel Point Processes
Zheng Dong, Matthew Repasky, Xiuyuan Cheng, Yao Xie
Gen-T: Reduce Distributed Tracing Operational Costs Using Generative Models
Saar Tochner, Giulia Fanti, Vyas Sekar
Graph Kalman Filters
Daniele Zambon, Cesare Alippi
Anomaly Detection in Continuous-Time Temporal Provenance Graphs
Jakub Reha, Giulio Lovisotto, Michele Russo, Alessio Gravina, Claas Grohnfeldt
Adaptive Message Passing Sign Algorithm
Changran Peng, Yi Yan, Ercan Kuruoglu
Inductive Link Prediction in Static and Temporal Graphs for Isolated Nodes
Ayan Chatterjee, Robin Walters, Giulia Menichetti, Tina Eliassi-Rad
Towards predicting future time intervals on Temporal Knowledge Graphs
Roxana Pop, Egor Kostylev
Topological and Temporal Data Augmentation for Temporal Graph Networks
Haoran Liu, Jianling Wang, Kaize Ding, James Caverlee
Todyformer: Towards Holistic Dynamic Graph Transformers with Structure-Aware Tokenization
Mahdi Biparva, Raika Karimi, Faezeh Faez, Yingxue Zhang
GenTKG: Generative Forecasting on Temporal Knowledge Graph
Ruotong Liao, Xu Jia, Yunpu Ma, Volker Tresp
TBoost: Gradient Boosting Temporal Graph Neural Networks
Pritam Nath, Govind Waghmare, Nancy Agrawal, Nitish Kumar, Siddhartha Asthana
Do Temporal Knowledge Graph Embedding Models Learn or Memorize?
Jiaxin Pan, Mojtaba Nayyeri, Yinan Li, Steffen Staab
Large-scale Graph Representation Learning of Dynamic Brain Connectome with Transformers
Byung-Hoon Kim, Jungwon Choi, EungGu Yun, Kyungsang Kim, Xiang Li, Juho Lee
A Generative Self-Supervised Framework using Functional Connectivity in fMRI Data
Jungwon Choi, Seongho Keum, EungGu Yun, Byung-Hoon Kim, Juho Lee
Exploring Graph Structure in Graph Neural Networks for Epidemic Forecasting
Sai Supriya Varugunda, Ching-Hao Fan, Lijing Wang
Program Committee
Abdulkadir Celikkanat (Technical University of Denmark)
Amila Weerasinghe (Amazon)
Aisha Urooj (University of Central Florida)
Alexander Llywelyn Jenkins (Imperial College London)
Ali Behrouz (Cornell University)
Amirhossein Farzam (Duke University)
Anindya Mondal (Jadavpur University, Kolkata)
Ayan Chatterjee (Google Research)
Bikram Pratim Bhuyan (Universite Paris-Saclay)
Bin Lu (Shanghai Jiao Tong University)
Byung-Hoon Kim (Massachusetts General Hospital, Harvard University)
Can Koz (Department of Computer Science)
Carlos Ortega Vazquez (KU Leuven)
Chanyoung Park (Korea Advanced Institute of Science and Technology)
Chongyue Zhao (University of Pittsburgh)
Daniele Malitesta (Polytechnic Institute of Bari)
Derek Lim (Massachusetts Institute of Technology)
Dingsu Wang (University of Illinois at Urbana-Champaign)
Domenico Tortorella (University of Pisa)
Fabio Montagna (University of Lecce)
Farimah Poursafaei (McGill University)
Federico Errica (NEC Laboratories Europe)
Felipe Lopes (Federal Institute of Alagoas)
Hanhan Zhou (George Washington University)
Hansheng Xue (Australian National University)
Haoran Duan (Yunnan University)
Haowen Lin (University of Southern California)
He Zhang (Monash University)
Jeongwhan Choi (Yonsei University)
Jiaqing Xie (Swiss Federal Institute of Technology)
Jingzhou Shen (University of Texas at Arlington)
Jiri Minarcik (Czech Technical University in Prague)
Joana M. F. da Trindade (Massachusetts Institute of Technology)
Joseph Khoury (Louisiana State University)
Julia Gastinger (NEC Laboratories Europe & University of Mannheim)
Kartik Sharma (Georgia Institute of Technology)
Kellin Pelrine (McGill University)
Khaled Mohammed Saifuddin (Georgia State University)
Kijung Yoon (Hanyang University)
Kishalay Das (Indian Institute of Technology Kharagpur)
Leshanshui Yang (Universite de Rouen - Haute Normandie)
Limei Wang (Texas A&M)
Luning Sun (Lawrence Livermore National Labs)
Manasvi Aggarwal (Indian Institute of Science, Bangalore)
Manuel Dileo (University of Milan)
Meng Liu (National University of Defense Technology)
Milind Malshe (Georgia Institute of Technology)
Ming Jin (Monash University)
Piotr Bielak (Wroclaw University of Science and Technology)
Pratheeksha Nair (McGill University)
Prathyush Sambaturu (University of Oxford)
Ramanarayan Mohanty (Intel)
Rob Rossmiller (University of Wisconsin - Madison)
Roxana Pop (University of Oslo)
S P Sharan (University of Texas at Austin)
Shan Xue (Microsoft)
Shenyang Huang (McGill University, Mila)
Shuowei Jin (University of Michigan - Ann Arbor)
Siyuan Chen (Guangzhou University)
Sofia Bourhim (ENSIAS)
Steven Le Moal (Ecole Superieur d'Ingenieurs Leonard de Vinci)
Suiyao Chen (University of South Florida)
Yassaman Ebrahimzadeh Maboud (University of British Columbia)
Yinkai Wang (Tufts University)
Yiqiao Jin (Georgia Institute of Technology)
Yue Tan (University of Technology Sydney)
Yunting Yin (State University of New York at Stony Brook)
Zachary Yang (McGill University)
Zhaoxuan Tan (University of Notre Dame)
Zheng Zhang (Emory University)
Zhengyu Hu (HKUST)
Zuhui Wang (State University of New York at Stony Brook)
Call for Papers
Calling all researchers in temporal graphs and related theory and applications! Submit your cutting-edge work to our workshop for a chance to present your accepted papers as posters during the poster sessions. Exceptional contributions will also be featured as highlighted talks. Join us and share your latest findings! If you would like to be a reviewer, please sign up here.
Important Dates:
Submission Deadline: Oct. 3rd, 2023, AoE
Accept/Reject Notification: Oct. 20th, 2023, AoE
Camera Ready Deadline: Nov. 23rd, 2023, AoE
Workshop Date: Dec. 16th, 2023
Topics:
We welcome submissions on a wide range of topics, including (but not restricted to):
Temporal Graph Modelling & Representation Learning:
Temporal Graph, Spatio-Temporal Graph, and Temporal Knowledge Graph Forecasting and Prediction
Temporal Graph Clustering, Community Detection, and Data Mining
Data Augmentation for Temporal Graphs
Hyperbolic Temporal Graphs
Scalability for Temporal Graphs
Multimodal Temporal Graph Learning
Temporal Graph Learning from Streaming and Online Data
Graphs for Multivariate Time Series Forecasting
Generative Modeling for Evolving Data, Synthetic Graph Models and Simulations
Dynamic System Representation and Excited State Dynamics
Temporal Graph Theory:
Expressive Power, Generalization
Signal Processing, Spectral Theories, and Spectral Learning
Neuro-Symbolic Temporal Learning
Causal Reasoning over Temporal Graphs
Temporal Graph Applications:
Integration of temporal graphs with other fields such as computer vision, natural language processing, reinforcement learning, financial security, etc.
Temporal Graph Modeling of Brain Networks, Molecular Dynamics, Human Action and Motion, E-commerce and Dynamic Finance, etc.
Anomaly Detection, Misinformation Detection, Polarization Detection and Cyber Security for Dynamic Networks
Video Analysis with Temporal Graphs
Recommender and Question Answering Systems based on Temporal Graphs
Fairness, Explainability, Robustness, Privacy
Temporal Graph Benchmarking:
Evaluation of Existing Methods and New Evaluation Approaches
Temporal Graph Datasets
Visualization
Submission Details:
When submitting, please follow these guidelines:
Authors can submit their papers through OpenReview. For submissions, please use this LaTeX template and submit in .pdf format.
As part of the double-blind review process, please anonymize the papers appropriately.
The maximum length for submissions is 8 pages, plus unlimited pages for references and supplementary materials. For the supplementary materials, we recommend including only minor details like hyperparameter settings.
We also welcome shorter 4 page submissions that discuss work in progress or address open problems and challenges in the domain of temporal graph learning.
All accepted papers will be presented as posters during the workshop, and their camera-ready versions will be hosted on the workshop website. The workshop is non-archival, so accepted papers may later be submitted elsewhere.
Four selected papers will have the opportunity to be presented as spotlight talks during the workshop, with one of them receiving the prestigious Best Paper Award.
Authors of accepted papers will be requested to create a brief video describing their work on SlidesLive, and this video will also be hosted on the website.
Questions:
Please don't hesitate to reach out if you have any questions, including uncertainties about the relevance of a particular topic. You can contact us at temporalgraphlearning@gmail.com.