Temporal Graph Learning Workshop
@ NeurIPS 2023

Important Information

Workshop Date: Dec. 16, 2023

Workshop Room: 203-205
Workshop Time Zone: New Orleans (GMT-5)

Contact Email: temporalgraphlearning@gmail.com
Twitter: https://twitter.com/tgl_workshop

Previous edition @ NeurIPS 2022

Theme

Graphs are prevalent in many diverse applications, including social networks, natural language processing, computer vision, the World Wide Web, political networks, computational finance, recommender systems, and more. Graph machine learning algorithms have been successfully applied to various tasks, including node classification, link prediction, and graph clustering. However, most methods assume that the underlying network is static, which limits their applicability to real-world networks that naturally evolve over time. On the one hand, temporal characteristics introduce substantial challenges compared to learning on static graphs. For example, in temporal graphs, the time dimension must be modelled jointly with graph features and structures. On the other hand, recent studies demonstrate that incorporating temporal information can improve the predictive power of graph learning methods, creating new opportunities in applications such as recommender systems, event forecasting, fraud detection, and more.
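Concretely, a temporal graph is often represented as a time-ordered stream of interaction events, and models are usually evaluated with a chronological split so that no future link leaks into training. The following is a minimal sketch of that representation; the `Event` tuple, the toy edges, and the 75% split fraction are illustrative assumptions, not a prescribed setup.

```python
from collections import namedtuple

# A temporal graph as a timestamped edge stream: each event is a
# (source, destination, timestamp) triple, kept in chronological order.
Event = namedtuple("Event", ["src", "dst", "t"])

events = sorted(
    [Event("a", "b", 1.0), Event("b", "c", 2.5),
     Event("a", "c", 3.0), Event("c", "d", 4.2)],
    key=lambda e: e.t,
)

def chronological_split(events, train_frac=0.75):
    """Split the stream by time, so a model is evaluated only on
    links that occur after every link it was trained on."""
    cut = int(len(events) * train_frac)
    return events[:cut], events[cut:]

train, test = chronological_split(events)
print(len(train), len(test))  # 3 1
```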

The study of temporal graphs underpins many tasks and domains, including anomaly and fraud detection, disease modeling, recommender systems, traffic forecasting, biology, social media, and many more. Hence, there has been a surge of interest in the development of temporal graph learning methods from diverse fields spanning machine learning, artificial intelligence, data mining, network science, public health, and beyond.

This workshop bridges the conversation among different areas such as temporal knowledge graph learning, graph anomaly detection, and graph representation learning. It aims to share understanding and techniques that facilitate the development of novel temporal graph learning methods. It also brings together researchers from academia and industry, connecting various fields with the aim of spanning theories, methodologies, and applications.

Schedule

Keynote Talks

Daniele Zambon

Swiss AI Lab IDSIA

Speaker Bio: Daniele Zambon is a postdoctoral researcher at the Swiss AI Lab IDSIA at the Università della Svizzera italiana (Switzerland). He received his Ph.D. degree from the Università della Svizzera italiana with a thesis on anomaly and change detection in sequences of graphs. He holds Master's and Bachelor's degrees in mathematics from the Università degli Studi di Milano (Italy). Daniele has been a visiting researcher/intern at the University of Florida (US), the University of Exeter (UK), and STMicroelectronics (Italy). His main research interests encompass graph representation learning and learning in non-stationary environments. He is a member of the IEEE CIS Task Force on Learning for Graphs and has co-organized special sessions and tutorials on deep learning and graph data. He regularly publishes and reviews for top-tier journals and conferences in the field, including IEEE TNNLS, IEEE TSP, IEEE TPAMI, JMLR, NeurIPS, ICLR, ICML, and CVPR. 

Kelsey Allen

Senior Research Scientist at DeepMind

Speaker Bio: Kelsey Allen is currently a Senior Research Scientist at DeepMind. She received her PhD from MIT in the Computational Cognitive Science group, and her BSc from the University of British Columbia in physics. Her work has received awards including the international Glushko award for best dissertation in cognitive science, a best paper award from Robotics: Science and Systems (R:SS), and an NSERC PhD fellowship. Spanning robotics, machine learning, and cognitive science, her work aims to elucidate the mechanisms that give rise to adaptive and efficient learning in humans and machines, especially in the domain of physical problem-solving.


Talk Title: Simulation from states and sensors

“Intuitive physics”, or the ability to imagine how the future of physical systems will unfold, is a hallmark of human intelligence. This capacity supports many complex behaviors in humans, including planning, problem-solving, and tool creation. In this talk, I will describe some of our work aiming to learn physical simulators from data in order to support these downstream behaviors. The first half of the talk will focus on what we can learn from state-based data. I will present a new type of graph neural network for learning realistic rigid body simulation by taking inspiration from classic approaches in graphics. I will show that these learned simulators can capture rigid body dynamics with unprecedented accuracy, including modeling real-world dynamics significantly better than system identification with analytic simulators from robotics. The second half of the talk will focus on what we can learn from sensor data. I will present Visual Particle Dynamics (VPD), a method which connects neural radiance fields with graph neural networks to enable learning directly from RGB-D data. We show that, unlike existing 2D video prediction models, VPD's 3D structure enables scene editing and long-term predictions.


Ingo Scholtes

Julius-Maximilians-Universität Würzburg and University of Zurich

Speaker Bio: Ingo Scholtes is a Full Professor for Machine Learning in Complex Networks at the Center for Artificial Intelligence and Data Science of Julius-Maximilians-Universität Würzburg, Germany, as well as SNSF Professor for Data Analytics at the Department of Computer Science at the University of Zürich, Switzerland. He has a background in computer science and mathematics and obtained his doctorate degree from the University of Trier, Germany. At CERN, he developed a large-scale data distribution system that is currently used to monitor particle collision data from the ATLAS detector. After finishing his doctorate, he was a postdoctoral researcher at the interdisciplinary Chair of Systems Design at ETH Zürich from 2011 until 2016. In 2016, he held an interim professorship for Applied Computer Science at the Karlsruhe Institute of Technology, Germany. In 2017, he returned to ETH Zürich as a senior assistant and lecturer. In 2019, he was appointed Full Professor at the University of Wuppertal. Since 2021, he has held the Chair of Computer Science XV - Machine Learning for Complex Networks at Julius-Maximilians-Universität Würzburg, Germany.

Talk title: De Bruijn Goes Neural: Towards Causality-Aware Graph Neural Networks for Time Series Data

Graph Neural Networks (GNNs) have become a cornerstone for the application of deep learning to data on complex networks. However, we increasingly have access to time-resolved data that capture not only which nodes are connected to each other, but also when and in which temporal order those connections occur. A number of works have shown how the timing and ordering of links shape the causal topology of networked systems, i.e. which nodes can possibly influence each other over time. Moreover, higher-order network models have been developed that allow us to model patterns in the resulting causal topology. Building on these works, we introduce De Bruijn Graph Neural Networks (DBGNNs), a novel time-aware graph neural network architecture for time-resolved data on dynamic graphs. Our approach accounts for temporal-topological patterns that unfold via causal walks, i.e. temporally ordered sequences of links by which nodes can influence each other over time. This enables us to learn patterns in the causal topology of time series data on complex networks, which makes it possible to address learning tasks in temporal graphs.
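As a rough illustration of the causal-walk idea (a toy sketch, not the DBGNN implementation), the snippet below builds the edge set of a second-order De Bruijn graph from timestamped edges: nodes are observed links, and two links are connected whenever they form a causal walk within a maximum waiting time `delta`. Both `delta` and the toy edge list are assumptions for the example.

```python
def second_order_debruijn(events, delta):
    """Edge set of a second-order De Bruijn graph built from
    timestamped edges (u, v, t). Nodes are observed links (u, v);
    a second-order edge (u, v) -> (v, w) exists iff some causal walk
    traverses (u, v) at t1 and then (v, w) at t2 with 0 < t2 - t1 <= delta."""
    edges2 = set()
    for (u, v, t1) in events:
        for (x, w, t2) in events:
            if x == v and 0 < t2 - t1 <= delta:
                edges2.add(((u, v), (v, w)))
    return edges2

events = [("a", "b", 1), ("b", "c", 2), ("b", "d", 5), ("c", "a", 3)]
print(sorted(second_order_debruijn(events, delta=2)))
# [(('a', 'b'), ('b', 'c')), (('b', 'c'), ('c', 'a'))]
```

Note that the edge ("b", "d", 5) produces no second-order edge from ("a", "b"): it occurs too long after it, so no causal walk connects them.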

Rex Ying

Yale University

Speaker Bio: I'm an assistant professor in the Department of Computer Science at Yale University. My research focuses on algorithms for graph neural networks, geometric embeddings, and explainable models. I am the author of many widely used GNN algorithms, such as GraphSAGE, PinSAGE, and GNNExplainer. In addition, I have worked on a variety of applications of graph learning in physical simulations, social networks, knowledge graphs, and biology. I developed the first billion-scale graph embedding service at Pinterest and a graph-based anomaly detection algorithm at Amazon.

Talk abstract: Temporal graphs are widely used to model dynamic systems with time-varying interactions. In real-world scenarios, the underlying mechanisms of generating future interactions in dynamic systems are typically governed by a set of recurring substructures within the graph, known as temporal motifs. Despite the success and prevalence of current temporal graph neural networks (TGNN), it remains unclear how to explain such temporal graph predictions. To address this challenge, we propose a novel approach to explain temporal graph predictions via temporal motifs. The method, called Temporal Motifs Explainer (TempME), uncovers the most pivotal temporal motifs guiding the prediction of TGNNs. Derived from the information bottleneck principle, TempME extracts the most interaction-related motifs while minimizing the amount of contained information to preserve the sparsity and succinctness of the explanation. Events in the explanations generated by TempME are verified to be more spatiotemporally correlated than those of existing approaches, providing more understandable insights. 
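To make the notion of a temporal motif concrete, the sketch below counts a crude family of 2-event motifs: ordered pairs of events within an assumed window `delta` that share a node, classified by how their endpoints relate. This is only an illustrative toy, not TempME's motif extraction.

```python
from collections import Counter

def two_event_motifs(events, delta):
    """Count ordered pairs of events (u1, v1, t1), (u2, v2, t2) with
    0 < t2 - t1 <= delta that share a node -- a crude family of
    2-event temporal motifs, keyed by how the endpoints relate."""
    counts = Counter()
    for (u1, v1, t1) in events:
        for (u2, v2, t2) in events:
            if 0 < t2 - t1 <= delta and {u1, v1} & {u2, v2}:
                # (True, False): chain u1 -> v1 -> w
                # (True, True):  reciprocal u1 -> v1 -> u1
                # otherwise: some other shared-node pattern
                counts[(u2 == v1, v2 == u1)] += 1
    return counts

events = [("a", "b", 1), ("b", "c", 2), ("b", "a", 3)]
print(dict(two_event_motifs(events, delta=2)))
# {(True, False): 1, (True, True): 1, (False, False): 1}
```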



Marinka Zitnik

Harvard University

Speaker Bio: Marinka Zitnik is an Assistant Professor at Harvard University with appointments in the Department of Biomedical Informatics, Broad Institute of MIT and Harvard, and Harvard Data Science. Dr. Zitnik is a computer scientist studying applied machine learning with a focus on challenges in scientific discovery and medicine. Her algorithms and methods have had a tangible impact, which has garnered interests of government, academic, and industry researchers and has put new tools in the hands of practitioners. Some of her methods are used by major biomedical institutions, including Baylor College of Medicine, Karolinska Institute, Stanford Medical School, and Massachusetts General Hospital. Her work received best paper and research awards from International Society for Computational Biology, Bayer Early Excellence in Science Award, Amazon Faculty Research Award, Roche Alliance with Distinguished Scientists Award, Rising Star Award in Electrical Engineering and Computer Science (EECS), and Next Generation Recognition in Biomedicine. She is an ELLIS Scholar, a member of the Science Working Group at NASA Space Biology, co-founder of Therapeutics Data Commons, and faculty lead of the AI4Science initiative. She is also the recipient of the 2022 Young Mentor Award at Harvard Medical School, and was named Kavli Fellow 2023 by the National Academy of Sciences.

Talk title: Towards foundation models for time series

General-purpose foundation models have revolutionized deep learning, enabling the adaptation of a single model to a wide array of tasks with minimal additional training, eliminating the need for separate models for each task. While this paradigm has proven successful in vision and language domains, extending it to time series data poses significant challenges due to the diverse temporal dynamics, semantic variations, irregular sampling, system-related factors (e.g., different devices or subjects), and shifts in feature and label distributions inherent to time series data. These factors are not inherently compatible with the next-token prediction objective in large language models. In this talk, I describe our research efforts to realize crucial capabilities for time-series foundation models. We start with TF-C (NeurIPS 2022), a time-series pre-training strategy that leverages a self-supervised consistency objective, modeling both temporal and frequency representations within the time-frequency space. We then explore Raincoat (ICML 2023), the first approach for closed-set and universal domain adaptation that is robust to both feature and label shifts, allowing model transfer between source and unlabeled target domains, even in scenarios with no label overlap. To facilitate the analysis of time series model behavior, our TimeX approach (NeurIPS 2023) introduces an interpretable surrogate model, ensuring model behavior consistency, providing discrete attribution maps, and enhancing interpretability. Lastly, Raindrop (ICLR 2022) is an approach for learning from irregularly sampled multivariate time series, employing a graph neural network to capture time-varying dependencies among sensors and outperforming state-of-the-art methods in classification and temporal dynamics interpretation. Collectively, these approaches shed light on the evolving landscape of time series representation learning, offering a roadmap for future advancements in temporal learning. 

Organizers

Farimah Poursafaei

McGill University/Mila 

Shenyang Huang

McGill University/Mila 

Kellin Pelrine

McGill University/Mila 

Emanuele Rossi

Imperial College London

Julia Gastinger

NEC Laboratories Europe /Mannheim University

Reihaneh Rabbany

McGill University/Mila  

Michael Bronstein

University of Oxford

Panelists

Alexander Modell

Imperial College London

Michael Galkin

 Intel Labs 

Ingo Scholtes

CAIDAS, Julius-Maximilians-Universität Würzburg


Daniele Zambon

Swiss AI Lab IDSIA

Accepted Papers

Duc Thien Nguyen, Tuan Nguyen, Truong Son Hy, Risi Kondor

Amirmohammad Farzaneh

Program Committee

Abdulkadir Celikkanat (Technical University of Denmark)

Amila Weerasinghe (Amazon)

Aisha Urooj (University of Central Florida)

Alexander Llywelyn Jenkins (Imperial College London)

Ali Behrouz (Cornell University)

Amirhossein Farzam (Duke University)

Anindya Mondal (Jadavpur University, Kolkata)

Ayan Chatterjee (Google Research)

Bikram Pratim Bhuyan (Universite Paris-Saclay)

Bin Lu (Shanghai Jiao Tong University)

Byung-Hoon Kim (Massachusetts General Hospital, Harvard University)

Can Koz (Department of Computer Science)

Carlos Ortega Vazquez (KU Leuven)

Chanyoung Park (Korea Advanced Institute of Science and Technology)

Chongyue Zhao (University of Pittsburgh)

Daniele Malitesta (Polytechnic Institute of Bari)

Derek Lim (Massachusetts Institute of Technology)

Dingsu Wang (University of Illinois at Urbana-Champaign)

Domenico Tortorella (University of Pisa)

Fabio Montagna (University of Lecce)

Farimah Poursafaei (McGill University)

Federico Errica (NEC Laboratories Europe)

Felipe Lopes (Federal Institute of Alagoas)

Hanhan Zhou (George Washington University)

Hansheng Xue (Australian National University)

Haoran Duan (Yunnan University)

Haowen Lin (University of Southern California)

He Zhang (Monash University)

Jeongwhan Choi (Yonsei University)

Jiaqing Xie (Swiss Federal Institute of Technology)

Jingzhou Shen (University of Texas at Arlington)

Jiri Minarcik (Czech Technical University in Prague)

Joana M. F. da Trindade (Massachusetts Institute of Technology)

Joseph Khoury (Louisiana State University)

Julia Gastinger (NEC Laboratories Europe & University of Mannheim)

Kartik Sharma (Georgia Institute of Technology)

Kellin Pelrine (McGill University)

Khaled Mohammed Saifuddin (Georgia State University)

Kijung Yoon (Hanyang University)

Kishalay Das (Indian Institute of Technology Kharagpur)

Leshanshui Yang (Universite de Rouen - Haute Normandie)

Limei Wang (Texas A&M)

Luning Sun (Lawrence Livermore National Labs)

Manasvi Aggarwal (Indian Institute of Science, Bangalore)

Manuel Dileo (University of Milan)

Meng Liu (National University of Defense Technology)

Milind Malshe (Georgia Institute of Technology)

Ming Jin (Monash University)

Piotr Bielak (Wroclaw University of Science and Technology)

Pratheeksha Nair (McGill University)

Prathyush Sambaturu (University of Oxford)

Ramanarayan Mohanty (Intel)

Rob Rossmiller (University of Wisconsin - Madison)

Roxana Pop (University of Oslo)

S P Sharan (University of Texas at Austin)

Shan Xue (Microsoft)

Shenyang Huang (McGill University, Mila)

Shuowei Jin (University of Michigan - Ann Arbor)

Siyuan Chen (Guangzhou University)

Sofia Bourhim (ENSIAS)

Steven Le Moal (École Supérieure d'Ingénieurs Léonard de Vinci)

Suiyao Chen (University of South Florida)

Yassaman Ebrahimzadeh Maboud (University of British Columbia)

Yinkai Wang (Tufts University)

Yiqiao Jin (Georgia Institute of Technology)

Yue Tan (University of Technology Sydney)

Yunting Yin (State University of New York at Stony Brook)

Zachary Yang (McGill University)

Zhaoxuan Tan (University of Notre Dame)

Zheng Zhang (Emory University)

Zhengyu Hu (HKUST)

Zuhui Wang (State University of New York at Stony Brook)


Call for Papers

Calling all researchers in temporal graphs and related theory and applications! Submit your cutting-edge work to our workshop for a chance to present your accepted papers as posters during the poster sessions. Exceptional contributions will also be featured as highlighted talks. Join us and share your latest findings! If you would like to be a reviewer, please sign up here.

Important Dates:

Topics:

We welcome submissions on a wide range of topics, including (but not restricted to):


Submission Details:

When submitting, please follow these guidelines:

Questions:

Please don't hesitate to reach out if you have any questions, including uncertainties about the relevance of a particular topic. You can contact us at temporalgraphlearning@gmail.com.