Keynote Speakers
Haggai Maron
Technion & Nvidia
Talk: Deep Learning on Structured and Geometric Data
Deep learning on structured and geometric data, such as sets, graphs, and surfaces, is a prominent research direction that has received considerable attention in the last few years. Given a learning task that involves structured data, the main challenge is identifying suitable neural network architectures and understanding their theoretical and practical tradeoffs. This talk will focus on a popular learning setup where the learning task is invariant to a group of transformations of the input data. This setup is relevant to many popular learning tasks and data types. In the first part of the talk, I will present a general framework for designing neural network architectures based on layers that respect these transformations. In particular, I will show that these layers can be implemented using parameter-sharing schemes induced by the group. In the second part of the talk, I will demonstrate the framework’s applicability by presenting neural network architectures for several structured data types.
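To make the parameter-sharing idea concrete, here is a minimal sketch (assuming PyTorch; the class name is illustrative) of the classic equivariant linear layer for the permutation group acting on a set of vectors. It is one instance of the layers the abstract describes: the full linear map collapses to two shared weight matrices, one per orbit of the group action.

```python
import torch
import torch.nn as nn

class PermEquivariantLinear(nn.Module):
    """Permutation-equivariant linear layer on sets (DeepSets-style).

    Parameter sharing induced by the permutation group: instead of a full
    (n*d_in) x (n*d_out) weight matrix, only two weight matrices are
    needed -- one applied to each element, one to the set sum.
    """
    def __init__(self, d_in, d_out):
        super().__init__()
        self.w_elem = nn.Linear(d_in, d_out)               # shared across elements
        self.w_pool = nn.Linear(d_in, d_out, bias=False)   # acts on the pooled sum

    def forward(self, x):                    # x: (batch, n, d_in)
        pooled = x.sum(dim=1, keepdim=True)  # permutation-invariant summary
        return self.w_elem(x) + self.w_pool(pooled)  # equivariant in the set dim
```

Stacking such layers and finishing with an invariant pooling yields a network that is invariant to the group by construction; the talk's framework generalizes this recipe to other groups and data types.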
Eran Treister
Ben-Gurion University
Talk: On Learning and Estimating Graph Operators in Graph Neural Networks and Graph Signal Processing
Graph Neural Networks (GNNs) are becoming a main tool for solving various graph-related problems through data-driven learning. At the heart of the neural process lies a graph propagation operator, the ingredient that specializes GNNs to graph data. In this talk, we will present an elegant method for defining and learning graph operators in GNNs, inspired by numerical discretizations of partial differential equations. The method combines diffusion, reaction, and advection operators to support the various behaviors needed to model the given data, depending on its characteristics.
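As a rough illustration of the PDE-discretization viewpoint (not the talk's actual architecture; the class name and the explicit-Euler choice are assumptions, and advection is omitted for brevity), one layer can be read as a time step of a diffusion-reaction equation on the graph:

```python
import torch
import torch.nn as nn

class DiffusionReactionLayer(nn.Module):
    """One explicit-Euler step of a diffusion-reaction PDE on a graph.

    Illustrative only: the graph Laplacian acts as the diffusion operator,
    and a pointwise MLP plays the role of the reaction term.
    """
    def __init__(self, d, step=0.1):
        super().__init__()
        self.step = step
        self.reaction = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))

    def forward(self, x, lap):       # x: (n, d) node features, lap: (n, n) Laplacian
        diffusion = -lap @ x         # smooths features along edges
        return x + self.step * (diffusion + self.reaction(x))
```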
In the second part of the talk, I will present a method for estimating a graph operator from given data through a statistical framework and optimization. This problem is instrumental in the field of graph signal processing. The estimation of such graph operators, which promote smoothness across edges, can also be used to regularize other inverse problems or, in the future, to provide graph operators for GNNs. We show an efficient optimization method for estimating the graph operator, based on a second-order approximation and non-convex regularization.
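For intuition, a toy smoothness-based formulation of graph-operator estimation might look as follows, where x is an (n, d) matrix of d graph signals on n nodes. This is a first-order sketch with illustrative penalties; the talk's method relies on a second-order approximation and non-convex regularization, which this does not reproduce.

```python
import torch

def estimate_laplacian(x, alpha=1.0, beta=1.0, steps=500, lr=1e-2):
    """Toy smoothness-based graph learning (not the talk's actual method).

    Minimizes tr(X^T L X) -- smoothness of the signals across edges -- plus
    simple penalties, over non-negative adjacencies W with L = diag(W1) - W.
    """
    n = x.shape[0]
    w = torch.rand(n, n, requires_grad=True)       # raw edge weights
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        ws = 0.5 * (w + w.T).clamp(min=0)          # symmetric, non-negative
        ws = ws * (1 - torch.eye(n))               # no self-loops
        lap = torch.diag(ws.sum(1)) - ws           # combinatorial Laplacian
        loss = (torch.trace(x.T @ lap @ x)                   # smoothness
                - alpha * torch.log(ws.sum(1) + 1e-8).sum()  # avoid isolated nodes
                + beta * ws.pow(2).sum())                    # control edge density
        opt.zero_grad()
        loss.backward()
        opt.step()
    return lap.detach()
```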
Nadav Dym
Technion
Talk: Mathematically principled aggregation
Aggregation functions are a basic building block of graph neural networks: they map multisets of vectors to a single vector using permutation-invariant operations such as summation, averaging, maximization, or sorting. We will discuss two basic mathematical questions that aim to characterize how well the output vector of the aggregation function represents the initial multiset: (a) injectivity: can we build aggregation functions guaranteed to map multisets injectively to the output vector? and (b) bi-Lipschitzness: can the aggregation functions preserve distances in the original multiset space?
The results of our analysis suggest a new variant of sort-pooling as the most promising aggregation from a mathematical perspective. We will complement this with promising initial experimental results, showing the advantage of this sort-pooling mechanism for long-range graph learning and other tasks.
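As a sketch of the sort-based idea (not the talk's exact construction; function and parameter names are illustrative), one can project the multiset onto random directions and sort the values along each direction. Sorting is permutation invariant, and related constructions with sufficiently many generic directions can be shown to be injective, and with more care, bi-Lipschitz.

```python
import torch

def sort_aggregation(x, num_projections=8, generator=None):
    """Sort-based embedding of a fixed-size multiset of vectors.

    Projects each vector onto random directions, sorts the values along
    each direction, and flattens; the result is independent of the
    ordering of the input rows.
    """
    n, d = x.shape                                            # multiset as (n, d)
    a = torch.randn(d, num_projections, generator=generator)  # random directions
    projected = x @ a                       # (n, num_projections)
    sorted_vals, _ = projected.sort(dim=0)  # invariant to input ordering
    return sorted_vals.flatten()            # (n * num_projections,)
```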
Chaim Baskin
Ben-Gurion University
Talk: Sequential Signal Mixing Aggregation for Message Passing Graph Neural Networks
Message Passing Graph Neural Networks (MPGNNs) have emerged as the preferred method for modeling complex interactions across diverse graph entities. While the theory of such models is well understood, their aggregation module has not received sufficient attention. Sum-based aggregators have solid theoretical foundations regarding their separation capabilities. However, practitioners often prefer more complex aggregations and mixtures of diverse aggregations. In this work, we unveil a possible explanation for this gap: we claim that sum-based aggregators fail to "mix" features belonging to distinct neighbors, preventing them from succeeding at downstream tasks. To address this, we introduce SSMA, a novel plug-and-play aggregation for MPGNNs. SSMA treats the neighbor features as 2D discrete signals and sequentially convolves them, inherently enhancing the ability to mix features attributed to distinct neighbors. Extensive experiments show that combining SSMA with well-established MPGNN architectures yields substantial performance gains across various benchmarks, achieving new state-of-the-art results in many settings.
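To see why convolution mixes neighbors while summation does not, consider this loose 1D circular-convolution sketch (not the actual SSMA design, which works with 2D signals): convolving the neighbor feature vectors multiplies their spectra, so the output contains products of features from distinct neighbors.

```python
import torch

def convolve_neighbors(neigh):
    """Loose sketch of convolution-as-aggregation (not the actual SSMA design).

    Circularly convolving the neighbor feature vectors, one after another,
    multiplies their spectra, so the output contains products of features
    from *different* neighbors -- mixing that a plain sum cannot produce.
    """
    spectrum = torch.fft.fft(neigh, dim=-1)  # neigh: (num_neighbors, d)
    mixed = spectrum[0]
    for s in spectrum[1:]:                   # sequential convolution = spectral product
        mixed = mixed * s
    return torch.fft.ifft(mixed).real        # back to feature space, shape (d,)
```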
Ethan Fetaya
Bar-Ilan University
Talk: Machine learning in deep weight spaces
With the widespread use of deep neural networks, both as machine learning models and as representations of objects, there is increasing interest in designing models that operate directly on the network weight space. In this talk, we will discuss the challenges of learning on deep weight spaces, how incorporating symmetries into the model plays a critical role in overcoming these challenges, and what challenges still need to be addressed when learning on deep weight spaces.
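The central symmetry here is easy to state in code: permuting the hidden neurons of an MLP, together with the corresponding rows and columns of its weights, does not change the function the network computes. A minimal sanity check (assuming PyTorch; the helper name is illustrative):

```python
import torch

def permute_hidden(w1, b1, w2, perm):
    """Apply the neuron-permutation symmetry of a two-layer MLP.

    For f(x) = W2 @ relu(W1 @ x + b1), permuting the hidden units -- rows
    of W1 and b1, columns of W2 -- leaves the computed function unchanged.
    """
    return w1[perm], b1[perm], w2[:, perm]

# Sanity check: the permuted weights compute exactly the same function.
w1, b1, w2 = torch.randn(16, 8), torch.randn(16), torch.randn(4, 16)
x = torch.randn(8)
f = lambda a, b, c: c @ torch.relu(a @ x + b)
pw1, pb1, pw2 = permute_hidden(w1, b1, w2, torch.randperm(16))
assert torch.allclose(f(w1, b1, w2), f(pw1, pb1, pw2), atol=1e-6)
```

A model that learns on weight spaces must produce the same prediction for both weight tuples, which is why building this symmetry into the architecture is so consequential.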
Ron Levie
Technion
Talk: A Functional Basis for Graph Neural Networks
Graph neural networks (GNNs) were historically defined constructively, as specific types of computations on graphs, without a formal definition of their underlying function spaces. In this talk, we will present a comprehensive functional basis for GNNs, describing GNNs as continuous functions over well-defined compact metric spaces. This new theory elegantly leads to machine learning results, such as universal approximation and generalization theorems. We will also explore how this functional framework leads to practical applications, like novel GNNs that efficiently scale to large graphs.