Citation: Graph Neural Networks are to the modeling of relations between entities in nature what the Fibonacci sequence is to the modeling of structured entities in nature.
Graph Neural Networks Description
A) Graph Neural Networks
Graph Neural Networks (GNNs) are a class of deep learning methods that take graph data as input. GNNs can be divided into Convolutional Graph Neural Networks, Recurrent Graph Neural Networks, Graph Auto-Encoders, and Spatial-Temporal Graph Neural Networks.
Convolutional Graph Neural Networks can be decomposed into two branches: spectral-based GNNs, which apply graph spectral theory to learn node representations, and spatial-based GNNs, which rely on the local-dependence assumption of graph-structured data and define graph convolution in the spatial domain as the aggregation and transformation of local information. Thus, spectral-based GNNs extend classical signal processing to graph data, borrowing from graph signal analysis to design graph filters in the frequency domain and realize graph convolution, whereas spatial-based GNNs directly describe the pairwise relations between objects.
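The spatial-based view described above can be illustrated with a minimal sketch: one graph-convolution layer that aggregates neighbor features through a normalized adjacency matrix and then applies a learned transformation. The function and variable names here are illustrative, not from any specific library.

```python
import numpy as np

def spatial_gcn_layer(A, X, W):
    """One spatial graph-convolution layer (a sketch): aggregate
    neighbor features, then transform them with a weight matrix.

    A: (n, n) adjacency matrix, X: (n, d) node features,
    W: (d, h) weight matrix.
    """
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    deg = A_hat.sum(axis=1)                   # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))  # symmetric normalization
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # normalized adjacency
    return np.maximum(A_norm @ X @ W, 0.0)    # aggregate, transform, ReLU

# Toy graph: 3 nodes, edges 0-1 and 1-2
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
X = np.eye(3)        # one-hot node features
W = np.ones((3, 2))  # dummy weights for illustration
H = spatial_gcn_layer(A, X, W)
print(H.shape)  # (3, 2)
```

Each output row mixes a node's own features with those of its immediate neighbors, which is exactly the "aggregating and transforming local information" step of spatial graph convolution.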
B) Hypergraph Neural Networks
Although GNNs show excellent performance in handling graph data, potential higher-order relations among objects are lost if they are simply represented by a graph. Hypergraph Neural Networks are recognized as a flexible modeling tool for complex and higher-order data; a hypergraph comprises a vertex set and a hyperedge set. Objects are considered vertices, and multiple vertices are connected to form a hyperedge representing a higher-order relation.
Graph Learning
A) Transductive Graph Learning
Transductive learning involves making predictions on partially labeled datasets, using the structure of the data to predict the remaining unlabeled instances. Moreover, transductive learning does not have a clear distinction between the training and testing phases, as it leverages both labeled and unlabeled data simultaneously (Prummel et al. 2023; Chang et al. 2025).
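A classic transductive algorithm on graphs is label propagation: known labels diffuse over the edges while labeled nodes stay clamped, so predictions are made for exactly the unlabeled nodes present at training time. This is a minimal numpy sketch, not the method of the cited papers.

```python
import numpy as np

def label_propagation(A, y, labeled_mask, n_iters=50):
    """Transductive label propagation (a sketch).

    A: (n, n) adjacency matrix.
    y: (n, c) one-hot labels, all-zero rows for unlabeled nodes.
    labeled_mask: boolean array marking the labeled nodes.
    """
    deg = A.sum(axis=1, keepdims=True)
    P = A / np.maximum(deg, 1e-12)             # row-stochastic transitions
    F = y.copy()
    for _ in range(n_iters):
        F = P @ F                              # diffuse label scores
        F[labeled_mask] = y[labeled_mask]      # clamp the known labels
    return F.argmax(axis=1)

# Chain graph 0-1-2-3; node 0 labeled class 0, node 3 labeled class 1
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0],
              [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
y = np.zeros((4, 2)); y[0, 0] = 1; y[3, 1] = 1
mask = np.array([True, False, False, True])
print(label_propagation(A, y, mask))  # [0 0 1 1]
```

Note that labeled and unlabeled nodes are processed together in one pass, which is precisely the absent train/test split the paragraph describes.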
B) Inductive Graph Learning
Inductive learning is a popular paradigm that involves training machine learning models on labeled datasets to make predictions on new, unseen data. This approach has been widely applied in tasks such as image classification, object detection, and semantic segmentation (Prummel et al. 2023; Chang et al. 2025).
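In the graph setting, inductive models such as GraphSAGE-style layers learn aggregation weights that depend only on feature dimensions, not on a fixed node set, so the trained layer transfers to graphs never seen during training. The sketch below illustrates this with a mean aggregator; weights and graphs are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 2))  # learned once, reusable on any graph

def sage_mean_layer(A, X):
    """Inductive GraphSAGE-style layer (a sketch): concatenate each
    node's features with the mean of its neighbors' features, then
    apply the shared weight matrix W."""
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)
    neigh = (A @ X) / deg                      # mean of neighbor features
    return np.maximum(np.concatenate([X, neigh], axis=1) @ W, 0.0)

# Apply the SAME layer to two different graphs (train-time and unseen)
A1 = np.array([[0, 1], [1, 0]], dtype=float)
X1 = rng.normal(size=(2, 2))
A2 = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
X2 = rng.normal(size=(3, 2))
print(sage_mean_layer(A1, X1).shape, sage_mean_layer(A2, X2).shape)  # (2, 2) (3, 2)
```

The same `W` serves a 2-node and a 3-node graph, which is what makes the model inductive rather than transductive.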
C) Contrastive Graph Learning
Self-supervised learning is a key method for training deep learning models when labeled data is scarce or unavailable. While graph machine learning holds great promise across various domains, designing effective pretext tasks for self-supervised graph representation learning is challenging. Contrastive learning, a popular approach in graph self-supervised learning, leverages positive and negative pairs to compute a contrastive loss function. Current graph contrastive learning methods often attempt to fully exploit structural patterns and node similarities.
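A common instance of the positive/negative-pair loss mentioned above is InfoNCE: embeddings of two augmented views of the same node form a positive pair, and all other nodes in the batch act as negatives. The following is a minimal numpy sketch under that assumption, not a specific published method.

```python
import numpy as np

def info_nce_loss(z1, z2, tau=0.5):
    """InfoNCE-style contrastive loss (a sketch).

    z1, z2: (n, d) embeddings of two views of the same n nodes;
    row i of z1 and row i of z2 are a positive pair, every other
    row is a negative. tau is the softmax temperature.
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = (z1 @ z2.T) / tau                  # pairwise cosine similarities
    # Cross-entropy with the diagonal (positive pairs) as the target
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
print(round(info_nce_loss(z, z), 3))
```

Minimizing this loss pulls the two views of each node together while pushing apart the embeddings of different nodes, which is the core mechanism of graph contrastive learning.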
References
W. Prummel, J. Giraldo, A. Zakharova, T. Bouwmans, "Inductive Graph Neural Networks for Moving Object Segmentation", IEEE International Conference on Image Processing, ICIP 2023, Kuala Lumpur, Malaysia, October 2023.
J. Chang, H. Ren, Z. Li, Y. Xu, T. Lai, "A Unified Transductive and Inductive Learning Framework for Few-Shot Learning using Graph Neural Networks", Applied Soft Computing, 2025.