We draw an analogy between Stochastic Gradient Descent (SGD) and the dynamics of a thermodynamic system, defining a temperature for neural network parameters that is proportional to their average squared gradient over equilibrium epochs.
Different parameter groups exhibit different temperatures, which we use as a pruning criterion: “cold” weights (low average squared gradient) can be removed, while “hot” weights should be retained.
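The temperature-based criterion above can be sketched numerically. This is a hypothetical illustration with NumPy, not the paper's implementation: the array names, the synthetic gradients, and the pruning fraction are all assumptions; only the core idea (temperature as the mean squared gradient over equilibrium epochs, pruning the coldest parameters) comes from the text.

```python
import numpy as np

# Synthetic stand-in for per-parameter gradients collected over epochs
# after the loss has equilibrated: shape (n_epochs, n_params).
rng = np.random.default_rng(0)
n_epochs, n_params = 50, 8
grads = rng.normal(scale=np.linspace(0.1, 1.0, n_params),
                   size=(n_epochs, n_params))

# Temperature of each parameter: mean squared gradient over epochs.
temperature = (grads ** 2).mean(axis=0)

# Prune the "cold" fraction (hypothetical choice of 25%): parameters
# below the temperature threshold are removed, "hot" ones are retained.
prune_fraction = 0.25
threshold = np.quantile(temperature, prune_fraction)
keep_mask = temperature > threshold
```

In a real training loop the gradient statistics would be accumulated per parameter group during the equilibrium phase rather than drawn synthetically.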
We conduct a systematic comparison between Probabilistic Graphical Models (PGMs) and Graph Neural Networks (GNNs), examining how the two frameworks leverage input features and graph structure.
We find that PGMs outperform GNNs when input features are low-dimensional or noisy and when graph heterophily increases, both common scenarios in real-world data.
We exploit the relational inductive bias of Graph Neural Networks (GNNs) to segment complex and irregular microstructure geometries in multi-phase 3D volumes reconstructed via X-ray computed tomography (XCT). This task is particularly challenging, as segmentation is performed on experimental data after training on low-resemblance synthetic data.
GNNs provide a promising alternative to conventional Convolutional Neural Network (CNN) approaches, offering greater parameter efficiency and improved adaptability in data-scarce experimental settings.
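To make the relational setup concrete, here is a minimal, hypothetical sketch of how a 3D volume can be recast as a graph for a GNN: nodes carry voxel intensities and edges follow 6-connectivity. The toy volume, array names, and the single untrained mean-aggregation step are illustrative assumptions, not the actual pipeline used on XCT data.

```python
import numpy as np

# Toy 2x2x2 "volume" standing in for an XCT reconstruction.
vol = np.arange(8, dtype=float).reshape(2, 2, 2)

# Node features: one intensity value per voxel.
nodes = vol.reshape(-1, 1)
idx = np.arange(vol.size).reshape(vol.shape)

# Edges: 6-connectivity, i.e. face-adjacent voxels along each axis.
edges = []
for axis in range(3):
    src = np.take(idx, range(vol.shape[axis] - 1), axis=axis).ravel()
    dst = np.take(idx, range(1, vol.shape[axis]), axis=axis).ravel()
    edges += list(zip(src, dst))
edge_index = np.array(edges).T  # shape (2, n_edges)

# One untrained mean-neighbour aggregation step, just to show the
# message-passing pattern a GNN layer would learn weights for.
agg = np.zeros_like(nodes)
deg = np.zeros((nodes.shape[0], 1))
for s, d in edge_index.T:
    agg[s] += nodes[d]; agg[d] += nodes[s]
    deg[s] += 1; deg[d] += 1
smoothed = agg / np.maximum(deg, 1)
```

A learned GNN would replace the plain mean with parameterised message and update functions, and a segmentation head would classify each node into a phase label.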
Coming soon!