Learning mesh-based simulation with Graph Networks
Tobias Pfaff*, Meire Fortunato*, Alvaro Sanchez-Gonzalez*, Peter Battaglia
ICLR 2021 outstanding paper
Paper preprint: arxiv.org/abs/2010.03409
ICLR talk: iclr.cc/virtual/2021/poster/2837
Code and datasets: github.com/deepmind/deepmind-research/tree/master/meshgraphnets
All Experiments
MeshGraphNets Rollouts
FlagDynamic
cloth dynamics w/ self-collisions
2767 nodes (avg.), adaptively remeshed
250 time steps
ground truth simulator: ArcSim
SphereDynamic
cloth dynamics w/ self+obstacle collisions
1373 nodes (avg.), adaptively remeshed
500 time steps
ground truth simulator: ArcSim
DeformingPlate
structural mechanics
color: von Mises stress
1271 nodes (avg.)
400 time steps
ground truth simulator: COMSOL
CylinderFlow
incompressible fluid dynamics
color: velocity in x-direction
1885 nodes (avg.)
600 time steps
ground truth simulator: COMSOL
Airfoil
compressible fluid dynamics
color: velocity in x-direction
5233 nodes
600 time steps
ground truth simulator: SU2
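All of the datasets above live on irregular simulation meshes, which MeshGraphNets processes with an encode-process-decode graph network. Below is a minimal numpy sketch of one processor (message-passing) step: single-layer MLPs and a shared latent size stand in for the paper's deeper networks, and all parameter names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # shared latent size for nodes and edges (a simplification)

def mlp(x, w, b):
    # One linear layer + ReLU, standing in for the paper's MLPs.
    return np.maximum(x @ w + b, 0.0)

def processor_step(nodes, edges, senders, receivers, p):
    # Edge update: concat edge latent with sender/receiver node latents.
    e_in = np.concatenate([edges, nodes[senders], nodes[receivers]], axis=-1)
    edges = edges + mlp(e_in, p["we"], p["be"])        # residual connection
    # Node update: sum incoming edge messages, concat with node latent.
    agg = np.zeros((nodes.shape[0], D))
    np.add.at(agg, receivers, edges)                   # scatter-add messages
    n_in = np.concatenate([nodes, agg], axis=-1)
    return nodes + mlp(n_in, p["wn"], p["bn"]), edges  # residual connection

# Toy graph: 4 nodes, 4 directed edges forming a cycle.
senders, receivers = np.array([0, 1, 2, 3]), np.array([1, 2, 3, 0])
nodes = rng.normal(size=(4, D))
edges = rng.normal(size=(4, D))
params = {"we": rng.normal(size=(3 * D, D)) * 0.1, "be": np.zeros(D),
          "wn": rng.normal(size=(2 * D, D)) * 0.1, "bn": np.zeros(D)}
nodes, edges = processor_step(nodes, edges, senders, receivers, params)
print(nodes.shape, edges.shape)  # (4, 8) (4, 8)
```

The full model stacks many such steps before a decoder MLP reads out per-node dynamics (e.g. acceleration), which an integrator turns into the next mesh state.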
Generalization experiments
Airfoil: faster & steeper
Model trained on dataset Airfoil
training range:
angle of attack: [-25, 25]
Mach number: [0.2, 0.7]
extrapolation test range:
steeper: angle of attack [-35, 35]
faster: Mach number [0.7, 0.9]
TL;DR:
We retain plausible behavior when moderately extrapolating outside the training range.
FishFlag (鯉のぼり, "carp streamer")
Model trained on dataset FlagDynamic
self-collisions, learned remeshing
TL;DR:
We obtain accurate results when testing on shapes unseen in training; both forward model and sizing field model generalize to new scenes.
HippoFlag 🦛
Model trained on dataset FlagDynamic
(wind speed varied, but constant within trajectories)
self-collisions, learned remeshing
TL;DR:
We can smoothly vary wind direction and speed at test time, even though varying wind speeds/directions were never observed in training.
WindSock
Model trained on dataset FlagDynamic
self-collisions, learned remeshing
training set: 2.7k nodes, simple rectangular flag
inference: 20k nodes, cylinder with tassels
TL;DR:
We can apply a model trained on a simple setup to significantly bigger and more complex scenarios at test time.
Comparisons & Analysis
Dynamic Remeshing
Dataset FlagDynamic
Comparison: MeshGraphNets with
ground truth meshing
learned remeshing
learned remeshing w/ estimated targets
TL;DR:
We obtain good results with all remeshing variants.
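Learned remeshing works by predicting a per-node sizing tensor S; following the paper's criterion, an edge is considered too long (and gets split) when u^T S u > 1, with u the mesh-space edge vector and S averaged over the edge's endpoints. A minimal sketch of that test:

```python
import numpy as np

def needs_split(xi, xj, Si, Sj):
    """Sizing-field edge-split test: split when u^T S u > 1,
    with u the mesh-space edge vector and S the averaged 2x2
    sizing tensor of the two endpoints."""
    u = xi - xj
    S = 0.5 * (Si + Sj)
    return float(u @ S @ u) > 1.0

iso = np.eye(2) * 4.0  # isotropic sizing tensor requesting edges <= 0.5 long
print(needs_split(np.array([0.0, 0.0]), np.array([0.6, 0.0]), iso, iso))  # True
print(needs_split(np.array([0.0, 0.0]), np.array([0.4, 0.0]), iso, iso))  # False
```

An anisotropic S requests different resolution along different directions, which is what lets the remesher concentrate nodes along wrinkles and folds.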
Comparison: GNS
Dataset FlagSimple
fixed, regular mesh, 1579 nodes
Comparison:
GNS w/ 1 step of history
GNS w/ 5 steps of history
GNS w/ rest positions
MeshGraphNets
TL;DR:
GNS fails on cloth; GNS with mesh positions works on regular meshes, but is more prone to artifacts and does not work on irregular meshes.
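The key difference from GNS is that MeshGraphNets edges carry relative positions in both world space and mesh space, so the model can distinguish nodes that are close in the world but far apart on the cloth surface. A sketch of such an edge-feature encoding (the exact feature layout here is an assumption, not the released implementation):

```python
import numpy as np

def edge_features(world_pos, mesh_pos, senders, receivers):
    """Per-edge features: relative world-space and mesh-space
    position vectors plus their norms."""
    dw = world_pos[senders] - world_pos[receivers]   # 3-D world displacement
    dm = mesh_pos[senders] - mesh_pos[receivers]     # 2-D mesh displacement
    return np.concatenate(
        [dw, np.linalg.norm(dw, axis=-1, keepdims=True),
         dm, np.linalg.norm(dm, axis=-1, keepdims=True)], axis=-1)

world = np.array([[0., 0., 0.], [1., 0., 0.], [1., 1., 0.]])
mesh = np.array([[0., 0.], [1., 0.], [1., 1.]])
senders, receivers = np.array([0, 1]), np.array([1, 2])
feats = edge_features(world, mesh, senders, receivers)
print(feats.shape)  # (2, 7): 3 + 1 world features, 2 + 1 mesh features
```

Dropping the mesh-space part recovers a GNS-style world-space-only encoding, which is the ablation compared above.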
Comparison: GCN-MLP
Dataset Airfoil
fully dynamic prediction
Comparison:
GCN-MLP (GCN with improved architecture; the best variant we tried)
MeshGraphNets
TL;DR:
Even improved GCN variants fail to properly predict the dynamics on harder datasets.
Comparison: UNet
Dataset CylinderFlow
resampled on 128x128 grid
Comparison:
UNet from [Thuerey et al. 2020]
MeshGraphNets
TL;DR:
The UNet models the overall dynamics well, but is less accurate in the wake region.
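Feeding a grid-based baseline like the UNet requires resampling the irregular mesh onto the 128x128 grid. A simple nearest-neighbour version of that resampling (the interpolation scheme actually used is an assumption here):

```python
import numpy as np

def mesh_to_grid(points, values, res=128, lo=0.0, hi=1.0):
    """Nearest-neighbour resampling of per-node mesh values
    onto a regular res x res grid over [lo, hi]^2."""
    xs = np.linspace(lo, hi, res)
    gx, gy = np.meshgrid(xs, xs, indexing="xy")
    grid_pts = np.stack([gx.ravel(), gy.ravel()], axis=-1)
    # Squared distance from every grid point to every mesh node.
    d2 = ((grid_pts[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    return values[d2.argmin(axis=1)].reshape(res, res)

points = np.array([[0.0, 0.0], [1.0, 1.0]])  # two mesh nodes
values = np.array([1.0, 2.0])                # e.g. x-velocity at each node
grid = mesh_to_grid(points, values)
print(grid.shape)  # (128, 128)
```

This resampling step is also where grid baselines lose resolution near small features such as the wake, which is where the accuracy gap shows up.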
Comparison: UNet
Dataset Airfoil
resampled on 128x128 grid
(here: showing the 107x107 center region)
Comparison:
UNet from [Thuerey et al. 2020]
MeshGraphNets
TL;DR:
The UNet models the overall dynamics well, but is less accurate around the airfoil and more prone to artifacts.
Comparison: GCN (SteadyState)
Dataset AirfoilSteady
single-step, steady-state prediction
Comparison:
GCN-baseline from [Belbute-Peres et al. 2020]
MeshGraphNets
TL;DR:
Both methods show good results on this simpler steady-state prediction task.
Long rollout
Dataset FlagSimple
trained on trajectories with 400 steps
rolled out for 40000 steps
TL;DR:
Our models remain stable even for extremely long rollouts.
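Long trajectories like the 40000-step one are produced autoregressively: the model predicts one step and its output is fed back as the next input, so errors must stay bounded over thousands of iterations. A minimal sketch of that loop, with a hypothetical one-step `model` callable:

```python
import numpy as np

def rollout(model, state, n_steps):
    """Autoregressive rollout: repeatedly feed the model's
    prediction back in as the next input state."""
    traj = [state]
    for _ in range(n_steps):
        state = model(state)  # one-step prediction
        traj.append(state)
    return np.stack(traj)

# Toy "model": damped decay toward zero, standing in for the learned GNN.
damp = lambda s: 0.99 * s
traj = rollout(damp, np.ones(3), 100)
print(traj.shape)  # (101, 3)
```

In training, noise is injected into the inputs so the model learns to correct the small errors it will encounter when rolled out on its own predictions.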