# Constraint-based graph network simulator

# Model comparison on Rope

Rope simulations have 5-10 nodes and 160 time steps.

All graph-network models use two message-passing steps to match the number of model parameters. For each model type, we choose the random seed with the median performance.

The constraint-based models (C-GNS-GD and C-GNS-FP), as well as the Iterative model, preserve the shape of the rope and oscillate in sync with the ground-truth simulation.

In the Forward GNN model the rope nodes oscillate, but the shape of the rope is not preserved. This demonstrates that two message-passing steps are not sufficient for the Forward GNN to solve the rope simulation, while they are enough for the constraint-based models.

C-MLP models work well in some rollouts, but in others the rope falls apart.

# Model comparison on Bouncing Balls

Bouncing Balls simulations have 5-10 nodes and 160 time steps.

All graph-network models use one message-passing step to match the number of model parameters. For each model type, we choose the random seed with the median performance.

C-GNS Gradient Descent, the Iterative model, and the Forward GNN all learned to bounce off the walls and off other objects. In the Forward GNN rollouts one can notice that the model does not account for the different masses of the objects (e.g. the collision of the orange and brown balls, or of the pink and orange balls, in the first rollout). For the Iterative GNN, the balls sometimes overlap before the collision is resolved. For all models, the ball trajectories eventually diverge from the ground truth, demonstrating that it is hard to predict the correct bounce trajectory.

C-GNS Fast Projections and C-MLP Gradient Descent: collisions between the balls are not handled correctly, and the balls pass through each other.

C-MLP Fast Projections: the model learned only the forward motion, and the balls pass through the walls.

# Model comparison on Bouncing Rigids

Bouncing Rigids simulations have 3-6 nodes and 160 time steps.

All graph-network models use one message-passing step to match the number of model parameters. For each model type, we choose the random seed with the median performance.

C-GNS Gradient Descent and the Iterative model preserve the shape of the object and model the bounce off the wall. In the Forward GNN with the same number of message-passing layers, the predicted nodes diverge from the original shape.

C-GNS Fast Projection and C-MLP Fast Projection preserve neither the shape of the object nor the boundaries of the box. C-MLP Gradient Descent also loses the shape of the rigid structure.

# Model comparison on Box Bath

Box Bath simulations have 64 nodes representing the cube and 960 nodes representing the water particles, with 150 time steps.

All graph-network models use one message-passing step.

C-GNS Gradient Descent and the Iterative model adequately model both the fluid and the cube. For C-GNS Fast Projection, the predicted fluid motion is not smooth, particularly towards the end of the simulation.

The Forward GNN model succeeds in modelling the fluid, but does not preserve the shape of the cube.

# Combining the learned constraints with extra obstacle constraints

The model optimises the sum of the learned constraint and an extra constraint representing the obstacle, and produces plausible trajectories. The extra constraint is added at test time only.

The video demonstrates the ground-truth and the model rollout without the additional constraints as well as rollouts with different obstacle constraints.

**Top video**: combining the learned constraint with an additional obstacle constraint and a length-preservation constraint. The model respects both the obstacle constraint and the lengths of the rope links, and produces plausible dynamics.

**Bottom video**: combining the learned constraint with the obstacle constraint only. The model modifies the lengths of the rope links to pass around the obstacle.
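As a sketch of how such test-time composition might work (a toy illustration, not the paper's implementation: the quadratic stand-in for the learned constraint, the circular obstacle penalty, and all function names here are assumptions), the solver simply sums the two constraint functions and runs gradient descent on the proposed next state:

```python
import numpy as np

def learned_constraint(y, y_prev):
    # Placeholder for the learned constraint network: a toy penalty that
    # prefers the next state to stay near the previous one.
    return np.sum((y - y_prev) ** 2)

def obstacle_constraint(y, center, radius):
    # Hand-designed extra constraint: penalise nodes inside a circular obstacle.
    d = np.linalg.norm(y - center, axis=-1)
    return np.sum(np.maximum(radius - d, 0.0) ** 2)

def total_constraint(y, y_prev, center, radius):
    # Test-time composition: the sum of the learned and the extra constraint.
    return learned_constraint(y, y_prev) + obstacle_constraint(y, center, radius)

def solve_step(y_prev, center, radius, n_iters=50, lr=0.2, eps=1e-5):
    # Minimise the combined constraint over the next state with plain
    # gradient descent (forward-difference gradients, for this toy only).
    y = y_prev.copy()
    for _ in range(n_iters):
        grad = np.zeros_like(y)
        flat, g = y.ravel(), grad.ravel()  # views into y and grad
        base = total_constraint(y, y_prev, center, radius)
        for i in range(flat.size):
            flat[i] += eps
            g[i] = (total_constraint(y, y_prev, center, radius) - base) / eps
            flat[i] -= eps
        y = y - lr * grad
    return y

# A single node starting inside the obstacle is pushed away from it,
# while the learned term keeps it near its previous position.
y_prev = np.array([[0.0, 0.0]])
center, radius = np.array([0.1, 0.0]), 0.5
y_next = solve_step(y_prev, center, radius)
```

Because the extra term enters only through the summed objective, no retraining is involved: any differentiable penalty could be added in the same way.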

# Producing more accurate simulations at test time using more solver iterations

**Zero-shot generalization to a larger rope**

The C-GNS-GD model trained on ropes with 5-10 nodes can generalize to a longer rope of 20 nodes.

**More solver iterations at test time**

We take the C-GNS-GD model trained with 5 constraint solver iterations and vary the number of solver iterations at test time. The simulation is closer to the ground truth with more solver iterations.
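A minimal illustration of this property (a toy quadratic constraint with an analytic gradient; the names and the constraint form are assumptions, not the paper's model): the solver is just a fixed number of gradient descent steps on the constraint, so the iteration count can be changed freely at test time, and more iterations drive the constraint, and hence the state error, lower:

```python
import numpy as np

def constraint(y, target):
    # Toy stand-in for the learned constraint, minimal at the "true" next state.
    return np.sum((y - target) ** 2)

def solve(y_init, target, n_iters, lr=0.1):
    # C-GNS-GD style solver: a fixed number of gradient descent steps on the
    # constraint; n_iters is a free parameter at test time.
    y = y_init.copy()
    for _ in range(n_iters):
        y = y - lr * 2.0 * (y - target)  # analytic gradient of the toy constraint
    return y

y_init = np.zeros(3)
target = np.array([1.0, -2.0, 0.5])
errors = [np.linalg.norm(solve(y_init, target, n) - target) for n in (1, 5, 20)]
# errors shrink monotonically as the number of solver iterations grows
```

Nothing about the solver is tied to the 5 iterations used during training; the same update rule simply runs longer.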

We used a single C-GNS-GD model, trained on ropes with 5-10 nodes using 5 constraint solver iterations, for all the visualisations in this section.

**Top video**: rollouts on the ropes with 5-10 nodes that the model was trained on.

**Bottom video**: rollouts on ropes of 20 nodes (>2x longer than during training).

Grey: ground truth.

Red: model predictions with different numbers of constraint solver iterations at test time.