Both models were evaluated on the same held-out 20% DFT test set. Results:
DFT scratch model: MAE 0.534 eV, RMSE 0.734 eV
BVSE → DFT fine-tuned model: MAE 0.260 eV, RMSE 0.366 eV
Transfer learning reduces DFT test MAE by approximately 51%.
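The reported reduction can be checked with a short NumPy sketch; the metric functions and the array names are illustrative, and the numeric values are the reported test-set results:

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error (eV)."""
    return np.mean(np.abs(y_true - y_pred))

def rmse(y_true, y_pred):
    """Root-mean-square error (eV)."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

# Relative MAE reduction from transfer learning, using the
# reported test-set values (eV).
mae_scratch, mae_finetuned = 0.534, 0.260
reduction = 100 * (mae_scratch - mae_finetuned) / mae_scratch
print(f"MAE reduction: {reduction:.1f}%")  # ~51.3%
```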
The parity plot compares predicted vs true DFT migration barriers.
Observations:
The scratch model shows large vertical spread about the parity line.
The fine-tuned model clusters tightly around the 45-degree line.
High-barrier predictions improve markedly after transfer learning.
Together these indicate improved calibration and generalization.
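A parity plot of this kind can be sketched with matplotlib; the arrays below are synthetic stand-ins (with residual spreads matching the reported RMSEs) and should be replaced by the actual test-set barriers and predictions:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, render to file only
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical data for illustration: substitute the real arrays of
# true DFT barriers and each model's predictions (all in eV).
rng = np.random.default_rng(42)
y_true = rng.uniform(0.2, 3.0, 150)
y_scratch = y_true + rng.normal(0.0, 0.73, 150)    # wide spread
y_finetuned = y_true + rng.normal(0.0, 0.37, 150)  # tight spread

fig, ax = plt.subplots(figsize=(5, 5))
ax.scatter(y_true, y_scratch, s=12, alpha=0.5, label="Scratch")
ax.scatter(y_true, y_finetuned, s=12, alpha=0.5, label="Fine-tuned")
lims = [0.0, 3.5]
ax.plot(lims, lims, "k--", lw=1, label="Parity")
ax.set_xlabel("True DFT barrier (eV)")
ax.set_ylabel("Predicted barrier (eV)")
ax.legend()
fig.savefig("parity_plot.png", dpi=150)
```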
Error analysis across barrier ranges shows:
Reduced bias at high barrier values.
Lower variance across the entire energy range.
More consistent predictions across materials.
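Per-range bias and variance can be computed by binning errors on the true barrier; a minimal sketch, where the helper name, bin edges, and sample arrays are all assumptions for illustration:

```python
import numpy as np

def binned_error_stats(y_true, y_pred, edges):
    """Per-bin bias (mean signed error) and spread (std of error)."""
    errors = y_pred - y_true
    stats = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (y_true >= lo) & (y_true < hi)
        if mask.any():
            stats.append((lo, hi, errors[mask].mean(), errors[mask].std()))
    return stats

# Hypothetical arrays; replace with the actual test-set values (eV).
rng = np.random.default_rng(1)
y_true = rng.uniform(0.2, 3.0, 500)
y_pred = y_true + rng.normal(0.0, 0.35, 500)
for lo, hi, bias, spread in binned_error_stats(y_true, y_pred,
                                               np.linspace(0.0, 3.0, 4)):
    print(f"{lo:.1f}-{hi:.1f} eV: bias {bias:+.3f}, std {spread:.3f}")
```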
The large BVSE dataset enables the model to learn:
Local coordination physics
Bottleneck geometry effects
Structural diffusion patterns
Fine-tuning adjusts the energy scale to match DFT accuracy. This demonstrates that approximate simulations can serve as effective pretraining corpora for quantum-accurate prediction.
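The energy-scale adjustment can be illustrated with a toy NumPy sketch: a model "pretrained" on systematically rescaled surrogate labels is corrected by a linear refit on a small high-accuracy subset. This is only an analogy under assumed synthetic data; the actual approach fine-tunes network weights, not just a linear rescaling.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical setup: surrogate (BVSE-like) labels are a noisy,
# rescaled version of the true (DFT-like) barriers, all in eV.
x = rng.uniform(0.2, 3.0, 1000)                       # stand-in feature
dft = 0.8 * x + 0.1                                   # "true" barriers
bvse = 1.6 * x + 0.4 + rng.normal(0, 0.05, x.size)    # surrogate labels

# "Pretraining": fit the model to the large surrogate dataset.
w_pre, b_pre = np.polyfit(x, bvse, 1)
def pretrained(z):
    return w_pre * z + b_pre

# "Fine-tuning": rescale pretrained output on a small DFT subset.
idx = rng.choice(x.size, 40, replace=False)
a, c = np.polyfit(pretrained(x[idx]), dft[idx], 1)
def finetuned(z):
    return a * pretrained(z) + c

mae_pre = np.mean(np.abs(pretrained(x) - dft))
mae_ft = np.mean(np.abs(finetuned(x) - dft))
print(f"MAE before/after scale adjustment: {mae_pre:.3f} / {mae_ft:.3f} eV")
```

The pretrained model captures the trend but on the wrong energy scale; the small-sample refit recovers near-DFT accuracy, mirroring the reported gain from fine-tuning.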