We apply our proposed method to Bayesian neural networks on UCI datasets and compare W-AIG with W-GF and SVGD.
Results averaged over 20 independent trials are reported in the table above. On most datasets, W-AIG attains lower test root-mean-square error (RMSE) and higher test log-likelihood than W-GF and SVGD, suggesting that W-AIG may generalize better than these two baselines.
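For reference, the SVGD baseline used in this comparison updates a set of particles by a kernelized gradient (driving) term plus a repulsive term. The following is a minimal NumPy sketch of one such update with an RBF kernel and a median-heuristic bandwidth; it is an illustrative assumption rather than the exact implementation used in our experiments, and the W-AIG and W-GF transport directions are not reproduced here.

```python
import numpy as np

def svgd_step(particles, grad_log_p, step_size=1e-3):
    """One SVGD update of an (N, d) particle array toward a target
    whose score function grad_log_p maps (N, d) -> (N, d)."""
    diff = particles[:, None, :] - particles[None, :, :]   # diff[i, j] = x_i - x_j
    sq_dists = np.sum(diff ** 2, axis=-1)
    # Median heuristic for the RBF bandwidth.
    h = np.median(sq_dists) / np.log(particles.shape[0] + 1.0) + 1e-8
    K = np.exp(-sq_dists / h)
    # Driving term: kernel-weighted average of the score at each particle.
    drive = K @ grad_log_p(particles)
    # Repulsive term: sum_j grad_{x_j} k(x_j, x_i), which keeps particles spread out.
    repulse = (2.0 / h) * np.einsum('ij,ijd->id', K, diff)
    phi = (drive + repulse) / particles.shape[0]
    return particles + step_size * phi

# Toy check (hypothetical target): particles drift toward a standard Gaussian,
# whose score is grad log p(x) = -x.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2)) * 3.0
for _ in range(500):
    X = svgd_step(X, lambda x: -x, step_size=0.05)
```

In the Bayesian neural network setting, each particle would collect the network weights and `grad_log_p` would return the gradient of the (mini-batch) log-posterior with respect to those weights.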