RQ2
Experiment Setup
We choose the classical ResNet-110, Wide-ResNet-38, and VGGNet-19 as the subject model architectures, and train them on the MNIST, FMNIST, and CIFAR-10 datasets until model performance converges. This yields 9 (3*3) configurations; we train each configuration (i.e., architecture-dataset combination) 10 times, producing 90 (9*10) models in total. The hyperparameter details are as follows:
batch size: 128
learning rate: 0.01
weight decay: 0.005
epochs: 300
initializer: he_normal
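The experiment grid above can be sketched as follows. This is an illustrative enumeration only, assuming plain string names for the architectures and datasets; the actual training code is in the repository linked under "Code Demo".

```python
# Sketch of the RQ2 experiment grid (illustrative; not the repo's code).
from itertools import product

ARCHITECTURES = ["ResNet-110", "Wide-ResNet-38", "VGGNet-19"]
DATASETS = ["MNIST", "FMNIST", "CIFAR-10"]
RUNS_PER_CONFIG = 10  # each architecture-dataset pair is trained 10 times

# Shared hyperparameters used for every training run.
HYPERPARAMS = {
    "batch_size": 128,
    "learning_rate": 0.01,
    "weight_decay": 0.005,
    "epochs": 300,
    "initializer": "he_normal",
}

# 3 architectures x 3 datasets = 9 configurations.
configs = list(product(ARCHITECTURES, DATASETS))

# 9 configurations x 10 repeated runs = 90 trained models.
jobs = [(arch, dataset, run)
        for arch, dataset in configs
        for run in range(RUNS_PER_CONFIG)]

print(len(configs), len(jobs))  # 9 90
```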
Results
ResNet-110 on CIFAR-10
ResNet-110 on FMNIST/MNIST
Wide-ResNet-38 on CIFAR-10
Wide-ResNet-38 on FMNIST/MNIST
VGGNet-19 on CIFAR-10
VGGNet-19 on FMNIST/MNIST
Code Demo
https://github.com/hnurxn/Deep-Arc/RQ2