
Early Stopping

One option is to implement Early Stopping yourself: evaluate your model’s performance on a validation set every N training steps, and save a “winner” snapshot of the model (using a `Saver`) whenever it outperforms the previous “winner” snapshot. At the end of training, simply restore the last “winner” snapshot. Note that you should not stop immediately when performance starts dropping, because it may improve again a few steps later. A good strategy is to count the number of steps since the last “winner” snapshot was saved, and stop once this counter is large enough that you can be confident the network is never going to beat that snapshot.
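
A minimal sketch of this manual approach, assuming TensorFlow 1.x with a toy linear model and random data; all names here (`X_train`, `check_interval`, `my_best_model.ckpt`, etc.) are illustrative assumptions, not from the original text:

```python
import numpy as np
import tensorflow as tf

# Toy data standing in for a real training/validation split (assumption).
X_train = np.random.rand(1000, 20).astype(np.float32)
y_train = (X_train.sum(axis=1) > 10).astype(np.int64)
X_valid = np.random.rand(200, 20).astype(np.float32)
y_valid = (X_valid.sum(axis=1) > 10).astype(np.int64)

X = tf.placeholder(tf.float32, shape=(None, 20), name="X")
y = tf.placeholder(tf.int64, shape=(None,), name="y")
logits = tf.layers.dense(X, 2, name="logits")
xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
loss = tf.reduce_mean(xentropy, name="loss")
training_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

saver = tf.train.Saver()
n_epochs = 1000
batch_size = 50
check_interval = 10              # evaluate on the validation set every N steps
max_checks_without_progress = 20 # stop after this many checks with no new "winner"
checks_without_progress = 0
best_loss = np.inf

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    step = 0
    for epoch in range(n_epochs):
        for i in range(len(X_train) // batch_size):
            X_batch = X_train[i * batch_size:(i + 1) * batch_size]
            y_batch = y_train[i * batch_size:(i + 1) * batch_size]
            sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
            step += 1
            if step % check_interval == 0:
                val_loss = sess.run(loss, feed_dict={X: X_valid, y: y_valid})
                if val_loss < best_loss:
                    best_loss = val_loss           # new "winner" snapshot
                    checks_without_progress = 0
                    saver.save(sess, "./my_best_model.ckpt")
                else:
                    checks_without_progress += 1
        if checks_without_progress > max_checks_without_progress:
            print("Early stopping!")
            break
    # Restore the last "winner" snapshot at the end of training.
    saver.restore(sess, "./my_best_model.ckpt")
```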

Another option is to use TensorFlow’s `ValidationMonitor` class (from `tf.contrib.learn`) and set its early-stopping parameters (`early_stopping_rounds`, `early_stopping_metric`, and `early_stopping_metric_minimize`); see the TensorFlow documentation for details.
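
A hedged sketch of this second option, assuming the older `tf.contrib.learn` API (since deprecated) and toy random data; the choice of estimator (`DNNClassifier`) and all dataset names are illustrative assumptions:

```python
import numpy as np
import tensorflow as tf

# Toy data standing in for a real dataset (assumption for illustration).
X_train = np.random.rand(1000, 4).astype(np.float32)
y_train = np.random.randint(0, 3, size=1000)
X_valid = np.random.rand(200, 4).astype(np.float32)
y_valid = np.random.randint(0, 3, size=200)

# Evaluate on the validation set every 50 steps and stop training if the
# monitored metric ("loss") has not improved for 200 steps.
validation_monitor = tf.contrib.learn.monitors.ValidationMonitor(
    x=X_valid, y=y_valid,
    every_n_steps=50,
    early_stopping_metric="loss",
    early_stopping_metric_minimize=True,
    early_stopping_rounds=200)

feature_columns = [tf.contrib.layers.real_valued_column("", dimension=4)]
classifier = tf.contrib.learn.DNNClassifier(
    feature_columns=feature_columns,
    hidden_units=[10, 20, 10],
    n_classes=3,
    # Checkpoints must be saved frequently for the monitor to evaluate them.
    config=tf.contrib.learn.RunConfig(save_checkpoints_secs=1))

classifier.fit(x=X_train, y=y_train, steps=2000,
               monitors=[validation_monitor])
```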


