Once you have produced your training sets and converged on your desired NNN architecture, you can start mass production of your own trained NNN models.
So how do I train the NNNs?
Go to the script NuclearNeuralNetwork.py, found in
$NuclearNeuralNetworks/python_scripts_for_analysis/TrainNNNs/
This script lets you set the parameters of the training. It imports NNNfunctionsCompsPlusEps.py, where the architecture of the NNNs is defined (our design is found in the same directory). The only components of the architecture you can control through this script are the number of layers (layersNum) and the pre-factor multiplying the base number of neurons per layer defined in the NNNfunctionsCompsPlusEps.py script (neuronsNumFactor). For most layers the base neuron number is 256, so setting neuronsNumFactor=8 results in layers with 2048 neurons.
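As an illustration of how these two knobs interact, here is a minimal sketch (not the repository's actual code; only the parameter names layersNum and neuronsNumFactor and the 256-neuron base come from the description above, and the helper name is hypothetical):

```python
def layer_widths(layersNum, neuronsNumFactor, base_neurons=256):
    """Return the width of each hidden layer.

    Each layer's width is the base neuron count (256 for most layers)
    scaled by the pre-factor neuronsNumFactor.
    """
    return [base_neurons * neuronsNumFactor for _ in range(layersNum)]

# Example from the text: neuronsNumFactor=8 gives 2048-neuron layers.
print(layer_widths(layersNum=4, neuronsNumFactor=8))
```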
You can control the number of data points assigned to training and validation through trainingDataNum (the rest are assigned to the test set), decide whether to start the training from scratch or from a saved checkpoint through nucelarNN, and determine how long the training should run through max_time (or adopt another stopping condition). Note that we ultimately did not use this test set; instead, we evaluated the performance of the NNNs by generating a separate test set with initial compositions similar to those of a commonly used small nuclear reaction network, as we explain in the testing section.
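The data split and the time-based stopping condition can be sketched as follows. This is a hedged illustration, not the script itself: only trainingDataNum and max_time are parameters of the actual script, and the helper names and the 10% validation fraction are hypothetical choices made here for the example.

```python
import time

def split_data(all_indices, trainingDataNum, validation_fraction=0.1):
    """Assign the first trainingDataNum points to training + validation;
    the remainder becomes the test set.

    validation_fraction is a hypothetical choice for this sketch,
    not a value taken from the script.
    """
    n_val = int(trainingDataNum * validation_fraction)
    train = all_indices[: trainingDataNum - n_val]
    val = all_indices[trainingDataNum - n_val : trainingDataNum]
    test = all_indices[trainingDataNum:]
    return train, val, test

def train_until(step_fn, max_time):
    """Run training steps until max_time seconds have elapsed
    (a simple stand-in for a wall-clock stopping condition)."""
    start = time.time()
    steps = 0
    while time.time() - start < max_time:
        step_fn()
        steps += 1
    return steps
```

A different stopping condition (e.g. early stopping on validation loss) could replace the wall-clock check in train_until without changing the rest of the loop.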