The architecture is the heart of the NNNs. We designed feed-forward, fully connected deep neural networks to predict the results of nucleosynthesis computations, with the architecture shown here. This is the structure that led to the best results among everything we tried, but the space of possibilities in machine learning is practically infinite. Apart from the input and output layers, which correspond to the parameters essential to define a burning zone, you can experiment with and change every part of the architecture.
So how do I control the NNN architecture?
Go to the script that defines our NNN architecture: NNNfunctionsCompsPlusEps.py, found in
$NuclearNeuralNetworks/python_scripts_for_analysis/TrainNNNs/
This file receives the training sets (csv files) generated here or found in
$NuclearNeuralNetworks/training_sets/mesa_${net}/
(where ${net} is 80 or 151) through the parameter database_file.
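For example, a minimal sketch of loading one of these training sets with pandas; the file name and column layout below are hypothetical, so check the actual csv headers in your copy of the training sets:

import pandas as pd

# Hypothetical file name; point database_file at an actual csv from
# $NuclearNeuralNetworks/training_sets/mesa_80/
database_file = "training_sets/mesa_80/training_data.csv"

data = pd.read_csv(database_file)
print(data.shape)    # rows: burning zones, columns: inputs and outputs
print(data.columns)  # inspect the input/output column names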
You can change the number of samples loaded per training step (batch_size), the number of isotopes in the nuclear reaction network (isotopesNum; 80 or 151 if you are relying on our training sets), the split between training, validation, and testing data, the activation functions, loss function, optimizer, and more. You can perform a manual hyperparameter search as we did, or write your own scripts to automate this stage.
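To make these knobs concrete, here is a hedged sketch of what such settings could look like; the variable names mirror those above, but the values, split fractions, and dummy arrays are illustrative, not the repository defaults:

import numpy as np
from sklearn.model_selection import train_test_split

batch_size = 1024   # samples loaded per training step
isotopesNum = 80    # 80 or 151 with our training sets

# Dummy arrays standing in for a loaded training set; the true input
# and output dimensions depend on the burning-zone parameters used.
X = np.random.rand(10000, isotopesNum + 2).astype("float32")
y = np.random.rand(10000, isotopesNum + 1).astype("float32")

# Example 80/10/10 split into training, validation, and testing data
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.2)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5)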
You can control the number of layers (layersNum) and the pre-factor multiplying the number of neurons per layer defined in the architecture script (neuronsNumFactor) through the file NuclearNeuralNetwork.py, found in the same directory. This script will later be used to define different parameters for the training process.
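As a rough illustration of how these two parameters could shape a feed-forward, fully connected network; the widths, scaling rule, activation, loss, and optimizer here are example choices, not necessarily those in our architecture script:

import tensorflow as tf

def build_nnn(input_dim, output_dim, layersNum=6, neuronsNumFactor=4):
    # Stack layersNum dense hidden layers whose width is the output size
    # scaled by neuronsNumFactor (illustrative scaling rule).
    model = tf.keras.Sequential()
    model.add(tf.keras.Input(shape=(input_dim,)))
    for _ in range(layersNum):
        model.add(tf.keras.layers.Dense(neuronsNumFactor * output_dim,
                                        activation="relu"))
    model.add(tf.keras.layers.Dense(output_dim))
    return model

# Hypothetical dimensions for an 80-isotope network
model = build_nnn(input_dim=82, output_dim=81)
model.compile(optimizer="adam", loss="mse")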
You could also define a completely different NNN architecture in the script NNNfunctionsCompsPlusEps.py that is better suited to your needs. As long as it keeps the same inputs and outputs, it should work with the existing training sets and scripts.
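For instance, a replacement architecture only needs to respect the same input and output dimensions; this shallower, wider variant is purely illustrative:

import tensorflow as tf

def build_alternative_nnn(input_dim, output_dim):
    # Same interface as the original builder, different internal structure
    return tf.keras.Sequential([
        tf.keras.Input(shape=(input_dim,)),
        tf.keras.layers.Dense(2048, activation="tanh"),
        tf.keras.layers.Dense(2048, activation="tanh"),
        tf.keras.layers.Dense(output_dim),
    ])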