We started from the U-Net implementation listed in the references section and adapted this network to fit our constraints.
To train our network to recognize elements in a picture, we divided the data set into a training set and a testing set; the training set is further divided into several mini-batches.
During training, the network goes over every mini-batch and computes an error used to update the weights. We then evaluate the network on the testing set without modifying the weights, to measure its ability to generalize.
An epoch consists of one training pass over every mini-batch of the training set, followed by an evaluation on the testing set. A complete training run comprises several epochs.
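The epoch structure described above can be sketched as follows. This is a minimal illustration, not our actual pipeline: it trains a toy linear model with NumPy instead of the U-Net, and all shapes, learning rate, and batch size are placeholder values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data standing in for the image data set (hypothetical shapes).
X = rng.normal(size=(200, 4))
true_w = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ true_w + 0.1 * rng.normal(size=200)

# Split into a training set and a testing set.
X_train, y_train = X[:160], y[:160]
X_test, y_test = X[160:], y[160:]

def epoch_losses(n_epochs=20, batch_size=16, lr=0.05):
    """Run several epochs; return the test error measured after each one."""
    w = np.zeros(X.shape[1])
    test_losses = []
    for _ in range(n_epochs):
        # Training pass: go over every mini-batch and update the weights.
        order = rng.permutation(len(X_train))
        for start in range(0, len(X_train), batch_size):
            idx = order[start:start + batch_size]
            Xb, yb = X_train[idx], y_train[idx]
            grad = 2 * Xb.T @ (Xb @ w - yb) / len(idx)
            w -= lr * grad
        # Testing pass: compute the error without modifying the weights,
        # to measure the ability of the model to generalize.
        test_losses.append(float(np.mean((X_test @ w - y_test) ** 2)))
    return test_losses

losses = epoch_losses()
```

On this toy problem the test loss shrinks from epoch to epoch, which is exactly the behaviour the expected loss graph below illustrates.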
Expected Loss Graph
The graph shown above is what we expect a loss graph to look like: the error decreases after each epoch. This is not what we obtained in our first results, as shown in the graph below.
Loss Graph from one of our first trainings
We studied several parameters during this project to improve our results. These parameters influence the values of the weights and biases, so they are the knobs of our neural network that we needed to adjust to enhance our results. We also implemented other improvements to the network. The next section discusses everything we implemented to that end.
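One common way to study such parameters is a small grid search: train once per candidate setting and keep the one with the lowest test error. The sketch below illustrates this on the same kind of toy NumPy model as before; the candidate values for the learning rate and batch size are illustrative assumptions, not the ones studied in the report.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

# Toy regression data standing in for the real data set (hypothetical).
X = rng.normal(size=(150, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=150)
X_train, y_train = X[:120], y[:120]
X_test, y_test = X[120:], y[120:]

def final_test_loss(lr, batch_size, n_epochs=15):
    """Train a linear model with mini-batch SGD; return the final test error."""
    w = np.zeros(X.shape[1])
    for _ in range(n_epochs):
        order = rng.permutation(len(X_train))
        for start in range(0, len(X_train), batch_size):
            idx = order[start:start + batch_size]
            grad = 2 * X_train[idx].T @ (X_train[idx] @ w - y_train[idx]) / len(idx)
            w -= lr * grad
    return float(np.mean((X_test @ w - y_test) ** 2))

# Grid of candidate hyperparameter values (illustrative choices).
grid = list(itertools.product([0.001, 0.01, 0.1], [8, 32]))
best = min(grid, key=lambda p: final_test_loss(*p))
```

Here a learning rate that is too small barely moves the weights in the allotted epochs, so the search favours the larger rates; on a real network the same procedure exposes which knob is holding the loss curve back.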