Fun Features
This project is written largely for my own amusement. I am not working towards any fancy goal; I just enhance the project in whatever direction the day asks.
Network Description Language
This came in a little early on: re-compiling the project just to add a layer should not require me to calculate input and output sizes again and again; what do we have computers for?
The grammar for describing the network is not exactly free-form, but it is close.
Here's a sample of how this language looks:
Network Description Language
->NetworkDescription
ErrFName : MeanSquareError
MaxEpocs : 10
->EndNetworkDescription
->ConvLayer
Name : Conv1
IpSize : [28,28,1] # Must have an input size for first layer
Activation : TanH
NumKernels : 5
KernelSize : [5,5]
KernelStride : [1,1]
->EndConvLayer
->MaxPoolingLayer
Activation : RELU
WindowSize : [4,4]
->EndMaxPoolingLayer
->FullyConnectedLayerGroup
Out , Sigmoid : 10
->EndFullyConnectedLayerGroup
Plugin Style Everything
Want a new activation function? Just add about ten lines of code to compute the activation and its derivative and plug it in. Want a new error function? That's just another ten lines. Plugins make enhancement really easy.
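As an illustration, a new activation plugin could look roughly like the sketch below. The ActivationBase interface and the method names are assumptions made for this example, not the project's actual plugin API.

#include <cmath>

// Hypothetical plugin interface; the project's real base class and
// method names may differ.
struct ActivationBase {
    virtual double Apply(double x) const = 0;        // activation value
    virtual double Differential(double x) const = 0; // derivative w.r.t. x
    virtual ~ActivationBase() = default;
};

// A LeakyReLU plugin: roughly ten lines of actual logic.
struct LeakyReLU : ActivationBase {
    double Apply(double x) const override { return x > 0 ? x : 0.01 * x; }
    double Differential(double x) const override { return x > 0 ? 1.0 : 0.01; }
};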
Threaded Progress Monitor
Why not use a thread to watch the pass rate climb? I would love to have a nice CSS page with a WebGL graph that updates live, but as of now I have a stub for this idea: a thread that monitors the network's progress.
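The idea amounts to something like the sketch below: a worker thread polls a shared pass-rate value and reports it once a second. The names passRate and monitor are illustrative, not the project's actual symbols.

#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

int main() {
    std::atomic<double> passRate{0.0};   // updated by the training loop
    std::atomic<bool>   done{false};

    // Monitor thread: wake up every second and report progress.
    std::thread monitor([&] {
        while (!done) {
            std::cout << "Pass rate: " << passRate.load() << "%\n";
            std::this_thread::sleep_for(std::chrono::seconds(1));
        }
    });

    // ... the training loop would update passRate here ...
    passRate = 97.5;                     // placeholder for a real update
    std::this_thread::sleep_for(std::chrono::seconds(2));

    done = true;
    monitor.join();
}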
Data-Agnostic Architecture
One thing that vexed me in many "tutorial" neural networks was that the whole network architecture had to change if you wanted to use one data set instead of another. So this project has a PatternSet class that acts as an interface between the network and the data; swapping out data sets is much easier now. The GitHub code reads MNIST and CIFAR-10 data, and adding different data types is trivial.
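Conceptually, such an interface looks something like the sketch below; the type and method names here are illustrative, not the actual PatternSet API.

#include <cstddef>
#include <vector>

// One training sample, already decoded from whatever file format it came in.
struct Sample {
    std::vector<double> input;    // flattened image pixels
    std::vector<double> target;   // one-hot label
};

// Illustrative data-set interface: the network only ever sees this,
// never a file format.
struct DataSet {
    virtual std::size_t Size() const = 0;
    virtual Sample Get(std::size_t i) const = 0;
    virtual ~DataSet() = default;
};

// MNIST and CIFAR-10 readers would each implement DataSet, so the
// network trains against the interface rather than against a format.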
Top Classification Failures as Images
Top-N failures: when that last 1% of the data set fails to classify correctly, you want to know what's so terrible about it. So the network can keep track of the worst offenders and show how they look. This helped me find out something about the MNIST classifying network: the top failures always contained images that appeared as though the pen that wrote them had run out of ink or the scanner was glitching. So I tried to model the input so as to simulate such an image and voilà! the accuracy jumped.
Badly classified images:
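A minimal way to keep track of such worst offenders is a small fixed-size collection ordered by how badly each sample missed; the sketch below is one possible approach, not the project's actual code.

#include <algorithm>
#include <cstddef>
#include <vector>

// One misclassified sample and how badly it missed.
struct Failure {
    std::size_t sampleIndex;
    double      errorScore;   // e.g. error on the true class
};

// Keep only the N worst failures seen so far.
class TopFailures {
public:
    explicit TopFailures(std::size_t n) : limit(n) {}

    void Add(const Failure& f) {
        worst.push_back(f);
        std::sort(worst.begin(), worst.end(),
                  [](const Failure& a, const Failure& b) {
                      return a.errorScore > b.errorScore;
                  });
        if (worst.size() > limit) worst.resize(limit);
    }

    const std::vector<Failure>& Worst() const { return worst; }

private:
    std::size_t limit;
    std::vector<Failure> worst;
};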
Which brings us to:
Attenuation Layer
This layer simply adds Gaussian (or uniform random) noise to the input images, and the rest of the network trains on this garbled output, avoiding failures like the ones above.
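In essence, the forward pass does something like the following sketch for the Gaussian case; the function name and parameters are illustrative only.

#include <cstddef>
#include <random>
#include <vector>

// Illustrative attenuation step: perturb each input pixel with Gaussian
// noise so the downstream layers train on "worn out" looking images.
std::vector<double> Attenuate(const std::vector<double>& image,
                              double sigma, std::mt19937& rng) {
    std::normal_distribution<double> noise(0.0, sigma);
    std::vector<double> out(image.size());
    for (std::size_t i = 0; i < image.size(); ++i)
        out[i] = image[i] + noise(rng);
    return out;
}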
Image Read Write
Not strictly a neural network feature, but what good is a trained filter if we can't see how it looks, or how the images that failed classification look? Hence the PPMIO::Write and PPMIO::Read functions.
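PPM is convenient because the binary (P6) variant is just a short text header followed by raw RGB bytes. A standalone writer in that spirit looks roughly like this; it is a sketch, not the project's PPMIO implementation.

#include <cstdint>
#include <fstream>
#include <string>
#include <vector>

// Write interleaved 8-bit RGB pixels as a binary (P6) PPM file.
bool WritePPM(const std::string& path, const std::vector<std::uint8_t>& rgb,
              int width, int height) {
    if (rgb.size() != static_cast<std::size_t>(width) * height * 3) return false;
    std::ofstream out(path, std::ios::binary);
    if (!out) return false;
    out << "P6\n" << width << " " << height << "\n255\n";
    out.write(reinterpret_cast<const char*>(rgb.data()),
              static_cast<std::streamsize>(rgb.size()));
    return static_cast<bool>(out);
}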