GOAL: build a platform to explore different neural network algorithms
First positive results (tax day), using Weighted Sum feed forward, Sum activation, and custom simple weighted feed backward.
Targets are blue, predictions are yellow/green, data is red (for now).
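As a rough sketch of what I mean by those three pieces (illustrative Python, not the Unity code; all names and numbers here are made up): the feed forward step is just a weighted sum, the Sum activation passes that sum straight through, and the feed backward step hands each weight a share of the error in proportion to its input.

```python
# Illustrative Python, not the Unity code: one node doing a Weighted Sum
# feed forward, a Sum (pass-through) activation, and a simple weighted
# feed backward that splits the error across the weights in proportion
# to each input.

def feed_forward(inputs, weights):
    # Weighted Sum feed forward; the Sum activation just passes it through.
    return sum(x * w for x, w in zip(inputs, weights))

def feed_backward(inputs, weights, target, prediction, rate=0.5):
    # Push a share of the error onto each weight, weighted by its input.
    error = target - prediction
    total = sum(abs(x) for x in inputs) or 1.0
    return [w + rate * error * (x / total) for x, w in zip(inputs, weights)]

inputs, weights, target = [0.5, 1.0, 0.25], [0.1, 0.1, 0.1], 2.0
for _ in range(20):
    prediction = feed_forward(inputs, weights)
    weights = feed_backward(inputs, weights, target, prediction)
print(round(feed_forward(inputs, weights), 3))  # settles very close to the 2.0 target
```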
Legend: grey inputs, white "hidden layer neurons", green output or prediction, red not ready
Square-rooting the correction throttles the bounce (the prediction swings back and forth, above and below the target, getting closer each cycle).
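Here is a toy example of the throttling effect (made-up numbers, not the project's actual update rule): an overly aggressive correction makes the prediction bounce further away every step, while square-rooting the correction keeps the bounce bounded and pulls it into a narrow band around the target.

```python
import math

# Toy illustration (made-up numbers, not the project's update rule): an
# overly aggressive correction makes the prediction bounce further from
# the target every step, while square-rooting the correction (keeping
# its sign) throttles the big steps so the bounce shrinks into a narrow
# band around the target.

def run(target, steps, throttle):
    prediction, history = 0.0, []
    for _ in range(steps):
        correction = 3.0 * (target - prediction)  # deliberately too aggressive
        if throttle:
            correction = math.copysign(math.sqrt(abs(correction)), correction)
        prediction += correction
        history.append(round(prediction, 2))
    return history

print("raw:      ", run(10.0, 8, throttle=False))  # overshoot doubles every step
print("throttled:", run(10.0, 8, throttle=True))   # closes in on 10, then bounces within ~0.75 of it
```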
Below, some test data is shown on the left. Training data is hooked up.
Target is blue. Green is prediction. Grey is data. White (and green) are neural nodes.
Doesn't settle, maybe because more sharing is needed.
Original perceptron.
Example of one valley of matching weights. Notice the 22.25 value for the "hidden layer" neuron.
Another valley of matching weights for the same target. Notice the 15.94 value for the same "hidden layer" neuron.
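A tiny illustration of why more than one valley exists (made-up weights, not the 22.25 / 15.94 runs above): with weighted sums and a pass-through activation, a large hidden value with a small output weight can land on exactly the same prediction as a smaller hidden value with a larger output weight.

```python
# Made-up weights (not the 22.25 / 15.94 runs above): with weighted sums
# and a pass-through activation, a large hidden value with a small output
# weight produces exactly the same prediction as a smaller hidden value
# with a larger output weight: two different "valleys" hitting one target.

def network(inputs, hidden_weights, output_weight):
    hidden = sum(x * w for x, w in zip(inputs, hidden_weights))  # hidden "neuron" value
    return hidden, hidden * output_weight                        # final prediction

inputs = [2.0, 3.0]

hidden_a, out_a = network(inputs, [4.0, 2.0], 0.5)  # valley A: hidden = 14.0, output = 7.0
hidden_b, out_b = network(inputs, [2.0, 1.0], 1.0)  # valley B: hidden = 7.0, output = 7.0

print(hidden_a, out_a)  # 14.0 7.0
print(hidden_b, out_b)  # 7.0 7.0
```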
Perceptron working.
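For reference, the classic perceptron rule looks roughly like this in Python, trained on a tiny AND problem (a minimal sketch, not the project's code).

```python
# Classic (Rosenblatt-style) perceptron for reference, trained on a tiny
# AND problem. This is the textbook rule, not the Unity implementation.

def predict(weights, bias, x):
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

def train(samples, epochs=10, rate=0.1):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(weights, bias, x)
            weights = [w + rate * error * xi for w, xi in zip(weights, x)]
            bias += rate * error
    return weights, bias

and_samples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train(and_samples)
print([predict(weights, bias, x) for x, _ in and_samples])  # [0, 0, 0, 1]
```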
Using just the Red value, weighted by inverse distance squared.
Using RGB values, weighted by inverse distance squared.
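The weighting itself is plain inverse-distance-squared interpolation; a minimal sketch (sample positions and colours here are invented) looks like this:

```python
# Inverse-distance-squared blend of RGB samples: each known sample pulls
# the prediction toward its colour with weight 1 / distance^2. Using only
# the Red channel is the same idea with a single component. Positions and
# colours below are invented.

def idw_rgb(point, samples, eps=1e-6):
    # samples: list of ((x, y), (r, g, b)) pairs.
    weights = []
    for (sx, sy), _ in samples:
        d2 = (point[0] - sx) ** 2 + (point[1] - sy) ** 2
        weights.append(1.0 / (d2 + eps))  # inverse distance squared
    total = sum(weights)
    blended = [0.0, 0.0, 0.0]
    for w, (_, rgb) in zip(weights, samples):
        for i in range(3):
            blended[i] += w * rgb[i] / total
    return tuple(round(c, 1) for c in blended)

samples = [((0, 0), (255, 0, 0)), ((4, 0), (0, 0, 255))]
print(idw_rgb((1, 0), samples))  # (229.5, 0.0, 25.5): much closer to the nearby red sample
```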
I started with YouTube. After two weeks of education from brilliant minds I started sketching, and once I understood the basic concepts I came up with this first Unity neural network attempt.
This neural network:
On the left are horizontal markers showing groupings of scores; scores that sit closer together are more similar.
Similar patterns have similar scores.
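A quick sanity check of that claim, with made-up weights and patterns: a fixed weighted-sum score puts a slightly perturbed pattern right next to the original, while a very different pattern lands well away.

```python
# Quick sanity check with made-up weights and patterns: a fixed weighted-sum
# score puts a slightly perturbed pattern right next to the original, while
# a very different pattern lands well away.

weights = [1.0, 2.0, 3.0, 4.0]

def score(pattern):
    return sum(w * x for w, x in zip(weights, pattern))

pattern_a = [1.0, 0.9, 0.2, 0.1]
pattern_b = [1.1, 0.8, 0.2, 0.15]  # small perturbation of pattern_a
pattern_c = [0.1, 0.2, 0.9, 1.0]   # a very different pattern

for name, p in [("a", pattern_a), ("b", pattern_b), ("c", pattern_c)]:
    print(name, round(score(p), 2))  # a and b score 3.8 and 3.9; c scores 7.2
```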