Neural Network

GOAL: build a platform to explore different neural network algorithms

NNClass:

  • loads input data
  • randomizes or "tests" weights
  • uses a learning rate
  • can generate a user-defined number of hidden nodes (column of three)
  • corrects weights by adding deltas proportional to weight / (sum of weights); see the sketch after this list
  • each dataset has a user-defined number of inputs, a guess, and an answer
  • can train a user-defined number of times
  • the last node's value (on the right) is the neural network's guess
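
A minimal sketch of that correction rule, assuming a single node whose incoming weights split the error in proportion to weight / (sum of weights); the function name `CorrectWeights` is illustrative, not from the project.

```csharp
using System;

// Sketch: correct a node's incoming weights by adding deltas proportional
// to each weight's share of the total, i.e. weight / (sum of weights).
double[] weights = { 0.5, 1.5, 2.0 };
CorrectWeights(weights, error: 0.8, learningRate: 0.1);
Console.WriteLine(string.Join(", ", weights)); // 0.51, 1.53, 2.04

static void CorrectWeights(double[] w, double error, double learningRate)
{
    double sum = 0;
    foreach (double x in w) sum += x;
    if (sum == 0) return; // nothing to apportion

    for (int i = 0; i < w.Length; i++)
        w[i] += learningRate * error * (w[i] / sum); // delta proportional to weight / sum
}
```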

First positive results (tax day), using Weighted Sum feed forward, Sum activation, and a custom, simple weighted feed backward.
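
A minimal sketch of what those terms might mean in code: with Weighted Sum feed forward and Sum activation, each node's value is the weighted sum of the previous column's values, passed through unchanged (no squashing). Names and numbers here are illustrative.

```csharp
using System;

// Weighted Sum feed forward with Sum activation: a node's value is the
// weighted sum of the previous column's values, passed through unchanged.
double[] inputs = { 1.0, 2.0, 3.0 };
double[][] hiddenWeights =
{
    new[] { 0.2, 0.4, 0.6 }, // weights into hidden node 0
    new[] { 0.1, 0.3, 0.5 }, // hidden node 1
    new[] { 0.7, 0.8, 0.9 }, // hidden node 2
};

double[] hidden = FeedForward(inputs, hiddenWeights);
Console.WriteLine(string.Join(", ", hidden)); // 2.8, 2.2, 5.0

static double[] FeedForward(double[] prev, double[][] weights)
{
    var next = new double[weights.Length];
    for (int node = 0; node < weights.Length; node++)
        for (int i = 0; i < prev.Length; i++)
            next[node] += weights[node][i] * prev[i]; // Sum activation: no nonlinearity
    return next;
}
```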

Sigmoid:

  • Studying how the sigmoid affects values across nodes.
  • It seems to retain uniqueness (see the check below).
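
One way to see why uniqueness is retained: the sigmoid is strictly increasing, so distinct node values map to distinct outputs and keep their ordering, even though they are squashed into (0, 1). A quick check:

```csharp
using System;

// Sigmoid squashes values into (0, 1) but is strictly increasing, so
// distinct node values stay distinct and keep their ordering.
static double Sigmoid(double x) => 1.0 / (1.0 + Math.Exp(-x));

double[] nodeValues = { -2.0, 0.5, 0.6, 4.0 };
foreach (double v in nodeValues)
    Console.WriteLine($"{v,5:F1} -> {Sigmoid(v):F4}");
```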

Training:

  • Cycle through all training dataSets.
  • Sum up all errors.
  • FeedBackward recursively, using the Learning Rate (sketched below).
  • Testing DeltaSum and WeightedSigmoid.
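
A minimal sketch of that cycle, using a single weight as a stand-in for the network; the loop structure (feed forward, accumulate error, feed backward with the learning rate) is the point, not the model.

```csharp
using System;

// Sketch of the training cycle: cycle through all training dataSets,
// sum up the errors, and feed backward using the Learning Rate.
// A single weight stands in for the network here.
double weight = new Random(42).NextDouble();
const double learningRate = 0.05;
(double input, double target)[] trainingSets = { (1, 2), (2, 4), (3, 6) };

for (int epoch = 0; epoch < 500; epoch++)
{
    double totalError = 0;
    foreach (var (input, target) in trainingSets)
    {
        double prediction = input * weight;     // feed forward
        double error = target - prediction;
        totalError += Math.Abs(error);          // sum up all errors
        weight += learningRate * error * input; // feed backward
    }
    if (totalError < 1e-6) break;               // good enough
}
Console.WriteLine($"learned weight: {weight:F4}"); // ~2.0
```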

Ready To Test:

  • Images show training data and test data match (second)
    • forwardFeedSumDelta: sum of neighbor's value / value
    • backwardPropagation: CorrectWeighted: Correction = Learning Rate * (Target - Prediction).
    • Correction starts at the output node (top right) and is distributed amongst the inputs (weighted by weight values), recursively, down to each input node (see the sketch after this list).
  • Test platform is almost ready to use!
  • Easy scale-up of dataSet sample size.
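
A sketch of that recursive distribution, under my reading of the description: the output node's correction, Learning Rate * (Target - Prediction), is split among its input connections in proportion to their weights, and each input node recursively passes its share down. Class and method names are illustrative.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

var input = new Node();
var output = new Node { Inputs = { input }, Weights = { 0.5 } };

double target = 1.0, prediction = 0.4, learningRate = 0.1;
output.Distribute(learningRate * (target - prediction)); // correction starts at the output node
Console.WriteLine(output.Weights[0]); // 0.56

class Node
{
    public List<Node> Inputs = new();
    public List<double> Weights = new();

    // Distribute a correction among the input connections, weighted by
    // weight values, then recurse down toward the input nodes.
    public void Distribute(double correction)
    {
        double sum = Weights.Sum();
        if (sum == 0) return;                               // input node: stop
        for (int i = 0; i < Inputs.Count; i++)
        {
            double share = correction * (Weights[i] / sum); // weighted share
            Weights[i] += share;                            // correct this weight
            Inputs[i].Distribute(share);                    // recurse
        }
    }
}
```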

Targets are blue, predictions are yellow/green, data is red (for now).

First prediction:

  • 3 training data sets (left) and 1 live data set.
  • the initial flash is a reaction to the random weight seeds.
  • live data has a grey target.
  • predictions are yellow until they are made.
  • target and data are loaded into nodes.
  • nodes feed forward and correct until the prediction is within 99.9% of the target (see the sketch after this list).
  • weights learn from all training data.
  • data is red until processed (train / predict).
  • learned weights are used to predict live data.
  • each training set can have different targets.
  • targets are "correct predictions" and are known.
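
One hypothetical reading of "within 99.9%": keep correcting until the prediction differs from the target by at most 0.1%, then reuse the learned weights on the live data. The check itself might look like:

```csharp
using System;

// Hypothetical reading of "within 99.9%": accept the prediction when it
// differs from the target by at most 0.1%.
static bool Converged(double prediction, double target) =>
    Math.Abs(target - prediction) <= Math.Abs(target) * 0.001;

Console.WriteLine(Converged(9.992, 10.0)); // True
Console.WriteLine(Converged(9.9, 10.0));   // False
```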

Data Set Management:

  • easy to import data sets
  • Test data flag
  • Each set has a prediction and a target (if test)
  • Load data set into nodes
  • Feed forward
  • Correct weights (simple scalar)
  • Proceed to the next untrained data set when confidence reaches 99.9%
  • Scans data sets to train new test data or predict live data (sketched below)
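
A sketch of the data-set record this implies: each set carries its inputs, a prediction slot, a target when it is test data, and a flag the scan uses to decide whether to train or predict. Field names are mine, not the project's.

```csharp
using System;
using System.Collections.Generic;

var sets = new List<DataSet>
{
    new(new[] { 1.0, 2.0 }, IsTest: true) { Target = 3.0 },
    new(new[] { 4.0, 5.0 }, IsTest: false), // live data: no known target
};

// Scan: train on test sets that still need it, predict the live ones.
foreach (var s in sets)
{
    if (s.IsTest && !s.Trained) Console.WriteLine("train on this set");
    else if (!s.IsTest) Console.WriteLine("predict this set");
}

record DataSet(double[] Inputs, bool IsTest)
{
    public double? Target { get; init; }   // known "correct prediction" (test sets only)
    public double Prediction { get; set; } // filled in by the feed-forward pass
    public bool Trained { get; set; }      // set once confidence reaches 99.9%
}
```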

Legend: grey inputs, white "hidden layer neurons", green output or prediction, red not ready

Organic Perceptrons:

  • Correction factor = target / prediction.
  • Back Propagation = multiply each weight by the correction factor (see the sketch after this list).
  • Each neuron can have unique number of inputs (and weights)
  • No layers, more difficult to set up (for now)
  • Simple math
  • No bias
  • Should scale up nicely
  • Benchmark by number of neurons
  • Need nice test data sets
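
A minimal sketch of that rule, assuming a single neuron with a weighted-sum prediction; names are illustrative.

```csharp
using System;

// Organic-perceptron correction: no layers, no bias. The correction
// factor is target / prediction, and back propagation multiplies every
// weight by that factor.
double[] weights = { 0.5, 1.2, 0.8 };
double[] inputs = { 1.0, 2.0, 3.0 };
double target = 10.0;

double prediction = Predict(weights, inputs);
double factor = target / prediction; // correction factor
for (int i = 0; i < weights.Length; i++)
    weights[i] *= factor;            // multiply each weight

Console.WriteLine($"before: {prediction:F2}, after: {Predict(weights, inputs):F2}"); // 5.30 -> 10.00

static double Predict(double[] w, double[] x)
{
    double sum = 0;
    for (int i = 0; i < w.Length; i++) sum += w[i] * x[i];
    return sum;
}
```

For a single neuron the factor lands exactly on the target in one step; the bounce mentioned below presumably appears when several neurons each apply their own factor.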

Legend: grey inputs, white "hidden layer neurons", green output or prediction

Square-rooting the correction throttles the bounce (back and forth, above and below the target, getting closer each cycle).
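
A scalar illustration of the throttle, assuming the factor multiplies the prediction directly: the square-rooted correction closes about half the remaining gap (in log space) each cycle instead of jumping past the target.

```csharp
using System;

// Throttled correction: square-rooting the factor closes about half the
// remaining gap (in log space) each cycle instead of jumping past the target.
double prediction = 4.0, target = 10.0;
for (int cycle = 0; cycle < 6; cycle++)
{
    double factor = Math.Sqrt(target / prediction); // square-rooted correction
    prediction *= factor;
    Console.WriteLine($"{prediction:F4}"); // 6.3246, 7.9527, 8.9178, ...
}
```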

Below, some test data is shown on the left. Training data is hooked up.

Target is blue. Green is prediction. Grey is data. White (and green) are neural nodes.

Doesn't settle, maybe because more sharing is needed.

Original perceptron.

Example of one valley of matching weights. Notice 22.25 value for "hidden layer" neuron.

Another valley of matching weights for the same target. Notice 15.94 value for same "hidden layer" neuron.

Perceptor working.

Perceptor:

  • graphic feedback

Using just the Red value, weighted by inverse distance squared.

Using RGB values, weighted by inverse distance squared (see the sketch below).
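
A sketch of that weighting for one channel (run it per channel for RGB), assuming each neighbor contributes its value scaled by 1 / distance^2; names are illustrative.

```csharp
using System;

// Inverse-distance-squared weighting for one channel: each neighbor's
// value contributes in proportion to 1 / distance^2, so near pixels dominate.
static double InverseSquareAverage(double[] values, double[] distances)
{
    double weighted = 0, totalWeight = 0;
    for (int i = 0; i < values.Length; i++)
    {
        double w = 1.0 / (distances[i] * distances[i]);
        weighted += values[i] * w;
        totalWeight += w;
    }
    return weighted / totalWeight;
}

double[] reds = { 0.9, 0.2, 0.5 }; // red channel of three neighbors
double[] dist = { 1.0, 2.0, 4.0 }; // pixel distances to the node
Console.WriteLine(InverseSquareAverage(reds, dist).ToString("F4")); // 0.7476
```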

What an adventure:

I started with YouTube. After two weeks of education from brilliant minds, I started sketching. After understanding the basic concepts, I came up with this first Unity neural network attempt.

This neural network:

  • is for images
  • creates one node for each pixel.
  • each node tests (numPixels - 1) connections, one to each of its neighbors
  • each node has an inputValue and an outputValue
  • inputValue is pixel value (color value)
  • outputValue is the average of the neighboring pixel factors, each divided by the distance squared
  • neighboring pixel factors are the scale factors to multiply by to reach the neighboring values: 1 x 3 = 3, or 3 x .333 = 1
  • outputAll is the average of the neighboring pixel outputVals, weighted by distance to the image center
  • similar images have similar outputAll values (see the sketch after this list)
  • self-classifying
  • no back-propagation
  • no random seeding, each node is unique, and its neighbor set is also unique
  • further: auto-add more nodes to increase the difference between close values and so increase confidence
  • further: remember last "frame" of input
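
Putting the pieces above together, here is a sketch of the scoring pass as I read it; the exact center-weighting formula is not specified, so the 1 / (1 + distance) weight below is an assumption, as are all names.

```csharp
using System;

// Sketch of the scoring pass: every node (pixel) averages the scale factors
// to its (numPixels - 1) neighbors, each divided by the squared pixel
// distance; outputAll then averages the node outputs with a weight based
// on distance to the image center. Pixel values are assumed nonzero.
static double Score(double[,] pixels)
{
    int h = pixels.GetLength(0), w = pixels.GetLength(1);
    double cx = (w - 1) / 2.0, cy = (h - 1) / 2.0;
    double outputAll = 0, centerWeightSum = 0;

    for (int y = 0; y < h; y++)
    for (int x = 0; x < w; x++)
    {
        double sum = 0;
        int count = 0;
        for (int ny = 0; ny < h; ny++)
        for (int nx = 0; nx < w; nx++)
        {
            if (nx == x && ny == y) continue;              // skip self: numPixels - 1 connections
            double dist2 = (nx - x) * (nx - x) + (ny - y) * (ny - y);
            double factor = pixels[ny, nx] / pixels[y, x]; // scale factor to reach the neighbor
            sum += factor / dist2;                         // ratio over distance squared
            count++;
        }
        double outputValue = sum / count;

        double dc = Math.Sqrt((x - cx) * (x - cx) + (y - cy) * (y - cy));
        double centerWeight = 1.0 / (1.0 + dc);            // assumed center-weighting formula
        outputAll += outputValue * centerWeight;
        centerWeightSum += centerWeight;
    }
    return outputAll / centerWeightSum; // similar images give similar scores
}

double[,] img = { { 0.2, 0.4 }, { 0.6, 0.8 } };
Console.WriteLine(Score(img).ToString("F4"));
```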

Fruit:

On the left are horizontal markers showing groupings of scores; closer scores mean more similar images.

Concept:

  • 25x25 res images
  • 9 pics
  • 625 pixel nodes
  • 390,000 connections (625 x 624)
  • each connection is a ratio of node values over the square of the distance
  • score range: 16.21 to 17.07
  • 0.57 sec on a Mac Pro

Similar patterns have similar scores.