Neural Networks
Table of Contents
Feedforward Network
Back-Propagation Algorithm
Recursive Least Squares
Levenberg-Marquardt
Radial-Basis Functions
Hopfield Network
Recurrent Neural Network
KSOM (Kohonen Self-Organizing Map) can be used to capture the topological map underlying the input data space. It is particularly useful for interpreting data where the input-output relationship is not known, and for extracting features when a large amount of input data is available.
KSOM learning comprises the following steps (a minimal sketch follows the list):
1. Form a 2-D, 3-D, or higher-dimensional lattice in which each node represents a neuron. Each node is assigned a weight vector of the same dimension as the input vector. These weights are randomly initialized in the input space and represent the neuron's location there.
2. Input data are presented to the network one by one, and the neuron at minimum distance from the current input is declared the winner.
3. The weights of this winner neuron, as well as those of its neighbors (defined by a neighborhood function), are updated so that they move closer to the input vector.
4. Steps 2-3 are repeated a large number of times.
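A minimal sketch of these steps in Python/NumPy is given below. It is not the downloadable program itself; the lattice size, the learning-rate and neighborhood schedules, and the toy sinusoidal data set are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Toy input data: points (x, sin(x)) on a sinusoid (assumed data set).
xs = np.linspace(0, 2 * np.pi, 200)
X = np.column_stack([xs, np.sin(xs)])

# Step 1: a 10x10 lattice of neurons; weights random in the input space.
rows, cols, dim = 10, 10, X.shape[1]
W = rng.uniform(X.min(0), X.max(0), (rows, cols, dim))
grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                            indexing="ij"), axis=-1)   # lattice coordinates

epochs, eta0, sigma0 = 50, 0.5, max(rows, cols) / 2.0  # assumed schedules
for epoch in range(epochs):
    eta = eta0 * np.exp(-epoch / epochs)       # decaying learning rate
    sigma = sigma0 * np.exp(-epoch / epochs)   # shrinking neighborhood
    for x in rng.permutation(X):
        # Step 2: winner = neuron whose weight vector is closest to x.
        d2 = ((W - x) ** 2).sum(axis=2)
        win = np.unravel_index(d2.argmin(), d2.shape)
        # Step 3: a Gaussian neighborhood function pulls the winner and
        # its lattice neighbors toward the input vector.
        lat2 = ((grid - grid[win]) ** 2).sum(axis=2)
        h = np.exp(-lat2 / (2 * sigma ** 2))[..., None]
        W += eta * h * (x - W)
    # Step 4: repeat over many presentations of the data.

After training, plotting the weight vectors W over the input points shows the lattice unfolding along the sinusoid, which is the topology-capturing behavior described above.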
Click here to download the program which implements the KSOM algorithm, where a 2-D lattice is used to learn the sinusoidal relationship between input and output. In the adjacent figure, the red dots represent the initial locations of the neuron positions. As training proceeds, these neurons organize themselves so as to capture the topological relationship hidden in the input data. This code was contributed by Balaji Uggirala, a PG student at my lab.
Click here to download the program for estimating the weights of the adjacent recurrent neural network with only one node. This network is trained using the RTRL (real-time recurrent learning) algorithm to learn the following dynamics: y(t+1) = -0.5 y(t) - y(t-1) + 0.5 u(t)
Training data:
u(t) is generated randomly between 0 and 1, and the output is generated using the above equation. Nearly 100 such data points are generated, and this constitutes an epoch. The neural network is then trained over this epoch. The network output is given by
y(t+1) = w1 * y(t) + w2 * y(t-1) + w3 * u(t)
Training Parameters: No. of epochs = 1000, learning rate = 0.003
Final estimated weights: w1 = -0.421636, w2 = -0.947706, w3 = 0.456937
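A minimal sketch of this RTRL training loop in Python/NumPy follows. It is not the downloadable program; the random seed and the weight initialization range are illustrative assumptions. RTRL maintains sensitivities p_i(t) = dy(t)/dw_i and propagates them through the recurrence along with the output.

import numpy as np

rng = np.random.default_rng(0)

# Generate one epoch of training data from the true dynamics in the text.
T = 100
u = rng.uniform(0.0, 1.0, T)
d = np.zeros(T + 1)                 # desired outputs d(t), zero initial state
for t in range(1, T):
    d[t + 1] = -0.5 * d[t] - d[t - 1] + 0.5 * u[t]

# RTRL training of y(t+1) = w1*y(t) + w2*y(t-1) + w3*u(t).
w = rng.uniform(-0.1, 0.1, 3)       # [w1, w2, w3], random initial weights
eta = 0.003                         # learning rate (from the text)
epochs = 1000                       # number of epochs (from the text)

for epoch in range(epochs):
    y = np.zeros(T + 1)             # network output, re-run over the epoch
    p = np.zeros((3, T + 1))        # sensitivities p_i(t) = dy(t)/dw_i
    for t in range(1, T):
        x = np.array([y[t], y[t - 1], u[t]])      # inputs to the node
        y[t + 1] = w @ x
        # RTRL recursion: direct term x plus terms propagated through the
        # recurrent connections w1 and w2.
        p[:, t + 1] = x + w[0] * p[:, t] + w[1] * p[:, t - 1]
        e = d[t + 1] - y[t + 1]
        w += eta * e * p[:, t + 1]                # gradient step on e^2/2

print("estimated weights:", w)      # should approach (-0.5, -1.0, 0.5)

The weights converge toward the coefficients of the true dynamics; the exact values (the text reports -0.421636, -0.947706, 0.456937 for its run) depend on the random input sequence and initialization.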
[Plots: input-output data, weight trajectories, mean square error]