Pattern recognition
This is an implementation of an artificial neural network, composed of perceptrons in a multilayer architecture, that is capable of learning the letters of the alphabet represented as a grid of symbols in a specific arrangement. After the training phase, the system can identify modified or altered versions of the letters or patterns in the grid interface.
INTRODUCTION
A multilayer perceptron is a feedforward artificial neural network model that maps sets of input data onto a set of appropriate outputs. It is a modification of the standard linear perceptron in that it uses three or more layers of neurons (nodes) with nonlinear activation functions, and is more powerful than the perceptron in that it can distinguish data that is not linearly separable.
Activation function
If a multilayer perceptron consists of a linear activation function in all neurons, that is, a simple on-off mechanism to determine whether or not a neuron fires, then it is easily proved with linear algebra that any number of layers can be reduced to the standard two-layer input-output model (see perceptron). What makes a multilayer perceptron different is that each neuron uses a nonlinear activation function which was developed to model the frequency of action potentials, or firing, of biological neurons in the brain. This function is modeled in several ways, but must always be normalizable and differentiable.
The two main activation functions used in current applications are both sigmoids, and are described by

y(vi) = tanh(vi)   and   y(vi) = (1 + e^(−vi))^(−1),
in which the former function is a hyperbolic tangent which ranges from -1 to 1, and the latter is equivalent in shape but ranges from 0 to 1. Here yi is the output of the ith node (neuron) and vi is the weighted sum of the input synapses. More specialized activation functions include radial basis functions which are used in another class of supervised neural network models.
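As a minimal sketch, these two activations, together with the derivatives that backpropagation needs later on, could be written as follows (the function names are illustrative, not from the system):

```python
import math

# Sketch of the two sigmoid activation functions and their derivatives.

def tanh_act(v):
    """Hyperbolic tangent: output ranges from -1 to 1."""
    return math.tanh(v)

def tanh_deriv(v):
    return 1.0 - math.tanh(v) ** 2

def logistic(v):
    """Logistic sigmoid: same S shape, but output ranges from 0 to 1."""
    return 1.0 / (1.0 + math.exp(-v))

def logistic_deriv(v):
    y = logistic(v)
    return y * (1.0 - y)

print(tanh_act(0.0), logistic(0.0))  # 0.0 0.5
```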
The multilayer perceptron consists of three or more layers (an input and an output layer with one or more hidden layers) of nonlinearly-activating nodes. Each node in one layer connects with a certain weight wij to every node in the following layer.
Learning occurs in the perceptron by changing connection weights after each piece of data is processed, based on the amount of error in the output compared to the expected result. This is an example of supervised learning, and is carried out through backpropagation, a generalization of the least mean squares algorithm in the linear perceptron.
We represent the error in output node j at the nth data point by ej(n) = dj(n) − yj(n), where d is the target value and y is the value produced by the perceptron. We then make corrections to the weights of the nodes that minimize the error in the entire output, given by

E(n) = (1/2) Σj ej²(n)
Using gradient descent, we find the change in each weight to be

Δwji(n) = −η (∂E(n)/∂vj(n)) yi(n)
where yi is the output of the previous neuron and η is the learning rate, which is carefully selected to ensure that the weights converge to a response fast enough, without producing oscillations. In programming applications, this parameter typically ranges from 0.2 to 0.8.
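A numeric sketch of a single such weight update follows; every value below is a made-up illustration, not taken from the system:

```python
# Numeric sketch of one gradient-descent weight update.
# All concrete numbers are made-up illustration values.
eta = 0.5        # learning rate, within the typical 0.2-0.8 range
delta_j = 0.12   # local error gradient at node j (assumed)
y_i = 0.8        # output of the previous-layer neuron (assumed)
w_ji = 0.3       # current weight (assumed)

delta_w = eta * delta_j * y_i   # the weight change Delta w_ji
w_ji += delta_w
print(round(delta_w, 3), round(w_ji, 3))  # 0.048 0.348
```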
The derivative to be calculated depends on the input synapse sum vj, which itself varies. It is easy to prove that for an output node this derivative can be simplified to

−∂E(n)/∂vj(n) = ej(n) φ′(vj(n))
where φ′ is the derivative of the activation function described above, which itself does not vary. The analysis is more difficult for the change in weights to a hidden node, but it can be shown that the relevant derivative is

−∂E(n)/∂vj(n) = φ′(vj(n)) Σk (−∂E(n)/∂vk(n)) wkj(n)
This depends on the change in weights of the kth nodes, which represent the output layer. So to change the hidden layer weights, we must first change the output layer weights according to the derivative of the activation function, and so this algorithm represents a backpropagation of the activation function.
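As a sketch of how these equations fit together, here is a minimal one-hidden-layer network trained with backpropagation on XOR, a classic problem that is not linearly separable; the layer sizes, learning rate, epoch count, and seed are all illustrative choices, not values from the system described below.

```python
import numpy as np

# Minimal backpropagation sketch: one hidden layer, logistic
# activations, trained on XOR. All hyperparameters are illustrative.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
D = np.array([[0], [1], [1], [0]], dtype=float)   # target values dj

W1 = rng.normal(0.0, 1.0, (2, 4))   # input -> hidden weights
W2 = rng.normal(0.0, 1.0, (4, 1))   # hidden -> output weights
eta = 0.5                           # learning rate

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

for _ in range(10000):
    # forward pass
    h = sigmoid(X @ W1)                  # hidden-layer outputs
    y = sigmoid(h @ W2)                  # network outputs yj
    # backward pass, following the derivatives above
    e = D - y                            # ej(n) = dj(n) - yj(n)
    delta_out = e * y * (1.0 - y)        # ej * phi'(vj) at the output
    delta_hid = (delta_out @ W2.T) * h * (1.0 - h)  # hidden-node deltas
    # weight updates: Delta w = eta * delta * (previous-layer output)
    W2 += eta * h.T @ delta_out
    W1 += eta * X.T @ delta_hid

print(np.round(y.ravel(), 2))   # outputs should approach [0, 1, 1, 0]
```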
THE SYSTEM
The system interface is shown in Figure 1.
Fig. 1. Interface of the system
The system stores its training data in plain-text format, as can be seen in Figure 2.
Fig. 2. Training patterns
As Figure 2 shows, the patterns are the letters of the English alphabet.
Every 5 characters represent one row of the grid in the system's interface, and each letter pattern is drawn with empty (0) and filled (1) cells.
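A loader for this kind of file could be sketched as follows; the exact layout of entrenamiento.txt is an assumption here (one pattern per block of 0/1 rows, blocks separated by blank lines), and the 5x5 sample letter is purely illustrative:

```python
# Sketch of a loader for a plain-text training file in the format the
# figures suggest: patterns drawn with '0' (empty) and '1' (filled)
# cells, 5 characters per grid row. The exact file layout is assumed.

def load_patterns(text):
    """Parse pattern blocks into flat lists of 0/1 ints."""
    patterns = []
    for block in text.strip().split("\n\n"):
        cells = [int(c) for line in block.splitlines() for c in line.strip()]
        patterns.append(cells)
    return patterns

# A 5x5 letter 'T' drawn as a sample pattern (illustrative):
sample = """11111
00100
00100
00100
00100"""
print(load_patterns(sample)[0][:5])  # first row: [1, 1, 1, 1, 1]
```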
To use the system, it must first be trained: click "cargar datos de entrenamiento" (load training data) and choose the entrenamiento.txt file.
Once the file is loaded, click "entrenar" (train).
Once training is done, you can test the system: try changing the patterns in the left grid. First load one from the combo box; when it appears in the left grid, edit the pattern by marking cells with the letter x, then click "Hacer prueba" (run test) to perform recognition of the pattern.
Figure 3 shows a complete test of the system.
Fig. 3. Testing
As Figure 3 shows, once the net has been trained, pattern number 14 (the letter n) is chosen and displayed in the left grid. The pattern is then edited, and clicking the "hacer prueba" button performs the recognition.
The executable file of this project is available from the download links on this page.