ReLU Activation Function

Activation Function

An activation function is a simple mathematical function that transforms the given input into an output within a certain range. As the name suggests, it activates the neuron when the output reaches the function's set threshold value. In essence, it is responsible for switching the neuron ON or OFF. The neuron receives the sum of the products of the inputs and their (initially arbitrary) weights, along with a static bias for each layer. The activation function is applied to this sum, and an output is generated.
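As an illustration, here is a minimal Python sketch of a single neuron: it computes the weighted sum of its inputs plus a bias and passes the result through a simple threshold activation. The input, weight, and bias values are made up for the example.

import numpy as np

def step_activation(x, threshold=0.0):
    # Switch the neuron ON (1.0) or OFF (0.0) depending on the threshold
    return 1.0 if x >= threshold else 0.0

inputs = np.array([0.5, -1.2, 3.0])   # example inputs (assumed values)
weights = np.array([0.4, 0.1, 0.7])   # example weights (assumed values)
bias = 0.2                            # static bias

weighted_sum = np.dot(inputs, weights) + bias
output = step_activation(weighted_sum)
print(output)  # 1.0, since the weighted sum exceeds the threshold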

Rectified linear activation function (ReLU)

The rectified linear activation function, or ReLU, is a non-linear, piecewise linear function that outputs the input directly if it is positive; otherwise, it outputs zero.

It is the most commonly used activation function in neural networks, especially in Convolutional Neural Networks (CNNs) and Multilayer Perceptrons.

The rectified linear activation function (ReLU) has been shown to lead to very high-performance networks. The function takes a single number as input, returning 0 if the input is negative and the input itself if the input is positive.

Here are some examples:

relu(3) = 3

relu(-3) = 0
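A minimal sketch of this function in Python (the function name relu is chosen here for illustration):

def relu(x):
    # Output the input directly if it is positive, otherwise output zero
    return x if x > 0 else 0

print(relu(3))   # 3
print(relu(-3))  # 0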

Large networks use nonlinear activation functions such as ReLU in their deep layers. During training, the error is backpropagated and used to update the weights, and it is propagated through each layer using the derivative of the chosen activation function. For negative inputs, however, the gradient of the ReLU is zero; this lack of gradient means inactive nodes stop receiving updates, creating the dying (vanishing) gradient problem, and network learning stalls.
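To make the issue concrete, here is a small sketch of the ReLU derivative used during backpropagation (the names and values are illustrative). For any negative pre-activation the gradient is zero, so no update flows through that node:

def relu_derivative(x):
    # Gradient of ReLU: 1 for positive inputs, 0 otherwise
    return 1.0 if x > 0 else 0.0

pre_activation = -2.5   # a node stuck in the negative region
upstream_error = 0.8    # error arriving from the next layer
local_gradient = upstream_error * relu_derivative(pre_activation)
print(local_gradient)   # 0.0, so no weight update flows through this node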

To prevent this problem, a small linear component is applied to negative inputs so that the gradient of the ReLU graph never becomes exactly zero; this variant is commonly known as Leaky ReLU.
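A minimal sketch of this fix in Python (the slope value 0.01 is a typical but assumed choice):

def leaky_relu(x, slope=0.01):
    # Keep a small linear slope for negative inputs so the output is never flat at zero
    return x if x > 0 else slope * x

def leaky_relu_derivative(x, slope=0.01):
    # Gradient is 1 for positive inputs and the small slope otherwise
    return 1.0 if x > 0 else slope

print(leaky_relu(-3))             # -0.03
print(leaky_relu_derivative(-3))  # 0.01, non-zero, so learning can continue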

Conclusion

We hope this gives you a clear idea of the rectified linear activation function.