A recurrent neural network looks very much like a feedforward neural network, except it also has connections pointing backward. Let’s look at the simplest possible RNN, composed of just one neuron receiving inputs, producing an output, and sending that output back to itself.
Since the output of a recurrent neuron at time step t is a function of all the inputs from previous time steps, you could say it has a form of memory: the state it passes forward (its own previous output) is how it stores that information.
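To make this concrete, here is a minimal NumPy sketch of that single recurrent neuron; the weights w_x, w_y, and the bias b are made-up illustrative values, not anything from the text:

```python
import numpy as np

w_x, w_y, b = 0.5, 0.8, 0.0   # input weight, recurrent weight, bias (made up)

def run_recurrent_neuron(inputs):
    y = 0.0                    # initial state: the "output" at t = -1
    outputs = []
    for x_t in inputs:
        # y_t = tanh(w_x * x_t + w_y * y_{t-1} + b): the previous output
        # feeds back in, which is what gives the neuron its memory
        y = np.tanh(w_x * x_t + w_y * y + b)
        outputs.append(float(y))
    return outputs

print(run_recurrent_neuron([1.0, 0.5, -0.3, 0.9]))
```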
An RNN can simultaneously take a sequence of inputs and produce a sequence of outputs (a sequence-to-sequence network).
This type of network is useful for predicting time series such as stock prices: you feed it the prices over the last N days, and it must output the prices shifted by one day into the future (i.e., from N – 1 days ago to tomorrow).
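A rough Keras sketch of such a sequence-to-sequence forecaster might look like the following; the layer size, the choice of SimpleRNN, and the shapes are assumptions for illustration:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(None, 1)),       # N days, 1 feature (the price)
    tf.keras.layers.SimpleRNN(20, return_sequences=True),  # output at every step
    tf.keras.layers.Dense(1),              # one predicted price per time step
])
model.compile(loss="mse", optimizer="adam")
# X: prices over the last N days, shape [batch, N, 1]
# Y: the same series shifted one day into the future, shape [batch, N, 1]
# model.fit(X, Y, epochs=20)
```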
Alternatively, you could feed the network a sequence of words corresponding to a movie review and have it output a single sentiment score (a sequence-to-vector network).
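A sketch of such a sequence-to-vector sentiment model; the vocabulary size and dimensions are again assumed values:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(None,)),                   # a review as word IDs
    tf.keras.layers.Embedding(input_dim=10_000, output_dim=32),
    tf.keras.layers.SimpleRNN(32),                   # no return_sequences: keep only the last output
    tf.keras.layers.Dense(1, activation="sigmoid"),  # sentiment score in [0, 1]
])
model.compile(loss="binary_crossentropy", optimizer="adam")
```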
The reverse is a vector-to-sequence network. An example comes from image processing: the input could be an image, and the output could be a caption for that image.
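One plausible way to wire a vector-to-sequence network in Keras is to repeat the input vector at every time step; the RepeatVector approach, the use of a precomputed image feature vector, and all dimensions here are illustrative assumptions:

```python
import tensorflow as tf

caption_len, vocab_size = 20, 5_000
model = tf.keras.Sequential([
    tf.keras.Input(shape=(512,)),               # one image feature vector
    tf.keras.layers.RepeatVector(caption_len),  # feed it at every time step
    tf.keras.layers.SimpleRNN(128, return_sequences=True),
    tf.keras.layers.Dense(vocab_size, activation="softmax"),  # one word per step
])
```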
You could also have a sequence-to-vector network, called an encoder, followed by a vector-to-sequence network, called a decoder (see the bottom-right network in the figure linked at the end of this section). For example, this can be used for translating a sentence from one language to another.
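A minimal encoder-decoder sketch, where the encoder's final state initializes the decoder; the vocabulary sizes, dimensions, and the setup of feeding the target sequence to the decoder during training are all assumptions:

```python
import tensorflow as tf

src_vocab, tgt_vocab, dim = 8_000, 8_000, 128

# Encoder: sequence-to-vector (only the final state is kept)
enc_in = tf.keras.Input(shape=(None,))
enc_emb = tf.keras.layers.Embedding(src_vocab, dim)(enc_in)
_, enc_state = tf.keras.layers.SimpleRNN(dim, return_state=True)(enc_emb)

# Decoder: vector-to-sequence, started from the encoder's final state
dec_in = tf.keras.Input(shape=(None,))
dec_emb = tf.keras.layers.Embedding(tgt_vocab, dim)(dec_in)
dec_seq = tf.keras.layers.SimpleRNN(dim, return_sequences=True)(
    dec_emb, initial_state=enc_state)
probs = tf.keras.layers.Dense(tgt_vocab, activation="softmax")(dec_seq)

model = tf.keras.Model([enc_in, dec_in], probs)
```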
This brings us to the vanishing (and exploding) gradient problem, which causes learning to stall prematurely. Note that the RNN's mathematical formula has a recurrence: during backpropagation through time, the gradient is multiplied by the same recurrent weight at every time step, which is what causes the problem.
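A tiny NumPy illustration of that effect, under assumed values (and simplified by evaluating the tanh derivative at a fixed point): the per-step factor is below 1, so the gradient shrinks exponentially with the number of time steps:

```python
import numpy as np

w_y = 0.8                  # recurrent weight (illustrative value < 1)
y = 0.5                    # some fixed hidden-state value (simplification)
grad = 1.0
for t in range(50):        # 50 time steps of backpropagation through time
    grad *= w_y * (1 - np.tanh(y) ** 2)   # the per-step factor dy_t/dy_{t-1}
print(grad)                # ~1e-10: the gradient has all but vanished
```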
Both the RNN and its derivative, the LSTM, suffer from this problem (the LSTM only mitigates it). Due to this, they have largely fallen out of use in NLP.
Figure: https://images.app.goo.gl/k5ndCMjPtampELqW7