Research Blogs
In this age of the AI revolution, Recurrent Neural Networks (RNNs) have emerged as one of the most powerful techniques for learning from sequential data. Through a process called Back-propagation Through Time (BPTT), these models propagate errors backward through time to learn the model parameters.
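To make the idea concrete, here is a minimal sketch of BPTT for a vanilla tanh RNN. All dimensions, weights, and the toy loss are illustrative assumptions, not taken from the post:

```python
import numpy as np

# Minimal BPTT sketch for a vanilla RNN: h_t = tanh(Wx x_t + Wh h_{t-1}).
# Sizes, weights, and the toy loss are illustrative, not from the post.
rng = np.random.default_rng(0)
T, n_in, n_h = 5, 3, 4                # sequence length, input size, hidden size
Wx = rng.normal(0, 0.1, (n_h, n_in))  # input-to-hidden weights
Wh = rng.normal(0, 0.1, (n_h, n_h))   # hidden-to-hidden weights
xs = rng.normal(size=(T, n_in))       # a toy input sequence
target = rng.normal(size=n_h)         # toy target for the final hidden state

# Forward pass, keeping every hidden state for the backward pass
hs = [np.zeros(n_h)]
for t in range(T):
    hs.append(np.tanh(Wx @ xs[t] + Wh @ hs[-1]))

# Loss on the last hidden state: L = 0.5 * ||h_T - target||^2
dL_dh = hs[-1] - target

# Backward pass: the error is propagated through every time step
dWx, dWh = np.zeros_like(Wx), np.zeros_like(Wh)
for t in reversed(range(T)):
    dpre = dL_dh * (1 - hs[t + 1] ** 2)  # through the tanh non-linearity
    dWx += np.outer(dpre, xs[t])         # gradient w.r.t. input weights
    dWh += np.outer(dpre, hs[t])         # gradient w.r.t. recurrent weights
    dL_dh = Wh.T @ dpre                  # error sent back one step in time
```

The repeated multiplication by `Wh.T` in the backward loop is exactly where the vanishing-gradient problem mentioned above comes from: the error signal reaching early time steps can shrink geometrically with sequence length.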
However, these models are criticized for their vulnerability to vanishing gradients and for being difficult to train, particularly when the network is highly non-linear. Meanwhile, current research has focused on developing biologically plausible learning algorithms that better emulate the human brain, which learns with greater accuracy and efficiency than most algorithms. BPTT-based RNNs lack biological plausibility because their backward pass is purely linear and relies on a backwards network with weights symmetric to the forward ones.
Here, we investigate credit assignment in RNNs using a method known as Target Propagation Through Time (TPTT), which approaches credit assignment by defining targets for each layer in time rather than propagating errors backward. Unlike earlier BPTT-based RNNs, this approach mitigates the biological-plausibility concerns and allows non-linearities in the network.
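The target-propagation idea above can be sketched as follows. This is only an illustration under assumed details: the inverse function `g`, its weights `V`, and the step size are hypothetical placeholders (the real method trains the inverse and is described in the linked post):

```python
import numpy as np

# Sketch of target propagation through time: instead of propagating error
# gradients backward, propagate *targets* for each hidden state in time.
# The inverse g, its weights V, and the step size are illustrative assumptions.
rng = np.random.default_rng(1)
T, n_in, n_h = 5, 3, 4
Wx = rng.normal(0, 0.1, (n_h, n_in))
Wh = rng.normal(0, 0.1, (n_h, n_h))
V = rng.normal(0, 0.1, (n_h, n_h))    # weights of the approximate inverse g
xs = rng.normal(size=(T, n_in))
target = rng.normal(size=n_h)          # toy target for the final hidden state

def f(x, h):                           # forward step of the RNN
    return np.tanh(Wx @ x + Wh @ h)

def g(h_next):                         # approximate inverse of the forward step
    return np.tanh(V @ h_next)

# Forward pass
hs = [np.zeros(n_h)]
for t in range(T):
    hs.append(f(xs[t], hs[-1]))

# Backward sweep of targets: the last target nudges h_T toward the desired
# output; earlier targets come from the inverse g, with the "difference"
# correction used in difference target propagation.
step = 0.1
h_hat = [None] * (T + 1)
h_hat[T] = hs[T] - step * (hs[T] - target)
for t in reversed(range(1, T)):
    h_hat[t] = hs[t] + g(h_hat[t + 1]) - g(hs[t + 1])

# Each time step now has a purely local objective ||f(x_t, h_{t-1}) - h_hat_t||^2
# that can be minimized without a global backward pass through the network.
local_losses = [0.5 * np.sum((f(xs[t], hs[t]) - h_hat[t + 1]) ** 2)
                for t in range(T)]
```

Because each time step only has to match its own target, there is no long chain of linear backward multiplications, which is what lets this family of methods sidestep the vanishing-gradient and weight-symmetry issues of BPTT.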
Click here to read in detail.