P-MSTRNN

Supplementary page for
Predictive Coding for Dynamic Vision: Development of Functional Hierarchy in a Multiple Spatio-Temporal Scales RNN Model
Minkyu Choi and Jun Tani

Video1 - six primitives

Supplementary Video1. This video shows the six types of movement patterns used in the experiments, reconstructed in a closed-loop manner, together with the network's neural activations. Each vertical column of panels corresponds to a context layer of the network: higher layers are placed on the left and lower layers on the right, and the rightmost panel is the prediction output. As can be seen in the video, the lower layers on the right change their neural activations quickly, while the higher layers change slowly. A minimal sketch of this timescale hierarchy is given below.
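The fast/slow contrast between lower and higher layers comes from layer-specific time constants in the context dynamics. The sketch below is an illustration only, assuming a standard leaky-integrator (CTRNN-style) update; the actual P-MSTRNN layers are convolutional, and the layer sizes, weights, and time constants here are placeholders, not the values used in the paper.

```python
import numpy as np

def leaky_update(u, x_in, W_rec, W_in, tau):
    """One leaky-integrator step: a larger tau makes the layer change more slowly."""
    z = np.tanh(u)                                 # current activation
    du = -u + W_rec @ z + W_in @ x_in              # recurrent + bottom-up drive
    return u + du / tau                            # slow layers integrate gradually

# Placeholder sizes and time constants: a fast lower layer and a slow higher layer.
rng = np.random.default_rng(0)
n_low, n_high, n_in = 20, 10, 8
u_low, u_high = np.zeros(n_low), np.zeros(n_high)
W_low = rng.normal(0, 0.1, (n_low, n_low));  W_low_in = rng.normal(0, 0.1, (n_low, n_in))
W_high = rng.normal(0, 0.1, (n_high, n_high)); W_high_in = rng.normal(0, 0.1, (n_high, n_low))

for t in range(100):
    frame = rng.normal(size=n_in)                  # stand-in for a video-frame feature
    u_low = leaky_update(u_low, frame, W_low, W_low_in, tau=2.0)                    # fast timescale
    u_high = leaky_update(u_high, np.tanh(u_low), W_high, W_high_in, tau=50.0)      # slow timescale
```

With these settings the lower layer's activations track the input almost frame by frame, while the higher layer's activations drift slowly, mirroring the left-to-right speed gradient visible in the video.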

Video2 - Predictive Imitation

Video3 - Predictive Imitation (PCA of neural activations)

Supplementary Video2 (top) and Video3 (bottom). These two videos show the results of predictive imitation. As the network receives input frame streams from outside, it imitates those streams by actively predicting future frames. Video2 shows the network's neural activations while it performs predictive imitation. Video3 shows the same neural activations processed with Principal Component Analysis. When the pattern changes from one to another, the error between the prediction and the target rises sharply and then decreases as the network adapts to the new input.
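The PCA projection and the error trace shown in the videos can be reproduced from recorded activations and frames along the lines of the sketch below. This is not the authors' analysis code; the file names, array shapes, and the mean-squared-error choice are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical recorded data: context activations (T x units) and the
# predicted / target frame sequences (T x height x width), file names are placeholders.
activations = np.load("context_activations.npy")
pred = np.load("predicted_frames.npy")
target = np.load("target_frames.npy")

# Per-frame prediction error: spikes when the movement pattern switches,
# then decays as the network adapts to the new input stream.
error = ((pred - target) ** 2).mean(axis=(1, 2))

# Project the high-dimensional context activity onto its first two principal
# components to visualize the trajectory, as in Video3.
pca = PCA(n_components=2)
trajectory = pca.fit_transform(activations)        # shape (T, 2)
```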


Supplementary Table1. Network size used in the experiments.





