"Exploring Robotic Minds: Actions, Symbols, and Consciousness as Self-Organizing Dynamic Phenomena" by Jun Tani, Oxford University Press

OUP page

Videos

Videos related to the robotics experiments described in the book can be viewed below.

-- Section 7.1 (related to Fig. 7.3)

A recurrent neural network model driven by chaos dynamically searches for optimal plans for sensory-motor sequence patterns that reach a specified goal. This video shows the RNN generating a set of possible sensory-motor sequence plans with different step sizes while dynamically minimizing the cost function.

After a set of possible sensory-motor sequence plans for reaching the specified goal has been generated, two of them are enacted. The corner at the bottom on the right-hand side is the specified goal. Note that the second action is somewhat redundant but still reaches the goal.
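The following is a minimal sketch of this style of goal-directed plan search, assuming a pre-trained forward-model RNN: a candidate motor sequence is adjusted so that the predicted terminal sensory state approaches the goal. All names, sizes, and the simple numerical-gradient optimizer are illustrative assumptions, not the model used in the book.

    # Sketch: plan search by minimizing a goal-distance cost through an RNN forward model.
    import numpy as np

    rng = np.random.default_rng(0)

    def predict_next(state, motor, W_s, W_m):
        # one forward step of the (assumed pre-trained) RNN dynamics
        return np.tanh(W_s @ state + W_m @ motor)

    def plan_cost(motor_seq, init_state, goal_state, W_s, W_m):
        state = init_state
        for motor in motor_seq:
            state = predict_next(state, motor, W_s, W_m)
        # cost: distance between the predicted terminal state and the goal
        return np.sum((state - goal_state) ** 2)

    def search_plan(init_state, goal_state, W_s, W_m, steps=10, iters=300, lr=0.05):
        # start from a random (chaos-like) initial guess for the motor sequence
        motor_seq = rng.standard_normal((steps, W_m.shape[1])) * 0.1
        eps = 1e-4
        for _ in range(iters):
            # numerical gradient of the cost with respect to the motor sequence
            base = plan_cost(motor_seq, init_state, goal_state, W_s, W_m)
            grad = np.zeros_like(motor_seq)
            for idx in np.ndindex(motor_seq.shape):
                perturbed = motor_seq.copy()
                perturbed[idx] += eps
                grad[idx] = (plan_cost(perturbed, init_state, goal_state, W_s, W_m) - base) / eps
            motor_seq -= lr * grad
        return motor_seq

    # toy dimensions: 4-dim sensory state, 2-dim motor command
    W_s = rng.standard_normal((4, 4)) * 0.5
    W_m = rng.standard_normal((4, 2)) * 0.5
    plan = search_plan(np.zeros(4), np.array([0.5, -0.2, 0.1, 0.0]), W_s, W_m)
    print(plan.shape)  # (10, 2): one candidate sensory-motor plan of 10 steps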

-- Section 7.2 (related to Fig. 7.7)

A mobile robot learns to predict the visual landmarks it will encounter next while exploring a given workspace. After exploring the environment for a certain period, the robot begins to rehearse the experienced image sequences as a form of memory consolidation.
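A minimal sketch of the two phases described above, assuming a simple recurrent predictor: open-loop learning of the next landmark during exploration, then closed-loop "rehearsal" in which the network feeds its own predictions back as input. The one-hot landmark coding, network sizes, and training of only the readout weights are simplifying assumptions, not the original implementation.

    import numpy as np

    rng = np.random.default_rng(1)
    n_landmarks, hidden = 4, 16

    # toy experienced landmark sequence (one-hot codes), standing in for vision
    seq = np.eye(n_landmarks)[rng.integers(0, n_landmarks, size=50)]

    W_in = rng.standard_normal((hidden, n_landmarks)) * 0.1
    W_rec = rng.standard_normal((hidden, hidden)) * 0.1
    W_out = rng.standard_normal((n_landmarks, hidden)) * 0.1

    def step(h, x):
        h = np.tanh(W_rec @ h + W_in @ x)
        return h, W_out @ h

    # phase 1: open-loop prediction learning during exploration
    lr = 0.05
    for _ in range(200):
        h = np.zeros(hidden)
        for t in range(len(seq) - 1):
            h, y = step(h, seq[t])
            W_out += lr * np.outer(seq[t + 1] - y, h)

    # phase 2: closed-loop rehearsal of the experienced sequence
    h, x = np.zeros(hidden), seq[0]
    rehearsed = [x]
    for _ in range(20):
        h, y = step(h, x)
        x = np.eye(n_landmarks)[np.argmax(y)]  # feed the prediction back as input
        rehearsed.append(x)
    print(np.argmax(np.array(rehearsed), axis=1))  # rehearsed landmark indices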

-- Section 8.4 (related to Fig. 8.7)

A mobile robot equipped with a hand learns to associate primitive sentences with corresponding behaviors, with a certain level of generalization. In the video, the robot recognizes the sentence “hit red” and generates the corresponding behavior. The robot was implemented with the RNNPB model.
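The following is an illustrative sketch of the RNNPB idea: a single recurrent network generates different behavior sequences depending on a small parametric bias (PB) vector, and a sentence is associated with a behavior by binding both to the same PB value. The network sizes, the fixed PB values, and the sentence dictionary are assumptions for illustration only.

    import numpy as np

    rng = np.random.default_rng(2)
    hidden, motor_dim, pb_dim = 16, 3, 2

    W_rec = rng.standard_normal((hidden, hidden)) * 0.1
    W_pb = rng.standard_normal((hidden, pb_dim)) * 0.5
    W_out = rng.standard_normal((motor_dim, hidden)) * 0.3

    def generate(pb, steps=20):
        # the PB vector biases the recurrent dynamics at every step,
        # selecting which learned behavior primitive unfolds
        h = np.zeros(hidden)
        traj = []
        for _ in range(steps):
            h = np.tanh(W_rec @ h + W_pb @ pb)
            traj.append(W_out @ h)
        return np.array(traj)

    # hypothetical mapping from learned sentences to PB values; in the trained
    # model such values are acquired, not hand-set as here
    pb_for_sentence = {"hit red": np.array([0.8, -0.3]),
                       "push blue": np.array([-0.6, 0.5])}

    behavior = generate(pb_for_sentence["hit red"])
    print(behavior.shape)  # (20, 3): a motor trajectory selected via the sentence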

-- Section 9.2 (related to Fig. 9.6)

A humanoid robot learns to acquire a set of primitive behaviors and their combinations in a developmental manner. This experiment uses the MTRNN model, which is characterized by its composition of fast- and slow-dynamics parts. The video shows three stages of the developmental learning.
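A minimal sketch of the multiple-timescale idea behind the MTRNN: context units are leaky integrators, with the fast group using a small time constant and the slow group a large one, so that slow units can sequence the primitives carried by fast units. The unit counts, time-constant values, and random weights below are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(3)
    n_fast, n_slow, in_dim = 20, 10, 4
    tau = np.concatenate([np.full(n_fast, 2.0),    # fast-dynamics units
                          np.full(n_slow, 50.0)])  # slow-dynamics units
    n = n_fast + n_slow

    W_rec = rng.standard_normal((n, n)) * 0.1
    W_in = rng.standard_normal((n, in_dim)) * 0.1

    def mtrnn_step(u, x):
        # leaky-integrator update: the time constant tau sets how quickly
        # each unit's internal state follows its synaptic input
        h = np.tanh(u)
        return (1.0 - 1.0 / tau) * u + (1.0 / tau) * (W_rec @ h + W_in @ x)

    u = np.zeros(n)
    for t in range(100):
        u = mtrnn_step(u, np.sin(np.linspace(0, 1, in_dim) + 0.1 * t))
    print(np.round(np.tanh(u[:3]), 3), np.round(np.tanh(u[-3:]), 3))
    # fast units change quickly from step to step; slow units drift gradually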

-- Section 9.3 (related to Fig. 9.8)

A simulated humanoid robot was tutored to grasp one of several objects specified by a human subject's gesture. This video shows that, after tutoring, the simulated robot grasps an object on the left-hand side, as pointed to by the human subject appearing on the video screen. The right-hand panel shows dynamic neural activation patterns in the MSTNN model while it receives the video image stream.

-- Section 9.3 (related to an experiment shown in Choi & Tani [2016])

The MSTRNN, a predictive-coding-type dynamic neural network model, was trained on video images of three differently concatenated human movement sequence patterns. This video shows the model's regeneration of these learned dynamic visual patterns.
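An illustrative sketch of the predictive-coding style of operation described above (not the MSTRNN itself): at each step the model predicts the next frame, an internal state is corrected by the prediction error, and running the same dynamics closed-loop regenerates a learned visual sequence. The simple linear observation mapping, random weights, and stand-in "frames" are assumptions for brevity.

    import numpy as np

    rng = np.random.default_rng(4)
    latent, frame_dim = 8, 6

    A = rng.standard_normal((latent, latent)) * 0.2     # latent dynamics (assumed trained)
    C = rng.standard_normal((frame_dim, latent)) * 0.3  # latent -> predicted frame

    def predict(z):
        z_next = np.tanh(A @ z)
        return z_next, C @ z_next

    # recognition: update the latent state from prediction errors on observed frames
    frames = rng.standard_normal((30, frame_dim)) * 0.1  # stand-in video stream
    z, lr = np.zeros(latent), 0.1
    for frame in frames:
        z_next, pred = predict(z)
        err = frame - pred
        z = z_next + lr * (C.T @ err)   # error-driven correction of the internal state

    # regeneration: run the dynamics closed-loop from the recognized state
    regenerated = []
    for _ in range(10):
        z, pred = predict(z)
        regenerated.append(pred)
    print(np.array(regenerated).shape)  # (10, 6): regenerated frame sequence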

-- Section 10.1 (related to Fig. 10.1)

-- Section 10.1 (related to Fig. 10.4)

The first video shows visual imagery for a rational combination of two actions: laying down an object and then grasping it with both hands to put it on a table. The second video shows the generation of faulty imagery in which the lying object suddenly stands up after being put on the table, which is physically impossible.

-- Section 10.2 (related to Fig. 10.8)