Mirco Mutti, Mattia Mancassola, and Marcello Restelli
I.2. The Significance of Reward in Reward-Based Exploration
David Isele
I.3. Curious Exploration in Complex Environments Based on Hopfield Networks
Menachem Stern, Clélia de Mulatier, Philipp Fleig, and Vijay Balasubramanian
Rania Rayyes, Heiko Donat, and Jochen Steil
I.5. Intrinsically-Motivated Robot Learning of Bayesian Probabilistic Movement Primitives
Thibaut Kulak and Sylvain Calinon
I.6. Making Curiosity Explicit in Vision-based RL
Elie Aljalbout, Maximilian Ulmer, and Rudolph Triebel
I.7. Self-Improving Semantic Perception for Indoor Localisation
Hermann Blum, Francesco Milano, René Zurbrügg, Roland Siegwart, Cesar Cadena, and Abel Gawel
II.1. Curiosity-Driven Learning of Abstract Plan Feasibility
Michael Noseworthy, Caris Moses, Isaiah Brand, Sebastian Castro, Leslie Kaelbling, Tomás Lozano-Pérez, and Nicholas Roy
II.2. Learning Novel Objects Continually Through Curiosity
Ali Ayub and Alan R. Wagner
II.3. Curiosity-Driven Decision-Making for Autonomous Robots in Uncertain Environments
Ran Tian, Haiming Gang, and David Isele
II.4. Influencing Behavioral Attributions to Robot Motion During Task Execution
Nick Walker, Christoforos Mavrogiannis, Siddhartha Srinivasa, and Maya Cakmak
II.5. Fractional Binding in Vector Symbolic Representations for Efficient Mutual Information Exploration
P. Michael Furlong, Terrence C. Stewart, and Chris Eliasmith
II.6. Curiosity in Path-Planning: Synthesizing Path-Planners for Efficient Exploration
Lucas Saldyt and Heni Ben Amor
II.7. Adversarial Intrinsic Motivation for Reinforcement Learning
Ishan Durugkar, Scott Niekum, and Peter Stone