Accepted papers

No.1 Construction of a Multimodal Learning Model Based on Integrating Stochastic Models

Ryo Kuniyasu, Tomoaki Nakamura, Takayuki Nagai, and Tadahiro Taniguchi

No.2 Measuring Task Uncertainty in Meta-Imitation Learning

Tatsuya Matsushima, Naruya Kondo, Yusuke Iwasawa, Kaoru Nasuno, and Yutaka Matsuo

No.3 Integrating Simultaneous Localization and Mapping with Map Completion Using Generative Adversarial Networks

Yuki Katsumata, Lotfi El Hafi, Akira Taniguchi, Yoshinobu Hagiwara, and Tadahiro Taniguchi

No.4 Cognitive Architecture for Joint Attentional Learning of Word-Object Mapping with a Humanoid Robot

Jonas Gonzalez-Billandon, Lukas Grasse, Alessandra Sciutti, Matthew Tata, and Francesco Rea

No.5 Combining Causal Generative Model and Deep Reinforcement Learning for Cognitive Agents in Minecraft

Andrew Melnik, Lennart Bramlage, Hendric Voss, Federico Rossetto, and Helge Ritter

No.6 A Perceived Environment Design Using a Multi-Modal Variational Autoencoder for Learning Active-Sensing

Timo Korthals, Daniel Rudolph, Malte Schilling, and Jürgen Leitner

No.7 Learning Deep Features for Multi-Modal Inference With Robotic Data

Atabak Dehban, Lorenzo Jamone, and José Santos-Victor

No.8 Integration of Multiple Generative Modules for Robot Learning

Kazuki Miyazawa, Tatsuya Aoki, Takato Horii, and Takayuki Nagai