CALL FOR PAPERS

IROS 2019 Workshop on Deep Probabilistic Generative Models for Cognitive Architecture in Robotics

Web: https://sites.google.com/site/dpgmcar2019/

Submission deadline: September 20, 2019 -> October 6, 2019 [Extended]

OBJECTIVES

==========

The goal of this workshop is to bring together researchers from robotics and machine learning to share knowledge about deep and probabilistic generative models toward developing a future cognitive architecture for robots. The workshop also aims to examine the challenges and opportunities emerging from the interdisciplinary research field covering machine learning, cognitive science, and robotics. Successes in deep learning have enabled robots to recognize their environments, e.g., through visual and speech recognition, and to learn behaviors efficiently, e.g., through reinforcement and imitation learning. However, most of these successes depend heavily on labeled data or hand-crafted reward functions that must be prepared before learning.

In this workshop, we will investigate how we can create a cognitive architecture for a robot using deep and probabilistic generative models. To this end, we aim to share knowledge about state-of-the-art machine learning methods that contribute to modeling language-related capabilities in robotics, and to exchange views among cutting-edge robotics researchers, with special emphasis on the use of deep generative models in robotics and on modeling a wide range of cognitive capabilities with probabilistic generative models. The workshop will include keynote presentations from established researchers in robotics, machine learning, and cognitive science, and a poster session highlighting contributed papers throughout the day.

INVITED SPEAKERS

================

We will have distinguished speakers who are forerunners of this interdisciplinary research effort.

Kenji Doya, OIST

Igor Mordatch, OpenAI

Masahiro Suzuki, The University of Tokyo

Tomoaki Nakamura, The University of Electro-Communications

Douwe Kiela, Facebook AI Research USA

SUBMISSION

==========

Submissions must be in PDF, follow the two-column IEEE conference style, and be limited to 2 pages:

http://ras.papercept.net/conferences/support/support.php

All submissions will be peer-reviewed. Accepted papers will be presented during the workshop in a poster session. A number of selected papers will also be presented as oral presentations or spotlight talks.

Send your PDF manuscript, indicating [DPGMfCAR 2019] in the subject line, to the following email address: dpgmfcar[at]rlg.sys.es.osaka-u.ac.jp

TOPICS

======

We invite contributions on the following topics, which are indicative but by no means exhaustive:

* Language acquisition by robots

* Symbol grounding/emergence in robotics

* Multimodal communication

* Emergence of communication

* Learning complex motor skills and segmentation of time-series information

* Concept formation

* Probabilistic programming and reasoning in robotics

* Human-robot communication and collaboration based on machine learning

* Deep learning for robotics

* Bayesian modeling for high-level cognitive capabilities

* Applications to communicative service robots

* Language understanding by robots

IMPORTANT DATES

===============

Submission of abstracts: September 20, 2019 -> October 6, 2019 [Extended]

Notification of acceptance: September 30, 2019 -> October 12, 2019 [For extended deadline]

Workshop: November 8, 2019

ORGANIZERS

==========

* Takato Horii, Osaka University

* Tadahiro Taniguchi, Ritsumeikan University

* Tetsunari Inamura, National Institute of Informatics

* Lorenzo Jamone, Queen Mary University of London

* Takayuki Nagai, Osaka University

* Yiannis Demiris, Imperial College London

PROGRAM COMMITTEE MEMBERS

=========================

* Xavier Hinaut, INRIA

* Michael Spranger, Sony Computer Science Laboratories Inc.

* Emre Ugur, Bogazici University

* Yoshinobu Hagiwara, Ritsumeikan University

* Tetsuya Ogata, Waseda University

CONTACT

=======

Takato Horii,

Osaka University, Japan

takato@sys.es.osaka-u.ac.jp
