Continual Lifelong Learning

Humans and animals have the ability to continually acquire, fine-tune, and transfer knowledge throughout their lifespan. This ability, referred to as lifelong learning, is mediated by a rich set of neurocognitive mechanisms that together contribute to the development and specialization of our sensorimotor skills as well as to long-term memory consolidation and retrieval without catastrophic forgetting. Consequently, lifelong learning capabilities are crucial for computational learning systems and autonomous agents that interact in the real world and process continuous streams of information. However, lifelong learning remains a long-standing challenge for machine learning and neural network models, since the continual acquisition of incrementally available information from non-stationary data distributions generally leads to catastrophic forgetting or interference. This limitation represents a major drawback for state-of-the-art deep and shallow neural network models, which typically learn representations from stationary batches of training data.
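The forgetting effect described above can be reproduced in a few lines. The following is a hypothetical minimal sketch (plain NumPy logistic regression; all task names and data are invented for illustration): a model is trained sequentially on two tasks whose input regions conflict, and its accuracy on the first task collapses after training on the second.

```python
# Minimal sketch of catastrophic forgetting (illustrative, not from the
# original work): a logistic-regression "network" trained sequentially on
# two conflicting tasks loses most of its accuracy on the first task.
import numpy as np

rng = np.random.default_rng(0)

def make_task(pos_center, neg_center, n=100):
    """Two Gaussian blobs with std 0.3; positives near pos_center."""
    pos = rng.normal(pos_center, 0.3, size=(n, 2))
    neg = rng.normal(neg_center, 0.3, size=(n, 2))
    X = np.vstack([pos, neg])
    y = np.concatenate([np.ones(n), np.zeros(n)])
    return X, y

def train(w, b, X, y, lr=0.5, epochs=300):
    """Plain gradient descent on the cross-entropy loss."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    return np.mean(((X @ w + b) > 0) == y)

# Task B's positive region coincides with task A's negative region,
# i.e. the data distribution is non-stationary across tasks.
XA, yA = make_task([2.0, 0.0], [0.0, 0.0])
XB, yB = make_task([0.0, 0.0], [-2.0, 0.0])

w, b = np.zeros(2), 0.0
w, b = train(w, b, XA, yA)
acc_A_before = accuracy(w, b, XA, yA)   # near-perfect after training on A
w, b = train(w, b, XB, yB)
acc_A_after = accuracy(w, b, XA, yA)    # drops toward chance: A forgotten
print(acc_A_before, acc_A_after)
```

Training jointly on shuffled batches of both tasks would avoid the drop; it is precisely the sequential, non-stationary presentation that causes the interference.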

Multisensory Integration and Conflict Resolution

The ability to efficiently process multisensory information is a key feature of the human brain, supporting a robust perceptual experience and reliable behavioural responses. Consequently, the processing and integration of multisensory information streams such as vision, audio, haptics, and proprioception play a crucial role in the development of autonomous agents and cognitive robots, enabling efficient interaction with the environment even under conditions of sensory uncertainty. Our interdisciplinary research work provides important insights into how multisensory integration and conflict resolution can be modelled in robots and introduces future research directions for the efficient combination of sensory observations with internally generated knowledge and expectations.
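One classical computational account of integration under sensory uncertainty is maximum-likelihood cue combination, in which each modality contributes a noisy estimate of the same quantity and the estimates are fused with weights inversely proportional to their variances. The sketch below is a generic illustration of that model, not the specific method of this research; the modality names and numbers are invented.

```python
# Illustrative inverse-variance weighted cue fusion (maximum-likelihood
# integration of independent Gaussian cues) -- a generic textbook model,
# not the specific method developed in this research.
import numpy as np

def fuse(estimates, variances):
    """Fuse independent Gaussian cues; returns (fused mean, fused variance)."""
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = (1.0 / variances) / np.sum(1.0 / variances)
    fused_mean = np.sum(weights * estimates)
    fused_var = 1.0 / np.sum(1.0 / variances)   # never exceeds the best cue
    return fused_mean, fused_var

# Hypothetical example: vision estimates an object at 10.0 cm (variance 1.0),
# haptics at 12.0 cm (variance 4.0); the fused estimate leans toward the
# more reliable cue: mean 10.4, variance 0.8.
mean, var = fuse([10.0, 12.0], [1.0, 4.0])
print(mean, var)
```

The fused variance is always lower than that of the most reliable individual cue, which is why integrating modalities improves robustness rather than merely averaging noise.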

Human Action Recognition, Prediction, and Assessment

The robust recognition of others’ actions represents a crucial component underlying social cognition. Humans can reliably discriminate a variety of socially relevant cues from body motion, such as intentions, identity, and affective states. Neurophysiological studies have identified specialized circuitry for the visual coding of complex motion in the mammalian brain, comprising neurons selective for biological motion in terms of time-varying patterns of form and motion features across a wide range of brain structures. The investigation of the biological mechanisms of action perception is fundamental to the development of artificial systems that must robustly process body motion cues from cluttered environments and rich streams of information.
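The "time-varying patterns of form and motion features" mentioned above can be made concrete with a toy example. The sketch below is a hypothetical illustration (the function name, array layout, and toy sequence are invented): from a sequence of 2-D pose keypoints, it builds per-frame descriptors pairing a posture ("form") vector with its temporal derivative ("motion"), the kind of joint representation an action classifier could consume.

```python
# Hypothetical sketch of joint form + motion coding for body-motion
# sequences (illustrative only; not the specific model of this research).
import numpy as np

def pose_motion_features(poses):
    """poses: array of shape (T, K, 2) -- K 2-D keypoints over T frames.
    Returns (T-1, 4K) descriptors: flattened posture + frame-to-frame motion."""
    poses = np.asarray(poses, dtype=float)
    T = poses.shape[0]
    form = poses.reshape(T, -1)            # (T, 2K) posture ("form") vectors
    motion = np.diff(form, axis=0)         # (T-1, 2K) per-frame velocities
    return np.hstack([form[1:], motion])   # align motion with current pose

# Toy sequence: 3 frames, 2 keypoints translating right by 1 unit per frame.
seq = np.array([[[0, 0], [1, 1]],
                [[1, 0], [2, 1]],
                [[2, 0], [3, 1]]])
feats = pose_motion_features(seq)
print(feats.shape)  # (2, 8): 2 usable frames, 4 form + 4 motion values
```

Each descriptor captures both what the body looks like and how it is moving at that instant, mirroring the form/motion selectivity described above.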

Organized Workshops & Special Issues

2018-10-05 - Workshop on Crossmodal Learning for Intelligent Robotics @ IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) '18, Madrid, Spain
2018-09-08 - Workshop on Intelligent Assistive Systems @ IEEE World Congress on Computational Intelligence (WCCI-IJCNN) '18, Rio de Janeiro, Brazil
2018 - [CFP] Special Issue on Crossmodal Learning - Cognitive Systems Research journal (Elsevier)
2017-09-18 - Workshop on Computational Models for Crossmodal Learning @ Joint IEEE International Conference on Development and Learning and on Epigenetic Robotics (ICDL-EPIROB) '17, Lisbon, Portugal

Invited Seminars

2018-06-23 "Lifelong Learning on Humanoid Robots", Dept. of Computer Science, University of Oxford, Oxford, UK
2018-03-16 "Continual Learning of Representations with Deep Neural Network Self-Organization", Stanford AI Lab (SAIL), Stanford University, Stanford, CA, USA
2017-07-11 "Deep Neural Network Self-Organization for Lifelong (Multimodal) Learning", Kreiman's Lab, Harvard University, Cambridge, MA, USA
2015-11-20 "A neurocognitive assistive robot: Human action learning with neural network self-organization", Asada Laboratory, Osaka University, Osaka, Japan
2015-11-10 "Emergence of multimodal cognitive representations from neural network self-organization", Cognitive Neuro-Robotics Lab, KAIST, Daejeon, South Korea
2014-12-18 "Neurocognitive assistive robotics", Dept. of Theoretical Computer Science and Mathematical Logic, Charles University in Prague, Czech Republic
2014-12-11 "Neural integration of pose-motion features for human action recognition", Slipguru, Università di Genova, Genoa, Italy

Research Projects

2016-2018 - TRR 169 "Crossmodal Learning"
2013-2016 - Cognitive Assistive Systems (CASY)
2013-2015 - Cross-modal interaction in natural and artificial cognitive systems (CINACS)