Towards Realizing Whole-Brain Computational Models Guided by Cognitive Models (WBCM-CogM)

Aim and Scope:

In recent years, there has been a focused effort to develop Whole-Brain Computational Models (WBCMs), which aim to represent the functions of the entire brain and to contribute to the creation of artificial intelligence with human-level capabilities. Building WBCMs draws not only on neuroscientific models but also on cognitive models, especially when constructing a consistent cognitive architecture. Cognitive models improve the interpretability of WBCMs implemented as AI agents by providing insight into their thought processes; because this approach resembles human cognition, it can also offer psychological reassurance to users. The relationship between cognitive models and WBCMs is, moreover, linked to debates on AI alignment, which become increasingly important as more powerful AI systems are developed. This workshop aims to discuss methodologies for realizing WBCMs, with an emphasis on the role of cognitive models.

Topics of interest:

Important Dates:

Location:



Please register with WCCI at the following site before you attend this workshop.
One-Day Registration is available for participation only on 30 June.

https://2024.ieeewcci.org/registration

Map:

Contents and Schedule: 

WCCI Program (draft)

Oral presentations are 15-minute talks plus a 5-minute Q&A per person.

The poster panel is 2100 mm high by 900 mm wide, so A0-size posters fit.


📌 Following the workshop, we will organize a special issue in the Frontiers journal.

Submissions of papers that extend your workshop presentations are encouraged.

Submissions of papers not presented at this workshop are also welcome.

Title: 

Development and Application of Cognitive Models for Robots Based on Concept Formation using Probabilistic Generative Models

Abstract:

It is critically important for robots to integrate multiple cognitive functions, as humans do, in order to perform more complex behaviors. Such integration expands the range of applications for robots and significantly enhances their value. In our research, we are developing a cognitive model based on multimodal categorization using probabilistic generative models, which integrates the multimodal information, such as visual, auditory, and tactile data, acquired by robots. These models treat the formed multimodal categories as concepts and use them for prediction, realizing various cognitive functions such as decision making, language learning, and planning, as well as the integration of these functions.

In this talk, the structure of these cognitive models and their applications will be presented, with a particular focus on integrated cognitive models that use the formed concepts as hubs to connect and combine multiple modules.
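The idea of multimodal categorization with a probabilistic generative model can be illustrated with a minimal sketch. The snippet below is not the speaker's model; it is a toy spherical Gaussian mixture fitted by EM over concatenated "visual" and "haptic" features (all data and names here are illustrative assumptions). Each mixture component plays the role of a concept, and the concept inferred from one modality is used to predict the other, which is the cross-modal prediction ability the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic multimodal observations: visual and haptic features per object,
# generated from two underlying "concepts" (illustrative toy data).
n = 50
vis = np.vstack([rng.normal(0.0, 0.3, (n, 2)), rng.normal(2.0, 0.3, (n, 2))])
hap = np.vstack([rng.normal(1.0, 0.3, (n, 2)), rng.normal(-1.0, 0.3, (n, 2))])
X = np.hstack([vis, hap])  # joint feature vector; dims 0-1 visual, 2-3 haptic

def fit_gmm(X, k=2, iters=100):
    """EM for a spherical Gaussian mixture; components act as 'concepts'."""
    N, d = X.shape
    mu = X[[0, N // 2]].copy()        # simple deterministic initialization
    var = np.ones(k)
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities r[n, j] from the current parameters.
        d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)       # (N, k)
        logp = np.log(pi) - 0.5 * d * np.log(var) - 0.5 * d2 / var
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update mixture weights, means, and spherical variances.
        nk = r.sum(axis=0)
        mu = (r.T @ X) / nk[:, None]
        d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)
        var = (r * d2).sum(axis=0) / (nk * d)
        pi = nk / N
    return mu, var, pi

mu, var, pi = fit_gmm(X)

def predict_haptic(vis_obs, mu, var, pi):
    """Cross-modal prediction: infer the concept from visual features alone,
    then read out the expected haptic features of that concept."""
    d2 = ((vis_obs - mu[:, :2]) ** 2).sum(-1)
    logp = np.log(pi) - np.log(var) - 0.5 * d2 / var  # 2 visual dims
    r = np.exp(logp - logp.max())
    r /= r.sum()
    return r @ mu[:, 2:]

# A visual observation near the first concept should predict haptic
# features near that concept's haptic mean (about (1, 1) in this toy data).
pred = predict_haptic(np.array([0.0, 0.0]), mu, var, pi)
```

In the actual research program, far richer generative models (e.g., multimodal latent Dirichlet allocation and its extensions) replace this toy mixture, but the inference pattern, categorize jointly, then predict one modality from another, is the same.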

Title: 

Generalized navigation and mapping as the core of high-level cognition for embodied agents: Towards artificial consciousness and human-mimetic artificial general intelligence

Abstract:

Simultaneous localization and mapping (SLAM) represents a fundamental problem for autonomous embodied systems, for which the hippocampal/entorhinal system (H/E-S) has been optimized over the course of evolution. In this talk, I will first describe a collaboration with roboticists who developed a biologically-inspired SLAM architecture based on latent variable generative modeling within the Free Energy Principle and Active Inference framework, which affords flexible navigation and planning in mobile robots. I will then describe how the H/E-S may realize these functional properties not only for physical navigation, but also with respect to high-level cognition understood as a generalization of the SLAM problem from an ecological rationality perspective. That is, I propose the H/E-S orchestrates SLAM processes within both concrete and abstract spaces, and that differential parameterizations of (generalized) SLAM phenomena may represent some of the most impactful sources of variation in cognition both within and between individuals, with modulators of H/E-S functioning representing fundamental cybernetic control parameters for autonomous systems, and potential guides to how we may develop AI with capacities for both "System 2" and creative cognition. However, I will go on to suggest that reverse engineering these cognitive architectures may require us to also consider the particular ways in which they are embodied/embedded/extended/enacted in biological systems. Finally, time-permitting, I will describe ongoing work on computational models of consciousness and their potential relevance for developing human-like (and beyond) A(G)I.
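The probabilistic inference at the core of SLAM can be sketched in miniature. The following is not the speaker's architecture; it is a standard histogram (Bayes) filter for localization on a 1-D cyclic map with a known "door/wall" layout, i.e., only the localization half of SLAM, with all sensor and motion parameters chosen for illustration. It shows the alternating measurement-update and motion-prediction steps that full SLAM systems, including active-inference formulations, build upon.

```python
import numpy as np

# Known 1-D cyclic map: door (1) or wall (0) at each cell (toy world).
world = np.array([1, 0, 0, 1, 0, 0, 0, 1, 0, 0])
n_cells = len(world)
belief = np.full(n_cells, 1.0 / n_cells)  # uniform prior over position

P_HIT, P_MISS = 0.9, 0.1  # sensor model: P(z | cell matches / mismatches)

def sense(belief, z):
    """Bayesian measurement update: weight each cell by sensor likelihood."""
    lik = np.where(world == z, P_HIT, P_MISS)
    post = belief * lik
    return post / post.sum()

def move(belief, step=1, p_exact=0.8):
    """Prediction step: noisy cyclic motion (convolve belief with the
    motion model: mostly exact, sometimes under- or over-shoot)."""
    return (p_exact * np.roll(belief, step)
            + 0.1 * np.roll(belief, step - 1)
            + 0.1 * np.roll(belief, step + 1))

# The robot senses door, wall, wall, wall, moving one cell after each
# reading. Only a start at cell 3 is consistent with this sequence, so
# the belief should concentrate near cell 3 + 4 moves = cell 7.
for z in [1, 0, 0, 0]:
    belief = sense(belief, z)
    belief = move(belief)
```

Generalizing this machinery from physical space to abstract "concept spaces", as the abstract proposes for the hippocampal/entorhinal system, keeps the same predict/update cycle while changing what the map represents.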

CFP:

Application form: [Google form]


Manuscripts should be written according to the following guidelines.


We welcome you and your colleagues to present papers and participate in discussions. 

Submitted manuscripts will be reviewed by the organizers. 


This workshop will mainly consist of invited talks and a general presentation session. The general session includes poster and oral presentations. You may state your preferred format at the time of application, but the assigned format may change depending on the number and status of applications.


Organizers:

Ritsumeikan University, Japan

The University of Tokyo, Japan

Shizuoka University, Japan

This work was partially supported by JSPS Grants-in-Aid for Scientific Research (KAKENHI) under Grant Number JP22H05159.