24th Workshop "From Objects to Agents" (WOA23), 6th-8th November, Rome

CALL FOR PAPERS

After 23 successful editions: Parma, Modena, Milano, Villasimius, Torino, Camerino, Catania, Genova, Palermo, Parma, Rimini, Rende, Milano, Torino, Catania, Napoli, Catania, Scilla, Palermo, Parma, Bologna (online), Bologna, and Genova, the 24th edition of the Workshop “From Objects to Agents” (WOA) will be held in Rome, organized by the Institute of Cognitive Sciences and Technologies (ISTC-CNR).

The workshop will be held in person and will serve as a forum for researchers and practitioners working on all aspects of agents and Multi-Agent Systems (MAS). Following the significant interest that all facets of Artificial Intelligence (AI) have recently attracted, the topic chosen for WOA 2023 is

Cognition: an outdated goal or a permanent challenge for the new AI paradigm?

The challenge that AI systems pose for society is of enormous importance: their competence and the generality of the support they offer are growing, and with them their pervasiveness in every area of our existence, both individual and social. They will change not only our material and institutional reality but also our minds.

The new AI systems now spreading respond to the problems submitted to them with results that are often indistinguishable from those that humans themselves can achieve. At the same time, the real intelligent capabilities of these models are debated and questioned. Even though some of their results are astonishing, they appear to proceed according to strictly data-driven approaches, whose relation to the performance they exhibit is far from obvious. It therefore becomes interesting to ask whether an approach purely guided by the relationships that emerge from data is sufficient to reproduce some attitudes typical of humans in the exercise of their intelligent performance and interactions, and what sense it makes today to build top-down architectures that rely on "cognitive modeling". In the Agents and MAS domain, as we know, this direction has often been pursued, mainly through the symbolic approach. Between the 1980s and 1990s, the cross-fertilization with the social and cognitive sciences and the encounter with MAS favored a more cognitively plausible vision of agents (for example with the Belief-Desire-Intention (BDI) model), seen as autonomous systems endowed with capabilities for representation, planning, and social action.

With the development and significant achievements of machine learning, especially deep learning, data-driven science is imposing a substantial reversal in scientific investigation. We can say that Big Data science establishes a dominance of the predictive scientific function over the explanatory one. The adoption of statistical models and data-mining techniques for extracting information from data leads to the identification of regular trends, which in some cases can be projected into the future without any investigation of their underlying causes, and without indicating the guidelines or principles with which the results of these models should comply. A direct consequence is the marginal role left to social and cognitive mechanisms: the more refined the ability to extract regularities from vast databases becomes, the less the individual mechanisms that generate them are investigated, since those mechanisms are judged irrelevant to the prediction of phenomena.

Basic questions therefore arise that need to be addressed for the informed and useful development of this scientific area:

What about explanatory AI and transparency in HAI? Is not some form of “mind reading” necessary for these?

Are cognitive capacities compatible with deep learning systems?

What cognitive capacities, if any, do current deep learning systems possess? And if so, what do deep learning-based agents and MAS tell us about human and social cognition?

Does it still make sense to pose the problem of the explicit representation of the world? And if so, what kind of representations can we ascribe to artificial neural networks?

How can we develop a theoretical understanding of deep learning-based MAS?

What are the key obstacles on the path from current deep learning systems to human-level cognition? 


Topics of Interest:




Moreover, the following topics are also more than welcome:

along with any other MAS-related topic.