Special Issues

Two special issues are organized on the main topics of the conference. Both special issues are open to the scientific community and will follow the acceptance criteria of their respective editorial boards.

Special issue on

Modeling imprecise information and knowledge to improve explainability in AI

Aims and scope

XAI lies at the intersection of different fields which, besides Artificial Intelligence, include Cognitive and Social Sciences, Human-Computer Interaction, Philosophy and Psychology, among others. The strong multidisciplinary character of XAI is due to the centrality of people in all aspects of the development and deployment of XAI systems. People have an exceptional ability to manage the complexity of phenomena through mental processes such as organization, granulation and causation. A key factor is the capability of managing imprecision in forms that are well captured by several theories within the Granular Computing paradigm, such as Fuzzy Set Theory, Rough Set Theory, Interval Computing and hybrid theories, among others. Endowing XAI systems with the ability to deal with the many forms of imprecision, not only in the inference processes that lead to automated decisions but also in providing explanations, is a key challenge that can push current XAI technologies towards more trustworthy systems and full collaborative intelligence.

Topics of interest include, but are not limited to:

  • Foundational and philosophical aspects of imprecision in information and knowledge

  • Theoretical advancements in imprecision modeling in AI

  • Imprecision modeling methods to improve explainability in AI

  • New technologies for representing and processing imprecision in XAI systems

  • Real-world applications and case studies that demonstrate explainability improvements through imprecision management

Submission guidelines and review process

Papers must be submitted according to the standard procedure of Information Sciences, selecting the S.I. “Managing imprecision and uncertainty in XAI systems”. All submitted papers should report original work and provide meaningful contributions to the current state of the art.

Each submitted paper will undergo a first screening by the Guest Editors. If the submission falls within the scope of the SI, it will undergo the regular review process. Acceptance criteria are the same as for regular issues of the journal.

Important dates

  • Submissions start: November 1st, 2021

  • Paper submission deadline: January 28th, 2022

  • Final decision: July 15th, 2022

  • Tentative period for final publication: Fall 2022

SI webpage: https://www.journals.elsevier.com/information-sciences/call-for-papers/modeling-imprecise-information-and-knowledge-to-improve-explanability-in-ai

Author guidelines and journal information can be found at https://www.journals.elsevier.com/information-sciences

Special issue on
Human-Centered Intelligent Systems

Aims and scope

Nowadays, Artificial Intelligence has become an enabling technology that pervades many aspects of our daily life. At the forefront of this advancement are data-driven technologies like machine learning. However, as the role of Artificial Intelligence becomes increasingly important, so does the need for reliable solutions to several issues that go well beyond technological aspects:

  • How can we make automated agents justify their actions, and how can we hold them accountable for those actions?

  • What will be the social acceptance of intelligent systems, possibly embodied (e.g. in robots), in their interaction with people?

  • How will automated agents be made aware of the whole spectrum of human nonverbal communication, so as to take it into account and avoid missing crucial messages?

  • Is it possible to avoid amplifying human biases and ensure fairness in decisions that have been taken automatically?

  • How can we enable collaborative intelligence amongst humans and machines?

Purely data-driven technologies are showing their limits precisely in these areas. There is a growing need for methods that, in tight interaction with such technologies, allow different degrees of control over the several facets of automated knowledge processing. The diversity and complementarity of Soft Computing techniques are playing a crucial role in addressing these issues.

This Special Issue aims to collect the most recent advancements in the research on Human-Centered Intelligent Systems with special focus on Soft Computing methods, techniques and applications on the following and related topics:

  • Trustworthiness, explainability, accountability and social acceptance of intelligent systems

  • Human-computer interaction to foster collaboration with intelligent systems

  • Affective computing and sentiment analysis to promote nonverbal communication in intelligent systems

  • Fighting algorithmic bias and ensuring fairness in intelligent systems

  • Real-world applications in health care, justice, education, digital marketing, biology, hard and natural sciences, autonomous vehicles, etc.

Proposals related to further topics are welcome as long as they fall within the general scope of this special issue, which is computational intelligence methods in human-centered computing.

Submission guidelines and review process

Papers must be submitted according to the standard procedure of Soft Computing, selecting the S.I. “Human-Centered Intelligent Systems”. All submitted papers should report original work and make a meaningful contribution to the state of the art.

Each submitted paper will undergo a first screening by the Guest Editors. If the submission falls within the scope of the SI, it will undergo the regular review process. Acceptance criteria are the same as for regular issues of the journal.

Important dates

  • Submissions open: December 1, 2021

  • Paper submission deadline: February 1, 2022

  • Final decision: May 1, 2022

  • Tentative period for final publication: Fall 2022


Author guidelines and journal information can be found at https://www.springer.com/journal/500