ReaLX 2018

The SICSA Reasoning, Learning and Explainability Workshop 2018

June 27th 2018

Welcome to ReaLX 2018

Reasoning, Learning and Explainability are key to AI systems that must interact naturally with users to support decision making. Such systems need to be capable of explaining their output. Regulations increasingly support users' rights to fair and transparent processing in automated decision-making systems. Addressing this challenge grows more urgent as the recent success of deep learning and other data-driven methods drives increasing reliance on learned models in deployed applications.

Though models learned directly from data offer improved accuracy, mapping their internal representations onto concepts that support human reasoning is difficult. In contrast, reasoning systems can offer transparency through logical alignment of representation and reasoning methods, allowing the necessary insight into the decision-making process. This is a core principle behind explainability and is critical if we are to use AI with the intent of improving user performance and experience.

The ReaLX’18 workshop will provide a forum for sharing exciting research on these AI methods, highlighting and documenting promising approaches, and encouraging further work, thereby fostering connections among SICSA researchers interested in AI. We expect to draw interest from AI researchers working in a number of related areas, including NLP, ML, reasoning systems, intelligent and adaptive user interfaces, conversational AI, causal modelling, computational analogy, constraint reasoning, and cognitive theories of explanation and transparency.

Proceedings of the workshop have been published via CEUR and are available to view online.