The International Robust AI Workshop

Workshop Summary

Aim

Methods in AI for robotic control, mobile platforms, and cognitive cyber-physical systems are developing rapidly. They tackle the challenging task of modeling real-world systems and environments through data, using machine vision, reinforcement learning for control, and probabilistic machine learning, among many other techniques. Such data-driven approaches have raised many concerns regarding the robustness, stability, and overall safety of these systems.

While data-driven approaches based on learning algorithms have seen huge success over the last decade, their lack of safety guarantees undermines trust when they are applied to cyber-physical systems such as manufacturing and healthcare robotics. A central challenge is defining and implementing robustness for different applications and providing methods for analyzing and verifying models. This workshop investigates the diverse meanings of robust AI and gathers a wide array of approaches to the problem.

The RAW workshop provides a forum for bringing together researchers from academia and industry to explore and present their findings in Robust Artificial Intelligence, including theories, systems, technologies, and approaches for testing and validating them on challenging real-world, safety-critical applications.

Topics

Research topics of interest for the workshop and for papers include, but are not limited to:

  • Cognitive models and architectures

  • Explainable AI

  • Knowledge-driven models

  • Safe exploration

  • Hybrid models

  • Reasoning-based methods

  • Trustworthiness

  • Understanding and controlling machine learning biases

  • Adversarial attacks and defense


Call for Papers

IMPORTANT DATES

  • Paper submission deadline: (Extended) 9 May 2021

  • Notification of acceptance: 20 May 2021

  • Camera ready: 28 May 2021

The workshop is an invited session at the international KES conference, Szczecin, Poland, 8-10 September 2021.

Authors who submit and present their work will have their papers published and indexed internationally in Elsevier's Procedia Computer Science.

Invited session at the 25th International Conference on Knowledge-Based and Intelligent Information & Engineering Systems
September 8-10, KES2021, Szczecin, Poland

Invited Speakers

Name: Prof. Sebastien Gros

Affiliation: Department of Engineering Cybernetics, NTNU, Trondheim, Norway

Field: Safe reinforcement learning and data-driven MPC

Google scholar link: https://scholar.google.com/citations?user=38fYqeYAAAAJ&hl=en

Research gate link: https://www.researchgate.net/profile/Sebastien-Gros-2

Friday 10 Sept 2021: 10:30 - 11:30 CET

Model-Based Optimal Control and Reinforcement Learning: a Path to Safe Data-Based Policies?

Abstract: Reinforcement Learning and MPC have natural connections that we will explore in this talk. We will discuss a recent, fundamental result in learning-based MPC, which formally explains how RL and MPC can be combined and clarifies the role of the (inaccurate) model in MPC. One of the key motivations for using MPC in the context of Reinforcement Learning is the possibility of introducing formalism and guarantees on the behavior of the resulting policy. In particular, stability and safety guarantees can be formally discussed in the context of MPC-based RL. We will discuss recent results on this topic.
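The idea of tuning an inaccurate model-based controller from closed-loop data can be illustrated with a deliberately tiny sketch. This is not the speaker's method, only a toy in the same spirit: a scalar linear system is controlled by an LQR policy (standing in for the MPC) computed from a wrong model parameter, and an RL-style loop adjusts that parameter by gradient descent on the observed closed-loop cost. All numbers are illustrative assumptions.

```python
import numpy as np

# True dynamics x' = A_TRUE*x + B*u and stage cost Q*x^2 + R*u^2
# (illustrative values, not from the talk).
A_TRUE, B, Q, R = 1.2, 1.0, 1.0, 0.1

def lqr_gain(a_model):
    """Feedback gain from the (possibly wrong) model, via scalar Riccati iteration."""
    p = Q
    for _ in range(200):
        p = Q + a_model**2 * p * R / (R + B**2 * p)
    return a_model * B * p / (R + B**2 * p)

def rollout_cost(a_model, x0=1.0, steps=30):
    """Closed-loop cost of the model-based policy applied to the TRUE system."""
    k, x, cost = lqr_gain(a_model), x0, 0.0
    for _ in range(steps):
        u = -k * x
        cost += Q * x**2 + R * u**2
        x = A_TRUE * x + B * u   # true dynamics, not the model
    return cost

# RL-style loop: adjust the model parameter by finite-difference
# gradient descent on the observed closed-loop cost.
a_hat, lr, eps = 0.8, 0.1, 1e-3
for _ in range(200):
    grad = (rollout_cost(a_hat + eps) - rollout_cost(a_hat - eps)) / (2 * eps)
    a_hat -= lr * grad

print(a_hat)  # moves from 0.8 toward the cost-minimizing model parameter
```

Note the design point: the model parameter is updated to minimize the cost actually observed in closed loop, not to fit the dynamics, which is one way an "inaccurate" model can still yield a near-optimal policy.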

Bio: Sebastien Gros received his PhD from EPFL in 2008 in the field of control and optimization. After a journey by bicycle from Lausanne to the Everest base camp, he worked for a short while in the energy industry before returning to academia as a postdoc at KU Leuven, in the field of numerical optimization. In 2013, he joined Chalmers University of Technology in Sweden as an Assistant Professor, where he was later promoted to Associate Professor. He joined NTNU in 2019 as a Full Professor. He currently leads a research group working on combining formal optimal control methods with Reinforcement Learning.

Name: Prof. Nirmalie Wiratunga

Affiliation: Institute for Innovation, Design and Sustainability (IDEAS), Robert Gordon University, Aberdeen, UK.

Field: Algorithms, information science, modeling, linear programming, constraint programming, decision support systems, computer-aided decision making, interval analysis, combinatorial designs, uncertainty and decision making

Google scholar link: https://scholar.google.com/citations?user=6M9aAAoAAAAJ

Research gate link: https://www.researchgate.net/profile/Nirmalie-Wiratunga

Friday 10 Sept 2021: 11:30 - 12:30 CET

Role of case-based reasoning for explainable AI

Abstract: The right to obtain an explanation of a decision reached by a machine learning model is now enshrined in EU regulation. Different stakeholders may have different background knowledge, competencies, and goals, and thus require different kinds of interpretations and explanations. In this talk I will present an overview of explainable AI (XAI) methods, with a particular focus on the role of case-based reasoning (CBR) for XAI. Specifically, we will look at recent work on post-hoc exemplar-based explanations that use CBR for factual, near-factual, and counterfactual explanations. An alternative role for CBR involves reasoning with end-users' explanation experiences to enable the sharing and reuse of experiences by users, for users. Here I will present our initial work towards creating the iSee XAI experience reuse platform (https://isee4xai.com/), whose aim is to capture and reuse explanation experiences.
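The core of exemplar-based explanation can be sketched in a few lines. This is a toy illustration in the CBR spirit, not the iSee platform or the speaker's algorithms: a factual explanation is the nearest stored case with the same label as the prediction, and a counterfactual is the nearest case with a different label. The case base and labels below are invented for the example.

```python
import numpy as np

# A tiny case base: feature vectors with known outcomes (illustrative data).
cases = np.array([[1.0, 1.0], [1.2, 0.9], [4.0, 4.2], [3.8, 4.0]])
labels = np.array(["approve", "approve", "reject", "reject"])

def explain(query, predicted):
    """Return (factual, counterfactual) exemplars for a predicted label."""
    d = np.linalg.norm(cases - query, axis=1)           # distance to each case
    same, other = labels == predicted, labels != predicted
    factual = cases[same][np.argmin(d[same])]           # nearest same-label case
    counterfactual = cases[other][np.argmin(d[other])]  # nearest other-label case
    return factual, counterfactual

f, cf = explain(np.array([1.1, 1.2]), "approve")
print(f, cf)  # nearest "approve" case and nearest "reject" case
```

The factual exemplar answers "which known case does this decision resemble?", while the counterfactual shows the closest case where the decision went the other way, hinting at what would need to change.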

Bio: Nirmalie Wiratunga is a research professor at RGU's School of Computing, with over 20 years' experience in computer science and Artificial Intelligence (AI) research and education. She is also an adjunct IDUN professor at the Norwegian University of Science and Technology, and is best known for her work on case-based reasoning for personalized decision support systems. Her recent work, funded through the EU's Horizon 2020 and ERA-NET schemes, explores learning from limited data and explainable AI for neural computation models. She serves on numerous AI program committees and holds senior and advisory roles at the International Joint Conference on AI and the International Conference on Case-Based Reasoning, respectively.

Accepted Papers

Friday 10 Sept 2021: 14:50 - 17:00 CET

  • Dynamic path finding method and obstacle avoidance for automated guided vehicle navigation in Industry 4.0.
    Mr. Yigit Can Dundar.

  • Integrating Experience-Based Knowledge Representation and Machine Learning for Efficient Virtual Engineering Object Performance.
    Dr. Syed Shafiq, Dr. Cesar Sanin, Prof. Edward Szczerbicki.

  • Robust reasoning for autonomous cyber-physical systems in dynamic environments.
    Prof. Anne Håkansson, Dr. Aya Saad, Mr. Akhil Anand, Ms. Vilde Gjærum, Mr. Haakon Robinson, Ms. Katrine Seel.

  • Safe Learning for Control using Control Lyapunov Functions and Control Barrier Functions: A Review.
    Katrine Seel, Akhil Anand, Vilde Gjærum, Prof. Anne Håkansson, Haakon Robinson, Dr. Aya Saad.

  • Robustness of Sparse Neural Networks.
    Mehdi Ben Amor, Prof. Dr. Michael Granitzer, Julian Stier.

Organizers

Akhil S Anand, PhD student, akhil.s.anand@ntnu.no, The Norwegian University of Science and Technology, Høgskoleringen 1, 7491 Trondheim, Norway

Anne Håkansson, Professor, anne.hakansson@uit.no, UiT The Arctic University of Norway, P.O. Box 6050 Langnes, 9037 Tromsø, Norway

Aya Saad, Postdoctoral fellow, aya.saad@ntnu.no, The Norwegian University of Science and Technology, Høgskoleringen 1, 7491 Trondheim, Norway

Haakon Robinson, PhD student, haakon.robinson@ntnu.no, The Norwegian University of Science and Technology, Høgskoleringen 1, 7491 Trondheim, Norway

Katrine Seel, PhD student, katrine.seel@ntnu.no, The Norwegian University of Science and Technology, Høgskoleringen 1, 7491 Trondheim, Norway

Vilde Gjærum, PhD student, vilde.gjarum@ntnu.no, The Norwegian University of Science and Technology, Høgskoleringen 1, 7491 Trondheim, Norway

Email & Contact Details

Aya Saad, Postdoctoral fellow, aya.saad@ntnu.no, NTNU, Trondheim, Norway

Professor Anne Håkansson, anne.hakansson@uit.no, IFI, UiT, Tromsø, Norway