The XPro project tackles an important problem in engineering and robotics: the probabilistic explainability of deep models in safety-critical applications such as autonomous robot perception, self-driving vehicles, and decision making. The project focuses on explainability (as distinct from interpretability) by developing post-hoc techniques for existing pre-trained deep models used in robot perception. As case studies, XPro will explore two application domains: robotic perception and autonomous robotic vehicles. The XPro team, a collaboration of early-career and senior researchers as well as PhD students, will develop new probabilistic post-hoc calibration methods towards explainable models and new uncertainty-quantification evaluation metrics, validate them in real-world application domains (i.e., relevant case studies), and carry out dissemination activities and short-term missions.
Objectives
Developing new explainability frameworks for existing pre-trained deep architectures applied to safety-critical classification problems in mobile robotics.
Evaluating the proposed frameworks, and their components, using existing state-of-the-art and novel quantification metrics that account for partial information and misclassification.
Exploring and validating the developed frameworks in relevant application domains, namely perception systems for robots and self-driving vehicles (object detection).
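As an illustration of the kind of probabilistic post-hoc calibration and uncertainty-quantification metric the objectives refer to, the sketch below shows temperature scaling (fitted by grid search on held-out logits) together with the expected calibration error (ECE). All function names, the temperature grid, and the binning scheme are our own illustrative choices, not project deliverables.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax: T > 1 softens, T < 1 sharpens probabilities."""
    z = logits / temperature
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 46)):
    """Pick the temperature minimising negative log-likelihood on a held-out set."""
    nlls = []
    for t in grid:
        p = softmax(logits, t)
        nll = -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()
        nlls.append(nll)
    return float(grid[int(np.argmin(nlls))])

def expected_calibration_error(probs, labels, n_bins=10):
    """ECE: bin predictions by confidence, then average the per-bin gap
    between mean confidence and accuracy, weighted by bin size."""
    conf = probs.max(axis=1)
    pred = probs.argmax(axis=1)
    correct = (pred == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return ece
```

In a typical post-hoc setting the pre-trained model is left untouched: the temperature is fitted on a validation split, applied to the test logits, and ECE is reported before and after scaling.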