The intrinsic statistical error introduced by any machine learning algorithm has drawn severe criticism from safety and security engineers. Safe AI is the sub-discipline of AI that seeks solutions to this problem, identifying the conditions under which an autonomous decision may create hazards for humans and/or the environment. Research in the EU is showing increasing interest in trustworthy artificial intelligence (AI), a paradigm collecting all the requirements that AI systems must fulfill to be designed, developed, and deployed in real-world, potentially sensitive applications. These requirements broadly include lawfulness, ethics, and robustness.
Building on the experience of the REXASI-PRO European project, this tutorial aims to present some of the main methodologies, best practices, and challenges behind Safe AI. The topic will be addressed from different, multidisciplinary points of view, in line with the requirements established by the recent paradigm of Trustworthy AI, which collects the fundamental properties that AI systems must fulfill when deployed in real-world, potentially sensitive applications such as assisted mobility.