The advent of Deep Learning (DL) in computer vision – since the pioneering AlexNet work of Krizhevsky, Sutskever, and Hinton in 2012 – changed the perspective of the community, which has since adopted DL as the go-to technique for a wide range of computer vision tasks in medical image analysis. This adoption is justified by DL's superior performance on vision tasks including classification, recognition, and image segmentation. However, although DL is a powerful tool, its mechanisms and decision strategies remain poorly understood. This contrasts with more classical computer vision approaches, which are more tractable and often offer a clear account of how they work. This interpretability gap is an important concern, especially in decision-sensitive applications such as medical imaging, where the reliability and explainability of a solution are essential for clinical decision making.