Theme:
Deep learning's automatic feature extraction and strong performance have transformed medical image computing. When deployed in clinical practice, however, deep learning models exhibit serious limitations that keep clinicians skeptical. They are effectively "black boxes" that cannot explain how they reach their decisions, which makes them difficult to debug when necessary; this poor explainability leaves clinicians, who are trained to draw logical conclusions from clinical data, unconvinced. In addition, their generalizability remains limited in clinical settings because of the wide variety of imaging techniques, the large variation in how disorders manifest in images, and rare diseases whose data may not have been included during training. The generalizability problem becomes even more apparent when a deep learning model trained on data from one medical center is applied to data from other facilities, particularly when the data differ substantially or the target domain is distinct from the training set. It is therefore critically necessary to develop new methodologies that improve the explainability and generalizability of deep learning techniques so that they can be adopted more widely in clinical practice. This special issue calls for innovative, explainable or interpretable, and generalizable deep learning algorithms for intelligent medical image computing applications, in order to address these shortcomings of deep learning approaches in medical image computing.