
Deep learning methods have rapidly become omnipresent in the MICCAI community in recent years, thanks to many attractive properties, including state-of-the-art accuracy in tasks such as segmentation and classification. However, now that the initial excitement about these techniques has led to many successful applications within the MICCAI domain, we need to develop a better understanding that demystifies deep learning. To this end, we invite you to a new workshop, held in conjunction with MICCAI 2018 (Sept 16-20 2018, Granada, Spain), dedicated to understanding the “edges” of deep learning: What are its current limitations? Which MICCAI problems are not well-suited to existing DL methods? What failures has the community encountered with DL? How can we better understand the “mysteries” we encounter when an algorithm works unexpectedly well or unexpectedly poorly? Where is the field going?

We invite 8-page papers for this workshop. Some example ideas for possible contributions are listed below; however, this is by no means an exhaustive list, and we invite the MICCAI community to brainstorm broadly about deep learning. To further enhance discussion at the workshop, authors are encouraged to provide datasets as supplementary material (to be publicly released). We will also accept 1-page “highlight” papers/e-posters until September to encourage community engagement.

  • Reports on negative results and “mysteries”
  • Problems in MICCAI that are currently better addressed with traditional methods than DL, and why
  • Papers describing ways to intentionally break deep learning models (like the “One pixel attack for fooling DNN”)
  • Issues of information governance with deep learning models: if I train a model on a protected dataset and release the trained model, am I leaking information about that dataset?
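To make the adversarial-attack topic concrete, here is a minimal sketch of a single-pixel attack found by random search. It is not the differential-evolution method of the “One pixel attack” paper, and `toy_model` is a hypothetical stand-in classifier used only for illustration; the point is simply that perturbing one pixel can flip a model's prediction.

```python
import numpy as np

def one_pixel_attack(model, image, true_label, n_trials=500, seed=0):
    """Random-search sketch: try single-pixel perturbations until the
    model's predicted label differs from true_label."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    for _ in range(n_trials):
        y, x = rng.integers(h), rng.integers(w)
        candidate = image.copy()
        candidate[y, x] = rng.random()  # overwrite one pixel with a random value
        if model(candidate) != true_label:
            return candidate  # adversarial image found
    return None  # no single-pixel perturbation flipped the prediction

# Hypothetical stand-in "model": classifies a grayscale patch by mean intensity.
def toy_model(img):
    return int(img.mean() > 0.5)

image = np.full((4, 4), 0.52)  # mean 0.52, so toy_model predicts class 1
adv = one_pixel_attack(toy_model, image, true_label=1)
# adv differs from image in exactly one pixel, yet is classified as class 0
```

A real attack would operate on a trained network and constrain the perturbation magnitude, but the search loop has the same shape.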

To be clear, the goal of this workshop is not to disparage deep learning methods. Accordingly, we will not accept papers that indiscriminately trivialize deep learning, such as papers reporting negative results with a generic network model that has not been adapted or fine-tuned to the problem at hand. Rather, we encourage submissions in the spirit of constructive criticism, aimed at evaluating the strengths and weaknesses of DL, identifying the main challenges in the current state of the art, and pointing to future directions.